Image Operations

Image Operations Overview

Vulkan Image Operations are operations performed by those SPIR-V Image Instructions which take an OpTypeImage (representing a VkImageView) or OpTypeSampledImage (representing a (VkImageView, VkSampler) pair). Read, write, and atomic operations also take texel coordinates as operands, and return a value based on a neighborhood of texture elements (texels) within the image. Query operations return properties of the bound image or of the lookup itself. The Depth operand of OpTypeImage is ignored.

Texel is a combination of the words texture and element. Early interactive computer graphics supported texture operations on textures, a small subset of the image operations on images described here. The discrete samples remain essentially equivalent, however, so the historical term texel is retained to refer to them.

Image Operations include the functionality of the following SPIR-V Image Instructions:

  • OpImageSample* and OpImageSparseSample* read one or more neighboring texels of the image, and filter the texel values based on the state of the sampler.
    • Instructions with ImplicitLod in the name determine the LOD used in the sampling operation based on the coordinates used in neighboring fragments.
    • Instructions with ExplicitLod in the name determine the LOD used in the sampling operation based on additional coordinates.
    • Instructions with Proj in the name apply homogeneous projection to the coordinates.
  • OpImageFetch and OpImageSparseFetch return a single texel of the image. No sampler is used.
  • OpImage*Gather and OpImageSparse*Gather read neighboring texels and return a single component of each.
  • OpImageRead (and OpImageSparseRead) and OpImageWrite read and write, respectively, a texel in the image. No sampler is used.
  • OpImageSampleFootprintNV identifies and returns information about the set of texels in the image that would be accessed by an equivalent OpImageSample* instruction.
  • OpImage*Dref* instructions apply depth comparison on the texel values.
  • OpImageSparse* instructions additionally return a sparse residency code.
  • OpImageQuerySize, OpImageQuerySizeLod, OpImageQueryLevels, and OpImageQuerySamples return properties of the image descriptor that would be accessed. The image itself is not accessed.
  • OpImageQueryLod returns the LOD parameters that would be used in a sample operation. The actual operation is not performed.
  • OpImageWeightedSampleQCOM reads a 2D neighborhood of texels and computes a weighted average using weight values from a separate weight texture.
  • OpImageBlockMatchSADQCOM and OpImageBlockMatchSSDQCOM compare 2D neighborhoods of texels from two textures.
  • OpImageBoxFilterQCOM reads a 2D neighborhood of texels and computes a weighted average of the texels.
  • OpImageBlockMatchWindowSADQCOM and OpImageBlockMatchWindowSSDQCOM compare 2D neighborhoods of texels from two textures with the comparison repeated across a window region in the target texture.
  • OpImageBlockMatchGatherSADQCOM and OpImageBlockMatchGatherSSDQCOM compare four 2D neighborhoods of texels from a target texture with a single 2D neighborhood in the reference texture. The R component of each comparison is gathered and returned in the output.

Texel Coordinate Systems

Images are addressed by texel coordinates. There are three texel coordinate systems:

  • normalized texel coordinates [0.0, 1.0]
  • unnormalized texel coordinates [0.0, width / height / depth)
  • integer texel coordinates [0, width / height / depth)

SPIR-V OpImageFetch, OpImageSparseFetch, OpImageRead, OpImageSparseRead, OpImageBlockMatchSADQCOM, OpImageBlockMatchSSDQCOM, OpImageBlockMatchWindowSADQCOM, OpImageBlockMatchWindowSSDQCOM, and OpImageWrite instructions use integer texel coordinates.

Other image instructions can use either normalized or unnormalized texel coordinates (selected by the unnormalizedCoordinates state of the sampler used in the instruction), but there are limitations on what operations, image state, and sampler state is supported. Normalized coordinates are logically converted to unnormalized as part of image operations, and certain steps are only performed on normalized coordinates. The array layer coordinate is always treated as unnormalized even when other coordinates are normalized.
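The logical conversion from normalized to unnormalized coordinates can be sketched as a simple scaling by the image level's extent (a minimal illustration; the function name is hypothetical, and the array layer is already unnormalized so it passes through unchanged):

```c
/* Hypothetical sketch: convert normalized (s,t,r) coordinates to
   unnormalized (u,v,w) by scaling with the image level's extent. */
void normalized_to_unnormalized(float s, float t, float r,
                                int width, int height, int depth,
                                float *u, float *v, float *w) {
    *u = s * (float)width;   /* first dimension  */
    *v = t * (float)height;  /* second dimension */
    *w = r * (float)depth;   /* third dimension  */
}
```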

Normalized texel coordinates are referred to as (s,t,r,q,a), with the coordinates having the following meanings:

  • s: Coordinate in the first dimension of an image.
  • t: Coordinate in the second dimension of an image.
  • r: Coordinate in the third dimension of an image.
    • (s,t,r) are interpreted as a direction vector for Cube images.
  • q: Fourth coordinate, for homogeneous (projective) coordinates.
  • a: Coordinate for array layer.

The coordinates are extracted from the SPIR-V operand based on the dimensionality of the image variable and type of instruction. For Proj instructions, the components are in order (s, [t,] [r,] q), with t and r being conditionally present based on the Dim of the image. For non-Proj instructions, the coordinates are (s [,t] [,r] [,a]), with t and r being conditionally present based on the Dim of the image and a being conditionally present based on the Arrayed property of the image. Projective image instructions are not supported on Arrayed images.

Unnormalized texel coordinates are referred to as (u,v,w,a), with the coordinates having the following meanings:

  • u: Coordinate in the first dimension of an image.
  • v: Coordinate in the second dimension of an image.
  • w: Coordinate in the third dimension of an image.
  • a: Coordinate for array layer.

Only the u and v coordinates are directly extracted from the SPIR-V operand, because only 1D and 2D (non-Arrayed) dimensionalities support unnormalized coordinates. The components are in order (u [,v]), with v being conditionally present when the dimensionality is 2D. When normalized coordinates are converted to unnormalized coordinates, all four coordinates are used.

Integer texel coordinates are referred to as (i,j,k,l,n), with the coordinates having the following meanings:

  • i: Coordinate in the first dimension of an image.
  • j: Coordinate in the second dimension of an image.
  • k: Coordinate in the third dimension of an image.
  • l: Coordinate for array layer.
  • n: Index of the sample within the texel.

They are extracted from the SPIR-V operand in order (i [,j] [,k] [,l] [,n]), with j and k conditionally present based on the Dim of the image, and l conditionally present based on the Arrayed property of the image. n is conditionally present and is taken from the Sample image operand.

If an accessed image was created from a view using VkImageViewSlicedCreateInfoEXT and accessed through a VK_DESCRIPTOR_TYPE_STORAGE_IMAGE descriptor, then the value of k is incremented by VkImageViewSlicedCreateInfoEXT::sliceOffset, giving k ← sliceOffset + k. The image’s accessible range in the third dimension is k < sliceOffset + sliceCount. If VkImageViewSlicedCreateInfoEXT::sliceCount is VK_REMAINING_3D_SLICES_EXT, the range is inherited from the image’s depth extent as specified by Image Mip Level Sizing.

For all coordinate types, unused coordinates are assigned a value of zero.


The texel coordinate systems, illustrated for an 8×4 texel two-dimensional image:

  • Normalized texel coordinates:
    • The s coordinate goes from 0.0 to 1.0.
    • The t coordinate goes from 0.0 to 1.0.
  • Unnormalized texel coordinates:
    • The u coordinate within the range 0.0 to 8.0 is within the image, otherwise it is outside the image.
    • The v coordinate within the range 0.0 to 4.0 is within the image, otherwise it is outside the image.
  • Integer texel coordinates:
    • The i coordinate within the range 0 to 7 addresses texels within the image, otherwise it is outside the image.
    • The j coordinate within the range 0 to 3 addresses texels within the image, otherwise it is outside the image.
  • Also shown for linear filtering:
    • Given the unnormalized coordinates (u,v), the four texels selected are i0j0, i1j0, i0j1, and i1j1.
    • The fractions α and β.
    • Given the offset Δi and Δj, the four texels selected by the offset are i0j'0, i1j'0, i0j'1, and i1j'1.

For formats with reduced-resolution components, Δi and Δj are relative to the resolution of the highest-resolution component, and therefore may be divided by two relative to the unnormalized coordinate space of the lower-resolution components.


The texel coordinate systems, illustrated for an 8×4 texel two-dimensional image:

  • Texel coordinates as above. Also shown for nearest filtering:
    • Given the unnormalized coordinates (u,v), the texel selected is ij.
    • Given the offset Δi and Δj, the texel selected by the offset is ij'.

For corner-sampled images, the texel samples are located at the grid intersections instead of the texel centers.


Conversion Formulas

RGB to Shared Exponent Conversion

An RGB color (red, green, blue) is transformed to a shared exponent color (redshared, greenshared, blueshared, expshared) as follows:

First, the components (red, green, blue) are clamped to (redclamped, greenclamped, blueclamped) as:

  • redclamped = max(0, min(sharedexpmax, red))
  • greenclamped = max(0, min(sharedexpmax, green))
  • blueclamped = max(0, min(sharedexpmax, blue))

where:

\begin{aligned}
N &= 9 && \text{number of mantissa bits per component} \\
B &= 15 && \text{exponent bias} \\
E_{max} &= 31 && \text{maximum possible biased exponent value} \\
sharedexp_{max} &= \frac{2^N-1}{2^N} \times 2^{(E_{max}-B)}
\end{aligned}

NaN, if supported, is handled as in IEEE 754-2008 minNum() and maxNum(). This results in any NaN being mapped to zero.

The largest clamped component, maxclamped, is determined:

  • maxclamped = max(redclamped, greenclamped, blueclamped)

A preliminary shared exponent exp' is computed:

\begin{aligned}
exp' = \begin{cases}
\left\lfloor \log_2(max_{clamped}) \right\rfloor + (B+1) & \text{for}\ max_{clamped} > 2^{-(B+1)} \\
0 & \text{for}\ max_{clamped} \leq 2^{-(B+1)}
\end{cases}
\end{aligned}

The shared exponent expshared is computed:

\begin{aligned}
max_{shared} &= \left\lfloor \frac{max_{clamped}}{2^{(exp'-B-N)}} + \frac{1}{2} \right\rfloor \\
exp_{shared} &= \begin{cases}
exp' & \text{for}\ 0 \leq max_{shared} < 2^N \\
exp'+1 & \text{for}\ max_{shared} = 2^N
\end{cases}
\end{aligned}

Finally, three integer values in the range 0 to 2^N − 1 are computed:

\begin{aligned}
red_{shared} &= \left\lfloor \frac{red_{clamped}}{2^{(exp_{shared}-B-N)}} + \frac{1}{2} \right\rfloor \\
green_{shared} &= \left\lfloor \frac{green_{clamped}}{2^{(exp_{shared}-B-N)}} + \frac{1}{2} \right\rfloor \\
blue_{shared} &= \left\lfloor \frac{blue_{clamped}}{2^{(exp_{shared}-B-N)}} + \frac{1}{2} \right\rfloor
\end{aligned}

Shared Exponent to RGB

A shared exponent color (redshared, greenshared, blueshared, expshared) is transformed to an RGB color (red, green, blue) as follows:

  • red = red_shared × 2^(exp_shared − B − N)
  • green = green_shared × 2^(exp_shared − B − N)
  • blue = blue_shared × 2^(exp_shared − B − N)

where:

  • N = 9 (number of mantissa bits per component)
  • B = 15 (exponent bias)

Texel Input Operations

Texel input instructions are SPIR-V image instructions that read from an image. Texel input operations are a set of steps that are performed on state, coordinates, and texel values while processing a texel input instruction, and which are common to some or all texel input instructions. They include the following steps, which are performed in the listed order:

  • Validation operations
    • Instruction/Sampler/Image View validation
    • Integer texel coordinate validation
    • Sparse validation
    • Layout validation
  • Format conversion
  • Texel replacement
  • Depth compare operation
  • Conversion to RGBA
  • Component swizzle

For texel input instructions involving multiple texels (for sampling or gathering), these steps are applied for each texel that is used in the instruction. Depending on the type of image instruction, other steps are conditionally performed between these steps or involving multiple coordinate or texel values.

If Chroma Reconstruction is implicit, Texel Filtering instead takes place during chroma reconstruction, before sampler Y′CBCR conversion occurs.

The operations described in block matching and weight image sampling are performed before Conversion to RGBA and Component swizzle.

Texel Input Validation Operations

Texel input validation operations inspect instruction, image, and sampler state, as well as coordinates, and in certain circumstances cause the texel value to be replaced or become undefined. The texel undergoes a series of validations.

Instruction/Sampler/Image View Validation

There are a number of cases where a SPIR-V instruction can mismatch with the sampler, the image view, or both, and a number of further cases where the sampler can mismatch with the image view. In such cases the value of the texel returned is undefined.

These cases include:

  • The sampler borderColor is an integer type and the image view format is not one of the VkFormat integer types or a stencil component of a depth/stencil format.
  • The sampler borderColor is a float type and the image view format is not one of the VkFormat float types or a depth component of a depth/stencil format.
  • The sampler borderColor is one of the opaque black colors (VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK or VK_BORDER_COLOR_INT_OPAQUE_BLACK) and the image view VkComponentSwizzle for any of the VkComponentMapping components is not the identity swizzle, and VkPhysicalDeviceBorderColorSwizzleFeaturesEXT::borderColorSwizzleFromImage feature is not enabled, and VkSamplerBorderColorComponentMappingCreateInfoEXT is not specified.
  • VkSamplerBorderColorComponentMappingCreateInfoEXT::components, if specified, has a component swizzle that does not match the component swizzle of the image view, and either component swizzle is not a form of identity swizzle.
  • VkSamplerBorderColorComponentMappingCreateInfoEXT::srgb, if specified, does not match the sRGB encoding of the image view.
  • The sampler borderColor is a custom color (VK_BORDER_COLOR_FLOAT_CUSTOM_EXT or VK_BORDER_COLOR_INT_CUSTOM_EXT) and the supplied VkSamplerCustomBorderColorCreateInfoEXT::customBorderColor is outside the bounds of the values representable in the image view’s format.
  • The sampler borderColor is a custom color (VK_BORDER_COLOR_FLOAT_CUSTOM_EXT or VK_BORDER_COLOR_INT_CUSTOM_EXT) and the image view VkComponentSwizzle for any of the VkComponentMapping components is not the identity swizzle, and VkPhysicalDeviceBorderColorSwizzleFeaturesEXT::borderColorSwizzleFromImage feature is not enabled, and VkSamplerBorderColorComponentMappingCreateInfoEXT is not specified.
  • The VkImageLayout of any subresource in the image view does not match the VkDescriptorImageInfo::imageLayout used to write the image descriptor.
  • The SPIR-V Image Format is not compatible with the image view’s format.
  • The sampler unnormalizedCoordinates is VK_TRUE and any of the limitations of unnormalized coordinates are violated.
  • The sampler was created with flags containing VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT and the image was not created with flags containing VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT.
  • The sampler was not created with flags containing VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT and the image was created with flags containing VK_IMAGE_CREATE_SUBSAMPLED_BIT_EXT.
  • The sampler was created with flags containing VK_SAMPLER_CREATE_SUBSAMPLED_BIT_EXT and is used with a function that is not OpImageSampleImplicitLod or OpImageSampleExplicitLod, or is used with operands Offset or ConstOffsets.
  • The SPIR-V instruction is one of the OpImage*Dref* instructions and the sampler compareEnable is VK_FALSE.
  • The SPIR-V instruction is not one of the OpImage*Dref* instructions and the sampler compareEnable is VK_TRUE.
  • The SPIR-V instruction is one of the OpImage*Dref* instructions, the image view format is one of the depth/stencil formats, and the image view aspect is not VK_IMAGE_ASPECT_DEPTH_BIT.
  • The SPIR-V instruction’s image variable’s properties are not compatible with the image view:
    • If the image view’s viewType is one of VK_IMAGE_VIEW_TYPE_1D_ARRAY, VK_IMAGE_VIEW_TYPE_2D_ARRAY, or VK_IMAGE_VIEW_TYPE_CUBE_ARRAY then the instruction must have Arrayed = 1. Otherwise the instruction must have Arrayed = 0.
    • If the image was created with VkImageCreateInfo::samples equal to VK_SAMPLE_COUNT_1_BIT, the instruction must have MS = 0.
    • If the image was created with VkImageCreateInfo::samples not equal to VK_SAMPLE_COUNT_1_BIT, the instruction must have MS = 1.
    • The Sampled Type of the OpTypeImage must match the SPIR-V Type.
    • The signedness of any read or sample operation must match the signedness of the image’s format.
  • If the image was created with VkImageCreateInfo::flags containing VK_IMAGE_CREATE_CORNER_SAMPLED_BIT_NV, the sampler addressing modes must only use a VkSamplerAddressMode of VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.
  • The SPIR-V instruction is OpImageSampleFootprintNV with Dim = 2D and addressModeU or addressModeV in the sampler is not VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.
  • The SPIR-V instruction is OpImageSampleFootprintNV with Dim = 3D and addressModeU, addressModeV, or addressModeW in the sampler is not VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.
  • The sampler was created with a specified VkSamplerCustomBorderColorCreateInfoEXT::format which does not match the VkFormat of the image view(s) it is sampling.
  • The sampler is sampling an image view of VK_FORMAT_B4G4R4A4_UNORM_PACK16, VK_FORMAT_B5G6R5_UNORM_PACK16, or VK_FORMAT_B5G5R5A1_UNORM_PACK16 format without a specified VkSamplerCustomBorderColorCreateInfoEXT::format.

Only OpImageSample* and OpImageSparseSample* can be used with a sampler or image view that enables sampler Y′CBCR conversion.

OpImageFetch, OpImageSparseFetch, OpImage*Gather, and OpImageSparse*Gather must not be used with a sampler or image view that enables sampler Y′CBCR conversion.

The ConstOffset and Offset operands must not be used with a sampler or image view that enables sampler Y′CBCR conversion.

If the underlying VkImage format has an X component in its format description, undefined values are read from those bits.

If the VkImage format and VkImageView format are the same, these bits will be unused by format conversion and this will have no effect. However, if the VkImageView format is different, then some bits of the result may be undefined. For example, when a VK_FORMAT_R10X6_UNORM_PACK16 VkImage is sampled via a VK_FORMAT_R16_UNORM VkImageView, the low 6 bits of the value before format conversion are undefined and format conversion may return a range of different values.

Some implementations will return undefined values in the case where a sampler uses a VkSamplerAddressMode of VK_SAMPLER_ADDRESS_MODE_MIRRORED_REPEAT, the sampler is used with operands Offset, ConstOffset, or ConstOffsets, and the value of the offset is larger than or equal to the corresponding width, height, or depth of any accessed image level.

This behavior was not tested prior to Vulkan conformance test suite version 1.3.8.0. Affected implementations will have a conformance test waiver for this issue.

Integer Texel Coordinate Validation

Integer texel coordinates are validated against the size of the image level, and the number of layers and number of samples in the image. For SPIR-V instructions that use integer texel coordinates, this is performed directly on the integer coordinates. For instructions that use normalized or unnormalized texel coordinates, this is performed on the coordinates that result after conversion to integer texel coordinates.

If the integer texel coordinates do not satisfy all of the conditions

  • 0 ≤ i < ws
  • 0 ≤ j < hs
  • 0 ≤ k < ds
  • 0 ≤ l < layers
  • 0 ≤ n < samples

where:

  • ws = width of the image level
  • hs = height of the image level
  • ds = depth of the image level
  • layers = number of layers in the image
  • samples = number of samples per texel in the image

then the texel fails integer texel coordinate validation.
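The conditions above amount to a straightforward bounds check; a sketch (the helper name is hypothetical):

```c
#include <stdbool.h>

/* Sketch: integer texel coordinate validation against the level size,
   layer count, and sample count listed above. */
bool texel_coords_valid(int i, int j, int k, int l, int n,
                        int ws, int hs, int ds, int layers, int samples) {
    return 0 <= i && i < ws &&
           0 <= j && j < hs &&
           0 <= k && k < ds &&
           0 <= l && l < layers &&
           0 <= n && n < samples;
}
```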

There are four cases to consider:

  1. Valid Texel Coordinates
    • If the texel coordinates pass validation (that is, the coordinates lie within the image),

    then the texel value comes from the value in image memory.
  2. Border Texel
    • If the texel coordinates fail validation, and
    • If the read is the result of an image sample instruction or image gather instruction, and
    • If the image is not a cube image, or if a sampler created with VK_SAMPLER_CREATE_NON_SEAMLESS_CUBE_MAP_BIT_EXT is used,

    then the texel is a border texel and texel replacement is performed.
  3. Invalid Texel
    • If the texel coordinates fail validation, and
    • If the read is the result of an image fetch instruction, image read instruction, or atomic instruction,

    then the texel is an invalid texel and texel replacement is performed.
  4. Cube Map Edge or Corner
    Otherwise the texel coordinates lie beyond the edges or corners of the selected cube map face, and Cube map edge handling is performed.

Cube Map Edge Handling

If the texel coordinates lie beyond the edges or corners of the selected cube map face (as described in the prior section), the following steps are performed. Note that this does not occur when using VK_FILTER_NEAREST filtering within a mip level, since VK_FILTER_NEAREST is treated as using VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.

  • Cube Map Edge Texel
    • If the texel lies beyond the selected cube map face in either only i or only j, then the coordinates (i,j) and the array layer l are transformed to select the adjacent texel from the appropriate neighboring face.
  • Cube Map Corner Texel
    • If the texel lies beyond the selected cube map face in both i and j, then there is no unique neighboring face from which to read that texel. The texel should be replaced by the average of the three values of the adjacent texels in each incident face. However, implementations may replace the cube map corner texel by other methods. The methods are subject to the constraint that for linear filtering if the three available texels have the same value, the resulting filtered texel must have that value, and for cubic filtering if the twelve available samples have the same value, the resulting filtered texel must have that value.
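The averaging method described for corner texels can be sketched per component as (helper name hypothetical; implementations may use other methods, as noted above):

```c
/* Sketch: replace a cube map corner texel with the average of the three
   adjacent texels from the incident faces, applied per component. If all
   three inputs are equal, the result equals that value, satisfying the
   filtering constraint described above. */
float cube_corner_texel(float t0, float t1, float t2) {
    return (t0 + t1 + t2) / 3.0f;
}
```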

Sparse Validation

If the texel reads from an unbound region of a sparse image, the texel is a sparse unbound texel, and processing continues with texel replacement.

Layout Validation

If all planes of a disjoint multi-planar image are not in the same image layout, the image must not be sampled with sampler Y′CBCR conversion enabled.

Format Conversion

Texels undergo a format conversion from the VkFormat of the image view to a vector of either floating-point or signed or unsigned integer components, with the number of components based on the number of components present in the format.

  • Color formats have one, two, three, or four components, according to the format.
  • Depth/stencil formats are one component. The depth or stencil component is selected by the aspectMask of the image view.

Each component is converted based on its type and size (as defined in the Format Definition section for each VkFormat), using the appropriate equations in 16-Bit Floating-Point Numbers, Unsigned 11-Bit Floating-Point Numbers, Unsigned 10-Bit Floating-Point Numbers, Fixed-Point Data Conversion, and Shared Exponent to RGB. Signed integer components smaller than 32 bits are sign-extended.

If the image view format is sRGB, the color components are first converted as if they are UNORM, and then sRGB to linear conversion is applied to the R, G, and B components as described in the sRGB EOTF section of the Khronos Data Format Specification. The A component, if present, is unchanged.

If VkSamplerYcbcrConversionYcbcrDegammaCreateInfoQCOM::enableYDegamma is equal to VK_TRUE, then sRGB to linear conversion is applied to the G component as described in the sRGB EOTF section of the Khronos Data Format Specification. If VkSamplerYcbcrConversionYcbcrDegammaCreateInfoQCOM::enableCbCrDegamma is equal to VK_TRUE, then sRGB to linear conversion is applied to the R and B components as described in the sRGB EOTF section of the Khronos Data Format Specification. The A component, if present, is unchanged.

If the image view format is block-compressed, then the texel value is first decoded, then converted based on the type and number of components defined by the compressed format.

Texel Replacement

A texel is replaced if it is one (and only one) of:

  • a border texel,
  • an invalid texel, or
  • a sparse unbound texel.

Border texels are replaced with a value based on the image format and the borderColor of the sampler. The border color is:

Table 24. Border Color B (U is the custom border color, VkSamplerCustomBorderColorCreateInfoEXT::customBorderColor)

Sampler borderColor → Corresponding Border Color

  • VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK: [Br, Bg, Bb, Ba] = [0.0, 0.0, 0.0, 0.0]
  • VK_BORDER_COLOR_FLOAT_OPAQUE_BLACK: [Br, Bg, Bb, Ba] = [0.0, 0.0, 0.0, 1.0]
  • VK_BORDER_COLOR_FLOAT_OPAQUE_WHITE: [Br, Bg, Bb, Ba] = [1.0, 1.0, 1.0, 1.0]
  • VK_BORDER_COLOR_INT_TRANSPARENT_BLACK: [Br, Bg, Bb, Ba] = [0, 0, 0, 0]
  • VK_BORDER_COLOR_INT_OPAQUE_BLACK: [Br, Bg, Bb, Ba] = [0, 0, 0, 1]
  • VK_BORDER_COLOR_INT_OPAQUE_WHITE: [Br, Bg, Bb, Ba] = [1, 1, 1, 1]
  • VK_BORDER_COLOR_FLOAT_CUSTOM_EXT: [Br, Bg, Bb, Ba] = [Ur, Ug, Ub, Ua]
  • VK_BORDER_COLOR_INT_CUSTOM_EXT: [Br, Bg, Bb, Ba] = [Ur, Ug, Ub, Ua]

The custom border color (U) may be rounded by implementations prior to texel replacement, but the error introduced by such a rounding must not exceed one ULP of the image’s format.

The names VK_BORDER_COLOR_*_TRANSPARENT_BLACK, VK_BORDER_COLOR_*_OPAQUE_BLACK, and VK_BORDER_COLOR_*_OPAQUE_WHITE are meant to describe which components are zeros and ones in the vocabulary of compositing, and are not meant to imply that the numerical value of VK_BORDER_COLOR_INT_OPAQUE_WHITE is a saturating value for integers.

This border color is substituted for the texel value according to the number of components in the image format:

Table 25. Border Texel Components After Replacement

Texel Aspect or Format → Component Assignment

  • Depth aspect: D = Br
  • Stencil aspect: S = Br †
  • One component color format: Colorr = Br
  • Two component color format: [Colorr, Colorg] = [Br, Bg]
  • Three component color format: [Colorr, Colorg, Colorb] = [Br, Bg, Bb]
  • Four component color format: [Colorr, Colorg, Colorb, Colora] = [Br, Bg, Bb, Ba]
  • Single component alpha format: [Colorr, Colorg, Colorb, Colora] = [0, 0, 0, Ba]

† S = Bg may be substituted as the replacement method by the implementation when VkSamplerCreateInfo::borderColor is VK_BORDER_COLOR_INT_CUSTOM_EXT and VkSamplerCustomBorderColorCreateInfoEXT::format is VK_FORMAT_UNDEFINED. Implementations should use S = Br as the replacement method.

The value returned by a read of an invalid texel is undefined, unless that read operation is from a buffer resource and the robustBufferAccess feature is enabled. In that case, an invalid texel is replaced as described by the robustBufferAccess feature. If the access is to an image resource and the x, y, z, or layer coordinate validation fails and the robustImageAccess feature is enabled, then zero must be returned for the R, G, and B components, if present. Either zero or one must be returned for the A component, if present. If the robustImageAccess2 feature is enabled, zero values must be returned. If only the sample index was invalid, the values returned are undefined.

Additionally, if the robustImageAccess feature is enabled, but the robustImageAccess2 feature is not, any invalid texels may be expanded to four components prior to texel replacement. This means that components not present in the image format may be replaced with 0 or may undergo conversion to RGBA as normal.

Loads from a null descriptor return a four component color value of all zeros. However, for storage images and storage texel buffers using an explicit SPIR-V Image Format, loads from a null descriptor may return an alpha value of 1 (float or integer, depending on format) if the format does not include alpha.

If the VkPhysicalDeviceSparseProperties::residencyNonResidentStrict property is VK_TRUE, a sparse unbound texel is replaced with 0 or 0.0 values for integer and floating-point components of the image format, respectively.

If residencyNonResidentStrict is VK_FALSE, the value of the sparse unbound texel is undefined.

Depth Compare Operation

If the image view has a depth/stencil format, the depth component is selected by the aspectMask, and the operation is an OpImage*Dref* instruction, a depth comparison is performed. The result is 1.0 if the comparison evaluates to true, and 0.0 otherwise. This value replaces the depth component D.

The compare operation is selected by the VkCompareOp value set by VkSamplerCreateInfo::compareOp. The reference value from the SPIR-V operand Dref and the texel depth value Dtex are used as the reference and test values, respectively, in that operation.

If the image being sampled has an unsigned normalized fixed-point format, then Dref is clamped to [0,1] before the compare operation.
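A sketch of the compare step, with Dref as the reference value and Dtex as the test value (the enum and function names are hypothetical, mirroring VkCompareOp):

```c
#include <stdbool.h>

/* Hypothetical compare operators mirroring VkCompareOp. */
typedef enum { CMP_NEVER, CMP_LESS, CMP_EQUAL, CMP_LESS_OR_EQUAL,
               CMP_GREATER, CMP_NOT_EQUAL, CMP_GREATER_OR_EQUAL,
               CMP_ALWAYS } CmpOp;

/* Sketch: the depth compare operation. Returns 1.0 if the comparison
   evaluates to true, 0.0 otherwise. Dref is clamped to [0,1] first for
   unsigned normalized fixed-point formats. */
float depth_compare(CmpOp op, float dref, float dtex, bool unorm_format) {
    if (unorm_format)
        dref = dref < 0.0f ? 0.0f : (dref > 1.0f ? 1.0f : dref);
    bool r;
    switch (op) {
    case CMP_NEVER:            r = false;        break;
    case CMP_LESS:             r = dref <  dtex; break;
    case CMP_EQUAL:            r = dref == dtex; break;
    case CMP_LESS_OR_EQUAL:    r = dref <= dtex; break;
    case CMP_GREATER:          r = dref >  dtex; break;
    case CMP_NOT_EQUAL:        r = dref != dtex; break;
    case CMP_GREATER_OR_EQUAL: r = dref >= dtex; break;
    default:                   r = true;         break; /* CMP_ALWAYS */
    }
    return r ? 1.0f : 0.0f;
}
```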

Conversion to RGBA

The texel is expanded from one, two, or three components to four components based on the image base color:

Table 26. Texel Color After Conversion To RGBA

Texel Aspect or Format → RGBA Color

  • Depth aspect: [Colorr, Colorg, Colorb, Colora] = [D, 0, 0, one]
  • Stencil aspect: [Colorr, Colorg, Colorb, Colora] = [S, 0, 0, one]
  • One component color format: [Colorr, Colorg, Colorb, Colora] = [Colorr, 0, 0, one]
  • Two component color format: [Colorr, Colorg, Colorb, Colora] = [Colorr, Colorg, 0, one]
  • Three component color format: [Colorr, Colorg, Colorb, Colora] = [Colorr, Colorg, Colorb, one]
  • Four component color format: [Colorr, Colorg, Colorb, Colora] = [Colorr, Colorg, Colorb, Colora]
  • One alpha component color format: [Colorr, Colorg, Colorb, Colora] = [0, 0, 0, Colora]

where one = 1.0f for floating-point formats and depth aspects, and one = 1 for integer formats and stencil aspects.
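For the color-format rows of Table 26, the expansion can be sketched as follows (floating-point case; the helper name is hypothetical, and the one-alpha-component case would need separate handling):

```c
/* Sketch: expand a 1-4 component color texel to RGBA per Table 26,
   with one = 1.0f for floating-point formats. Missing color components
   become zero and a missing alpha becomes one. */
void expand_to_rgba(const float *texel, int components, float rgba[4]) {
    rgba[0] = 0.0f; rgba[1] = 0.0f; rgba[2] = 0.0f; rgba[3] = 1.0f;
    for (int c = 0; c < components; ++c)
        rgba[c] = texel[c];
}
```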

Component Swizzle

All texel input instructions apply a swizzle based on the VkComponentSwizzle enums in the components member of the VkImageViewCreateInfo structure used to create the image view.

The swizzle can rearrange the components of the texel, or substitute zero or one for any components. It is defined as follows for each color component:

\begin{aligned}
Color'_{component} = \begin{cases}
Color_r & \text{for RED swizzle} \\
Color_g & \text{for GREEN swizzle} \\
Color_b & \text{for BLUE swizzle} \\
Color_a & \text{for ALPHA swizzle} \\
0 & \text{for ZERO swizzle} \\
one & \text{for ONE swizzle} \\
identity & \text{for IDENTITY swizzle}
\end{cases}
\end{aligned}

where:

\begin{aligned}
one &= \begin{cases}
1.0\text{f} & \text{for floating-point components} \\
1 & \text{for integer components}
\end{cases} \\
identity &= \begin{cases}
Color_r & \text{for}\ component = r \\
Color_g & \text{for}\ component = g \\
Color_b & \text{for}\ component = b \\
Color_a & \text{for}\ component = a
\end{cases}
\end{aligned}

If the border color is one of the VK_BORDER_COLOR_*_OPAQUE_BLACK enums and the VkComponentSwizzle is not the identity swizzle for all components, the value of the texel after swizzle is undefined.

If the image view has a depth/stencil format, the VkComponentSwizzle is VK_COMPONENT_SWIZZLE_ONE, and VkPhysicalDeviceMaintenance5PropertiesKHR::depthStencilSwizzleOneSupport is not VK_TRUE, the value of the texel after swizzle is undefined.

Sparse Residency

OpImageSparse* instructions return a structure which includes a residency code indicating whether any texels accessed by the instruction are sparse unbound texels. This code can be interpreted by the OpImageSparseTexelsResident instruction which converts the residency code to a boolean value.

Chroma Reconstruction

In some color models, the color representation is defined in terms of monochromatic light intensity (often called luma) and color differences relative to this intensity, often called chroma. It is common for color models other than RGB to represent the chroma components at lower spatial resolution than the luma component. This approach is used to take advantage of the eye’s lower spatial sensitivity to color compared with its sensitivity to brightness. Less commonly, the same approach is used with additive color, since the green component dominates the eye’s sensitivity to light intensity and the spatial sensitivity to color introduced by red and blue is lower.

Lower-resolution components are downsampled by resizing them to a lower spatial resolution than the component representing luminance. This process is also commonly known as chroma subsampling. There is one luminance sample in each texture texel, but each chrominance sample may be shared among several texels in one or both texture dimensions.

  • _444 formats do not spatially downsample chroma values compared with luma: there are unique chroma samples for each texel.
  • _422 formats have downsampling in the x dimension (corresponding to u or s coordinates): they are sampled at half the resolution of luma in that dimension.
  • _420 formats have downsampling in the x dimension (corresponding to u or s coordinates) and the y dimension (corresponding to v or t coordinates): they are sampled at half the resolution of luma in both dimensions.
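The subsampling factors above determine the extent of the chroma planes relative to the luma plane. A minimal sketch (function name is illustrative; extents are assumed even, as Vulkan requires for subsampled dimensions):

```python
def chroma_plane_extent(width, height, subsampling):
    """Extent of the chroma (R/B) planes for a given luma extent.

    subsampling -- "444", "422", or "420"; chroma is stored at half
    resolution in each subsampled dimension.
    """
    if subsampling == "444":
        return (width, height)           # no subsampling
    if subsampling == "422":
        return (width // 2, height)      # halved in x only
    if subsampling == "420":
        return (width // 2, height // 2) # halved in x and y
    raise ValueError(subsampling)
```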

The process of reconstructing a full color value for texture access involves accessing both chroma and luma values at the same location. To generate the color accurately, the values of the lower-resolution components at the location of the luma samples are reconstructed from the lower-resolution sample locations, an operation known here as chroma reconstruction irrespective of the actual color model.

The location of the chroma samples relative to the luma coordinates is determined by the xChromaOffset and yChromaOffset members of the VkSamplerYcbcrConversionCreateInfo structure used to create the sampler Y′CBCR conversion.

The following diagrams show the relationship between unnormalized (u,v) coordinates and (i,j) integer texel positions in the luma component (shown in black, with circles showing integer sample positions) and the texel coordinates of reduced-resolution chroma components, shown as crosses in red.

If the chroma values are reconstructed at the locations of the luma samples by means of interpolation, chroma samples from outside the image bounds are needed; these are determined according to Wrapping Operation. These diagrams represent this by showing the bounds of the chroma texel extending beyond the image bounds, and including additional chroma sample positions where required for interpolation. The limits of each sample for NEAREST sampling are shown as a grid.

(figure: chroma sample positions, 422 cosited)

(figure: chroma sample positions, 422 midpoint)

(figure: chroma sample positions, 420, x cosited, y cosited)

(figure: chroma sample positions, 420, x midpoint, y cosited)

(figure: chroma sample positions, 420, x cosited, y midpoint)

(figure: chroma sample positions, 420, x midpoint, y midpoint)

Reconstruction is implemented in one of two ways:

If the format of the image that is to be sampled sets VK_FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT, or the VkSamplerYcbcrConversionCreateInfo’s forceExplicitReconstruction is VK_TRUE, reconstruction is performed as an explicit step independent of filtering, described in the Explicit Reconstruction section.

If the format of the image that is to be sampled does not set VK_FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT and if the VkSamplerYcbcrConversionCreateInfo’s forceExplicitReconstruction is VK_FALSE, reconstruction is performed as an implicit part of filtering prior to color model conversion, with no separate post-conversion texel filtering step, as described in the Implicit Reconstruction section.

Explicit Reconstruction

  • If the chromaFilter member of the VkSamplerYcbcrConversionCreateInfo structure is VK_FILTER_NEAREST:
    • If the format’s R and B components are reduced in resolution in just width by a factor of two relative to the G component (i.e. this is a _422 format), the τ_{ijk}[level] values accessed by texel filtering are reconstructed as follows:

$$
\begin{aligned}
\tau'_R(i, j) & = \tau_R(\lfloor i \times 0.5 \rfloor, j)[level] \\
\tau'_B(i, j) & = \tau_B(\lfloor i \times 0.5 \rfloor, j)[level]
\end{aligned}
$$
    • If the format’s R and B components are reduced in resolution in width and height by a factor of two relative to the G component (i.e. this is a _420 format), the τ_{ijk}[level] values accessed by texel filtering are reconstructed as follows:

$$
\begin{aligned}
\tau'_R(i, j) & = \tau_R(\lfloor i \times 0.5 \rfloor, \lfloor j \times 0.5 \rfloor)[level] \\
\tau'_B(i, j) & = \tau_B(\lfloor i \times 0.5 \rfloor, \lfloor j \times 0.5 \rfloor)[level]
\end{aligned}
$$

      xChromaOffset and yChromaOffset have no effect if chromaFilter is VK_FILTER_NEAREST for explicit reconstruction.

  • If the chromaFilter member of the VkSamplerYcbcrConversionCreateInfo structure is VK_FILTER_LINEAR:
    • If the format’s R and B components are reduced in resolution in just width by a factor of two relative to the G component (i.e. this is a _422 format):
      • If xChromaOffset is VK_CHROMA_LOCATION_COSITED_EVEN:

$$
\tau'_{RB}(i,j) =
\begin{cases}
\tau_{RB}(\lfloor i \times 0.5 \rfloor, j)[level], & 0.5 \times i = \lfloor 0.5 \times i \rfloor \\
0.5 \times \tau_{RB}(\lfloor i \times 0.5 \rfloor, j)[level] +
0.5 \times \tau_{RB}(\lfloor i \times 0.5 \rfloor + 1, j)[level], & 0.5 \times i \neq \lfloor 0.5 \times i \rfloor
\end{cases}
$$
      • If xChromaOffset is VK_CHROMA_LOCATION_MIDPOINT:

$$
\tau'_{RB}(i,j) =
\begin{cases}
0.25 \times \tau_{RB}(\lfloor i \times 0.5 \rfloor - 1, j)[level] +
0.75 \times \tau_{RB}(\lfloor i \times 0.5 \rfloor, j)[level], & 0.5 \times i = \lfloor 0.5 \times i \rfloor \\
0.75 \times \tau_{RB}(\lfloor i \times 0.5 \rfloor, j)[level] +
0.25 \times \tau_{RB}(\lfloor i \times 0.5 \rfloor + 1, j)[level], & 0.5 \times i \neq \lfloor 0.5 \times i \rfloor
\end{cases}
$$
    • If the format’s R and B components are reduced in resolution in width and height by a factor of two relative to the G component (i.e. this is a _420 format), a similar relationship applies. Due to the number of options, these formulae are expressed more concisely as follows:

$$
\begin{aligned}
i_{RB} & = \begin{cases}
0.5 \times i & \text{xChromaOffset = COSITED\_EVEN} \\
0.5 \times (i - 0.5) & \text{xChromaOffset = MIDPOINT}
\end{cases} \\
j_{RB} & = \begin{cases}
0.5 \times j & \text{yChromaOffset = COSITED\_EVEN} \\
0.5 \times (j - 0.5) & \text{yChromaOffset = MIDPOINT}
\end{cases} \\
\\
i_{floor} & = \lfloor i_{RB} \rfloor \\
j_{floor} & = \lfloor j_{RB} \rfloor \\
\\
i_{frac} & = i_{RB} - i_{floor} \\
j_{frac} & = j_{RB} - j_{floor}
\end{aligned}
$$

$$
\begin{aligned}
\tau'_{RB}(i,j) = \; & \tau_{RB}(i_{floor}, j_{floor})[level] & \times \; (1 - i_{frac}) & \times \; (1 - j_{frac}) \; + \\
& \tau_{RB}(1 + i_{floor}, j_{floor})[level] & \times \; (i_{frac}) & \times \; (1 - j_{frac}) \; + \\
& \tau_{RB}(i_{floor}, 1 + j_{floor})[level] & \times \; (1 - i_{frac}) & \times \; (j_{frac}) \; + \\
& \tau_{RB}(1 + i_{floor}, 1 + j_{floor})[level] & \times \; (i_{frac}) & \times \; (j_{frac})
\end{aligned}
$$

In the case where the texture itself is bilinearly interpolated as described in Texel Filtering, thus requiring four full-color samples for the filtering operation, and where the reconstruction of these samples uses bilinear interpolation in the chroma components due to chromaFilter=VK_FILTER_LINEAR, up to nine chroma samples may be required, depending on the sample location.
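As a minimal illustration of the explicit path, the NEAREST-chroma reconstruction for a _420 format (defined above) can be sketched as follows; `chroma_plane` is a hypothetical plane accessor, not a Vulkan API object:

```python
def reconstruct_nearest_420(chroma_plane, i, j):
    """Explicit chroma reconstruction, chromaFilter = NEAREST, _420 format.

    chroma_plane -- 2D list indexed as chroma_plane[jc][ic]
    (i, j)       -- integer texel coordinates in the *luma* grid
    Each chroma sample is replicated over a 2x2 block of luma texels;
    xChromaOffset/yChromaOffset have no effect in this mode.
    """
    ic = int(i * 0.5)  # floor(i * 0.5) for non-negative i
    jc = int(j * 0.5)
    return chroma_plane[jc][ic]
```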

Implicit Reconstruction

Implicit reconstruction takes place by interpolating the samples as required by the filter settings of the sampler, except that chromaFilter takes precedence for the chroma samples.

If chromaFilter is VK_FILTER_NEAREST, an implementation may behave as if xChromaOffset and yChromaOffset were both VK_CHROMA_LOCATION_MIDPOINT, irrespective of the values set.

This will not have any visible effect if the locations of the luma samples coincide with the location of the samples used for rasterization.

The sample coordinates are adjusted by the downsample factor of the component (such that, for example, the sample coordinates are divided by two if the component has a downsample factor of two relative to the luma component):

$$
\begin{aligned}
u'_{RB}\ (422/420) & = \begin{cases}
0.5 \times (u + 0.5), & \text{xChromaOffset = COSITED\_EVEN} \\
0.5 \times u, & \text{xChromaOffset = MIDPOINT}
\end{cases} \\
v'_{RB}\ (420) & = \begin{cases}
0.5 \times (v + 0.5), & \text{yChromaOffset = COSITED\_EVEN} \\
0.5 \times v, & \text{yChromaOffset = MIDPOINT}
\end{cases}
\end{aligned}
$$
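The coordinate adjustment above, for one dimension, can be sketched as (the offset strings are illustrative stand-ins for the VkChromaLocation values):

```python
def adjust_chroma_coord(u, x_chroma_offset):
    """Adjust an unnormalized luma u coordinate to the chroma sample grid
    for a component downsampled by two (422/420)."""
    if x_chroma_offset == "COSITED_EVEN":
        return 0.5 * (u + 0.5)
    if x_chroma_offset == "MIDPOINT":
        return 0.5 * u
    raise ValueError(x_chroma_offset)
```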

Sampler Y′CBCR Conversion

Sampler Y′CBCR conversion performs the following operations, which an implementation may combine into a single mathematical operation:

Sampler Y′CBCR Range Expansion

Sampler Y′CBCR range expansion is applied to color component values after all texel input operations which are not specific to sampler Y′CBCR conversion. For example, the input values to this stage have been converted using the normal format conversion rules.

The input values to this stage may have been converted using sRGB to linear conversion if ycbcrDegamma is enabled.

Sampler Y′CBCR range expansion is not applied if ycbcrModel is VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY. That is, the shader receives the vector C'rgba as output by the Component Swizzle stage without further modification.

For other values of ycbcrModel, range expansion is applied to the texel component values output by the Component Swizzle defined by the components member of VkSamplerYcbcrConversionCreateInfo. Range expansion applies independently to each component of the image. For the purposes of range expansion and Y′CBCR model conversion, the R and B components contain color difference (chroma) values and the G component contains luma. The A component is not modified by sampler Y′CBCR range expansion.

The range expansion to be applied is defined by the ycbcrRange member of the VkSamplerYcbcrConversionCreateInfo structure:

  • If ycbcrRange is VK_SAMPLER_YCBCR_RANGE_ITU_FULL, the following transformations are applied:

$$
\begin{aligned}
Y' & = C'_{rgba}[G] \\
C_B & = C'_{rgba}[B] - \frac{2^{n-1}}{2^n - 1} \\
C_R & = C'_{rgba}[R] - \frac{2^{n-1}}{2^n - 1}
\end{aligned}
$$

    These formulae correspond to the full range encoding in the Quantization schemes chapter of the Khronos Data Format Specification.

    Should any future amendments be made to the ITU specifications from which these equations are derived, the formulae used by Vulkan may also be updated to maintain parity.

  • If ycbcrRange is VK_SAMPLER_YCBCR_RANGE_ITU_NARROW, the following transformations are applied:

$$
\begin{aligned}
Y' & = \frac{C'_{rgba}[G] \times (2^n - 1) - 16 \times 2^{n-8}}{219 \times 2^{n-8}} \\
C_B & = \frac{C'_{rgba}[B] \times (2^n - 1) - 128 \times 2^{n-8}}{224 \times 2^{n-8}} \\
C_R & = \frac{C'_{rgba}[R] \times (2^n - 1) - 128 \times 2^{n-8}}{224 \times 2^{n-8}}
\end{aligned}
$$

    These formulae correspond to the narrow range encoding in the Quantization schemes chapter of the Khronos Data Format Specification.

  • n is the bit-depth of the components in the format.

The precision of the operations performed during range expansion must be at least that of the source format.

An implementation may clamp the results of these range expansion operations such that Y′ falls in the range [0,1], and/or such that CB and CR fall in the range [-0.5,0.5].
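Both range expansions above can be sketched numerically as follows (a sketch; clamping of Y′, CB, and CR is omitted since it is optional):

```python
def range_expand(c_rgba, n, narrow):
    """Y'CbCr range expansion of normalized component values.

    c_rgba -- dict with "R", "G", "B" in [0, 1] (post-swizzle values)
    n      -- bit depth of the format's components
    narrow -- True for ITU narrow range, False for ITU full range
    """
    scale = (1 << n) - 1  # 2^n - 1
    if narrow:
        y  = (c_rgba["G"] * scale - 16 * 2 ** (n - 8)) / (219 * 2 ** (n - 8))
        cb = (c_rgba["B"] * scale - 128 * 2 ** (n - 8)) / (224 * 2 ** (n - 8))
        cr = (c_rgba["R"] * scale - 128 * 2 ** (n - 8)) / (224 * 2 ** (n - 8))
    else:
        y  = c_rgba["G"]
        cb = c_rgba["B"] - 2 ** (n - 1) / scale
        cr = c_rgba["R"] - 2 ** (n - 1) / scale
    return y, cb, cr
```

For an 8-bit narrow-range format this maps luma code 16 to Y′ = 0, code 235 to Y′ = 1, and chroma code 128 to 0, matching the ITU narrow-range encoding.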

Sampler Y′CBCR Model Conversion

The range-expanded values are converted between color models, according to the color model conversion specified in the ycbcrModel member:

VK_SAMPLER_YCBCR_MODEL_CONVERSION_RGB_IDENTITY

The color components are not modified by the color model conversion since they are assumed already to represent the desired color model in which the shader is operating; Y′CBCR range expansion is also ignored.

VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_IDENTITY

The color components are not modified by the color model conversion and are assumed to be treated as though in Y′CBCR form both in memory and in the shader; Y′CBCR range expansion is applied to the components as for other Y′CBCR models, with the vector (CR,Y′,CB,A) provided to the shader.

VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_709

The color components are transformed from a Y′CBCR representation to an R′G′B′ representation as described in the BT.709 Y′CBCR conversion section of the Khronos Data Format Specification.

VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_601

The color components are transformed from a Y′CBCR representation to an R′G′B′ representation as described in the BT.601 Y′CBCR conversion section of the Khronos Data Format Specification.

VK_SAMPLER_YCBCR_MODEL_CONVERSION_YCBCR_2020

The color components are transformed from a Y′CBCR representation to an R′G′B′ representation as described in the BT.2020 Y′CBCR conversion section of the Khronos Data Format Specification.

In this operation, each output component is dependent on each input component.

An implementation may clamp the R′G′B′ results of these conversions to the range [0,1].

The precision of the operations performed during model conversion must be at least that of the source format.

The alpha component is not modified by these model conversions.
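As an illustration of the BT.709 case, the conversion can be derived from the BT.709 luma constants Kr = 0.2126 and Kb = 0.0722. This is a sketch under that assumption; the normative form is the matrix given in the Khronos Data Format Specification:

```python
# BT.709 Y'CbCr -> R'G'B' conversion, derived from Kr = 0.2126 and
# Kb = 0.0722 (sketch; see the Khronos Data Format Specification for
# the normative definition).
KR, KB = 0.2126, 0.0722
KG = 1.0 - KR - KB

def ycbcr709_to_rgb(y, cb, cr):
    # Each output depends on every input component.
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = y - (2.0 * KB * (1.0 - KB) / KG) * cb \
          - (2.0 * KR * (1.0 - KR) / KG) * cr
    return r, g, b
```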

Sampling operations in a non-linear color space can introduce color and intensity shifts at sharp transition boundaries. To avoid this issue, the technically precise color correction sequence described in the Introduction to Color Conversions chapter of the Khronos Data Format Specification may be performed as follows:

The additional calculations and, especially, additional number of sampling operations in the VK_FILTER_LINEAR case can be expected to have a performance impact compared with using the outputs directly. Since the variations from correct results are subtle for most content, the application author should determine whether a more costly implementation is strictly necessary.

If chromaFilter, and minFilter or magFilter are both VK_FILTER_NEAREST, these operations are redundant and sampling using sampler Y′CBCR conversion at the desired sample coordinates will produce the correct results without further processing.

Texel Output Operations

Texel output instructions are SPIR-V image instructions that write to an image. Texel output operations are a set of steps that are performed on state, coordinates, and texel values while processing a texel output instruction, and which are common to some or all texel output instructions. They include the following steps, which are performed in the listed order:

Texel Output Validation Operations

Texel output validation operations inspect instruction/image state or coordinates, and in certain circumstances cause the write to have no effect. There are a series of validations that the texel undergoes.

Texel Format Validation

If the image format of the OpTypeImage is not compatible with the VkImageView’s format, the write causes the contents of the image’s memory to become undefined.

Texel Type Validation

If the Sampled Type of the OpTypeImage does not match the SPIR-V Type, the write causes the value of the texel to become undefined. For integer types, if the signedness of the access does not match the signedness of the accessed resource, the write causes the value of the texel to become undefined.

Integer Texel Coordinate Validation

The integer texel coordinates are validated according to the same rules as for texel input coordinate validation.

If the texel fails integer texel coordinate validation, then the write has no effect.

Sparse Texel Operation

If the texel attempts to write to an unbound region of a sparse image, the texel is a sparse unbound texel. In such a case, if the VkPhysicalDeviceSparseProperties::residencyNonResidentStrict property is VK_TRUE, the sparse unbound texel write has no effect. If residencyNonResidentStrict is VK_FALSE, the write may have a side effect that becomes visible to other accesses to unbound texels in any resource, but will not be visible to any device memory allocated by the application.

Texel Output Format Conversion

If the image format is sRGB, a linear to sRGB conversion is applied to the R, G, and B components as described in the sRGB EOTF section of the Khronos Data Format Specification. The A component, if present, is unchanged.

Texels then undergo a format conversion from the floating-point, signed, or unsigned integer type of the texel data to the VkFormat of the image view. If the number of components in the texel data is larger than the number of components in the format, additional components are discarded.

Each component is converted based on its type and size (as defined in the Format Definition section for each VkFormat). Floating-point outputs are converted as described in Floating-Point Format Conversions and Fixed-Point Data Conversion. Integer outputs are converted such that their value is preserved. The converted value of any integer that cannot be represented in the target format is undefined.

If the VkImageView format has an X component in its format description, undefined values are written to those bits.

If the underlying VkImage format has an X component in its format description, undefined values are also written to those bits, even if result format conversion produces a valid value for those bits because the VkImageView format is different.

Normalized Texel Coordinate Operations

If the image sampler instruction provides normalized texel coordinates, some of the following operations are performed.

Projection Operation

For Proj image operations, the normalized texel coordinates (s,t,r,q,a) and (if present) the Dref coordinate are transformed as follows:

$$
\begin{aligned}
s & = \frac{s}{q}, & \text{for 1D, 2D, or 3D image} \\
t & = \frac{t}{q}, & \text{for 2D or 3D image} \\
r & = \frac{r}{q}, & \text{for 3D image} \\
D_{ref} & = \frac{D_{ref}}{q}, & \text{if provided}
\end{aligned}
$$
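The projection operation amounts to a component-wise divide by q; a minimal sketch (names are illustrative):

```python
def project(s, t, r, q, d_ref=None):
    """Divide the (s, t, r) coordinates, and D_ref if present, by q,
    as done for the Proj image instructions."""
    out = (s / q, t / q, r / q)
    if d_ref is not None:
        return out + (d_ref / q,)
    return out
```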

Derivative Image Operations

Derivatives are used for LOD selection. These derivatives are either implicit (in an ImplicitLod image instruction in a mesh, task, compute, or fragment shader) or explicit (provided explicitly by shader to the image instruction in any shader).

For implicit derivatives image instructions, the derivatives of texel coordinates are calculated in the same manner as derivative operations. That is:

$$
\begin{aligned}
\partial{s}/\partial{x} & = dPdx(s), & \partial{s}/\partial{y} & = dPdy(s), & \text{for 1D, 2D, Cube, or 3D image} \\
\partial{t}/\partial{x} & = dPdx(t), & \partial{t}/\partial{y} & = dPdy(t), & \text{for 2D, Cube, or 3D image} \\
\partial{r}/\partial{x} & = dPdx(r), & \partial{r}/\partial{y} & = dPdy(r), & \text{for Cube or 3D image}
\end{aligned}
$$

Partial derivatives not defined above for certain image dimensionalities are set to zero.

For explicit LOD image instructions, if the optional SPIR-V operand Grad is provided, then the operand values are used for the derivatives. The number of components present in each derivative for a given image dimensionality matches the number of partial derivatives computed above.

If the optional SPIR-V operand Lod is provided, then derivatives are set to zero, the cube map derivative transformation is skipped, and the scale factor operation is skipped. Instead, the floating-point scalar coordinate is directly assigned to λbase as described in LOD Operation.

If the image or sampler object used by an implicit derivative image instruction is not uniform across the quad and quadDivergentImplicitLod is not supported, then the derivative and LOD values are undefined. Implicit derivatives are well-defined when the image, sampler, and control flow are uniform across the quad, even if they diverge between different quads.

If quadDivergentImplicitLod is supported, then derivatives and implicit LOD values are well-defined even if the image or sampler object are not uniform within a quad. The derivatives are computed as specified above, and the implicit LOD calculation proceeds for each shader invocation using its respective image and sampler object.

Cube Map Face Selection and Transformations

For cube map image instructions, the (s,t,r) coordinates are treated as a direction vector (rx,ry,rz). The direction vector is used to select a cube map face and is transformed to a per-face texel coordinate system (sface,tface); it is also used to transform the derivatives to per-face derivatives.

Cube Map Face Selection

The direction vector selects one of the cube map’s faces based on the largest magnitude coordinate direction (the major axis direction). Since two or more coordinates can have identical magnitude, the implementation must have rules to disambiguate this situation.

The rules should have as the first rule that rz wins over ry and rx, and the second rule that ry wins over rx. An implementation may choose other rules, but the rules must be deterministic and depend only on (rx,ry,rz).

The layer number (corresponding to a cube map face), the coordinate selections for sc, tc, rc, and the selection of derivatives, are determined by the major axis direction as specified in the following two tables.

Table 27. Cube Map Face and Coordinate Selection

| Major Axis Direction | Layer Number | Cube Map Face | sc | tc | rc |
|---|---|---|---|---|---|
| +rx | 0 | Positive X | -rz | -ry | rx |
| -rx | 1 | Negative X | +rz | -ry | rx |
| +ry | 2 | Positive Y | +rx | +rz | ry |
| -ry | 3 | Negative Y | +rx | -rz | ry |
| +rz | 4 | Positive Z | +rx | -ry | rz |
| -rz | 5 | Negative Z | -rx | -ry | rz |
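The face selection of Table 27, using the suggested tie-breaking rules (rz wins over ry and rx, ry wins over rx), can be sketched as:

```python
def cube_face_select(rx, ry, rz):
    """Select the cube map layer and (sc, tc, rc) per Table 27.

    Returns (layer, sc, tc, rc). Ties in magnitude are broken in
    favor of rz, then ry, then rx.
    """
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if az >= ax and az >= ay:  # rz is the major axis
        return (4, +rx, -ry, rz) if rz >= 0 else (5, -rx, -ry, rz)
    if ay >= ax:               # ry is the major axis
        return (2, +rx, +rz, ry) if ry >= 0 else (3, +rx, -rz, ry)
    return (0, -rz, -ry, rx) if rx >= 0 else (1, +rz, -ry, rx)
```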

Table 28. Cube Map Derivative Selection

| Major Axis Direction | ∂sc/∂x | ∂sc/∂y | ∂tc/∂x | ∂tc/∂y | ∂rc/∂x | ∂rc/∂y |
|---|---|---|---|---|---|---|
| +rx | -∂rz/∂x | -∂rz/∂y | -∂ry/∂x | -∂ry/∂y | +∂rx/∂x | +∂rx/∂y |
| -rx | +∂rz/∂x | +∂rz/∂y | -∂ry/∂x | -∂ry/∂y | -∂rx/∂x | -∂rx/∂y |
| +ry | +∂rx/∂x | +∂rx/∂y | +∂rz/∂x | +∂rz/∂y | +∂ry/∂x | +∂ry/∂y |
| -ry | +∂rx/∂x | +∂rx/∂y | -∂rz/∂x | -∂rz/∂y | -∂ry/∂x | -∂ry/∂y |
| +rz | +∂rx/∂x | +∂rx/∂y | -∂ry/∂x | -∂ry/∂y | +∂rz/∂x | +∂rz/∂y |
| -rz | -∂rx/∂x | -∂rx/∂y | -∂ry/∂x | -∂ry/∂y | -∂rz/∂x | -∂rz/∂y |

Cube Map Coordinate Transformation

$$
\begin{aligned}
s_{face} & = \frac{1}{2} \times \frac{s_c}{|r_c|} + \frac{1}{2} \\
t_{face} & = \frac{1}{2} \times \frac{t_c}{|r_c|} + \frac{1}{2}
\end{aligned}
$$
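The transformation above maps per-face (sc, tc) into [0, 1] face coordinates; a minimal sketch:

```python
def face_coords(sc, tc, rc):
    """Map per-face (sc, tc) to [0, 1] face coordinates,
    dividing by |rc| and remapping [-1, 1] to [0, 1]."""
    return (0.5 * sc / abs(rc) + 0.5,
            0.5 * tc / abs(rc) + 0.5)
```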

Cube Map Derivative Transformation

The partial derivatives of the Cube Map Coordinate Transformations can be computed as:

$$
\begin{aligned}
\frac{\partial{s_{face}}}{\partial{x}}
&= \frac{\partial}{\partial{x}} \left( \frac{1}{2} \times \frac{s_c}{|r_c|} + \frac{1}{2} \right) \\
&= \frac{1}{2} \times \frac{\partial}{\partial{x}} \left( \frac{s_c}{|r_c|} \right) \\
&= \frac{1}{2} \times \left( \frac{|r_c| \times \partial{s_c}/\partial{x} - s_c \times \partial{r_c}/\partial{x}}{(r_c)^2} \right)
\end{aligned}
$$

The other derivatives are simplified similarly, resulting in

$$
\begin{aligned}
\frac{\partial{s_{face}}}{\partial{y}} &= \frac{1}{2} \times \left( \frac{|r_c| \times \partial{s_c}/\partial{y} - s_c \times \partial{r_c}/\partial{y}}{(r_c)^2} \right) \\
\frac{\partial{t_{face}}}{\partial{x}} &= \frac{1}{2} \times \left( \frac{|r_c| \times \partial{t_c}/\partial{x} - t_c \times \partial{r_c}/\partial{x}}{(r_c)^2} \right) \\
\frac{\partial{t_{face}}}{\partial{y}} &= \frac{1}{2} \times \left( \frac{|r_c| \times \partial{t_c}/\partial{y} - t_c \times \partial{r_c}/\partial{y}}{(r_c)^2} \right)
\end{aligned}
$$

Scale Factor Operation, LOD Operation and Image Level(s) Selection

LOD selection can be either explicit (provided explicitly by the image instruction) or implicit (determined from a scale factor calculated from the derivatives). The LOD must be computed with mipmapPrecisionBits of accuracy.

Scale Factor Operation

The magnitude of the derivatives are calculated by:

  • mux = |∂s/∂x| × wbase
  • mvx = |∂t/∂x| × hbase
  • mwx = |∂r/∂x| × dbase
  • muy = |∂s/∂y| × wbase
  • mvy = |∂t/∂y| × hbase
  • mwy = |∂r/∂y| × dbase

where:

  • ∂t/∂x = ∂t/∂y = 0 (for 1D images)
  • ∂r/∂x = ∂r/∂y = 0 (for 1D, 2D or Cube images)

and:

  • wbase = image.w
  • hbase = image.h
  • dbase = image.d

(for the baseMipLevel, from the image descriptor).

For corner-sampled images, the wbase, hbase, and dbase are instead:

  • wbase = image.w - 1
  • hbase = image.h - 1
  • dbase = image.d - 1

A point sampled in screen space has an elliptical footprint in texture space. The minimum and maximum scale factors (ρmin, ρmax) should be the minor and major axes of this ellipse.

The scale factors ρx and ρy, calculated from the magnitude of the derivatives in x and y, are used to compute the minimum and maximum scale factors.

ρx and ρy may be approximated with functions fx and fy, subject to the following constraints:

$$
\begin{aligned}
& f_x \text{ is continuous and monotonically increasing in each of } m_{ux}, m_{vx}, \text{ and } m_{wx} \\
& f_y \text{ is continuous and monotonically increasing in each of } m_{uy}, m_{vy}, \text{ and } m_{wy}
\end{aligned}
$$

$$
\begin{aligned}
\max(|m_{ux}|, |m_{vx}|, |m_{wx}|) \leq f_x \leq \sqrt{2}\,(|m_{ux}| + |m_{vx}| + |m_{wx}|) \\
\max(|m_{uy}|, |m_{vy}|, |m_{wy}|) \leq f_y \leq \sqrt{2}\,(|m_{uy}| + |m_{vy}| + |m_{wy}|)
\end{aligned}
$$

The minimum and maximum scale factors (ρmin, ρmax) are determined by:

  • ρmax = max(ρx, ρy)
  • ρmin = min(ρx, ρy)

The ratio of anisotropy is determined by:

  • η = min(ρmax/ρmin, maxAniso)

where:

  • sampler.maxAniso = maxAnisotropy (from sampler descriptor)
  • limits.maxAniso = maxSamplerAnisotropy (from physical device limits)
  • maxAniso = min(sampler.maxAniso, limits.maxAniso)

If ρmax = ρmin = 0, then all the partial derivatives are zero, the fragment’s footprint in texel space is a point, and η should be treated as 1. If ρmax ≠ 0 and ρmin = 0 then all partial derivatives along one axis are zero, the fragment’s footprint in texel space is a line segment, and η should be treated as maxAniso. However, anytime the footprint is small in texel space the implementation may use a smaller value of η, even when ρmin is zero or close to zero. If either VkPhysicalDeviceFeatures::samplerAnisotropy or VkSamplerCreateInfo::anisotropyEnable are VK_FALSE, maxAniso is set to 1.

If η = 1, sampling is isotropic. If η > 1, sampling is anisotropic.

The sampling rate (N) is derived as:

  • N = ⌈η⌉

An implementation may round N up to the nearest supported sampling rate. An implementation may use the value of N as an approximation of η.
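The anisotropy calculation above, including the point and line-footprint special cases, can be sketched as follows (names are illustrative; the caller supplies ρx, ρy, and the clamped maxAniso):

```python
import math

def anisotropy(rho_x, rho_y, max_aniso):
    """Scale factors -> anisotropy ratio and sampling rate (eta, N)."""
    rho_max, rho_min = max(rho_x, rho_y), min(rho_x, rho_y)
    if rho_max == 0.0:
        return 1.0, 1          # point footprint: treat eta as 1
    if rho_min == 0.0:
        eta = max_aniso        # line footprint: treat eta as maxAniso
    else:
        eta = min(rho_max / rho_min, max_aniso)
    return eta, math.ceil(eta)  # N = ceil(eta)
```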

LOD Operation

The LOD parameter λ is computed as follows:

$$
\begin{aligned}
\lambda_{base}(x,y) & = \begin{cases}
shaderOp.Lod & \text{(from optional SPIR-V operand)} \\
\log_2 \left( \frac{\rho_{max}}{\eta} \right) & \text{otherwise}
\end{cases} \\
\lambda'(x,y) & = \lambda_{base} + \mathrm{clamp}(sampler.bias + shaderOp.bias, -maxSamplerLodBias, maxSamplerLodBias) \\
\lambda & = \begin{cases}
lod_{max}, & \lambda' > lod_{max} \\
\lambda', & lod_{min} \leq \lambda' \leq lod_{max} \\
lod_{min}, & \lambda' < lod_{min} \\
\textit{undefined}, & lod_{min} > lod_{max}
\end{cases}
\end{aligned}
$$

where:

$$
\begin{aligned}
sampler.bias & = mipLodBias & \text{(from sampler descriptor)} \\
shaderOp.bias & = \begin{cases}
Bias & \text{(from optional SPIR-V operand)} \\
0 & \text{otherwise}
\end{cases} \\
sampler.lod_{min} & = minLod & \text{(from sampler descriptor)} \\
shaderOp.lod_{min} & = \begin{cases}
MinLod & \text{(from optional SPIR-V operand)} \\
0 & \text{otherwise}
\end{cases} \\
\\
lod_{min} & = \max(sampler.lod_{min}, shaderOp.lod_{min}) \\
lod_{max} & = maxLod & \text{(from sampler descriptor)}
\end{aligned}
$$

and maxSamplerLodBias is the value of the VkPhysicalDeviceLimits::maxSamplerLodBias limit.
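The LOD computation above can be sketched in Python. This is an illustrative helper (hypothetical name, not spec API); `shader_lod` is `None` when the optional SPIR-V Lod operand is absent:

```python
import math

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def compute_lambda(rho_max, eta, shader_lod, sampler_bias, shader_bias,
                   lod_min, lod_max, max_sampler_lod_bias):
    # Sketch of the lambda computation; undefined when lod_min > lod_max.
    if shader_lod is not None:
        lam_base = shader_lod              # explicit Lod operand
    else:
        lam_base = math.log2(rho_max / eta)
    # Bias is clamped to +/- maxSamplerLodBias before being applied.
    lam = lam_base + clamp(sampler_bias + shader_bias,
                           -max_sampler_lod_bias, max_sampler_lod_bias)
    assert lod_min <= lod_max, "result is undefined when lod_min > lod_max"
    return clamp(lam, lod_min, lod_max)
```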

Image Level(s) Selection

The image level(s) d, dhi, and dlo from which texels are read are determined by an image-level parameter dl, which is computed from the LOD parameter as follows:

\begin{aligned}
d_l = \begin{cases}
nearest(d'), & \text{mipmapMode is VK\_SAMPLER\_MIPMAP\_MODE\_NEAREST} \\
d', & \text{otherwise}
\end{cases}
\end{aligned}

where:

\begin{aligned}
d' & = \max(level_{base} + \operatorname{clamp}(\lambda, 0, q), minLod_{imageView}) \\
nearest(d') & = \begin{cases}
\left\lceil d' + 0.5 \right\rceil - 1, & \text{preferred} \\
\left\lfloor d' + 0.5 \right\rfloor, & \text{alternative}
\end{cases}
\end{aligned}

and:

\begin{aligned}
minLod_{imageView} & = \begin{cases}
minLodFloat_{imageView}, & \text{preferred} \\
minLodInteger_{imageView}, & \text{alternative}
\end{cases} \\
level_{base} & = baseMipLevel \\
q & = levelCount - 1
\end{aligned}

baseMipLevel and levelCount are taken from the subresourceRange of the image view.

minLodimageView must be less than or equal to levelbase + q.

If the sampler’s mipmapMode is VK_SAMPLER_MIPMAP_MODE_NEAREST, then the level selected is d = dl.

If the sampler’s mipmapMode is VK_SAMPLER_MIPMAP_MODE_LINEAR, two neighboring levels are selected:

\begin{aligned}
d_{hi} & = \left\lfloor d_l \right\rfloor \\
d_{lo} & = \min(d_{hi} + 1, level_{base} + q) \\
\delta & = d_l - d_{hi}
\end{aligned}

δ is the fractional value, quantized to the number of mipmap precision bits, used for linear filtering between levels.
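Level selection for both mipmap modes can be sketched as follows (hypothetical helper; the NEAREST case uses the "preferred" rounding rule from above):

```python
import math

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

def select_levels(lam, level_base, q, min_lod_image_view, mipmap_linear):
    # Sketch of image level selection from the LOD parameter lambda.
    d_prime = max(level_base + clamp(lam, 0, q), min_lod_image_view)
    if not mipmap_linear:
        # VK_SAMPLER_MIPMAP_MODE_NEAREST, preferred rounding:
        # nearest(d') = ceil(d' + 0.5) - 1
        return (math.ceil(d_prime + 0.5) - 1,)
    # VK_SAMPLER_MIPMAP_MODE_LINEAR selects two neighboring levels.
    d_hi = math.floor(d_prime)
    d_lo = min(d_hi + 1, level_base + q)
    delta = d_prime - d_hi          # fraction used to blend the levels
    return d_hi, d_lo, delta
```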

(s,t,r,q,a) to (u,v,w,a) Transformation

The normalized texel coordinates are scaled by the image level dimensions and the array layer is selected.

This transformation is performed once for each level used in filtering (either d, or dhi and dlo).

\begin{aligned}
u(x,y) & = s(x,y) \times width_{scale} + \Delta_i \\
v(x,y) & = \begin{cases}
0 & \text{for 1D images} \\
t(x,y) \times height_{scale} + \Delta_j & \text{otherwise}
\end{cases} \\
w(x,y) & = \begin{cases}
0 & \text{for 2D or Cube images} \\
r(x,y) \times depth_{scale} + \Delta_k & \text{otherwise}
\end{cases} \\
a(x,y) & = \begin{cases}
a(x,y) & \text{for array images} \\
0 & \text{otherwise}
\end{cases}
\end{aligned}

where:

  • widthscale = widthlevel
  • heightscale = heightlevel
  • depthscale = depthlevel

for conventional images, and:

  • widthscale = widthlevel - 1
  • heightscale = heightlevel - 1
  • depthscale = depthlevel - 1

for corner-sampled images.

and where (Δi, Δj, Δk) are taken from the image instruction if it includes a ConstOffset or Offset operand, otherwise they are taken to be zero.
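For a 3D array image, the scale-and-offset step can be sketched as (hypothetical helper; `offsets` stands in for the ConstOffset/Offset operand, defaulting to zero):

```python
def stra_to_uvwa(s, t, r, a, level_w, level_h, level_d,
                 corner_sampled=False, offsets=(0, 0, 0), arrayed=True):
    # Sketch: scale normalized coordinates by the level dimensions
    # (dimensions minus one for corner-sampled images) and add the
    # per-axis deltas from ConstOffset/Offset.
    adj = 1 if corner_sampled else 0
    u = s * (level_w - adj) + offsets[0]
    v = t * (level_h - adj) + offsets[1]
    w = r * (level_d - adj) + offsets[2]
    return u, v, w, (a if arrayed else 0)
```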

Operations then proceed to Unnormalized Texel Coordinate Operations.

Unnormalized Texel Coordinate Operations

(u,v,w,a) to (i,j,k,l,n) Transformation and Array Layer Selection

The unnormalized texel coordinates are transformed to integer texel coordinates relative to the selected mipmap level.

The layer index l is computed as:

  • l = clamp(RNE(a), 0, layerCount - 1) + baseArrayLayer

where layerCount is the number of layers in the image subresource range of the image view, baseArrayLayer is the first layer from the subresource range, and where:

\begin{aligned}
RNE(a) & = \begin{cases}
roundTiesToEven(a) & \text{preferred, from IEEE Std 754-2008 Floating-Point Arithmetic} \\
\left\lfloor a + 0.5 \right\rfloor & \text{alternative}
\end{cases}
\end{aligned}

The sample index n is assigned the value 0.
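The layer selection can be sketched directly; note that Python's built-in `round()` already implements round-half-to-even, matching the preferred RNE behavior (hypothetical helper name):

```python
def layer_index(a, layer_count, base_array_layer):
    # Sketch: l = clamp(RNE(a), 0, layerCount - 1) + baseArrayLayer,
    # using the preferred roundTiesToEven rounding.
    rne = round(a)  # Python round() is round-half-to-even
    return min(max(rne, 0), layer_count - 1) + base_array_layer
```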

Nearest filtering (VK_FILTER_NEAREST) computes the integer texel coordinates that the unnormalized coordinates lie within:

\begin{aligned}
i & = \left\lfloor u + shift \right\rfloor \\
j & = \left\lfloor v + shift \right\rfloor \\
k & = \left\lfloor w + shift \right\rfloor
\end{aligned}

where:

  • shift = 0.0

for conventional images, and:

  • shift = 0.5

for corner-sampled images.

Linear filtering (VK_FILTER_LINEAR) computes a set of neighboring coordinates which bound the unnormalized coordinates. The integer texel coordinates are combinations of i0 or i1, j0 or j1, k0 or k1, as well as weights α, β, and γ.

\begin{aligned}
i_0 & = \left\lfloor u - shift \right\rfloor & i_1 & = i_0 + 1 \\
j_0 & = \left\lfloor v - shift \right\rfloor & j_1 & = j_0 + 1 \\
k_0 & = \left\lfloor w - shift \right\rfloor & k_1 & = k_0 + 1
\end{aligned}

\begin{aligned}
\alpha & = \operatorname{frac}(u - shift) \\
\beta & = \operatorname{frac}(v - shift) \\
\gamma & = \operatorname{frac}(w - shift)
\end{aligned}

where:

  • shift = 0.5

for conventional images, and:

  • shift = 0.0

for corner-sampled images, and where:

\begin{aligned}
\operatorname{frac}(x) = x - \left\lfloor x \right\rfloor
\end{aligned}

where the number of fraction bits retained is specified by VkPhysicalDeviceLimits::subTexelPrecisionBits.
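The linear-filtering coordinate and weight selection can be sketched as (hypothetical helper; sub-texel weight quantization is omitted for clarity):

```python
import math

def linear_coords(u, v, w, corner_sampled=False):
    # Sketch of VK_FILTER_LINEAR coordinate/weight selection.
    # shift is 0.5 for conventional images, 0.0 for corner-sampled.
    shift = 0.0 if corner_sampled else 0.5
    frac = lambda x: x - math.floor(x)
    i0 = math.floor(u - shift)
    j0 = math.floor(v - shift)
    k0 = math.floor(w - shift)
    alpha, beta, gamma = frac(u - shift), frac(v - shift), frac(w - shift)
    return (i0, i0 + 1, j0, j0 + 1, k0, k0 + 1), (alpha, beta, gamma)
```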

Cubic filtering (VK_FILTER_CUBIC_EXT) computes a set of neighboring coordinates which bound the unnormalized coordinates. The integer texel coordinates are combinations of i0, i1, i2 or i3, j0, j1, j2 or j3, k0, k1, k2 or k3, as well as weights α, β, and γ.

\begin{aligned}
i_0 & = \left\lfloor u - \tfrac{3}{2} \right\rfloor & i_1 & = i_0 + 1 & i_2 & = i_1 + 1 & i_3 & = i_2 + 1 \\
j_0 & = \left\lfloor v - \tfrac{3}{2} \right\rfloor & j_1 & = j_0 + 1 & j_2 & = j_1 + 1 & j_3 & = j_2 + 1 \\
k_0 & = \left\lfloor w - \tfrac{3}{2} \right\rfloor & k_1 & = k_0 + 1 & k_2 & = k_1 + 1 & k_3 & = k_2 + 1
\end{aligned}

\begin{aligned}
\alpha & = \operatorname{frac}\left(u - \tfrac{1}{2}\right) \\
\beta & = \operatorname{frac}\left(v - \tfrac{1}{2}\right) \\
\gamma & = \operatorname{frac}\left(w - \tfrac{1}{2}\right)
\end{aligned}

where:

\begin{aligned}
\operatorname{frac}(x) = x - \left\lfloor x \right\rfloor
\end{aligned}

where the number of fraction bits retained is specified by VkPhysicalDeviceLimits::subTexelPrecisionBits.

Integer Texel Coordinate Operations

Integer texel coordinate operations may supply the LOD from which texels are read or to which they are written using the optional SPIR-V operand Lod. If Lod is provided, it must be an integer.

The image level selected is:

\begin{aligned}
d = level_{base} + \begin{cases}
Lod & \text{(from optional SPIR-V operand)} \\
0 & \text{otherwise}
\end{cases}
\end{aligned}

If d does not lie in the range [baseMipLevel, baseMipLevel + levelCount) or d is less than minLodIntegerimageView, then any values fetched are zero if the robustImageAccess2 feature is enabled and are undefined otherwise, and any writes (if supported) are discarded.
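The range check above can be sketched as (hypothetical helper; the returned status string is only illustrative):

```python
def integer_fetch_level(base_mip, level_count, lod, min_lod_integer,
                        robust_image_access2):
    # Sketch: level selection for integer texel coordinate operations.
    # level_base is baseMipLevel; lod is the optional SPIR-V Lod operand
    # (None when absent).
    d = base_mip + (lod if lod is not None else 0)
    in_range = (base_mip <= d < base_mip + level_count
                and d >= min_lod_integer)
    if in_range:
        return d, "valid"
    # Out-of-range fetches return zero with robustImageAccess2 enabled,
    # otherwise values are undefined; writes are discarded.
    return d, ("zero" if robust_image_access2 else "undefined")
```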

Image Sample Operations

Wrapping Operation

If the used sampler was created without VK_SAMPLER_CREATE_NON_SEAMLESS_CUBE_MAP_BIT_EXT, Cube images ignore the wrap modes specified in the sampler. Instead, if VK_FILTER_NEAREST is used within a mip level then VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE is used, and if VK_FILTER_LINEAR is used within a mip level then sampling at the edges is performed as described earlier in the Cube map edge handling section.

The first integer texel coordinate i is transformed based on the addressModeU parameter of the sampler.

\begin{aligned}
i = \begin{cases}
i \bmod size & \text{for repeat} \\
(size - 1) - mirror((i \bmod (2 \times size)) - size) & \text{for mirrored repeat} \\
\operatorname{clamp}(i, 0, size - 1) & \text{for clamp to edge} \\
\operatorname{clamp}(i, -1, size) & \text{for clamp to border} \\
\operatorname{clamp}(mirror(i), 0, size - 1) & \text{for mirror clamp to edge}
\end{cases}
\end{aligned}

where:

\begin{aligned}
mirror(n) = \begin{cases}
n & \text{for } n \geq 0 \\
-(1 + n) & \text{otherwise}
\end{cases}
\end{aligned}

j (for 2D and Cube images) and k (for 3D images) are similarly transformed based on the addressModeV and addressModeW parameters of the sampler, respectively.
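The wrapping cases above translate directly into a small Python sketch (hypothetical helper; Python's `%` operator already yields a non-negative result for negative operands, matching the intended modulo semantics):

```python
def wrap(i, size, mode):
    # Sketch of the address-mode wrap for one integer texel coordinate.
    def mirror(n):
        return n if n >= 0 else -(1 + n)
    def clamp(x, lo, hi):
        return max(lo, min(x, hi))
    if mode == "repeat":
        return i % size
    if mode == "mirrored_repeat":
        return (size - 1) - mirror((i % (2 * size)) - size)
    if mode == "clamp_to_edge":
        return clamp(i, 0, size - 1)
    if mode == "clamp_to_border":
        return clamp(i, -1, size)   # -1 and size select the border color
    if mode == "mirror_clamp_to_edge":
        return clamp(mirror(i), 0, size - 1)
    raise ValueError(mode)
```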

Texel Gathering

SPIR-V instructions with Gather in the name return a vector derived from 4 texels in the base level of the image view. The rules for the VK_FILTER_LINEAR minification filter are applied to identify the four selected texels. Each texel is then converted to an RGBA value according to conversion to RGBA and then swizzled. A four-component vector is then assembled by taking the component indicated by the Component value in the instruction from the swizzled color value of the four texels. If the operation does not use the ConstOffsets image operand then the four texels form the 2 × 2 rectangle used for texture filtering:

\begin{aligned}
\tau[R] & = \tau_{i0j1}[level_{base}][comp] \\
\tau[G] & = \tau_{i1j1}[level_{base}][comp] \\
\tau[B] & = \tau_{i1j0}[level_{base}][comp] \\
\tau[A] & = \tau_{i0j0}[level_{base}][comp]
\end{aligned}

If the operation does use the ConstOffsets image operand then the offsets allow a custom filter to be defined:

\begin{aligned}
\tau[R] & = \tau_{i0j0 + \Delta_0}[level_{base}][comp] \\
\tau[G] & = \tau_{i0j0 + \Delta_1}[level_{base}][comp] \\
\tau[B] & = \tau_{i0j0 + \Delta_2}[level_{base}][comp] \\
\tau[A] & = \tau_{i0j0 + \Delta_3}[level_{base}][comp]
\end{aligned}

where:

\begin{aligned}
\tau[level_{base}][comp] & = \begin{cases}
\tau[level_{base}][R], & \text{for } comp = 0 \\
\tau[level_{base}][G], & \text{for } comp = 1 \\
\tau[level_{base}][B], & \text{for } comp = 2 \\
\tau[level_{base}][A], & \text{for } comp = 3
\end{cases} \\
comp & \,\text{from the SPIR-V operand Component}
\end{aligned}

OpImage*Gather must not be used on a sampled image with sampler Y′CBCR conversion enabled.
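The gather assembly without ConstOffsets can be sketched as (hypothetical helper; `texels[(i, j)]` holds the already-swizzled RGBA tuple at that integer coordinate, and the 2x2 footprint is taken from linear filtering):

```python
def gather(texels, comp):
    # Sketch of OpImage*Gather result assembly without ConstOffsets.
    # The R/G/B/A result components come from the (i0,j1), (i1,j1),
    # (i1,j0), (i0,j0) texels respectively; comp selects the component.
    i0, i1, j0, j1 = 0, 1, 0, 1   # the 2x2 footprint, relative coordinates
    return (texels[(i0, j1)][comp],   # result R
            texels[(i1, j1)][comp],   # result G
            texels[(i1, j0)][comp],   # result B
            texels[(i0, j0)][comp])   # result A
```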

If levelbase < minLodIntegerimageView, then any values fetched are zero if robustImageAccess2 is enabled; otherwise they are undefined.

Texel Filtering

Texel filtering is first performed for each level (either d or dhi and dlo).

If λ is less than or equal to zero, the texture is said to be magnified, and the filter mode within a mip level is selected by the magFilter in the sampler. If λ is greater than zero, the texture is said to be minified, and the filter mode within a mip level is selected by the minFilter in the sampler.

Texel Nearest Filtering

Within a mip level, VK_FILTER_NEAREST filtering selects a single value using the (i, j, k) texel coordinates, with all texels taken from layer l.

\begin{aligned}
\tau[level] = \begin{cases}
\tau_{ijk}[level], & \text{for 3D images} \\
\tau_{ij}[level], & \text{for 2D or Cube images} \\
\tau_i[level], & \text{for 1D images}
\end{cases}
\end{aligned}

Texel Linear Filtering

Within a mip level, VK_FILTER_LINEAR filtering combines 8 (for 3D), 4 (for 2D or Cube), or 2 (for 1D) texel values, together with their linear weights. The linear weights are derived from the fractions computed earlier:

\begin{aligned}
w_{i_0} & = (1 - \alpha) & w_{i_1} & = \alpha \\
w_{j_0} & = (1 - \beta) & w_{j_1} & = \beta \\
w_{k_0} & = (1 - \gamma) & w_{k_1} & = \gamma
\end{aligned}

The values of multiple texels, together with their weights, are combined to produce a filtered value.

The VkSamplerReductionModeCreateInfo::reductionMode can control the process by which multiple texels, together with their weights, are combined to produce a filtered texture value.

When the reductionMode is set (explicitly or implicitly) to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, a weighted average is computed:

\begin{aligned}
\tau_{3D} & = \sum_{k=k_0}^{k_1} \sum_{j=j_0}^{j_1} \sum_{i=i_0}^{i_1} (w_i)(w_j)(w_k)\tau_{ijk} \\
\tau_{2D} & = \sum_{j=j_0}^{j_1} \sum_{i=i_0}^{i_1} (w_i)(w_j)\tau_{ij} \\
\tau_{1D} & = \sum_{i=i_0}^{i_1} (w_i)\tau_i
\end{aligned}

However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above set of multiple texels, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the set of texels with non-zero weights.
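The 2D case, including the MIN/MAX reduction modes, can be sketched as (hypothetical helper; `texels[j][i]` holds the value at (i0+i, j0+j)):

```python
def filter_linear_2d(texels, alpha, beta, mode="weighted_average"):
    # Sketch of 2D VK_FILTER_LINEAR with a reduction mode.
    weights = [[(1 - alpha) * (1 - beta), alpha * (1 - beta)],
               [(1 - alpha) * beta,       alpha * beta]]
    pairs = [(weights[j][i], texels[j][i]) for j in (0, 1) for i in (0, 1)]
    if mode == "weighted_average":
        return sum(w * t for w, t in pairs)
    # MIN/MAX operate only on texels with non-zero weights.
    nz = [t for w, t in pairs if w != 0]
    return min(nz) if mode == "min" else max(nz)
```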

Texel Cubic Filtering

Within a mip level, VK_FILTER_CUBIC_EXT filtering computes a weighted average of 64 (for 3D), 16 (for 2D), or 4 (for 1D) texel values, together with their Catmull-Rom, Zero Tangent Cardinal, B-Spline, or Mitchell-Netravali weights as specified by VkSamplerCubicWeightsCreateInfoQCOM.

Catmull-Rom weights specified by VK_CUBIC_FILTER_WEIGHTS_CATMULL_ROM_QCOM are derived from the fractions computed earlier.

\begin{aligned}
\begin{bmatrix} w_{i_0} & w_{i_1} & w_{i_2} & w_{i_3} \end{bmatrix}
& = \frac{1}{2}
\begin{bmatrix} 1 & \alpha & \alpha^2 & \alpha^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
2 & -5 & 4 & -1 \\
-1 & 3 & -3 & 1
\end{bmatrix} \\
\begin{bmatrix} w_{j_0} & w_{j_1} & w_{j_2} & w_{j_3} \end{bmatrix}
& = \frac{1}{2}
\begin{bmatrix} 1 & \beta & \beta^2 & \beta^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
2 & -5 & 4 & -1 \\
-1 & 3 & -3 & 1
\end{bmatrix} \\
\begin{bmatrix} w_{k_0} & w_{k_1} & w_{k_2} & w_{k_3} \end{bmatrix}
& = \frac{1}{2}
\begin{bmatrix} 1 & \gamma & \gamma^2 & \gamma^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-1 & 0 & 1 & 0 \\
2 & -5 & 4 & -1 \\
-1 & 3 & -3 & 1
\end{bmatrix}
\end{aligned}
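The Catmull-Rom weight computation above is a row-vector-times-matrix product, sketched here in Python (hypothetical helper name):

```python
def catmull_rom_weights(a):
    # Sketch: evaluate (1/2) * [1 a a^2 a^3] * M for the Catmull-Rom
    # matrix; returns [w0, w1, w2, w3].
    m = [[0, 2, 0, 0],
         [-1, 0, 1, 0],
         [2, -5, 4, -1],
         [-1, 3, -3, 1]]
    powers = [1, a, a * a, a ** 3]
    return [sum(powers[r] * m[r][c] for r in range(4)) / 2
            for c in range(4)]
```

At the endpoints the filter interpolates: a = 0 yields weight 1 on the second texel and a = 1 yields weight 1 on the third, and the weights always sum to 1.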

Zero Tangent Cardinal weights specified by VK_CUBIC_FILTER_WEIGHTS_ZERO_TANGENT_CARDINAL_QCOM are derived from the fractions computed earlier.

\begin{aligned}
\begin{bmatrix} w_{i_0} & w_{i_1} & w_{i_2} & w_{i_3} \end{bmatrix}
& = \frac{1}{2}
\begin{bmatrix} 1 & \alpha & \alpha^2 & \alpha^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-2 & 0 & 2 & 0 \\
4 & -4 & 2 & -2 \\
-2 & 2 & -2 & 1
\end{bmatrix} \\
\begin{bmatrix} w_{j_0} & w_{j_1} & w_{j_2} & w_{j_3} \end{bmatrix}
& = \frac{1}{2}
\begin{bmatrix} 1 & \beta & \beta^2 & \beta^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-2 & 0 & 2 & 0 \\
4 & -4 & 2 & -2 \\
-2 & 2 & -2 & 1
\end{bmatrix} \\
\begin{bmatrix} w_{k_0} & w_{k_1} & w_{k_2} & w_{k_3} \end{bmatrix}
& = \frac{1}{2}
\begin{bmatrix} 1 & \gamma & \gamma^2 & \gamma^3 \end{bmatrix}
\begin{bmatrix}
0 & 2 & 0 & 0 \\
-2 & 0 & 2 & 0 \\
4 & -4 & 2 & -2 \\
-2 & 2 & -2 & 1
\end{bmatrix}
\end{aligned}

B-Spline weights specified by VK_CUBIC_FILTER_WEIGHTS_B_SPLINE_QCOM are derived from the fractions computed earlier.

\begin{aligned}
\begin{bmatrix} w_{i_0} & w_{i_1} & w_{i_2} & w_{i_3} \end{bmatrix}
& = \frac{1}{6}
\begin{bmatrix} 1 & \alpha & \alpha^2 & \alpha^3 \end{bmatrix}
\begin{bmatrix}
1 & 4 & 1 & 0 \\
-3 & 0 & 3 & 0 \\
3 & -6 & 3 & 0 \\
-1 & 3 & -3 & 1
\end{bmatrix} \\
\begin{bmatrix} w_{j_0} & w_{j_1} & w_{j_2} & w_{j_3} \end{bmatrix}
& = \frac{1}{6}
\begin{bmatrix} 1 & \beta & \beta^2 & \beta^3 \end{bmatrix}
\begin{bmatrix}
1 & 4 & 1 & 0 \\
-3 & 0 & 3 & 0 \\
3 & -6 & 3 & 0 \\
-1 & 3 & -3 & 1
\end{bmatrix} \\
\begin{bmatrix} w_{k_0} & w_{k_1} & w_{k_2} & w_{k_3} \end{bmatrix}
& = \frac{1}{6}
\begin{bmatrix} 1 & \gamma & \gamma^2 & \gamma^3 \end{bmatrix}
\begin{bmatrix}
1 & 4 & 1 & 0 \\
-3 & 0 & 3 & 0 \\
3 & -6 & 3 & 0 \\
-1 & 3 & -3 & 1
\end{bmatrix}
\end{aligned}

Mitchell-Netravali weights specified by VK_CUBIC_FILTER_WEIGHTS_MITCHELL_NETRAVALI_QCOM are derived from the fractions computed earlier.

\begin{aligned}
\begin{bmatrix} w_{i_0} & w_{i_1} & w_{i_2} & w_{i_3} \end{bmatrix}
& = \frac{1}{18}
\begin{bmatrix} 1 & \alpha & \alpha^2 & \alpha^3 \end{bmatrix}
\begin{bmatrix}
1 & 16 & 1 & 0 \\
-9 & 0 & 9 & 0 \\
15 & -36 & 27 & -6 \\
-7 & 21 & -21 & 7
\end{bmatrix} \\
\begin{bmatrix} w_{j_0} & w_{j_1} & w_{j_2} & w_{j_3} \end{bmatrix}
& = \frac{1}{18}
\begin{bmatrix} 1 & \beta & \beta^2 & \beta^3 \end{bmatrix}
\begin{bmatrix}
1 & 16 & 1 & 0 \\
-9 & 0 & 9 & 0 \\
15 & -36 & 27 & -6 \\
-7 & 21 & -21 & 7
\end{bmatrix} \\
\begin{bmatrix} w_{k_0} & w_{k_1} & w_{k_2} & w_{k_3} \end{bmatrix}
& = \frac{1}{18}
\begin{bmatrix} 1 & \gamma & \gamma^2 & \gamma^3 \end{bmatrix}
\begin{bmatrix}
1 & 16 & 1 & 0 \\
-9 & 0 & 9 & 0 \\
15 & -36 & 27 & -6 \\
-7 & 21 & -21 & 7
\end{bmatrix}
\end{aligned}

The values of multiple texels, together with their weights, are combined to produce a filtered value.

The VkSamplerReductionModeCreateInfo::reductionMode can control the process by which multiple texels, together with their weights, are combined to produce a filtered texture value.

When the reductionMode is set (explicitly or implicitly) to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE or VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE_RANGECLAMP_QCOM, a weighted average is computed:

\begin{aligned}
\tau_{3D} & = \sum_{k=k_0}^{k_3} \sum_{j=j_0}^{j_3} \sum_{i=i_0}^{i_3} (w_i)(w_j)(w_k)\tau_{ijk} \\
\tau_{2D} & = \sum_{j=j_0}^{j_3} \sum_{i=i_0}^{i_3} (w_i)(w_j)\tau_{ij} \\
\tau_{1D} & = \sum_{i=i_0}^{i_3} (w_i)\tau_i
\end{aligned}

However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above set of multiple texels, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the set of texels with non-zero weights.

Texel Range Clamp

When reductionMode is VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE_RANGECLAMP_QCOM, the weighted average is clamped to be within the component-wise minimum and maximum of the set of texels with non-zero weights.

Texel Mipmap Filtering

VK_SAMPLER_MIPMAP_MODE_NEAREST filtering returns the value of a single mipmap level,

τ = τ[d].

VK_SAMPLER_MIPMAP_MODE_LINEAR filtering combines the values of multiple mipmap levels (τ[hi] and τ[lo]), together with their linear weights.

The linear weights are derived from the fraction computed earlier:

\begin{aligned}
w_{hi} & = (1 - \delta) \\
w_{lo} & = \delta
\end{aligned}

The values of multiple mipmap levels, together with their weights, are combined to produce a final filtered value.

The VkSamplerReductionModeCreateInfo::reductionMode can control the process by which multiple texels, together with their weights, are combined to produce a filtered texture value.

When the reductionMode is set (explicitly or implicitly) to VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, a weighted average is computed:

\begin{aligned}
\tau = (w_{hi})\tau[hi] + (w_{lo})\tau[lo]
\end{aligned}

However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above values, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the values with non-zero weights.
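The mipmap-level combination, including the MIN/MAX reduction modes, can be sketched as (hypothetical helper name):

```python
def mipmap_filter(tau_hi, tau_lo, delta, mode="weighted_average"):
    # Sketch of VK_SAMPLER_MIPMAP_MODE_LINEAR level combination.
    w_hi, w_lo = 1 - delta, delta
    if mode == "weighted_average":
        return w_hi * tau_hi + w_lo * tau_lo
    # MIN/MAX operate only on level values with non-zero weights.
    vals = [t for w, t in ((w_hi, tau_hi), (w_lo, tau_lo)) if w != 0]
    return min(vals) if mode == "min" else max(vals)
```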

Texel Anisotropic Filtering

Anisotropic filtering is enabled by the anisotropyEnable in the sampler. When enabled, the image filtering scheme accounts for a degree of anisotropy.

The particular scheme for anisotropic texture filtering is implementation-dependent. Implementations should consider the magFilter, minFilter and mipmapMode of the sampler to control the specifics of the anisotropic filtering scheme used. In addition, implementations should consider minLod and maxLod of the sampler.

For historical reasons, vendor implementations of anisotropic filtering interpret these sampler parameters in different ways, particularly in corner cases such as magFilter, minFilter of NEAREST or maxAnisotropy equal to 1.0. Applications should not expect consistent behavior in such cases, and should use anisotropic filtering only with parameters which are expected to give a quality improvement relative to LINEAR filtering.

The following describes one particular approach to implementing anisotropic filtering for the 2D Image case; implementations may choose other methods:

Given a magFilter, minFilter of VK_FILTER_LINEAR and a mipmapMode of VK_SAMPLER_MIPMAP_MODE_NEAREST:

Instead of a single isotropic sample, N isotropic samples are sampled within the image footprint of the image level d to approximate an anisotropic filter. The sum τ2Daniso is defined using the single isotropic τ2D(u,v) at level d.

\begin{aligned}
\tau_{2Daniso} & = \frac{1}{N} \sum_{i=1}^{N} \tau_{2D} \left( u \left( x - \frac{1}{2} + \frac{i}{N+1}, y \right), v \left( x - \frac{1}{2} + \frac{i}{N+1}, y \right) \right), & \text{when } \rho_x > \rho_y \\
\tau_{2Daniso} & = \frac{1}{N} \sum_{i=1}^{N} \tau_{2D} \left( u \left( x, y - \frac{1}{2} + \frac{i}{N+1} \right), v \left( x, y - \frac{1}{2} + \frac{i}{N+1} \right) \right), & \text{when } \rho_y \geq \rho_x
\end{aligned}

When VkSamplerReductionModeCreateInfo::reductionMode is VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, the above summation is used. However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above values, together with their weights, computing a component-wise minimum or maximum, respectively, of the components of the values with non-zero weights.
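The N-sample approximation above can be sketched as (hypothetical helper; `sample_2d(x, y)` stands in for the isotropic filtered lookup τ2D at that coordinate):

```python
def tau_2d_aniso(sample_2d, x, y, n, rho_x, rho_y):
    # Sketch of the N-tap anisotropic approximation: distribute N
    # isotropic samples along the major axis of the footprint.
    total = 0.0
    for i in range(1, n + 1):
        off = -0.5 + i / (n + 1)
        if rho_x > rho_y:
            total += sample_2d(x + off, y)   # major axis is x
        else:
            total += sample_2d(x, y + off)   # major axis is y
    return total / n
```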

Texel Footprint Evaluation

The SPIR-V instruction OpImageSampleFootprintNV evaluates the set of texels from a single mip level that would be accessed during a texel filtering operation. In addition to the inputs that would be accepted by an equivalent OpImageSample* instruction, OpImageSampleFootprintNV accepts two additional inputs. The Granularity input is an integer identifying the size of texel groups used to evaluate the footprint. Each bit in the returned footprint mask corresponds to an aligned block of texels whose size is given by the following table:

Table 29. Texel Footprint Granularity Values
| Granularity | Dim = 2D | Dim = 3D |
|---|---|---|
| 0 | unsupported | unsupported |
| 1 | 2x2 | 2x2x2 |
| 2 | 4x2 | unsupported |
| 3 | 4x4 | 4x4x2 |
| 4 | 8x4 | unsupported |
| 5 | 8x8 | unsupported |
| 6 | 16x8 | unsupported |
| 7 | 16x16 | unsupported |
| 8 | unsupported | unsupported |
| 9 | unsupported | unsupported |
| 10 | unsupported | 16x16x16 |
| 11 | 64x64 | 32x16x16 |
| 12 | 128x64 | 32x32x16 |
| 13 | 128x128 | 32x32x32 |
| 14 | 256x128 | 64x32x32 |
| 15 | 256x256 | unsupported |

The Coarse input is used to select between the two mip levels that may be accessed during texel filtering when using a mipmapMode of VK_SAMPLER_MIPMAP_MODE_LINEAR. When filtering between two mip levels, a Coarse value of true requests the footprint in the lower-resolution mip level (higher level number), while false requests the footprint in the higher-resolution mip level. If texel filtering would access only a single mip level, the footprint in that level would be returned when Coarse is false; an empty footprint would be returned when Coarse is true.

The footprint for OpImageSampleFootprintNV is returned in a structure with six members:

  • The first member is a boolean value that is true if the texel filtering operation would access only a single mip level.
  • The second member is a two- or three-component integer vector holding the footprint anchor location. For two-dimensional images, the returned components are in units of eight texel groups. For three-dimensional images, the returned components are in units of four texel groups.
  • The third member is a two- or three-component integer vector holding a footprint offset relative to the anchor. All returned components are in units of texel groups.
  • The fourth member is a two-component integer vector mask, which holds a bitfield identifying the set of texel groups in an 8x8 or 4x4x4 neighborhood relative to the anchor and offset.
  • The fifth member is an integer identifying the mip level containing the footprint identified by the anchor, offset, and mask.
  • The sixth member is an integer identifying the granularity of the returned footprint.

For footprints in two-dimensional images (Dim2D), the mask returned by OpImageSampleFootprintNV indicates whether each texel group in an 8x8 local neighborhood of texel groups would have one or more texels accessed during texel filtering. In the mask, the texel group with local group coordinates (lgx, lgy) is considered covered if and only if

\begin{aligned}
0 \neq ((mask.x + (mask.y \ll 32)) \ \& \ (1 \ll (lgy \times 8 + lgx)))
\end{aligned}

where:

  • 0 ≤ lgx < 8 and 0 ≤ lgy < 8; and
  • mask is the returned two-component mask.

The local group with coordinates (lgx, lgy) in the mask is considered covered if and only if the texel filtering operation would access one or more texels τij in the returned mip level where:

\begin{aligned}
i0 & = \begin{cases}
gran.x \times (8 \times anchor.x + lgx), & \text{if } lgx + offset.x < 8 \\
gran.x \times (8 \times (anchor.x - 1) + lgx), & \text{otherwise}
\end{cases} \\
i1 & = i0 + gran.x - 1 \\
j0 & = \begin{cases}
gran.y \times (8 \times anchor.y + lgy), & \text{if } lgy + offset.y < 8 \\
gran.y \times (8 \times (anchor.y - 1) + lgy), & \text{otherwise}
\end{cases} \\
j1 & = j0 + gran.y - 1
\end{aligned}

and

  • i0 ≤ i ≤ i1 and j0 ≤ j ≤ j1;
  • gran is a two-component vector holding the width and height of the texel group identified by the granularity;
  • anchor is the returned two-component anchor vector; and
  • offset is the returned two-component offset vector.
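The 2D coverage test above reduces to a single bit probe of the 64-bit mask assembled from the two 32-bit components (hypothetical helper name):

```python
def covered_2d(mask_x, mask_y, lgx, lgy):
    # Sketch of the 2D footprint coverage test: bit (lgy*8 + lgx) of the
    # 64-bit mask formed as mask.x + (mask.y << 32).
    assert 0 <= lgx < 8 and 0 <= lgy < 8
    mask = mask_x + (mask_y << 32)
    return (mask & (1 << (lgy * 8 + lgx))) != 0
```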

For footprints in three-dimensional images (Dim3D), the mask returned by OpImageSampleFootprintNV indicates whether each texel group in a 4x4x4 local neighborhood of texel groups would have one or more texels accessed during texel filtering. In the mask, the texel group with local group coordinates (lgx, lgy, lgz) is considered covered if and only if:

\begin{aligned}
0 \neq ((mask.x + (mask.y \ll 32)) \ \& \ (1 \ll (lgz \times 16 + lgy \times 4 + lgx)))
\end{aligned}

where:

  • 0 ≤ lgx < 4, 0 ≤ lgy < 4, and 0 ≤ lgz < 4; and
  • mask is the returned two-component mask.

The local group with coordinates (lgx, lgy, lgz) in the mask is considered covered if and only if the texel filtering operation would access one or more texels τijk in the returned mip level where:

\begin{aligned}
i0 & = \begin{cases}
gran.x \times (4 \times anchor.x + lgx), & \text{if } lgx + offset.x < 4 \\
gran.x \times (4 \times (anchor.x - 1) + lgx), & \text{otherwise}
\end{cases} \\
i1 & = i0 + gran.x - 1 \\
j0 & = \begin{cases}
gran.y \times (4 \times anchor.y + lgy), & \text{if } lgy + offset.y < 4 \\
gran.y \times (4 \times (anchor.y - 1) + lgy), & \text{otherwise}
\end{cases} \\
j1 & = j0 + gran.y - 1 \\
k0 & = \begin{cases}
gran.z \times (4 \times anchor.z + lgz), & \text{if } lgz + offset.z < 4 \\
gran.z \times (4 \times (anchor.z - 1) + lgz), & \text{otherwise}
\end{cases} \\
k1 & = k0 + gran.z - 1
\end{aligned}

and

  • i0 ≤ i ≤ i1, j0 ≤ j ≤ j1, and k0 ≤ k ≤ k1;
  • gran is a three-component vector holding the width, height, and depth of the texel group identified by the granularity;
  • anchor is the returned three-component anchor vector; and
  • offset is the returned three-component offset vector.

If the sampler used by OpImageSampleFootprintNV enables anisotropic texel filtering via anisotropyEnable, it is possible that the set of texel groups accessed in a mip level may be too large to be expressed using an 8x8 or 4x4x4 mask using the granularity requested in the instruction. In this case, the implementation uses a texel group larger than the requested granularity. When a larger texel group size is used, OpImageSampleFootprintNV returns an integer granularity value that can be interpreted in the same manner as the granularity value provided to the instruction to determine the texel group size used. If anisotropic texel filtering is disabled in the sampler, or if an anisotropic footprint can be represented as an 8x8 or 4x4x4 mask with the requested granularity, OpImageSampleFootprintNV will use the requested granularity as-is and return a granularity value of zero.

OpImageSampleFootprintNV supports only two- and three-dimensional image accesses (Dim2D and Dim3D), and the footprint returned is undefined if the sampler uses an addressing mode other than VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE.

Weight Image Sampling

The SPIR-V instruction OpImageWeightedSampleQCOM specifies a texture sampling operation involving two images: the sampled image and the weight image. It is similar to bilinear filtering except that more than 2x2 texels may participate in the filter and the filter weights are application-specified rather than computed by fixed-function hardware. The weight image view defines the 2D kernel of weights used during sampling.

OpImageWeightedSampleQCOM supports normalized and unnormalized texel coordinates. In addition to the inputs that would be accepted by an equivalent OpImageSample* instruction, OpImageWeightedSampleQCOM accepts a weight input that specifies the view of a sample weight image.

The input weight must be a view of a 2D or 1D image with miplevels equal to 1, samples equal to VK_SAMPLE_COUNT_1_BIT, created with an identity swizzle, and created with usage that includes VK_IMAGE_USAGE_SAMPLE_WEIGHT_BIT_QCOM. The VkImageViewSampleWeightCreateInfoQCOM specifies additional parameters of the view: filterCenter, filterSize, and numPhases, described in more detail below.

The weight input must be bound using a sample weight image descriptor type. The weight view defines a filtering kernel that is a region of the view’s subresource range. The kernel spans a region from integer texel coordinate (0,0) to (filterSize.x-1, filterSize.y-1). It is valid for the view’s subresource to have dimensions larger than the kernel, but texels with integer coordinates greater than (filterSize.width-1, filterSize.height-1) are ignored by weight sampling. The values returned by the queries OpImageQuerySize, OpImageQuerySizeLod, OpImageQueryLevels, and OpImageQuerySamples for a weight image are undefined.

filterCenter designates an integer texel coordinate within the filter kernel as being the 'center' of the kernel. The center must be in the range (0,0) to (filterSize.x-1, filterSize.y-1). numPhases describes the number of filter phases used to provide sub-pixel filtering. Both are described in more detail below.

Weight Image Layout

The weight image specifies filtering kernel weight values. A 2D image view can be used to specify a 2D matrix of filter weights. For separable filters, a 1D image view can be used to specify the horizontal and vertical weights.

2D Non-Separable Weight Filters

A 2D image view defined with VkImageViewSampleWeightCreateInfoQCOM describes a 2D matrix (filterSize.width × filterSize.height) of weight elements with filter’s center point at filterCenter. Note that filterSize can be smaller than the view’s subresource, but the filter will always be located starting at integer texel coordinate (0,0).

The following figure illustrates a 2D convolution filter having filterSize of (4,3) and filterCenter at (1, 1).

weight filter 2d

For a 2D weight filter, the phases are stored as layers of a 2D array image. The width and height of the view’s subresource range must be less than or equal to VkPhysicalDeviceImageProcessingPropertiesQCOM::maxWeightFilterDimension. The layers are stored in horizontal phase major order. Expressed as a formula, the layer index for each filter phase is computed as:

layerIndex(horizPhase,vertPhase,horizPhaseCount) = (vertPhase * horizPhaseCount) + horizPhase
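The ordering can be checked with a small sketch (Python used for illustration only; the name mirrors the formula above):

```python
def layer_index(horiz_phase, vert_phase, horiz_phase_count):
    # Horizontal phase major: all horizontal phases of vertPhase 0 come first.
    return (vert_phase * horiz_phase_count) + horiz_phase
```

For example, with 4 horizontal phases, the filter phase (horizPhase=1, vertPhase=2) is stored in array layer 9.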

1D Separable Weight Filters

A separable weight filter is a 2D filter that can be specified by two 1D filters in the x and y directions such that their product yields the 2D filter. The following example shows a 2D filter and its associated separable 1D horizontal and vertical filters.

weight filter 1d separable

A 1D array image view defined with VkImageViewSampleWeightCreateInfoQCOM and with layerCount equal to '2' describes a separable weight filter. The horizontal weights are specified in slice '0' and the vertical weights in slice '1'. The filterSize and filterCenter specify the size and origin of the horizontal and vertical filters. For many use cases, 1D separable filters can offer a performance advantage over 2D filters.

For a 1D separable weight filter, the phases are arranged into a 1D array image with two layers. The horizontal weights are stored in layer 0 and the vertical weights in layer 1. Within each layer of the 1D array image, the weights are arranged into groups of 4, and then arranged by phase. Expressed as a formula, the 1D texel offset for each weight within each layer is computed as:

// Let horizontal weights have a weightIndex of [0, filterSize.width - 1]
// Let vertical weights have a weightIndex of [0, filterSize.height - 1]
// Let phaseCount be the number of phases in either the vertical or horizontal direction.

texelOffset(phaseIndex,weightIndex,phaseCount) = (phaseCount * 4 * (weightIndex / 4)) + (phaseIndex * 4) + (weightIndex % 4)
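The packing formula can be sketched directly (illustrative Python; phase_index and weight_index follow the definitions in the comments above):

```python
def texel_offset(phase_index, weight_index, phase_count):
    # Weights are packed in groups of 4; within each group the 4 weights of a
    # phase are contiguous, and successive groups step through the phases.
    return (phase_count * 4 * (weight_index // 4)) \
        + (phase_index * 4) + (weight_index % 4)
```

With 4 phases, weights 0-3 of phase 0 occupy offsets 0-3, while weight 4 of phase 0 starts a new group at offset 16.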

Weight Sampling Phases

When using weight image sampling, the texture coordinates may not align with a texel center in the sampled image. In this case, the filter weights can be adjusted based on the subpixel location. This is termed subpixel filtering to indicate that the origin of the filter lies at a subpixel location other than the texel center. Conceptually, this means that the weight filter is positioned such that filter taps do not align with sampled texels exactly. In such a case, modified filter weights may be needed to adjust for the off-center filter taps. Unlike bilinear filtering where the subpixel weights are computed by the implementation, subpixel weight image sampling requires that the per-phase filter weights are pre-computed by the application and stored in an array where each slice of the array is a filter phase. The array is indexed by the implementation based on subpixel positioning. Rather than a single 2D kernel of filter weights, the application provides an array of kernels, one set of filter weights per phase.

The number of phases are restricted by following requirements, which apply to both separable and non-separable filters:

  • The number of phases in the vertical direction, phaseCount_vert, must be a power of two (i.e., 1, 2, 4, etc.).
  • The number of phases in the horizontal direction, phaseCount_horiz, must equal phaseCount_vert.
  • The total number of phases, phaseCount_vert × phaseCount_horiz, must be less than or equal to VkPhysicalDeviceImageProcessingPropertiesQCOM::maxWeightFilterPhases.
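Taken together, these constraints mean numPhases itself must be a squared power of two (1, 4, 16, …) no larger than the device limit. A sketch of the check (hypothetical helper; Python for illustration):

```python
import math

def valid_num_phases(num_phases, max_weight_filter_phases):
    # numPhases = phaseCount_vert * phaseCount_horiz, with the two counts
    # equal and each a power of two, so numPhases is a squared power of two.
    if num_phases < 1 or num_phases > max_weight_filter_phases:
        return False
    root = math.isqrt(num_phases)
    return root * root == num_phases and (root & (root - 1)) == 0
```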

Weight Sampler Parameters

Weight sampling requires that VkSamplerCreateInfo addressModeU and addressModeV be VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE or VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER. If VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER is used, then the border color must be VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK.

Weight Sampling Operation

The 2D unnormalized texel coordinates (u,v) are transformed by filterCenter to specify coordinates (i_0, j_0).

\begin{aligned}
i_0 &= \left\lfloor u - filterCenter_x \right\rfloor \\
j_0 &= \left\lfloor v - filterCenter_y \right\rfloor
\end{aligned}

where filterCenter is specified by VkImageViewSampleWeightCreateInfoQCOM::filterCenter.

Two sets of neighboring integer 2D texel coordinates are generated. The first set is used for selecting texels from the sampled image τ and the second set for selecting texels from the weight image w. The first set of neighboring coordinates are combinations of i_0 to i_{filterWidth-1} and j_0 to j_{filterHeight-1}. The second set of neighboring coordinates are combinations of k_0 to k_{filterWidth-1} and l_0 to l_{filterHeight-1}. The first and second sets each contain (filterWidth × filterHeight) pairs of (i,j) and (k,l) coordinates respectively.

\begin{aligned}
\{i_q\}_{q=0}^{q=filterWidth-1} \quad &= i_0 + q \\
\{j_q\}_{q=0}^{q=filterHeight-1} \quad &= j_0 + q \\
\{k_q\}_{q=0}^{q=filterWidth-1} \quad &= q \\
\{l_q\}_{q=0}^{q=filterHeight-1} \quad &= q
\end{aligned}

where filterWidth and filterHeight are specified by VkImageViewSampleWeightCreateInfoQCOM::filterSize.

Each of the generated integer coordinates (i_q, j_q) is transformed by the texture wrapping operation, followed by integer texel coordinate validation. If any coordinate fails coordinate validation, it is a Border Texel and texel replacement is performed.

The phase index ψ is computed from the fraction bits of the unnormalized 2D texel coordinates:

\begin{aligned}
phaseCount_h = phaseCount_v &= \sqrt{numPhases} \\
hPhase &= \left\lfloor frac(u) \times phaseCount_h \right\rfloor \\
vPhase &= \left\lfloor frac(v) \times phaseCount_v \right\rfloor \\
\psi &= (vPhase \times phaseCount_h) + hPhase
\end{aligned}

where the number of fraction bits retained is log2(numPhases), with numPhases specified by VkImageViewSampleWeightCreateInfoQCOM::numPhases.
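The phase selection can be sketched as follows (illustrative Python; the exact sub-texel precision of frac() is ignored):

```python
import math

def phase_index(u, v, num_phases):
    # phaseCount_h = phaseCount_v = sqrt(numPhases); the phase is chosen
    # from the fractional part of the unnormalized coordinates.
    count = math.isqrt(num_phases)
    h_phase = math.floor((u % 1.0) * count)
    v_phase = math.floor((v % 1.0) * count)
    return (v_phase * count) + h_phase
```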

Each pair of texel coordinates (i,j) in the first set selects a single texel value τ_ij from the sampled image. Each pair of texel coordinates (k,l) in the second set, combined with phase index ψ, selects a single weight from the weight image w(k,l,ψ).

\begin{aligned}
w(k,l,\psi) &= \begin{cases}
w_{kl}[\psi] \quad (\psi \text{ as layer index}) & \text{for 2D array view (non-separable filter)} \\
weight_h \times weight_v & \text{for 1D array view (separable filter)}
\end{cases}
\end{aligned}

If w is a 2D array view, then non-separable filtering is specified, and integer coordinates (k,l) are used to select texels from layer ψ of w. If w is a 1D array view, then separable filtering is specified and integer coordinates (k,l) are transformed to (k_packed, l_packed), and used to select the horizontal weight (weight_h) and vertical weight (weight_v) texels from layer 0 and layer 1 of w respectively.

\begin{aligned}
k_{packed} &= (phaseCount_h \times 4 \times \lfloor k / 4 \rfloor) + (hPhase \times 4) + (k \% 4) \\
l_{packed} &= (phaseCount_v \times 4 \times \lfloor l / 4 \rfloor) + (vPhase \times 4) + (l \% 4) \\
weight_h &= w_{k_{packed}}[0] \quad \text{(horizontal weights packed in layer 0)} \\
weight_v &= w_{l_{packed}}[1] \quad \text{(vertical weights packed in layer 1)}
\end{aligned}

where % refers to the integer modulo operator.

The values of multiple texels, together with their weights, are combined to produce a filtered value.

\begin{aligned}
\tau_{weightSampling} &= \sum_{{j=j_0} \atop {l=l_0}}^{{j_{filterHeight-1}} \atop {l_{filterHeight-1}}} \quad \sum_{{i=i_0} \atop {k=k_0}}^{{i_{filterWidth-1}} \atop {k_{filterWidth-1}}} w(k,l,\psi)\,\tau_{ij}
\end{aligned}

When VkSamplerReductionModeCreateInfo::reductionMode is VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, the above summation is used. However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above values, computing a component-wise minimum or maximum of the texels with non-zero weights. If the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, each w(k,l,ψ) weight must be equal to 0.0 or 1.0, otherwise undefined values are returned.

Finally, the operations described in Conversion to RGBA and Component swizzle are performed and the final result is returned to the shader.
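Putting the steps of this section together, the non-separable path can be sketched end to end. This is illustrative Python, assuming clamp-to-edge addressing, a single-component sampled image indexed image[j][i], and weights indexed weights[phase][l][k]; none of these helper names are Vulkan API.

```python
import math

def weighted_sample(image, weights, u, v, filter_center, num_phases):
    """Sketch of OpImageWeightedSampleQCOM for a non-separable 2D weight
    filter, weighted-average reduction, clamp-to-edge addressing."""
    filter_h = len(weights[0])        # filterSize.height
    filter_w = len(weights[0][0])     # filterSize.width
    i0 = math.floor(u - filter_center[0])
    j0 = math.floor(v - filter_center[1])
    # Phase index from the fractional coordinate bits.
    count = math.isqrt(num_phases)
    psi = math.floor((v % 1.0) * count) * count + math.floor((u % 1.0) * count)
    h, w = len(image), len(image[0])
    total = 0.0
    for l in range(filter_h):
        for k in range(filter_w):
            i = min(max(i0 + k, 0), w - 1)   # clamp to edge
            j = min(max(j0 + l, 0), h - 1)
            total += weights[psi][l][k] * image[j][i]
    return total
```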

Block Matching

The SPIR-V instructions opImageBlockMatchSAD and opImageBlockMatchSSD specify texture block matching operations where a block or region of texels within a target image is compared with a same-sized region of a reference image. The instructions make use of two image views: the target view and the reference view. The target view and reference view can be the same view, allowing block matching of two blocks within a single image.

Similar to an equivalent OpImageFetch instruction, opImageBlockMatchSAD and opImageBlockMatchSSD specify an image and an integer texel coordinate which describes the bottom-left texel of the target block. There are three additional inputs. The reference and refCoordinate specify the bottom-left texel of the reference block. The blockSize specifies the integer width and height of the target and reference blocks to be compared, and must not be greater than VkPhysicalDeviceImageProcessingPropertiesQCOM.maxBlockMatchRegion.

opImageBlockMatchWindowSAD and opImageBlockMatchWindowSSD take the same input parameters as the corresponding non-window instructions. The block matching comparison is performed for all pixel values within a 2D window whose dimensions are specified in the sampler.

Block Matching Sampler Parameters

For opImageBlockMatchSAD and opImageBlockMatchSSD, the input sampler must be created with addressModeU and addressModeV equal to VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE, or VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER with VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK. The input sampler must be created with unnormalizedCoordinates equal to VK_TRUE. The input sampler must be created with components equal to VK_COMPONENT_SWIZZLE_IDENTITY.

For opImageBlockMatchWindowSAD and opImageBlockMatchWindowSSD instructions, the target sampler must have been created with VkSamplerBlockMatchWindowCreateInfoQCOM in the pNext chain.

For opImageBlockMatchWindowSAD, opImageBlockMatchWindowSSD, opImageBlockMatchGatherSAD, or opImageBlockMatchGatherSSD instructions, the input sampler must be created with addressModeU and addressModeV equal to VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER with VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK.

Other sampler states are ignored.

Block Matching Operation

Block matching SPIR-V instructions opImageBlockMatchSAD and opImageBlockMatchSSD specify two sets of 2D integer texel coordinates: target coordinates (u,v) and reference coordinates (s,t).

The coordinates define the bottom-left texel of the target block (i_0, j_0) and the reference block (k_0, l_0).

\begin{aligned}
i_0 &= u \\
j_0 &= v \\
k_0 &= s \\
l_0 &= t
\end{aligned}

For the target block, a set of neighboring integer texel coordinates are generated. The neighboring coordinates are combinations of i_0 to i_{blockWidth-1} and j_0 to j_{blockHeight-1}. The set is of size blockWidth × blockHeight.

\begin{aligned}
\{i_q\}_{q=0}^{q=blockWidth-1} \quad &= i_0 + q \\
\{j_q\}_{q=0}^{q=blockHeight-1} \quad &= j_0 + q
\end{aligned}

where blockWidth and blockHeight are specified by the blockSize operand.

If any target integer texel coordinate (i,j) in the set fails integer texel coordinate validation, then the texel is an invalid texel and texel replacement is performed.

Similarly for the reference block, a set of neighboring integer texel coordinates are generated.

\begin{aligned}
\{k_q\}_{q=0}^{q=blockWidth-1} \quad &= k_0 + q \\
\{l_q\}_{q=0}^{q=blockHeight-1} \quad &= l_0 + q
\end{aligned}

Each reference texel coordinate (k,l) in the set must not fail integer texel coordinate validation. To avoid undefined behavior, the application shader should guarantee that the reference block is fully within the bounds of the reference image.

Each pair of texel coordinates (i,j) in the set selects a single texel value from the target image τ_ij. Each pair of texel coordinates (k,l) in the set selects a single texel value from the reference image υ_kl.

The difference between target and reference texel values is summed to compute a difference metric. The opImageBlockMatchSAD instruction computes the sum of absolute differences.

\begin{aligned}
\tau_{SAD} &= \sum_{{j=j_0} \atop {l=l_0}}^{{j_{blockHeight-1}} \atop {l_{blockHeight-1}}} \quad \sum_{{i=i_0} \atop {k=k_0}}^{{i_{blockWidth-1}} \atop {k_{blockWidth-1}}} |\upsilon_{kl} - \tau_{ij}|
\end{aligned}

The opImageBlockMatchSSD computes the sum of the squared differences.

\begin{aligned}
\tau_{SSD} &= \sum_{{j=j_0} \atop {l=l_0}}^{{j_{blockHeight-1}} \atop {l_{blockHeight-1}}} \quad \sum_{{i=i_0} \atop {k=k_0}}^{{i_{blockWidth-1}} \atop {k_{blockWidth-1}}} |\upsilon_{kl} - \tau_{ij}|^2
\end{aligned}
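Both metrics can be sketched together (illustrative Python operating on single-component row-major images img[y][x]; texel replacement for out-of-bounds target texels is not modeled, so both blocks must lie within their images):

```python
def block_match(target, tgt, reference, ref, block_size):
    """Sketch of the SAD and SSD difference metrics. tgt and ref give the
    bottom-left texel of the target and reference blocks respectively."""
    bw, bh = block_size
    sad = ssd = 0.0
    for q in range(bh):
        for p in range(bw):
            d = reference[ref[1] + q][ref[0] + p] - target[tgt[1] + q][tgt[0] + p]
            sad += abs(d)       # sum of absolute differences
            ssd += d * d        # sum of squared differences
    return sad, ssd
```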

When VkSamplerReductionModeCreateInfo::reductionMode is VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, the above summation is used. However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above values, computing a component-wise minimum or maximum of υ_klτ_ij|\upsilon\_{kl}-\tau\_{ij}|, respectively. For τ_SAD\tau\_{SAD}, the minimum or maximum difference is computed and for τ_SSD\tau\_{SSD}, the square of the minimum or maximum is computed.

Finally, the operations described in Conversion to RGBA and Component swizzle are performed and the final result is returned to the shader. The component swizzle is specified by the target image descriptor; any swizzle specified by the reference image descriptor is ignored.

Block Matching Window Operation

Window block matching SPIR-V instructions opImageBlockMatchWindowSAD and opImageBlockMatchWindowSSD specify two sets of 2D integer texel coordinates: target coordinates (u,v) and reference coordinates (s,t). The block matching operation is performed repeatedly, for multiple sets of target integer coordinates within the specified window. These instructions effectively search a region or window within the target texture and identify the window coordinates where the minimum or maximum error metric is found. These instructions only support single-component image formats.

The target coordinates are combinations of coordinates from (u,v) to (u + windowWidth - 1, v + windowHeight - 1) where windowHeight and windowWidth are specified by VkSamplerBlockMatchWindowCreateInfoQCOM::windowExtent. At each target coordinate, a block matching operation is performed, resulting in a difference metric. The reference coordinate (s,t) is fixed. The block matching operation is repeated windowWidth × windowHeight times.

The resulting minimum or maximum error is returned in the R component of the output. The integer window coordinates (x,y) are returned in the G and B components of the output. The A component is 0. The minimum or maximum behavior is selected by VkSamplerBlockMatchWindowCreateInfoQCOM::windowCompareMode.

The following pseudocode describes the operation opImageBlockMatchWindowSAD. The pseudocode for opImageBlockMatchWindowSSD follows an identical pattern.

vec4 opImageBlockMatchWindowSAD( sampler2D target,
                                 uvec2 targetCoord,
                                 sampler2D reference,
                                 uvec2 refCoord,
                                 uvec2 blocksize) {
    // Two parameters are sourced from the VkSampler associated with
    // `target`:
    //    compareMode  (which can be either `MIN` or `MAX`)
    //    uvec2 window (which defines the search window)

    float minSAD = INF;
    float maxSAD = -INF;
    uvec2 minCoord;
    uvec2 maxCoord;

    for (uint x=0; x<window.width; x++) {
        for (uint y=0; y<window.height; y++) {
            float SAD = textureBlockMatchSAD(target,
                                             targetCoord + uvec2(x, y),
                                             reference,
                                             refCoord,
                                             blocksize).x;
            if (SAD < minSAD) {
                minSAD = SAD;
                minCoord = uvec2(x,y);
            }
            if (SAD > maxSAD) {
                maxSAD = SAD;
                maxCoord = uvec2(x,y);
            }
        }
    }
    if (compareMode==MIN) {
        return vec4(minSAD, minCoord.x, minCoord.y, 0.0);
    } else {
        return vec4(maxSAD, maxCoord.x, maxCoord.y, 0.0);
    }
}

Block Matching Gather Operation

Block matching Gather SPIR-V instructions opImageBlockMatchGatherSAD and opImageBlockMatchGatherSSD specify two sets of 2D integer texel coordinates: target coordinates (u,v)(u,v) and reference coordinates (s,t)(s,t).

These instructions perform the block matching operation 4 times, using integer target coordinates (u,v)(u,v), (u+1,v)(u+1,v), (u+2,v)(u+2,v), and (u+3,v)(u+3,v). The R component from each of those 4 operations is gathered and returned in the R, G, B, and A components of the output respectively. For each block match operation, the reference coordinate is (s,t)(s,t). For each block match operation, only the R component of the target and reference images are compared. The following pseudocode describes the operation opImageBlockMatchGatherSAD. The pseudocode for opImageBlockMatchGatherSSD follows an identical pattern.

vec4 opImageBlockMatchGatherSAD(sampler2D target,
                                uvec2 targetCoord,
                                sampler2D reference,
                                uvec2 refCoord,
                                uvec2 blocksize) {
    vec4 result;
    for (uint x=0; x<4; x++) {
            float SAD = textureBlockMatchSAD(target,
                                             targetCoord + uvec2(x, 0),
                                             reference,
                                             refCoord,
                                             blocksize).x;
            if (x == 0) {
                result.x = SAD;
            }
            if (x == 1) {
                result.y = SAD;
            }
            if (x == 2) {
                result.z = SAD;
            }
            if (x == 3) {
                result.w = SAD;
            }
    }
    return result;
}

Box Filter Sampling

The SPIR-V instruction OpImageBoxFilterQCOM specifies a texture box filtering operation in which a weighted average of a region of texels is computed, with the weights proportional to the coverage of each of the texels.

In addition to the inputs that would be accepted by an equivalent OpImageSample* instruction, OpImageBoxFilterQCOM accepts one additional input, boxSize, which specifies the width and height in texels of the region to be averaged.

The figure below shows an example of using OpImageBoxFilterQCOM to sample from an 8 × 4 texel two-dimensional image, with unnormalized texture coordinates (4.125, 2.625) and boxSize of (2.75, 2.25). The filter reads 12 texel values and computes a weight for each texel based on the portion of the texel covered by the box.

vulkantexture boxFilter

If boxSize has height and width both equal to 1.0, then this instruction will behave as traditional bilinear filtering. The boxSize parameter must be greater than or equal to 1.0 and must not be greater than VkPhysicalDeviceImageProcessingPropertiesQCOM.maxBoxFilterBlockSize.

Box Filter Sampler Parameters

The input sampler must be created with addressModeU and addressModeV equal to VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_EDGE, or VK_SAMPLER_ADDRESS_MODE_CLAMP_TO_BORDER with VK_BORDER_COLOR_FLOAT_TRANSPARENT_BLACK.

Box Filter Operation

The 2D unnormalized texel coordinates (u,v) are transformed by boxSize to specify the integer texel coordinates (i_0, j_0) of the bottom-left texel for the filter.

\begin{aligned}
i_0 &= \left\lfloor u - \frac{boxWidth}{2} \right\rfloor \\
j_0 &= \left\lfloor v - \frac{boxHeight}{2} \right\rfloor
\end{aligned}

where boxWidth and boxHeight are specified by the (x,y) components of the boxSize operand.

The filter dimensions (filterWidth × filterHeight) are computed from the fractional portion of the (u,v) coordinates and the boxSize.

\begin{aligned}
startFracU &= frac\left(u - \frac{boxWidth}{2}\right) \\
startFracV &= frac\left(v - \frac{boxHeight}{2}\right) \\
endFracU &= frac(startFracU + boxWidth) \\
endFracV &= frac(startFracV + boxHeight) \\
filterWidth &= \left\lceil startFracU + boxWidth \right\rceil \\
filterHeight &= \left\lceil startFracV + boxHeight \right\rceil
\end{aligned}

where the number of fraction bits retained by frac() is specified by VkPhysicalDeviceLimits::subTexelPrecisionBits.
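Applying these formulas to the earlier example (unnormalized coordinates (4.125, 2.625), boxSize (2.75, 2.25)) yields a 4 × 3 filter, matching the 12 texels read. A sketch (Python for illustration; sub-texel precision is ignored):

```python
import math

def box_filter_dims(u, v, box_w, box_h):
    # filterWidth/filterHeight follow from the fractional start offsets.
    start_frac_u = (u - box_w / 2) % 1.0
    start_frac_v = (v - box_h / 2) % 1.0
    return (math.ceil(start_frac_u + box_w),
            math.ceil(start_frac_v + box_h))
```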

A set of neighboring integer texel coordinates is generated. The neighboring coordinates are combinations of i_0 to i_{filterWidth-1} and j_0 to j_{filterHeight-1}, with (i_0, j_0) being the top-left coordinate of this set. The set is of size (filterWidth × filterHeight).

\begin{aligned}
\{i_q\}_{q=0}^{q=filterWidth-1} \quad &= i_0 + q \\
\{j_q\}_{q=0}^{q=filterHeight-1} \quad &= j_0 + q
\end{aligned}

Each of the generated integer coordinates (i_q, j_q) is transformed by the texture wrapping operation, followed by integer texel coordinate validation. If any coordinate fails coordinate validation, it is a Border Texel and texel replacement is performed.

Horizontal weights horizWeight_0 to horizWeight_{filterWidth-1} and vertical weights vertWeight_0 to vertWeight_{filterHeight-1} are computed. Texels that are fully covered by the box have a horizontal and vertical weight of 1. Texels partially covered by the box have reduced weights proportional to the coverage.

\begin{aligned}
horizWeight_i &= \begin{cases}
(1 - startFracU), & \text{for } (i == 0) \\
(endFracU), & \text{for } (i == filterWidth-1) \text{ and } (endFracU \neq 0) \\
1, & \text{otherwise}
\end{cases}
\end{aligned}

\begin{aligned}
vertWeight_j &= \begin{cases}
(1 - startFracV), & \text{for } (j == 0) \\
(endFracV), & \text{for } (j == filterHeight-1) \text{ and } (endFracV \neq 0) \\
1, & \text{otherwise}
\end{cases}
\end{aligned}

The values of multiple texels, together with their horizontal and vertical weights, are combined to produce a box filtered value.

\begin{aligned}
\tau_{boxFilter} &= \frac{1}{boxHeight \times boxWidth} \sum_{j=j_0}^{j_{filterHeight-1}} \quad \sum_{i=i_0}^{i_{filterWidth-1}} (horizWeight_i)(vertWeight_j)\,\tau_{ij}
\end{aligned}

When VkSamplerReductionModeCreateInfo::reductionMode is VK_SAMPLER_REDUCTION_MODE_WEIGHTED_AVERAGE, the above summation is used. However, if the reduction mode is VK_SAMPLER_REDUCTION_MODE_MIN or VK_SAMPLER_REDUCTION_MODE_MAX, the process operates on the above values, computing a component-wise minimum or maximum of the texels.
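The weighted-average path of the whole operation can be sketched as follows (illustrative Python assuming clamp-to-edge addressing on a single-component image indexed image[j][i]; the min/max reduction modes are omitted):

```python
import math

def box_filter(image, u, v, box_w, box_h):
    """Sketch of OpImageBoxFilterQCOM, weighted-average reduction."""
    i0 = math.floor(u - box_w / 2)
    j0 = math.floor(v - box_h / 2)
    sfu = (u - box_w / 2) % 1.0           # startFracU
    sfv = (v - box_h / 2) % 1.0           # startFracV
    efu = (sfu + box_w) % 1.0             # endFracU
    efv = (sfv + box_h) % 1.0             # endFracV
    fw = math.ceil(sfu + box_w)           # filterWidth
    fh = math.ceil(sfv + box_h)           # filterHeight
    h, w = len(image), len(image[0])
    total = 0.0
    for q in range(fh):
        wv = 1.0
        if q == 0:
            wv = 1.0 - sfv
        elif q == fh - 1 and efv != 0:
            wv = efv
        for p in range(fw):
            wu = 1.0
            if p == 0:
                wu = 1.0 - sfu
            elif p == fw - 1 and efu != 0:
                wu = efu
            i = min(max(i0 + p, 0), w - 1)   # clamp to edge
            j = min(max(j0 + q, 0), h - 1)
            total += wu * wv * image[j][i]
    return total / (box_w * box_h)
```

Sampling the center of a 2 × 2 image with a 1.0 × 1.0 box reproduces traditional bilinear filtering, as the text above notes.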

Image Operation Steps

Each step described in this chapter is performed by a subset of the image instructions:

  • Texel Input Validation Operations, Format Conversion, Texel Replacement, Conversion to RGBA, and Component Swizzle: Performed by all instructions except OpImageWrite.
  • Depth Comparison: Performed by OpImage*Dref instructions.
  • All Texel output operations: Performed by OpImageWrite.
  • Projection: Performed by all OpImage*Proj instructions.
  • Derivative Image Operations, Cube Map Operations, Scale Factor Operation, LOD Operation and Image Level(s) Selection, and Texel Anisotropic Filtering: Performed by all OpImageSample* and OpImageSparseSample* instructions.
  • (s,t,r,q,a) to (u,v,w,a) Transformation, Wrapping, and (u,v,w,a) to (i,j,k,l,n) Transformation And Array Layer Selection: Performed by all OpImageSample, OpImageSparseSample, and OpImage*Gather instructions.
  • Texel Gathering: Performed by OpImage*Gather instructions.
  • Texel Footprint Evaluation: Performed by OpImageSampleFootprintNV instructions.
  • Texel Filtering: Performed by all OpImageSample* and OpImageSparseSample* instructions.
  • Sparse Residency: Performed by all OpImageSparse* instructions.
  • (s,t,r,q,a) to (u,v,w,a) Transformation, Wrapping, and Weight Image Sampling: Performed by OpImageWeightedSample* instructions.
  • (s,t,r,q,a) to (u,v,w,a) Transformation, Wrapping, and Block Matching: Performed by opImageBlockMatch* instructions.
  • (s,t,r,q,a) to (u,v,w,a) Transformation, Wrapping, and Box Filter Sampling: Performed by OpImageBoxFilter* instructions.

Image Query Instructions

Image Property Queries

OpImageQuerySize, OpImageQuerySizeLod, OpImageQueryLevels, and OpImageQuerySamples query properties of the image descriptor that would be accessed by a shader image operation. They return 0 if the bound descriptor is a null descriptor.

OpImageQuerySizeLod returns the size of the image level identified by the Level of Detail operand. If that level does not exist in the image, and the descriptor is not null, then the value returned is undefined.

LOD Query

OpImageQueryLod returns the Lod parameters that would be used in an image operation with the given image and coordinates. If the descriptor that would be accessed is a null descriptor then (0,0) is returned. Otherwise, the steps described in this chapter are performed as if for OpImageSampleImplicitLod, up to Scale Factor Operation, LOD Operation and Image Level(s) Selection. The return value is the vector (λ′, d_l − level_base). These values may be subject to implementation-specific maxima and minima for very large, out-of-range values.