Rasterization

Rasterization is the process by which a primitive is converted to a two-dimensional image. Each discrete location of this image contains associated data such as depth, color, or other attributes.

Rasterizing a primitive begins by determining which squares of an integer grid in framebuffer coordinates are occupied by the primitive, and assigning one or more depth values to each such square. This process is described below for points, lines, and polygons.

A grid square, including its (x,y) framebuffer coordinates, z (depth), and associated data added by fragment shaders, is called a fragment. A fragment is located by its upper left corner, which lies on integer grid coordinates.

Rasterization operations also refer to a fragment’s sample locations, which are offset by fractional values from its upper left corner. The rasterization rules for points, lines, and triangles involve testing whether each sample location is inside the primitive. Fragments need not actually be square, and rasterization rules are not affected by the aspect ratio of fragments. Display of non-square grids, however, will cause rasterized points and line segments to appear fatter in one direction than the other.

We assume that fragments are square, since it simplifies antialiasing and texturing. After rasterization, fragments are processed by fragment operations.

Several factors affect rasterization, including the members of VkPipelineRasterizationStateCreateInfo and VkPipelineMultisampleStateCreateInfo.

VkPipelineRasterizationStateCreateInfo: Structure specifying parameters of a newly created pipeline rasterization state
VkPipelineRasterizationStateCreateFlags: Reserved for future use
VkPipelineRasterizationDepthClipStateCreateInfoEXT: Structure specifying depth clipping state
VkPipelineRasterizationDepthClipStateCreateFlagsEXT: Reserved for future use
VkPipelineMultisampleStateCreateInfo: Structure specifying parameters of a newly created pipeline multisample state
VkPipelineMultisampleStateCreateFlags: Reserved for future use
VkSampleMask: Mask of sample coverage information
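
As an illustration of how these structures are typically populated, the following minimal sketch fills in a rasterization state and a 4-sample multisample state for a graphics pipeline. The particular values (fill mode, back-face culling, sample count) are arbitrary example choices, not requirements of the specification.

    #include <vulkan/vulkan.h>

    // Example rasterization state: fill polygons, cull back faces, no depth bias.
    VkPipelineRasterizationStateCreateInfo rasterState = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
        .depthClampEnable = VK_FALSE,
        .rasterizerDiscardEnable = VK_FALSE,          // primitives are rasterized
        .polygonMode = VK_POLYGON_MODE_FILL,
        .cullMode = VK_CULL_MODE_BACK_BIT,
        .frontFace = VK_FRONT_FACE_COUNTER_CLOCKWISE,
        .depthBiasEnable = VK_FALSE,
        .lineWidth = 1.0f,
    };

    // Example multisample state: 4x multisampling, no sample shading.
    VkPipelineMultisampleStateCreateInfo multisampleState = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_MULTISAMPLE_STATE_CREATE_INFO,
        .rasterizationSamples = VK_SAMPLE_COUNT_4_BIT,
        .sampleShadingEnable = VK_FALSE,
        .minSampleShading = 1.0f,
        .pSampleMask = NULL,                          // NULL means all samples enabled
        .alphaToCoverageEnable = VK_FALSE,
        .alphaToOneEnable = VK_FALSE,
    };

Both structures are then referenced from VkGraphicsPipelineCreateInfo through its pRasterizationState and pMultisampleState members.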

Rasterization only generates fragments which cover one or more pixels inside the framebuffer. Pixels outside the framebuffer are never considered covered in the fragment. Fragments which would be produced by application of any of the primitive rasterization rules described below but which lie outside the framebuffer are not produced, nor are they processed by any later stage of the pipeline, including any of the fragment operations.

Surviving fragments are processed by fragment shaders. Fragment shaders determine associated data for fragments, and can also modify or replace their assigned depth values.

Discarding Primitives Before Rasterization

Primitives are discarded before rasterization if the rasterizerDiscardEnable member of VkPipelineRasterizationStateCreateInfo is enabled. When enabled, primitives are discarded after they are processed by the last active shader stage in the pipeline before rasterization.

vkCmdSetRasterizerDiscardEnable: Control whether primitives are discarded before the rasterization stage dynamically for a command buffer
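
For example, if the pipeline was created with VK_DYNAMIC_STATE_RASTERIZER_DISCARD_ENABLE, the discard state can be toggled while recording; a sketch, where commandBuffer and vertexCount stand in for values from the surrounding code:

    // Discard all primitives produced by this draw before rasterization.
    vkCmdSetRasterizerDiscardEnable(commandBuffer, VK_TRUE);
    vkCmdDraw(commandBuffer, vertexCount, 1, 0, 0);   // generates no fragments

    // Rasterize normally again for subsequent draws.
    vkCmdSetRasterizerDiscardEnable(commandBuffer, VK_FALSE);
    vkCmdDraw(commandBuffer, vertexCount, 1, 0, 0);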

Controlling the Vertex Stream Used for Rasterization

By default, vertex data output from the last pre-rasterization shader stage are directed to vertex stream zero. Geometry shaders can emit primitives to multiple independent vertex streams. Each vertex emitted by the geometry shader is directed at one of the vertex streams. As vertices are received on each vertex stream, they are arranged into primitives of the type specified by the geometry shader output primitive type. The shading language instructions OpEndPrimitive and OpEndStreamPrimitive can be used to end the primitive being assembled on a given vertex stream and start a new empty primitive of the same type.

An implementation supports up to VkPhysicalDeviceTransformFeedbackPropertiesEXT::maxTransformFeedbackStreams streams, which is at least 1. The individual streams are numbered 0 through maxTransformFeedbackStreams minus 1. There is no requirement on the order of the streams to which vertices are emitted, and the number of vertices emitted to each vertex stream can be completely independent, subject only to the VkPhysicalDeviceTransformFeedbackPropertiesEXT::maxTransformFeedbackStreamDataSize and VkPhysicalDeviceTransformFeedbackPropertiesEXT::maxTransformFeedbackBufferDataSize limits.

The primitives output from all vertex streams are passed to the transform feedback stage to be captured to transform feedback buffers in the manner specified by the last pre-rasterization shader stage shader’s XfbBuffer, XfbStride, and Offsets decorations on the output interface variables in the graphics pipeline. To use a vertex stream other than zero, or to use multiple streams, the GeometryStreams capability must be specified.

By default, the primitives output from vertex stream zero are rasterized. If the implementation supports the VkPhysicalDeviceTransformFeedbackPropertiesEXT::transformFeedbackRasterizationStreamSelect property it is possible to rasterize a vertex stream other than zero.

By default, geometry shaders that emit vertices to multiple vertex streams are limited to using only the OutputPoints output primitive type. If the implementation supports the VkPhysicalDeviceTransformFeedbackPropertiesEXT::transformFeedbackStreamsLinesTriangles property it is possible to emit OutputLineStrip or OutputTriangleStrip in addition to OutputPoints.

VkPipelineRasterizationStateStreamCreateInfoEXT: Structure defining the geometry stream used for rasterization
VkPipelineRasterizationStateStreamCreateFlagsEXT: Reserved for future use
vkCmdSetRasterizationStreamEXT: Specify the rasterization stream dynamically for a command buffer
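
To rasterize a vertex stream other than zero, VkPipelineRasterizationStateStreamCreateInfoEXT can be chained into the rasterization state at pipeline creation; a sketch, assuming the transformFeedbackRasterizationStreamSelect property is supported:

    #include <vulkan/vulkan.h>

    // Select geometry shader vertex stream 1 for rasterization.
    VkPipelineRasterizationStateStreamCreateInfoEXT streamInfo = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_STREAM_CREATE_INFO_EXT,
        .rasterizationStream = 1,
    };

    VkPipelineRasterizationStateCreateInfo rasterState = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_STATE_CREATE_INFO,
        .pNext = &streamInfo,                 // chain the stream selection
        .polygonMode = VK_POLYGON_MODE_FILL,
        .lineWidth = 1.0f,
    };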

Rasterization Order

Within a subpass of a render pass instance, for a given (x,y,layer,sample) sample location, the following operations are guaranteed to execute in rasterization order, for each separate primitive that includes that sample location:

  1. Fragment operations, in the order defined
  2. Blending, logic operations, and color writes

Execution of these operations for each primitive in a subpass occurs in an order determined by the application.

VkPipelineRasterizationStateRasterizationOrderAMD: Structure defining rasterization order for a graphics pipeline
VkRasterizationOrderAMD: Specify rasterization order for a graphics pipeline

Multisampling

Multisampling is a mechanism to antialias all Vulkan primitives: points, lines, and polygons. The technique is to sample all primitives multiple times at each pixel. Each sample in each framebuffer attachment has storage for a color, depth, and/or stencil value, such that per-fragment operations apply to each sample independently. The color sample values can be later resolved to a single color (see Resolving Multisample Images and the Render Pass chapter for more details on how to resolve multisample images to non-multisample images).

Vulkan defines rasterization rules for single-sample modes in a way that is equivalent to a multisample mode with a single sample in the center of each fragment.

Each fragment includes a coverage mask with a single bit for each sample in the fragment, and a number of depth values and associated data for each sample.

It is understood that each pixel has rasterizationSamples locations associated with it. These locations are exact positions, rather than regions or areas, and each is referred to as a sample point. The sample points associated with a pixel must be located inside or on the boundary of the unit square that is considered to bound the pixel. Furthermore, the relative locations of sample points may be identical for each pixel in the framebuffer, or they may differ.

If the render pass has a fragment density map attachment, each fragment only has rasterizationSamples locations associated with it regardless of how many pixels are covered in the fragment area. Fragment sample locations are defined as if the fragment had an area of (1,1) and its sample points must be located within these bounds. Their actual location in the framebuffer is calculated by scaling the sample location by the fragment area. Attachments with storage for multiple samples per pixel are located at the pixel sample locations. Otherwise, the fragment’s sample locations are generally used for evaluation of associated data and fragment operations.

If the current pipeline includes a fragment shader with one or more variables in its interface decorated with Sample and Input, the data associated with those variables will be assigned independently for each sample. The values for each sample must be evaluated at the location of the sample. The data associated with any other variables not decorated with Sample and Input need not be evaluated independently for each sample.

A coverage mask is generated for each fragment, based on which samples within that fragment are determined to be within the area of the primitive that generated the fragment.

Single pixel fragments and multi-pixel fragments defined by a fragment density map have one set of samples. Multi-pixel fragments defined by a shading rate image have one set of samples per pixel. Multi-pixel fragments defined by setting the fragment shading rate have one set of samples per pixel. Each set of samples has a number of samples determined by VkPipelineMultisampleStateCreateInfo::rasterizationSamples. Each sample in a set is assigned a unique sample index i in the range [0, rasterizationSamples).

vkCmdSetRasterizationSamplesEXT: Specify the rasterization samples dynamically for a command buffer

Each sample in a fragment is also assigned a unique coverage index j in the range [0, n × rasterizationSamples), where n is the number of sets in the fragment. If the fragment contains a single set of samples, the coverage index is always equal to the sample index. If a shading rate image is used and a fragment covers multiple pixels, the coverage index is determined as defined by VkPipelineViewportCoarseSampleOrderStateCreateInfoNV or vkCmdSetCoarseSampleOrderNV.

If the fragment shading rate is set, the coverage index j is determined as a function of the pixel index p, the sample index i, and the number of rasterization samples r as:

  • j = i + r × ((fw × fh) - 1 - p)

where the pixel index p is determined as a function of the pixel’s framebuffer location (x,y) and the fragment size (fw,fh):

  • px = x % fw
  • py = y % fh
  • p = px + (py × fw)
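
The pixel index and coverage index computations above can be written out directly; a small illustrative sketch (the helper names are not part of the API):

    #include <stdint.h>

    // Pixel index p = px + py * fw for the pixel at framebuffer location (x, y)
    // within a fragment of size fw x fh.
    static uint32_t pixelIndex(uint32_t x, uint32_t y, uint32_t fw, uint32_t fh)
    {
        uint32_t px = x % fw;
        uint32_t py = y % fh;
        return px + py * fw;
    }

    // Coverage index j = i + r * ((fw * fh) - 1 - p) for sample index i with
    // r rasterization samples, following the formula above.
    static uint32_t coverageIndex(uint32_t i, uint32_t r,
                                  uint32_t fw, uint32_t fh, uint32_t p)
    {
        return i + r * ((fw * fh) - 1 - p);
    }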

The tables below illustrate the pixel index for multi-pixel fragments:

Table 35. Pixel Indices - 1 Wide

1x1:  0

1x2:  0
      1

1x4:  0
      1
      2
      3

Table 36. Pixel Indices - 2 Wide

2x1:  0 1

2x2:  0 1
      2 3

2x4:  0 1
      2 3
      4 5
      6 7

Table 37. Pixel Indices - 4 Wide

4x1:  0 1 2 3

4x2:  0 1 2 3
      4 5 6 7

4x4:  0  1  2  3
      4  5  6  7
      8  9 10 11
     12 13 14 15

The coverage mask includes B bits packed into W words, defined as:

  • B = n × rasterizationSamples
  • W = ⌈B/32⌉

Bit b in coverage mask word w is 1 if the sample with coverage index j = 32×w + b is covered, and 0 otherwise.
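
A sketch of testing a coverage index against a packed mask (for example, the words of a VkSampleMask array):

    #include <stdint.h>

    // Returns nonzero if coverage index j is covered, following j = 32*w + b.
    static int isSampleCovered(const uint32_t *coverageWords, uint32_t j)
    {
        uint32_t w = j / 32;    // word index
        uint32_t b = j % 32;    // bit index within that word
        return (coverageWords[w] >> b) & 1u;
    }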

If the standardSampleLocations member of VkPhysicalDeviceLimits is VK_TRUE, then the sample counts VK_SAMPLE_COUNT_1_BIT, VK_SAMPLE_COUNT_2_BIT, VK_SAMPLE_COUNT_4_BIT, VK_SAMPLE_COUNT_8_BIT, and VK_SAMPLE_COUNT_16_BIT have sample locations as listed in the following table, with the ith entry in the table corresponding to sample index i. VK_SAMPLE_COUNT_32_BIT and VK_SAMPLE_COUNT_64_BIT do not have standard sample locations. Locations are defined relative to an origin in the upper left corner of the fragment.

Table 38. Standard Sample Locations

VK_SAMPLE_COUNT_1_BIT:
  (0.5,0.5)

VK_SAMPLE_COUNT_2_BIT:
  (0.75,0.75) (0.25,0.25)

VK_SAMPLE_COUNT_4_BIT:
  (0.375, 0.125) (0.875, 0.375) (0.125, 0.625) (0.625, 0.875)

VK_SAMPLE_COUNT_8_BIT:
  (0.5625, 0.3125) (0.4375, 0.6875) (0.8125, 0.5625) (0.3125, 0.1875)
  (0.1875, 0.8125) (0.0625, 0.4375) (0.6875, 0.9375) (0.9375, 0.0625)

VK_SAMPLE_COUNT_16_BIT:
  (0.5625, 0.5625) (0.4375, 0.3125) (0.3125, 0.625) (0.75, 0.4375)
  (0.1875, 0.375) (0.625, 0.8125) (0.8125, 0.6875) (0.6875, 0.1875)
  (0.375, 0.875) (0.5, 0.0625) (0.25, 0.125) (0.125, 0.75)
  (0.0, 0.5) (0.9375, 0.25) (0.875, 0.9375) (0.0625, 0.0)

Color images created with multiple samples per pixel use a compression technique where there are two arrays of data associated with each pixel. The first array contains one element per sample where each element stores an index to the second array defining the fragment mask of the pixel. The second array contains one element per color fragment and each element stores a unique color value in the format of the image. With this compression technique it is not always necessary to actually use unique storage locations for each color sample: when multiple samples share the same color value the fragment mask may have two samples referring to the same color fragment. The number of color fragments is determined by the samples member of the VkImageCreateInfo structure used to create the image. The VK_AMD_shader_fragment_mask device extension provides shader instructions enabling the application to get direct access to the fragment mask and the individual color fragment values.

Figure: fragment mask

Custom Sample Locations

VkPipelineSampleLocationsStateCreateInfoEXT: Structure specifying sample locations for a pipeline
VkSampleLocationsInfoEXT: Structure specifying a set of sample locations
VkSampleLocationEXT: Structure specifying the coordinates of a sample location
vkCmdSetSampleLocationsEnableEXT: Specify the sample locations enable state dynamically for a command buffer
vkCmdSetSampleLocationsEXT: Set sample locations dynamically for a command buffer
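
A sketch of programming custom sample locations at recording time, assuming VK_EXT_sample_locations is enabled and the pipeline uses VK_DYNAMIC_STATE_SAMPLE_LOCATIONS_EXT; commandBuffer stands in for a handle from the surrounding code:

    // Two custom positions for a 2-sample pipeline, one pixel per grid cell.
    VkSampleLocationEXT locations[2] = {
        { 0.25f, 0.25f },
        { 0.75f, 0.75f },
    };

    VkSampleLocationsInfoEXT sampleLocationsInfo = {
        .sType = VK_STRUCTURE_TYPE_SAMPLE_LOCATIONS_INFO_EXT,
        .sampleLocationsPerPixel = VK_SAMPLE_COUNT_2_BIT,
        .sampleLocationGridSize = { .width = 1, .height = 1 },
        .sampleLocationsCount = 2,
        .pSampleLocations = locations,
    };

    vkCmdSetSampleLocationsEXT(commandBuffer, &sampleLocationsInfo);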

Fragment Shading Rates

The features advertised by VkPhysicalDeviceFragmentShadingRateFeaturesKHR allow an application to control the shading rate of a given fragment shader invocation.

The fragment shading rate strongly interacts with Multisampling, and the set of available rates for an implementation may be restricted by sample rate.

vkGetPhysicalDeviceFragmentShadingRatesKHR: Get available shading rates for a physical device
VkPhysicalDeviceFragmentShadingRateKHR: Structure returning information about sample count specific additional multisampling capabilities

Fragment shading rates can be set at three points, with the three rates combined to determine the final shading rate.

Pipeline Fragment Shading Rate

The pipeline fragment shading rate can be set on a per-draw basis by either setting the rate in a graphics pipeline, or dynamically via vkCmdSetFragmentShadingRateKHR.

VkPipelineFragmentShadingRateStateCreateInfoKHR: Structure specifying parameters controlling the fragment shading rate
vkCmdSetFragmentShadingRateKHR: Set pipeline fragment shading rate and combiner operation dynamically for a command buffer
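
For example, with VK_DYNAMIC_STATE_FRAGMENT_SHADING_RATE_KHR enabled on the pipeline, a 2x2 pipeline rate can be set at recording time; a sketch, where commandBuffer stands in for a handle from the surrounding code:

    // One fragment shader invocation per 2x2 block of pixels from the pipeline
    // rate; KEEP means the primitive and attachment rates are ignored.
    VkExtent2D fragmentSize = { 2, 2 };
    VkFragmentShadingRateCombinerOpKHR combinerOps[2] = {
        VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,  // combine(pipeline, primitive)
        VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,  // combine(result, attachment)
    };
    vkCmdSetFragmentShadingRateKHR(commandBuffer, &fragmentSize, combinerOps);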

Primitive Fragment Shading Rate

The primitive fragment shading rate can be set via the PrimitiveShadingRateKHR built-in in the last active pre-rasterization shader stage. If the last pre-rasterization shader stage is using the MeshEXT Execution Model, the rate associated with a given primitive is sourced from the value written to the per-primitive PrimitiveShadingRateKHR. Otherwise the rate associated with a given primitive is sourced from the value written to PrimitiveShadingRateKHR by that primitive’s provoking vertex.

Attachment Fragment Shading Rate

The attachment shading rate can be set by including VkFragmentShadingRateAttachmentInfoKHR in a subpass to define a fragment shading rate attachment. Each pixel in the framebuffer is assigned an attachment fragment shading rate by the corresponding texel in the fragment shading rate attachment, according to:

  • x' = floor(x / regionx)
  • y' = floor(y / regiony)

where x' and y' are the coordinates of a texel in the fragment shading rate attachment, x and y are the coordinates of the pixel in the framebuffer, and regionx and regiony are the size of the region each texel corresponds to, as defined by the shadingRateAttachmentTexelSize member of VkFragmentShadingRateAttachmentInfoKHR.

If multiview is enabled and the shading rate attachment has multiple layers, the shading rate attachment texel is selected using layer = ViewIndex. If multiview is disabled, and both the shading rate attachment and the framebuffer have multiple layers, the shading rate attachment texel is selected using layer = Layer. Otherwise, layer = 0.

The texel is read from the fragment shading rate attachment image as a texture input operation without a sampler, using integer coordinates i = x', j = y', k = 0, l = layer, and s = 0. The fragment size is encoded into the first component of the result of that operation as follows:

  • sizew = 2^((texel/4)&3)
  • sizeh = 2^(texel&3)

where texel is the value in the first component of the returned value, and sizew and sizeh are the width and height of the fragment size, decoded from the texel.

If no fragment shading rate attachment is specified, this size is calculated as sizew = sizeh = 1. Applications must not specify a width or height greater than 4 by this method.

The Fragment Shading Rate enumeration in SPIR-V adheres to the above encoding.
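
Putting the texel lookup and decode together, a purely illustrative CPU-side sketch of deriving the attachment fragment size for a pixel (the texels pointer and rowPitch are assumptions about how the attachment data might be laid out, not API members):

    #include <vulkan/vulkan.h>

    // Decode the fragment size for framebuffer pixel (x, y), given the region size
    // from VkFragmentShadingRateAttachmentInfoKHR::shadingRateAttachmentTexelSize
    // and a linear copy of the single-component attachment data.
    static void attachmentFragmentSize(uint32_t x, uint32_t y, VkExtent2D region,
                                       const uint8_t *texels, uint32_t rowPitch,
                                       uint32_t *sizeW, uint32_t *sizeH)
    {
        uint32_t tx = x / region.width;     // x' = floor(x / region_x)
        uint32_t ty = y / region.height;    // y' = floor(y / region_y)
        uint8_t  texel = texels[ty * rowPitch + tx];

        *sizeW = 1u << ((texel / 4) & 3);   // size_w = 2^((texel/4) & 3)
        *sizeH = 1u << (texel & 3);         // size_h = 2^(texel & 3)
    }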

Combining the Fragment Shading Rates

The final rate (Cxy') used for fragment shading must be one of the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR for the sample count and render pass transform used by rasterization.

If any of the following conditions are met, Cxy' is set to {1,1} by the implementation:

Otherwise, each of the specified shading rates are combined and then used to derive the value of Cxy'. As there are three ways to specify shading rates, two combiner operations are specified - between the pipeline and primitive shading rates, and between the result of that and the attachment shading rate.

VkFragmentShadingRateCombinerOpKHR: Control how fragment shading rates are combined

This is used to generate a combined fragment area using the equation:

  • Cxy = combine(Axy,Bxy)

where Cxy is the combined fragment area result, and Axy and Bxy are the fragment areas of the fragment shading rates being combined.

Two combine operations are performed, first with Axy equal to the pipeline fragment shading rate and Bxy equal to the primitive fragment shading rate, with the combine() operation selected by combinerOps[0]. A second combination is then performed, with Axy equal to the result of the first combination and Bxy equal to the attachment fragment shading rate, with the combine() operation selected by combinerOps[1]. The result of the second combination is used as the final fragment shading rate, reported via the ShadingRateKHR built-in.
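
A sketch of the two combine steps. The combine() helper here applies the VkFragmentShadingRateCombinerOpKHR semantics as commonly understood (KEEP returns the first input, REPLACE the second, MIN/MAX/MUL operate on the fragment width and height component-wise); clamping of the inputs and of the result, described below, is omitted:

    #include <vulkan/vulkan.h>

    typedef struct { uint32_t w, h; } FragmentSize;   // illustrative, not an API type

    static FragmentSize combine(VkFragmentShadingRateCombinerOpKHR op,
                                FragmentSize a, FragmentSize b)
    {
        FragmentSize c = a;
        switch (op) {
        case VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR:    c = a; break;
        case VK_FRAGMENT_SHADING_RATE_COMBINER_OP_REPLACE_KHR: c = b; break;
        case VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MIN_KHR:
            c.w = a.w < b.w ? a.w : b.w;  c.h = a.h < b.h ? a.h : b.h;  break;
        case VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MAX_KHR:
            c.w = a.w > b.w ? a.w : b.w;  c.h = a.h > b.h ? a.h : b.h;  break;
        case VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MUL_KHR:
            c.w = a.w * b.w;              c.h = a.h * b.h;              break;
        default: break;
        }
        return c;
    }

    // Final rate = combine(ops[1], combine(ops[0], pipeline, primitive), attachment)
    static FragmentSize finalShadingRate(const VkFragmentShadingRateCombinerOpKHR ops[2],
                                         FragmentSize pipeline, FragmentSize primitive,
                                         FragmentSize attachment)
    {
        FragmentSize intermediate = combine(ops[0], pipeline, primitive);
        return combine(ops[1], intermediate, attachment);
    }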

Implementations should clamp the inputs to the combiner operations Axy and Bxy, and must do so if VkPhysicalDeviceMaintenance6PropertiesKHR::fragmentShadingRateClampCombinerInputs is VK_TRUE. All implementations must clamp the result of the second combiner operation.

A fragment shading rate Rxy representing any of Axy, Bxy or Cxy is clamped as follows. If Rxy is one of the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR for the sample count and render pass transform used by rasterization, the clamped shading rate Rxy' is Rxy. Otherwise, the clamped shading rate is selected from the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR for the sample count and render pass transform used by rasterization. From this list of supported rates, the following steps are applied in order, to select a single value:

  1. Keep only rates where Rx' ≤ Rx and Ry' ≤ Ry.
    • Implementations may also keep rates where Rx' ≤ Ry and Ry' ≤ Rx.
  2. Keep only rates with the highest area (Rx' × Ry').
  3. Keep only rates with the lowest aspect ratio (Rx' + Ry').
  4. In cases where a wide (e.g. 4x1) and tall (e.g. 1x4) rate remain, the implementation may choose either rate. However, it must choose this rate consistently for the same shading rates, render pass transform, and combiner operations for the lifetime of the VkDevice.
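
These selection steps can be expressed as a filter over the rates returned by vkGetPhysicalDeviceFragmentShadingRatesKHR. The sketch below ignores the optional transposed-rate relaxation in step 1 and resolves step 4 ties by first match:

    #include <vulkan/vulkan.h>

    // 'supported' holds the fragmentSize members of the rates returned by
    // vkGetPhysicalDeviceFragmentShadingRatesKHR for the relevant sample count.
    static VkExtent2D clampShadingRate(VkExtent2D requested,
                                       const VkExtent2D *supported, uint32_t count)
    {
        VkExtent2D best = { 1, 1 };            // {1,1} is always available
        uint32_t bestArea = 1, bestAspect = 2;
        for (uint32_t i = 0; i < count; ++i) {
            VkExtent2D r = supported[i];
            if (r.width > requested.width || r.height > requested.height)
                continue;                                    // step 1
            uint32_t area   = r.width * r.height;
            uint32_t aspect = r.width + r.height;            // "aspect ratio" as defined above
            if (area > bestArea ||                           // step 2: largest area
                (area == bestArea && aspect < bestAspect)) { // step 3: lowest aspect ratio
                best = r; bestArea = area; bestAspect = aspect;
            }
        }
        return best;
    }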

Extended Fragment Shading Rates

The features advertised by VkPhysicalDeviceFragmentShadingRateEnumsFeaturesNV provide support for additional fragment shading rates beyond those specifying one fragment shader invocation covering all pixels in a fragment whose size is indicated by the fragment shading rate.

VkFragmentShadingRateNV: Enumeration with fragment shading rates

When using fragment shading rate enums, the pipeline fragment shading rate can be set on a per-draw basis by either setting the rate in a graphics pipeline, or dynamically via vkCmdSetFragmentShadingRateEnumNV.

VkPipelineFragmentShadingRateEnumStateCreateInfoNV: Structure specifying parameters controlling the fragment shading rate using rate enums
VkFragmentShadingRateTypeNV: Enumeration with fragment shading rate types
vkCmdSetFragmentShadingRateEnumNV: Set pipeline fragment shading rate dynamically for a command buffer using enums

When the supersampleFragmentShadingRates or noInvocationFragmentShadingRates features are enabled, the behavior of the shading rate combiner operations is extended to support the shading rates enabled by those features. Primitive and attachment shading rate values are interpreted as VkFragmentShadingRateNV values and the behavior of the combiners is modified as follows:

  • For VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MIN_KHR, VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MAX_KHR, and VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MUL_KHR, if either Axy or Bxy is VK_FRAGMENT_SHADING_RATE_NO_INVOCATIONS_NV, combine(Axy,Bxy) produces a shading rate of VK_FRAGMENT_SHADING_RATE_NO_INVOCATIONS_NV, regardless of the other input shading rate.
  • For VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MIN_KHR, combine(Axy,Bxy) produces a shading rate whose fragment size is the smaller of the fragment sizes of Axy and Bxy and whose invocation count is the larger of the invocation counts of Axy and Bxy.
  • For VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MAX_KHR, combine(Axy,Bxy) produces a shading rate whose fragment size is the larger of the fragment sizes of Axy and Bxy and whose invocation count is the smaller of the invocation counts of Axy and Bxy.
  • For VK_FRAGMENT_SHADING_RATE_COMBINER_OP_MUL_KHR, combine(Axy,Bxy) produces a shading rate whose fragment size and invocation count is the product of the fragment sizes and invocation counts, respectively, of Axy and Bxy. If the resulting shading rate has both multiple pixels and multiple invocations per fragment, an implementation may adjust the shading rate by reducing both the pixel and invocation counts.

If the final shading rate from the combiners is VK_FRAGMENT_SHADING_RATE_NO_INVOCATIONS_NV, no fragments will be generated for any portion of a primitive using that shading rate.

If the final shading rate from the combiners specifies multiple fragment shader invocations per fragment, the fragment will be processed with multiple unique samples as in sample shading, where the total number of invocations is taken from the shading rate and then clamped to rasterizationSamples and maxFragmentShadingRateInvocationCount.

Shading Rate Image

The shadingRateImage feature allows pipelines to use a shading rate image to control the fragment area and the minimum number of fragment shader invocations launched for each fragment. When the shading rate image is enabled, the rasterizer determines a base shading rate for each region of the framebuffer covered by a primitive by fetching a value from the shading rate image and translating it to a shading rate using a per-viewport shading rate palette. This base shading rate is then adjusted to derive a final shading rate. The final shading rate specifies the fragment area and fragment shader invocation count to use for fragments generated in the region.

VkPipelineViewportShadingRateImageStateCreateInfoNV: Structure specifying parameters controlling shading rate image usage
vkCmdBindShadingRateImageNV: Bind a shading rate image on a command buffer

When the shading rate image is enabled in the current pipeline, rasterizing a primitive covering the pixel with coordinates (x,y) will fetch a shading rate index value from the shading rate image bound by vkCmdBindShadingRateImageNV. If the shading rate image view has a type of VK_IMAGE_VIEW_TYPE_2D, the lookup will use texel coordinates (u,v) where u = floor(x / tw), v = floor(y / th), and tw and th are the width and height of the implementation-dependent shading rate texel size. If the shading rate image view has a type of VK_IMAGE_VIEW_TYPE_2D_ARRAY, the lookup will use texel coordinates (u,v) to extract a texel from layer l, where l is the layer of the framebuffer being rendered to. If l is greater than or equal to the number of layers in the image view, layer zero will be used.

If the bound shading rate image view is not VK_NULL_HANDLE and contains a texel with coordinates (u,v) in layer l (if applicable), the single unsigned integer component for that texel will be used as the shading rate index. If the (u,v) coordinate is outside the extents of the subresource used by the shading rate image view, or if the image view is VK_NULL_HANDLE, the shading rate index is zero. If the shading rate image view has multiple mipmap levels, the base level identified by VkImageSubresourceRange::baseMipLevel will be used.

A shading rate index is mapped to a base shading rate using a lookup table called the shading rate image palette. There is a separate palette for each viewport. The number of entries in each palette is given by the implementation-dependent shading rate image palette size.

vkCmdSetShadingRateImageEnableNV: Specify the shading rate image enable state dynamically for a command buffer
vkCmdSetViewportShadingRatePaletteNV: Set shading rate image palettes dynamically for a command buffer
VkShadingRatePaletteNV: Structure specifying a single shading rate palette

To determine the base shading rate, a shading rate index i is mapped to array element i in the array pShadingRatePaletteEntries for the palette corresponding to the viewport used for the fragment. If i is greater than or equal to the palette size shadingRatePaletteEntryCount, the base shading rate is undefined.

VkShadingRatePaletteEntryNV: Shading rate image palette entry types

When the shading rate image is disabled, a shading rate of VK_SHADING_RATE_PALETTE_ENTRY_1_INVOCATION_PER_PIXEL_NV will be used as the base shading rate.

Once a base shading rate has been established, it is adjusted to produce a final shading rate. First, if the base shading rate uses multiple pixels for each fragment, the implementation may reduce the fragment area to ensure that the total number of coverage samples for all pixels in a fragment does not exceed an implementation-dependent maximum.

If sample shading is active in the current pipeline and would result in processing n (n > 1) unique samples per fragment when the shading rate image is disabled, the shading rate is adjusted in an implementation-dependent manner to increase the number of fragment shader invocations spawned by the primitive. If the shading rate indicates fs pixels per fragment and fs is greater than n, the fragment area is adjusted so each fragment has approximately fs / n pixels. Otherwise, if the shading rate indicates ipf invocations per fragment, the fragment area will be adjusted to a single pixel with approximately (ipf × n) / fs invocations per fragment.

If sample shading occurs due to the use of a fragment shader input variable decorated with SampleId or SamplePosition, the shading rate is ignored. Each fragment will have a single pixel and will spawn up to rasterizationSamples fragment shader invocations, as when using sample shading without a shading rate image.

Finally, if the shading rate specifies multiple fragment shader invocations per fragment, the total number of invocations in the shading rate is clamped to be no larger than rasterizationSamples.

When the final shading rate for a primitive covering pixel (x,y) has a fragment area of fw × fh, the fragment for that pixel will cover all pixels with coordinates (x',y') that satisfy the equations:

\begin{aligned} \left\lfloor \frac{x}{fw} \right\rfloor &= \left\lfloor \frac{x'}{fw} \right\rfloor \\ \left\lfloor \frac{y}{fh} \right\rfloor &= \left\lfloor \frac{y'}{fh} \right\rfloor \end{aligned}

This combined fragment is considered to have multiple coverage samples; the total number of samples in this fragment is given by samples = fw × fh × rs, where rs indicates the value of VkPipelineMultisampleStateCreateInfo::rasterizationSamples specified at pipeline creation time. The set of coverage samples in the fragment is the union of the per-pixel coverage samples in each of the fragment’s pixels. The location and order of coverage samples within each pixel in the combined fragment are assigned as described in Multisampling and Custom Sample Locations. Each coverage sample in the set of pixels belonging to the combined fragment is assigned a unique coverage index in the range [0, samples-1]. If the shadingRateCoarseSampleOrder feature is supported, the order of coverage samples can be specified for each combination of fragment area and coverage sample count. If this feature is not supported, the sample order is implementation-dependent.

VkPipelineViewportCoarseSampleOrderStateCreateInfoNV: Structure specifying parameters controlling sample order in coarse fragments
VkCoarseSampleOrderTypeNV: Shading rate image sample ordering types

When using a coarse sample order of VK_COARSE_SAMPLE_ORDER_TYPE_PIXEL_MAJOR_NV for a fragment with an upper-left corner of (fx,fy), a size of fw × fh, and fsc samples per pixel, coverage index cs of the fragment will be assigned to sample index fs of pixel (px,py) as follows:

\begin{aligned} px &= fx + \left( \left\lfloor \frac{cs}{fsc} \right\rfloor \bmod fw \right) \\ py &= fy + \left\lfloor \frac{cs}{fsc \times fw} \right\rfloor \\ fs &= cs \bmod fsc \end{aligned}

When using a coarse sample order of VK_COARSE_SAMPLE_ORDER_TYPE_SAMPLE_MAJOR_NV, coverage index cs will be assigned as follows:

\begin{aligned} px &= fx + (cs \bmod fw) \\ py &= fy + \left( \left\lfloor \frac{cs}{fw} \right\rfloor \bmod fh \right) \\ fs &= \left\lfloor \frac{cs}{fw \times fh} \right\rfloor \end{aligned}
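
A sketch of both mappings as plain integer arithmetic, following the formulas above (helper names are illustrative):

    #include <stdint.h>

    // PIXEL_MAJOR: map coverage index cs to pixel (px, py) and sample index fs for a
    // fragment with upper-left corner (fx, fy), width fw, and fsc samples per pixel.
    static void pixelMajor(uint32_t cs, uint32_t fx, uint32_t fy,
                           uint32_t fw, uint32_t fsc,
                           uint32_t *px, uint32_t *py, uint32_t *fs)
    {
        *px = fx + (cs / fsc) % fw;
        *py = fy + cs / (fsc * fw);
        *fs = cs % fsc;
    }

    // SAMPLE_MAJOR: same inputs plus the fragment height fh, different traversal order.
    static void sampleMajor(uint32_t cs, uint32_t fx, uint32_t fy,
                            uint32_t fw, uint32_t fh,
                            uint32_t *px, uint32_t *py, uint32_t *fs)
    {
        *px = fx + cs % fw;
        *py = fy + (cs / fw) % fh;
        *fs = cs / (fw * fh);
    }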

VkCoarseSampleOrderCustomNV: Structure specifying parameters controlling shading rate image usage
VkCoarseSampleLocationNV: Structure specifying parameters controlling shading rate image usage
vkCmdSetCoarseSampleOrderNV: Set order of coverage samples for coarse fragments dynamically for a command buffer

If the final shading rate for a primitive covering pixel (x,y) results in n invocations per pixel (n > 1), n separate fragment shader invocations will be generated for the fragment. Each coverage sample in the fragment will be assigned to one of the n fragment shader invocations in an implementation-dependent manner. The outputs from the fragment output interface of each shader invocation will be broadcast to all of the framebuffer samples associated with the invocation. If none of the coverage samples associated with a fragment shader invocation is covered by a primitive, the implementation may discard the fragment shader invocation for those samples.

If the final shading rate for a primitive covering pixel (x,y) results in a fragment containing multiple pixels, a single set of fragment shader invocations will be generated for all pixels in the combined fragment. Outputs from the fragment output interface will be broadcast to all covered framebuffer samples belonging to the fragment. If the fragment shader executes code discarding the fragment, none of the samples of the fragment will be updated.

Sample Shading

Sample shading can be used to specify a minimum number of unique samples to process for each fragment. Sample shading is enabled if VkPipelineMultisampleStateCreateInfo::sampleShadingEnable is VK_TRUE. If sample shading is enabled, an implementation must invoke the fragment shader at least max(⌈ VkPipelineMultisampleStateCreateInfo::minSampleShading × VkPipelineMultisampleStateCreateInfo::rasterizationSamples ⌉, 1) times per fragment. For example, with minSampleShading = 0.25 and rasterizationSamples = VK_SAMPLE_COUNT_8_BIT, at least max(⌈0.25 × 8⌉, 1) = 2 unique samples are shaded per fragment.

If a fragment shader entry point statically uses an input variable decorated with a BuiltIn of SampleId or SamplePosition, sample shading is enabled and a value of 1.0 is used instead of minSampleShading. If a fragment shader entry point statically uses an input variable decorated with Sample, sample shading may be enabled; if it is, a value of 1.0 is used instead of minSampleShading. If the VK_AMD_mixed_attachment_samples extension is enabled and the subpass uses color attachments, the samples value used to create each color attachment is used instead of rasterizationSamples.

If a shader decorates an input variable with Sample and that value meaningfully impacts the output of the shader, sample shading will be enabled to ensure that the input is in fact interpolated per-sample. This is inherent to the specification rather than spelled out explicitly: if an application simply declares such a variable, it is implementation-defined whether sample shading is enabled or not. It is possible to see the effects of this by using atomics in the shader or by using a pipeline statistics query to count fragment shader invocations, even if the shader itself does not use any per-sample variables.

If there are fewer fragment invocations than covered samples, implementations may include those samples in fragment shader invocations in any manner as long as covered samples are all shaded at least once, and each invocation that is not a helper invocation covers at least one sample.

Barycentric Interpolation

When the fragmentShaderBarycentric feature is enabled, the PerVertexKHR interpolation decoration can be used with fragment shader inputs to indicate that the decorated inputs do not have associated data in the fragment. Such inputs can only be accessed in a fragment shader using an array index whose value (0, 1, or 2) identifies one of the vertices of the primitive that produced the fragment. Reads of per-vertex values for missing vertices, such as the third vertex of a line primitive, will return values from the valid vertex with the highest index. This means that the per-vertex values of indices 1 and 2 for point primitives will be equal to those of index 0, and the per-vertex values of index 2 for line primitives will be equal to those of index 1.

When tessellation, geometry shading, and mesh shading are not active, fragment shader inputs decorated with PerVertexKHR will take values from one of the vertices of the primitive that produced the fragment, identified by the extra index provided in SPIR-V code accessing the input. If the n vertices passed to a draw call are numbered 0 through n-1, and the point, line, and triangle primitives produced by the draw call are numbered with consecutive integers beginning with zero, the following table indicates the original vertex numbers used when the provoking vertex mode is VK_PROVOKING_VERTEX_MODE_FIRST_VERTEX_EXT for index values of 0, 1, and 2. If an input decorated with PerVertexKHR is accessed with any other vertex index value, or is accessed while rasterizing a polygon when the VkPipelineRasterizationStateCreateInfo::polygonMode property of the currently active pipeline is not VK_POLYGON_MODE_FILL, an undefined value is returned.

Primitive Topology | Vertex 0 | Vertex 1 | Vertex 2
VK_PRIMITIVE_TOPOLOGY_POINT_LIST | i | i | i
VK_PRIMITIVE_TOPOLOGY_LINE_LIST | 2i | 2i+1 | 2i+1
VK_PRIMITIVE_TOPOLOGY_LINE_STRIP | i | i+1 | i+1
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST | 3i | 3i+1 | 3i+2
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP (even) | i | i+1 | i+2
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP (odd) | i | i+2 | i+1
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN | i+1 | i+2 | 0
VK_PRIMITIVE_TOPOLOGY_LINE_LIST_WITH_ADJACENCY | 4i+1 | 4i+2 | 4i+2
VK_PRIMITIVE_TOPOLOGY_LINE_STRIP_WITH_ADJACENCY | i+1 | i+2 | i+2
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_LIST_WITH_ADJACENCY | 6i | 6i+2 | 6i+4
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY (even) | 2i | 2i+2 | 2i+4
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY (odd) | 2i | 2i+4 | 2i+2

When the provoking vertex mode is VK_PROVOKING_VERTEX_MODE_LAST_VERTEX_EXT, the original vertex numbers used are the same as above except as indicated in the table below.

Primitive Topology | Vertex 0 | Vertex 1 | Vertex 2
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP (odd, and triStripVertexOrderIndependentOfProvokingVertex of VkPhysicalDeviceFragmentShaderBarycentricPropertiesKHR is VK_FALSE) | i+1 | i | i+2
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_FAN | 0 | i+1 | i+2
VK_PRIMITIVE_TOPOLOGY_TRIANGLE_STRIP_WITH_ADJACENCY (odd) | 2i+2 | 2i | 2i+4

When geometry or mesh shading is active, primitives processed by fragment shaders are assembled from the vertices emitted by the geometry or mesh shader. In this case, the vertices used for fragment shader inputs decorated with PerVertexKHR are derived by treating the primitives produced by the shader as though they were specified by a draw call and consulting the table above.

When using tessellation without geometry shading, the tessellator produces primitives in an implementation-dependent manner. While there is no defined vertex ordering for inputs decorated with PerVertexKHR, the vertex ordering used in this case will be consistent with the ordering used to derive the values of inputs decorated with BaryCoordKHR or BaryCoordNoPerspKHR.

Fragment shader inputs decorated with BaryCoordKHR or BaryCoordNoPerspKHR hold three-component vectors with barycentric weights that indicate the location of the fragment relative to the screen-space locations of vertices of its primitive. For point primitives, such variables are always assigned the value (1,0,0). For line primitives, the built-ins are obtained by interpolating an attribute whose values for the vertices numbered 0 and 1 are (1,0,0) and (0,1,0), respectively. For polygon primitives, the built-ins are obtained by interpolating an attribute whose values for the vertices numbered 0, 1, and 2 are (1,0,0), (0,1,0), and (0,0,1), respectively. For BaryCoordKHR, the values are obtained using perspective interpolation. For BaryCoordNoPerspKHR, the values are obtained using linear interpolation. The values of BaryCoordKHR and BaryCoordNoPerspKHR are undefined while rasterizing a polygon when the VkPipelineRasterizationStateCreateInfo::polygonMode property of the currently active pipeline is not VK_POLYGON_MODE_FILL.

Points

A point is drawn by generating a set of fragments in the shape of a square centered around the vertex of the point. Each vertex has an associated point size controlling the width/height of that square. The point size is taken from the (potentially clipped) shader built-in PointSize written by:

  • the geometry shader, if active;
  • the tessellation evaluation shader, if active and no geometry shader is active;
  • the vertex shader, otherwise

and clamped to the implementation-dependent point size range [pointSizeRange[0],pointSizeRange[1]]. The value written to PointSize must be greater than zero. If maintenance5 is enabled, and a value is not written to PointSize, the point size takes a default value of 1.0.

Not all point sizes need be supported, but the size 1.0 must be supported. The range of supported sizes and the size of evenly-spaced gradations within that range are implementation-dependent. The range and gradations are obtained from the pointSizeRange and pointSizeGranularity members of VkPhysicalDeviceLimits. If, for instance, the size range is from 0.1 to 2.0 and the gradation size is 0.1, then the sizes 0.1, 0.2, …​, 1.9, 2.0 are supported. Additional point sizes may also be supported. There is no requirement that these sizes be equally spaced. If an unsupported size is requested, the nearest supported size is used instead.

Further, if the render pass has a fragment density map attachment, point size may be rounded by the implementation to a multiple of the fragment’s width or height.

Basic Point Rasterization

Point rasterization produces a fragment for each fragment area group of framebuffer pixels with one or more sample points that intersect a region centered at the point’s (xf,yf). This region is a square with side equal to the current point size. Coverage bits that correspond to sample points that intersect the region are 1, other coverage bits are 0. All fragments produced in rasterizing a point are assigned the same associated data, which are those of the vertex corresponding to the point. However, the fragment shader built-in PointCoord contains point sprite texture coordinates. The s and t point sprite texture coordinates vary from zero to one across the point horizontally left-to-right and vertically top-to-bottom, respectively. The following formulas are used to evaluate s and t:

\begin{aligned} s &= \frac{1}{2} + \frac{x_p - x_f}{\text{size}} \\ t &= \frac{1}{2} + \frac{y_p - y_f}{\text{size}} \end{aligned}

where size is the point’s size; (xp,yp) is the location at which the point sprite coordinates are evaluated - this may be the framebuffer coordinates of the fragment center, or the location of a sample; and (xf,yf) is the exact, unrounded framebuffer coordinate of the vertex for the point.
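
A sketch of that evaluation (an illustrative helper, not an API function):

    // Point sprite coordinates for a point of the given size centered at the
    // unrounded vertex position (xf, yf), evaluated at (xp, yp).
    static void pointSpriteCoords(float xp, float yp, float xf, float yf,
                                  float size, float *s, float *t)
    {
        *s = 0.5f + (xp - xf) / size;   // 0 at the left edge, 1 at the right edge
        *t = 0.5f + (yp - yf) / size;   // 0 at the top edge, 1 at the bottom edge
    }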

Line Segments

VkPipelineRasterizationLineStateCreateInfoKHR: Structure specifying parameters of a newly created pipeline line rasterization state
VkLineRasterizationModeKHR: Line rasterization modes
vkCmdSetLineRasterizationModeEXT: Specify the line rasterization mode dynamically for a command buffer
vkCmdSetLineStippleEnableEXT: Specify the line stipple enable dynamically for a command buffer
vkCmdSetLineWidth: Set line width dynamically for a command buffer

Not all line widths need be supported for line segment rasterization, but width 1.0 antialiased segments must be provided. The range and gradations are obtained from the lineWidthRange and lineWidthGranularity members of VkPhysicalDeviceLimits. If, for instance, the size range is from 0.1 to 2.0 and the gradation size is 0.1, then the sizes 0.1, 0.2, …​, 1.9, 2.0 are supported. Additional line widths may also be supported. There is no requirement that these widths be equally spaced. If an unsupported width is requested, the nearest supported width is used instead.

Further, if the render pass has a fragment density map attachment, line width may be rounded by the implementation to a multiple of the fragment’s width or height.

Basic Line Segment Rasterization

If the lineRasterizationMode member of VkPipelineRasterizationLineStateCreateInfoKHR is VK_LINE_RASTERIZATION_MODE_RECTANGULAR_KHR, rasterized line segments produce fragments which intersect a rectangle centered on the line segment. Two of the edges are parallel to the specified line segment; each is at a distance of one-half the current width from that segment in directions perpendicular to the direction of the line. The other two edges pass through the line endpoints and are perpendicular to the direction of the specified line segment. Coverage bits that correspond to sample points that intersect the rectangle are 1, other coverage bits are 0.

Next we specify how the data associated with each rasterized fragment are obtained. Let pr = (xd, yd) be the framebuffer coordinates at which associated data are evaluated. This may be the center of a fragment or the location of a sample within the fragment. When rasterizationSamples is VK_SAMPLE_COUNT_1_BIT, the fragment center must be used. Let pa = (xa, ya) and pb = (xb,yb) be initial and final endpoints of the line segment, respectively. Set

t = \frac{(\mathbf{p}_r - \mathbf{p}_a) \cdot (\mathbf{p}_b - \mathbf{p}_a)}{\left\| \mathbf{p}_b - \mathbf{p}_a \right\|^2}

(Note that t = 0 at pa and t = 1 at pb. Also note that this calculation projects the vector from pa to pr onto the line, and thus computes the normalized distance of the fragment along the line.)

If strictLines is VK_TRUE, line segments are rasterized using perspective or linear interpolation.

Perspective interpolation for a line segment interpolates two values in a manner that is correct when taking the perspective of the viewport into consideration, by way of the line segment’s clip coordinates. An interpolated value f can be determined by

f = \frac{(1-t)\, f_a / w_a + t\, f_b / w_b}{(1-t) / w_a + t / w_b}

where fa and fb are the data associated with the starting and ending endpoints of the segment, respectively; wa and wb are the clip w coordinates of the starting and ending endpoints of the segment, respectively.

Linear interpolation for a line segment directly interpolates two values, and an interpolated value f can be determined by

  • f = (1 - t) fa + t fb

where fa and fb are the data associated with the starting and ending endpoints of the segment, respectively.
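
A sketch combining the projection of pr onto the segment with the two interpolation modes (an illustrative helper, written in terms of the quantities defined above):

    // Interpolate a datum with endpoint values (fa, fb) and clip w (wa, wb) at
    // framebuffer location (xd, yd) along the segment from (xa, ya) to (xb, yb).
    static float interpolateLine(float xd, float yd,
                                 float xa, float ya, float xb, float yb,
                                 float fa, float fb, float wa, float wb,
                                 int perspective)
    {
        // t is the normalized distance of (xd, yd) along the line: 0 at pa, 1 at pb.
        float dx = xb - xa, dy = yb - ya;
        float t = ((xd - xa) * dx + (yd - ya) * dy) / (dx * dx + dy * dy);

        if (perspective)
            return ((1.0f - t) * fa / wa + t * fb / wb) /
                   ((1.0f - t) / wa + t / wb);
        return (1.0f - t) * fa + t * fb;    // linear (used for depth z)
    }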

The clip coordinate w for a sample is determined using perspective interpolation. The depth value z for a sample is determined using linear interpolation. Interpolation of fragment shader input values are determined by Interpolation decorations.

The above description documents the preferred method of line rasterization, and must be used when lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_RECTANGULAR_KHR.

By default, when strictLines is VK_FALSE, or the relaxedLineRasterization feature is enabled, and when the lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_DEFAULT_KHR, the edges of the lines are generated as a parallelogram surrounding the original line. The major axis is chosen by noting the axis in which there is the greatest distance between the line start and end points. If the difference is equal in both directions then the X axis is chosen as the major axis. Edges 2 and 3 are aligned to the minor axis and are centered on the endpoints of the line as shown in the figure below, and each is lineWidth long. Edges 0 and 1 are parallel to the line and connect the endpoints of edges 2 and 3. Coverage bits that correspond to sample points that intersect the parallelogram are 1, other coverage bits are 0.

Samples that fall exactly on the edge of the parallelogram follow the polygon rasterization rules.

Interpolation occurs as if the parallelogram was decomposed into two triangles where each pair of vertices at each end of the line has identical attributes.

Figure: Non-strict line rasterization

When strictLines is VK_FALSE or when the relaxedLineRasterization feature is enabled, and lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_DEFAULT_EXT, implementations may deviate from the non-strict line algorithm described above in the following ways:

If VkPhysicalDeviceMaintenance5PropertiesKHR::nonStrictSinglePixelWideLinesUseParallelogram is VK_TRUE, the lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_DEFAULT_EXT, and strictLines is VK_FALSE, non-strict lines of width 1.0 are rasterized as parallelograms, otherwise they are rasterized using Bresenham’s algorithm.

If VkPhysicalDeviceMaintenance5PropertiesKHR::nonStrictWideLinesUseParallelogram is VK_TRUE, the lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_DEFAULT_EXT, and strictLines is VK_FALSE, non-strict lines of width greater than 1.0 are rasterized as parallelograms, otherwise they are rasterized using Bresenham’s algorithm.

Bresenham Line Segment Rasterization

If lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_BRESENHAM_KHR, then the following rules replace the line rasterization rules defined in Basic Line Segment Rasterization.

Non-strict lines may also follow these rasterization rules for non-antialiased lines.

If the relaxedLineRasterization feature is enabled, and lineRasterizationMode is VK_LINE_RASTERIZATION_MODE_DEFAULT_EXT, implementations must follow these rasterization rules for non-antialiased lines of width 1.0.

Line segment rasterization begins by characterizing the segment as either x-major or y-major. x-major line segments have slope in the closed interval [-1,1]; all other line segments are y-major (slope is determined by the segment’s endpoints). We specify rasterization only for x-major segments except in cases where the modifications for y-major segments are not self-evident.

Ideally, Vulkan uses a diamond-exit rule to determine those fragments that are produced by rasterizing a line segment. For each fragment f with center at framebuffer coordinates xf and yf, define a diamond-shaped region that is the intersection of four half planes:

R_f = \left\{ (x,y) \,\middle|\, \left| x - x_f \right| + \left| y - y_f \right| < \frac{1}{2} \right\}

Essentially, a line segment starting at pa and ending at pb produces those fragments f for which the segment intersects Rf, except if pb is contained in Rf.

Figure: Visualization of the diamond-exit rule

To avoid difficulties when an endpoint lies on a boundary of Rf we (in principle) perturb the supplied endpoints by a tiny amount. Let pa and pb have framebuffer coordinates (xa, ya) and (xb, yb), respectively. Obtain the perturbed endpoints pa' given by (xa, ya) - (ε, ε2) and pb' given by (xb, yb) - (ε, ε2). Rasterizing the line segment starting at pa and ending at pb produces those fragments f for which the segment starting at pa' and ending at pb' intersects Rf, except if pb' is contained in Rf. ε is chosen to be so small that rasterizing the line segment produces the same fragments when δ is substituted for ε for any 0 < δ ≤ ε.

When pa and pb lie on fragment centers, this characterization of fragments reduces to Bresenham’s algorithm with one modification: lines produced in this description are half-open, meaning that the final fragment (corresponding to pb) is not drawn. This means that when rasterizing a series of connected line segments, shared endpoints will be produced only once rather than twice (as would occur with Bresenham’s algorithm).

Implementations may use other line segment rasterization algorithms, subject to the following rules:

  • The coordinates of a fragment produced by the algorithm must not deviate by more than one unit in either x or y framebuffer coordinates from a corresponding fragment produced by the diamond-exit rule.
  • The total number of fragments produced by the algorithm must not differ from that produced by the diamond-exit rule by more than one.
  • For an x-major line, two fragments that lie in the same framebuffer-coordinate column must not be produced (for a y-major line, two fragments that lie in the same framebuffer-coordinate row must not be produced).
  • If two line segments share a common endpoint, and both segments are either x-major (both left-to-right or both right-to-left) or y-major (both bottom-to-top or both top-to-bottom), then rasterizing both segments must not produce duplicate fragments. Fragments also must not be omitted so as to interrupt continuity of the connected segments.

The actual width w of Bresenham lines is determined by rounding the line width to the nearest integer, clamping it to the implementation-dependent lineWidthRange (with both values rounded to the nearest integer), then clamping it to be no less than 1.
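
For instance, the width computation can be sketched as follows (lineWidthRange here is the limit from VkPhysicalDeviceLimits):

    #include <stdint.h>

    static uint32_t bresenhamLineWidth(float lineWidth, const float lineWidthRange[2])
    {
        // Round the requested width and the range bounds to the nearest integer.
        int32_t w  = (int32_t)(lineWidth + 0.5f);
        int32_t lo = (int32_t)(lineWidthRange[0] + 0.5f);
        int32_t hi = (int32_t)(lineWidthRange[1] + 0.5f);

        if (w < lo) w = lo;                 // clamp to the supported range
        if (w > hi) w = hi;
        return (uint32_t)(w < 1 ? 1 : w);   // never less than 1
    }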

Bresenham line segments of width other than one are rasterized by offsetting them in the minor direction (for an x-major line, the minor direction is y, and for a y-major line, the minor direction is x) and producing a row or column of fragments in the minor direction. If the line segment has endpoints given by (x0, y0) and (x1, y1) in framebuffer coordinates, the segment with endpoints (x0, y0 - (w-1)/2) and (x1, y1 - (w-1)/2) is rasterized, but instead of a single fragment, a column of fragments of height w (a row of fragments of length w for a y-major segment) is produced at each x (y for y-major) location. The lowest fragment of this column is the fragment that would be produced by rasterizing the segment of width 1 with the modified coordinates.

The preferred method of attribute interpolation for a wide line is to generate the same attribute values for all fragments in the row or column described above, as if the adjusted line was used for interpolation and those values replicated to the other fragments, except for FragCoord which is interpolated as usual. Implementations may instead interpolate each fragment according to the formula in Basic Line Segment Rasterization, using the original line segment endpoints.

When Bresenham lines are being rasterized, sample locations may all be treated as being at the pixel center (this may affect attribute and depth interpolation).

The sample locations described above are not used for determining coverage, they are only used for things like attribute interpolation. The rasterization rules that determine coverage are defined in terms of whether the line intersects pixels, as opposed to the point sampling rules used for other primitive types. So these rules are independent of the sample locations. One consequence of this is that Bresenham lines cover the same pixels regardless of the number of rasterization samples, and cover all samples in those pixels (unless masked out or killed).

Line Stipple

If the stippledLineEnable member of VkPipelineRasterizationLineStateCreateInfoKHR is VK_TRUE, then lines are rasterized with a line stipple determined by lineStippleFactor and lineStipplePattern. lineStipplePattern is an unsigned 16-bit integer that determines which fragments are to be drawn or discarded when the line is rasterized. lineStippleFactor is a count that is used to modify the effective line stipple by causing each bit in lineStipplePattern to be used lineStippleFactor times.

Line stippling discards certain fragments that are produced by rasterization. The masking is achieved using three parameters: the 16-bit line stipple pattern p, the line stipple factor r, and an integer stipple counter s. Let

b = \left\lfloor \frac{s}{r} \right\rfloor \bmod 16

Then a fragment is produced if the b'th bit of p is 1, and discarded otherwise. The bits of p are numbered with 0 being the least significant and 15 being the most significant.
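
A sketch of the stipple test for a single fragment, given the running counter s:

    #include <stdint.h>

    // Returns nonzero if the fragment with stipple counter s survives the 16-bit
    // pattern p repeated with factor r.
    static int stippleKeepsFragment(uint32_t s, uint32_t r, uint16_t p)
    {
        uint32_t b = (s / r) % 16;   // b = floor(s / r) mod 16
        return (p >> b) & 1u;        // bit 0 is the least significant bit
    }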

The initial value of s is zero. For VK_LINE_RASTERIZATION_MODE_BRESENHAM_KHR lines, s is incremented after production of each fragment of a line segment (fragments are produced in order, beginning at the starting point and working towards the ending point). For VK_LINE_RASTERIZATION_MODE_RECTANGULAR_KHR and VK_LINE_RASTERIZATION_MODE_RECTANGULAR_SMOOTH_KHR lines, the rectangular region is subdivided into adjacent unit-length rectangles, and s is incremented once for each rectangle. Rectangles with a value of s such that the b'th bit of p is zero are discarded. If the last rectangle in a line segment is shorter than unit-length, then the remainder may carry over to the next line segment in the line strip using the same value of s (this is the preferred behavior, for the stipple pattern to appear more consistent through the strip).

s is reset to 0 at the start of each strip (for line strips), and before every line segment in a group of independent segments.

If the line segment has been clipped, then the value of s at the beginning of the line segment is implementation-dependent.

vkCmdSetLineStippleKHR: Set line stipple dynamically for a command buffer

Smooth Lines

If the lineRasterizationMode member of VkPipelineRasterizationLineStateCreateInfoKHR is VK_LINE_RASTERIZATION_MODE_RECTANGULAR_SMOOTH_KHR, then lines are considered to be rectangles using the same geometry as for VK_LINE_RASTERIZATION_MODE_RECTANGULAR_KHR lines. The rules for determining which pixels are covered are implementation-dependent, and may include nearby pixels where no sample locations are covered or where the rectangle does not intersect the pixel at all. For each pixel that is considered covered, the fragment computes a coverage value that approximates the area of the intersection of the rectangle with the pixel square, and this coverage value is multiplied into the color location 0’s alpha value after fragment shading, as described in Multisample Coverage.

The details of the rasterization rules and area calculation are left intentionally vague, to allow implementations to generate coverage and values that are aesthetically pleasing.

Polygons

A polygon results from the decomposition of a triangle strip, triangle fan or a series of independent triangles. Like points and line segments, polygon rasterization is controlled by several variables in the VkPipelineRasterizationStateCreateInfo structure.

Basic Polygon Rasterization

VkFrontFaceInterpret polygon front-facing orientation
vkCmdSetFrontFaceSet front face orientation dynamically for a command buffer
VkCullModeFlagBitsBitmask controlling triangle culling
VkCullModeFlagsBitmask of VkCullModeFlagBits
vkCmdSetCullModeSet cull mode dynamically for a command buffer
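For example (a sketch, assuming these dynamic states are enabled in the bound pipeline and that commandBuffer is in the recording state; vkCmdSetFrontFace and vkCmdSetCullMode are core in Vulkan 1.3, with EXT-suffixed equivalents provided by VK_EXT_extended_dynamic_state):

    /* Sketch: choose winding order and culling dynamically. */
    vkCmdSetFrontFace(commandBuffer, VK_FRONT_FACE_COUNTER_CLOCKWISE);
    vkCmdSetCullMode(commandBuffer, VK_CULL_MODE_BACK_BIT);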

The rule for determining which fragments are produced by polygon rasterization is called point sampling. The two-dimensional projection obtained by taking the x and y framebuffer coordinates of the polygon’s vertices is formed. Fragments are produced for any fragment area (group of pixels) for which any sample points lie inside this polygon. Coverage bits that correspond to sample points that satisfy the point sampling criteria are 1; other coverage bits are 0.

Special treatment is given to a sample whose sample location lies on a polygon edge. In such a case, if two polygons lie on either side of a common edge (with identical endpoints) on which a sample point lies, then exactly one of the polygons must result in a covered sample for that fragment during rasterization.

As for the data associated with each fragment produced by rasterizing a polygon, we begin by specifying how these values are produced for fragments in a triangle.

Barycentric coordinates are a set of three numbers, a, b, and c, each in the range [0,1], with a + b + c = 1. These coordinates uniquely specify any point p within the triangle or on the triangle’s boundary as

  • p = a pa + b pb + c pc

where pa, pb, and pc are the vertices of the triangle. a, b, and c are determined by:

a = \frac{\mathrm{A}(p\,p_b\,p_c)}{\mathrm{A}(p_a\,p_b\,p_c)}, \quad b = \frac{\mathrm{A}(p\,p_a\,p_c)}{\mathrm{A}(p_a\,p_b\,p_c)}, \quad c = \frac{\mathrm{A}(p\,p_a\,p_b)}{\mathrm{A}(p_a\,p_b\,p_c)}

where A(lmn) denotes the area in framebuffer coordinates of the triangle with vertices l, m, and n.
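The area-ratio definition can be evaluated directly from triangle areas in framebuffer coordinates. The following sketch uses signed areas with a consistent vertex order, which agrees with the unsigned ratios above for points inside the triangle or on its boundary; the type and function names are illustrative, not part of any API.

    typedef struct { float x, y; } Vec2;    /* framebuffer x,y coordinates */

    /* Signed area of triangle (l, m, n); the sign cancels in the ratios below. */
    static float area(Vec2 l, Vec2 m, Vec2 n)
    {
        return 0.5f * ((m.x - l.x) * (n.y - l.y) - (n.x - l.x) * (m.y - l.y));
    }

    /* Barycentric coordinates (a, b, c) of p in triangle (pa, pb, pc).
     * Assumes the triangle is non-degenerate (area != 0). */
    static void barycentric(Vec2 p, Vec2 pa, Vec2 pb, Vec2 pc,
                            float *a, float *b, float *c)
    {
        float denom = area(pa, pb, pc);
        *a = area(p, pb, pc) / denom;
        *b = area(pa, p, pc) / denom;
        *c = area(pa, pb, p) / denom;
    }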

Denote an associated datum at pa, pb, or pc as fa, fb, or fc, respectively.

Perspective interpolation for a triangle interpolates three values in a manner that is correct when taking the perspective of the viewport into consideration, by way of the triangle’s clip coordinates. An interpolated value f can be determined by

f = \frac{a\,f_a/w_a + b\,f_b/w_b + c\,f_c/w_c}{a/w_a + b/w_b + c/w_c}

where wa, wb, and wc are the clip w coordinates of pa, pb, and pc, respectively. a, b, and c are the barycentric coordinates of the location at which the data are produced.

Linear interpolation for a triangle directly interpolates three values, and an interpolated value f can be determined by

  • f = a fa + b fb + c fc

where fa, fb, and fc are the data associated with pa, pb, and pc, respectively.

The clip coordinate w for a sample is determined using perspective interpolation. The depth value z for a sample is determined using linear interpolation. Interpolation of fragment shader input values is determined by Interpolation decorations.
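As an illustrative sketch of the two interpolation rules above (the function names are hypothetical; a, b, and c are the barycentric coordinates at the sample, fa/fb/fc the per-vertex data, and wa/wb/wc the clip-space w coordinates):

    /* Perspective interpolation: used for fragment shader inputs that are not
     * decorated Flat or NoPerspective, and for the clip coordinate w. */
    static float interpolate_perspective(float a, float b, float c,
                                         float fa, float fb, float fc,
                                         float wa, float wb, float wc)
    {
        float num = a * fa / wa + b * fb / wb + c * fc / wc;
        float den = a / wa + b / wb + c / wc;
        return num / den;
    }

    /* Linear interpolation: used for the depth value z and for
     * NoPerspective-decorated inputs. */
    static float interpolate_linear(float a, float b, float c,
                                    float fa, float fb, float fc)
    {
        return a * fa + b * fb + c * fc;
    }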

For a polygon with more than three edges, such as are produced by clipping a triangle, a convex combination of the values of the datum at the polygon’s vertices must be used to obtain the value assigned to each fragment produced by the rasterization algorithm. That is, it must be the case that at every fragment

f = \sum_{i=1}^{n} a_i f_i

where n is the number of vertices in the polygon and fi is the value of f at vertex i. For each i, 0 ≤ ai ≤ 1 and \sum_{i=1}^{n} a_i = 1. The values of ai may differ from fragment to fragment, but at vertex i, ai = 1 and aj = 0 for j ≠ i.

One algorithm that achieves the required behavior is to triangulate a polygon (without adding any vertices) and then treat each triangle individually as already discussed. A scan-line rasterizer that linearly interpolates data along each edge and then linearly interpolates data across each horizontal span from edge to edge also satisfies the restrictions (in this case the numerator and denominator of perspective interpolation are iterated independently, and a division is performed for each fragment).

Polygon Mode

VkPolygonModeControl polygon rasterization mode
vkCmdSetPolygonModeEXTSpecify polygon mode dynamically for a command buffer

Depth Bias

The depth values of all fragments generated by the rasterization of a polygon can be biased (offset) by a single depth bias value o that is computed for that polygon.

Depth Bias Enable

The depth bias computation is enabled by the depthBiasEnable value set with vkCmdSetDepthBiasEnable or vkCmdSetDepthBiasEnableEXT, or by the corresponding VkPipelineRasterizationStateCreateInfo::depthBiasEnable value used to create the currently active pipeline. If the depth bias enable is VK_FALSE, no bias is applied and the fragment’s depth values are unchanged.

vkCmdSetDepthBiasEnableControl whether to bias fragment depth values dynamically for a command buffer

Depth Bias Computation

The depth bias depends on three parameters:

  • depthBiasSlopeFactor scales the maximum depth slope m of the polygon
  • depthBiasConstantFactor scales the parameter r of the depth attachment
  • the scaled terms are summed to produce a value which is then clamped to a minimum or maximum value specified by depthBiasClamp

depthBiasSlopeFactor, depthBiasConstantFactor, and depthBiasClamp can each be positive, negative, or zero. These parameters are set as described for vkCmdSetDepthBias and vkCmdSetDepthBias2EXT below.
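For example (a sketch, assuming the bound pipeline enables VK_DYNAMIC_STATE_DEPTH_BIAS_ENABLE and VK_DYNAMIC_STATE_DEPTH_BIAS, and that commandBuffer is in the recording state; the factor values are illustrative only):

    /* Sketch: enable and set depth bias dynamically, e.g. for coplanar
     * decal or shadow-casting geometry. */
    vkCmdSetDepthBiasEnable(commandBuffer, VK_TRUE);
    vkCmdSetDepthBias(commandBuffer,
                      1.25f,    /* depthBiasConstantFactor (scales r) */
                      0.0f,     /* depthBiasClamp (0.0 disables clamping) */
                      1.75f);   /* depthBiasSlopeFactor (scales m) */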

The maximum depth slope m of a triangle is

m = \sqrt{\left(\frac{\partial z_f}{\partial x_f}\right)^2 + \left(\frac{\partial z_f}{\partial y_f}\right)^2}

where (xf, yf, zf) is a point on the triangle. m may be approximated as

m = \max\left(\left|\frac{\partial z_f}{\partial x_f}\right|, \left|\frac{\partial z_f}{\partial y_f}\right|\right)

In a pipeline with a depth bias representation of VK_DEPTH_BIAS_REPRESENTATION_FLOAT_EXT, r for the given primitive is defined as

  • r = 1

Otherwise r is the minimum resolvable difference that depends on the depth attachment representation. If VkDepthBiasRepresentationInfoEXT::depthBiasExact is VK_FALSE it is the smallest difference in framebuffer coordinate z values that is guaranteed to remain distinct throughout polygon rasterization and in the depth attachment. All pairs of fragments generated by the rasterization of two polygons with otherwise identical vertices, but zf values that differ by r, will have distinct depth values.

For fixed-point depth attachment representations, or in a pipeline with a depth bias representation of VK_DEPTH_BIAS_REPRESENTATION_LEAST_REPRESENTABLE_VALUE_FORCE_UNORM_EXT, r is constant throughout the range of the entire depth attachment. If VkDepthBiasRepresentationInfoEXT::depthBiasExact is VK_TRUE, then its value must be

  • r = 2^-n

Otherwise its value is implementation-dependent but must be at most

  • r = 2 × 2^-n

where n is the number of bits used for the depth aspect when using a fixed-point attachment, or the number of mantissa bits plus one when using a floating-point attachment.

Otherwise, for a floating-point depth attachment, there is no single minimum resolvable difference. In this case, the minimum resolvable difference for a given polygon is dependent on the maximum exponent, e, in the range of z values spanned by the primitive. If n is the number of bits in the floating-point mantissa, the minimum resolvable difference, r, for the given primitive is defined as

  • r = 2^(e-n)

If a triangle is rasterized using the VK_POLYGON_MODE_FILL_RECTANGLE_NV polygon mode, then this minimum resolvable difference may not be resolvable for samples outside of the triangle, where the depth is extrapolated.

If no depth attachment is present, r is undefined.

The bias value o for a polygon is

\begin{aligned}
o &= \mathrm{dbclamp}\left(m \times \mathtt{depthBiasSlopeFactor} + r \times \mathtt{depthBiasConstantFactor}\right) \\
\text{where} \quad \mathrm{dbclamp}(x) &=
\begin{cases}
x & \mathtt{depthBiasClamp} = 0 \text{ or NaN} \\
\min(x, \mathtt{depthBiasClamp}) & \mathtt{depthBiasClamp} > 0 \\
\max(x, \mathtt{depthBiasClamp}) & \mathtt{depthBiasClamp} < 0
\end{cases}
\end{aligned}

m is computed as described above. If the depth attachment uses a fixed-point representation, m is a function of depth values in the range [0,1], and o is applied to depth values in the same range.
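A sketch of the computation above for a fixed-point depth attachment with n bits and an exact minimum resolvable difference (r = 2^-n); this only mirrors what an implementation does internally, and the function name is illustrative:

    #include <math.h>

    static float depth_bias(float m,              /* maximum depth slope */
                            int n,                /* depth bits, e.g. 24 */
                            float slopeFactor,    /* depthBiasSlopeFactor */
                            float constantFactor, /* depthBiasConstantFactor */
                            float clampValue)     /* depthBiasClamp */
    {
        float r = ldexpf(1.0f, -n);               /* r = 2^-n */
        float o = m * slopeFactor + r * constantFactor;
        if (clampValue > 0.0f)      o = fminf(o, clampValue);
        else if (clampValue < 0.0f) o = fmaxf(o, clampValue);
        /* clampValue == 0.0 (or NaN) leaves o unclamped */
        return o;
    }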

Depth bias is applied to triangle topology primitives received by the rasterizer regardless of polygon mode. Depth bias may also be applied to line and point topology primitives received by the rasterizer.

vkCmdSetDepthBiasSet depth bias factors and clamp dynamically for a command buffer
VkDepthBiasRepresentationInfoEXTStructure specifying depth bias parameters
VkDepthBiasRepresentationEXTSpecify the depth bias representation
VkDepthBiasInfoEXTStructure specifying depth bias parameters
vkCmdSetDepthBias2EXTSet depth bias factors and clamp dynamically for a command buffer

Conservative Rasterization

VkPipelineRasterizationConservativeStateCreateInfoEXTStructure specifying conservative raster state
VkPipelineRasterizationConservativeStateCreateFlagsEXTReserved for future use
VkConservativeRasterizationModeEXTSpecify the conservative rasterization mode
vkCmdSetConservativeRasterizationModeEXTSpecify the conservative rasterization mode dynamically for a command buffer
vkCmdSetExtraPrimitiveOverestimationSizeEXTSpecify the conservative rasterization extra primitive overestimation size dynamically for a command buffer

When overestimate conservative rasterization is enabled, rather than evaluating coverage at individual sample locations, a determination is made whether any portion of the pixel (including its edges and corners) is covered by the primitive. If any portion of the pixel is covered, then all bits of the coverage mask for the fragment corresponding to that pixel are enabled. If the render pass has a fragment density map attachment and any bit of the coverage mask for the fragment is enabled, then all bits of the coverage mask for the fragment are enabled.

For the purposes of evaluating which pixels are covered by the primitive, implementations can increase the size of the primitive by up to VkPhysicalDeviceConservativeRasterizationPropertiesEXT::primitiveOverestimationSize pixels at each of the primitive edges. This may increase the number of fragments generated by this primitive and represents an overestimation of the pixel coverage.

This overestimation size can be increased further by setting the extraPrimitiveOverestimationSize value above 0.0 in steps of VkPhysicalDeviceConservativeRasterizationPropertiesEXT::extraPrimitiveOverestimationSizeGranularity up to and including VkPhysicalDeviceConservativeRasterizationPropertiesEXT::maxExtraPrimitiveOverestimationSize. This may further increase the number of fragments generated by this primitive.
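As an illustration, an application might query the conservative rasterization limits and then request additional overestimation when creating a pipeline. This sketch assumes VK_EXT_conservative_rasterization is enabled and that physicalDevice is a valid handle; the chosen value of 0.5 pixels is arbitrary, and error handling is omitted.

    /* Sketch: query the conservative rasterization properties... */
    VkPhysicalDeviceConservativeRasterizationPropertiesEXT conservativeProps = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_CONSERVATIVE_RASTERIZATION_PROPERTIES_EXT,
    };
    VkPhysicalDeviceProperties2 properties2 = {
        .sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_PROPERTIES_2,
        .pNext = &conservativeProps,
    };
    vkGetPhysicalDeviceProperties2(physicalDevice, &properties2);

    /* ...then request 0.5 pixels of extra overestimation, clamped to the limit. */
    float extra = 0.5f;
    if (extra > conservativeProps.maxExtraPrimitiveOverestimationSize)
        extra = conservativeProps.maxExtraPrimitiveOverestimationSize;

    VkPipelineRasterizationConservativeStateCreateInfoEXT conservativeState = {
        .sType = VK_STRUCTURE_TYPE_PIPELINE_RASTERIZATION_CONSERVATIVE_STATE_CREATE_INFO_EXT,
        .conservativeRasterizationMode = VK_CONSERVATIVE_RASTERIZATION_MODE_OVERESTIMATE_EXT,
        .extraPrimitiveOverestimationSize = extra,
    };
    /* Chain conservativeState into VkPipelineRasterizationStateCreateInfo::pNext
     * when creating the graphics pipeline. */

The supported step size for the requested value is reported in extraPrimitiveOverestimationSizeGranularity, as described above.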

The actual precision of the overestimation size used for conservative rasterization may vary between implementations and produce results that only approximate the primitiveOverestimationSize and extraPrimitiveOverestimationSizeGranularity properties. Implementations may especially vary these approximations when the render pass has a fragment density map and the fragment area covers multiple pixels.

For triangles, if VK_CONSERVATIVE_RASTERIZATION_MODE_OVERESTIMATE_EXT is enabled, fragments will be generated if the primitive area covers any portion of any pixel inside the fragment area, including its edges or corners. The tie-breaking rule described in Basic Polygon Rasterization does not apply during conservative rasterization, and coverage is set for all fragments generated from shared edges of polygons. Degenerate triangles that evaluate to zero area after rasterization, even for pixels containing a vertex or edge of the zero-area polygon, will be culled if VkPhysicalDeviceConservativeRasterizationPropertiesEXT::degenerateTrianglesRasterized is VK_FALSE, or will generate fragments if degenerateTrianglesRasterized is VK_TRUE. The fragment input values for these degenerate triangles take their attribute and depth values from the provoking vertex. Degenerate triangles are considered backfacing, and the application can enable backface culling if desired. Triangles that are zero area before rasterization may be culled regardless.

For lines, if VK_CONSERVATIVE_RASTERIZATION_MODE_OVERESTIMATE_EXT is enabled and the implementation sets VkPhysicalDeviceConservativeRasterizationPropertiesEXT::conservativePointAndLineRasterization to VK_TRUE, fragments will be generated if the line covers any portion of any pixel inside the fragment area, including its edges or corners. Degenerate lines that evaluate to zero length after rasterization will be culled if VkPhysicalDeviceConservativeRasterizationPropertiesEXT::degenerateLinesRasterized is VK_FALSE, or will generate fragments if degenerateLinesRasterized is VK_TRUE. The fragment input values for these degenerate lines take their attribute and depth values from the provoking vertex. Lines that are zero length before rasterization may be culled regardless.

For points, if VK_CONSERVATIVE_RASTERIZATION_MODE_OVERESTIMATE_EXT is enabled and the implementation sets VkPhysicalDeviceConservativeRasterizationPropertiesEXT::conservativePointAndLineRasterization to VK_TRUE, fragments will be generated if the point square covers any portion of any pixel inside the fragment area, including its edges or corners.

When underestimate conservative rasterization is enabled, rather than evaluating coverage at individual sample locations, a determination is made whether all of the pixel (including its edges and corners) is covered by the primitive. If the entire pixel is covered, then a fragment is generated with all bits of its coverage mask corresponding to the pixel enabled, otherwise the pixel is not considered covered even if some portion of the pixel is covered. The fragment is discarded if no pixels inside the fragment area are considered covered. If the render pass has a fragment density map attachment and any pixel inside the fragment area is not considered covered, then the fragment is discarded even if some pixels are considered covered.

For triangles, if VK_CONSERVATIVE_RASTERIZATION_MODE_UNDERESTIMATE_EXT is enabled, fragments will only be generated if any pixel inside the fragment area is fully covered by the generating primitive, including its edges and corners.

For lines, if VK_CONSERVATIVE_RASTERIZATION_MODE_UNDERESTIMATE_EXT is enabled, fragments will be generated if any pixel inside the fragment area, including its edges and corners, is entirely covered by the line.

For points, if VK_CONSERVATIVE_RASTERIZATION_MODE_UNDERESTIMATE_EXT is enabled, fragments will only be generated if the point square covers the entirety of any pixel square inside the fragment area, including its edges or corners.

If the render pass has a fragment density map and VK_CONSERVATIVE_RASTERIZATION_MODE_UNDERESTIMATE_EXT is enabled, fragments will only be generated if the entirety of all pixels inside the fragment area are covered by the generating primitive, line, or point.

For both overestimate and underestimate conservative rasterization modes, a fragment that has all of its pixel squares fully covered by the generating primitive must set FullyCoveredEXT to VK_TRUE if the implementation enables the VkPhysicalDeviceConservativeRasterizationPropertiesEXT::fullyCoveredFragmentShaderInputVariable feature.

When the use of a shading rate image or setting the fragment shading rate results in fragments covering multiple pixels, coverage for conservative rasterization is still evaluated on a per-pixel basis and may result in fragments with partial coverage. For fragment shader inputs decorated with FullyCoveredEXT, a fragment is considered fully covered if and only if all pixels in the fragment are fully covered by the generating primitive.