
Wgpu Graphics Explained (02): Render Pipeline and Shaders


In the first post of this series, "Wgpu Graphics Explained (01): Window and Basic Rendering", we showed how to set up a desktop environment for Wgpu on top of winit 0.30+, covered some basic Wgpu concepts, modules, and architectural ideas, and implemented a window that displays a colored background using the wgpu library. In this post we introduce the render pipeline and shaders in Wgpu, and use these two building blocks to render a triangle in that window.

⚠️ This chapter covers a lot of ground. Compared with the previous one, it assumes more theoretical knowledge of graphics; without it, parts of the text may be confusing. I have tried to spell out the details, especially the relationship between the shader code and the various pieces of configuration, so that the reader does not get too "dizzy". My abilities are limited, though, so for the graphics fundamentals I recommend first reading my other article, "An Introduction to Computer Graphics (01): Basic Elements and Spatial Transformations", before continuing here.

⚠️ The version of wgpu used in this chapter is 23.0.0, the last major release of 2024 (release/tab/v23.0.0). This version contains breaking changes, so readers are urged to make sure their version matches.

Introduction to basic concepts

First, a brief overview of what a Pipeline is. Practically speaking, a pipeline is analogous to a production line in a factory: it receives raw materials at one end, and each station on the line processes them in sequence until the final product gradually takes shape. The pipeline in computer graphics works much the same way: we feed in the data needed to describe the image to be rendered, and the successive stages of the pipeline transform it into the geometry and colors that end up on the screen. A further benefit of pipelining is that it makes the division of labor explicit: each stage can be configured, and in some cases programmed, independently.

Of course, there are many kinds of pipelines in graphics engineering, such as the render pipeline (RenderPipeline) and the compute pipeline (ComputePipeline). Different kinds of pipelines do different jobs, but their essence is the same: they process graphics data. To render a triangle with Wgpu, we need to build at least one render pipeline.

With the render pipeline introduced, we must introduce another important concept: the Shader. As stated above, a render pipeline is essentially a work pipeline made up of multiple stages. To let us control certain stages programmatically, graphics engineering introduced the concept of the shader. It is important to emphasize that a shader is not merely some coloring feature; it is a programmable processing step that lets us control the output at certain points in the render pipeline. Taken as a whole, the relationship between the render pipeline and shaders can be expressed in a single diagram:

[Figure: conceptual relationship between the render pipeline and shaders]

The diagram above is only a conceptual sketch of the relationship. Real graphics engineering is far more complex, but to build intuition, the relationship between pipelines and shaders can be understood this way for the time being.

Since a shader is essentially a program, we inevitably have to write one. In Wgpu, we write shader programs in WGSL (WebGPU Shading Language). Just as with C/C++, Rust, and other high-level languages, the WGSL we write is only source code; it still has to be compiled into a shader binary. Fortunately there is little for us to do here, because wgpu compiles and invokes the shader code at runtime.

So far we have a general understanding of pipelines and shaders. Theory alone is not enough, though, so let's turn to the code project and write the code that builds the render pipeline and the shader program.

Preparation

The code in this chapter builds on the result of the first article. Please make sure you have fully understood the first chapter and set up its environment before continuing.

First, add a new field named render_pipeline, of type wgpu::RenderPipeline, to the WgpuCtx structure. Next, prepare a standalone function (not tied to the structure) with the following signature:

fn create_pipeline() -> wgpu::RenderPipeline;

Finally, call create_pipeline at the appropriate point in WgpuCtx's new_async method and store the resulting RenderPipeline in WgpuCtx:

[Figures: the new render_pipeline field on WgpuCtx, and the create_pipeline call inside new_async]
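A minimal sketch of these changes is shown below. The struct layout and the surrounding new_async code follow chapter 1's conventions; the exact field set and lifetime are assumptions:

pub struct WgpuCtx<'window> {
    // ...fields from chapter 1 (surface, surface_config, device, queue, ...)
    render_pipeline: wgpu::RenderPipeline, // <--- new field
}

// Inside WgpuCtx::new_async, after the device and surface are set up:
let render_pipeline = create_pipeline();
// ...and include it when constructing WgpuCtx:
// WgpuCtx { /* fields from chapter 1 */, render_pipeline }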

Next, let's write a shader program. Create a WGSL file in the project directory (for example, shader.wgsl) and add the following WGSL code to it:

@vertex
fn vs_main(@builtin(vertex_index) in_vertex_index: u32) -> @builtin(position) vec4<f32> {
    let x = f32(i32(in_vertex_index) - 1);
    let y = f32(i32(in_vertex_index & 1u) * 2 - 1);
    return vec4<f32>(x, y, 0.0, 1.0);
}

@fragment
fn fs_main() -> @location(0) vec4<f32> {
    return vec4<f32>(1.0, 0.0, 0.0, 1.0);
}

[Figure: the WGSL shader file in the project]

We won't rush to explain what this WGSL code means; that comes later. For now, simply understand that we have written the source code of a shader program that will be put to work in the render pipeline.

Next, let's modify the method signature of create_pipeline by adding two input parameters:

fn create_pipeline(
    device: &wgpu::Device, // <--- parameter 1
    swap_chain_format: wgpu::TextureFormat, // <--- parameter 2
) -> wgpu::RenderPipeline {
  //...
}

For the first parameter, wgpu::Device: readers of chapter 1 will recall that this instance is the abstraction of the logical device, obtained by calling request_device on the adapter:

[Figure: obtaining the Device via adapter.request_device, from chapter 1]
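As a reminder, a sketch of how chapter 1 obtained it; the descriptor details are elided with a default, and your chapter 1 code may differ:

// Sketch: request the logical device (and queue) from the adapter.
let (device, queue) = adapter
    .request_device(&wgpu::DeviceDescriptor::default(), None)
    .await
    .expect("Failed to create device");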

The second parameter, wgpu::TextureFormat, comes from the format field of surface_config after the surface has been configured. So we need to make the corresponding changes at the call site:

[Figure: the updated create_pipeline call site]
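The call site might now look like this (a sketch; the variable names device and surface_config are assumed from chapter 1):

let render_pipeline = create_pipeline(&device, surface_config.format);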

With the preparation done, the project now looks roughly like this:

[Figure: project overview after the preparation steps]

At this point, the environment is ready for creating a pipeline. Next, let's focus on the concrete implementation of create_pipeline and actually create, and understand, the render pipeline and shaders.

Creating a Render Pipeline

For the body of create_pipeline, we fill in the following:

[Figure: the body of the create_pipeline method]

With the code comments, we can see that creating a basic render pipeline takes at least two steps:

  1. Use the create_shader_module API provided by wgpu::Device to load the shader module;
  2. Use the create_render_pipeline API provided by wgpu::Device, together with the ShaderModule instance obtained in step 1, to create the render pipeline.

For the first step, the reader can refer directly to the code above. Its meaning is not hard to grasp: the core is to load the shader source, and through the construction process obtain a ShaderModule (shader program module).
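For reference, a sketch of step 1, assuming the shader file written earlier is named shader.wgsl:

// Step 1 (sketch): embed the WGSL source at compile time and
// create the shader module from it at runtime.
let shader = device.create_shader_module(wgpu::ShaderModuleDescriptor {
    label: Some("shader"),
    source: wgpu::ShaderSource::Wgsl(include_str!("shader.wgsl").into()),
});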

Many wgpu structures have a label field. It has no effect at runtime and serves only as a convenient way to identify objects during debugging.

For the second call, create_render_pipeline, the details are shown below:

[Figure: the create_render_pipeline call, with its configuration labeled as five parts]
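For readers following along without the figure, here is a sketch of the whole call against wgpu 23.0.0, with the five parts marked in comments; the label string is my own choice:

// Step 2 (sketch): assemble the render pipeline from the shader module.
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    label: Some("render-pipeline"),
    layout: None,                         // (1) pipeline layout: default for now
    vertex: wgpu::VertexState {           // (2) vertex shader configuration
        module: &shader,
        entry_point: Some("vs_main"),
        buffers: &[],
        compilation_options: Default::default(),
    },
    primitive: wgpu::PrimitiveState {     // (3) primitive configuration
        topology: wgpu::PrimitiveTopology::TriangleList,
        ..Default::default()
    },
    fragment: Some(wgpu::FragmentState {  // (4) fragment shader configuration
        module: &shader,
        entry_point: Some("fs_main"),
        targets: &[Some(swap_chain_format.into())],
        compilation_options: Default::default(),
    }),
    // (5) remaining fields: defaults for now
    depth_stencil: None,
    multisample: wgpu::MultisampleState::default(),
    multiview: None,
    cache: None,
})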

In the code above, I have labeled the configuration as five parts. Parts 1 and 5 are not covered in this chapter; pass the default values shown in the sample code, and we will explain those parameters gradually in subsequent articles. Let's focus on parts 2, 3, and 4.

⚠️ The next sections involve some important concepts in computer graphics, beyond the use of wgpu itself. What a vertex is, what a fragment is, what a primitive is: these are essential pieces of knowledge for learning computer graphics. Since this series focuses on using wgpu from an engineering point of view, graphics fundamentals will not be covered in depth; readers are assumed to already have the relevant background.

Once again, I refer to my own article "An Introduction to Computer Graphics (01): Basic Elements and Spatial Transformations" (available on my blog and on Zhihu).

Vertex Shader

Let's look first at part 2, the vertex configuration:

vertex: wgpu::VertexState {
    module: &shader,
    entry_point: Some("vs_main"),
    buffers: &[],
    compilation_options: Default::default(),
},

The first field, module, indicates which ShaderModule instance the vertex shader program should be taken from. Earlier we wrote the shader code and created a ShaderModule instance via create_shader_module; we simply pass it in here as the value of this field.

The second field, entry_point, names the entry point of the vertex shader program. This entry point is similar to the main function of an ordinary program. Note that we fill in "vs_main" here; remember the shader code we wrote earlier? It contains this piece of code:

@vertex
fn vs_main(@builtin(vertex_index) in_vertex_index: u32) -> @builtin(position) vec4<f32> {
    // ...
}

In that code, the @vertex attribute marks the function that follows as a vertex shader function, and we named that function vs_main. Correspondingly, in the Rust code above, the entry_point field is filled with this same vs_main. So the current situation looks like this:

[Figure: the entry_point field in Rust mapped to the vs_main function in the WGSL code]

Note that the version of wgpu used in this article is 23.0.0, which contains an important breaking change relative to the 0.2x versions: the type of the entry_point field on VertexState (and on FragmentState, described below) changed from &'a str to Option<&'a str>. That is why this article always passes Some(xxx).

Having understood this configuration relationship, we also need to understand what the vertex shader code means. First of all, the function is executed once for every vertex processed. If our scene provides n vertices, the render pipeline calls this vertex shader n times during the vertex-processing stage. As for the parameter of vs_main, @builtin(vertex_index) in_vertex_index: u32 means that each call to vs_main receives a u32 value, WGSL's built-in vertex index (normally 0 to n-1 for n vertices). The process can be visualized with the following pseudocode:

for index in 0..n {
    let result = vs_main(index); // process one vertex
    // take the vertex result and do further processing...
}

Also, each call to this function returns a vec4<f32> annotated with @builtin(position), meaning the function returns built-in position data. If this still feels abstract, let's walk through a more concrete example.

Suppose we now have a triangle as follows:

[Figure: a triangle whose three vertices are indexed 0, 1, and 2]

For the three vertices of this triangle, in counterclockwise order, the indexes are 0, 1, and 2. In the vertex-shading stage of the render pipeline, each vertex triggers one call to vs_main, as discussed above, so the results are:

index 0: x = f32(0 - 1) = -1.0, y = f32((0 & 1) * 2 - 1) = -1.0 → (-1.0, -1.0, 0.0, 1.0)
index 1: x = f32(1 - 1) =  0.0, y = f32((1 & 1) * 2 - 1) =  1.0 → ( 0.0,  1.0, 0.0, 1.0)
index 2: x = f32(2 - 1) =  1.0, y = f32((2 & 1) * 2 - 1) = -1.0 → ( 1.0, -1.0, 0.0, 1.0)

It is worth noting that when computing the y value, the code performs a bitwise AND between the index and 1, so that when index = 2, 2 & 1 is binary 10 & binary 01, whose bitwise AND is binary 00, i.e. 0.

So for each vertex we obtain its position coordinates. Note that the returned position is 4-dimensional: the first two components are the x and y values we computed from the vertex index; the third component is z, which is 0.0 for all vertices, meaning they all lie in the z = 0 plane; the last component is w, usually 1.0. (For the w component, readers should take the time to understand its mathematical significance on their own; we won't cover it here.)

Organizing the results, processing the three vertices in sequence produces three points in the same 2D plane (since z is 0 everywhere), with coordinates (-1.0, -1.0), (0.0, 1.0), and (1.0, -1.0). So what do these coordinates mean in wgpu? Here is the conclusion directly. Recall that wgpu renders into a viewport that corresponds to the physical screen (if you have forgotten, please reread chapter 1). For this viewport, regardless of its absolute width and height, the origin is always at its center; y ranges from 1.0 at the top to -1.0 at the bottom, and x ranges from -1.0 on the left to 1.0 on the right:

[Figure: the viewport's normalized coordinate system, centered at the origin, with x and y each ranging over -1.0 to 1.0]

As a result, the coordinates above let us render a triangle whose vertices essentially fill the viewport:

[Figure: the triangle rendered across the full viewport]

The code at this point cannot render the image above yet; it is shown only to give the reader a more intuitive sense of how the coordinates relate to the final rendering.

Of course, if we modify the code in the vertex shader appropriately and multiply the x and y values by 0.5 respectively, we can see a scaled-down version of the triangle:
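A sketch of the modified vs_main; only the 0.5 factors are new relative to the original shader:

@vertex
fn vs_main(@builtin(vertex_index) in_vertex_index: u32) -> @builtin(position) vec4<f32> {
    let x = f32(i32(in_vertex_index) - 1) * 0.5; // scale x by 0.5
    let y = f32(i32(in_vertex_index & 1u) * 2 - 1) * 0.5; // scale y by 0.5
    return vec4<f32>(x, y, 0.0, 1.0);
}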

[Figure: the scaled-down triangle]

That covers the module and entry_point fields of the vertex configuration VertexState. The remaining fields, buffers and compilation_options, will not be discussed in this chapter; just use the defaults:

vertex: wgpu::VertexState {
    module: &shader,
    entry_point: Some("vs_main"),
    buffers: &[], // <--- default
    compilation_options: Default::default(), // <--- default
},

Primitive Configuration

For primitives, the configuration is as follows:

primitive: wgpu::PrimitiveState {
    topology: wgpu::PrimitiveTopology::TriangleList,
    ..Default::default()
},

This article shows only one core field: topology. The following values are currently supported (see the sketch after this list for an example of changing it):

  • PointList: the vertex data is a series of points; each vertex is a new point. With the 3 vertices we provided above, the result would not be a triangle but three separate points.

  • LineList: the vertex data is a series of lines; each pair of vertices forms a new line. Vertices 0 1 2 3 create the two lines 0-1 and 2-3. Note that with this value the vertices must come in pairs: our 3 vertices above would render only one line, because 0 and 1 form a line and vertex 2 has no partner to form another.

  • LineStrip: the vertex data is a line strip; each pair of adjacent vertices forms a line. Vertices 0 1 2 3 create the three lines 0-1, 1-2, and 2-3. With our example, instead of a filled triangle we would get only lines along its sides (0-1 and 1-2).

  • TriangleList (default): the vertex data is a series of triangles; each group of 3 vertices forms a new triangle. Vertices 0 1 2 3 4 5 create the two triangles 0-1-2 and 3-4-5. This is our configuration, and the default.

  • TriangleStrip: the vertex data is a triangle strip; each set of three adjacent vertices forms a triangle. Vertices 0 1 2 3 4 5 create the four triangles 0-1-2, 2-1-3, 2-3-4, and 4-3-5.
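For example, switching the topology is a one-field change (a sketch; with our three vertices, LineStrip yields the two lines 0-1 and 1-2):

primitive: wgpu::PrimitiveState {
    topology: wgpu::PrimitiveTopology::LineStrip, // <--- instead of TriangleList
    ..Default::default()
},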

With these descriptions, the reader should be able to picture the result of each configuration; we will return to this topic with more examples in later articles.

Fragment Shader

Next let's focus on the fragment shader. To understand it, we first need to know what a fragment is and where it comes from. In the vertex shader section we saw that, from three vertex indices, the vertex shader computes the coordinates of three vertices; the primitive topology configuration then states that these three points form a triangular face (rather than three points or three lines), and the vertex coordinates control the position and size of that face in space. Once the position and size are known, the render pipeline performs a further step: rasterization. Rasterization is the process of finding, for each "point" of the geometry, the corresponding pixels on the screen device.

[Figure: rasterizing a triangle onto the pixel grid]

The concrete implementation of rasterization is beyond the scope of this article; interested readers can consult other materials for an in-depth study.

Having briefly seen the basic form and result of rasterization, let's return to the core of this section: the fragment. A fragment is in fact a sample of one or more pixels produced by rasterizing a shape. Two points are worth noting here:

  1. Although it is called a fragment, it usually refers to a unit of roughly one pixel (or less) in size. In other words, a rasterized geometric shape is broken down into many fragments.
  2. The fragments obtained from rasterization only approximate pixels; they are not pixels themselves. A fragment is the collection of data associated with a prospective pixel, including color, depth, texture coordinates, and so on (think of depth and texture coordinates, for now, simply as extra data).

A fragment is not a pixel, only close to one, so there is usually a further step that processes each fragment so that it is eventually turned into an on-screen pixel (essentially a point with an rgba color). That step is precisely a call into the fragment shader. The process: at some point after vertex processing, the render pipeline has computed m fragments; it then calls the fragment shader once per fragment, passing the fragment's context into the shader's entry function as parameters, and the function returns the on-screen color for that fragment:

[Figure: the pipeline invoking the fragment shader once per fragment]
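Mirroring the vertex-stage pseudocode, the process can be pictured like this:

for each fragment in the m rasterized fragments {
    let color = fs_main(/* fragment context */);
    // write color into the corresponding position of the color target...
}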

With this, the fragment shader code we wrote earlier becomes easy to understand:

@fragment
fn fs_main() -> @location(0) vec4<f32> {
    return vec4<f32>(1.0, 0.0, 0.0, 1.0);
}

In the code above, the @fragment attribute marks fs_main as the entry function of the fragment shader. As for its implementation, it is very simple: it always returns the rgba value (1.0, 0.0, 0.0, 1.0), i.e. red. The corresponding configuration is as follows:

fragment: Some(wgpu::FragmentState {
    module: &shader,
    entry_point: Some("fs_main"),
    targets: &[Some(swap_chain_format.into())],
    compilation_options: Default::default(),
}),

One point deserves attention here. The fragment shader's return type is declared as @location(0) vec4<f32>. The vec4<f32> part the reader will understand: it is an rgba color value. But what does @location(0) mean? The configuration above gives a hint. When configuring the fragment state, we set targets: &[Some(swap_chain_format.into())]. targets is an array, and we passed in a single element, Some(swap_chain_format.into()). The @location(0) on the fragment shader's return value means: "put" the color computed by the fragment shader into color target 0, and that color target is the ColorTargetState converted from swap_chain_format via into():

[Figure: @location(0) in the shader mapped to element 0 of the targets array]
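To make the conversion explicit, swap_chain_format.into() produces a ColorTargetState roughly like the following (a sketch; check the From impl in your wgpu version for the exact defaults):

targets: &[Some(wgpu::ColorTargetState {
    format: swap_chain_format,          // the surface's texture format
    blend: None,                        // no blending configured
    write_mask: wgpu::ColorWrites::ALL, // write all color channels
})],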

So far we have a general idea of the basic usage of the fragment shader. In this example, however, our fragment shader takes no input and always returns a fixed color value. We will say much more about fragment shaders, with more examples, in upcoming articles.

Using the Render Pipeline

So far we have only created a render pipeline while constructing the Wgpu context and stored it in WgpuCtx's render_pipeline field. Where should we actually use it? The answer: in the draw method of WgpuCtx that we wrote earlier:

[Figure: the draw method with set_pipeline and draw added]
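The added code amounts to two lines inside the existing render-pass scope (a sketch; the variable names follow chapter 1's conventions and are assumptions):

// Bind the pipeline to the render pass, then issue the draw call.
render_pass.set_pipeline(&self.render_pipeline);
render_pass.draw(0..3, 0..1); // vertices 0..3, one instance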

Of the added code, the first call, set_pipeline(xxx), is easy to understand and needs no elaboration. The second call, the render pass's (render_pass) draw method, deserves clarification. Its first parameter is defined as vertices: Range<u32>, and we pass 0..3, which tells the render pipeline: I am providing vertices 0, 1, and 2. Looking back at the vertex shader code, its entry function declares the parameter @builtin(vertex_index) in_vertex_index: u32. The @builtin(vertex_index) there expresses: the pipeline passes the vertex indexes 0, 1, and 2 into the vertex shader entry in sequence, so that we can compute the positions of the three geometric vertices of the expected triangle.

For the second parameter, instances: Range<u32>, in this chapter we pass 0..1, i.e. render just one instance. When you need to draw several identical or similar objects, you can use instanced rendering: the instances parameter specifies how many instances to draw, and in the vertex shader @builtin(instance_index) gives access to the current instance index. As an example, suppose we now want to draw two triangles. One way is to provide the vertices of both triangles (e.g. pass in 6 vertices, indexed 0 to 5); alternatively, we can pass in 3 vertex indexes as before but request two instances:

render_pass.draw(0..3, 0..2); // 3 vertices, 2 instances

We then modify the original vertex shader entry parameter to add access to the instance index:

[Figure: the vertex shader modified to use @builtin(instance_index)]
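A sketch of the modified entry function; the scaling and the per-instance x offset are my own choices to keep the two triangles from overlapping, and the figure's exact values may differ:

@vertex
fn vs_main(
    @builtin(vertex_index) in_vertex_index: u32,
    @builtin(instance_index) in_instance_index: u32,
) -> @builtin(position) vec4<f32> {
    let x = f32(i32(in_vertex_index) - 1) * 0.5;
    let y = f32(i32(in_vertex_index & 1u) * 2 - 1) * 0.5;
    // Shift each instance along the x axis: instance 0 to the left, instance 1 to the right.
    let offset_x = f32(in_instance_index) - 0.5;
    return vec4<f32>(x + offset_x, y, 0.0, 1.0);
}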

Running the program again, we see that two triangles are now rendered in the window:

[Figure: two triangles rendered in the window]

Conclusion

This brings the chapter to a close. Building on chapter 1, we introduced the render pipeline and shader code and worked through them in practice; I hope the overall process is now clearer to the reader. So far we have only consumed vertex indexes in the vertex-shading stage and returned a fixed color value in the fragment-shading stage, while real application scenarios are far less simple. In the next article we will introduce new concepts to build triangles more dynamically.

The code repository for this chapter is here:

/w4ngzhen/wgpu_winit_example/tree/main/ch02_render_a_triangle

The relevant code for subsequent articles will also be added to that repository; interested readers are welcome to give it a star. Thank you for your support!