
GAMES101 Assignment 1 and Assignment 2: Analysis and Solutions


Last year I worked through the GAMES101 course and its assignments, but the whole process was fairly rough and leaned heavily on outside help (doge), so I recently set aside a few days to carefully redo assignments 1-7; revisiting them always turns up something new. For the environment configuration I used /roeas/GAMES101-Premake directly. Last time I worked in a virtual machine; this time building with Visual Studio is much more convenient.
I'll get to the final project at some point too.

Assignment 1

Code analysis

Let me briefly walk through the overall drawing process. First the rasterizer is created with the viewport size, and the pixel buffer and depth buffer are initialized:

rst::rasterizer r(700, 700);
rst::rasterizer::rasterizer(int w, int h) : width(w), height(h)
{
    frame_buf.resize(w * h);
    depth_buf.resize(w * h);
}

Next, define the camera position, the positions of the triangle's three vertices in space, and the vertex index order. Note that I set the camera and vertex positions differently from the original framework here; I'll come back to why later:

Eigen::Vector3f eye_pos = {0, 0, 0};

std::vector<Eigen::Vector3f> pos{{2, 0, 12}, {0, 2, 12}, {-2, 0, 12}};

std::vector<Eigen::Vector3i> ind{{0, 1, 2}};

Then create the vertex buffer as well as the index buffer for the corresponding triangle:

auto pos_id = r.load_positions(pos);
auto ind_id = r.load_indices(ind);
rst::pos_buf_id rst::rasterizer::load_positions(const std::vector<Eigen::Vector3f> &positions)
{
    auto id = get_next_id();
    pos_buf.emplace(id, positions);

    return {id};
}

rst::ind_buf_id rst::rasterizer::load_indices(const std::vector<Eigen::Vector3i> &indices)
{
    auto id = get_next_id();
    ind_buf.emplace(id, indices);

    return {id};
}

Then set the model, view, and projection matrices and issue the draw call.
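A minimal sketch of that step (set_model, set_view, set_projection, and draw are the framework rasterizer's methods; angle is assumed to be the current rotation angle, and the projection parameters match my setup below):

r.set_model(get_model_matrix(angle));
r.set_view(get_view_matrix(eye_pos));
r.set_projection(get_projection_matrix(45, 1, 0.1, 50));
r.draw(pos_id, ind_id, rst::Primitive::Triangle);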
The drawing section:

void rst::rasterizer::draw(rst::pos_buf_id pos_buffer, rst::ind_buf_id ind_buffer, rst::Primitive type)
{
    if (type != rst::Primitive::Triangle)
    {
        throw std::runtime_error("Drawing primitives other than triangle is not implemented yet!");
    }
    // Read the vertex and index information of the corresponding triangle
    auto& buf = pos_buf[pos_buffer.pos_id];
    auto& ind = ind_buf[ind_buffer.ind_id];

    float f1 = (100 - 0.1) / 2.0;
    float f2 = (100 + 0.1) / 2.0;

    Eigen::Matrix4f mvp = projection * view * model;
    for (auto& i : ind)
    {
        Triangle t;
        // Transform to clip space with the MVP matrix
        Eigen::Vector4f v[] = {
                mvp * to_vec4(buf[i[0]], 1.0f),
                mvp * to_vec4(buf[i[1]], 1.0f),
                mvp * to_vec4(buf[i[2]], 1.0f)
        };
        // Perspective division: divide by the w component
        for (auto& vec : v) {
            vec /= vec.w();
        }
        // Viewport transform: convert to pixel space
        for (auto & vert : v)
        {
            vert.x() = 0.5*width*(vert.x()+1.0);
            vert.y() = 0.5*height*(vert.y()+1.0);
            vert.z() = vert.z() * f1 + f2;
        }
        // Set the triangle's vertices (the framework redundantly calls setVertex three times)
        for (int i = 0; i < 3; ++i)
        {
            t.setVertex(i, v[i].head<3>());
            t.setVertex(i, v[i].head<3>());
            t.setVertex(i, v[i].head<3>());
        }
        // Set the color of each vertex
        t.setColor(0, 255.0, 0.0, 0.0);
        t.setColor(1, 0.0, 255.0, 0.0);
        t.setColor(2, 0.0, 0.0, 255.0);
        // Draw the triangle in wireframe; the line drawing algorithm used is Bresenham
        rasterize_wireframe(t);
    }
}

Theoretical analysis

[Figure: a rough summary chart of the transformation pipeline]
Let's focus on the derivation of the perspective projection matrix. Here I follow the derivation from the D3D12 dragon book.

Project the point onto the projection plane. By similar triangles we get the following relations (assuming the projection plane is at distance 1 from the camera):

\[x^{'} = \frac{x}{z} \]

\[y^{'} = \frac{y}{z} \]
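For example, the vertex \((2, 0, 12)\) from the setup above projects to \(x^{'} = 2/12 \approx 0.17\) and \(y^{'} = 0\).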

To normalize, we map the projection plane's \(x\in [-W/2,W/2]\) and \(y\in [-H/2,H/2]\) ranges onto \([-1,1]\) via:

\[x^{'} = \frac{2x}{W} \]

\[y^{'} = \frac{2y}{H} \]

If instead we use the field of view and the aspect ratio \(r = W/H\): since the plane is at distance 1, \(H = 2\tan \frac{Fov}{2}\) and \(W = rH\), so the transformation becomes:

\[x^{'} = \frac{x}{r\tan \frac{Fov}{2}} \]

\[y^{'} = \frac{y}{\tan \frac{Fov}{2}} \]

So x and y actually undergo a two-step transformation, and this normalization can be done in the first step. For the perspective division we also need to keep the z coordinate around, so in the first step we use the w component to store the z value. The resulting transformation is:

\[\begin{bmatrix} \frac{1}{r\tan \frac{Fov}{2}} & 0 & 0 & 0 \\ 0 & \frac{1}{\tan \frac{Fov}{2}} & 0 & 0 \\ 0 & 0 & A & B \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{x}{r\tan \frac{Fov}{2}} \\ \frac{y}{\tan \frac{Fov}{2}} \\ Az+B \\ z \end{bmatrix}\]

This is followed by a perspective division in the second step:

\[\begin{bmatrix} \frac{x}{rz\tan \frac{Fov}{2}} \\ \frac{y}{z\tan \frac{Fov}{2}} \\ A+\frac{B}{z} \\ 1 \end{bmatrix}\]

Finally we need to normalize the z depth values, converting them to the \([0,1]\) range. In the matrix above we can choose A and B directly so that points on the near plane \(z=n\) get depth 0 and points on the far plane \(z=f\) get depth 1:

\[A + \frac{B}{n} = 0 , \qquad A + \frac{B}{f} = 1 \]

Solving these gives \(A = \frac{f}{f-n}\) and \(B = \frac{-nf}{f-n}\).

The final perspective matrix:

\[\begin{bmatrix} \frac{1}{r\tan \frac{Fov}{2}} & 0 &0 &0 \\ 0& \frac{1}{\tan \frac{Fov}{2}}&0 &0 \\ 0& 0& \frac{f}{f-n} & \frac{-nf}{f-n} \\ 0& 0& 1& 0 \end{bmatrix}\]
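As a quick sanity check with the values used later (\(n = 0.1\), \(f = 50\)): \(A = 50/49.9 \approx 1.002\) and \(B = -5/49.9 \approx -0.100\), so the depth at \(z = 0.1\) is \(A + B/0.1 = 0\) and at \(z = 50\) it is \(A + B/50 = 1\), as required.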

Practical solution

Note again the changed camera and vertex positions:

Eigen::Vector3f eye_pos = {0, 0, 0};

std::vector<Eigen::Vector3f> pos{{2, 0, 12}, {0, 2, 12}, {-2, 0, 12}};
r.set_projection(get_projection_matrix(45, 1, 0.1, 50));

With this setup you won't run into the problem of the triangle appearing upside down. With the original setup the z-axis points out of the screen while the near plane is given as a positive distance, so the camera implied by the projection matrix looks along the positive z-axis while the triangle sits on the negative z side, and that mismatch causes the problem.

I think changing the scene setup this way is simpler than modifying the projection matrix directly, as many solutions online do.

Eigen::Matrix4f get_model_matrix(float rotation_angle)
{
    Eigen::Matrix4f model = Eigen::Matrix4f::Identity();

    // TODO: Implement this function
    // Create the model matrix for rotating the triangle around the Z axis.
    // Then return it.
    float Cos = cos(rotation_angle / 180 * MY_PI); // degrees to radians
    float Sin = sin(rotation_angle / 180 * MY_PI);
    model << Cos, -Sin, 0, 0,
        Sin, Cos, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1;
    return model;
}

Eigen::Matrix4f get_projection_matrix(float eye_fov, float aspect_ratio,
                                      float zNear, float zFar)
{
    // Students will implement this function

    Eigen::Matrix4f projection = Eigen::Matrix4f::Identity();
    float TanFov = tan((eye_fov / 2) / 180 * MY_PI); // half fov, degrees to radians

    projection << 1 / (aspect_ratio * TanFov), 0, 0, 0,
        0, 1 / TanFov, 0, 0,
        0, 0, zFar / (zFar - zNear), -zFar * zNear / (zFar - zNear),
        0, 0, 1, 0;

    return projection;
}
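This is exactly the matrix derived above, with \(r\) = aspect_ratio, \(n\) = zNear, and \(f\) = zFar.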

The result:
[Figure: the rendered triangle]

Assignment 2

Theoretical analysis

The overall code framework changes little from Assignment 1. The biggest difference is that instead of drawing a wireframe with a line drawing algorithm, we now actually rasterize and fill pixels, so the draw function changes. The drawing process is as follows:
1. Find the bounding box of the triangle.
2. Test whether each pixel in that range is inside the triangle, using cross products. A cross product is a three-dimensional vector; for 2D vectors in the xy-plane it is perpendicular to that plane, so we only need its z component. If the three cross products all have the same sign, the point (the pixel center) is inside the triangle.
3. Compute the barycentric coordinates via area ratios.
4. Interpolate the pixel's depth inside the triangle from the barycentric coordinates. Here we need perspective-correct interpolation (see the formula after this list), and the original framework code gets it wrong: we should use the correct depth values in 3D space rather than the depth values after compression into pixel space. For detailed write-ups see: /dyccyber/p/ and /p/509902950
5. Run the depth test.
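For reference, here is the perspective-correct interpolation the code below performs, assuming \(\alpha, \beta, \gamma\) are the screen-space barycentric coordinates and \(w_0, w_1, w_2\) are the view-space depths stored in the vertices' w components (this is the role v[i].w() plays in rasterize_triangle):

\[Z = \frac{1}{\frac{\alpha}{w_0} + \frac{\beta}{w_1} + \frac{\gamma}{w_2}} \]

\[z_{t} = \left( \alpha \frac{z_0}{w_0} + \beta \frac{z_1}{w_1} + \gamma \frac{z_2}{w_2} \right) \cdot Z \]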

Practical solution

Coverage testing:
Here I compute only the z component of each cross product directly, instead of forming the full 3D cross product.

static bool insideTriangle(float x, float y, const Vector3f* _v)
{   
    // TODO : Implement this function to check if the point (x, y) is inside the triangle represented by _v[0], _v[1], _v[2]
    Vector2f v0P(x - _v[0].x(), y - _v[0].y());
    Vector2f v1P(x - _v[1].x(), y - _v[1].y());
    Vector2f v2P(x - _v[2].x(), y - _v[2].y());
    Vector2f v0v1(_v[1].x() - _v[0].x(), _v[1].y() - _v[0].y());
    Vector2f v1v2(_v[2].x() - _v[1].x(), _v[2].y() - _v[1].y());
    Vector2f v2v0(_v[0].x() - _v[2].x(), _v[0].y() - _v[2].y());
    // z components of the 2D cross products: edge x (P - vertex)
    float Xp0 = v0v1.x() * v0P.y() - v0v1.y() * v0P.x();
    float Xp1 = v1v2.x() * v1P.y() - v1v2.y() * v1P.x();
    float Xp2 = v2v0.x() * v2P.y() - v2v0.y() * v2P.x();
    // inside if all three signs agree
    return (Xp0 < 0 && Xp1 < 0 && Xp2 < 0) || (Xp0 > 0 && Xp1 > 0 && Xp2 > 0);

}
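Note that with the strict inequalities above, a point lying exactly on an edge (cross product equal to zero) is treated as outside; whether to count edge points as covered is a convention you can choose.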

Screen-space rasterization:
Here I use 4x SSAA for antialiasing: I create a frame buffer and a depth buffer at four times the resolution, test coverage and depth for each of the four sample points in turn, and then average the sample colors into the pixel.

void rst::rasterizer::clear(rst::Buffers buff)
{
    if ((buff & rst::Buffers::Color) == rst::Buffers::Color)
    {
        std::fill(frame_buf.begin(), frame_buf.end(), Eigen::Vector3f{0, 0, 0});
        std::fill(frame_sample.begin(), frame_sample.end(), Eigen::Vector3f{ 0, 0, 0 });
    }
    if ((buff & rst::Buffers::Depth) == rst::Buffers::Depth)
    {
        std::fill(depth_buf.begin(), depth_buf.end(), std::numeric_limits<float>::infinity());
    }
}

rst::rasterizer::rasterizer(int w, int h) : width(w), height(h)
{
    frame_buf.resize(w * h);
    depth_buf.resize(w * h * 4);
    frame_sample.resize(w * h * 4);
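    // the four sub-sample offsets inside each pixel (a 2x2 grid)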
    helper[0].x() = 0.25;
    helper[0].y() = 0.25;

    helper[1].x() = 0.75;
    helper[1].y() = 0.25;

    helper[2].x() = 0.25;
    helper[2].y() = 0.75;

    helper[3].x() = 0.75;
    helper[3].y() = 0.75;
 
}
void rst::rasterizer::rasterize_triangle(const Triangle& t) {
    auto v = t.toVector4();
    int XMin = std::min(std::min(v[0].x(), v[1].x()), v[2].x());
    int XMax = std::max(std::max(v[0].x(), v[1].x()), v[2].x());
    int YMin = std::min(std::min(v[0].y(), v[1].y()), v[2].y());
    int YMax = std::max(std::max(v[0].y(), v[1].y()), v[2].y());
    for (int x = XMin; x <= XMax; x++) {
        for (int y = YMin; y <= YMax; y++) {
            int index = get_index(x, y) * 4; // first of this pixel's four samples
            for (int i = 0; i < 4; i++) {
                if (insideTriangle(x + helper[i].x(), y + helper[i].y(), t.v)) {
                    auto [alpha, beta, gamma] = computeBarycentric2D(x + helper[i].x(), y + helper[i].y(), t.v);
                    // perspective-correct depth interpolation
                    float w_reciprocal = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
                    float z_interpolated = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
                    z_interpolated *= w_reciprocal;
                    // per-sample depth test
                    if (z_interpolated < depth_buf[index + i]) {
                        depth_buf[index + i] = z_interpolated;
                        frame_sample[index + i] = t.getColor();
                    }
                }
            }
            // resolve: the pixel color is the average of its four samples
            frame_buf[index / 4] = (frame_sample[index] + frame_sample[index + 1] + frame_sample[index + 2] + frame_sample[index + 3]) / 4;
        }
    }

}
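One design note: frame_sample keeps one color per sample across all triangles, so the averaged pixel color stays correct even when different triangles cover different samples of the same pixel; averaging only the samples of the current triangle instead is a common cause of dark-edge artifacts in naive SSAA implementations.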