GAMES101 homework 3 analysis, with a detailed look at bump mapping
code analysis
The overall code structure has not changed much. The main additions are a vertex shader (which does nothing here) and several fragment shaders implementing different shading methods: using normals directly as RGB, the Blinn-Phong lighting model, texture mapping, bump mapping, and displacement mapping.
The main change is in the rasterization stage: shading is now computed by whichever fragment shader is active.
I ran into a problem here with code I had written earlier: the bounding box did not fully cover the triangle boundaries, which left gaps between adjacent triangles the first time the cow came out.
Note that the w value used here is no longer the incorrect placeholder 1 but the true view-space depth. I have a feeling the formula used for attribute interpolation is still not perspective-correct, but the overall visual effect is small.
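For reference, this is the perspective-correct depth interpolation the code below follows (my notation, not from the handout): with screen-space barycentric coordinates $(\alpha, \beta, \gamma)$ and the per-vertex view-space depths stored in $w_i$,

$$Z = \frac{1}{\alpha/w_0 + \beta/w_1 + \gamma/w_2}, \qquad z_p = Z\left(\alpha \frac{z_0}{w_0} + \beta \frac{z_1}{w_1} + \gamma \frac{z_2}{w_2}\right)$$

A fully perspective-correct version would apply the same $1/w$ weighting to color, normal, and texture coordinates as well; the interpolate calls below use plain barycentric weights, which is the uncorrected part mentioned above.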
void rst::rasterizer::rasterize_triangle(const Triangle& t, const std::array<Eigen::Vector3f, 3>& view_pos)
{
    // Without toVector4() the w components would simply be assigned 1 instead of the true depth
    auto v = t.toVector4();
    int XMin = std::min(std::min(v[0].x(), v[1].x()), v[2].x());
    int XMax = std::max(std::max(v[0].x(), v[1].x()), v[2].x());
    int YMin = std::min(std::min(v[0].y(), v[1].y()), v[2].y());
    int YMax = std::max(std::max(v[0].y(), v[1].y()), v[2].y());
    for (int x = XMin; x <= XMax; x++) {
        for (int y = YMin; y <= YMax; y++) {
            int index = get_index(x, y);
            if (insideTriangle(x + 0.5, y + 0.5, t.v)) {
                auto [alpha, beta, gamma] = computeBarycentric2D(x + 0.5, y + 0.5, t.v);
                // Perspective-correct depth: w holds the view-space depth
                float Z = 1.0 / (alpha / v[0].w() + beta / v[1].w() + gamma / v[2].w());
                float zp = alpha * v[0].z() / v[0].w() + beta * v[1].z() / v[1].w() + gamma * v[2].z() / v[2].w();
                zp *= Z;
                if (zp < depth_buf[index]) {
                    depth_buf[index] = zp;
                    auto interpolated_color = interpolate(alpha, beta, gamma, t.color[0], t.color[1], t.color[2], 1.0f);
                    auto interpolated_normal = interpolate(alpha, beta, gamma, t.normal[0], t.normal[1], t.normal[2], 1.0f);
                    auto interpolated_texcoords = interpolate(alpha, beta, gamma, t.tex_coords[0], t.tex_coords[1], t.tex_coords[2], 1.0f);
                    auto interpolated_shadingcoords = interpolate(alpha, beta, gamma, view_pos[0], view_pos[1], view_pos[2], 1.0f);
                    fragment_shader_payload FragShader(interpolated_color, interpolated_normal.normalized(), interpolated_texcoords, texture ? &*texture : nullptr);
                    FragShader.view_pos = interpolated_shadingcoords;
                    auto pixel_color = fragment_shader(FragShader);
                    Vector2i point;
                    point << x, y;
                    set_pixel(point, pixel_color);
                }
            }
        }
    }
}
There are a few more details in the draw function below.
For example, after the perspective (homogeneous) division the w component is left untouched, so it still preserves the view-space z value.
view_pos stores the view-space vertex positions, because lighting has to be computed on real 3D coordinates rather than screen coordinates; that is what gets passed along.
Normal correction: transforming the normals by the inverse transpose of the model-view matrix prevents non-uniform scaling in the model matrix from distorting them.
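A quick justification for the inverse transpose (the standard derivation, not specific to this assignment): a normal $n$ and any tangent $t$ satisfy $n^\top t = 0$. Points and tangents transform by $M$, so we need a matrix $G$ for normals with $(Gn)^\top (Mt) = n^\top G^\top M t = 0$ for every tangent, which holds when $G^\top M = I$:

$$G = (M^{-1})^\top$$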
void rst::rasterizer::draw(std::vector<Triangle *> &TriangleList) {
    // Map NDC z into [0.1, 50] for the depth buffer (near = 0.1, far = 50)
    float f1 = (50 - 0.1) / 2.0;
    float f2 = (50 + 0.1) / 2.0;
    Eigen::Matrix4f mvp = projection * view * model;
    for (const auto& t : TriangleList)
    {
        Triangle newtri = *t;
        // Shading point coordinates: keep the vertex positions in 3D view space
        std::array<Eigen::Vector4f, 3> mm {
            (view * model * t->v[0]),
            (view * model * t->v[1]),
            (view * model * t->v[2])
        };
        std::array<Eigen::Vector3f, 3> viewspace_pos;
        std::transform(mm.begin(), mm.end(), viewspace_pos.begin(), [](auto& v) {
            return v.template head<3>();
        });
        Eigen::Vector4f v[] = {
            mvp * t->v[0],
            mvp * t->v[1],
            mvp * t->v[2]
        };
        // Homogeneous division; w is deliberately left as the view-space depth
        for (auto& vec : v) {
            vec.x() /= vec.w();
            vec.y() /= vec.w();
            vec.z() /= vec.w();
        }
        // Normal correction: inverse transpose of the model-view matrix
        Eigen::Matrix4f inv_trans = (view * model).inverse().transpose();
        Eigen::Vector4f n[] = {
            inv_trans * to_vec4(t->normal[0], 0.0f),
            inv_trans * to_vec4(t->normal[1], 0.0f),
            inv_trans * to_vec4(t->normal[2], 0.0f)
        };
        // Viewport transformation
        for (auto& vert : v)
        {
            vert.x() = 0.5 * width * (vert.x() + 1.0);
            vert.y() = 0.5 * height * (vert.y() + 1.0);
            vert.z() = vert.z() * f1 + f2;
        }
        for (int i = 0; i < 3; ++i)
        {
            // screen space coordinates
            newtri.setVertex(i, v[i]);
        }
        for (int i = 0; i < 3; ++i)
        {
            // view space normal
            newtri.setNormal(i, n[i].head<3>());
        }
        newtri.setColor(0, 148, 121.0, 92.0);
        newtri.setColor(1, 148, 121.0, 92.0);
        newtri.setColor(2, 148, 121.0, 92.0);
        // Also pass the view space vertex positions
        rasterize_triangle(newtri, viewspace_pos);
    }
}
theoretical analysis
The theory behind the Blinn-Phong model and texture mapping is relatively simple and will not be repeated here.
Texture addressing and texture magnification/minification come with their own problems; I will probably write a separate post on those, referring to other sources.
Here I focus on analyzing the theory of bump mapping.
Normal mapping and bump mapping are in fact very similar: bump mapping uses a height map to perturb the normals, while normal mapping uses a normal map to define the surface normals directly. In theory normal mapping can achieve better detail. Both are designed to get more detail out of fewer triangles:
Not using bump mapping
Using bump mapping
First of all, you can refer to LearnOpenGL: /05 Advanced Lighting/04 Normal Mapping/
To summarize briefly: why do we use tangent space to store normals?
If we stored the object's normals directly, the normal map would stop working whenever the object moved or rotated. By storing them in tangent space, we only need to derive the TBN matrix at each shading point, then apply one transformation to convert the sampled normals from tangent space to world space before computing the lighting.
The TBN matrix is the change of basis that expresses the three tangent-space basis vectors in world-space coordinates.
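Written out (the standard construction, with $T$, $B$, $N$ the world-space tangent, bitangent, and normal):

$$TBN = \begin{bmatrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{bmatrix}, \qquad n_{\text{world}} = TBN \, n_{\text{tangent}}$$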
For the derivation you need to find the tangent and bitangent; see the derivation in LearnOpenGL. I did not fully understand the tangent computation in this assignment; it feels like an approximation?
Once we have the TBN matrix, how do we use it?
We use the TBN matrix directly: it converts vectors from tangent space to world space. So we pass it into the fragment shader, left-multiply the normal sampled from the map by the TBN matrix, and convert it to world space, so that the normals and all the other lighting variables live in the same coordinate system.
That is also what this course does.
The only remaining question is how to find this normal in tangent space.
The GAMES101 course explains how to find the normal after bump mapping: after the height map perturbs the position of the point, finite differences approximate the tangents, which in turn give the normal:
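As I recall from the lecture slides (so take the exact constants $c_1$, $c_2$ as assumptions), with height map $h$:

$$\frac{\partial p}{\partial u} \approx c_1 \, [h(u+1) - h(u)], \qquad \frac{\partial p}{\partial v} \approx c_2 \, [h(v+1) - h(v)]$$
$$n = \operatorname{normalize}\!\left(-\frac{\partial p}{\partial u},\; -\frac{\partial p}{\partial v},\; 1\right) \quad \text{(in tangent space)}$$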
But this is actually only an approximation of the true result.
The derivation in pbrt goes like this:
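Reconstructing the equation that belongs here, following pbrt's bump-mapping derivation: the displaced surface is

$$p'(u,v) = p(u,v) + d(u,v)\, n(u,v)$$

and differentiating with the product rule gives (similarly for $v$)

$$\frac{\partial p'}{\partial u} = \frac{\partial p}{\partial u} + \frac{\partial d}{\partial u}\, n + d(u,v)\, \frac{\partial n}{\partial u}$$

with the shading normal $n' = \operatorname{normalize}\!\left(\frac{\partial p'}{\partial u} \times \frac{\partial p'}{\partial v}\right)$.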
In the above, $p'$ is the displaced position of the point, $d(u,v)$ is the height displacement, and $n(u,v)$ is the surface normal. $d$ is supposed to be a scalar, but the assignment's height map is stored as RGB, so we take the norm of the color.
The actual finite-difference approximation of these partial derivatives should be:
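A sketch of what the assignment computes, in my own notation (texture resolution $W \times H$; the kh * kn scale is folded into $dU$ and $dV$):

$$dU \approx k_h k_n \left[ h\!\left(u + \tfrac{1}{W},\, v\right) - h(u, v) \right], \qquad dV \approx k_h k_n \left[ h\!\left(u,\, v + \tfrac{1}{H}\right) - h(u, v) \right]$$
$$n' = \operatorname{normalize}\big(TBN \cdot (-dU,\, -dV,\, 1)\big)$$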
The assignment only uses the change along the uv directions as the approximation, dropping the $\partial p / \partial u$ and $d \, \partial n / \partial u$ terms, because the shader has limited information available.
For a detailed analysis see ./4ed/Textures_and_Materials/Material_Interface_and_Implementations#NormalMapping
Note that what we obtain here is the shading normal, while the n used in the TBN matrix is the surface normal. For shading we want to use the height-perturbed position together with the shading normal; the TBN itself is built from the unperturbed surface normal. This distinction is very important, and I have seen answers online that get it wrong.
practical solution
Note that all direction vectors are normalized!
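For reference, all the shaders below evaluate the same Blinn-Phong model (standard form, matching the course notation; $r$ is the distance to the light, $h$ the half vector):

$$L = k_a I_a + k_d \frac{I}{r^2} \max(0,\, n \cdot l) + k_s \frac{I}{r^2} \max(0,\, n \cdot h)^p, \qquad h = \frac{l + v}{\lVert l + v \rVert}$$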
Eigen::Vector3f texture_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f return_color = {0, 0, 0};
    if (payload.texture)
    {
        // TODO: Get the texture value at the texture coordinates of the current fragment
        return_color = payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y());
    }
    Eigen::Vector3f texture_color;
    texture_color << return_color.x(), return_color.y(), return_color.z();
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = texture_color / 255.f;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};
    float p = 150;
    Eigen::Vector3f color = texture_color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;
    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        Eigen::Vector3f ambient_color = ka.cwiseProduct(amb_light_intensity);
        double distance = (light.position - point).norm();
        Eigen::Vector3f light_dir = (light.position - point).normalized();
        Eigen::Vector3f diffuse_color = kd.cwiseProduct(light.intensity) / (distance * distance) * std::max(normal.normalized().dot(light_dir), 0.0f);
        Eigen::Vector3f half_vector = (light_dir + (eye_pos - point).normalized()).normalized();
        Eigen::Vector3f specular_color = ks.cwiseProduct(light.intensity) / (distance * distance) * std::pow(std::max(normal.normalized().dot(half_vector), 0.0f), p);
        result_color += (ambient_color + diffuse_color + specular_color);
    }
    return result_color * 255.f;
}
Eigen::Vector3f phong_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};
    float p = 150;
    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;
    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        Eigen::Vector3f ambient_color = ka.cwiseProduct(amb_light_intensity);
        double distance = (light.position - point).norm();
        Eigen::Vector3f light_dir = (light.position - point).normalized();
        Eigen::Vector3f diffuse_color = kd.cwiseProduct(light.intensity) / (distance * distance) * std::max(normal.normalized().dot(light_dir), 0.0f);
        Eigen::Vector3f half_vector = (light_dir + (eye_pos - point).normalized()).normalized();
        Eigen::Vector3f specular_color = ks.cwiseProduct(light.intensity) / (distance * distance) * std::pow(std::max(normal.normalized().dot(half_vector), 0.0f), p);
        result_color += (ambient_color + diffuse_color + specular_color);
    }
    return result_color * 255.f;
}
Eigen::Vector3f displacement_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};
    float p = 150;
    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;
    float kh = 0.2, kn = 0.1;
    // Tangent approximation from the surface normal (see the discussion above)
    Eigen::Vector3f tangent;
    tangent << normal.x() * normal.y() / std::sqrt(std::pow(normal.x(), 2) + std::pow(normal.z(), 2)),
        std::sqrt(std::pow(normal.x(), 2) + std::pow(normal.z(), 2)),
        normal.z() * normal.y() / std::sqrt(std::pow(normal.x(), 2) + std::pow(normal.z(), 2));
    Eigen::Vector3f bitangent = normal.cross(tangent);
    // TBN columns: tangent, bitangent, surface normal
    Eigen::Matrix3f TBN;
    TBN.col(0) = tangent;
    TBN.col(1) = bitangent;
    TBN.col(2) = normal;
    float width = 1.0f / payload.texture->width;
    float height = 1.0f / payload.texture->height;
    // Finite differences of the height map (the norm of the RGB color serves as the height)
    float dU = kh * kn * (payload.texture->getColor(payload.tex_coords.x() + width, payload.tex_coords.y()).norm() -
        payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y()).norm());
    // dV steps along v only; stepping along both u and v here is a common mistake
    float dV = kh * kn * (payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y() + height).norm() -
        payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y()).norm());
    // Displace the shading point along the unperturbed surface normal
    point += kn * normal * payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y()).norm();
    Eigen::Vector3f ln;
    ln << -dU, -dV, 1;
    // Shading normal: perturbed normal brought from tangent space into view space
    Eigen::Vector3f shading_normal = (TBN * ln).normalized();
    Eigen::Vector3f result_color = {0, 0, 0};
    for (auto& light : lights)
    {
        Eigen::Vector3f ambient_color = ka.cwiseProduct(amb_light_intensity);
        double distance = (light.position - point).norm();
        Eigen::Vector3f light_dir = (light.position - point).normalized();
        Eigen::Vector3f diffuse_color = kd.cwiseProduct(light.intensity) / (distance * distance) * std::max(shading_normal.dot(light_dir), 0.0f);
        Eigen::Vector3f half_vector = (light_dir + (eye_pos - point).normalized()).normalized();
        Eigen::Vector3f specular_color = ks.cwiseProduct(light.intensity) / (distance * distance) * std::pow(std::max(shading_normal.dot(half_vector), 0.0f), p);
        result_color += (ambient_color + diffuse_color + specular_color);
    }
    return result_color * 255.f;
}
Eigen::Vector3f bump_fragment_shader(const fragment_shader_payload& payload)
{
    Eigen::Vector3f ka = Eigen::Vector3f(0.005, 0.005, 0.005);
    Eigen::Vector3f kd = payload.color;
    Eigen::Vector3f ks = Eigen::Vector3f(0.7937, 0.7937, 0.7937);
    auto l1 = light{{20, 20, 20}, {500, 500, 500}};
    auto l2 = light{{-20, 20, 0}, {500, 500, 500}};
    std::vector<light> lights = {l1, l2};
    Eigen::Vector3f amb_light_intensity{10, 10, 10};
    Eigen::Vector3f eye_pos{0, 0, 10};
    float p = 150;
    Eigen::Vector3f color = payload.color;
    Eigen::Vector3f point = payload.view_pos;
    Eigen::Vector3f normal = payload.normal;
    float kh = 0.2, kn = 0.1;
    Eigen::Vector3f tangent;
    tangent << normal.x() * normal.y() / std::sqrt(std::pow(normal.x(), 2) + std::pow(normal.z(), 2)),
        std::sqrt(std::pow(normal.x(), 2) + std::pow(normal.z(), 2)),
        normal.z() * normal.y() / std::sqrt(std::pow(normal.x(), 2) + std::pow(normal.z(), 2));
    Eigen::Vector3f bitangent = normal.cross(tangent);
    Eigen::Matrix3f TBN;
    TBN.col(0) = tangent;
    TBN.col(1) = bitangent;
    TBN.col(2) = normal;
    float width = 1.0f / payload.texture->width;
    float height = 1.0f / payload.texture->height;
    float dU = kh * kn * (payload.texture->getColor(payload.tex_coords.x() + width, payload.tex_coords.y()).norm() -
        payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y()).norm());
    float dV = kh * kn * (payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y() + height).norm() -
        payload.texture->getColor(payload.tex_coords.x(), payload.tex_coords.y()).norm());
    Eigen::Vector3f ln;
    ln << -dU, -dV, 1;
    // Unlike displacement mapping, only the normal is perturbed; the point is left unchanged
    normal = (TBN * ln).normalized();
    Eigen::Vector3f result_color = {0, 0, 0};
    // This shader just visualizes the perturbed normal as a color
    result_color = normal;
    return result_color * 255.f;
}