
games101 Assignment 4 and Assignment 5 Detailing the Ray Tracing Framework



Assignment 4

code analysis

The overall code for Assignment 4 is relatively simple: the main flow is to collect the coordinates of the four control points through mouse events, and then draw the Bézier curve.

theoretical analysis

The idea behind Bézier curves is: given a set of control points, repeatedly interpolate between adjacent control points, then interpolate between the newly obtained points, and so on. The process can be represented by a tree structure:
img

We can use recursion to carry out this interpolation and obtain the position of each point on the Bézier curve. The curve can also be expressed with Bernstein (binomial) polynomials:
img
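The de Casteljau recursion and the Bernstein polynomial form give the same point. A minimal sketch comparing the two for a cubic curve (using a plain `P2` struct of my own rather than the framework's types):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P2 { float x, y; };

// de Casteljau: repeatedly lerp adjacent control points until one remains.
P2 deCasteljau(std::vector<P2> pts, float t)
{
    while (pts.size() > 1) {
        for (size_t i = 0; i + 1 < pts.size(); ++i) {
            pts[i].x = (1 - t) * pts[i].x + t * pts[i + 1].x;
            pts[i].y = (1 - t) * pts[i].y + t * pts[i + 1].y;
        }
        pts.pop_back();
    }
    return pts[0];
}

// Bernstein form for a cubic: sum of C(3,k) * (1-t)^(3-k) * t^k * P_k.
P2 bernsteinCubic(const std::vector<P2>& p, float t)
{
    float s = 1 - t;
    float b[4] = { s * s * s, 3 * s * s * t, 3 * s * t * t, t * t * t };
    P2 r{0, 0};
    for (int k = 0; k < 4; ++k) { r.x += b[k] * p[k].x; r.y += b[k] * p[k].y; }
    return r;
}
```

For any t in [0, 1] the two evaluations agree to floating-point precision, which is a handy sanity check when implementing the recursion.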

Some nice properties of Bézier curves:
img
1. The start and end points of the curve are always the first and last control points.
2. The curve is tangent to the first and last segments of the control polygon at the start and end points, respectively.
3. You can transform the whole curve by transforming the control points, without recalculating all the points of the curve.
4. The curve is completely contained within the convex hull of the control points. The convex hull is the smallest convex polygon that encloses all the control points, so a Bézier curve never leaves it. This property helps ensure that the shape of the curve stays well controlled by the control points.

Aliasing comes from the abrupt transition between the curve and the background pixels, so instead of writing each sample to a single pixel, we blend the curve color into the four pixels surrounding the sample point, weighted by distance.
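The distance weighting above is just bilinear weighting: the four weights always sum to 1, so blending never changes the total amount of color deposited. A quick sketch of that invariant (standalone, `bilinearWeights` is my own illustrative helper, not part of the framework):

```cpp
#include <cassert>
#include <cmath>

// Bilinear weights for a sample at fractional offset (ax, ay)
// inside the 2x2 block of pixels surrounding it.
void bilinearWeights(float ax, float ay, float w[4])
{
    w[0] = (1 - ax) * (1 - ay); // top-left
    w[1] = ax * (1 - ay);       // top-right
    w[2] = (1 - ax) * ay;       // bottom-left
    w[3] = ax * ay;             // bottom-right
}
```

The pixel nearest the sample receives the largest weight, so the curve stays sharp while its edges fade smoothly into the background.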

A Bézier surface is obtained by interpolating once more across a set of Bézier curves:
img
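Concretely, a surface point can be evaluated by running de Casteljau along each row of the control grid at parameter u, and then once more on the resulting column of points at parameter v. A minimal sketch with a plain `P3` struct (my own, not the framework's types):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P3 { float x, y, z; };

P3 lerp3(const P3& a, const P3& b, float t)
{
    return { (1 - t) * a.x + t * b.x,
             (1 - t) * a.y + t * b.y,
             (1 - t) * a.z + t * b.z };
}

// de Casteljau on a single row of control points.
P3 deCasteljau3(std::vector<P3> pts, float t)
{
    while (pts.size() > 1) {
        for (size_t i = 0; i + 1 < pts.size(); ++i)
            pts[i] = lerp3(pts[i], pts[i + 1], t);
        pts.pop_back();
    }
    return pts[0];
}

// Surface point: one curve evaluation per row at u, then one along v.
P3 bezierSurface(const std::vector<std::vector<P3>>& grid, float u, float v)
{
    std::vector<P3> column;
    for (const auto& row : grid)
        column.push_back(deCasteljau3(row, u));
    return deCasteljau3(column, v);
}
```

For a flat, evenly spaced control grid this reproduces the plane exactly, which makes it easy to verify.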

practical solution

I tried to keep the original code structure as unchanged as possible and to recurse in recursive_bezier; each recursive call corresponds to one layer of interpolation in the tree structure, until only a single interpolated point remains and is returned.

cv::Vec3b blendColors(const cv::Vec3b& color1, const cv::Vec3b& color2, float alpha)
{
    return color1 * (1.0 - alpha) + color2 * alpha;
}

void draw_anti_aliased_pixel(cv::Mat& window, cv::Point2f point, cv::Vec3b color)
{
    int x = static_cast<int>(std::floor(point.x));
    int y = static_cast<int>(std::floor(point.y));

    float alpha_x = point.x - x;
    float alpha_y = point.y - y;

    // Blend colors based on distance to pixel center
    window.at<cv::Vec3b>(y, x) = blendColors(window.at<cv::Vec3b>(y, x), color, (1 - alpha_x) * (1 - alpha_y));
    window.at<cv::Vec3b>(y, x + 1) = blendColors(window.at<cv::Vec3b>(y, x + 1), color, alpha_x * (1 - alpha_y));
    window.at<cv::Vec3b>(y + 1, x) = blendColors(window.at<cv::Vec3b>(y + 1, x), color, (1 - alpha_x) * alpha_y);
    window.at<cv::Vec3b>(y + 1, x + 1) = blendColors(window.at<cv::Vec3b>(y + 1, x + 1), color, alpha_x * alpha_y);
}

cv::Point2f recursive_bezier(const std::vector<cv::Point2f>& control_points, float t)
{
    int size = control_points.size();
    if (size == 1) {
        return control_points[0];
    }
    // De Casteljau's algorithm: lerp each pair of adjacent control points.
    std::vector<cv::Point2f> new_points(size - 1);
    for (int i = 0; i < size - 1; i++)
    {
        new_points[i] = (1 - t) * control_points[i] + t * control_points[i + 1];
    }
    return recursive_bezier(new_points, t);
}
void bezier(const std::vector<cv::Point2f>& control_points, cv::Mat &window)
{
    cv::Vec3b color(0, 255, 0); // green
    for (double t = 0.0; t <= 1.0; t += 0.001)
    {
        cv::Point2f point = recursive_bezier(control_points, t);
        // window.at<cv::Vec3b>(point.y, point.x)[1] = 255; // naive (aliased) version
        draw_anti_aliased_pixel(window, point, color);
    }

}

Results Showcase:

img

img

Assignment 5

code analysis

The overall code framework is roughly as follows.
Initialize the scene. The scene contains the objects, the lights, etc. It also stores the size of the raster space, the fov (the camera's field of view), and the background color.
maxDepth limits how many times a ray can be reflected and refracted; we can't let light bounce around indefinitely.
epsilon guards against floating-point precision issues that would place a computed intersection slightly below the surface, which would make the secondary ray intersect the same surface again.

class Scene
{
public:
    // setting up options
    int width = 1280;
    int height = 960;
    double fov = 90;
    Vector3f backgroundColor = Vector3f(0.235294, 0.67451, 0.843137);
    int maxDepth = 5;
    float epsilon = 0.00001;

    Scene(int w, int h) : width(w), height(h)
    {}

    void Add(std::unique_ptr<Object> object) { objects.push_back(std::move(object)); }
    void Add(std::unique_ptr<Light> light) { lights.push_back(std::move(light)); }

    [[nodiscard]] const std::vector<std::unique_ptr<Object> >& get_objects() const { return objects; }
    [[nodiscard]] const std::vector<std::unique_ptr<Light> >&  get_lights() const { return lights; }

private:
    // creating the scene (adding objects and lights)
    std::vector<std::unique_ptr<Object> > objects;
    std::vector<std::unique_ptr<Light> > lights;
};

The framework uses two spheres and one triangle mesh, where the mesh is two triangles stitched together into a square. One sphere uses the diffuse-glossy material, the other a reflective-refractive material, and the mesh is diffuse-glossy:

auto sph1 = std::make_unique<Sphere>(Vector3f(-1, 0, -12), 2);
sph1->materialType = DIFFUSE_AND_GLOSSY;
sph1->diffuseColor = Vector3f(0.6, 0.7, 0.8);

auto sph2 = std::make_unique<Sphere>(Vector3f(0.5, -0.5, -8), 1.5);
sph2->ior = 1.5;
sph2->materialType = REFLECTION_AND_REFRACTION;

scene.Add(std::move(sph1));
scene.Add(std::move(sph2));

Vector3f verts[4] = {{-5,-3,-6}, {5,-3,-6}, {5,-3,-16}, {-5,-3,-16}};
uint32_t vertIndex[6] = {0, 1, 3, 1, 2, 3};
Vector2f st[4] = {{0, 0}, {1, 0}, {1, 1}, {0, 1}};
auto mesh = std::make_unique<MeshTriangle>(verts, vertIndex, 2, st);
mesh->materialType = DIFFUSE_AND_GLOSSY;

scene.Add(std::move(mesh));
scene.Add(std::make_unique<Light>(Vector3f(-20, 70, 20), 0.5));
scene.Add(std::make_unique<Light>(Vector3f(30, 50, -12), 0.5));

After that, it's just a matter of casting a camera ray for each pixel and computing its color.
trace is used to compute intersections with objects. For spheres, use the analytic solution. For triangle meshes, iterate over every triangle and use the Möller-Trumbore algorithm to compute the intersection. Note that after finding an intersection we must check whether it is the closest one so far:

std::optional<hit_payload> trace(
        const Vector3f &orig, const Vector3f &dir,
        const std::vector<std::unique_ptr<Object> > &objects)
{
    float tNear = kInfinity;
    std::optional<hit_payload> payload;
    for (const auto & object : objects)
    {
        float tNearK = kInfinity;
        uint32_t indexK;
        Vector2f uvK;
        if (object->intersect(orig, dir, tNearK, indexK, uvK) && tNearK < tNear)
        {
            payload.emplace();
            payload->hit_obj = object.get();
            payload->tNear = tNearK;
            payload->index = indexK;
            payload->uv = uvK;
            tNear = tNearK;
        }
    }

    return payload;
}

In castRay, recursion for the reflective-refractive material terminates once the ray has bounced more than the five times we've set, or once it hits the diffuse-glossy material.
The Blinn-Phong model is used to shade the diffuse-glossy material.
There's a lot of code here, so I won't post all of it. Just a few details.
1. Reflection calculation:
img

Vector3f reflect(const Vector3f &I, const Vector3f &N)
{
    return I - 2 * dotProduct(I, N) * N;
}

Simple vector arithmetic
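A quick standalone check of the mirror-reflection formula R = I - 2(I·N)N, using a plain `V3` struct of my own instead of the framework's Vector3f:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

float dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection of incident direction I about unit normal N.
V3 reflect(const V3& I, const V3& N)
{
    float d = dot(I, N);
    return { I.x - 2 * d * N.x, I.y - 2 * d * N.y, I.z - 2 * d * N.z };
}
```

For a ray coming in at 45 degrees onto a horizontal surface, the reflected ray leaves at 45 degrees on the other side, as expected.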
2. Refraction calculation
For the derivation, see: /night-ride-depart/p/
It's a little complicated, but the idea is right, and I've already worked through it myself, so I'll be a bit lazy here and not repeat it.
img
Here we add a case split on whether the normal is on the same side as the incident ray.
If the dot product is less than 0 (same side), the light arrives from outside the object.
If the dot product is greater than 0 (opposite side), the light arrives from inside the object. In that case we need to flip the surface normal and swap the dense and sparse media (the two refractive indices) accordingly:

Vector3f refract(const Vector3f &I, const Vector3f &N, const float &ior)
{
    float cosi = clamp(-1, 1, dotProduct(I, N));
    float etai = 1, etat = ior;
    Vector3f n = N;
    // Adjust according to the relative position of the normal and the incident ray.
    if (cosi < 0) { cosi = -cosi; } else { std::swap(etai, etat); n = -N; }
    float eta = etai / etat;
    // k < 0 means total internal reflection; otherwise sqrtf(k) is cos(theta_t).
    float k = 1 - eta * eta * (1 - cosi * cosi);
    return k < 0 ? 0 : eta * I + (eta * cosi - sqrtf(k)) * n;
}

Here I actually wonder whether the framework code is slightly off: the dot product already yields a negative cosine when the ray comes from outside, so why negate it when it's less than 0, only to have the sign of eta * cosi - sqrtf(k) flipped below? So isn't the line cosi = -cosi; actually unnecessary?
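To sanity-check those sign conventions, here is a standalone version of the same refract logic with a plain `V3` struct of my own (not the framework's Vector3f), verified against Snell's law n1 sin(theta1) = n2 sin(theta2):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

V3 operator*(float s, const V3& v) { return {s * v.x, s * v.y, s * v.z}; }
V3 operator+(const V3& a, const V3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
V3 operator-(const V3& v) { return {-v.x, -v.y, -v.z}; }
float dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Same logic as the framework's refract(): I points toward the surface,
// N is the outward unit normal, ior is the object's index of refraction.
V3 refract(const V3& I, const V3& N, float ior)
{
    float cosi = std::clamp(dot(I, N), -1.0f, 1.0f);
    float etai = 1, etat = ior;
    V3 n = N;
    if (cosi < 0) { cosi = -cosi; }          // ray arrives from outside
    else { std::swap(etai, etat); n = -N; }  // ray arrives from inside
    float eta = etai / etat;
    float k = 1 - eta * eta * (1 - cosi * cosi); // k < 0: total internal reflection
    return k < 0 ? V3{0, 0, 0} : eta * I + (eta * cosi - std::sqrt(k)) * n;
}
```

With a 45-degree incident ray entering glass (ior 1.5), the refracted direction comes out unit-length with sin(theta2) = sin(45°)/1.5, which is what Snell's law demands, so the cosi = -cosi branch does its job here.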

use of epsilon
As mentioned earlier, epsilon nudges the hit point to keep the secondary ray from hitting the same surface again, and the sign of the offset also depends on whether the ray is on the same side as the normal or the opposite side.
For a reflection, the same side should be plus and the opposite side minus; for a refraction, the same side should be minus and the opposite side plus. So isn't there a slight problem with the way the framework writes this:

Vector3f reflectionRayOrig = (dotProduct(reflectionDirection, N) < 0) ?
                             hitPoint - N * scene.epsilon :
                             hitPoint + N * scene.epsilon;
Vector3f refractionRayOrig = (dotProduct(refractionDirection, N) < 0) ?
                             hitPoint - N * scene.epsilon :
                             hitPoint + N * scene.epsilon;

shadow generation
Cast a ray from the hit point toward the light source; if it hits an object, and that object lies between the light source and the hit point (determined by distance), then the point is in shadow:

Vector3f shadowPointOrig = (dotProduct(dir, N) < 0) ?
                           hitPoint + N * scene.epsilon :
                           hitPoint - N * scene.epsilon;
auto shadow_res = trace(shadowPointOrig, lightDir, scene.get_objects());
bool inShadow = shadow_res && (shadow_res->tNear * shadow_res->tNear < lightDistance2);

Results Showcase:
img

Here, the checkerboard plane below is the triangle mesh from the setup above; you can see the shadows, some refraction, and some reflection. The sphere at the back is the diffuse-glossy material, with visible highlights on it.
Checkerboard texture generation:

 Vector3f evalDiffuseColor(const Vector2f& st) const override
 {
     float scale = 5;
     float pattern = (fmodf(st.x * scale, 1) > 0.5) ^ (fmodf(st.y * scale, 1) > 0.5);
     return lerp(Vector3f(0.815, 0.235, 0.031), Vector3f(0.937, 0.937, 0.231), pattern);
 }

theoretical analysis

The parts to complete in this assignment are relatively simple.
The first step is to generate the camera rays, which requires converting 2D points in raster space into 3D points via a series of transformations. See my previous blog post: /dyccyber/p/ The final multiplication by imageAspectRatio * scale is the conversion to camera/world space, which corresponds to the xy scaling in the perspective matrix from Assignment 1.

Then we compute the ray-triangle intersection using the formula from the lectures, with the caveat that we must enforce boundary conditions: one ensures the intersection is not behind the ray origin, i.e. tnear > 0, and the other requires the barycentric coordinates to lie in [0, 1] (with their sum at most 1), to keep the intersection point from falling outside the triangle.

practical solution

for (int j = 0; j < scene.height; ++j)
{
    for (int i = 0; i < scene.width; ++i)
    {
        // generate primary ray direction
        float x = (2 * ((i + 0.5) / (float)scene.width) - 1) * imageAspectRatio * scale;
        float y = (1 - 2 * ((j + 0.5) / (float)scene.height)) * scale;
        // TODO: Find the x and y positions of the current pixel to get the direction
        // vector that passes through it.
        // Also, don't forget to multiply both of them with the variable *scale*, and
        // x (horizontal) variable with the *imageAspectRatio*            

        Vector3f dir = normalize(Vector3f(x, y, -1)); // Don't forget to normalize this direction!
        framebuffer[m++] = castRay(eye_pos, dir, scene, 0);
    }
    UpdateProgress(j / (float)scene.height);
}
bool rayTriangleIntersect(const Vector3f& v0, const Vector3f& v1, const Vector3f& v2, const Vector3f& orig,
                          const Vector3f& dir, float& tnear, float& u, float& v)
{
    // TODO: Implement this function that tests whether the triangle
    // that's specified by v0, v1 and v2 intersects with the ray (whose
    // origin is *orig* and direction is *dir*)
    // Also don't forget to update tnear, u and v.

    Vector3f E1 = v1 - v0;
    Vector3f E2 = v2 - v0;
    Vector3f S = orig - v0;
    Vector3f S1 = crossProduct(dir, E2);
    Vector3f S2 = crossProduct(S, E1);
    float div = dotProduct(S1,E1);
    tnear = dotProduct(S2, E2) / div;
    u = dotProduct(S1, S) / div;
    v = dotProduct(S2, dir) / div;
    // Two boundary conditions: the ray direction (tnear >= 0), and the intersection
    // lying inside the triangle (u, v and 1 - u - v all non-negative).
    if (tnear >= 0 && u >= 0 && v >= 0 && (1 - u - v) >= 0) {
        return true;
    }
    return false;
}
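The same routine can be checked standalone with a plain `V3` struct of my own (not the framework's Vector3f). Note in particular why the inside-triangle test needs 1 - u - v >= 0 rather than just u <= 1 and v <= 1:

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

V3 sub(const V3& a, const V3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
V3 cross(const V3& a, const V3& b)
{
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
float dot(const V3& a, const V3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore, same structure as the assignment solution above.
bool mtIntersect(const V3& v0, const V3& v1, const V3& v2,
                 const V3& orig, const V3& dir,
                 float& tnear, float& u, float& v)
{
    V3 E1 = sub(v1, v0), E2 = sub(v2, v0), S = sub(orig, v0);
    V3 S1 = cross(dir, E2), S2 = cross(S, E1);
    float div = dot(S1, E1);
    tnear = dot(S2, E2) / div;
    u = dot(S1, S) / div;
    v = dot(S2, dir) / div;
    return tnear >= 0 && u >= 0 && v >= 0 && (1 - u - v) >= 0;
}
```

A ray fired straight down at (0.25, 0.25) hits the unit triangle with u = v = 0.25, while one at (0.9, 0.9) misses: there u and v are each individually in [0, 1], but 1 - u - v is negative, which is exactly the case a u <= 1 && v <= 1 check would wrongly accept.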