
SpringBoot Beginner to Master (XIII) Logging: don't underestimate it, or you'll pay the price! Learn this and you too can design an architecture


Don't underestimate logging; the day you have to troubleshoot a real problem, you will learn what a painful lesson it is!

How to use Logback to record detailed logs in Spring Boot?

There are plenty of ways to integrate Logback, Log4j and the like. But note: what I cover here may look the same as what you have already seen, yet it is also different.

Common logging levels, ordered from lowest to highest (a combined usage sketch follows this list):
TRACE:
DESCRIPTION: The most detailed level of logging, typically used in the development and debugging phases to record very detailed execution information.
Example: log.trace("Entering method: {}", methodName);  // "{}" is a placeholder that is automatically filled by the following arguments
DEBUG:
Description: Used for debugging information, recording the detailed execution of the program, but slightly less than the TRACE level.
Example:log.debug("Variable value: {}", variableValue);
INFO:
DESCRIPTION: Records a general information log, usually used to record the normal running status of the application.
Example:log.info("User logged in: {}", userId);
WARN:
DESCRIPTION: Warning message indicating a potential problem, but the application can continue to run.
Example:log.warn("File not found: {}", fileName);
ERROR:
DESCRIPTION: Error message indicating that an error has occurred in the application, which may affect the normal operation of the function.
Example: log.error("Database connection failed: {}", e.getMessage());  // Look at this one carefully, character by character: anything suspicious? Pay special attention, I will come back to it later.
FATAL:
DESCRIPTION: Critical error, usually resulting in the application crashing or being unable to continue running.
Example: log.fatal("Critical system failure: {}", e.getMessage()); // Log4j only: SLF4J has no fatal() method, and Logback maps FATAL to ERROR.
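
For reference, here is a minimal combined sketch of how such a logger is usually obtained and used. The class name and messages are illustrative; Lombok's @Slf4j, used later in this article, generates an equivalent log field automatically.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LevelDemo {
    // Equivalent to what Lombok's @Slf4j generates: a logger named after the class
    private static final Logger log = LoggerFactory.getLogger(LevelDemo.class);

    public void demo(String userId) {
        log.trace("Entering method: {}", "demo");                 // most detailed
        log.debug("Variable value: {}", userId);                  // debugging detail
        log.info("User logged in: {}", userId);                   // normal business flow
        log.warn("File not found: {}", "example.tmp");            // potential problem
        log.error("Database connection failed: {}", "timeout");   // real error
        // Note: SLF4J itself has no fatal() method; Logback maps FATAL to ERROR.
    }
}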

Practice is the only test of truth! On the importance of logs.

Logging is a very important feature when developing enterprise applications. Good logging helps us locate and resolve issues quickly, for example when troubleshooting exceptions or tracing interface interactions. Most people think a quick log statement here and there is enough...

Details matter:

Usually, production environments have strict requirements on the log level (raise your hand if yours is set to INFO). In enterprise development the basic rule is not to log too much; DEBUG-level logging is generally discouraged in production because it produces a large volume of output, which not only consumes storage space but may also hurt system performance.
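
One common way to keep DEBUG output cheap is to rely on the {} placeholders (the message is only formatted when the level is enabled) and to guard genuinely expensive computations with isDebugEnabled(). A minimal sketch; buildHugeReport() is a hypothetical expensive helper, not part of any library.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CheapDebugLogging {
    private static final Logger log = LoggerFactory.getLogger(CheapDebugLogging.class);

    public void process(Object order) {
        // Placeholder form: the message string is only built if DEBUG is actually enabled
        log.debug("Processing order: {}", order);

        // Guard genuinely expensive work explicitly so it is skipped entirely at INFO level
        if (log.isDebugEnabled()) {
            log.debug("Order details: {}", buildHugeReport(order));
        }
    }

    // Hypothetical expensive helper, for illustration only
    private String buildHugeReport(Object order) {
        return String.valueOf(order);
    }
}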

SpringBoot with LogBack logs (merged)

1. Introduction of dependencies

In a Spring Boot project, Logback is the default logging framework and Spring Boot auto-configures it, so you usually don't need to add Logback dependencies manually. However, to make sure all necessary dependencies are included, you can declare them explicitly in pom.xml.

<dependencies>
    <!-- Spring Boot Starter Web (or any other Starter you need)-->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <!-- Spring Boot Starter Logging (with SLF4J and Logback)-->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-logging</artifactId>
    </dependency>

    <!-- If you have already included spring-boot-starter-web, this dependency is redundant because spring-boot-starter-web already includes spring-boot-starter-logging.-->
</dependencies>

2. Configuring Logback

Create or edit the Logback configuration file in the src/main/resources directory (file name and location follow Spring Boot's auto-configuration conventions):

File name: logback-spring.xml (the -spring variant is picked up automatically and is required for the <springProfile> and <springProperty> features used below)
Location: src/main/resources/logback-spring.xml

<configuration>
    <!-- Define the storage path for log files-->
    <!-- <property name="LOG_PATH" value="logs" /> relative path, where the current project is located in the directory logs, such as the project in /home/tomcat/project then logs in /home/tomcat/project/logs-->
    <!-- Custom log path: point it at the Spring Boot configuration (yml/properties) via <springProperty> -->
    <springProperty scope="context" name="LOG_PATH" source="logging.file.path" defaultValue="logs" />
    <!-- Custom log file name -->
    <property name="LOG_FILE_NAME" value="application" />

    <!-- Console log output: often configured for use locally in development environments-->
    <appender name="CONSOLE" class="">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level [%X{uuid}] %logger{36} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- File log output: online environment test, UAT, PRE-->
    <appender name="FILE" class="">
        <!-- [Define log file requirements]-->
        <file>${LOG_PATH}/${LOG_FILE_NAME}.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <!-- Roll over daily
            ${LOG_PATH} path
            ${LOG_FILE_NAME} name
            %d{yyyy-MM-dd} date
            %i sequence number, starting at 0
-->
            <fileNamePattern>${LOG_PATH}/${LOG_FILE_NAME}-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <!-- Maximum 500MB for a single file-->
            <maxFileSize>500MB</maxFileSize>
            <!-- Retain log files for the last 30 days-->
            <maxHistory>30</maxHistory>
            <!-- Total log file size not to exceed 1GB-->
            <!-- <totalSizeCap>1GB</totalSizeCap> -->
        </rollingPolicy>
        <!-- [Define the format of the contents of the log file in which the log is recorded]
            %d{yyyy-MM-dd HH:mm:ss.SSS}:
            %d: the date and time.
            {yyyy-MM-dd HH:mm:ss.SSS}: formatting pattern for the date and time.
            yyyy: four-digit year.
            MM: two-digit month.
            dd: two-digit day.
            HH: two-digit hour (24-hour clock).
            mm: two-digit minutes.
            ss: two-digit seconds.
            SSS: three-digit milliseconds.
            [%thread]:
            %thread: indicates the name of the current thread.
            []: used to wrap the thread name to make it more readable.
            %-5level:
            %level: indicates the log level (e.g. TRACE, DEBUG, INFO, WARN, ERROR).
            -5: Indicates that the minimum width of the log level is 5 characters, if the log level is less than 5 characters, it is left justified and padded with spaces.
            %logger{36}:
            %logger: indicates the name of the logger.
            {36}: indicates that the maximum length of the logger name is 36 characters, if the name is more than 36 characters, it is truncated.
            - %msg:
            %msg: Indicates the content of the log message.
            -: used to separate the logger name from the log message to make it more readable.
            %n:
            %n: Indicates a line break character, used to break lines after each log message.
-->
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level [%X{uuid}] %logger{36} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- Asynchronous Log Output
        Role
        Improves performance:
        Asynchronous logging: AsyncAppender puts log messages into a queue and then a separate thread processes them. In this way, the main thread does not block due to logging operations, thus improving application performance.
        Reduced I/O impact: Logging often involves disk I/O operations that can be slow. With asynchronous logging, these operations can be moved to background threads, reducing the impact on the main thread.
        Resource management:
        Queue size: memory usage can be controlled by setting queueSize. A larger queue can hold more log messages, but will take up more memory.
        Discard policy: by setting discardingThreshold, you can choose whether to discard log messages when the queue is full to prevent memory overflow.
-->
    <appender name="ASYNC" class="">
        <appender-ref ref="FILE" /> <!-- Here are some pointers.-->
        <!-- This defines the size of the queue for the asynchronous logger. The queue is used to hold log messages until they are processed-->
        <queueSize>512</queueSize>
        <!-- Whether to discard log messages when the queue fills up: when the remaining queue capacity drops below this threshold (default queueSize/5), TRACE/DEBUG/INFO events are discarded. Setting it to 0 means no events are ever discarded. -->
        <discardingThreshold>0</discardingThreshold>
    </appender>
    
    <!-- Development log level settings: the <springProfile> block is activated via spring.profiles.active in the Boot configuration file (yml/properties) -->
    <springProfile name="dev">
        <!-- Set the root level of logging to debug and specify that logs are output to the console and asynchronously to a file.-->
        <root level="debug">
            <appender-ref ref="CONSOLE" />
            <appender-ref ref="ASYNC" />
        </root>
        <!-- Specify an alternative logger for info asynchronous output to file.
            additivity="false": when log events are logged in the package, they are only output to the FILE logger and are not passed to the root logger. Therefore, these log events will not appear in the console.
            additivity="true" (default): If additivity is true, log events in the package are not only output to the FILE logger, they are also passed to the root logger and thus appear in the console.
            Scenarios
            Avoid duplicate logs: If you want log events for a particular logger to be output only to that particular logger, and don't want them to be duplicated elsewhere, set additivity="false".
            Fine-grained control of logging output: By setting additivity="false", you can control logging output more finely to ensure that log messages are clear and organized.
            Note: the root logger alone is actually enough; the additional named logger definition is optional and depends on your requirements — it is a way to refine the configuration.
-->
        <logger name= "Package name" level="info" additivity="false">
            <appender-ref ref="ASYNC" />
        </logger>
    </springProfile>
    
    <!-- test / UAT log level settings: the <springProfile> block is activated via spring.profiles.active in the Boot configuration file (yml/properties) -->
    <springProfile name="test,uat">
        <!-- Set the root level of the log to debug and specify asynchronous output to a file.-->
        <root level="debug">
            <appender-ref ref="ASYNC" />
        </root>
        <logger name= "Package name" level="info" additivity="false">
            <appender-ref ref="ASYNC" />
        </logger>
    </springProfile>

    <springProfile name="pre">
        <root level="info">
            <appender-ref ref="ASYNC" />
        </root>
        <!-- Log level configuration for a third-party library (replace the name with the library's package)-->
        <logger name="third.party.package" level="debug" additivity="false">
            <appender-ref ref="ASYNC" />
        </logger>
    </springProfile>
</configuration>

 

3. Application configuration changes

You can set the logging properties (and reference environment variables) in the Spring Boot configuration file. Spring Boot supports this in application.properties or application.yml:

# application.properties

# Activate the configuration environment
spring.profiles.active=dev

# Where generated log files are placed: once configured, this path can be referenced from logback-spring.xml
logging.file.path=/opt/tomcat/myapp/logs
# Configure the root log level
#logging.level.root=info
# Configure third-party / package-specific log levels (finer grained)
#logging.level.your.package.name=debug
# What if the log configuration file's location or name is customized?
#logging.config=classpath:logback-spring.xml

You can also combine multiple environment-specific profiles to set different log paths, for example one each for the development, test, and pre-release environments.

Activating the corresponding profile in application.properties (spring.profiles.active) is enough to switch to that environment's configuration.

SpringBoot and LogBack logging (production detail control)

Issue 1

Whether or not the service is clustered, ten concurrent users are still concurrency: requests hitting the same function, or different functions, interleave, and so do the log lines the logger prints. So how do you find the records that belong to one specific operation among millions of interleaved log lines?

Suppose A, B, and C all call the message service, and each request takes a different amount of time to process. How do you work out the elapsed time of one request when the logs of concurrent requests are interleaved?

Solution: mark every interaction from its first log line to its last. As long as the mark is unique per thread and per interaction, it does not matter that the logs are interleaved; because the unique marks differ, the lines are very easy to tell apart.

Logback supports inserting MDC (Mapped Diagnostic Context) placeholders into the log output pattern. In the [Define the format of the contents of the log file] comment above there is already an example of an MDC value, %X{uuid}. MDC is a thread-bound key-value store that can be used to add extra information, such as a unique request identifier (UUID), to every log line.

MDC(Mapped Diagnostic Context)

MDC Overview
An MDC is a store of thread-context key-value pairs; each thread has its own MDC instance. MDC is a feature provided by Logback and Log4j. Its main purpose is to attach contextual information about the current thread to the log record, such as a unique request identifier, a user ID, or a session ID, allowing finer-grained tracing and debugging in the logs.
Key Features
Storing Contextual Information: MDC allows you to store and access contextual information related to the current thread in the log records.
Log Formatting: Using the MDC variable in the log output mode allows you to insert this contextual information into the log message.
Thread-safe: MDC is thread-safe, each thread has a separate MDC instance and will not interfere with each other.

Use of MDC

1. Set the MDC before each log statement (not recommended: it forgoes unified management, and as soon as somebody, especially a newcomer, forgets it, the logs become unreliable or duplicated; it is also far too much hassle)

// Imports needed by this snippet
import org.slf4j.MDC;
import java.util.UUID;
import lombok.extern.slf4j.Slf4j;

@Slf4j
public class MyController {
    public void handleRequest(String json) {
        // Generate a unique request identifier
        String uuid = UUID.randomUUID().toString();

        // Put the UUID into the MDC before the first log statement
        MDC.put("uuid", uuid);

        log.info("request start: {}", json);
        // Process the request
        // ... other business logic, other log statements, etc. ...
        log.info("request end: {}", json);

        // Remove the UUID from the MDC once the request has been handled
        MDC.remove("uuid");
    }
}

Console Output

2024-11-01 14:30:00.123 [main] INFO  [f0e2c1a0-1b9d-4b7e-8c0a-123456789abc] MyController - request start: {"name": "John", "age": 30}
------ Any other log lines produced by the business logic in between carry the same uuid, so they are very easy to locate ------
2024-11-01 14:30:00.124 [main] INFO  [f0e2c1a0-1b9d-4b7e-8c0a-123456789abc] MyController - request end: {"name": "John", "age": 30}

2. Use a Filter

You could implement the Servlet Filter interface yourself, but that approach is a bit dated; move with the times — Spring Boot already provides what we need, so just use it.

OncePerRequestFilter Overview
The main purpose of OncePerRequestFilter is to ensure that the filtering logic is executed only once, even when a request is processed by multiple instances in multiple filter chains. This is important to avoid duplicate processing and potential performance issues.

OncePerRequestFilter is a filter class provided by the Spring Framework that is used to ensure that each request is processed only once. It inherits from GenericFilterBean and is often used in scenarios where some action needs to be performed on every HTTP request, such as logging, performance monitoring, or security checks.

Main features
Single execution: ensures that each request is processed only once, even if there are multiple instances in the filter chain.
Thread-safe: for multi-threaded environments to ensure thread-safety.
Flexibility: The filtering logic can be customized by overriding the doFilterInternal method.
Usage Scenarios
Logging: Logs are recorded at the beginning and end of each request.
Performance monitoring: record the processing time of each request.
Security check: authentication and authorization before the request reaches the controller.
Cross-origin handling: set response headers to support cross-origin requests.

Creating Custom Filters

Create a custom filter that extends OncePerRequestFilter, generates a unique request identifier, and sets it in the MDC:

// Imports needed by this snippet (Spring Boot 2.x; on Spring Boot 3 use jakarta.servlet.* instead of javax.servlet.*)
import java.io.IOException;
import java.util.UUID;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

import lombok.extern.slf4j.Slf4j;

@Component // Marked as a Spring component: component scanning picks it up automatically. You can also register the filter manually instead (see the registration sketch after this class).
@Slf4j
public class RequestLoggingFilter extends OncePerRequestFilter {
    /**
     * Override doFilterInternal so that the filter intercepts every interaction and runs this method.
     */
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException {
        // Generate a unique request identifier, one per request
        String uuid = UUID.randomUUID().toString().replaceAll("-", "");
        // Put the UUID into the MDC; each thread has its own MDC
        MDC.put("uuid", uuid);
        // Log the start of the request
        log.info("Request start, URL: {}", request.getRequestURL());

        // Continue the filter chain
        try {
            filterChain.doFilter(request, response);
        } catch (Exception e) {
            // Log the error
            log.error("Request processing failed, URL: {}", request.getRequestURL(), e);
            // Re-throw so that downstream unified exception handling can still receive and handle it
            throw e;
        } finally {
            // Log the end of the request
            log.info("Request end, URL: {}", request.getRequestURL());
            // Clear the MDC once the request has been processed
            // MDC.remove("uuid"); // remove a single key explicitly
            MDC.clear(); // Clear everything (each request/thread has its own MDC; clearing all avoids leftover context data after the interaction, as well as potential memory leaks or mixed-up information)
        }
    }
}
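
As mentioned in the @Component comment above, the filter can also be registered manually instead of relying on component scanning. A minimal sketch using Spring Boot's FilterRegistrationBean; the configuration class name and URL pattern are illustrative, and if you register the filter this way you would normally drop @Component from it so it is not registered twice.

import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FilterConfig {

    @Bean
    public FilterRegistrationBean<RequestLoggingFilter> requestLoggingFilter() {
        FilterRegistrationBean<RequestLoggingFilter> registration =
                new FilterRegistrationBean<>(new RequestLoggingFilter());
        registration.addUrlPatterns("/*"); // apply to every request
        registration.setOrder(1);          // run early so the uuid covers later filters too
        return registration;
    }
}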

Note: for unified exception handling, see my "Unified Exception Handling" post; these are all architectural must-haves.

Log optimization step 3: AOP

The log infrastructure is now basically in place, but what about producing the log output itself? Writing the log statements by hand in every interface is tedious...

Spring AOP can be used to wrap the controllers with around advice. Before each controller request is processed and after the response is returned, logging is added: interface name, controller name, request payload, response payload, and so on.

// Imports needed by this snippet (JSON.toJSONString is fastjson, assumed here as the serializer; swap in your own if you use Jackson etc.)
import com.alibaba.fastjson.JSON;
import lombok.extern.slf4j.Slf4j;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

@Component
@Aspect
@Slf4j
public class ControllerAop {
    /**
     * Pointcut expression: matches all controller-layer classes and their interface methods
     * The first "*": any return value.
     * The leading "..*": any package at any depth before the controller package.
     * The "..*" after "controller": any class under the controller package and its sub-packages.
     * The next "*": any method.
     * "(..)": any parameters.
     */
    @Pointcut("execution(public * ..*.controller..*.*(..))")
    public void privilege() {
    }

    /**
     * AOP advice method: @Around wraps every controller method matched by the pointcut, so each call passes through here first.
     *
     * @param proceedingJoinPoint an important AOP (aspect-oriented programming) interface, mainly used to implement around advice (@Around)
     * @return the result of the intercepted method
     * @throws Throwable
     */
    @Around("privilege()")
    public Object around(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        // ThreadLocalContext (custom class below) is a utility for storing and passing context information within the current thread. It relies on ThreadLocal so each thread has its own copy of the variables, avoiding data races in a multi-threaded environment.
        // ThreadLocalContext.set("aaa", "...");
        // PageHelper.clearPage(): clears paging information. PageHelper is a commonly used paging plugin; clearing at the start of each request ensures a new request is not affected by the previous request's paging settings.

        long start = System.currentTimeMillis();
        // getSignature() returns the signature of the intercepted method, including the method name, class name, etc.
        String className = proceedingJoinPoint.getSignature().getDeclaringTypeName();
        String methodName = proceedingJoinPoint.getSignature().getName();
        Object result = null;
        // Get the argument array of the intercepted method
        Object[] args = proceedingJoinPoint.getArgs();
        for (Object object : args) {
            if (object instanceof RequestVo) { // RequestVo is the project's own request wrapper type
                log.info("{}{}{}", "【" + className + "." + methodName + "】", "【Front-end/back-end interaction -- request】", null != object ? JSON.toJSONString(object) : object);
            }
        }
        // Continue executing the intercepted method and capture its return value
        result = proceedingJoinPoint.proceed();
        long end = System.currentTimeMillis();
        log.info("{}{}{}", "【" + className + "." + methodName + "】", "【Front-end/back-end interaction -- response -- elapsed time: " + (end - start) + "ms】", null != result ? JSON.toJSONString(result) : result);
        return result;
    }
}

ThreadLocal is a thread-local variable storage mechanism provided by Java, where each thread has its own independent copy of the ThreadLocal variable without interfering with each other. Its main role is to provide each thread with an independent copy of the variable, so as to realize the isolation between threads and data security. The following is a detailed explanation of ThreadLocal's role, meaning, and common usage scenarios.

Role
Thread Isolation:
Each thread has its own separate copy of the ThreadLocal variable, ensuring that different threads don't interact with each other.
For scenarios where you need to save state inside a thread and don't want other threads to access that state.
Simplify data transfer between threads:
Avoid passing parameters between method calls to reduce the complexity of method signatures.
Improve code readability and maintainability.

Usage Scenarios
Log Tracking:
In distributed systems, in order to trace the entire invocation chain of a request, you can store a unique request identifier (e.g., traceId) in ThreadLocal and output this identifier in the logs for problem localization and debugging purposes.
Transaction management:
In transaction processing, you can use ThreadLocal to store transaction context information, ensuring that multiple method calls in the same thread share the same transaction context.
For example, Spring's transaction manager, TransactionSynchronizationManager, uses ThreadLocal to manage the transaction context.
User session information:
In web applications, you can use ThreadLocal to store a user's session information (such as user ID, roles, and so on) so that you can use it in multiple method calls without passing parameters each time.
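
To close out this list, here is a minimal sketch of the raw mechanism: a hypothetical per-thread traceId holder, illustrative only (the article's own ThreadLocalContext class below generalizes the same idea to a map of values).

public class TraceIdHolder {
    // Each thread sees its own independent copy of this variable
    private static final ThreadLocal<String> TRACE_ID = new ThreadLocal<>();

    public static void set(String traceId) {
        TRACE_ID.set(traceId);
    }

    public static String get() {
        return TRACE_ID.get();
    }

    // Always call remove() when the request ends: thread pools reuse threads
    public static void remove() {
        TRACE_ID.remove();
    }
}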

The custom ThreadLocalContext class below is not strictly required; whether to use it depends on your own requirements. It really deserves its own write-up, but on second thought I'll skip that — lazy! A usage sketch follows the class.

import java.util.HashMap;
import java.util.Map;

/**
 * A utility class for storing and passing contextual information in the current thread. It uses the ThreadLocal mechanism so that each thread has its own copy of the variable, avoiding data contention in a multi-threaded environment.
 */
public class ThreadLocalContext {

    private static final ThreadLocal<Map<String, Object>> context = new ThreadLocal<Map<String, Object>>();

    /**
     * Clear the variables cached in the thread context.
     */
    public static void clear() {
        context.remove();
    }

    /**
     * Cache a variable in the thread context.
     */
    public static void set(String key, Object value) {
        Map<String, Object> map = context.get();
        if (map == null) {
            map = new HashMap<>();
            context.set(map);
        }
        map.put(key, value);
    }

    /**
     * Read a variable from the thread context.
     */
    public static Object get(String key) {
        Map<String, Object> map = context.get();
        Object value = null;
        if (map != null) {
            value = map.get(key);
        }
        return value;
    }

    /**
     * Remove a variable.
     */
    public static void remove(String key) {
        Map<String, Object> map = context.get();
        if (map != null) {
            map.remove(key);
        }
    }
}
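
A minimal usage sketch of the utility above, assuming it is populated near the start of a request (for example in a filter or the AOP advice) and cleared in a finally block so nothing leaks into the next request served by the same pooled thread. Method and variable names are illustrative.

public class ThreadLocalContextDemo {

    public void handleRequest(String userId) {
        try {
            // Store per-request data once, early in the call chain
            ThreadLocalContext.set("userId", userId);
            doBusiness();
        } finally {
            // Always clear: servlet containers reuse threads, so leftover values
            // would otherwise bleed into the next request handled by this thread
            ThreadLocalContext.clear();
        }
    }

    private void doBusiness() {
        // Any method further down the call stack can read the value
        // without it being passed as a parameter
        Object userId = ThreadLocalContext.get("userId");
        System.out.println("current user: " + userId);
    }
}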

Log output style:

2024-11-01 05:09:20.788 logback [http-nio-8080-exec-13] INFO  [ed80915251674756adf3d51c7c89bfdb] ... - Request start, URL: {http://......}
2024-11-01 05:09:20.789 logback [http-nio-8080-exec-13] INFO  [ed80915251674756adf3d51c7c89bfdb] ... - 【...】【Front-end/back-end interaction -- request】{"param1":"Hello","param2":123}
2024-11-01 05:09:20.790 logback [http-nio-8080-exec-13] INFO  [ed80915251674756adf3d51c7c89bfdb] ... - 【...】【Front-end/back-end interaction -- response -- elapsed time: 1ms】{"param1":"Hello","param2":123}
2024-11-01 05:09:20.800 logback [http-nio-8080-exec-13] INFO  [ed80915251674756adf3d51c7c89bfdb] ... - Request end, URL: {http://......}

Issue 2

Logging pitfalls... things you absolutely must pay attention to.

When you handle an exception and do not want to re-throw it, but you still need to show a hint and keep a traceable log so troubleshooting stays easy, then you must use the right overload of the logging method. Otherwise you might not see anything useful.

try {
    // some business operation
} catch (Exception e) {
    // Printing the exception: some use debug, some use info, some use error (the standard). But if you use the wrong method, none of them help: you only get a very short message text that is hard to troubleshoot!
    log.debug("debug exception message:" + e.getMessage());
    log.info("info exception message:" + e.getMessage());
    log.error("error exception message:" + e);
    /**
     * Whichever of the above you pick, it may only show something like: xxx exception message: / by zero
     * And that's it. You don't know which line or which class reported the error.
     * Some exceptions even have an empty getMessage(), so all you see is: xxx exception message:
     * When troubleshooting, you'll have nothing to go on. Compare what a full stack trace looks like:
     *
     *   java.lang.ArithmeticException: / by zero
     *       at ...(...:14)
     *       at ...(...:21)
     */

    // Be sure to use this form instead: the overload whose last argument is the exception object. By default the log does not record the exception's stack trace; only passing the exception object itself makes the stack show up.
    log.debug("Exception:", e);
    log.info("Exception:", e);
    log.error("Exception:", e);
    /** Very detailed log output, identifying which class, which line, which location:
     * Exception: java.lang.ArithmeticException: / by zero
     *     at ...(...:14)
     *     at ...(...:21)
     *     ... n more lines omitted ...
     */
}
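
The same overload rule applies in the unified exception handling mentioned earlier: pass the exception object itself so the full stack trace is recorded. A minimal sketch of such a handler; the class name and response shape are illustrative and not the author's actual "Unified Exception Handling" code.

import lombok.extern.slf4j.Slf4j;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

@Slf4j
@RestControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handle(Exception e) {
        // The Throwable is the last argument, so Logback records the full stack trace
        log.error("Unhandled exception: {}", e.getMessage(), e);
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("Internal error, please check the logs");
    }
}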

 

SpringBoot with LogBack logging (dynamic update level)

Logging is an important tool for diagnosing problems during development and operations. Spring Boot provides powerful log management features, but by default the log level is fixed at startup (as above, where the level is already configured as INFO when the application starts).

Sometimes we want to adjust the log level dynamically while the application is running, for more flexible debugging and monitoring. For example, production usually runs at INFO to save server resources and keep log throughput down, but once a problem occurs you want DEBUG-level detail, so the level needs to be updated dynamically.

Problem: switching the level from INFO -> DEBUG or DEBUG -> INFO by editing the project configuration file requires a restart, and restarting a production service outside a release is taboo! So how do we update the log level dynamically without restarting the application?

Way 1: SpringBoot monitoring (not recommended)

Let's start with why it is not recommended: exposing Actuator endpoints publicly in production is unwise. Of course, with proper security management it is not impossible.

Spring Boot Actuator is Spring Boot's monitoring module. It provides a number of native endpoints with built-in introspection and monitoring functionality: all beans in the application context, logging levels, runtime status checks, health metrics, environment variables and all kinds of other important metrics. It is therefore possible to update log levels dynamically through its built-in log monitoring.

1. Set up Spring Boot Actuator monitoring.

See the earlier "SpringBoot Monitoring" article, where this is described in detail.

2. Pay attention to which endpoints are exposed: expose and enable the loggers endpoint

# Expose only log-related endpoints
management.endpoints.web.exposure.include=loggers,logfile

# Enable the logfile endpoint
management.endpoint.logfile.enabled=true

# Configure logging levels
logging.level.root=INFO
logging.level.your.package.name=DEBUG

# Configure the log file name and path (file name here is an example; if both are set, logging.file.name takes precedence)
logging.file.name=application.log
logging.file.path=/var/log/myapp

Access localhost:8080/actuator/loggers and you will get JSON data like this:

{
    "levels": [
        "OFF",
        "ERROR",
        "WARN",
        "INFO",
        "DEBUG",
        "TRACE"
    ],
    "loggers": { 
        "ROOT": {
            "configuredLevel": "INFO",
            "effectiveLevel": "INFO"
        },
        "com": {
            "configuredLevel": null,
            "effectiveLevel": "INFO"
        }
        .... n more logger entries omitted here ...
    }
}

Short explanation: "ROOT", "com", and so on are the names of the loggers, which frankly mirror your package hierarchy. For each package level, all classes beneath it that produce logs use the corresponding logging level (ROOT is the topmost level).

configuredLevel: the configured level (the level you want configured)
effectiveLevel: the effective level (the level currently in use)

Also: the log level of a sub-package path is inherited from its parent level; conversely, setting a sub-package's log level individually does not affect the parent's level.

As above: if you update the ROOT log level, all the other loggers are updated with it. If you instead update the level of a specific logger, only its sub-hierarchy is updated; ROOT and "com" are not affected!

3. Dynamic updating of log levels

POST request: http://localhost:8080/actuator/loggers/{logger name}

Request body: {"configuredLevel": "DEBUG"}

Revisit http://localhost:8080/actuator/loggers, check the JSON, and you will see the level has been updated. At that point, just check the service logs.
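
If you prefer to script the change rather than use a REST client, the same POST can be issued from code. A minimal sketch with Spring's RestTemplate; the host, port, and target logger (ROOT) are assumptions for the example.

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.web.client.RestTemplate;

import java.util.Collections;
import java.util.Map;

public class LoggerLevelClient {

    public static void main(String[] args) {
        RestTemplate restTemplate = new RestTemplate();

        // Actuator expects a JSON body such as {"configuredLevel":"DEBUG"}
        HttpHeaders headers = new HttpHeaders();
        headers.setContentType(MediaType.APPLICATION_JSON);
        Map<String, String> body = Collections.singletonMap("configuredLevel", "DEBUG");

        // Assumed URL: the service runs locally and the target logger is ROOT
        String url = "http://localhost:8080/actuator/loggers/ROOT";
        restTemplate.postForEntity(url, new HttpEntity<>(body, headers), Void.class);

        System.out.println("Logger level update requested");
    }
}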

Way 2: update the LoggerContext via an interface (not highly recommended, but worth considering under the right conditions)

First, why it is not recommended: you have to write and maintain the interface yourself, and every update means calling it, which is not particularly convenient; but it works.

Updating the log level through an interface is worth considering in some situations, for example on a server that is not exposed externally, or an internal operations management service. The interface can be provided to internal management services and operated by the operations team (dynamically updating log levels exists for the benefit of operations in the first place).

Call the interface: http://IP:port/api/levelSetting?levelName=root&level=debug

// Imports needed by this snippet
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import lombok.extern.slf4j.Slf4j;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@Slf4j
@RestController
@RequestMapping("api")
public class HealthExaminationController {
    // Get the LoggerContext (Logback's logging context); the cast works because Logback is the bound SLF4J implementation
    private LoggerContext loggerContext = (LoggerContext) LoggerFactory.getILoggerFactory();

    /**
     * Set the logging level of the specified logger. [For logger names, query Spring Boot Actuator's log monitoring: http://localhost:port/actuator/loggers]
     *
     * @param levelName logger name
     * @param level     log level
     * @return result message
     */
    @GetMapping(value = "/levelSetting")
    public String levelSetting(@RequestParam("levelName") String levelName, @RequestParam("level") String level) {
        // Look up the logger by the given name (exists() returns null if no such logger has been created)
        Logger logger = loggerContext.exists(levelName);
        if (logger == null) {
            return levelName + ": the requested logger name does not exist!";
        }
        log.info("Before update: log level [{}]", logger.getLevel());
        // Parse the level parameter; the second argument is the default used when the parameter is not a valid level
        Level newLevel = Level.toLevel(level, null);
        if (newLevel == null) {
            return level + ": the requested logger level is not valid!";
        }
        // Set the logger's level
        logger.setLevel(newLevel);
        log.info("After update: log level [{}]", logger.getLevel());
        return "success! updated logger level to: " + logger.getLevel();
    }
}

Way 3: Nacos configuration center (recommended, but not covered here)

First, why it is not covered: this is microservices territory. Since this series is about Spring Boot, mixing Spring Cloud material in here would not fit well, so it is kept separate; look out for the subsequent SpringCloud and SpringCloud Alibaba updates. You can also refer to the Nacos official documentation.

1. Download Nacos and start the Nacos server.

2. Introducing Nacos configuration dependencies

3. Configuring Nacos integration

4. Test updating the configuration in the Nacos console