
13 "Soul-Searching" Java Whys: How Many Do You Know?

Views: 484 / Published: 2024-11-13 15:38:04

Hello everyone, I'm V Brother. Today I was reading the Alibaba Cloud developer community's "soul-searching" Java questions: the problems a big tech company expects you to think about when using Java. If you have worked for years but never at a large company, these are worth a look. V Brother has summarized these 13 whys; how many do you know? Like first, then read; it never hurts.

1. Why is it forbidden to use BigDecimal's equals method for equality comparisons?

BigDecimal's equals method has some pitfalls when comparing values, and it is usually not recommended for testing numeric equality. Here are the main reasons and the recommended alternative:

1. equals is stricter than a numeric comparison: it also checks precision (scale) and sign

equals compares not only the numeric value itself but also the scale and sign. For example, BigDecimal's equals method treats 1.0 and 1.00 as different values because their scale differs (i.e., they have different numbers of decimal places). Example:

BigDecimal a = new BigDecimal("1.0");
BigDecimal b = new BigDecimal("1.00");

System.out.println(a.equals(b)); // prints false

Even though 1.0 and 1.00 are numerically equal, equals returns false.

2. equals also treats zeros of different scales as unequal

In BigDecimal, the sign of a zero is normalized during parsing (BigInteger has no negative zero), so new BigDecimal("0.0") and new BigDecimal("-0.0") do in fact compare equal. The real trap is that zeros written with different scales are not equal under equals. Example:

BigDecimal zero1 = new BigDecimal("0.0");
BigDecimal zero2 = new BigDecimal("0.00");

System.out.println(zero1.equals(zero2)); // prints false

This can lead to wrong decisions, because in most business logic every representation of zero is considered the same value.

Recommended alternative: use the compareTo method

To avoid these problems, it is recommended to use compareTo. The compareTo method compares only the numeric magnitude and is not concerned with scale or sign-of-zero differences. So when you need to decide whether two BigDecimal values are numerically equal, compareTo makes more sense:

BigDecimal a = new BigDecimal("1.0");
BigDecimal b = new BigDecimal("1.00");

System.out.println(a.compareTo(b) == 0); // prints true

In this case 1.0 and 1.00 are considered equal: even though their scales differ, compareTo returns 0.

Summary

  • Do not use equals: it also compares scale and sign, which easily leads to wrong equality results.
  • Prefer compareTo: it compares only the numeric value, ignoring scale differences, which matches what business logic usually means by "equal".
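As a runnable illustration of the two methods side by side (the class and method names are mine, for demonstration only):

```java
import java.math.BigDecimal;

class BigDecimalCompareDemo {
    // true iff the two decimal strings are numerically equal (scale ignored)
    public static boolean numericallyEqual(String x, String y) {
        return new BigDecimal(x).compareTo(new BigDecimal(y)) == 0;
    }

    // strict equality: same unscaled value AND same scale
    public static boolean strictlyEqual(String x, String y) {
        return new BigDecimal(x).equals(new BigDecimal(y));
    }

    public static void main(String[] args) {
        System.out.println(strictlyEqual("1.0", "1.00"));    // false: scales differ
        System.out.println(numericallyEqual("1.0", "1.00")); // true
    }
}
```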

2. Why is it forbidden to construct a BigDecimal directly from a double?

When using BigDecimal, it is not recommended to pass a double directly to the constructor. The double type is represented in Java as a binary floating-point number, which introduces precision errors and can lead to inaccurate results. Example:

double d = 0.1;
BigDecimal bd = new BigDecimal(d);
System.out.println(bd); // prints 0.1000000000000000055511151231257827021181583404541015625

Analysis of causes

  1. Precision limits of binary floating point
    double uses the IEEE 754 binary representation, in which a decimal such as 0.1 cannot be represented exactly and is stored as an approximation. Passing that approximation straight into the BigDecimal constructor produces a BigDecimal that carries the error.

  2. Inaccurate results in downstream calculations
    In financial calculations and other scenarios requiring high precision, constructing BigDecimal directly from a double risks accumulating errors that affect the final result; for example, the error may be magnified over many calculations or summations.

Recommended Alternatives

  • Construct BigDecimal from a string
    Passing the number in as a string avoids precision errors, because the string constructor introduces no binary approximation.
  BigDecimal bd = new BigDecimal("0.1");
  System.out.println(bd); // prints 0.1
  • Use BigDecimal.valueOf(double)
    Another safe option is BigDecimal.valueOf(double), which first converts the double to its String representation (via Double.toString) and then constructs the BigDecimal, avoiding the loss of precision.
  BigDecimal bd = BigDecimal.valueOf(0.1);
  System.out.println(bd); // prints 0.1

Summary

  • Avoid constructing BigDecimal directly from a double, to keep binary floating-point error out of your values.
  • Prefer the string constructor, or use BigDecimal.valueOf(double), to guarantee accuracy.
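The three construction routes can be compared in one runnable snippet (the helper names are mine, for demonstration only):

```java
import java.math.BigDecimal;

class BigDecimalConstructDemo {
    // The double constructor captures the binary approximation of 0.1
    public static String viaDoubleCtor() { return new BigDecimal(0.1).toString(); }
    // valueOf goes through Double.toString, yielding the expected "0.1"
    public static String viaValueOf()   { return BigDecimal.valueOf(0.1).toString(); }
    // The string constructor introduces no approximation at all
    public static String viaString()    { return new BigDecimal("0.1").toString(); }

    public static void main(String[] args) {
        System.out.println(viaDoubleCtor()); // 0.1000000000000000055511151231257827021181583404541015625
        System.out.println(viaValueOf());    // 0.1
        System.out.println(viaString());     // 0.1
    }
}
```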

3. Why is it forbidden to use Apache BeanUtils for copying properties?

Apache BeanUtils is an early tool library for Java bean property copying, but its use for property copying is generally not recommended in modern Java development, especially in performance-sensitive scenarios. The reasons for this include the following:

1. Performance issues

Apache BeanUtils.copyProperties() relies heavily on reflection: every copy performs lookups and reflective calls on fields and methods. Reflection is flexible but slow, and in scenarios with many objects or frequent copies it creates a significant performance bottleneck.

In comparison, Spring BeanUtils and tools such as FieldUtils from Apache Commons Lang are optimized to copy properties more efficiently. Where performance requirements are high, frameworks such as MapStruct or Dozer generate mapping code at compile time and avoid runtime reflection altogether.

2. Type conversion issues

Apache BeanUtils implicitly performs type conversion when attribute types do not match. For example, it converts the String "123" to an Integer, and throws an exception if the conversion fails. Such implicit conversion can introduce hard-to-spot errors when processing data and is not suitable for every scenario.

When exact property copying is required, it is usually preferable to skip mismatched types, or to fail explicitly, rather than convert implicitly. Spring BeanUtils, for example, performs no implicit conversion, making it suitable for strict attribute-matching scenarios.

3. Potential security issues

Apache BeanUtils and its PropertyUtils component have had security concerns around their reflection operations. Historically, BeanUtils and PropertyUtils had vulnerabilities that allowed a malicious user to exploit the reflection mechanism through carefully constructed input, executing system commands or loading malicious classes. Although these vulnerabilities have been fixed in modern releases, the library's aging architecture and implementation make it ill-suited to today's security requirements.

4. Lack of deep copy support for nested objects

Apache BeanUtils supports only shallow copies: it copies the first-level properties of an object and cannot recursively copy nested objects. If an object contains a complex nested structure, copying it with BeanUtils easily produces unexpected behavior or data loss. Tools such as MapStruct or Dozer, by contrast, can deep-copy nested objects and are better suited to complex object graphs.

Recommended Alternatives

  1. Spring BeanUtils
    Spring's BeanUtils.copyProperties() offers better performance and better type safety. It performs no type conversion and provides a convenient way to exclude properties for selective copying.

  2. MapStruct
    MapStruct is an annotation-based object-mapping framework. It generates mapping code at compile time, completely avoiding the performance overhead of reflection, and supports deep copying of complex objects and nested attributes. It is the first choice when performance requirements are high.

  3. Dozer
    Dozer supports more flexible mapping configuration and deep copying, and suits complex object structures. It can handle nested attribute mapping and type conversion, and is highly customizable.

Summary

Apache BeanUtils cannot meet the performance, security, and flexibility requirements of modern Java development. It is recommended to replace it with a more efficient, secure, and flexible alternative such as Spring BeanUtils or MapStruct.
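For contrast, here is a dependency-free sketch of what compile-time mappers such as MapStruct effectively generate: plain getter/setter copying, with no reflection and full compile-time type checking. UserEntity, UserDto, and the method names are hypothetical, for illustration only.

```java
// Hypothetical classes used only for illustration.
class UserEntity {
    private String name;
    private Integer age;
    public String getName() { return name; }
    public void setName(String n) { this.name = n; }
    public Integer getAge() { return age; }
    public void setAge(Integer a) { this.age = a; }
}

class UserDto {
    private String name;
    private Integer age;
    public String getName() { return name; }
    public void setName(String n) { this.name = n; }
    public Integer getAge() { return age; }
    public void setAge(Integer a) { this.age = a; }
}

class ExplicitCopyDemo {
    // Plain getter/setter copying: no reflection, no implicit type conversion.
    // This is essentially the shape of code a compile-time mapper generates.
    public static UserDto toDto(UserEntity src) {
        UserDto dst = new UserDto();
        dst.setName(src.getName());
        dst.setAge(src.getAge());
        return dst;
    }

    public static void main(String[] args) {
        UserEntity e = new UserEntity();
        e.setName("v-brother");
        e.setAge(18);
        UserDto dto = toDto(e);
        System.out.println(dto.getName() + ", " + dto.getAge()); // v-brother, 18
    }
}
```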

4. Why must dates be formatted with a lowercase y for the year, not an uppercase Y?

When formatting dates you must use y rather than Y to represent the year, because y and Y have different meanings in Java and other date-formatting tools:

  1. y means calendar year
    y is the standard pattern letter for the year in the ordinary calendar sense, such as 2024. With y, the date formatter prints exactly the year you expect:
   SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
   System.out.println(sdf.format(new Date())); // prints e.g. 2024-11-10
  2. Y means week year
    Y stands for the "week year" (the ISO week-numbering year): a year computed from week boundaries rather than calendar dates. Which year a date belongs to depends on which week it falls in; if a day falls in the first week of a new year, it is assigned to the new year's week year.

    For example, the last days of a year may fall in the first week of the following year and therefore be grouped into the following year's week year. Likewise, the first days of a new year may fall within the last week of the previous year and be attributed to it. This can lead to unexpected year differences in date and time processing.

   SimpleDateFormat sdf = new SimpleDateFormat("YYYY-MM-dd");
   System.out.println(sdf.format(new Date())); // may print a year different from the calendar year

Potential problems with Y

Using Y for the year can trigger date errors, because it relies on a week-based calculation that does not always agree with the calendar year. Example:

  • December 31, 2024 is treated as part of the 2025 week year, so formatting it with YYYY yields 2025-12-31.
  • In cross-year calculations or date-specific logic, Y can produce the wrong year, because the week year does not always correspond to the calendar year as commonly understood.

When to useY

Y is generally only appropriate for date formats that must be ISO 8601 week-date compliant, especially those containing an ISO week number (e.g. "2024-W01-1" for the Monday of week 1 of 2024). In all other cases, use y for the calendar year.

Summary

  • Use y for the ordinary calendar year, to avoid date-formatting errors.
  • Avoid Y for the year: parse and display the ISO week-year format only when it is genuinely required.
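The pitfall is easy to reproduce with a fixed date: December 31, 2024 (per the bullet above) formats differently under yyyy and YYYY. Locale.US is pinned here so the week rules are deterministic; the class and method names are mine.

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.Locale;

class WeekYearDemo {
    // Fixed date so the output is deterministic: 2024-12-31 (a Tuesday)
    private static final Date DEC_31_2024 =
            new GregorianCalendar(2024, Calendar.DECEMBER, 31).getTime();

    public static String withCalendarYear() {
        return new SimpleDateFormat("yyyy-MM-dd", Locale.US).format(DEC_31_2024);
    }

    public static String withWeekYear() {
        // Under US week rules, the week of 2024-12-29..2025-01-04 is week 1 of 2025,
        // so the week year of 2024-12-31 is 2025
        return new SimpleDateFormat("YYYY-MM-dd", Locale.US).format(DEC_31_2024);
    }

    public static void main(String[] args) {
        System.out.println(withCalendarYear()); // 2024-12-31
        System.out.println(withWeekYear());     // 2025-12-31, the wrong year!
    }
}
```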

5. Why is type alignment important when using the ternary operator?

Type alignment is important when using the ternary operator because its two branches are inferred to a common type. If the branches have different types, the Java compiler performs type promotion or automatic conversion, which can lead to unexpected type changes and potential errors. Here are the reasons and the details to watch for:

1. The ternary operator performs automatic type promotion

The result type of the ternary operator is inferred from the types of the true and false branches. To produce a consistent result, Java automatically promotes the narrower type to the one with higher precision. For example, if one branch returns int and the other returns double, Java promotes the int to double:

int x = 5;
double y = 10.5;
double result = (x > 0) ? x : y; // result type is double
System.out.println(result); // prints 5.0

Here the returned 5 is promoted to 5.0. The code is harmless in this example, but in some cases this automatic promotion leads to unexpected precision loss or type-mismatch problems.

2. Auto-boxing and unboxing can trigger NullPointerException

In Java, aligning primitive and wrapper types requires special care. The ternary operator tries to align wrapper and primitive branches to a single type, which can trigger auto-boxing and unboxing; if the selected branch is null and must be unboxed, a NullPointerException is thrown:

Integer a = null;
int b = 10;
int result = (b > 5) ? a : b; // the condition selects a; unboxing the null value throws a NullPointerException!

Because a is null and Java tries to unbox it to int, a NullPointerException is thrown. Avoid this by keeping the branch types aligned, or by making sure a possibly-null wrapper object is never forced through unboxing.

3. Inconsistency in return value types may lead to compilation errors

If the compiler cannot convert the two branch types of the ternary operator to a type compatible with the target, the code simply fails to compile. Example:

int x = 5;
String y = "10";
String result = (x > 0) ? x : y; // compile error: int cannot be converted to String

Here int and String cannot be unified into the target type, so the compiler rejects the code. If you really do want to mix types, convert explicitly, or declare the result with a common supertype such as Object:

Object result = (x > 0) ? String.valueOf(x) : y; // both branches are now String; result is declared as Object

4. Type alignment improves code readability

Keeping the ternary operator's branch types consistent makes code clearer, easier to understand, and easier to maintain. Type alignment avoids the confusion of conversions and auto-promotion, making the code more predictable:

double result = (condition) ? 1.0 : 0.0; // return double

Summary

  • Maintain type consistency: ensure the true and false branches have the same type, to avoid accidental type promotion.
  • Beware of auto-boxing and unboxing: keep null out of ternary expressions that may unbox.
  • Choose a suitable common type when the branches differ, e.g. Object, or convert explicitly.
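A minimal runnable sketch of the unboxing trap (pick and npeWhenSelected are names I made up for the demo):

```java
class TernaryUnboxDemo {
    // Mixing Integer and int forces the ternary result type to int,
    // so a null Integer branch is auto-unboxed when selected -> NullPointerException.
    public static int pick(boolean cond, Integer boxed, int primitive) {
        return cond ? boxed : primitive; // unboxes 'boxed' only when cond is true
    }

    public static boolean npeWhenSelected() {
        try {
            pick(true, null, 10); // selects the null branch
            return false;
        } catch (NullPointerException e) {
            return true; // unboxing null threw, as expected
        }
    }

    public static void main(String[] args) {
        System.out.println(pick(false, null, 10)); // 10: the null branch is never evaluated
        System.out.println(npeWhenSelected());     // true: the null branch was unboxed
    }
}
```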

6. Why is it recommended to initialize the size of a HashMap?

Initializing a HashMap's capacity improves performance and reduces memory waste. By setting a proper initial capacity, you reduce the number of times the HashMap resizes, making the program more efficient. The detailed reasons and recommendations:

1. Reduce the number of expansions and improve performance

HashMap's default initial capacity is 16. When the number of entries exceeds the load-factor threshold (default 0.75, i.e. 75% of capacity), the HashMap automatically resizes to twice its current capacity. Resizing recomputes hashes and redistributes entries into new buckets, a time-consuming process that can significantly hurt performance, especially when there are many elements.

By setting an appropriate initial capacity, you can avoid or reduce resize operations and improve the HashMap's access efficiency.

2. Save memory and avoid unnecessary memory overheads

If a large amount of data is expected but no capacity is specified, the HashMap may resize several times; each resize allocates new memory and copies the existing entries into it, wasting memory. If the capacity can be reasonably estimated when the HashMap is created, enough space can be allocated once, avoiding the cost of repeated allocation.

3. Avoiding thread-safety issues associated with capacity expansion

In concurrent environments, frequent resizing can expose thread-safety problems; even ConcurrentHashMap cannot completely avoid the performance and consistency costs of resizing. Initializing a suitable capacity reduces the risks that resizing brings in concurrent code.

How to estimate the right capacity

  1. Estimate the data volume: if the HashMap is expected to hold n elements, set the initial capacity to n / 0.75, rounded up; HashMap itself then rounds this up to the nearest power of two.
   int initialCapacity = (int) Math.ceil(n / 0.75);
   Map<String, String> map = new HashMap<>(initialCapacity);
  2. Powers of two: HashMap always grows its capacity in powers of 2, because the bucket index can then be computed with a cheap bitwise AND during hashing. Setting the initial capacity to a power of 2 therefore also gives a more even hash distribution.

Sample code

int expectedSize = 1000; // Estimate the number of key-value pairs to be stored
int initialCapacity = (int) Math.ceil(expectedSize / 0.75);
HashMap<String, Integer> map = new HashMap<>(initialCapacity);

Summary

Initializing the HashMap capacity has the following benefits:

  • improve performance: Reduce the number of expansions and optimize access efficiency.
  • Save memory: Avoid memory waste caused by multiple expansions.
  • Enhancing Thread Safety: Reduce the risk of thread insecurity due to scaling in concurrent environments.

Reasonably initializing HashMap capacity is particularly important for high-performance applications; when storing large amounts of data it can significantly improve a program's efficiency.
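A small helper makes the rule concrete. capacityFor below mirrors the n / 0.75 estimate above (similar in spirit to Guava's Maps.newHashMapWithExpectedSize, though not its exact formula); the names are mine.

```java
import java.util.HashMap;
import java.util.Map;

class HashMapCapacityDemo {
    // Enough capacity that 'expected' entries never trigger a resize:
    // ceil(expected / loadFactor); HashMap rounds it up to a power of two internally.
    public static int capacityFor(int expected) {
        return (int) Math.ceil(expected / 0.75);
    }

    // Inserts 'expected' entries into a pre-sized map and returns the size
    public static int fill(int expected) {
        Map<String, Integer> map = new HashMap<>(capacityFor(expected));
        for (int i = 0; i < expected; i++) {
            map.put("key-" + i, i);
        }
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(capacityFor(1000)); // 1334
        System.out.println(fill(1000));        // 1000, inserted without any resize
    }
}
```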

7. Why is it prohibited to use Executors to create thread pools?

When creating thread pools in Java, it is not recommended to use the shortcut methods provided by Executors (such as newFixedThreadPool(), newCachedThreadPool(), etc.); instead, configure the pool manually through the ThreadPoolExecutor constructor. The point of this approach is to avoid the hidden risks of pools created by Executors and to ensure the pool configuration actually matches your requirements. The specific reasons are as follows:

1. Opaque task queue lengths lead to OOM risk

  • newFixedThreadPool() and newSingleThreadExecutor() use the unbounded queue LinkedBlockingQueue. An unbounded queue accepts an unlimited number of tasks; once tasks pile up, the queue can quickly consume a great deal of memory and cause an OutOfMemoryError (OOM).

  • newCachedThreadPool() uses a SynchronousQueue, which holds no tasks at all: each arriving task must be picked up immediately by a free thread, otherwise a new thread is created. When tasks arrive faster than threads are retired, the thread count climbs rapidly and can cause an OOM.

2. Uncontrollable number of threads leading to resource exhaustion

A pool created with newCachedThreadPool() has no upper bound on the number of threads; a burst of requests in a short period makes the thread count explode and exhausts system resources. newFixedThreadPool() and newSingleThreadExecutor() cap the core thread count but not the queue length, so memory can still run out.

In scenarios where business requirements are uncertain or tasks are proliferating, it is recommended to explicitly limit the maximum number of threads and queue length of the thread pool to better control the use of system resources and avoid performance problems caused by uncontrollable number of threads.

3. Lack of reasonable denial policy controls

  • Thread pools created by Executors default to the AbortPolicy rejection policy, which throws a RejectedExecutionException when the pool is saturated.
  • Different business scenarios may need different rejection policies; for example, CallerRunsPolicy (the submitting thread runs the task itself) or DiscardOldestPolicy (discard the oldest queued task) can be used to balance task processing.

When you create a ThreadPoolExecutor manually, you can specify a rejection policy suited to the business, handling a saturated pool more flexibly and avoiding exceptions or performance degradation.

4. Flexible configuration of core parameters

The ThreadPoolExecutor constructor lets you set the following parameters manually, configuring the pool flexibly for the business:

  • corePoolSize: Number of core threads to avoid frequent destruction and rebuilding of idle threads.
  • maximumPoolSize: Maximum number of threads to control the maximum resources the thread pool can use.
  • keepAliveTime: The survival time of non-core threads, suitable for controlling the frequency of thread destruction.
  • workQueue: Task queue type and length for easy management of task backlogs.

The proper configuration of these parameters can effectively balance the performance, resource consumption and task processing capacity of the thread pool, avoiding situations that do not meet the requirements when using the default configuration.

Recommended way to create a thread pool

It is recommended to configure the thread pool directly through the ThreadPoolExecutor constructor, for example:

int corePoolSize = 10;
int maximumPoolSize = 20;
long keepAliveTime = 60L;
BlockingQueue<Runnable> workQueue = new ArrayBlockingQueue<>(100);

ThreadPoolExecutor executor = new ThreadPoolExecutor(
    corePoolSize,
    maximumPoolSize,
    keepAliveTime,
    TimeUnit.SECONDS,
    workQueue,
    new ThreadPoolExecutor.CallerRunsPolicy() // rejection policy
);

Summary

Creating a thread pool with Executors carries an easy-to-miss risk of exhausting system resources or piling up tasks. Manually configuring a ThreadPoolExecutor gives better control over the pool's behavior so it matches the actual business load and resource constraints. For robustness and controllability, avoid the Executors shortcuts.
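Putting the pieces together, a bounded pool with CallerRunsPolicy degrades gracefully under load, applying back-pressure instead of throwing or exhausting memory. The pool sizes and queue length below are arbitrary demo values, and the class name is mine.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

class BoundedPoolDemo {
    // Submits n trivial tasks to a bounded pool and returns how many completed
    public static int runTasks(int n) {
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                2,                                   // corePoolSize
                4,                                   // maximumPoolSize
                60L, TimeUnit.SECONDS,               // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(10),        // bounded queue: no hidden OOM risk
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure, not an exception

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < n; i++) {
            executor.execute(completed::incrementAndGet);
        }
        executor.shutdown();
        try {
            executor.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }

    public static void main(String[] args) {
        // Every task runs, some possibly on the submitting thread itself
        System.out.println(runTasks(20)); // 20
    }
}
```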

8. Why do I need to be careful about using the subList method in an ArrayList?

ArrayList's subList method must be used with caution: it has a number of potential pitfalls that easily lead to unexpected errors and hard-to-troubleshoot exceptions. The reasons and precautions:

1. subList Returns a view, not a standalone copy

ArrayList's subList method returns a view of part of the original list, not an independent copy. Modifications made through the subList directly affect the original list, and vice versa:

ArrayList<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
List<Integer> subList = list.subList(1, 4);
subList.set(0, 10); // modify through the subList
System.out.println(list); // the original list is affected too: [1, 10, 3, 4, 5]

This mechanism of sharing views may trigger unexpected modifications in some scenarios, resulting in data being accidentally changed, thus affecting the integrity and correctness of the original data structure.

2. Structural-modification restrictions of subList

After a structural modification to the ArrayList itself (rather than the subList view), such as an add or remove that changes the list's size, operating on the subList throws a ConcurrentModificationException. The subList and the ArrayList share structural-modification state; once one side makes such a change, the other's view is invalidated:

ArrayList<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
List<Integer> subList = list.subList(1, 4);
list.add(6); // structural change to the original list
subList.get(0); // throws ConcurrentModificationException

This restriction means subList is unsuitable for scenarios where the list changes frequently; otherwise concurrent-modification exceptions are easy to trigger.

3. Batch operations such as removeAll between subList and ArrayList may cause errors

The view list produced by subList can misbehave in batch-removal operations: calling a method such as removeAll on a subList may behave inconsistently or abnormally. For an ArrayList's subList, some batch-modification methods (e.g. removeAll, retainAll) can leave the backing ArrayList in an unpredictable state after view elements are deleted, even triggering an IndexOutOfBoundsException or other exceptions.

4. Recommended safe use

If an independent sublist is required, create a copy via new ArrayList<>(list.subList(start, end)), which avoids subList's shared-view problem:

ArrayList<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
ArrayList<Integer> subListCopy = new ArrayList<>(list.subList(1, 4)); // create a copy
list.add(6); // modify the original list
subListCopy.get(0); // safe, unaffected

Summary

Using ArrayList's subList method requires attention to the following points:

  • View mechanism: subList is only a view of the original list; modifying one affects the other.
  • Structural-modification restriction: structurally modifying the original list and then accessing the subList throws ConcurrentModificationException.
  • Batch-operation issues: batch operations on a subList may trigger unforeseen errors.
  • Prefer creating a copy: if you need to manipulate a sublist independently, create a copy of the subList to avoid these potential problems.

Using subList carefully avoids unexpected errors and improves code robustness.
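Both behaviors, the invalidated view and the safe copy, can be shown in one runnable snippet (the method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.ConcurrentModificationException;
import java.util.List;

class SubListDemo {
    // A copy survives structural changes to the original list
    public static List<Integer> safeCopy() {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        List<Integer> copy = new ArrayList<>(list.subList(1, 4));
        list.add(6); // structural change: the copy is unaffected
        return copy;
    }

    // A view is invalidated by structural changes to the original list
    public static boolean viewFailsAfterStructuralChange() {
        List<Integer> list = new ArrayList<>(Arrays.asList(1, 2, 3, 4, 5));
        List<Integer> view = list.subList(1, 4);
        list.add(6); // structural change: the view is now stale
        try {
            view.get(0);
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // fail-fast check fired, as expected
        }
    }

    public static void main(String[] args) {
        System.out.println(safeCopy());                       // [2, 3, 4]
        System.out.println(viewFailsAfterStructuralChange()); // true
    }
}
```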

9. Why is it forbidden to remove/add elements in a foreach loop?

In Java, performing remove or add on a collection's elements inside a foreach loop is forbidden, mainly because it can throw a ConcurrentModificationException or make the loop behave in unexpected ways. The specific reasons are as follows:

1. ConcurrentModificationException

Directly modifying the collection inside a foreach loop (e.g. removing or adding an element) causes a concurrent-modification problem. Under the hood, foreach traverses via the collection's Iterator. Most collection classes (such as ArrayList, HashSet, etc.) maintain a modCount counter recording how many times the collection's structure has changed. If you change the structure while traversing (removing or adding elements), modCount changes, the Iterator detects the structural modification, and it throws a ConcurrentModificationException to prevent the program from behaving unpredictably.

Example:

List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
for (String s : list) {
    if (s.equals("b")) {
        list.remove(s); // throws ConcurrentModificationException
    }
}

In the code above, the foreach loop traverses list; removing the element "b" changes the structure of list, so the Iterator detects the concurrent modification and throws an exception.

2. Unpredictable behavior

Even when no ConcurrentModificationException is thrown, modifying a collection inside a foreach loop can produce unpredictable behavior. remove or add changes the collection's size and contents, which may disturb the iteration order, cause elements to be skipped, or even produce an infinite loop.

Example:

List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
for (String s : list) {
    if (s.equals("b")) {
        list.add("e"); // changes the size of the collection
    }
    System.out.println(s);
}

In this example, add inserts the new element "e" into list, modifying the collection's structure. With a fail-fast collection such as ArrayList this also ends in a ConcurrentModificationException on the next iteration; with collections that are not fail-fast, the iterator may simply never see the new element, so the traversal order or output differs from what you expect.

3. Use the iterator's remove() method

If you need to delete elements inside a loop, it is recommended to use the Iterator explicitly. Iterator provides a safe remove() method that deletes the current element during traversal without raising a ConcurrentModificationException.

Example:

List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()) {
    String s = iterator.next();
    if (s.equals("b")) {
        iterator.remove(); // use Iterator's remove() method
    }
}

With iterator.remove(), elements can be deleted safely during traversal without throwing a concurrent-modification exception.

Summary

Performing remove or add directly inside a foreach loop is unsafe, for these main reasons:

  • ConcurrentModificationException: directly modifying the collection trips the iterator's concurrent-modification check, producing the exception.
  • Unpredictable behavior: modifying the collection's structure may cause missed elements, wrong ordering, or broken program logic.
  • Use Iterator instead: Iterator's remove() method avoids these problems and deletes elements safely.

Therefore, the correct approach is to handle element deletion or modification explicitly through an Iterator, rather than modifying the collection inside a foreach loop.
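Since Java 8 there is an even more concise safe option than the explicit iterator: Collection.removeIf, which performs the iterator bookkeeping internally. A small sketch (the class and method names are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class RemoveIfDemo {
    public static List<String> withoutB() {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c", "d"));
        // removeIf (Java 8+) iterates and removes internally,
        // so no ConcurrentModificationException is possible here
        list.removeIf(s -> s.equals("b"));
        return list;
    }

    public static void main(String[] args) {
        System.out.println(withoutB()); // [a, c, d]
    }
}
```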

10. Why are engineers prohibited from using the APIs in logging systems (Log4j, Logback) directly?

In many engineering practices, engineers are prohibited from directly using the APIs of logging implementations (e.g. Log4j, Logback), mainly for the following reasons:

1. Separation of logging configuration and implementation

Using a logging implementation's API directly couples logging logic tightly to the application's business logic, making it hard to separate logging configuration from implementation. Modern logging frameworks (e.g. Log4j, Logback) let you configure log levels, output formats, output destinations, and so on through external configuration files, instead of hard-coding them in application code. Calling the implementation's API directly ties the logging configuration to business code, which is hard to modify and maintain.

Recommended practice: log through a logging abstraction interface (such as org.slf4j.Logger) rather than depending directly on a specific logging implementation. This approach gives greater flexibility: the logging implementation can be changed via a configuration file without modifying the code.

2. Flexibility and scalability issues

If engineers use a logging library's API directly, switching logging frameworks (e.g. from Log4j to Logback or another framework) requires changing a great deal of code, increasing the system's coupling and maintenance cost. Using a logging abstraction layer such as SLF4J avoids this problem: SLF4J is an abstraction, and the underlying implementation can be swapped without changing business code.

Example

// Not recommended: using the Log4j API directly (MyService is a placeholder class name)
import org.apache.log4j.Logger;
Logger logger = Logger.getLogger(MyService.class);
logger.info("This is a log message");

// Recommended: log through the SLF4J interface
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
Logger logger = LoggerFactory.getLogger(MyService.class);
logger.info("This is a log message");

Using SLF4J allows you to flexibly switch logging implementations between different environments without modifying the code.

3. Logging inconsistent with debugging

If engineers use the logging framework's API directly, they may not follow a consistent logging policy: log levels, formats, and content can vary across the codebase, producing confusing, hard-to-trace logs. A unified logging abstraction interface (e.g., SLF4J) and a standardized logging policy (via AOP or the logging framework's own features) help keep logging consistent and standardized.

Best practices

  • Ensure that all logging follows a uniform format and specification by encapsulating logging methods through a unified log management class or tool class.
  • Uniformly use the appropriate log level (e.g., DEBUG, INFO, WARN, ERROR) and a standardized format.

4. Performance Impact of Logs

Logging can have an impact on the performance of an application, especially if logging is too frequent or the output is excessive. Using the logging framework's API directly may leave you without flexible control over the frequency, content, or filtering of log output, which can cause performance problems. Many logging frameworks (such as Log4j and Logback) provide advanced configuration options, such as asynchronous logging and log buffering, that can significantly improve performance.

Recommended practices

  • Use the asynchronous logging capabilities provided by the logging framework to improve performance.
  • Configure appropriate logging levels to avoid outputting too much debugging information in the production environment.

5. Harmonization and standardization of log management

In team development, direct use of the logging framework's API can lead to different developers not following a uniform specification for logging in different modules, resulting in inconsistent log formats, inconsistent information, and even duplicate log records. By using a logging management tool class or wrapper class, you can ensure that all developers follow a uniform logging strategy.

typical example

  • Create a unified LoggerFactory factory class to generate logger objects.
  • Uniformly define log levels and output formats to ensure consistent log output.
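As an illustration of such a wrapper, here is a minimal sketch of a unified logging utility class. To stay dependency-free it uses the JDK's built-in java.util.logging rather than SLF4J; the class name, prefix format, and module names are hypothetical:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical utility class: every module logs through this wrapper,
// so message format and level usage stay consistent team-wide.
public class LogUtil {

    // One standardized message format decided in a single place: "[module] message"
    static String format(String module, String message) {
        return "[" + module + "] " + message;
    }

    // All INFO-level logging goes through this single method
    public static void info(Class<?> clazz, String module, String message) {
        Logger.getLogger(clazz.getName()).log(Level.INFO, format(module, message));
    }

    // All WARN-level logging likewise
    public static void warn(Class<?> clazz, String module, String message) {
        Logger.getLogger(clazz.getName()).log(Level.WARNING, format(module, message));
    }
}
```

A caller would then write something like LogUtil.info(OrderService.class, "order", "order created") (OrderService is a hypothetical caller): the format and level policy live in one class instead of being repeated at every call site.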

Summary

Engineers are prohibited from using the APIs of logging systems (e.g., Log4j, Logback) directly mainly for the following reasons:

  1. Decoupling logging implementation and business logic: By using a logging abstraction layer such as SLF4J, it is easier to switch logging frameworks and avoid hard-coding.
  2. Increased flexibility and maintainability: avoid hard dependencies on a specific framework's API in your application and improve the flexibility and consistency of the logging configuration.
  3. Standardize logging behavior: Enhance readability and traceability by encapsulating logging and ensuring uniformity in log level, format and content.
  4. optimize performance: Improve the performance of your logging system and reduce the impact on your application by configuring advanced features of your logging framework, such as asynchronous logging.
  5. Unified Log Management: Avoid team members using inconsistent logging across modules and ensure standardization of log output.

The best practice is to log through a logging abstraction layer (e.g., SLF4J), and at the same time to ensure efficient, standardized, and flexible logging through unified configuration and invocation of log management tool classes.

11. Why are developers advised to use inheritance with caution?

In object-oriented programming (OOP), inheritance is a common form of code reuse that allows one class to inherit attributes and behaviors from another class. However, while inheritance can improve code reusability, excessive or improper use of it increases code complexity and leads to a number of potential problems. Developers are therefore advised to be careful when using inheritance, for the following key reasons:

1. Increased coupling between classes

Inheritance leads to a tight coupling between subclasses and parent classes. Subclasses depend on the implementation of the parent class, which means that if a change occurs in the parent class, it may affect all subclasses inherited from that parent class, making modification and maintenance more difficult. This tight coupling also limits the flexibility of the subclass because it must follow the interface and implementation of the parent class.

Example

class Animal {
    void eat() {
        System.out.println("Animal is eating");
    }
}

class Dog extends Animal {
    @Override
    void eat() {
        System.out.println("Dog is eating");
    }
}

If the parent class Animal changes (e.g., the implementation of its eat() method is modified), the Dog class is affected as well. Such coupling increases the complexity of later maintenance.

2. Encapsulation is broken.

Inheritance can break encapsulation because subclasses have direct access to the parent class's members (fields and methods), especially when those members are declared protected or public. This can lead to subclasses exposing details that should not be accessible from the outside, undermining data encapsulation.

Example

class Vehicle {
    protected int speed;
}

class Car extends Vehicle {
    void accelerate() {
        speed += 10; // Direct access to the parent class's protected field
    }
}

In this case, the Car class directly accesses the parent class Vehicle's speed field instead of modifying it through a public interface, weakening encapsulation.
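One way to preserve encapsulation, shown as a sketch: keep speed private in the parent class and expose a controlled mutator instead (the method names here are illustrative, not from the original example):

```java
class Vehicle {
    private int speed; // no longer directly reachable from subclasses

    // Controlled way for subclasses to change the speed
    protected void increaseSpeed(int delta) {
        if (delta > 0) {
            speed += delta;
        }
    }

    public int getSpeed() {
        return speed;
    }
}

class Car extends Vehicle {
    void accelerate() {
        increaseSpeed(10); // goes through the parent's interface, not its field
    }
}
```

The parent class can now change how speed is stored or validated without breaking Car, because subclasses only depend on the method, not the field.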

3. Inheritance can lead to illogical class hierarchies

Inheritance can produce illogical class hierarchies, especially when it is used to express an "is-a" relationship that the actual domain does not follow. Misused inheritance makes the relationships between classes complex and unintuitive, leading to structural confusion in the code.

Example
Suppose we have a Car class and a Truck class, both inheriting from Vehicle. If Car and Truck share many methods and properties, such a design may be appropriate. However, if Car and Truck differ greatly, forcing their relationship through inheritance alone may produce an overly complex inheritance hierarchy that becomes difficult to read and understand.

4. Inheritance can lead to errors that are not easy to detect

Since a child class inherits the behavior of its parent class, any modification to the parent may change the child's behavior. Worse, an erroneous or inconsistent change in the parent may not be exposed immediately; it only becomes apparent when the program runs to a particular place.

Example
Suppose you modify a method in the parent class but forget to update or adjust the corresponding overridden method in the subclass; this can lead to hard-to-find errors.

5. Inheritance limits flexibility (non-reusability issues)

Inheritance creates a fixed relationship between a parent class and a child class, which means that if you want to reuse a class in a completely different context, you may not be able to do so through inheritance. In some cases, composition is more flexible than inheritance, allowing you to combine multiple behaviors into a single class rather than forcing a class hierarchy through inheritance.

Example

// Composition rather than inheritance
class Engine {
    void start() {
        System.out.println("Engine started");
    }
}

class Car {
    private Engine engine = new Engine(); // use Engine by composition
    void start() {
        engine.start();
    }
}

Through composition, different components can be used flexibly without inheriting an entire class. The advantage is greater extensibility and flexibility.

6. Inheritance limits reuse of methods (poor maintainability)

If you rely too heavily on inheritance, your code will be susceptible to limitations imposed by the parent class implementation, making it difficult to flexibly add new functionality or extend it. For example, adding new functionality to an inheritance chain can result in a whole bunch of method modifications and rewrites, whereas without inheritance, it's much easier to reuse functionality as separate modules.

7. Prefer interfaces and composition

Compared with inheritance, interfaces and composition are more in line with object-oriented design principles. Interfaces allow a class to expose only the required functionality without revealing implementation details, and composition lets you combine several different behaviors, making the system more flexible and extensible. Many of the problems of inheritance can be avoided through interfaces and composition.

Recommended Design Patterns

  • Strategy Pattern: replace inheritance with interfaces and composition.
  • Decorator Pattern: extend behavior through composition and delegation, not through inheritance.
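A minimal Strategy Pattern sketch along these lines: behavior comes from an interface supplied by composition, not from a parent class, so it can even be swapped at runtime (all class and method names below are illustrative):

```java
// The behavior is defined by an interface...
interface MoveBehavior {
    String move();
}

class Walking implements MoveBehavior {
    public String move() { return "walking"; }
}

class Flying implements MoveBehavior {
    public String move() { return "flying"; }
}

// ...and composed into the class instead of being inherited,
// so the behavior can be replaced without touching Robot's hierarchy.
class Robot {
    private MoveBehavior behavior;

    Robot(MoveBehavior behavior) {
        this.behavior = behavior;
    }

    void setBehavior(MoveBehavior behavior) {
        this.behavior = behavior;
    }

    String move() {
        return behavior.move();
    }
}
```

With inheritance, adding a flying robot would require a new subclass; with this design it only requires passing a different MoveBehavior.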

Summary

Although inheritance is an important feature in object-oriented programming, misuse of inheritance can cause many problems, especially in the following areas:

  • Increases coupling between classes and reduces flexibility;
  • Breaks encapsulation and exposes internal implementation that should not be accessible;
  • May result in a complex class hierarchy, making the code harder to understand and maintain;
  • Limits code reuse and extensibility.

Therefore, it is recommended to prefer composition over inheritance and to use interfaces wherever possible to enable flexible extension. If you must use inheritance, make sure it clearly expresses an "is-a" relationship and avoid deep inheritance hierarchies.

12. Why are developers prohibited from modifying the value of the serialVersionUID field?

serialVersionUID is a static field used in Java to identify the serialization version of a class. Its purpose is to let the JVM verify, during deserialization, that the serialized data is compatible with the current class, avoiding errors caused by incompatible versions. Although serialVersionUID can be defined manually by the developer, developers are prohibited from modifying the value of the serialVersionUID field for the following reasons:

1. Serialization and Deserialization Compatibility

The main role of serialVersionUID is to ensure class-version compatibility during serialization and deserialization. It identifies the version of a class: if the serialVersionUID recorded in the serialized data does not match that of the class used for deserialization, an InvalidClassException is thrown.

  • A mismatched serialVersionUID makes the serialized data incompatible with the current class, causing deserialization to fail.
  • Modifying the serialVersionUID changes the class's version identity, so previously serialized data can no longer be read during deserialization, especially when the class structure has also changed (e.g., fields added or removed).

Example:

// The first version of the class
public class MyClass implements Serializable {
    private static final long serialVersionUID = 1L;
    private String name;
    // Other fields and methods
}

// Second modified version of class
public class MyClass implements Serializable {
    private static final long serialVersionUID = 2L; // modified serialVersionUID
    private String name;
    private int age; // New Fields
    // Other fields and methods
}

If you modify the serialVersionUID, data that was serialized with the version-1 class will fail to deserialize because of the serialVersionUID mismatch.
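A small runnable sketch of the happy path: when the serialVersionUID in the serialized bytes matches the class doing the reading, a round trip succeeds (the class and field names are illustrative; changing the constant between writing and reading would trigger InvalidClassException):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Illustrative serializable class with a stable, explicit version identity
class User implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    User(String name) { this.name = name; }
}

public class SerialDemo {
    // Serialize a User to bytes, then deserialize it back again
    static String roundTrip(String name) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new User(name));
        }
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()))) {
            return ((User) in.readObject()).name;
        }
    }

    public static void main(String[] args) throws Exception {
        // Same serialVersionUID on both sides, so this prints "v1"
        System.out.println(roundTrip("v1"));
    }
}
```

In a real system the write and the read typically happen in different processes or at different times, which is exactly when an edited serialVersionUID breaks the read side.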

2. Avoid unnecessary version conflicts

When a class does not declare a serialVersionUID, Java automatically generates one based on the class's fields, methods, and other structural information. If the developer modifies the serialVersionUID arbitrarily, this automatic version-control mechanism may be undermined, leading to inconsistent versioning and increased maintenance complexity.

Manually modifying the serialVersionUID is prone to the following problems:

  • Even though the class structure is unchanged, modifying the serialVersionUID can make previously serialized data unrecoverable.
  • If different developers modify the serialVersionUID inconsistently, serialization may become inconsistent across machines or systems.

3. Impact on serialization compatibility

Java serialization involves two directions of compatibility:

  • Forward compatibility: a newer version of the class can still deserialize data written by an older version, provided the serialVersionUID is unchanged and the structural changes are compatible.
  • Backward compatibility: an older version of the class can still deserialize data written by a newer version under the same conditions.

If you accidentally modify the serialVersionUID, both directions break:

  • Forward compatibility: the new version of the class can no longer deserialize objects written by the old version.
  • Backward compatibility: the old version of the class can no longer deserialize objects written by the new version.

4. Automatically generated vs. manually specified

  • Automatically generated serialVersionUID: Java computes it from the class structure, so if the structure changes, the serialVersionUID changes automatically, preventing unexpected deserialization between incompatible versions.
  • Manually specified serialVersionUID: arbitrary manual modification can lead to inconsistent version control, and is especially prone to causing deserialization failures in multi-developer, distributed deployment environments.

5. Avoiding Unintended Deserialization Issues

Manually modifying the serialVersionUID may cause data loss or exceptions during deserialization. For example, if a developer mistakenly changes the serialVersionUID, the system may be unable to load previously serialized objects because of the mismatch, and an exception is thrown.

Summary

Developers are prohibited from modifying the value of the serialVersionUID field mainly to:

  • Ensure serialization and deserialization compatibility, avoiding version mismatches that cause deserialization to fail.
  • Avoid unnecessary version conflicts and data loss, especially when the class structure is modified.
  • Preserve the advantages of Java's automatically managed serialVersionUID, ensuring version consistency and maintainability of classes.

If the serialVersionUID really does need to be modified, make sure the new version remains compatible with data that has already been serialized, and follow a reasonable versioning policy.

13. Why are developers prohibited from using isSuccess as a variable name?

Developers are prohibited from using isSuccess as a variable name mainly to follow better programming conventions and to improve the readability and maintainability of code. The core problem with this name is its tendency to cause ambiguity and confusion, for the following reasons:

1. Not conforming to Boolean naming conventions

In Java, variable names beginning with is or has conventionally hold boolean values. This naming convention carries a specific meaning: it indicates whether a certain condition holds. Example:

  • isEnabled Indicates whether a feature is enabled;
  • hasPermission Indicates if you have permission.

The problem

  • isSuccess looks like a boolean, but it may not actually hold a boolean value directly; it might represent a state or a result. This naming can mislead developers into treating it as a boolean variable when it is actually an object describing a state, a string, or some other type of data.

2. semantic ambiguity

The name isSuccess ostensibly means "success or failure", but it lacks a specific context and therefore lacks semantic clarity. A true success flag should be a boolean variable with a clear, unambiguous name.

Example:

  • isCompleted: Indicates whether a task has been completed.
  • isSuccessful: Indicates whether an operation was successful.

These designations make the meaning of Boolean variables clearer and avoid ambiguity in understanding.

3. Confusion with the standard is prefix

The is prefix is conventionally used for methods or variables that return a boolean indicating whether a condition holds. Naming a variable isSuccess can lead developers to assume it is a boolean, when in reality it may be a complex type or some other non-boolean type, causing unnecessary confusion.

Example:

boolean isSuccess = someMethod(); // looks like a boolean, but the actual type may be different

This situation can lead developers to misread isSuccess as a boolean value, when it could actually be an object, an enumeration, or some other data type that indicates success.

4. Better Naming Suggestions

To avoid ambiguity and confusion, developers should use names that are more explicit and conform to naming conventions. Here are some suggestions for naming improvements:

  • If it is a boolean, name it isSuccessful or wasSuccessful.
  • If it is an object representing a result, use a more specific name, such as operationResult or statusCode, to indicate that it describes the outcome of an operation.
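Sketching the second suggestion: if the value is really a result object, give it a descriptive type whose boolean lives behind a conventionally named accessor (the class and field names below are hypothetical):

```java
// Hypothetical result type: the success flag is a real boolean
// inside the object, exposed via an is-prefixed accessor.
class OperationResult {
    private final boolean successful;
    private final String message;

    OperationResult(boolean successful, String message) {
        this.successful = successful;
        this.message = message;
    }

    boolean isSuccessful() {
        return successful;
    }

    String getMessage() {
        return message;
    }
}
```

A caller then writes OperationResult operationResult = ...; if (operationResult.isSuccessful()) { ... } — no variable pretends to be a boolean while actually being something else.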

5. Improve code readability and maintainability

Clear, meaningful names help team members and future developers understand the intent of the code more quickly. If a variable name is too vague (e.g., isSuccess), it raises questions about what it actually means, especially when reading larger or more complex code. Good naming improves the readability and maintainability of code.

Summary

  • A name like isSuccess is unclear and easily confused with a boolean variable, which hurts the readability of the code.
  • Names should be as clear as possible; avoid ambiguous names, especially for boolean values.
  • It is recommended to use a more descriptive name, such as isSuccessful or wasSuccessful, which expresses the variable's meaning more clearly.

Finally

The above are the 13 small Java coding questions that Brother V has carefully summarized, drawn from the study notes of my daily coding and shared here with you. If the content helps you, please don't begrudge a little like; follow Brother V, who loves programming, and let's move forward together on the Java road.