First, let's clarify the difference between the AutoConfiguration.imports file and the aot.factories file. Not only do these two files have different names; their functionality also differs significantly. Below we'll dig into the specific role each file plays and the scenarios where each applies. Let's unravel the mystery together!
In our last discussion of the Spring Boot 3 release, we looked at its loading mechanism and noted some minor changes. Strictly speaking, these changes were mainly a matter of file naming: the auto-configuration entries that used to live in META-INF/spring.factories have been migrated to META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports.
For more detailed information, feel free to check out this article: /guoxiaoyu/p/18384642
Here's the problem.
To understand Spring Boot's loading mechanism, it helps to know that each third-party dependency that provides auto-configuration ships its own META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports file, as shown in the figure below.
These files play an important role at application startup: they declare the auto-configuration classes and related settings that let Spring Boot automatically recognize and load the appropriate configuration at runtime.
However, when we open a third-party dependency package, we may find that it has no corresponding META-INF/spring directory, and not even a *.imports file. What then? Don't panic! Not every library ships its own auto-configuration. The ZhiPuAi auto-configuration, for example, is not bundled in the ZhiPu dependency itself but provided by Spring AI's autoconfigure module:
@AutoConfiguration(after = { RestClientAutoConfiguration.class, SpringAiRetryAutoConfiguration.class })
@ConditionalOnClass(ZhiPuAiApi.class)
@EnableConfigurationProperties({ ZhiPuAiConnectionProperties.class, ZhiPuAiChatProperties.class,
        ZhiPuAiEmbeddingProperties.class, ZhiPuAiImageProperties.class })
public class ZhiPuAiAutoConfiguration {
    // ... bean definitions for the ZhiPuAi chat, embedding and image clients
}
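To make the mechanism concrete, here is a minimal, hypothetical sketch of how a library of your own would expose an auto-configuration in Spring Boot 3: annotate a configuration class with @AutoConfiguration and list its fully qualified name, one class per line, in META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports. The class and bean below are invented purely for illustration.

package com.example.autoconfigure;

import org.springframework.boot.autoconfigure.AutoConfiguration;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;

// Registered by adding the line "com.example.autoconfigure.GreetingAutoConfiguration"
// to META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports.
@AutoConfiguration
public class GreetingAutoConfiguration {

    // Contribute a default bean only if the application has not defined its own.
    @Bean
    @ConditionalOnMissingBean
    public GreetingService greetingService() {
        return new GreetingService();
    }

    public static class GreetingService {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }
}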
It can be understood simply: once you add the corresponding dependency, its configuration takes effect automatically. However, while browsing these *.imports files I noticed an interesting phenomenon: many dependencies also ship a META-INF/spring/aot.factories file. What is that for? Given that Spring Boot itself contains such files, the concept is clearly no accident. So, with that question in mind, I decided to dig into the mechanism and role behind it.
Getting to the Bottom of It
After a bit of AI Q&A and googling, I got a general idea of the file's purpose: it exists to support packaging and compilation. This file helps package a Java project into a native executable (an .exe on Windows; other operating systems have their own packaging formats) so that it can run without a Java runtime environment. It is not, however, directly related to Spring Boot's auto-configuration mechanism.
So why was something like this invented? I know you're impatient, but bear with me: take it step by step and it will all make sense.
Java's Current Pain Points
Anyone with Java development experience knows that Java applications used to be built as monoliths, which meant that starting a project often took several minutes; for large projects the startup time was an even bigger headache. As technology evolved, microservices architecture emerged, which not only shortened startup times significantly but also allowed business logic to be split up sensibly.
However, microservices architecture is not without drawbacks. Despite faster startup, a service usually does not run at its best immediately after launch; it takes a while for the application to reach peak efficiency.
The reason is that Java typically runs on the HotSpot virtual machine, and HotSpot works by compiling frequently executed portions of code into native machine code. When a program starts, HotSpot does not yet know which code will be "hot", so it cannot immediately translate that code into a form the machine can understand and execute directly.
During this process, HotSpot constantly analyzes and monitors code execution to quickly identify which parts are frequently called. Only after identifying the hot code and compiling it to native code can our project achieve optimal throughput.
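To get a feel for this warmup effect, a rough experiment like the one below (not a rigorous benchmark; the method and numbers are made up for illustration) will usually show later rounds running faster than the first, once HotSpot has identified and compiled the hot method.

// Rough illustration of JIT warmup: the same method typically gets faster once
// HotSpot has observed it running repeatedly and compiled it to native code.
public class WarmupDemo {

    static long sumOfSquares(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) i * i;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int round = 1; round <= 5; round++) {
            long start = System.nanoTime();
            long result = sumOfSquares(2_000_000);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("round " + round + ": " + elapsedMs + " ms (result=" + result + ")");
        }
    }
}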
It is almost impossible to get Java to start as instantly as Python. Even so, Java remains well suited to enterprise services in many cases, mainly because of the stability and reliability it provides; in an enterprise environment, system stability is usually the primary concern.
However, Java also faces a number of challenges that are particularly acute in the age of cloud computing.
The Cloud Era
Serverless, for example, is a deployment model that is becoming increasingly mainstream in cloud computing environments. By abstracting infrastructure management and operations tasks, it enables developers to focus more on the implementation of business logic without having to pay too much attention to the underlying resource configuration and management.
I won't mention the old days when you needed to deploy your own physical machines. Today, the vast majority of companies have adopted Kubernetes (K8s) as their cluster management solution. After purchasing servers from major cloud service providers, organizations typically manage their cluster services themselves. The operations team is then responsible for monitoring and optimizing resource allocation and scaling in time to meet demand.
Additionally, service mesh and the sidecar model have emerged as the technology continues to evolve, and both are concepts worth exploring. Ultimately, the goal of these improvements is to save development time inside a company so that teams can focus more on their core business.
The current Serverless architecture significantly improves the efficiency of resource utilization, as all infrastructure management work is handled by the cloud service provider. In fact, the cloud vendor's infrastructure itself has not fundamentally changed; what has changed is primarily the architectural design, making the customer experience easier and more efficient. In this model, both operations and maintenance personnel and developers only need to focus on the deployment of functions, without the need to delve into the details of the server information.
Developers no longer need to care about how functions are run, or how many containers or servers underneath them are supporting these services. For them, it's all abstracted down to a simple interface, and they just need to make sure the parameters are properly aligned.
But would you dare deploy Serverless functions written in Java? When system throughput spikes and a new node has to come up quickly to absorb the extra load, Java may still be busy starting or warming up, and that delay is painful. How can Java, the workhorse of enterprise backends, accept being sidelined like this?
Introduction to GraalVM
If you're not familiar with GraalVM, you have at least heard of OpenJDK. Like OpenJDK, GraalVM is a complete JDK distribution capable of running applications written in any JVM language. GraalVM goes further, however, and offers a distinctive feature: native image packaging. The power of this technology lies in its ability to package an application into a standalone, self-contained binary that runs entirely outside a JVM.
In other words, GraalVM allows you to create applications that resemble common executables such as .exe files, which makes deployment and distribution easier and more flexible.
As shown above, the GraalVM compiler offers two modes: Just-In-Time (JIT) compilation and Ahead-of-Time (AOT) compilation.
For the JIT mode, we all know that Java classes generate .class format files after compilation, and these files are bytecode that the JVM can recognize. In the process of running Java applications, the JIT compiler will dynamically compile the bytecode on some hotspot paths into machine code to achieve faster execution. This approach takes full advantage of runtime information and can be optimized for actual execution, resulting in improved performance.
In AOT mode, GraalVM converts bytecode to machine code during compilation, eliminating the need to rely on the JVM at runtime. By eliminating JVM loading and bytecode runtime warmup, AOT compiled and packaged programs have very high runtime efficiency. This means that at startup, applications can respond almost instantaneously, greatly improving their ability to handle requests.
So how fast is this AOT compilation? Will it become a common solution for Serverless functions, surpassing applications in other languages like Python? To verify its performance benefits, we can perform a real-world test.
Installing GraalVM
Most of us already have IntelliJ IDEA installed locally, and it makes this step very easy: we can download GraalVM directly through IDEA's built-in JDK download feature, with no need to hunt for it on the official website. In just a few simple steps we can grab the latest GraalVM build, ready for development.
Once the download is complete, we just need to set the project's JDK to GraalVM. Since I'm currently on JDK 17, I pick the GraalVM build that is based on JDK 17. The process is simple: just change the JDK in the project settings.
We'll continue to use the Spring AI project we looked at earlier, and on top of that we need to add the relevant build plugin, GraalVM's native-maven-plugin:
<plugin>
    <groupId>org.graalvm.buildtools</groupId>
    <artifactId>native-maven-plugin</artifactId>
</plugin>
After completing all the configurations and getting ready to compile, we unexpectedly encountered an error indicating that JAVA_HOME was pointing to our original JDK 1.8. This issue arose because one of the tools did not rely on IntelliJ IDEA's startup variables, but instead read the JAVA_HOME environment variable directly.
To fix this, we need to make sure that the JAVA_HOME environment variable correctly points to our newly installed version of GraalVM. Therefore, we must download and install GraalVM on our local system, making sure that its version matches the version of the JDK we need for our project.
First, find the official download page: /downloads/
Since I'm on Windows, I pick the Windows build (choose whichever matches your operating system). After the download finishes, unzip it, point the JAVA_HOME environment variable at the new directory, restart so the change takes effect, and then compile again.
After running it, it still reports the following error:
Error: Please specify class (or <module>/<mainclass>) containing the main entry point method. (see --help)
The message simply means it cannot find the startup class, so some configuration needs to be added. After quite a bit of searching I ended up with the configuration below (the mainClass shown is a placeholder for your own application's startup class). Note that the order of the plugins matters; changing it can reintroduce problems.
<build>
    <plugins>
        <plugin>
            <groupId>org.graalvm.buildtools</groupId>
            <artifactId>native-maven-plugin</artifactId>
            <configuration>
                <!-- imageName sets the name of the generated binary -->
                <imageName>${project.artifactId}</imageName>
                <!-- mainClass specifies the fully qualified class containing the main method
                     (adjust the package to your own project) -->
                <mainClass>com.example.DemoApplication</mainClass>
                <buildArgs>
                    --no-fallback
                </buildArgs>
            </configuration>
            <executions>
                <execution>
                    <id>build-native</id>
                    <goals>
                        <goal>compile-no-fork</goal>
                    </goals>
                    <phase>package</phase>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
            <configuration>
                <excludes>
                    <exclude>
                        <groupId>org.projectlombok</groupId>
                        <artifactId>lombok</artifactId>
                    </exclude>
                </excludes>
            </configuration>
        </plugin>
    </plugins>
</build>
Next, we package the project with the usual Maven command, such as mvn clean package. The process felt rather drawn out; packaging is noticeably slower than before. This time it took about ten minutes, which is quite a step down from my previous experience.
Excited, I ran the generated executable, only to be met with another error:
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v3.3.1)
Application run failed
Startup with AOT mode enabled failed: AOT initializer DemoApplication__ApplicationContextInitializer could not be found
After some careful digging, it turned out that the build has to be run with the native profile activated, for example mvn clean package -Pnative if the project inherits from spring-boot-starter-parent (which defines that profile), because the package built earlier contained no AOT-generated artifacts.
Well, after another fifteen minutes or so of packaging, it finally succeeded. Comparing the startup time of the AOT-built binary with that of the jar package makes the difference obvious: it's a world apart.
Oh yes! It really is striking: startup time drops to milliseconds! With GraalVM's native image technology we not only get fast startup for Java projects, we also eliminate the warmup phase of a traditional Java application. All of this seemed ideal, but there was a catch.
GraalVM Disadvantages
Having said that GraalVM solves Java's original problems, we must also recognize that it is not without its shortcomings. If it weren't for these shortcomings, GraalVM would be better known than OpenJDK. After all, if it's so good, why isn't it widely used by most people?
First and foremost, compatibility is a significant challenge. Many projects stuck on older JDK versions simply cannot move to GraalVM, which limits its applicability for a lot of organizations. For teams that rely on older JDKs, migrating to GraalVM can be time-consuming and resource-intensive, and may even require refactoring code.
Second, developers are often hesitant to use GraalVM even for projects on newer JDK versions. The reason is that GraalVM has relatively weak support for certain dynamic features: limitations around reflection, resource loading, serialization, and dynamic proxies can have a significant impact on existing code. These dynamic behaviors sit at the core of many applications, and weakening them can lead to missing functionality or runtime failures.
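As a small illustration (the plugin class name below is hypothetical): string-based reflection like this works on a normal JVM as long as the class is on the classpath, but GraalVM's closed-world analysis cannot see the lookup, so the class gets stripped from the native image unless reflection metadata or runtime hints declare it.

public class ReflectiveLookupDemo {
    public static void main(String[] args) throws Exception {
        // Resolved purely by name at runtime, invisible to ahead-of-time static analysis.
        Class<?> type = Class.forName("com.example.PluginImpl"); // hypothetical plugin class
        Object plugin = type.getDeclaredConstructor().newInstance();
        System.out.println("Loaded " + plugin.getClass().getName());
    }
}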
Some readers may wonder: the Spring framework itself relies on factory patterns and all kinds of dynamic proxies. If GraalVM does not support these advanced features, wouldn't Spring be fatally affected? If dynamic proxies cannot work properly, many of Spring's core capabilities would be restricted, so how did the packaging described above succeed at all?
In fact, the answer lies in the Ahead-of-Time (AOT) metadata capability that GraalVM provides. It lets developers declare at build time which classes and methods will need dynamic proxies (and other dynamic behavior), and GraalVM bakes that information into the final executable.
RuntimeHints and aot.factories
Spring's RuntimeHints API is responsible for gathering the application's requirements for reflection, resource loading, serialization, and JDK proxies ahead of time so they can still be satisfied at runtime in a native image. This is the key clue to how Spring supports dynamic features on GraalVM, and at this point you can probably guess what the aot.factories file is for. That's right: it exists to make sure the hints the Spring framework needs, JDK proxies included, are registered when the application is processed for GraalVM compilation. Its structure looks roughly like this (implementation class names shown as placeholders):
org.springframework.aot.hint.RuntimeHintsRegistrar=\
  <RuntimeHintsRegistrar implementation>,\
  <RuntimeHintsRegistrar implementation>,\
  <RuntimeHintsRegistrar implementation>
org.springframework.beans.factory.aot.BeanFactoryInitializationAotProcessor=\
  <AOT processor implementation>
org.springframework.beans.factory.aot.BeanRegistrationAotProcessor=\
  <AOT processor implementation>
Notice the RuntimeHintsRegistrar key: its job is to identify and load all classes that implement this interface so that they can be parsed and processed. It should be emphasized that GraalVM does not look for these files on its own; aot.factories is a Spring-specific mechanism. Without this explicit guidance, GraalVM would have no way to recognize and accommodate these dynamic features.
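In fact, you can verify that it is Spring doing the loading: the small sketch below (assuming spring-core and the relevant Spring Boot jars are on the classpath) uses SpringFactoriesLoader, the same facility Spring itself relies on, to list every RuntimeHintsRegistrar declared in the aot.factories files visible on the classpath.

import java.util.List;

import org.springframework.aot.hint.RuntimeHintsRegistrar;
import org.springframework.core.io.support.SpringFactoriesLoader;

public class AotFactoriesPeek {
    public static void main(String[] args) {
        // aot.factories uses the same key=implementations format as spring.factories,
        // just read from a different location.
        List<RuntimeHintsRegistrar> registrars = SpringFactoriesLoader
                .forResourceLocation("META-INF/spring/aot.factories")
                .load(RuntimeHintsRegistrar.class);
        registrars.forEach(registrar -> System.out.println(registrar.getClass().getName()));
    }
}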
Back in the file, below the RuntimeHintsRegistrar entries, we can also see a number of AOT processor implementations registered. The structure is somewhat similar to the BeanFactoryPostProcessor mechanism we discussed earlier, but we won't go into the details here; today we only skim the surface to understand its basic function.
Spring has, in effect, solved the problem of collecting this information for us, so that dynamic features can be handled properly at build time. That does not mean everything works out of the box, though: third-party components also need to provide their own implementations to stay compatible with Spring. If a dependency uses these advanced features but never hooks into Spring's scanning mechanism, those features simply won't be available after native compilation; a sketch of what such a registrar looks like follows.
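Here is a minimal sketch, with hypothetical class and resource names, of how a library can declare its dynamic requirements through Spring's RuntimeHintsRegistrar so they survive GraalVM's closed-world analysis. The registrar would then be wired in either with @ImportRuntimeHints on a configuration class or by listing it under the org.springframework.aot.hint.RuntimeHintsRegistrar key of the library's own META-INF/spring/aot.factories.

import org.springframework.aot.hint.MemberCategory;
import org.springframework.aot.hint.RuntimeHints;
import org.springframework.aot.hint.RuntimeHintsRegistrar;
import org.springframework.aot.hint.TypeReference;

public class MyLibraryRuntimeHints implements RuntimeHintsRegistrar {

    @Override
    public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
        // Allow reflective access to a type that is only reached via reflection at runtime.
        hints.reflection().registerType(TypeReference.of("com.example.mylib.MyDto"),
                MemberCategory.INVOKE_DECLARED_CONSTRUCTORS, MemberCategory.DECLARED_FIELDS);

        // Keep classpath resources that are loaded by name in the native image.
        hints.resources().registerPattern("mylib/config/*.json");

        // Declare a JDK dynamic proxy that will be created at runtime.
        hints.proxies().registerJdkProxy(TypeReference.of("com.example.mylib.MyServiceInterface"));
    }
}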
Therefore, much work remains to be done to ensure compatibility and functionality across the ecosystem.
Summary
In exploring the AutoConfiguration.imports and aot.factories files over the course of this article, we not only uncovered the essential differences between the two, but also examined their roles in Spring Boot 3 and the scenarios in which each applies. This journey took us to the forefront of modern Java application development, especially in the context of Serverless and microservices architectures. With the rise of cloud computing, application performance and startup speed have become central concerns for developers. Against that backdrop, GraalVM offers a novel solution: its native image feature dramatically reduces the startup time of Java applications, which is a great convenience for developers.
However, we also realize that while GraalVM offers many advantages, it is not without its challenges. Compatibility issues are still a major hurdle, and many older JDK projects may be difficult to migrate to the new platform. In addition, some dynamic features are still not well supported in GraalVM, which may affect developers' flexibility and functionality when using the Spring framework. This impact may be especially noticeable in complex enterprise applications.
With the combination of the Spring framework and GraalVM, developers can enjoy faster application launches and better resource utilization, but they also need to be prepared for potential issues with compatibility. This means that we need to constantly learn, adapt, and optimize our development processes as new technologies emerge.
I'm Rain, a Java server-side coder, studying the mysteries of AI technology. I love technical communication and sharing, and I am passionate about open source community. I am also a Tencent Cloud Creative Star, Ali Cloud Expert Blogger, Huawei Cloud Enjoyment Expert, and Nuggets Excellent Author.
💡 I won't be shy about sharing my personal explorations and experiences on the path of technology, in the hope that I can bring some inspiration and help to your learning and growth.
🌟 Welcome to the effortless drizzle! 🌟