
Microservices Architecture - The Indispensable Registry

Popularity: 531 · Published 2024-11-07 18:15:31

Starting today, we will take a deep dive into microservices architecture using Java backend technologies as an entry point. The focus of this chapter will be on one of the most critical aspects of microservices: service discovery and registration. The article will take a step-by-step approach and gradually lead you into the wide world of microservices. Whether you are a technical novice or a seasoned expert, I hope that this article will provide you with unique and valuable insights and takeaways.

Okay, here we go!

Monolithic Architecture vs. Microservices Architecture

Monolithic Architecture

First, let's look at the old monolithic architecture. A single archive package (e.g., in WAR format) typically contained all the functionality and logic of an application, which is why we call it a monolithic application. Its design philosophy emphasizes packaging all functional modules into one package for easy deployment and management. This pattern, in which a single WAR package carries all the responsibilities and functionality of the entire application, is referred to as the monolithic application architecture.

image

With this simple example diagram in mind, let's analyze the advantages and disadvantages of the monolithic architecture in more depth, so as to fully understand its impact on software development and system design.

Microservices Architecture

The core concept of microservices is to split the traditional monolithic application based on business requirements and decompose it into multiple independent services to achieve complete decoupling. Each microservice focuses on a specific function or business logic, following the principle of "one service only does one thing", similar to the process in the operating system. This design allows each service to be deployed independently, and can even have its own database, thus improving the flexibility and maintainability of the system.

image

In this way, each small service is independent of the others, can respond more effectively to business changes, and can be developed, iterated, and released rapidly, all while reducing the overall complexity of the system. That is the essence of the microservices architecture. Of course, microservices also have their pros and cons, because there is no "silver bullet" that perfectly solves every problem. Next, let's analyze these advantages and disadvantages in depth:

Advantages

  1. Small, cohesive services: Microservices split the application into multiple independent services, each focusing on a specific function, making the system more flexible and maintainable. In a traditional monolithic application, modifying even a few lines of code often requires understanding the architecture and logic of the entire system; with microservices, developers can focus only on the relevant functionality, which improves development efficiency.
  2. Streamlined development process: Different teams can develop and deploy the services they are responsible for in parallel, which improves development efficiency and release frequency.
  3. Scaling on demand: The loosely coupled nature of microservices allows individual services to be scaled and deployed independently based on business requirements, making it easy to adjust resources dynamically and optimize performance as traffic changes.
  4. Front-end and back-end separation: As Java developers, we can focus on the security and performance of the back-end interfaces without having to worry about the front-end user experience.
  5. Fault tolerance: The failure of one service does not affect the availability of the entire system, improving reliability.

Drawbacks

  1. Increased operations complexity: Managing multiple services is far more involved than deploying a single WAR package, which greatly increases the workload of Ops personnel and involves a more complex technology stack (e.g., Kubernetes, Docker, Jenkins).
  2. Communication cost: Calls between services travel over the network, which can introduce latency and performance issues.
  3. Data consistency challenges: Maintaining data consistency and handling distributed transactions becomes more difficult in a distributed system.
  4. Performance monitoring and problem location: More monitoring tools and policies are needed to track the performance of individual services, and troubleshooting becomes more complex.

Application Scenarios

So microservices are not suitable for every project; they only pay off in certain scenarios. Here are some typical examples:

  1. Large and complex projects: Microservice architecture reduces the complexity of each service by splitting the system into multiple small services, allowing the team to focus more on the functional modules they are responsible for, thus significantly improving the efficiency of development and maintenance.
  2. Rapid project iteration: Microservices architecture enables different teams to develop and release their services independently, resulting in higher frequency of iteration and faster market response.
  3. Projects with high concurrency: Microservice architecture provides flexible elasticity and scalability, and each service can be expanded independently according to demand, ensuring that the system can still maintain good performance and stability under high concurrency.

Okay, so we've covered the basic concepts of microservices. Next, we will dive into a crucial part of the microservices architecture: service registration and discovery. This part is the core of the microservices ecosystem and has a direct impact on the flexibility and scalability of the system.

Registration Center

From the above discussion, we can see that the core of the microservices architecture lies in splitting modules out independently to achieve better flexibility and maintainability. However, this modular design also introduces network-transmission overhead, so understanding how network calls are made between microservices becomes particularly important.

Next, we will step-by-step explore the ways in which microservices communicate with each other and how these ways affect the overall performance of the system.

Invocation Methods

Let's start by thinking about a key question: in a microservices architecture, how can we efficiently maintain complex invocation relationships to ensure smooth coordination and communication between individual services?

If you're not familiar with microservices, consider them differently: how do our computers enable calls and access to other websites?

Calling a Fixed Address

The simplest thing we can do is to hardcode the IP address or domain name in our code so that we can make the call directly. For example, consider the following code example:

// 1: Services call each other through RestTemplate, with the URL hardcoded
String url = "http://localhost:8020/order/findOrderByUserId/" + id;
User result = restTemplate.getForObject(url, User.class);

// 2: Other HTTP clients such as OkHttp work the same way
String url = "http://localhost:8020/order/findOrderByUserId/" + id;
OkHttpClient client = new OkHttpClient();
Request request = new Request.Builder()
        .url(url)
        .build();
try (Response response = client.newCall(request).execute()) {
    String jsonResponse = response.body().string();
    // process the jsonResponse object; code omitted
}

On the surface, while hard-coding an IP address or domain name into the code may seem like a simple solution, it's actually not a smart one. Just like when we visit Baidu for a search, we don't type its IP address into the browser, but instead use a domain name which is more convenient and easy to remember. The same is true for communication between microservices, each of which has its own unique service name.

The role of the Domain Name Server is critical here, as it is responsible for storing the correspondence between domain names and IP addresses so that we can accurately call the appropriate servers for requests and responses. A similar mechanism exists in the microservices architecture, which we call the "service discovery and registry". Conceivably, this registry acts as a "name server" for microservices, storing the names of the microservices and their network locations.

image

When configuring a domain name, we need to fill in DNS records with all sorts of information; similar configuration work is just as important in a microservice's registry, only it's usually done in a configuration file. When your service starts, it automatically registers itself with the registry, ensuring that other services can find it and call it.

"Domain" call

As a result, when we make a service call, the whole process will become more familiar and intuitive. For example, consider the following code example:

// Initiate the call using the microservice name
String url = "http://mall-order/order/findOrderByUserId/" + id;
List<Order> orderList = restTemplate.getForObject(url, List.class);

Of course, this involves a lot of technical details that need to be meticulously implemented, but we can start our initial understanding by focusing on the core functionality of service discovery and registries. In short, their main purpose is to facilitate invocations between microservices and reduce the complexity that developers need to deal with when communicating with services.

By introducing the Service Discovery and Registry, we no longer need to manually maintain a large number of relationships between IP addresses and service names.
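Conceptually, the registry's bookkeeping can be boiled down to a map from service names to instance addresses. The following sketch is a deliberately simplified, hypothetical model (the class and method names are ours, not Nacos's) of what "registration" and "discovery" mean at their core:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// A toy in-memory registry: service name -> list of instance addresses.
// Real registries (Nacos, Eureka, Consul) add health checks, namespaces,
// persistence, and change notifications on top of this basic idea.
public class SimpleRegistry {
    private final Map<String, List<String>> services = new ConcurrentHashMap<>();

    // Called by a service instance at startup to announce itself.
    public void register(String serviceName, String address) {
        services.computeIfAbsent(serviceName, k -> new CopyOnWriteArrayList<>()).add(address);
    }

    // Called by a client to resolve a service name to candidate addresses.
    public List<String> discover(String serviceName) {
        return services.getOrDefault(serviceName, List.of());
    }
}
```

A client would call discover("mall-order") and receive all registered addresses, instead of keeping its own hardcoded IP list.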

Design Ideas

As a registry, its main function is to efficiently maintain information about individual microservices, such as their IP addresses (which can, of course, be intranet-based). Given that the registry is itself a service, it can be considered an important component in a microservice architecture. Each microservice must be properly configured before it can be registered and discovered in order to ensure that they can recognize and communicate with each other.

This is similar to configuring a DNS server locally, without which we would not be able to find the corresponding IP address through the domain name, and thus would not be able to communicate effectively over the network.

image

Health monitoring plays a crucial role in this system, and its main purpose is to ensure that the client is informed of the status of the server, especially in case of a server failure, even though such monitoring cannot be done in complete real-time. The importance of health monitoring lies in the fact that in our microservices architecture, each module usually starts multiple instances. Although these instances have the same functionality and are intended to share the request load, their availability may vary.

image

For example, the same service name may correspond to multiple IP addresses. However, if the service corresponding to one of these IPs fails, the client should not try to call this service IP again; instead, it should prioritize other available IPs so that high availability can be effectively achieved.
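The "don't call a failed IP" rule can be sketched as a simple client-side filter applied before any load-balancing decision. This is an illustrative model only, not Nacos code; the Instance record and its health flag are assumptions made for the demo:

```java
import java.util.List;
import java.util.stream.Collectors;

// A client-side filter that keeps only healthy instances, so that a
// failed IP is never selected when distributing requests.
public class HealthyFilter {
    // Hypothetical instance record: address plus a health flag reported
    // by the registry's health-monitoring mechanism.
    public record Instance(String address, boolean healthy) {}

    public static List<Instance> selectHealthy(List<Instance> all) {
        return all.stream().filter(Instance::healthy).collect(Collectors.toList());
    }
}
```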

Next, let's talk about load balancing. It is important to note here that each service node only registers its IP address with the registry; the registry itself does not decide which IP gets invoked. That depends entirely on the client's design and implementation. So, as hinted in the earlier section on domain-style calls, there are actually many more details hiding here.

The role of the registration center is relatively simple; its main responsibility is to collect and maintain available IP addresses and provide this information to the client. For specific implementation details and the operation flow, you can refer to the following picture:

image

Hands-On Practice

With that, we have a basic end-to-end picture of the architecture. Next, let's move straight into practice for a concrete demonstration. Here we will take Spring Cloud Alibaba as an example and choose Nacos as our service discovery and registration center.

Prerequisites

JDK: the basic environment necessary for development.

Maven: we will still use Maven for dependency management and to start the Spring Boot project.

Nacos Server: you need to set up a Nacos server yourself.

Nacos Docker Quick Start

If you don't have Nacos yet, I suggest you quickly build a Nacos instance locally via Docker. You can refer to the quick start guide in the official documentation for the exact steps: Nacos Quick Start with Docker

In this way, you can build a stable Nacos service in the shortest possible time.

Windows Local Setup

Of course, you can also choose to run the Nacos server directly on your local machine, which is the approach used as the example here. First, download a release from https://github.com/alibaba/nacos/releases

Then unzip it locally and run the startup command (on Windows); a successful run looks like this:

startup.cmd -m standalone

image

Open the local console address: http://127.0.0.1:8848/nacos/

image

Spring Boot Startup

So now, we can just start launching two services locally: one is a user module and the other is an order module. Additionally, we will create a public module to facilitate sharing of common functionality and resources. To simplify the demo, we will write the most basic code, with the main purpose of providing a clear framework for learning and demonstration. The structure of our project is shown in the figure:

image

First, the main responsibility of the public module is to import the dependencies shared by all services, which ensures consistency and reusability across modules. We won't demonstrate this here. Let's just look at the dependencies of the order and user modules. They are actually the same, and their purpose is to get their services registered with the center.

<dependencies>
    <!-- groupId of your own project; com.example is a placeholder, adjust as needed -->
    <dependency>
        <groupId>com.example</groupId>
        <artifactId>mall-common</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <scope>compile</scope>
    </dependency>

    <!-- Nacos service registration and discovery -->
    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    </dependency>
</dependencies>

Please also add the necessary configuration; what follows is fairly simple. Note, however, that each service must specify its own microservice name independently; only one example is provided here for reference.

server:
  port: 8040

spring:
  application:
    name: mall-user #Microservice Name

  # configure the Nacos registry address
  cloud:
    nacos:
      discovery:
        server-addr: 127.0.0.1:8848
        namespace: 9f545878-ca6b-478d-8a5a-5321d58b3ca3

Namespaces

If you do not explicitly configure a namespace, resources are deployed in the public namespace by default. If you need a different namespace, you must create it yourself in the console. Example:

image

Okay, now let's start both services and see how it runs. This way, both services are successfully registered. However, it is important to note that if you want these two services to be able to communicate with each other, make sure that they are deployed under the same namespace.

image

We can also view detailed information about each service, which is packed with rich content.

image

Sample Code

At this point, we are not integrating any other tools; we have only added the Nacos Maven dependency to the project. Even at this stage, we can call services by name instead of hardcoding IP addresses in the code. Next, let's look at the configuration class:

@Bean
@LoadBalanced  //mall-order => ip:port
public RestTemplate restTemplate() {
    return new RestTemplate();
}

We can then write the business code on the user side in a more concise and clear manner, as shown below:

@RequestMapping(value = "/findOrderByUserId/{id}")
public R findOrderByUserId(@PathVariable("id") Integer id) {
    log.info("Querying order info for userId: " + id);
    // Ribbon-based implementation; restTemplate needs the @LoadBalanced annotation
    // mall-order is resolved to ip:port
    String url = "http://mall-order/order/findOrderByUserId/" + id;

    R result = restTemplate.getForObject(url, R.class);
    return result;
}

Our order-side business code is relatively simple and is rendered as follows:

@RequestMapping("/findOrderByUserId/{userId}")
public R findOrderByUserId(@PathVariable("userId") Integer userId) {
    ("according touserId:"+userId+"Check Order Information");
    List<OrderEntity> orderEntities = (userId);
    return ().put("orders", orderEntities);
}

Let's take a look at the call to confirm that it does indeed achieve the desired effect.

image

Third-party component OpenFeign

In a monolithic architecture, a method call within the same process is all you need; in a microservices architecture, every such call becomes an HTTP request, and writing raw RestTemplate calls everywhere quickly becomes tedious. In this case, the interaction between services can be significantly simplified with the help of the popular third-party component OpenFeign, which provides a declarative way to define HTTP clients, making service calls easier to write while keeping the code readable and maintainable.

First, we need to add the appropriate Maven dependency to the project's pom.xml file.

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-openfeign</artifactId>
</dependency>

In addition, an annotation needs to be added to the startup class:

@SpringBootApplication
@EnableFeignClients // Scan and register Feign client bean definitions
public class MallUserFeignDemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(MallUserFeignDemoApplication.class, args);
    }
}

Where we used to write the IP address directly into the URL, we now replace it with a typed interface by defining a dedicated Feign client:

@FeignClient(value = "mall-order",path = "/order")
public interface OrderFeignService {
    @RequestMapping("/findOrderByUserId/{userId}")
    R findOrderByUserId(@PathVariable("userId") Integer userId);
}

This allows us to use a more concise and intuitive way of writing when calling services. Doesn't it feel more comfortable to use this way?

@Autowired
OrderFeignService orderFeignService;

@RequestMapping(value = "/findOrderByUserId/{id}")
public R findOrderByUserId(@PathVariable("id") Integer id) {
    // Feign invocation
    R result = orderFeignService.findOrderByUserId(id);
    return result;
}

Again, it can be called successfully and normally.

image

However, there are a few more details to keep in mind during implementation. Many developers prefer to encapsulate these Feign interfaces into a separate module (for example an api-service module) and have the calling microservice depend on it as a sub-project. This approach cleanly separates external API definitions from internal service logic and avoids mixing different kinds of functionality in the same package. See below:

image

Okay, so at this point we have completed a complete invocation process. All this setup and configuration has laid a solid foundation for our subsequent development. Next, we can focus on implementing the actual business logic, such as database calls and storage operations.

Advanced Learning

Next we will dive deeper. Since many of the details have not yet been explained, the previous hands-on session was mainly aimed at giving you an initial feel for the role of the service registration and discovery center. To get a better grasp of the topic, we need to focus on a few key issues: client-side load balancing, heartbeat monitoring, and service registration and discovery.

Next, we will analyze the source code and walk you through a comprehensive understanding of how Nacos efficiently solves the three core tasks of the registry.

gRPC

Here, I'd like to start by describing how Nacos is implemented. As of Nacos version 2.1, traditional HTTP-based calls are no longer officially recommended, although they are still supported. If you are planning to upgrade Nacos, there is one configuration parameter you should pay special attention to: the compatibility switch ending in 1x=true in the server's configuration file, which keeps 1.x-style clients working.

In our previous analysis, we explored the earlier (1.x) Nacos implementation, which does interact via regular HTTP calls; the Nacos server implements a number of Controllers, just like the microservices we build ourselves, and that source code is very readable and easy to understand. The calls are shown in the following diagram:

image

However, since Nacos version 2.1, the system has undergone a significant upgrade in favor of gRPC. gRPC is an open source Remote Procedure Call (RPC) framework originally developed by Google. It utilizes HTTP/2 as the transport protocol to provide more efficient network communication and Protocol Buffers as the message format to enable fast and efficient data serialization and deserialization.

image

Performance optimization: gRPC is based on the HTTP/2 protocol and supports multiplexing, allowing multiple requests to be sent simultaneously over a single connection, reducing latency and bandwidth usage.

Compact binary payload: Protocol Buffers serialize into a compact binary format, as opposed to text-based JSON/XML.

Streaming and bidirectional streaming: gRPC supports streaming data transmission and enables bidirectional streaming between client and server, which suits real-time applications.

Reduced GC pressure: Genuine long-lived connections avoid the object churn that comes with frequently opening and closing connections, which in turn reduces garbage-collection (GC) pressure and improves system performance.

The Nacos upgrade to gRPC is based on its many advantages, but I must also emphasize that there is no such thing as a "silver bullet" for any of the technologies, and this has always been my view. The most obvious disadvantage is the increased complexity of the system. Therefore, when choosing a technology solution, you must make an informed decision based on your business needs.

In the source code for the new version of Nacos, you'll find a number of files with the .proto suffix. These files define message structures, where each message represents a small logical record of information containing a series of name-value pairs called fields. This kind of definition makes data transfer and parsing more efficient and flexible.

For example, we can pick any one of these protocol buffer files in Nacos:

image
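To give a feel for what such a file contains, here is a minimal, hypothetical .proto sketch; the message and field names are illustrative only and are not Nacos's actual definitions:

```proto
syntax = "proto3";

// A hypothetical request message: each field is a typed name-value pair
// with a unique field number used in the binary wire format.
message InstanceRequest {
  string namespace = 1;
  string service_name = 2;
  string ip = 3;
  int32 port = 4;
  bool healthy = 5;
}
```

From definitions like this, protoc generates the serialization code for each supported language.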

While it's not the focus of our discussion, it's worth pointing out that the introduction of gRPC will bring significant performance optimizations to Nacos. While we won't delve into the specifics of its implementation here, it's important to understand this because gRPC will play a key role in all subsequent calls.

Service Registration

When our service starts, an important process occurs: the service instance makes a request to Nacos to complete registration. This is illustrated below:

image

In the interest of efficiency, we will not be doing a step-by-step source code trace, although we have previously explained in detail how to view Spring's autoconfiguration. Today, we're going to focus directly on key source code locations to quickly understand the details of the Nacos implementation.

@Override
public void register(Registration registration) {
    // Non-critical code omitted here
    NamingService namingService = namingService();
    String serviceId = registration.getServiceId();
    String group = nacosDiscoveryProperties.getGroup();

    Instance instance = getNacosInstanceFromRegistration(registration);

    try {
        namingService.registerInstance(serviceId, group, instance);
        log.info("nacos registry, {} {} {}:{} register finished", group, serviceId,
                instance.getIp(), instance.getPort());
    } catch (Exception e) {
        // Non-critical code omitted here
    }
}

During the service registration process, we can observe that some of its own IP and port information is constructed. This information is crucial for the correct identification and invocation of the service. Also worth mentioning here is the concept of Namespace. Namespaces are used in Nacos to enable isolation at the tenant (user) level of granularity, which is particularly important for resource management in microservice architectures.

One of the common application scenarios for namespaces is isolation between different environments, such as resource isolation between development and test environments and production environments.

image

Next, we will make a service call, here using the gRPC protocol. In fact, this process can be reduced to a single method call.

private <T extends Response> T requestToServer(AbstractNamingRequest request, Class<T> responseClass)
        throws NacosException {
    try {
        request.putAllHeader(
                getSecurityHeaders(request.getNamespace(), request.getGroupName(), request.getServiceName()));
        Response response =
                requestTimeout < 0 ? rpcClient.request(request) : rpcClient.request(request, requestTimeout);
        // Non-critical code omitted here

Server-Side Processing

When the Nacos server receives a request to make a gRPC call from a client, it immediately starts a series of processes to ensure that the request is responded to efficiently. The implementation details of the key code can be found in the following section.

@Override
@TpsControl(pointName = "RemoteNamingServiceSubscribeUnSubscribe", name = "RemoteNamingServiceSubscribeUnsubscribe")
@Secured(action = ActionTypes.READ)
@ExtractorManager.Extractor(rpcExtractor = SubscribeServiceRequestParamExtractor.class)
public SubscribeServiceResponse handle(SubscribeServiceRequest request, RequestMeta meta) throws NacosException {
    String namespaceId = request.getNamespace();
    String serviceName = request.getServiceName();
    String groupName = request.getGroupName();
    String app = RequestContextHolder.getContext().getBasicContext().getApp();
    String groupedServiceName = NamingUtils.getGroupedName(serviceName, groupName);
    Service service = Service.newService(namespaceId, groupName, serviceName, true);
    Subscriber subscriber = new Subscriber(meta.getClientIp(), meta.getClientVersion(), app, meta.getClientIp(),
            namespaceId, groupedServiceName, 0, request.getClusters());
    ServiceInfo serviceInfo = ServiceUtil.selectInstancesWithHealthyProtection(serviceStorage.getData(service),
            metadataManager.getServiceMetadata(service).orElse(null), subscriber.getCluster(), false, true,
            subscriber.getIp());
    if (request.isSubscribe()) {
        clientOperationService.subscribeService(service, subscriber, meta.getConnectionId());
        NotifyCenter.publishEvent(new SubscribeServiceTraceEvent(System.currentTimeMillis(),
                NamingRequestUtil.getSourceIpForGrpcRequest(meta), service.getNamespace(), service.getGroup(),
                service.getName()));
    } else {
        clientOperationService.unsubscribeService(service, subscriber, meta.getConnectionId());
        NotifyCenter.publishEvent(new UnsubscribeServiceTraceEvent(System.currentTimeMillis(),
                NamingRequestUtil.getSourceIpForGrpcRequest(meta), service.getNamespace(), service.getGroup(),
                service.getName()));
    }
    return new SubscribeServiceResponse(ResponseCode.SUCCESS.getCode(), "success", serviceInfo);
}

This code includes extracting request information, creating related objects, handling subscription or unsubscription operations, and returning the corresponding results. In this way, Nacos can efficiently manage the service discovery and registration functions of microservices.

Heartbeat monitoring

Prior to Nacos version 2.1, each service periodically sent an HTTP request to the registry at runtime to report that it was still alive. This mechanism, while effective, could introduce additional network burden and latency in highly concurrent environments.

However, upgrading to version 2.1 changed the process significantly. First, we need to think about the nature of heartbeat monitoring. Obviously, heartbeat monitoring is a periodic checking mechanism, which means that the service automatically sends heartbeat signals at set intervals to confirm its survival. Therefore, it is reasonable to assume that this functionality is implemented on the client side as a timed task that periodically reports the health status of the service to the registry at a predetermined time frequency.

To better understand the implementation of this mechanism, we will next focus on the key code involved.

public final void start() throws NacosException {
    // Omit some code

    clientEventExecutor = new ScheduledThreadPoolExecutor(2, r -> {
        Thread t = new Thread(r);
        t.setName("com.alibaba.nacos.client.remote.worker");
        t.setDaemon(true);
        return t;
    });

    // Omit some code

    clientEventExecutor.submit(() -> {
        while (true) {
            try {
                if (isShutdown()) {
                    break;
                }
                ReconnectContext reconnectContext = reconnectionSignal
                        .poll(keepAliveTime, TimeUnit.MILLISECONDS);
                if (reconnectContext == null) {
                    // check alive time.
                    if (System.currentTimeMillis() - lastActiveTimeStamp >= keepAliveTime) {
                        boolean isHealthy = healthCheck();
                        if (!isHealthy) {
                            // Omit some code
I've largely stripped out the non-health-monitoring related code so that you can more intuitively observe how Nacos performs instance health monitoring. Since the core purpose of health monitoring is to confirm service availability, this process is relatively simple to implement.

In this code, we can clearly see that health monitoring does not involve any complex data transfer. Its main function is simply to send requests to the server in order to check whether the server is able to respond successfully. This design greatly reduces the network overhead and makes the monitoring process more efficient.

image
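As a rough model of the client-side behavior described above, the following sketch schedules a periodic health check on a daemon thread and fires a callback when the check fails. The class and method names are illustrative assumptions, not Nacos's actual implementation:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

// A minimal client-side heartbeat loop: every periodMillis run a health
// check; if it fails, invoke a callback (e.g. switch server / reconnect).
// healthCheck stands in for the real network round trip to the registry.
public class HeartbeatClient {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "heartbeat-worker");
                t.setDaemon(true); // do not keep the JVM alive just for heartbeats
                return t;
            });

    public void start(BooleanSupplier healthCheck, Runnable onUnhealthy, long periodMillis) {
        scheduler.scheduleWithFixedDelay(() -> {
            if (!healthCheck.getAsBoolean()) {
                onUnhealthy.run();
            }
        }, periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}
```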

The server-side code is equally clear and simple. It is shown below:

@Override
@TpsControl(pointName = "HealthCheck")
public HealthCheckResponse handle(HealthCheckRequest request, RequestMeta meta) {
    return new HealthCheckResponse();
}

Overall, this optimization significantly reduces network I/O consumption and improves the overall performance of the system. At first glance, it doesn't seem to do anything complicated, but that doesn't mean we can't tell if the client is able to connect properly. In fact, the key judgment logic is designed in the outer code.

Connection connection = connectionManager.getConnection(GrpcServerConstants.CONTEXT_KEY_CONN_ID.get());
RequestMeta requestMeta = new RequestMeta();
requestMeta.setClientIp(connection.getMetaInfo().getClientIp());
requestMeta.setConnectionId(GrpcServerConstants.CONTEXT_KEY_CONN_ID.get());
requestMeta.setLabels(connection.getMetaInfo().getLabels());
requestMeta.setClientVersion(connection.getMetaInfo().getVersion());
// Refresh the active time here; this is what marks the connection as actually alive
connectionManager.refreshActiveTime(requestMeta.getConnectionId());
prepareRequestContext(request, requestMeta, connection);
Response response = requestHandler.handleRequest(request, requestMeta);

Don't worry, the server side also runs a timed task that is responsible for periodically scanning and checking the status of each client. Let's take a look:

public void start() {
    initConnectionEjector();
    // Start unhealthy-connection expel task.
    RpcScheduledExecutor.COMMON_SERVER_EXECUTOR.scheduleWithFixedDelay(() -> {
        runtimeConnectionEjector.doEject();
        MetricsMonitor.getLongConnectionMonitor().set(connections.size());
    }, 1000L, 3000L, TimeUnit.MILLISECONDS);
}

// Omit part of the code. Follow doEject further down and you'll find a piece of code like this:
// outdated connections collect.
for (Map.Entry<String, Connection> entry : connections.entrySet()) {
    Connection client = entry.getValue();
    if (now - client.getMetaInfo().getLastActiveTime() >= KEEP_ALIVE_TIME) {
        outDatedConnections.add(client.getMetaInfo().getConnectionId());
    } else if (client.getMetaInfo().pushQueueBlockTimesLastOver(300 * 1000)) {
        outDatedConnections.add(client.getMetaInfo().getConnectionId());
    }
}
// Omit part of the code

With these analyses, you have basically grasped the core concepts and implementation details. We don't need to go into much more detail. Let's move on.
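At its heart, the server-side eviction scan is just a timestamp comparison per connection. Here is a distilled, hypothetical version of that check (the class and parameter names are ours, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// A distilled eviction scan: walk all connections and collect the ids
// whose last activity is older than the keep-alive window.
public class StaleConnectionScanner {
    public static List<String> scan(Map<String, Long> lastActiveById, long now, long keepAlive) {
        List<String> stale = new ArrayList<>();
        for (Map.Entry<String, Long> e : lastActiveById.entrySet()) {
            if (now - e.getValue() >= keepAlive) {
                stale.add(e.getKey()); // candidate for disconnection
            }
        }
        return stale;
    }
}
```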

Load Balancing

When it comes to load balancing, first we need to make sure that we have a list of servers locally so that we can distribute the load appropriately. The key, therefore, is how we get information about these available services from the registry. So, specifically, how should we effectively discover and acquire these services locally?

Service Discovery

The set of service instances changes dynamically as instances are added or removed, so the client needs to refresh its list of available services periodically. This leads to an important design question: why not integrate the service-discovery refresh directly into the heartbeat task?

First, the main purpose of the heartbeat task is to monitor the health status of the service instances to ensure that they are able to respond to requests properly. Service discovery, on the other hand, focuses on timely updating and obtaining information about currently available service instances. The purposes of these two are clearly different, so mixing them together may lead to logical confusion and functional complexity.

In addition, the time intervals between the two vary. Heartbeat monitoring may need to be performed more frequently to detect and handle service failures in a timely manner, while the frequency of service discovery can be appropriately adjusted according to specific needs. For these reasons, separating heartbeat monitoring and service discovery into two separate timed tasks is clearly a more reasonable choice.
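The separation argued for above can be sketched as two independent scheduled tasks with different periods. Everything here (the class name, the intervals, the shape of the registry query) is a hypothetical illustration, not Nacos client code:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Illustrative sketch: heartbeat and service discovery as two separate tasks.
class ClientTasks {

    final Map<String, Long> lastBeatTime = new ConcurrentHashMap<>();
    final List<String> serviceList = new CopyOnWriteArrayList<>();
    final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);

    // Heartbeat: frequent, reports liveness only.
    void sendHeartbeat(String instanceId) {
        lastBeatTime.put(instanceId, System.currentTimeMillis());
    }

    // Discovery: less frequent, replaces the local instance list.
    void refreshServiceList(List<String> freshInstances) {
        serviceList.clear();
        serviceList.addAll(freshInstances);
    }

    // Assumed intervals: heartbeat every 5s, discovery refresh every 30s.
    void start(String instanceId, Supplier<List<String>> registryQuery) {
        scheduler.scheduleWithFixedDelay(() -> sendHeartbeat(instanceId), 0, 5, TimeUnit.SECONDS);
        scheduler.scheduleWithFixedDelay(() -> refreshServiceList(registryQuery.get()), 0, 30, TimeUnit.SECONDS);
    }
}
```

Keeping the two periods independent lets you tune failure detection and list freshness separately, which is exactly why mixing them into one task would be awkward.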

Next, let's dive into the key code for service discovery to see exactly how this mechanism is implemented:

public void run() {
    // Omitting part of the code
    if (serviceObj == null) {
        serviceObj = namingClientProxy.queryInstancesOfService(serviceName, groupName, clusters, 0, false);
        serviceInfoHolder.processServiceInfo(serviceObj);
        lastRefTime = serviceObj.getLastRefTime();
        return;
    }

    if (serviceObj.getLastRefTime() <= lastRefTime) {
        serviceObj = namingClientProxy.queryInstancesOfService(serviceName, groupName, clusters, 0, false);
        serviceInfoHolder.processServiceInfo(serviceObj);
    }
    // Omitting part of the code
}

Of course, we'll explore the server-side processing logic next, and here are the key code sections for server-side processing:

public QueryServiceResponse handle(ServiceQueryRequest request, RequestMeta meta) throws NacosException {
    String namespaceId = request.getNamespace();
    String groupName = request.getGroupName();
    String serviceName = request.getServiceName();
    Service service = Service.newService(namespaceId, groupName, serviceName);
    String cluster = null == request.getCluster() ? "" : request.getCluster();
    boolean healthyOnly = request.isHealthyOnly();
    ServiceInfo result = serviceStorage.getData(service);
    ServiceMetadata serviceMetadata = metadataManager.getServiceMetadata(service).orElse(null);
    result = ServiceUtil.selectInstancesWithHealthyProtection(result, serviceMetadata, cluster, healthyOnly, true,
            meta.getClientIp());
    return QueryServiceResponse.buildSuccessResponse(result);
}

In this way, the client obtains the key information it needs about the service.

Load Balancing Algorithm

If there are multiple IP addresses for the same microservice, how do we choose a specific server when making a service call? Typically, we think of using Nginx as a load balancing tool on the server side. However, in addition to load balancing on the server side, we can also configure load algorithms on the client side of the microservice to optimize the distribution of requests.

At this point, it's important to note that this part of the logic is not actually the responsibility of Nacos but of another component, Ribbon. Ribbon specializes in client-side load balancing, ensuring that in a microservices architecture the client intelligently chooses the right server to call, thus improving performance and stability.

Next, we can take a deeper look at Ribbon's key code to understand how it selects servers. Specifically, Ribbon intercepts outgoing requests through an interceptor (LoadBalancerInterceptor) and picks the appropriate server based on a predefined load-balancing rule.

image

Let's take a deeper look at the key code. In fact, all the logic of the load balancing algorithm is concentrated in the implementation of the getServer method.

public <T> T execute(String serviceId, LoadBalancerRequest<T> request, Object hint)
        throws IOException {
    ILoadBalancer loadBalancer = getLoadBalancer(serviceId);
    Server server = getServer(loadBalancer, hint);
    if (server == null) {
        throw new IllegalStateException("No instances available for " + serviceId);
    }
    RibbonServer ribbonServer = new RibbonServer(serviceId, server,
            isSecure(server, serviceId),
            serverIntrospector(serviceId).getMetadata(server));

    return execute(serviceId, ribbonServer, request);
}
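To illustrate the kind of selection a rule like NacosRule performs inside getServer, here is a minimal weighted-random chooser. It is a sketch of the general technique only; the class and its API are invented for this example and are not Ribbon or Nacos code:

```java
import java.util.List;
import java.util.Random;

// Illustrative sketch: pick a server with probability proportional to its weight.
class WeightedRandomBalancer {

    static class Server {
        final String address;
        final double weight;

        Server(String address, double weight) {
            this.address = address;
            this.weight = weight;
        }
    }

    private final Random random;

    WeightedRandomBalancer(Random random) {
        this.random = random;
    }

    Server choose(List<Server> servers) {
        double total = 0;
        for (Server s : servers) {
            total += s.weight;
        }
        // Drop a random point on the [0, total) line and walk the segments.
        double point = random.nextDouble() * total;
        for (Server s : servers) {
            point -= s.weight;
            if (point <= 0) {
                return s;
            }
        }
        // Guard against floating-point rounding.
        return servers.get(servers.size() - 1);
    }
}
```

A higher weight gives a server a proportionally larger segment of the line, and therefore a proportionally larger share of the traffic.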

We can locally configure load balancing policies to flexibly adjust the behavior of service calls based on specific business requirements and scenarios.

# Name of the microservice being called
mall-order:
  ribbon:
    # Specify the load balancing rule provided by Nacos (prefers instances in the same cluster, then weighted random selection)
    NFLoadBalancerRuleClassName: com.alibaba.cloud.nacos.ribbon.NacosRule

Of course, we can also configure globally in order to manage load balancing policies and parameters uniformly across the entire system.

@Bean
public IRule ribbonRule() {
    // Specify to use the load balancing policy provided by Nacos (prioritize calls to instances in the same cluster, based on random weights)
    return new NacosRule();
}

Summary

Through this article's deep dive, we have gained a more comprehensive understanding of the service discovery and registration mechanisms in microservice architectures. From the limitations of monolithic architectures to the flexibility of microservices, we have witnessed the journey of architectural evolution. The importance of service discovery and registration as the cornerstone of microservice communication cannot be overstated. With Nacos, a powerful registry, we not only achieve dynamic registration and discovery of services, but also ensure high availability and stability through mechanisms such as heartbeat monitoring and load balancing.

In terms of technology selection, the gRPC implementation of Nacos demonstrated its potential for performance optimization, while also presenting the challenge of system complexity. However, with well-designed client-side and server-side code, we were able to effectively manage service instances and achieve fast response and load balancing of services. The implementation of these mechanisms not only improves the scalability and fault tolerance of the system, but also provides a solid foundation for the rapid development of microservices.

Whether you're new to microservices or an expert with years of experience, we hope the analysis and practical examples in this article will provide you with new perspectives and insights!


I'm Rain, a Java server-side coder, studying the mysteries of AI technology. I love technical communication and sharing, and I am passionate about open source community. I am also a Tencent Cloud Creative Star, Ali Cloud Expert Blogger, Huawei Cloud Enjoyment Expert, and Nuggets Excellent Author.

💡 I won't be shy about sharing my personal explorations and experiences on the path of technology, in the hope that I can bring some inspiration and help to your learning and growth.

🌟 Welcome to the effortless drizzle! 🌟