0. Preamble
Having already introduced the core concepts and Services in k8s, this article continues with a look at what Ingress is.
The importance of Ingress can hardly be overstated: it not only unifies the entry point for external access to the cluster, but also provides key capabilities such as advanced routing, layer-7 load balancing, SSL termination, dynamic configuration updates, and grayscale (canary) releases.
This is described in more detail below.
I. About Ingress
Ingress is an API object in K8s used to manage and configure external access to services inside the cluster. It defines HTTP and HTTPS routing rules and directs requests from a load balancer outside the cluster to the appropriate Service. The flexibility of Ingress allows us to implement advanced application routing, SSL termination, and load balancing.
With Ingress, we can expose multiple services within the cluster to the outside with customized routing settings as needed, which facilitates application scaling and flexible deployment.
What are the uses of Ingress:
- Unified access control. Ingress provides a unified entry point for managing access and traffic control for multiple services.
- HTTP/HTTPS routing. With Ingress, you can configure hostname- and path-based routing rules to direct external requests to services within the cluster.
- SSL termination. Ingress can be configured with SSL certificates to encrypt and decrypt external traffic and secure data transmission.
- Load balancing. The Ingress controller enables load balancing for multi-instance services, distributing requests across multiple back-end instances.
There are two key components involved in making Ingress work: Ingress resources and Ingress controllers.
- Ingress resources are user-defined Kubernetes resources that describe the mapping between hostnames, paths, and back-end services.
- Ingress controllers are the components that actually implement the Ingress rules. Common Ingress controllers include NGINX, Traefik, and HAProxy, to name a few. The controller watches for changes to Ingress resources and configures its proxy server accordingly for routing and traffic management.
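Which controller picks up a given Ingress resource is usually selected through the ingressClassName field. Below is a minimal sketch, assuming an IngressClass named nginx has been installed along with the NGINX Ingress controller (the hostname and Service name are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx # selects which installed Ingress controller handles this resource
  rules:
  - host: example.com # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service # illustrative Service name
            port:
              number: 80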
Ingress workflow:
- Define the Ingress resource. The user creates an Ingress resource that defines the mapping of hostnames, paths, and back-end services.
- Ingress controller monitoring. The Ingress controller constantly monitors changes to Ingress resources.
- Configure the proxy server. Based on the definition of the Ingress resource, the Ingress controller configures its proxy server (such as NGINX) to match requests.
- Process requests. When an external request arrives at the cluster, the Ingress controller's proxy server routes the request according to the Ingress rules and forwards it to the appropriate service.
Ingress application scenarios:
- Multi-service management. In environments where multiple services need to be managed, Ingress enables unified entry-point control.
- Domain-based routing. Run multiple applications in the same cluster, accessed through different domains or subdomains.
- Path-based routing. Route traffic to different services based on the request path, e.g. the /api path to one microservice and the /web path to another.
- SSL termination. Scenarios that require HTTPS can terminate encrypted traffic by configuring SSL certificates through Ingress.
- Load balancing. Balance load across multi-instance services to improve service availability and scalability.
Reference:/lwxvgdv/article/details/139505471
II. Ingress Structure and Configuration Examples
2.1 Basic Structure of Ingress Configuration
- Rules: Each Ingress object can contain multiple rules, each of which defines a set of path matching rules and the back-end services associated with them.
- Backend Services: The back-end specified in a rule is the target when an Ingress-routed request arrives; this is typically a Service in the cluster, which in turn forwards to Pods.
- Paths: The path defines how a request is routed to the back-end service. Depending on the Ingress controller, paths can also be matched using wildcards or regular expressions.
- TLS (Transport Layer Security): Ingress also supports TLS for enabling HTTPS. The TLS configuration includes certificates and keys to ensure that data is secure in transit.
Here is an example of a simple Ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata: # Contains metadata about the resource, such as name, labels, etc.
  name: my-ingress
spec:
  rules: # Defines a set of rules to match incoming HTTP requests and route them to back-end services.
  - host: my-app.example.com # Specifies which hostname this rule applies to (example value).
    http: # Defines HTTP-related configuration.
      paths: # Defines a set of paths and their corresponding back-end services.
      # Specifies the path prefix, in this case /app.
      # This means that all requests starting with /app will be routed to this backend.
      - path: /app
        # Specifies the type of the path, in this case Prefix, which means prefix matching.
        pathType: Prefix
        backend: # Defines the details of the backend Service.
          service:
            name: my-app-service # Name of the backend Service.
            port:
              number: 80 # Port of the backend Service.
  tls: # Defines the TLS configuration for encrypted transmission.
  - hosts: # Specifies which hostnames need to be encrypted with TLS.
    - my-app.example.com # example value
    # Specifies the name of the Secret where the TLS certificate and key are stored.
    secretName: my-tls-secret
In this example, we define an Ingress object that routes /app requests to a Service named my-app-service with HTTPS enabled.
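One detail worth noting: the my-tls-secret referenced above must already exist in the same namespace as the Ingress. Here is a minimal sketch of such a Secret of type kubernetes.io/tls (the empty strings are placeholders for base64-encoded PEM data):
apiVersion: v1
kind: Secret
metadata:
  name: my-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: "" # base64-encoded server certificate goes here
  tls.key: "" # base64-encoded private key goes here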
2.2 Ingress Usage Examples
To better understand the use of Ingress, here is a concrete example to demonstrate.
Let's say we have a web application that consists of front-end (frontend) and back-end (backend) services. Now, we want to expose these two services externally through Ingress with customized routing on the path.
First, we create the front-end and back-end Deployments and Services. The front-end Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: web-server
        image: my-frontend-image:latest
        ports:
        - containerPort: 80
The front-end Service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
The back-end Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: api-server
        image: my-backend-image:latest
        ports:
        - containerPort: 8080
And the back-end Service:
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
Then we define the Ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my-app.example.com # example hostname
    http:
      paths:
      - path: /frontend
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
      - path: /backend
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8080
  tls:
  - hosts:
    - my-app.example.com # example hostname
    secretName: my-tls-secret
In this example, we define an Ingress object that routes /frontend requests to the front-end service and /backend requests to the back-end service. In addition, we have enabled HTTPS and specified a Secret for the TLS certificate.
2.3 Dynamically Updating Ingress Rules
One of the powerful things about Ingress is that it supports dynamic updates.
When routing rules need to be adjusted, the Ingress object can be modified directly without restarting the application or recreating the service.
The following is a sample configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my-app.example.com # example hostname
    http:
      paths:
      - path: /frontend
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
  tls:
  - hosts:
    - my-app.example.com # example hostname
    secretName: my-tls-secret
In this example, we keep only the routing rule for the front-end service. After applying this updated Ingress object, the Ingress controller automatically updates its proxy configuration so that only /frontend requests reach the front-end service.
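Grayscale (canary) releases, mentioned in the preamble, are usually implemented through controller-specific features rather than the core Ingress API. Here is a minimal sketch, assuming the cluster runs the NGINX Ingress controller and that a hypothetical frontend-service-v2 (the new version of the front-end) exists, using the controller's canary annotations to shift roughly 20% of traffic:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress-canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true" # mark this Ingress as a canary of the primary one
    nginx.ingress.kubernetes.io/canary-weight: "20" # route roughly 20% of matching traffic here
spec:
  rules:
  - host: my-app.example.com # illustrative hostname; should match the primary Ingress rule
    http:
      paths:
      - path: /frontend
        pathType: Prefix
        backend:
          service:
            name: frontend-service-v2 # hypothetical new version of the front-end service
            port:
              number: 80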
Reference:/p/676245770
III. The Approximate Path of an External Request
The approximate path of an external request is as follows:
1) Users make HTTP/HTTPS requests from clients such as web, mobile, or PC.
2) Since application services are usually exposed through a domain name, the request first goes through DNS resolution to obtain a public IP address.
3) The public IP address is usually bound to a load balancer, so the request next reaches this load balancer.
- A load balancer can be hardware or software, and its public IP address is usually kept stable (fixed), because switching IP addresses can make the service unreachable for a period of time due to DNS caching.
- The load balancer is an important middle layer: externally it receives public network traffic, and internally it manages and forwards that traffic.
4) The load balancer then forwards the request to one of the traffic entry points of the k8s cluster, usually the Ingress.
- Ingress is responsible for route forwarding within the cluster and can be thought of as the cluster's gateway.
- Ingress itself is just configuration; the actual traffic forwarding is done by an Ingress controller, for which there are many options, such as NGINX, HAProxy, Traefik, Kong, and more.
5) The Ingress then forwards the request to a Service based on user-defined routing rules.
- For example, forwarding based on the requested path;
- or forwarding based on the requested Host (see the host-based sketch after this walkthrough).
6) The Service forwards the request to a Pod based on its selector (matching labels).
- There are multiple types of Service; the default type within a cluster is ClusterIP.
- A Service is also essentially configuration; it is ultimately realized by the kube-proxy component on each Node, which performs the actual request forwarding by setting up iptables/IPVS rules.
- A Service may correspond to multiple Pods, but each request is ultimately forwarded to only one Pod according to the load-balancing rules.
7) Finally, the Pod delivers the request to a Container inside it.
- There may be multiple Containers inside the same Pod, but they cannot share the same port, so the request is delivered to the corresponding Container based on the port number.
This is how a typical HTTP request from outside the cluster reaches a Container in a Pod.
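To make step 5 more concrete, here is a minimal sketch of host-based routing that directs two hostnames on the same Ingress to the two Services defined earlier (the hostnames are illustrative):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: host-routing-example
spec:
  rules:
  - host: web.example.com # requests for this hostname...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service # ...go to the front-end Service
            port:
              number: 80
  - host: api.example.com # while requests for this hostname...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service # ...go to the back-end Service
            port:
              number: 8080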
Note that, because network configuration is flexible, the request flow described above is not the only possibility. For example:
- In a cloud environment, you can use a Service of type LoadBalancer to bind directly to a load balancer provided by the cloud provider, and then connect it to the Ingress or to other services.
- You can also expose ports on the nodes directly with a Service of type NodePort and build your own load balancer in front of those nodes (a sketch follows this list).
- If the service being deployed is particularly simple and does not need unified traffic management, it is also possible to do without Ingress.
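As an illustration of the NodePort option mentioned above, here is a minimal sketch that exposes the front-end Service on a fixed port of every node (the nodePort value is illustrative):
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport
spec:
  type: NodePort # expose the Service on a static port of every node
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80 # cluster-internal Service port
    targetPort: 80 # container port on the Pod
    nodePort: 30080 # node port (by default must fall in the 30000-32767 range)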
In addition, Linux kernel namespaces are what make this resource isolation possible: each Pod has its own Linux namespaces, so different Pods are unaware of each other's resources. There are several kinds of namespaces, including PID, IPC, Network, Mount, Time, and so on. The PID namespace provides process isolation, so each Pod can have its own process number 1, and the Network namespace gives each Pod its own network stack.
Reference:/a/1190000044517338