
k8s-Using Network Policies for Network Isolation


I. Requirement

Namespaces in Kubernetes are mainly used to organize and isolate resources, but by default, Pods in different namespaces can communicate with each other. To achieve stricter network isolation, a single Kubernetes cluster needs to isolate network environments by namespace, for example a dev01 and a test01 environment. Network Policies are the mechanism Kubernetes provides to control network traffic between Pods; you can define a NetworkPolicy for each namespace to limit communication between Pods.

II. Relevant explanations

podSelector: The pod selector selects, by label, Pods in the same namespace as the NetworkPolicy; the rules defined in the NetworkPolicy are applied to the selected Pods. This field is optional; when it is omitted or empty, all Pods in the namespace are selected.
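
For example, a spec-level selector could target only Pods that carry a specific label (app: web is just an illustrative label):

spec:
  podSelector:
    matchLabels:
      app: web   # only Pods labeled app=web are governed by this policy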

policyTypes: The rules defined by a NetworkPolicy fall into two types: Ingress rules for traffic entering Pods and Egress rules for traffic leaving Pods. This field acts as a switch: if it contains Ingress, the rules defined in the ingress section take effect; if it contains Egress, the rules defined in the egress section take effect; if it contains both, all rules take effect. The field is also optional: if it is not specified, Ingress is included by default, and Egress is included only if the policy defines an egress section. Note that an ingress or egress section that is listed in policyTypes but has no rules is still a rule, namely the default-deny rule, rather than no rule at all; this is illustrated below.
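
A minimal sketch of this default-deny behavior, assuming a namespace named dev01: because both policy types are listed but no ingress or egress rules are defined, all traffic to and from the selected Pods is denied.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # hypothetical policy name
  namespace: dev01         # assumed namespace
spec:
  podSelector: {}          # select every Pod in the namespace
  policyTypes:
  - Ingress
  - Egress
  # no ingress/egress sections are defined, so both directions fall back to deny-all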

ingress and egress: These two fields define the traffic rules that are allowed to enter the Pod and the traffic rules that are allowed to leave the Pod, respectively. The structure of these two fields and the subsections they contain are explained in detail below.

Ingress rule structure

spec:
  ingress:
  - from:
    - podSelector:
        matchLabels: {} # Select Pods with specific labels
    - namespaceSelector:
        matchLabels: {} # Select namespaces with specific labels
    - ipBlock:
        cidr: 0.0.0.0/0 # CIDR address range
        except:
        - 10.0.0.0/8 # Excluded CIDR address ranges
    ports:
    - protocol: TCP # Protocol type
      port: 80 # Specific port (or the start of a range)
      endPort: 8080 # For a port range, specify the end port

Egress rule structure

spec:
  egress:
  - to:
    - podSelector:
        matchLabels: {} # Select Pods with specific labels
    - namespaceSelector:
        matchLabels: {} # Select namespaces with specific labels
    - ipBlock:
        cidr: 0.0.0.0/0 # CIDR address range
        except:
        - 10.0.0.0/8 # Excluded CIDR address ranges
    ports:
    - protocol: TCP # Protocol type
      port: 80 # Specific port (or the start of a range)
      endPort: 8080 # For a port range, specify the end port

Typical example

Suppose you have a NetworkPolicy that allows Pods to receive traffic from the CIDR range 192.168.1.0/24 and to send traffic to the CIDR range 10.0.0.0/8, over TCP only, and only on ports 80 and 443.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-traffic
  namespace: my-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 192.168.1.0/24
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/8
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
  policyTypes:
  - Ingress
  - Egress

Notes

  • Applicability: This NetworkPolicy applies to all Pods in the my-namespace namespace.
  • Traffic control: Both Ingress and Egress traffic are controlled.
  • CIDR ranges: 192.168.1.0/24 and 10.0.0.0/8 are typically used in private networks.
  • Port control: Only TCP traffic on ports 80 and 443 is allowed.
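
To try this example, the manifest could be saved to a file (allow-traffic.yaml is just an assumed file name) and applied and inspected with kubectl:

kubectl apply -f allow-traffic.yaml
kubectl describe networkpolicy allow-traffic -n my-namespace
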
III. Test planning

1. Create namespaces

2. Create Pods and NodePort Services

3. Test before applying the policy

4. Create network policy 1 - Pod isolation

5. Create network policy 2 - namespace isolation

6. Create network policy 3 - business namespace isolation

IV. Implementation

1. Create namespaces

Create the 3 namespaces sub1, sub2, and sub3 used for testing.

Because a later network policy uses namespaceSelector, the namespaces must carry labels. You can add the labels right after creating the namespaces, as below, or add them manually later with kubectl label.

kubectl create ns sub1
kubectl create ns sub2
kubectl create ns sub3
kubectl label ns sub1 ns=sub1
kubectl label ns sub2 ns=sub2
kubectl label ns sub3 ns=sub3
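
To confirm that the namespaces and labels were created as expected:

kubectl get ns sub1 sub2 sub3 --show-labels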

If the images come from a private registry, remember to create an image-pull secret in each namespace:

kubectl create secret docker-registry my-registry-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD -n namespace

2. Create Pods and NodePort Services

 2.1. In sub1, create sub1-pod1 with its Service sub1-pod1-nodeport, plus sub1-pod2;
 2.2. In sub2, create sub2-pod1 with its Service sub2-pod1-nodeport, plus sub2-pod2;
 2.3. In sub3, create sub3-pod1;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub1-pod1
  namespace: sub1
spec:
  selector:
    matchLabels:
      app: sub1pod1
  replicas: 1
  template:
    metadata:
      labels:
        app: sub1pod1
    spec:
      containers:
      - name: my-test01-01
        image: /k8s-imgs/account-platform-admin:dev01-084e7d9
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: default-secret
      - name: swr-secret

---

apiVersion: v1
kind: Service
metadata:
  name: sub1-pod1-nodeport
  namespace: sub1
spec:
  type: NodePort
  selector:
    app: sub1pod1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8103
      nodePort: 32700

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub1-pod2
  namespace: sub1
spec:
  selector:
    matchLabels:
      app: sub1pod2
  replicas: 1
  template:
    metadata:
      labels:
        app: sub1pod2
    spec:
      containers:
      - name: my-test01-02
        image: /k8s-imgs/account-platform-admin:dev01-084e7d9
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: default-secret
      - name: swr-secret

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub2-pod1
  namespace: sub2
spec:
  selector:
    matchLabels:
      app: sub2pod1
  replicas: 1
  template:
    metadata:
      labels:
        app: sub2pod1
    spec:
      containers:
      - name: my-test02-01
        image: /k8s-imgs/account-platform-admin:dev01-084e7d9
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: default-secret
      - name: swr-secret
---

apiVersion: v1
kind: Service
metadata:
  name: sub2-pod1-nodeport
  namespace: sub2
spec:
  type: NodePort
  selector:
    app: sub2pod1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8103
      nodePort: 32701

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub2-pod2
  namespace: sub2
spec:
  selector:
    matchLabels:
      app: sub2pod2
  replicas: 1
  template:
    metadata:
      labels:
        app: sub2pod2
    spec:
      containers:
      - name: my-test02-02
        image: /k8s-imgs/account-platform-admin:dev01-084e7d9
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: default-secret
      - name: swr-secret

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sub3-pod1
  namespace: sub3
spec:
  selector:
    matchLabels:
      app: sub3pod1
  replicas: 1
  template:
    metadata:
      labels:
        app: sub3pod1
    spec:
      containers:
      - name: my-test03-01
        image: /k8s-imgs/account-platform-admin:dev01-084e7d9
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: default-secret
      - name: swr-secret

View created resources
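
The created resources can be checked per namespace, for example:

kubectl get deploy,pod,svc -n sub1 -o wide
kubectl get deploy,pod,svc -n sub2 -o wide
kubectl get pod -n sub3 -o wide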

3. Test before applying the policy

Before applying any network policy, verify that the Pods can communicate with each other and with the external network.

sub1-pod1 ping sub1-pod2, sub2-pod1, sub3-pod1

An external host accesses sub1-pod1-nodeport and sub2-pod1-nodeport

Use the IP of any node in the k8s cluster, e.g. 10.34.106.14.
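
These checks can be run with kubectl exec; a sketch, assuming the container image ships with ping and that the target Pod IP has been looked up first:

# look up the Pod IPs in the target namespace
kubectl get pod -n sub2 -o wide
# ping from sub1-pod1 (replace 10.243.x.x with the actual Pod IP)
kubectl exec -n sub1 deploy/sub1-pod1 -- ping -c 3 10.243.x.x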

Summary: the tests above show that, before any network policy is added, the Pods can reach one another and can also reach the external network.

4. Create network policy 1 - Pod isolation

Policy description: create a policy in sub1 so that Pods in sub1 cannot communicate with each other or with Pods in other namespaces, and can only reach the external network.
Testing process:
4.1. sub1-pod1, sub1-pod2, and sub2-pod1 cannot ping each other;
4.2. sub1-pod1 and sub1-pod2 can ping the external network (including domain names), and external hosts can reach sub1-pod1-nodeport;

The Pod CIDR of this k8s cluster is 10.243.0.0/16.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pod-policy
  namespace: sub1
spec:
  podSelector: {}
  ingress:
    - from:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
          - 10.243.0.0/16
  egress:
    - to:
      - ipBlock:
          cidr: 0.0.0.0/0
          except:
          - 10.243.0.0/16
  policyTypes:
  - Egress
  - Ingress

Explanation:

  1. podSelector: {} selects all Pods in the namespace.
  2. ingress: defines the rules for traffic entering the selected Pods.
    • from: the sources from which traffic is allowed.
    • ipBlock: allows traffic from all IP addresses except the CIDR range 10.243.0.0/16.
  3. egress: defines the rules for traffic leaving the selected Pods.
    • to: the destinations to which traffic is allowed.
    • ipBlock: allows traffic to all IP addresses except the CIDR range 10.243.0.0/16.
  4. policyTypes: specifies which traffic types this NetworkPolicy controls; here both Ingress and Egress.

Effect:

  • Ingress: allows traffic from all sources except the CIDR range 10.243.0.0/16.
  • Egress: allows traffic to all destinations except the CIDR range 10.243.0.0/16.

Testing:

Create the policy
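
For example, assuming the manifest above is saved as pod-policy.yaml:

kubectl apply -f pod-policy.yaml
kubectl get networkpolicy -n sub1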

sub1-pod1 ping sub1-pod2 and sub2-pod1 (not working)

sub2-pod1 ping sub1-pod1 and sub1-pod2 (not working)

sub1-pod1 ping external ip (pass)

sub1-pod1 ping a domain name (not working) - because DNS resolution is provided by the DNS service in the kube-system namespace, and the sub1 policy blocks all other namespaces; the approach in section 6 solves this.
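
If only DNS needs to keep working, without allowing whole namespaces as in section 6, an additional policy limited to port 53 could be added alongside pod-policy. A sketch, assuming Kubernetes 1.21+ where every namespace automatically carries the kubernetes.io/metadata.name label (allow-dns is a hypothetical policy name):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns        # hypothetical additional policy
  namespace: sub1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system   # well-known label set by Kubernetes
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53

Because NetworkPolicies are additive, this only opens DNS egress to kube-system; everything else defined in pod-policy still applies.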

External access sub1-pod1-nodeport (pass)

Use the IP of any node in the k8s cluster, e.g. 10.34.106.14.

5. Create network policy 2 - namespace isolation

Policy description: create a policy in sub2 so that Pods in sub2 can communicate with each other but not with Pods in other namespaces, and can still reach the external network.

Testing process:
5.1. sub2-pod1 and sub2-pod2 can ping each other;
5.2. sub2-pod1, sub2-pod2, and sub3-pod1 cannot ping each other;
5.3. sub2-pod1 and sub2-pod2 can ping the external network (including domain names), and external hosts can reach sub2-pod1-nodeport;

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: sub2
  namespace: sub2
spec:
  podSelector: {}
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.243.0.0/16
    - namespaceSelector:    # If you don't want to label the namespace, replace this entry with "- podSelector: {}" to allow traffic from all Pods in the same namespace
        matchLabels:
          ns: sub2
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.243.0.0/16
    - namespaceSelector:    # If you don't want to label the namespace, replace this entry with "- podSelector: {}" to allow traffic from all Pods in the same namespace
        matchLabels:
          ns: sub2
  policyTypes:
  - Egress
  - Ingress

Explanation

  1. podSelector: {} selects all Pods in the namespace.
  2. ingress: defines the rules for traffic entering the selected Pods.
    • from: the sources from which traffic is allowed.
    • ipBlock: allows traffic from all IP addresses except the CIDR range 10.243.0.0/16.
    • namespaceSelector: allows traffic from Pods in namespaces labeled ns=sub2.
    • A podSelector: {} in the from field would instead allow traffic from all Pods in the same namespace.
  3. egress: defines the rules for traffic leaving the selected Pods.
    • to: the destinations to which traffic is allowed.
    • ipBlock: allows traffic to all IP addresses except the CIDR range 10.243.0.0/16.
    • namespaceSelector: allows traffic to Pods in namespaces labeled ns=sub2.
    • A podSelector: {} in the to field would instead allow traffic to all Pods in the same namespace.
  4. policyTypes: specifies which traffic types this NetworkPolicy controls; here both Ingress and Egress.

Effect

  • Ingress: allows traffic from all sources except the CIDR range 10.243.0.0/16, and blocks other namespaces. The namespaceSelector additionally allows namespaces labeled ns=sub2; using podSelector: {} instead would allow traffic from all Pods in the same namespace.
  • Egress: allows traffic to all destinations except the CIDR range 10.243.0.0/16, and blocks other namespaces. The namespaceSelector additionally allows namespaces labeled ns=sub2; using podSelector: {} instead would allow traffic to all Pods in the same namespace.

Testing

Create the policy

sub2-pod1 ping sub2-pod2 (pass)

sub2-pod1 ping sub3-pod1 (not working)

sub3-pod1 ping sub2-pod1, sub2-pod2 (not working)

sub2-pod1 ping external ip (pass)

sub2-pod1 ping a domain name (not working) - because DNS resolution is provided by the DNS service in the kube-system namespace, which this policy blocks; the example in section 6 solves this.

External Access sub2-pod1-nodeport (pass)

Use the IP of any node in the k8s cluster, e.g. 10.34.106.14.

6. Create network policy 3 - business namespace isolation

In practice, Pods in one namespace may call Pods in other namespaces and may also use services in certain system namespaces (for example, the DNS service), so a namespace usually cannot be completely isolated from all others; you only need to isolate the namespaces that you are sure have no business calls.

Policy description: update the policy created in sub2 in step 5 so that Pods in sub2 can communicate with all addresses except Pods in sub3.

Testing process:

6.1. sub2-pod1 and sub2-pod2 ping each other;

6.2. sub2-pod1 and the coredns Pod in the kube-system namespace can ping each other;

6.3. sub2-pod1 and sub3-pod1 cannot ping each other;

6.4. sub2-pod1 can ping the external network (domain names), and external hosts can reach sub2-pod1-nodeport;

Since a NetworkPolicy offers no direct way to deny namespaces that carry a certain label, this can only be achieved indirectly: allow (whitelist) the namespaces that carry certain labels, which implicitly blocks all the rest.

How the labels are assigned is flexible and can be adapted to the actual situation; a few options for reference:

1. Give all namespaces that should be allowed one common label, so the policy only needs to allow that single label;

2. Give each namespace its own label, so the policy must allow each different label;

3. Give the common system namespaces one shared label and give each business namespace its own label;

This example uses option 2 and assigns a different label to each namespace that needs to be allowed.

Add labels to the namespaces

kubectl label ns kube-system ns=kube-system
kubectl label ns kuboard ns=kuboard
kubectl label ns kube-public ns=kube-public

Network Isolation Configuration

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ns-policy-2
  namespace: sub2
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.243.0.0/16
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kuboard
  - from:
    - namespaceSelector:
        matchLabels:
          ns: kube-public
  egress:
  - to:
    - podSelector: {}
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.243.0.0/16
  - to:
    - namespaceSelector:
        matchLabels:
          ns: kube-system
  - to:
    - namespaceSelector:
        matchLabels:
          ns: kuboard
  - to:
    - namespaceSelector:
        matchLabels:
          ns: kube-public
  policyTypes:
  - Egress
  - Ingress

Explanation

  • podSelector: {} selects all Pods in the namespace.
  • policyTypes: Ingress and Egress means both incoming and outgoing traffic is controlled.
  • ingress: the podSelector: {} in the from field allows traffic from all Pods in the same namespace; the ipBlock allows traffic from all IP addresses except the Kubernetes internal CIDR range 10.243.0.0/16.
  • egress: the podSelector: {} in the to field allows traffic to all Pods in the same namespace; the ipBlock allows traffic to all IP addresses except the Kubernetes internal CIDR range 10.243.0.0/16.
  • from and to: the namespaceSelector entries allow Pods in namespaces labeled ns=kube-system, ns=kuboard, and ns=kube-public, which indirectly blocks traffic from and to any other namespace.

Testing

Create the policy

sub2-pod1 ping sub2-pod2 (pass)

sub2-pod1 ping coredns-pod in kube-system namespace (pass)
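
DNS resolution from sub2 can also be checked directly; a sketch, assuming the container image includes nslookup:

kubectl exec -n sub2 deploy/sub2-pod1 -- nslookup kubernetes.default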

sub2-pod1 ping sub3-pod1 (not working)

sub2-pod1 can ping the external network and domain names (pass)

External Access sub2-pod1-nodeport (pass)

Use the IP of any node in the k8s cluster, e.g. 10.34.106.14.