Using NetworkPolicy in Baidu Intelligent Cloud Container Engine (CCE) Clusters


NetworkPolicy is a Kubernetes resource for defining Pod-level network isolation policies. It describes whether a group of Pods may communicate with other groups of Pods and with other network endpoints. This document demonstrates how to implement NetworkPolicy on CCE using the open-source tools felix or kube-router.

Choose which component to deploy according to your cluster's container network mode.

felix

Note: felix only works with the veth network mode (see the advanced options of the "VPC Network" mode).

felix is a component of Calico, the open-source container networking solution. It runs on every node and is responsible for programming routes, ACLs, and related configuration.

  • Official site: https://docs.projectcalico.org/reference/felix/
  • Project: https://github.com/projectcalico/felix

CCE has modified and adapted felix to provide the container network policy feature.

Deploy felix

To deploy felix on a CCE Kubernetes cluster, use the following YAML:

---
# Source: calico-felix/templates/rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cce-calico-felix
  namespace: kube-system
---
# Source: calico-felix/templates/cce-reserved.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-felix-cce-reserved
  namespace: kube-system
  labels:
    heritage: Helm
    release: RELEASE-NAME
    chart: calico-felix-1.0.0
    app: cce-calico-felix
data:
  hash: "22ec24f7bfe36fe18917ff07659f9e6e3dfd725af4c3371d3e60c7195744bea4"
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: felixconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: FelixConfiguration
    plural: felixconfigurations
    singular: felixconfiguration
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: bgpconfigurations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: BGPConfiguration
    plural: bgpconfigurations
    singular: bgpconfiguration
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ippools.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: IPPool
    plural: ippools
    singular: ippool
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: hostendpoints.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: HostEndpoint
    plural: hostendpoints
    singular: hostendpoint
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterinformations.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: ClusterInformation
    plural: clusterinformations
    singular: clusterinformation
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworkpolicies.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkPolicy
    plural: globalnetworkpolicies
    singular: globalnetworkpolicy
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: globalnetworksets.crd.projectcalico.org
spec:
  scope: Cluster
  group: crd.projectcalico.org
  version: v1
  names:
    kind: GlobalNetworkSet
    plural: globalnetworksets
    singular: globalnetworkset
---
# Source: calico-felix/templates/crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: networkpolicies.crd.projectcalico.org
spec:
  scope: Namespaced
  group: crd.projectcalico.org
  version: v1
  names:
    kind: NetworkPolicy
    plural: networkpolicies
    singular: networkpolicy
---
# Source: calico-felix/templates/rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cce-calico-felix
rules:
  - apiGroups: [""]
    resources: ["pods", "nodes", "namespaces", "configmaps", "serviceaccounts"]
    verbs: ["get", "watch", "list", "update"]
  - apiGroups: ["networking.k8s.io"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: ["extensions"]
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch
  - apiGroups: [""]
    resources:
      - pods/status
    verbs:
      - update
  - apiGroups: ["crd.projectcalico.org"]
    resources: ["*"]
    verbs: ["*"]
---
# Source: calico-felix/templates/rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cce-calico-felix
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cce-calico-felix
subjects:
  - kind: ServiceAccount
    name: cce-calico-felix
    namespace: kube-system
---
# Source: calico-felix/templates/daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: cce-calico-felix
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cce-calico-felix
  template:
    metadata:
      labels:
        app: cce-calico-felix
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      hostPID: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - key: node.cloudprovider.kubernetes.io/uninitialized
          value: "true"
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      terminationGracePeriodSeconds: 0
      serviceAccountName: cce-calico-felix
      hostNetwork: true
      containers:
        - name: policy
          image: registry.baidubce.com/cce-plugin-pro/cce-calico-felix:v3.5.8
          command: ["/bin/policyinit.sh"]
          imagePullPolicy: Always
          env:
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: FELIX_INTERFACEPREFIX
              value: veth
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            httpGet:
              path: /readiness
              port: 9099
              host: localhost
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
 
      volumes:
        - name: lib-modules
          hostPath:
            path: /lib/modules
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-proxy-config
  namespace: kube-system
  labels:
    app: kube-proxy-config
spec:
  selector:
    matchLabels:
      app: kube-proxy-config
  template:
    metadata:
      labels:
        app: kube-proxy-config
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - operator: "Exists"
      restartPolicy: Always
      hostNetwork: true
      containers:
        - name: busybox
          image: busybox
          command:
            - sh
            - /tmp/update-proxy-yaml.sh
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: etc-k8s
              mountPath: /etc/kubernetes/
            - name: shell
              mountPath: /tmp/
 
      terminationGracePeriodSeconds: 0
      volumes:
        - name: etc-k8s
          hostPath:
            path: /etc/kubernetes/
            type: "DirectoryOrCreate"
        - name: shell
          configMap:
            name: update-proxy-yaml-shell
            optional: true
            items:
              - key: update-proxy-yaml.sh
                path: update-proxy-yaml.sh
 
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: update-proxy-yaml-shell
  namespace: kube-system
data:
  update-proxy-yaml.sh: |-
    #!/bin/sh
 
    if [[ -e /etc/kubernetes/proxy.yaml ]]; then
      sed -i 's/masqueradeAll: true/masqueradeAll: false/g' /etc/kubernetes/proxy.yaml
      if grep -q "masqueradeAll: false" /etc/kubernetes/proxy.yaml; then
        echo "update config successfully"
      else
        exit 1
      fi
    else
      echo "/etc/kubernetes/proxy.yaml not exists"
      exit 1
    fi
    sleep infinity
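
Save the manifest to a file and apply it with kubectl. The commands below are a minimal sketch, assuming the manifest is saved as felix.yaml (the file name is only an example). Note that the manifest also deploys a kube-proxy-config DaemonSet whose script sets masqueradeAll to false in each node's /etc/kubernetes/proxy.yaml.

$kubectl apply -f felix.yaml
# check that the felix DaemonSet has a Pod running on every node
$kubectl get daemonset cce-calico-felix -n kube-system
$kubectl get pods -n kube-system -l app=cce-calico-felix -o wide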

kube-router

Note: kube-router only works with the kubenet network mode (see the advanced options of the "VPC Network" mode).

kube-router is a container networking solution for Kubernetes. Its website and source repository are listed below:

  • Official site: https://www.kube-router.io
  • Project: https://github.com/cloudnativelabs/kube-router

kube-router provides three major features:

  • Pod Networking;
  • IPVS/LVS based service proxy;
  • Network Policy Controller.

CCE provides its own container networking implementation, so this document uses only kube-router's Network Policy Controller feature.

Deploy kube-router

To deploy kube-router on a CCE Kubernetes cluster, use the following YAML:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-router
  namespace: kube-system

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
rules:
  - apiGroups:
    - ""
    resources:
      - namespaces
      - pods
      - services
      - nodes
      - endpoints
    verbs:
      - list
      - get
      - watch
  - apiGroups:
    - "networking.k8s.io"
    resources:
      - networkpolicies
    verbs:
      - list
      - get
      - watch
  - apiGroups:
    - extensions
    resources:
      - networkpolicies
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kube-router
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-router
subjects:
- kind: ServiceAccount
  name: kube-router
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-router-cfg
  namespace: kube-system
  labels:
    tier: node
    k8s-app: kube-router
data:
  cni-conf.json: |
    {
      "name":"kubernetes",
      "type":"bridge",
      "bridge":"kube-bridge",
      "isDefaultGateway":true,
      "ipam": {
        "type":"host-local"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-router
  namespace: kube-system
  labels:
    k8s-app: kube-router
spec:
  selector:
    matchLabels:
      k8s-app: kube-router
  template:
    metadata:
      labels:
        k8s-app: kube-router
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: kube-router
      containers:
      - name: kube-router
        image: registry.baidubce.com/cce-plugin-pro/kube-router:latest
        args: ["--run-router=false", "--run-firewall=true", "--run-service-proxy=false"]
        securityContext:
          privileged: true
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        livenessProbe:
          httpGet:
            path: /healthz
            port: 20244
          initialDelaySeconds: 10
          periodSeconds: 3
        volumeMounts:
        - name: lib-modules
          mountPath: /lib/modules
          readOnly: true
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
      initContainers:
      - name: install-cni
        image: registry.baidubce.com/cce-plugin-pro/kube-router-busybox:latest
        imagePullPolicy: Always
        command:
        - /bin/sh
        - -c
        - set -e -x;
          if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
            TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
            cp /etc/kube-router/cni-conf.json ${TMP};
            mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
          fi
        volumeMounts:
        - name: cni-conf-dir
          mountPath: /etc/cni/net.d
        - name: kube-router-cfg
          mountPath: /etc/kube-router
      hostNetwork: true
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
        operator: Exists
      volumes:
      - name: lib-modules
        hostPath:
          path: /lib/modules
      - name: cni-conf-dir
        hostPath:
          path: /etc/cni/net.d
      - name: kube-router-cfg
        configMap:
          name: kube-router-cfg
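
Apply the manifest and confirm that the kube-router DaemonSet is up. A minimal sketch, assuming the manifest is saved as kube-router.yaml (the file name is only an example):

$kubectl apply -f kube-router.yaml
$kubectl get daemonset kube-router -n kube-system
$kubectl get pods -n kube-system -l k8s-app=kube-router -o wide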

Example walkthrough

1. Create namespaces

$kubectl create namespace production
$kubectl create namespace staging

2. Deploy the nginx service

Create the same nginx Deployment in each namespace:

$kubectl apply -f nginx.yaml --namespace=production
$kubectl apply -f nginx.yaml --namespace=staging

The contents of nginx.yaml are as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: hub.baidubce.com/cce/nginx-alpine-go:latest
        ports:
        - containerPort: 80

Verify that the Pods are running:

# staging namespace
$kubectl get pods -n staging
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-7fbd5f4c55-2xgd4   1/1       Running   0          45s
nginx-deployment-7fbd5f4c55-5xr75   1/1       Running   0          45s
nginx-deployment-7fbd5f4c55-fn6lr   1/1       Running   0          20m

# production namespace
$kubectl get pods -n production
NAME                                READY     STATUS    RESTARTS   AGE
nginx-deployment-7fbd5f4c55-m764f   1/1       Running   0          10s
nginx-deployment-7fbd5f4c55-pdhhz   1/1       Running   0          10s
nginx-deployment-7fbd5f4c55-r98w5   1/1       Running   0          20m

When no NetworkPolicy is configured, all Pods can reach each other; you can ping a Pod IP directly.
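
For example, exec into one of the production Pods and ping a staging Pod's IP (the Pod name and IP below are taken from the listings above and will differ in your cluster); at this stage the ping receives replies:

$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping 172.16.0.92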

Testing NetworkPolicy

1. Default deny all ingress traffic

Deny all inbound traffic to Pods in the staging namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: staging
spec:
  podSelector: {}
  policyTypes:
  - Ingress

The fields have the following meanings (a combined example follows the list):

  • podSelector: selects the Pods the policy applies to; an empty selector ({}) matches every Pod in the namespace;
  • policyTypes: the traffic directions the policy covers. NetworkPolicy distinguishes ingress (inbound) and egress (outbound); if omitted, Ingress is always included and Egress is added only when the policy contains egress rules;
  • ingress: inbound allowlist rules. Each rule specifies from (the allowed sources) and ports (the allowed destination ports); from can be of three kinds: ipBlock, namespaceSelector, or podSelector;
  • egress: outbound allowlist rules, analogous to ingress. Each rule specifies to (the allowed destinations) and ports (the allowed destination ports).
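
The policy below is an illustrative combination of these fields, not part of the deployment above: it allows Pods labeled app=nginx in staging to receive TCP traffic on port 80 only from Pods in namespaces labeled name=production and from the 10.0.0.0/24 CIDR. The policy name, the name=production namespace label (which you would have to add to the production namespace yourself), and the CIDR are all placeholders.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-production
  namespace: staging
spec:
  podSelector:
    matchLabels:
      app: nginx              # the Pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:      # any Pod in a namespace labeled name=production
        matchLabels:
          name: production
    - ipBlock:                # plus this IP range
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 80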

After the default-deny policy above is created, the Pod IPs in the staging namespace can no longer be reached from any Pod. For example, access them from a Pod in production:

$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping 172.16.0.92
PING 172.16.0.92 (172.16.0.92): 56 data bytes

2. Default allow all ingress traffic

Allow all inbound traffic to Pods in the staging namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: staging
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
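
NetworkPolicy rules are additive: traffic is allowed as soon as any policy selects it in an allow rule. This allow-all policy therefore overrides the earlier default-deny in staging, and repeating the ping from the production Pod (same placeholder name and IP as above) should now succeed:

$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping 172.16.0.92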

3. Default deny all egress traffic

Deny all outbound traffic from Pods in the production namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
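
To verify, exec into a production Pod and try to reach any destination, for example the staging Pod IP used above (placeholder values); the traffic should be dropped. Note that this policy also blocks DNS lookups from production Pods, since no egress, including UDP port 53, is allowed.

$kubectl exec -it nginx-deployment-7fbd5f4c55-m764f /bin/sh -n production
/ # ping 172.16.0.92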

4. Default allow all egress traffic

Allow all outbound traffic from Pods in the production namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: production
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress

5. Default deny all ingress and all egress traffic

Deny all inbound and outbound traffic for all Pods. This policy has no namespace in its metadata, so it applies to whichever namespace it is created in:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
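
A minimal sketch of applying it, assuming the policy is saved as default-deny-all.yaml (the file name is only an example); the -n flag determines which namespace the policy takes effect in:

$kubectl apply -f default-deny-all.yaml -n staging
$kubectl get networkpolicy -n staging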