This section covers the Kubernetes traffic load-balancing components: Service and Ingress.
1. Service
1.1 Introduction to Service
In Kubernetes, the pod is the carrier of an application: you can reach an application through the pod's IP, but pod IP addresses are not fixed, which makes accessing a service directly via pod IPs inconvenient.
To solve this, Kubernetes provides the Service resource. A Service aggregates the multiple pods that provide the same service and offers a single entry address; by accessing the Service's entry address you reach the pods behind it.
In many cases a Service is only a concept; what really does the work is the kube-proxy service process, one of which runs on every Node. When a Service is created, its information is written into etcd via the api-server; kube-proxy discovers such Service changes through its watch mechanism and converts the latest Service information into the corresponding access rules.
# 10.97.97.97:80 is the access entry provided by the service
# when this entry is accessed, there are three pod services behind it waiting to be called;
# kube-proxy distributes each request to one of the pods using the rr (round-robin) policy
# this rule is generated on every node in the cluster, so access works from any node
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
1.2 kube-proxy currently supports three working modes:
1.2.1 userspace mode
In userspace mode, kube-proxy creates a listening port for every Service. Requests to the Cluster IP are redirected by iptables rules to the port kube-proxy listens on; kube-proxy then picks a serving Pod according to its LB algorithm, establishes a connection to it, and forwards the request. In this mode kube-proxy acts as a layer-4 load balancer. Because kube-proxy runs in user space, forwarding adds data copies between kernel and user space; the mode is fairly stable but not very efficient.
1.2.2 iptables mode
In iptables mode, kube-proxy creates iptables rules for each Pod behind a service, redirecting requests sent to the Cluster IP directly to a Pod IP. In this mode kube-proxy does not act as a layer-4 load balancer; it only maintains the iptables rules. The advantage is higher efficiency than userspace mode; the drawbacks are that it offers no flexible LB policy and cannot retry when a backend Pod is unavailable.
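On a cluster where kube-proxy actually runs in iptables mode, these rules can be inspected directly. A sketch (KUBE-SVC-&lt;HASH&gt; is a placeholder; the hash suffix differs per cluster):

# service entries kube-proxy wrote into the nat table
iptables -t nat -nL KUBE-SERVICES | grep 10.97.97.97
# each KUBE-SVC-<HASH> chain spreads traffic over per-endpoint KUBE-SEP-<HASH> chains
# using the statistic module's random probabilities
iptables -t nat -nL KUBE-SVC-<HASH>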
1.2.3 ipvs mode
ipvs mode is similar to iptables: kube-proxy watches Pod changes and creates the corresponding ipvs rules. ipvs forwards traffic more efficiently than iptables, and it also supports more LB algorithms.
# this mode requires the ipvs kernel modules to be installed; otherwise it degrades to iptables
# enable ipvs
[root@master ~]# kubectl edit cm kube-proxy -n kube-system
change: mode: "ipvs"
# delete the existing kube-proxy pods so they restart with the new mode
[root@master ~]# kubectl delete pod -l k8s-app=kube-proxy -n kube-system
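# before relying on ipvs mode you can verify the ipvs kernel modules are loaded
# (a quick check; module names as in mainline Linux: ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh)
[root@master ~]# lsmod | grep ip_vs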
[root@node1 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
1.3 Service types
The Service resource manifest:
kind: Service # resource type
apiVersion: v1 # resource version
metadata: # metadata
  name: service # resource name
  namespace: dev # namespace
spec: # description
  selector: # label selector; determines which pods this service proxies
    app: nginx
  type: # Service type; specifies how the service is accessed
  clusterIP: # virtual service IP address
  sessionAffinity: # session affinity; supports the two options ClientIP and None
  ports: # port information
  - protocol: TCP
    port: 3017 # service port
    targetPort: 5003 # pod port
    nodePort: 31122 # host port
ClusterIP: the default. A virtual IP automatically assigned by the Kubernetes system, reachable only from inside the cluster
NodePort: exposes the Service on a specified port of the Nodes, so the service can be reached from outside the cluster
LoadBalancer: uses an external load balancer to distribute traffic to the service; note this mode needs support from an external cloud environment
ExternalName: brings a service outside the cluster into the cluster so it can be used directly
1.4 Using Services
1.4.1 Preparing the experiment environment
Before using services, first create 3 pods with a Deployment; note that the pods must carry the label app=nginx-pod
Create deployment.yaml with the following content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pc-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
[root@master ~]# kubectl create -f deployment.yaml
deployment.apps/pc-deployment created
# check the pod details
[root@master ~]# kubectl get pods -n dev -o wide --show-labels
NAME READY STATUS IP NODE LABELS
pc-deployment-66cb59b984-8p84h 1/1 Running 10.244.1.40 node1 app=nginx-pod
pc-deployment-66cb59b984-vx8vx 1/1 Running 10.244.2.33 node2 app=nginx-pod
pc-deployment-66cb59b984-wnncx 1/1 Running 10.244.1.39 node1 app=nginx-pod
# to make later testing easier, change the index.html page of each of the three nginx pods (write a different IP into each)
# kubectl exec -it pc-deployment-66cb59b984-8p84h -n dev /bin/sh
# echo "10.244.1.40" > /usr/share/nginx/html/index.html
# once modified, test access
[root@master ~]# curl 10.244.1.40
10.244.1.40
[root@master ~]# curl 10.244.2.33
10.244.2.33
[root@master ~]# curl 10.244.1.39
10.244.1.39
1.4.2 ClusterIP Services
Create the file service-clusterip.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97 # the service IP address; if omitted, one is generated by default
  type: ClusterIP
  ports:
  - port: 80 # Service port
    targetPort: 80 # pod port
# create the service
[root@master ~]# kubectl create -f service-clusterip.yaml
service/service-clusterip created
# check the service
[root@master ~]# kubectl get svc -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service-clusterip ClusterIP 10.97.97.97 <none> 80/TCP 13s app=nginx-pod
# view the service details
# the Endpoints list here contains the service entry points this service can balance across
[root@master ~]# kubectl describe svc service-clusterip -n dev
Name: service-clusterip
Namespace: dev
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP: 10.97.97.97
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.39:80,10.244.1.40:80,10.244.2.33:80
Session Affinity: None
Events: <none>
# check the ipvs mapping rules
[root@master ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
# access 10.97.97.97:80 and observe the effect
[root@master ~]# curl 10.97.97.97:80
10.244.2.33
Endpoint
Endpoint is a Kubernetes resource object, stored in etcd, that records the access addresses of all the pods behind a service. It is generated from the selector in the service's configuration file.
A Service is backed by a group of Pods, and these Pods are exposed through Endpoints; Endpoints are the collection of endpoints that actually implement the service. In other words, the connection between a service and its pods is made through endpoints.
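The Endpoints object can be inspected directly with kubectl; a quick sketch against the service-clusterip Service above (the AGE value is illustrative; the endpoint list matches the describe output earlier):

[root@master ~]# kubectl get endpoints service-clusterip -n dev
NAME                ENDPOINTS                                      AGE
service-clusterip   10.244.1.39:80,10.244.1.40:80,10.244.2.33:80   25s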
Load distribution policies
Access to a Service is distributed to the backend Pods. Kubernetes currently provides two load distribution policies:
If none is defined, kube-proxy's default policy is used, e.g. random or round-robin
Session affinity based on the client address, i.e. all requests from the same client are forwarded to one fixed Pod
To use this mode, add the sessionAffinity: ClientIP option to the spec, as in the sketch below
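For reference, a minimal sketch: service-clusterip.yaml from above with only the sessionAffinity line added (the transcript below first shows the default rr behavior, then the effect of this change):

apiVersion: v1
kind: Service
metadata:
  name: service-clusterip
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: 10.97.97.97
  type: ClusterIP
  sessionAffinity: ClientIP # all requests from one client IP go to the same pod
  ports:
  - port: 80
    targetPort: 80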
# check the ipvs mapping rules [rr = round-robin]
[root@master ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
# loop access test
[root@master ~]# while true;do curl 10.97.97.97:80; sleep 5; done;
10.244.1.40
10.244.1.39
10.244.2.33
10.244.1.40
10.244.1.39
10.244.2.33
# change the distribution policy to sessionAffinity: ClientIP
# check the ipvs rules [persistent means session persistence]
[root@master ~]# ipvsadm -Ln
TCP  10.97.97.97:80 rr persistent 10800
  -> 10.244.1.39:80               Masq    1      0          0
  -> 10.244.1.40:80               Masq    1      0          0
  -> 10.244.2.33:80               Masq    1      0          0
# loop access test
[root@master ~]# while true;do curl 10.97.97.97; sleep 5; done;
10.244.2.33
10.244.2.33
10.244.2.33
# delete the service
[root@master ~]# kubectl delete -f service-clusterip.yaml
service "service-clusterip" deleted
1.4.3 Headless Services
In some scenarios, developers may not want the load balancing a Service provides and prefer to control the load-balancing policy themselves. For this case Kubernetes offers the headless Service: such a Service is not assigned a Cluster IP, and it can only be reached by querying the service's domain name.
Create service-headliness.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-headliness
  namespace: dev
spec:
  selector:
    app: nginx-pod
  clusterIP: None # setting clusterIP to None creates a headless Service
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
# create the service
[root@master ~]# kubectl create -f service-headliness.yaml
service/service-headliness created
# get the service: note that no CLUSTER-IP has been allocated
[root@master ~]# kubectl get svc service-headliness -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service-headliness ClusterIP None <none> 80/TCP 11s app=nginx-pod
# view the service details
[root@master ~]# kubectl describe svc service-headliness -n dev
Name: service-headliness
Namespace: dev
Labels: <none>
Annotations: <none>
Selector: app=nginx-pod
Type: ClusterIP
IP: None
Port: <unset> 80/TCP
TargetPort: 80/TCP
Endpoints: 10.244.1.39:80,10.244.1.40:80,10.244.2.33:80
Session Affinity: None
Events: <none>
# check how the domain name resolves
[root@master ~]# kubectl exec -it pc-deployment-66cb59b984-8p84h -n dev /bin/sh
/ # cat /etc/resolv.conf
nameserver 10.96.0.10
search dev.svc.cluster.local svc.cluster.local cluster.local
[root@master ~]# dig @10.96.0.10 service-headliness.dev.svc.cluster.local
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.40
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.1.39
service-headliness.dev.svc.cluster.local. 30 IN A 10.244.2.33
1.4.4 NodePort Services
In the previous examples, the Service IPs we created are reachable only from inside the cluster. To expose a Service to the outside, use another Service type: NodePort. NodePort works by mapping the service's port onto a port of the Nodes, after which the service can be reached at NodeIP:NodePort.
Create service-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-nodeport
  namespace: dev
spec:
  selector:
    app: nginx-pod
  type: NodePort # service type
  ports:
  - port: 80
    nodePort: 30002 # the node port to bind (default range 30000-32767); assigned automatically if not specified
    targetPort: 80
# create the service
[root@master ~]# kubectl create -f service-nodeport.yaml
service/service-nodeport created
# check the service
[root@master ~]# kubectl get svc -n dev -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) SELECTOR
service-nodeport NodePort 10.105.64.191 <none> 80:30002/TCP app=nginx-pod
# now a browser on your workstation can reach the pods via port 30002 on any node IP in the cluster
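The same check from the command line, as a sketch (substitute one of your node IPs for the &lt;nodeIP&gt; placeholder):

[root@master ~]# curl <nodeIP>:30002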
1.4.5 LoadBalancer Services
LoadBalancer is very similar to NodePort; both aim to expose a port to the outside. The difference is that LoadBalancer adds a load-balancing device outside the cluster, which requires support from the external environment: requests sent to that device are balanced and then forwarded into the cluster.
Three open-source Kubernetes load balancers: MetalLB vs PureLB vs OpenELB
1. What is OpenELB
K8s exposes services outside the cluster in three ways: NodePort, Ingress, and LoadBalancer. NodePort exposes TCP services (layer 4), but it occupies host ports on cluster nodes and does not suit large-scale use; Ingress exposes HTTP services (layer 7) and can route by domain name; LoadBalancer has traditionally been tied to cloud providers, which can allocate public gateways dynamically.
For a private-cloud cluster that uses no public-cloud services, can LoadBalancer still be used to expose services?
The answer is yes: OpenELB exists precisely to provide LoadBalancer services for bare-metal servers!
OpenELB, the load-balancer plugin open-sourced by QingCloud's KubeSphere container team, has formally passed the review of the CNCF (Cloud Native Computing Foundation) TOC technical committee.
2. Installation and configuration
2.1 Installing OpenELB
Prerequisite:
First enable strictARP for kube-proxy, so that all NICs in the Kubernetes cluster stop answering ARP requests from other NICs and OpenELB handles ARP requests instead.
# note: this is an edit; strictARP defaults to false
# kubectl edit configmap kube-proxy -n kube-system
......
ipvs:
  strictARP: true
......
Installation
# wget -c https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml
# change the image address, two places:
image: k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
changes to
image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
# install
# kubectl apply -f openelb.yaml
# check
[root@master ~]# kubectl get po -n openelb-system
NAME READY STATUS RESTARTS AGE
openelb-admission-create-689pk 0/1 Completed 0 32m
openelb-admission-patch-t77wg 0/1 Completed 1 32m
openelb-keepalive-vip-bbdrn 1/1 Running 0 31m
openelb-keepalive-vip-xsb7t 1/1 Running 0 31m
openelb-manager-5c484bd7cd-5mrkw 1/1 Running 0 32m
2.2 Adding an EIP pool
The EIP addresses must be in the same subnet as the cluster nodes and must not be bound to any NIC;
[root@master ~]# cat ip_pool.yml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
spec:
  address: 172.16.90.231-172.16.90.238
  protocol: layer2
  disable: false
  interface: eth0

# kubectl apply -f ip_pool.yml
[root@master ~]# kubectl get eip
NAME CIDR USAGE TOTAL
eip-pool 172.16.90.231-172.16.90.238 0 8
2.3 Configuring a Service as LoadBalancer
Change the Service type to LoadBalancer and add the following three annotations (the eip annotation names the Eip object created above):
lb.kubesphere.io/v1alpha1: openelb
protocol.openelb.kubesphere.io/v1alpha1: layer2
eip.openelb.kubesphere.io/v1alpha2: eip-pool
The full manifest:
[root@master test]# cat svc-lb.yml
apiVersion: v1
kind: Service
metadata:
  name: svc-lb
  namespace: dev
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80

[root@master test]# kubectl apply -f svc-lb.yml
[root@master test]# kubectl get svc svc-lb -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-lb LoadBalancer 10.103.128.30 172.16.90.231 80:32532/TCP 22m
Test the load balancing
[root@node2 ~]# for i in {1..9}
> do
> curl 172.16.90.231
> done
web test page, ip is 10.224.166.181 .
web test page, ip is 10.224.166.179 .
web test page, ip is 10.224.104.9 .
web test page, ip is 10.224.166.181 .
web test page, ip is 10.224.166.179 .
web test page, ip is 10.224.104.9 .
web test page, ip is 10.224.166.181 .
web test page, ip is 10.224.166.179 .
web test page, ip is 10.224.104.9 .
1.4.6 ExternalName Services
An ExternalName Service is used to bring a service outside the cluster into the cluster. Its externalName attribute specifies the address of an external service; accessing this service from inside the cluster then reaches the external one.
apiVersion: v1
kind: Service
metadata:
  name: service-externalname
  namespace: dev
spec:
  type: ExternalName # service type
  externalName: www.baidu.com # an IP address also works
# create the service
[root@master ~]# kubectl create -f service-externalname.yaml
service/service-externalname created
# DNS resolution
[root@master ~]# dig @10.96.0.10 service-externalname.dev.svc.cluster.local
service-externalname.dev.svc.cluster.local. 30 IN CNAME www.baidu.com.
www.baidu.com. 30 IN CNAME www.a.shifen.com.
www.a.shifen.com. 30 IN A 39.156.66.18
www.a.shifen.com. 30 IN A 39.156.66.14
2. Ingress
2.1 Introduction to Ingress
As mentioned earlier, a Service mainly exposes services outside the cluster in two ways, NodePort and LoadBalancer, and both have drawbacks:
NodePort occupies ports on many cluster machines, a drawback that becomes more and more apparent as the number of services grows
LoadBalancer needs one LB per service, which is wasteful and cumbersome, and requires support from devices outside Kubernetes
Given this, Kubernetes provides the Ingress resource object: a single NodePort or LB is enough to expose many Services. The mechanism works roughly as illustrated below:
In effect, Ingress is a layer-7 load balancer, Kubernetes' abstraction over reverse proxying. It works much like Nginx: you define many mapping rules in Ingress objects, and an Ingress Controller watches these rules, converts them into Nginx reverse-proxy configuration, and then serves external traffic. There are two core concepts here:
ingress: a Kubernetes object whose purpose is to define the rules for forwarding requests to services
ingress controller: the program that actually implements the reverse proxying and load balancing; it parses the rules defined by ingress objects and forwards requests accordingly. There are many implementations, e.g. Nginx, Contour, HAProxy
How Ingress works (taking Nginx as the example):
The user writes Ingress rules declaring which domain maps to which Service in the Kubernetes cluster
The Ingress controller detects changes to the Ingress rules and generates the corresponding Nginx reverse-proxy configuration
The Ingress controller writes the generated configuration into a running Nginx service and updates it dynamically
From that point on, what is really doing the work is an Nginx instance configured with the user-defined forwarding rules
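As a preview, a minimal Ingress rule looks like this (a sketch with hypothetical names; complete, working examples follow in 2.2):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # hypothetical name
  namespace: dev
spec:
  ingressClassName: nginx         # which ingress controller handles this rule
  rules:
  - host: www.example.com         # the domain ...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service # ... and the Service it forwards to
            port:
              number: 80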
2.2 Using Ingress
2.2.1 Environment preparation
https://github.com/kubernetes/ingress-nginx
Ingress-NGINX version | k8s supported version | Alpine Version | Nginx Version |
---|---|---|---|
v1.5.1 | 1.25, 1.24, 1.23 | 3.16.2 | 1.21.6 |
v1.4.0 | 1.25, 1.24, 1.23, 1.22 | 3.16.2 | 1.19.10† |
v1.3.1 | 1.24, 1.23, 1.22, 1.21, 1.20 | 3.16.2 | 1.19.10† |
v1.3.0 | 1.24, 1.23, 1.22, 1.21, 1.20 | 3.16.0 | 1.19.10† |
v1.2.1 | 1.23, 1.22, 1.21, 1.20, 1.19 | 3.14.6 | 1.19.10† |
v1.1.3 | 1.23, 1.22, 1.21, 1.20, 1.19 | 3.14.4 | 1.19.10† |
v1.1.2 | 1.23, 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.1.1 | 1.23, 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.1.0 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.0.5 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.0.4 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.0.3 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.0.2 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.0.1 | 1.22, 1.21, 1.20, 1.19 | 3.14.2 | 1.19.9† |
v1.0.0 | 1.22, 1.21, 1.20, 1.19 | 3.13.5 | 1.20.1 |
1. Download the ingress deployment yaml file
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml
2. Change the image addresses
Change the image addresses to the Aliyun mirrors, three places in total:
registry.k8s.io/ingress-nginx/controller:v1.4.0 changes to
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.4.0
k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 changes to (two places)
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
Optionally, change type: ClusterIP to type: NodePort
Load-balancer modification:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.4.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
  # add the annotations
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.4.0
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  # added and modified
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  #type: ClusterIP
  type: NodePort
3. Deploy ingress-nginx
[root@master ~]# kubectl apply -f deploy.yaml
[root@master ~]# kubectl get svc,pod -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.111.134.215 172.16.90.232 80:30724/TCP,443:30715/TCP 16h
service/ingress-nginx-controller-admission NodePort 10.97.49.157 <none> 80:30840/TCP,443:30985/TCP 16h
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-zprkl 0/1 Completed 0 16h
pod/ingress-nginx-admission-patch-dkj7q 0/1 Completed 0 16h
pod/ingress-nginx-controller-9c8d64df-wqrgf 1/1 Running 0 16h
Ways of exposing services with Ingress
Option 1: Deployment + LoadBalancer Service
Option 2: DaemonSet + HostNetwork + nodeSelector
Option 3: Deployment + NodePort Service
Option 1: Deployment + LoadBalancer Service
See the OpenELB configuration earlier.
Option 2: DaemonSet + HostNetwork + nodeSelector
Use a DaemonSet together with a nodeSelector to deploy the ingress-controller onto particular nodes, then use HostNetwork to connect the pod directly to the host node's network, so the service is reachable on the host's ports 80/443. The nodes running the ingress-controller then act much like the edge nodes of a traditional architecture, e.g. the nginx servers at a data-center entrance.
Advantages
The request path is the simplest of all the options, and performance is better than NodePort mode
Disadvantages
Because it uses the host node's network and ports directly, only one ingress-controller pod can run per node
1> Pin nginx-ingress-controller to the node2 node
kubectl label node node2 ingress=true
kubectl get nodes --show-labels
2> Change the Deployment to a DaemonSet, pin it to the labeled node, and enable hostNetwork
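A sketch of the relevant edits to deploy.yaml, under the label from step 1> (an excerpt, not the full manifest; the remaining controller fields stay as shipped):

apiVersion: apps/v1
kind: DaemonSet                            # was: Deployment (also drop replicas/strategy)
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  template:
    spec:
      hostNetwork: true                    # share the node's network namespace
      dnsPolicy: ClusterFirstWithHostNet   # keep cluster DNS usable with hostNetwork
      nodeSelector:
        ingress: "true"                    # only the node labeled in step 1>
        kubernetes.io/os: linux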
With the modifications done, deploy.
kubectl apply -f deploy.yaml
# kubectl get pod -n ingress-nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx-admission-create-q5nj2 0/1 Completed 0 5m54s 10.244.2.99 node1 <none> <none>
ingress-nginx-admission-patch-8lxsg 0/1 Completed 1 5m54s 10.244.2.100 node1 <none> <none>
ingress-nginx-controller-l2tk5 1/1 Running 0 22s 172.16.90.85 node2 <none> <none>
Check on the node2 node
[root@node2 ~]# netstat -lnupt | grep nginx
tcp 0 0 127.0.0.1:10247 0.0.0.0:* LISTEN 28531/nginx: master
tcp 0 0 127.0.0.1:10246 0.0.0.0:* LISTEN 28531/nginx: master
tcp 0 0 127.0.0.1:10245 0.0.0.0:* LISTEN 28510/nginx-ingress
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 28531/nginx: master
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 28531/nginx: master
tcp 0 0 0.0.0.0:8181 0.0.0.0:* LISTEN 28531/nginx: master
tcp6 0 0 :::443 :::* LISTEN 28531/nginx: master
tcp6 0 0 :::80 :::* LISTEN 28531/nginx: master
tcp6 0 0 :::10254 :::* LISTEN 28510/nginx-ingress
tcp6 0 0 :::8443 :::* LISTEN 28510/nginx-ingress
tcp6 0 0 :::8181 :::* LISTEN 28531/nginx: master
Because hostNetwork is configured, nginx is already listening on the node host's local ports 80/443/8181. Port 8181 is a default backend that nginx-controller configures out of the box (when a request matches no rule of any Ingress resource, traffic is directed to this default backend).
This way, as long as the node has a public IP, you can map domain names to it and expose services to the outside directly. For nginx high availability, deploy on multiple nodes and put an LVS + keepalived load-balancing layer in front.
3> Create a deployment and svc
# cat service-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  selector:
    matchLabels:
      app: nginx-pod
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: svc-lb
  namespace: dev
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: nginx-pod
  #sessionAffinity: ClientIP
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
========================================================
kubectl apply -f service-nginx.yaml
# kubectl get pod,svc -n dev
NAME READY STATUS RESTARTS AGE
pod/pc-deployment-66d5c85c96-75rzf 1/1 Running 1 (17h ago) 19h
pod/pc-deployment-66d5c85c96-ckp6x 1/1 Running 1 (17h ago) 19h
pod/pc-deployment-66d5c85c96-xhsq8 1/1 Running 1 (17h ago) 19h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/svc-headliness ClusterIP None <none> 80/TCP 19h
service/svc-lb LoadBalancer 10.103.128.30 172.16.90.231 80:32532/TCP 89m
4> Create the ingress
[root@master1 ~]# cat ingress-1.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: dev
spec:
  ingressClassName: nginx
  rules:
  - host: www.itopenlab.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-lb
            port:
              number: 80
==========================================================
[root@master test]# kubectl apply -f ingress-1.yml
Error from server (InternalError): error when creating "ingress-1.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": x509: certificate is valid for ingress.local, not ingress-nginx-controller-admission.ingress-nginx.svc
Solution:
[root@master test]# kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted
[root@master test]# kubectl apply -f ingress-1.yml
ingress.networking.k8s.io/nginx-ingress created
[root@master test]# kubectl get svc,ing,pod -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/svc-lb LoadBalancer 10.110.157.161 172.16.90.232 80:30883/TCP 80s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/nginx-ingress nginx www.itopenlab.com 80 17s
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-66d5c85c96-ph7nd 1/1 Running 0 80s
pod/nginx-deployment-66d5c85c96-szf8f 1/1 Running 0 80s
pod/nginx-deployment-66d5c85c96-wlq4v 1/1 Running 0 80s
Add a hosts entry on Linux and test:
[root@master test]# kubectl exec -it -n dev nginx-deployment-66d5c85c96-ph7nd -- /bin/bash
root@nginx-deployment-66d5c85c96-ph7nd:/# echo "web test page, ip is `hostname -I`." > /usr/share/nginx/html/index.html
root@nginx-deployment-66d5c85c96-ph7nd:/# exit
[root@master test]# kubectl exec -it -n dev nginx-deployment-66d5c85c96-szf8f -- /bin/bash
root@nginx-deployment-66d5c85c96-szf8f:/# echo "web test page, ip is `hostname -I`." > /usr/share/nginx/html/index.html
root@nginx-deployment-66d5c85c96-szf8f:/# exit
[root@master test]# kubectl exec -it -n dev nginx-deployment-66d5c85c96-wlq4v -- /bin/bash
root@nginx-deployment-66d5c85c96-wlq4v:/# echo "web test page, ip is `hostname -I`." > /usr/share/nginx/html/index.html
root@nginx-deployment-66d5c85c96-wlq4v:/# exit
[root@node2 ~]# tail -1 /etc/hosts
172.16.90.232 www.itopenlab.com
[root@node2 ~]# for i in {1..9}
> do
> curl http://www.itopenlab.com
> done
web test page, ip is 10.224.166.190 .
web test page, ip is 10.224.166.189 .
web test page, ip is 10.224.104.16 .
web test page, ip is 10.224.166.190 .
web test page, ip is 10.224.166.189 .
web test page, ip is 10.224.104.16 .
web test page, ip is 10.224.166.190 .
web test page, ip is 10.224.166.189 .
web test page, ip is 10.224.104.16 .
Prepare the services and pods
For convenience in the following experiments, create the model shown below
Create tomcat-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod
  template:
    metadata:
      labels:
        app: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.1
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment
  namespace: dev
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat-pod
  template:
    metadata:
      labels:
        app: tomcat-pod
    spec:
      containers:
      - name: tomcat
        image: tomcat:8.5-jre10-slim
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: dev
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: nginx-pod
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-service
  namespace: dev
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: tomcat-pod
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
# create
[root@master ~]# kubectl create -f tomcat-nginx.yaml
# check
[root@master test]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-service LoadBalancer 10.106.255.213 172.16.90.232 80:32738/TCP 51s
tomcat-service LoadBalancer 10.111.62.3 172.16.90.233 8080:32079/TCP 51s
2.2.2 HTTP proxying
Create ingress-http.yaml
# the ingress configuration changed in newer versions:
[root@master ~]# cat ingress-http.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-http
  namespace: dev
spec:
  rules:
  - host: nginx.itopenlab.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.itopenlab.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080
# create
[root@master ~]# kubectl create -f ingress-http.yaml
ingress.extensions/ingress-http created
# an error during creation, and the fix:
[root@master ~]# kubectl apply -f ingress-http.yml
Error from server (InternalError): error when creating "ingress-http.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": x509: certificate is valid for ingress.local, not ingress-nginx-controller-admission.ingress-nginx.svc
[root@master ~]# kubectl get validatingwebhookconfigurations
NAME WEBHOOKS AGE
ingress-nginx-admission 1 23m
[root@master ~]# kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
validatingwebhookconfiguration.admissionregistration.k8s.io "ingress-nginx-admission" deleted
[root@master test]# kubectl apply -f ingress-http.yml
ingress.networking.k8s.io/ingress-http created
# check
[root@master test]# kubectl get ing ingress-http -n dev
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-http <none> nginx.itopenlab.com,tomcat.itopenlab.com 80 17s
[root@master test]# kubectl describe ing ingress-http -n dev
Name: ingress-http
Labels: <none>
Namespace: dev
Address:
Ingress Class: <none>
Default backend: <default>
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  nginx.itopenlab.com   /     nginx-service:80 (10.224.104.17:80,10.224.104.19:80,10.224.166.191:80)
  tomcat.itopenlab.com  /     tomcat-service:8080 (10.224.104.18:8080,10.224.166.130:8080,10.224.166.131:8080)
Annotations: <none>
Events: <none>
# next, add entries to the hosts file on your local machine to resolve the two domains to the corresponding svc
172.16.90.232 nginx.itopenlab.com
172.16.90.233 tomcat.itopenlab.com
# then visit tomcat.itopenlab.com:8080 and nginx.itopenlab.com to see the results
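A quick command-line check, as a sketch (it assumes the hosts entries above are in place on the machine you run it from; no output is shown here):

# nginx goes through port 80; tomcat is reached on its service port 8080
[root@master test]# curl http://nginx.itopenlab.com
[root@master test]# curl http://tomcat.itopenlab.com:8080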
2.2.3 HTTPS proxying
Create a certificate
# generate a certificate
openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/C=CN/ST=BJ/L=BJ/O=nginx/CN=itopenlab.com"
# create the secret
kubectl create secret tls tls-secret --key tls.key --cert tls.crt
Create ingress-https.yaml
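A sketch of ingress-https.yaml, reconstructed to be consistent with the kubectl describe output below (the tls section references the tls-secret created above; the rules mirror ingress-http):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
  namespace: dev
spec:
  tls:
  - hosts:
    - nginx.itopenlab.com
    - tomcat.itopenlab.com
    secretName: tls-secret # the secret created above
  rules:
  - host: nginx.itopenlab.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
  - host: tomcat.itopenlab.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat-service
            port:
              number: 8080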
# create
[root@master ~]# kubectl create -f ingress-https.yaml
ingress.extensions/ingress-https created
# check
[root@master test]# kubectl get ing ingress-https -n dev
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-https <none> nginx.itopenlab.com,tomcat.itopenlab.com 80, 443 3m32s
# view the details
[root@master test]# kubectl describe ing ingress-https -n dev
Name: ingress-https
Labels: <none>
Namespace: dev
Address:
Ingress Class: <none>
Default backend: <default>
TLS:
  tls-secret terminates nginx.itopenlab.com,tomcat.itopenlab.com
Rules:
  Host                  Path  Backends
  ----                  ----  --------
  nginx.itopenlab.com   /     nginx-service:80 (10.224.104.17:80,10.224.104.19:80,10.224.166.191:80)
  tomcat.itopenlab.com  /     tomcat-service:8080 (10.224.104.18:8080,10.224.166.130:8080,10.224.166.131:8080)
Annotations: <none>
Events: <none>
# access works the same as before, just over https
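A quick verification sketch for the self-signed certificate (-k tells curl to skip certificate verification, which a self-signed cert requires; hosts entries as above):

[root@master test]# curl -k https://nginx.itopenlab.com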