Cilium Hands-on Lab, Journey to Mastery: 4. Cilium Gateway API - Lab

  • 1. Environment Setup
  • 2. API Gateway: HTTP
    • 2.1 Deploy the Application
    • 2.2 Deploy the Gateway
    • 2.3 HTTP Path Matching
    • 2.4 HTTP Header Matching
  • 3. API Gateway: HTTPS
    • 3.1 Create the TLS Certificate and Private Key
    • 3.2 Deploy the HTTPS Gateway
    • 3.3 Test HTTPS Requests
  • 4. API Gateway: TLS Routing
    • 4.1 Deploy the Application
    • 4.2 Deploy the Gateway
    • 4.3 Test TLS Requests
  • 5. API Gateway: Traffic Splitting
    • 5.1 Deploy the Application
    • 5.2 Load-Balancing Traffic
    • 5.3 Traffic Split: 50/50
    • 5.4 Traffic Split: 99/1
    • 5.5 Quick Quiz
  • 6. Quiz
    • 6.1 Questions
    • 6.2 Solutions

1. Environment Setup

Lab access:

https://isovalent.com/labs/gateway-api/

This environment has 1 control-plane node and 2 worker nodes.

cilium install --version v1.17.1 \
  --namespace kube-system \
  --set kubeProxyReplacement=true \
  --set gatewayAPI.enabled=true

Confirm the environment status:

root@server:~# kubectl get crd \
  gatewayclasses.gateway.networking.k8s.io \
  gateways.gateway.networking.k8s.io \
  httproutes.gateway.networking.k8s.io \
  referencegrants.gateway.networking.k8s.io \
  tlsroutes.gateway.networking.k8s.io
NAME                                        CREATED AT
gatewayclasses.gateway.networking.k8s.io    2025-05-27T23:51:41Z
gateways.gateway.networking.k8s.io          2025-05-27T23:51:41Z
httproutes.gateway.networking.k8s.io        2025-05-27T23:51:41Z
referencegrants.gateway.networking.k8s.io   2025-05-27T23:51:42Z
tlsroutes.gateway.networking.k8s.io         2025-05-27T23:51:42Z
root@server:~# cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 3, Ready: 3/3, Available: 3/3
DaemonSet              cilium-envoy             Desired: 3, Ready: 3/3, Available: 3/3
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 3
                       cilium-envoy             Running: 3
                       cilium-operator          Running: 1
                       clustermesh-apiserver
                       hubble-relay
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.17.1
Image versions         cilium             quay.io/cilium/cilium:v1.17.1@sha256:8969bfd9c87cbea91e40665f8ebe327268c99d844ca26d7d12165de07f702866: 3
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.31.5-1739264036-958bef243c6c66fcfd73ca319f2eb49fff1eb2ae@sha256:fc708bd36973d306412b2e50c924cd8333de67e0167802c9b48506f9d772f521: 3
                       cilium-operator    quay.io/cilium/operator-generic:v1.17.1@sha256:628becaeb3e4742a1c36c4897721092375891b58bae2bfcae48bbf4420aaee97: 1
root@server:~# k get nodes
NAME                 STATUS   ROLES           AGE    VERSION
kind-control-plane   Ready    control-plane   3h2m   v1.31.0
kind-worker          Ready    <none>          3h1m   v1.31.0
kind-worker2         Ready    <none>          3h1m   v1.31.0
root@server:~# cilium config view | grep -w "enable-gateway-api"
enable-gateway-api                                true
enable-gateway-api-alpn                           false
enable-gateway-api-app-protocol                   false
enable-gateway-api-proxy-protocol                 false
enable-gateway-api-secrets-sync                   true

Verify that the GatewayClass has been deployed and accepted:

root@server:~# kubectl get GatewayClass
NAME     CONTROLLER                     ACCEPTED   AGE
cilium   io.cilium/gateway-controller   True       4m59s

A GatewayClass is a type of Gateway that can be deployed: in other words, it is a template. This lets infrastructure providers offer different types of gateways, and users can then pick the Gateway they prefer.
For example, an infrastructure provider could create two GatewayClasses named internet and private, reflecting Gateways for Internet-facing versus private, internal applications.
In our case, the Cilium Gateway API controller (io.cilium/gateway-controller) will be instantiated.
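For reference, the GatewayClass that Cilium registers (visible in the kubectl output above) corresponds to a manifest like this minimal sketch; the cluster already contains the object, so this is shown only for illustration:

```yaml
# Minimal sketch of the GatewayClass provided by Cilium.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: cilium
spec:
  controllerName: io.cilium/gateway-controller
```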
The diagram below shows the various components used by the Gateway API. With Ingress, all functionality is defined in a single API. By decomposing ingress routing requirements into multiple APIs, users benefit from a more generic, flexible, and role-oriented model.

[Figure: Gateway API component model]

The actual L7 traffic rules are defined in the HTTPRoute API.

2. API Gateway: HTTP

2.1 Deploy the Application

This project is a familiar face: Istio's Bookinfo.

  • 🔍 details
  • ratings
  • reviews
  • 📕 productpage

We will use some of these services as the basis for the Gateway API.

Contents of the project manifest:

root@server:~# yq /opt/bookinfo.yml
# Copyright Istio Authors
#
#   Licensed under the Apache License, Version 2.0 (the "License");
#   you may not use this file except in compliance with the License.
#   You may obtain a copy of the License at
#
#       http://www.apache.org/licenses/LICENSE-2.0
#
#   Unless required by applicable law or agreed to in writing, software
#   distributed under the License is distributed on an "AS IS" BASIS,
#   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#   See the specific language governing permissions and
#   limitations under the License.

##################################################################################################
# This file defines the services, service accounts, and deployments for the Bookinfo sample.
#
# To apply all 4 Bookinfo services, their corresponding service accounts, and deployments:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
#
# Alternatively, you can deploy any resource separately:
#
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l service=reviews # reviews Service
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l account=reviews # reviews ServiceAccount
#   kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml -l app=reviews,version=v3 # reviews-v3 Deployment
##################################################################################################

##################################################################################################
# Details service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: details
  labels:
    app: details
    service: details
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: details
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-details
  labels:
    account: details
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: details-v1
  labels:
    app: details
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details
        version: v1
    spec:
      serviceAccountName: bookinfo-details
      containers:
      - name: details
        image: docker.io/istio/examples-bookinfo-details-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Ratings service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: ratings
  labels:
    app: ratings
    service: ratings
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: ratings
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-ratings
  labels:
    account: ratings
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ratings-v1
  labels:
    app: ratings
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratings
      version: v1
  template:
    metadata:
      labels:
        app: ratings
        version: v1
    spec:
      serviceAccountName: bookinfo-ratings
      containers:
      - name: ratings
        image: docker.io/istio/examples-bookinfo-ratings-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        securityContext:
          runAsUser: 1000
---
##################################################################################################
# Reviews service
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: reviews
  labels:
    app: reviews
    service: reviews
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: reviews
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-reviews
  labels:
    account: reviews
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v1
  labels:
    app: reviews
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v1
  template:
    metadata:
      labels:
        app: reviews
        version: v1
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v1:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v2
  labels:
    app: reviews
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v2
  template:
    metadata:
      labels:
        app: reviews
        version: v2
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v2:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: reviews-v3
  labels:
    app: reviews
    version: v3
spec:
  replicas: 1
  selector:
    matchLabels:
      app: reviews
      version: v3
  template:
    metadata:
      labels:
        app: reviews
        version: v3
    spec:
      serviceAccountName: bookinfo-reviews
      containers:
      - name: reviews
        image: docker.io/istio/examples-bookinfo-reviews-v3:1.16.2
        imagePullPolicy: IfNotPresent
        env:
        - name: LOG_DIR
          value: "/tmp/logs"
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: wlp-output
          mountPath: /opt/ibm/wlp/output
        securityContext:
          runAsUser: 1000
      volumes:
      - name: wlp-output
        emptyDir: {}
      - name: tmp
        emptyDir: {}
---
##################################################################################################
# Productpage services
##################################################################################################
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
    service: productpage
spec:
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bookinfo-productpage
  labels:
    account: productpage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: productpage-v1
  labels:
    app: productpage
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: productpage
      version: v1
  template:
    metadata:
      labels:
        app: productpage
        version: v1
    spec:
      serviceAccountName: bookinfo-productpage
      containers:
      - name: productpage
        image: docker.io/istio/examples-bookinfo-productpage-v1:1.16.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        securityContext:
          runAsUser: 1000
      volumes:
      - name: tmp
        emptyDir: {}
---

Deploy the application:

kubectl apply -f /opt/bookinfo.yml

Check that the application has been deployed correctly:

root@server:~# kubectl apply -f /opt/bookinfo.yml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created
root@server:~# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-67894999b5-hswsw       1/1     Running   0          51s
productpage-v1-7bd5bd857c-shr9z   1/1     Running   0          51s
ratings-v1-676ff5568f-w467l       1/1     Running   0          51s
reviews-v1-f5b4b64f-sjk2s         1/1     Running   0          51s
reviews-v2-74b7dd9f45-rk2n6       1/1     Running   0          51s
reviews-v3-65d744df5c-zqljm       1/1     Running   0          51s
root@server:~# kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.96.188.110   <none>        9080/TCP   93s
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    3h10m
productpage   ClusterIP   10.96.173.43    <none>        9080/TCP   93s
ratings       ClusterIP   10.96.118.245   <none>        9080/TCP   93s
reviews       ClusterIP   10.96.33.54     <none>        9080/TCP   93s

Note that with Cilium Service Mesh, no Envoy sidecar is created alongside each demo-application microservice. With a sidecar implementation, the output would show 2/2 READY: one for the microservice and one for the Envoy sidecar.

2.2 Deploy the Gateway

The configuration file:

root@server:~# yq basic-http.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-app-1
spec:
  parentRefs:
  - name: my-gateway
    namespace: default
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
  - matches:
    - headers:
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080

Deploy the Gateway:

root@server:~# kubectl apply -f basic-http.yaml
gateway.gateway.networking.k8s.io/my-gateway created
httproute.gateway.networking.k8s.io/http-app-1 created

The configuration used by the Gateway:

spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: Same

First, note that the gatewayClassName field in the Gateway section uses the value cilium. This refers to the Cilium GatewayClass configured earlier.
The Gateway will listen on port 80 for HTTP traffic coming southbound into the cluster. allowedRoutes specifies the namespaces from which Routes may attach to this Gateway. Same means this Gateway can only use Routes from the same namespace.
Note that if we used All instead of Same, this Gateway could be associated with Routes in any namespace, letting a single Gateway serve multiple namespaces that may be managed by different teams.
We can then specify different namespaces in the HTTPRoutes.
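As a sketch of that cross-namespace case (the names shared-gateway, team-a, and team-a-svc are hypothetical, not part of this lab):

```yaml
# Hypothetical: a Gateway accepting routes from any namespace,
# with an HTTPRoute attaching to it from another namespace.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: shared-gateway
  namespace: default
spec:
  gatewayClassName: cilium
  listeners:
  - protocol: HTTP
    port: 80
    name: web-gw
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: team-a-route
  namespace: team-a
spec:
  parentRefs:
  - name: shared-gateway
    namespace: default   # cross-namespace parent reference
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /team-a
    backendRefs:
    - name: team-a-svc
      port: 8080
```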
Now, let's review the HTTPRoute manifest. HTTPRoute is a Gateway API type that specifies the routing behavior of HTTP requests from the Gateway listener to Kubernetes Services.
It is made up of Rules that direct traffic according to your requirements.
The first rule is essentially a simple L7 proxy route: HTTP traffic whose path starts with /details is forwarded to the details Service over port 9080.

  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080

The second rule is similar, but it uses different match criteria. If an HTTP request has:

  • an HTTP header named magic with the value foo
  • the HTTP method GET
  • an HTTP query parameter named great with the value example

then the traffic is sent to the productpage Service over port 9080.

  rules:
  - matches:
    - headers:
      - type: Exact
        name: magic
        value: foo
      queryParams:
      - type: Exact
        name: great
        value: example
      path:
        type: PathPrefix
        value: /
      method: GET
    backendRefs:
    - name: productpage
      port: 9080

As you can see, you can deploy complex L7 traffic rules consistently (with the Ingress API, annotations are typically needed to achieve such routing goals, which creates inconsistencies from one Ingress controller to another).
One benefit of these new APIs is that the Gateway API is essentially split into separate functions: one to describe the Gateway, another to describe the routes to backend services. By splitting the two, operators can change and swap gateways while keeping the same routing configuration.
In other words: if you decide to switch to another Gateway API controller, you will be able to reuse the same manifests.
Now, let's look at the Services again, since the Gateway has been deployed:

root@server:~# kubectl get svc
NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
cilium-gateway-my-gateway   LoadBalancer   10.96.212.15    172.18.255.200   80:30157/TCP   3m2s
details                     ClusterIP      10.96.188.110   <none>           9080/TCP       7m4s
kubernetes                  ClusterIP      10.96.0.1       <none>           443/TCP        3h15m
productpage                 ClusterIP      10.96.173.43    <none>           9080/TCP       7m4s
ratings                     ClusterIP      10.96.118.245   <none>           9080/TCP       7m4s
reviews                     ClusterIP      10.96.33.54     <none>           9080/TCP       7m4s

You will see a LoadBalancer Service named cilium-gateway-my-gateway, which was created for the Gateway API.

The same external IP address is also associated with the Gateway:

root@server:~# kubectl get gateway
NAME         CLASS    ADDRESS          PROGRAMMED   AGE
my-gateway   cilium   172.18.255.200   True         3m22s

Let's retrieve this IP address:

GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY

2.3 HTTP Path Matching

Now, let's check that traffic is proxied by the Gateway API based on the URL path.
Check that you can make HTTP requests to the external address:

root@server:~# curl --fail -s http://$GATEWAY/details/1 | jq
{
  "id": 1,
  "author": "William Shakespeare",
  "year": 1595,
  "type": "paperback",
  "pages": 200,
  "publisher": "PublisherA",
  "language": "English",
  "ISBN-10": "1234567890",
  "ISBN-13": "123-1234567890"
}

Since the path starts with /details, this traffic matches the first rule and is proxied to the details Service over port 9080.

2.4 HTTP Header Matching

This time, we will route traffic based on HTTP parameters such as header values, the method, and query parameters. Run the following command:

root@server:~# curl -v -H 'magic: foo' "http://$GATEWAY?great=example"
*   Trying 172.18.255.200:80...
* Connected to 172.18.255.200 (172.18.255.200) port 80
> GET /?great=example HTTP/1.1
> Host: 172.18.255.200
> User-Agent: curl/8.5.0
> Accept: */*
> magic: foo
> 
< HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
< content-length: 1683
< server: envoy
< date: Wed, 28 May 2025 00:11:15 GMT
< x-envoy-upstream-service-time: 9
< 
<!DOCTYPE html>
<html><head><title>Simple Bookstore App</title>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1"><!-- Latest compiled and minified CSS -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap.min.css"><!-- Optional theme -->
<link rel="stylesheet" href="static/bootstrap/css/bootstrap-theme.min.css"></head><body><p><h3>Hello! This is a simple bookstore application consisting of three services as shown below</h3>
</p><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><td>http://details:9080</td></tr><tr><th>endpoint</th><td>details</td></tr><tr><th>children</th><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://details:9080</td><td>details</td><td></td></tr><tr><td>http://reviews:9080</td><td>reviews</td><td><table class="table table-condensed table-bordered table-hover"><tr><th>name</th><th>endpoint</th><th>children</th></tr><tr><td>http://ratings:9080</td><td>ratings</td><td></td></tr></table></td></tr></table></td></tr></table><p><h4>Click on one of the links below to auto generate a request to the backend as a real user or a tester</h4>
</p>
<p><a href="/productpage?u=normal">Normal user</a></p>
<p><a href="/productpage?u=test">Test user</a></p><!-- Latest compiled and minified JavaScript -->
<script src="static/jquery.min.js"></script><!-- Latest compiled and minified JavaScript -->
<script src="static/bootstrap/js/bootstrap.min.js"></script></body>
</html>
* Connection #0 to host 172.18.255.200 left intact

The curl query should succeed, returning a 200 code and a detailed HTML reply (note the Hello! This is a simple bookstore application consisting of three services as shown below).

3. API Gateway: HTTPS

3.1 Create the TLS Certificate and Private Key

In this task, we will use the Gateway API for HTTPS traffic routing, so we need a TLS certificate for data encryption.
For demonstration purposes, we will use a TLS certificate signed by a fictional, self-signed certificate authority (CA). A simple way to do this is with mkcert.
Create a certificate that will validate bookinfo.cilium.rocks and hipstershop.cilium.rocks, since these are the hostnames used in this Gateway example:

root@server:~# mkcert '*.cilium.rocks'
Created a new local CA 💥
Note: the local CA is not installed in the system trust store.
Run "mkcert -install" for certificates to be trusted automatically ⚠️

Created a new certificate valid for the following names 📜
 - "*.cilium.rocks"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.cilium.rocks ℹ️

The certificate is at "./_wildcard.cilium.rocks.pem" and the key at "./_wildcard.cilium.rocks-key.pem" ✅

It will expire on 28 August 2027 🗓

mkcert created a key (_wildcard.cilium.rocks-key.pem) and a certificate (_wildcard.cilium.rocks.pem) that we will use for the Gateway service.
Create a Kubernetes TLS secret from this key and certificate:

root@server:~# kubectl create secret tls demo-cert \
  --key=_wildcard.cilium.rocks-key.pem \
  --cert=_wildcard.cilium.rocks.pem
secret/demo-cert created
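If mkcert is not available, a roughly equivalent self-signed wildcard certificate can be generated with openssl. This is a sketch, not part of the lab; the file names simply mirror the mkcert output above, and the SAN must cover the hostnames used in this example:

```shell
# Generate a self-signed wildcard cert and key for *.cilium.rocks.
# Requires OpenSSL 1.1.1+ for the -addext flag.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout _wildcard.cilium.rocks-key.pem \
  -out _wildcard.cilium.rocks.pem \
  -subj "/CN=*.cilium.rocks" \
  -addext "subjectAltName=DNS:*.cilium.rocks"
```

The resulting files can be loaded into the same demo-cert secret, though curl will then need -k (or the cert added to the trust store), since no local CA is installed as mkcert does.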

3.2 Deploy the HTTPS Gateway

Review the HTTPS Gateway API example provided in the current directory:

root@server:~# yq basic-https.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: tls-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - name: https-1
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.cilium.rocks"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
  - name: https-2
    protocol: HTTPS
    port: 443
    hostname: "hipstershop.cilium.rocks"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-1
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "bookinfo.cilium.rocks"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /details
    backendRefs:
    - name: details
      port: 9080
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: https-app-route-2
spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "hipstershop.cilium.rocks"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: productpage
      port: 9080

It is almost identical to what we reviewed earlier. Just note the following in the Gateway manifest:

spec:
  gatewayClassName: cilium
  listeners:
  - name: https-1
    protocol: HTTPS
    port: 443
    hostname: "bookinfo.cilium.rocks"
    tls:
      certificateRefs:
      - kind: Secret
        name: demo-cert

and the following in the HTTPRoute manifest:

spec:
  parentRefs:
  - name: tls-gateway
  hostnames:
  - "bookinfo.cilium.rocks"

The HTTPS Gateway API example builds on what was done in the HTTP example and adds TLS termination for two HTTP routes:

  • the /details prefix is routed to the details HTTP service deployed in the HTTP challenge
  • the / prefix is routed to the productpage HTTP service deployed in the HTTP challenge

These services will be secured with TLS and accessible via two domain names:

  • bookinfo.cilium.rocks
  • hipstershop.cilium.rocks

In our example, the Gateway serves the TLS certificate defined in the demo-cert Secret resource for all requests to bookinfo.cilium.rocks and hipstershop.cilium.rocks.
Now, let's deploy the Gateway to the cluster:

root@server:~# kubectl apply -f basic-https.yaml
gateway.gateway.networking.k8s.io/tls-gateway created
httproute.gateway.networking.k8s.io/https-app-route-1 created
httproute.gateway.networking.k8s.io/https-app-route-2 created

This creates a LoadBalancer Service which, after about 30 seconds, should be populated with an external IP address.
Verify that the Gateway has been assigned a load balancer IP address:

root@server:~# kubectl get gateway tls-gateway
NAME          CLASS    ADDRESS          PROGRAMMED   AGE
tls-gateway   cilium   172.18.255.201   True         49s
root@server:~# GATEWAY=$(kubectl get gateway tls-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
172.18.255.201

3.3 Test HTTPS Requests

Install the mkcert CA into your system so that cURL can trust it:

root@server:~# mkcert -install
The local CA is now installed in the system trust store! ⚡️

Now let's make a request to the Gateway:

root@server:~# curl -s \
  --resolve bookinfo.cilium.rocks:443:${GATEWAY} \
  https://bookinfo.cilium.rocks/details/1 | jq
{
  "id": 1,
  "author": "William Shakespeare",
  "year": 1595,
  "type": "paperback",
  "pages": 200,
  "publisher": "PublisherA",
  "language": "English",
  "ISBN-10": "1234567890",
  "ISBN-13": "123-1234567890"
}

The data should be retrieved correctly over HTTPS (hence, the TLS handshake completed successfully).

4. API Gateway: TLS Routing

4.1 Deploy the Application

We will use an NGINX web server. Review the NGINX configuration:

root@server:~# cat nginx.conf
events {
}

http {
  log_format main '$remote_addr - $remote_user [$time_local]  $status '
    '"$request" $body_bytes_sent "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for"';

  access_log /var/log/nginx/access.log main;
  error_log  /var/log/nginx/error.log;

  server {
    listen 443 ssl;
    root /usr/share/nginx/html;
    index index.html;
    server_name nginx.cilium.rocks;
    ssl_certificate /etc/nginx-server-certs/tls.crt;
    ssl_certificate_key /etc/nginx-server-certs/tls.key;
  }
}

As you can see, it listens for SSL traffic on port 443, and it references the certificate and key created earlier.
When deploying the server, we need to mount those files at the correct path (/etc/nginx-server-certs).
The NGINX server configuration is kept in a Kubernetes ConfigMap. Let's create it:

root@server:~# kubectl create configmap nginx-configmap --from-file=nginx.conf=./nginx.conf
configmap/nginx-configmap created

Review the NGINX server Deployment and the Service in front of it:

root@server:~# yq tls-service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 443
    protocol: TCP
  selector:
    run: my-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 443
        volumeMounts:
        - name: nginx-index-file
          mountPath: /usr/share/nginx/html/
        - name: nginx-config
          mountPath: /etc/nginx
          readOnly: true
        - name: nginx-server-certs
          mountPath: /etc/nginx-server-certs
          readOnly: true
      volumes:
      - name: nginx-index-file
        configMap:
          name: index-html-configmap
      - name: nginx-config
        configMap:
          name: nginx-configmap
      - name: nginx-server-certs
        secret:
          secretName: demo-cert

As you can see, we are deploying a container with the nginx image, mounting several files such as the HTML index, the NGINX configuration, and the certificates. Note that we are reusing the demo-cert TLS secret created earlier.

root@server:~# kubectl apply -f tls-service.yaml
service/my-nginx created
deployment.apps/my-nginx created

Verify that the Service and Deployment have been deployed successfully:

root@server:~# kubectl get svc,deployment my-nginx
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/my-nginx   ClusterIP   10.96.76.254   <none>        443/TCP   27s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           27s

4.2 Deploy the Gateway

Review the Gateway API configuration files provided in the current directory:

root@server:~# yq tls-gateway.yaml \
  tls-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: cilium-tls-gateway
spec:
  gatewayClassName: cilium
  listeners:
  - name: https
    hostname: "nginx.cilium.rocks"
    port: 443
    protocol: TLS
    tls:
      mode: Passthrough
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: nginx
spec:
  parentRefs:
  - name: cilium-tls-gateway
  hostnames:
  - "nginx.cilium.rocks"
  rules:
  - backendRefs:
    - name: my-nginx
      port: 443

They are almost identical to those we reviewed in the previous tasks. Just note the Passthrough mode set in the Gateway manifest:

spec:
  gatewayClassName: cilium
  listeners:
  - name: https
    hostname: "nginx.cilium.rocks"
    port: 443
    protocol: TLS
    tls:
      mode: Passthrough
    allowedRoutes:
      namespaces:
        from: All

Previously, we used HTTPRoute resources. This time, we are using a TLSRoute:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TLSRoute
metadata:
  name: nginx
spec:
  parentRefs:
  - name: cilium-tls-gateway
  hostnames:
  - "nginx.cilium.rocks"
  rules:
  - backendRefs:
    - name: my-nginx
      port: 443

You saw earlier how to terminate TLS connections at the Gateway. That was the Gateway API in Terminate mode. Here, the Gateway operates in Passthrough mode: the difference is that traffic stays encrypted all the way between the client and the Pod.

In Terminate mode:

  • Client -> Gateway: HTTPS
  • Gateway -> Pod: HTTP

In Passthrough mode:

  • Client -> Gateway: HTTPS
  • Gateway -> Pod: HTTPS

Apart from using the SNI header for routing, the Gateway does not actually inspect the traffic. In effect, the hostnames field defines a set of SNI names that should match the SNI attribute of the TLS ClientHello message during the TLS handshake.

Now, let's deploy the Gateway and the TLSRoute to the cluster:

root@server:~# kubectl apply -f tls-gateway.yaml -f tls-route.yaml
gateway.gateway.networking.k8s.io/cilium-tls-gateway created
tlsroute.gateway.networking.k8s.io/nginx created

Verify that the Gateway has been assigned a LoadBalancer IP address:

root@server:~# kubectl get gateway cilium-tls-gateway
NAME                 CLASS    ADDRESS          PROGRAMMED   AGE
cilium-tls-gateway   cilium   172.18.255.202   True         25s
root@server:~# GATEWAY=$(kubectl get gateway cilium-tls-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
172.18.255.202

Let's also double-check that the TLSRoute has been successfully provisioned and attached to the Gateway:

root@server:~# kubectl get tlsroutes.gateway.networking.k8s.io -o json | jq '.items[0].status.parents[0]'
{
  "conditions": [
    {
      "lastTransitionTime": "2025-05-28T00:30:09Z",
      "message": "Accepted TLSRoute",
      "observedGeneration": 1,
      "reason": "Accepted",
      "status": "True",
      "type": "Accepted"
    },
    {
      "lastTransitionTime": "2025-05-28T00:30:09Z",
      "message": "Service reference is valid",
      "observedGeneration": 1,
      "reason": "ResolvedRefs",
      "status": "True",
      "type": "ResolvedRefs"
    }
  ],
  "controllerName": "io.cilium/gateway-controller",
  "parentRef": {
    "group": "gateway.networking.k8s.io",
    "kind": "Gateway",
    "name": "cilium-tls-gateway"
  }
}

4.3 Test TLS Requests

Now, let's make an HTTPS request to the Gateway:

root@server:~# curl -v \
  --resolve "nginx.cilium.rocks:443:$GATEWAY" \
  "https://nginx.cilium.rocks:443"
* Added nginx.cilium.rocks:443:172.18.255.202 to DNS cache
* Hostname nginx.cilium.rocks was found in DNS cache
*   Trying 172.18.255.202:443...
* Connected to nginx.cilium.rocks (172.18.255.202) port 443
* ALPN: curl offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
*  CAfile: /etc/ssl/certs/ca-certificates.crt
*  CApath: /etc/ssl/certs
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS
* ALPN: server accepted http/1.1
* Server certificate:
*  subject: O=mkcert development certificate; OU=root@server
*  start date: May 28 00:13:47 2025 GMT
*  expire date: Aug 28 00:13:47 2027 GMT
*  subjectAltName: host "nginx.cilium.rocks" matched cert's "*.cilium.rocks"
*  issuer: O=mkcert development CA; OU=root@server; CN=mkcert root@server
*  SSL certificate verify ok.
*   Certificate level 0: Public key type RSA (2048/112 Bits/secBits), signed using sha256WithRSAEncryption
*   Certificate level 1: Public key type RSA (3072/128 Bits/secBits), signed using sha256WithRSAEncryption
* using HTTP/1.x
> GET / HTTP/1.1
> Host: nginx.cilium.rocks
> User-Agent: curl/8.5.0
> Accept: */*
> 
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
< HTTP/1.1 200 OK
< Server: nginx/1.27.5
< Date: Wed, 28 May 2025 00:31:30 GMT
< Content-Type: text/html
< Content-Length: 100
< Last-Modified: Wed, 28 May 2025 00:27:14 GMT
< Connection: keep-alive
< ETag: "68365862-64"
< Accept-Ranges: bytes
< 
<html>
<h1>Welcome to our webserver listening on port 443.</h1>
</br>
<h1>Cilium rocks.</h1>
</html
* Connection #0 to host nginx.cilium.rocks left intact

The data should be retrieved correctly over HTTPS (hence, the TLS handshake completed successfully).

A few things to note in the output:

  • It should succeed (you should see HTML output with Cilium rocks at the end).
  • The connection was made over port 443: you should see Connected to nginx.cilium.rocks (172.18.255.202) port 443.
  • You should see the TLS handshake and TLS version negotiation. The negotiation is expected to result in TLSv1.3 being used.
  • Expect successful certificate verification (note SSL certificate verify ok).

5. API Gateway: Traffic Splitting

5.1 Deploy the Application

First, let's deploy a sample echo application in the cluster. The application replies to the client with information, in the response body, about the Pod and node that received the original request. We will use this information to show how traffic is split across multiple Kubernetes Services.
Review the YAML file with the following command. You will see that we are deploying several Pods and Services. The Services are called echo-1 and echo-2, and traffic will be split between them.

root@server:~# yq echo-servers.yaml 
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-1
  name: echo-1
spec:
  ports:
  - port: 8080
    name: high
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-1
  name: echo-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-1
  template:
    metadata:
      labels:
        app: echo-1
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
        name: echo-1
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: echo-2
  name: echo-2
spec:
  ports:
  - port: 8090
    name: high
    protocol: TCP
    targetPort: 8080
  selector:
    app: echo-2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: echo-2
  name: echo-2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo-2
  template:
    metadata:
      labels:
        app: echo-2
    spec:
      containers:
      - image: gcr.io/kubernetes-e2e-test-images/echoserver:2.2
        name: echo-2
        ports:
        - containerPort: 8080
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

Deploy the application:

root@server:~# kubectl apply -f echo-servers.yaml
service/echo-1 created
deployment.apps/echo-1 created
service/echo-2 created
deployment.apps/echo-2 created

Check that the application has been deployed correctly:

root@server:~# kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-67894999b5-hswsw       1/1     Running   0          33m
echo-1-597b976bc7-5r4xb           1/1     Running   0          88s
echo-2-7ccd4fd567-2mgnn           1/1     Running   0          88s
my-nginx-7bd456664-s7mpc          1/1     Running   0          7m53s
productpage-v1-7bd5bd857c-shr9z   1/1     Running   0          33m
ratings-v1-676ff5568f-w467l       1/1     Running   0          33m
reviews-v1-f5b4b64f-sjk2s         1/1     Running   0          33m
reviews-v2-74b7dd9f45-rk2n6       1/1     Running   0          33m
reviews-v3-65d744df5c-zqljm       1/1     Running   0          33m

Take a quick look at the deployed Services:

root@server:~# kubectl get svc
NAME                                TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)         AGE
cilium-gateway-cilium-tls-gateway   LoadBalancer   10.96.57.24     172.18.255.202   443:30846/TCP   5m20s
cilium-gateway-my-gateway           LoadBalancer   10.96.212.15    172.18.255.200   80:30157/TCP    29m
cilium-gateway-tls-gateway          LoadBalancer   10.96.211.194   172.18.255.201   443:31647/TCP   18m
details                             ClusterIP      10.96.188.110   <none>           9080/TCP        33m
echo-1                              ClusterIP      10.96.235.22    <none>           8080/TCP        110s
echo-2                              ClusterIP      10.96.204.162   <none>           8090/TCP        110s
kubernetes                          ClusterIP      10.96.0.1       <none>           443/TCP         3h42m
my-nginx                            ClusterIP      10.96.76.254    <none>           443/TCP         8m15s
productpage                         ClusterIP      10.96.173.43    <none>           9080/TCP        33m
ratings                             ClusterIP      10.96.118.245   <none>           9080/TCP        33m
reviews                             ClusterIP      10.96.33.54     <none>           9080/TCP        33m

5.2 负载均衡流量

让我们回顾一下 HTTPRoute 清单。

root@server:~# yq load-balancing-http-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: load-balancing-route
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - kind: Service
          name: echo-1
          port: 8080
          weight: 50
        - kind: Service
          name: echo-2
          port: 8090
          weight: 50

让我们使用以下清单部署 HTTPRoute:

root@server:~# kubectl apply -f load-balancing-http-route.yaml
httproute.gateway.networking.k8s.io/load-balancing-route created

此规则本质上是一个简单的 L7 代理路由:对于路径以 /echo 开头的 HTTP 流量,分别通过端口 8080 和 8090 将流量转发到 echo-1 和 echo-2 服务。

    backendRefs:
      - kind: Service
        name: echo-1
        port: 8080
        weight: 50
      - kind: Service
        name: echo-2
        port: 8090
        weight: 50

5.3 流量拆分-- 50%比50%

让我们再次检索与网关关联的 IP 地址:

GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY

现在,我们来检查基于 URL 路径的流量是否由 Gateway API 代理。
检查是否可以向该外部地址发出 HTTP 请求:

root@server:~# GATEWAY=$(kubectl get gateway my-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY
172.18.255.200
root@server:~# curl --fail -s http://$GATEWAY/echo
Hostname: echo-2-7ccd4fd567-2mgnn

Pod Information:
        node name:      kind-worker
        pod name:       echo-2-7ccd4fd567-2mgnn
        pod namespace:  default
        pod IP: 10.244.1.161

Server values:
        server_version=nginx: 1.12.2 - lua: 10010

Request Information:
        client_address=10.244.2.110
        method=GET
        real path=/echo
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://172.18.255.200:8080/echo

Request Headers:
        accept=*/*
        host=172.18.255.200
        user-agent=curl/8.5.0
        x-envoy-internal=true
        x-forwarded-for=172.18.0.1
        x-forwarded-proto=http
        x-request-id=b17459aa-5d2c-4cb4-9d93-ebdcc123a286

Request Body:
        -no body in request-

在响应中可以看到接收该请求的 Pod 名称。

Hostname: echo-2-7ccd4fd567-2mgnn

请注意,您还可以在原始请求中看到标头。这在即将到来的任务中非常有用。

您应该会看到回复在两个 Pod/节点之间均匀平衡。

root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-1-597b976bc7-5r4xb
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-2-7ccd4fd567-2mgnn
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-1-597b976bc7-5r4xb
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-1-597b976bc7-5r4xb
root@server:~# curl --fail -s http://$GATEWAY/echo|grep Hostname
Hostname: echo-2-7ccd4fd567-2mgnn

让我们通过运行循环并计算请求数来仔细检查流量是否在多个 Pod 之间均匀分配:

for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses.txt
done

验证响应是否已(或多或少)均匀分布。

root@server:~# for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses.txt
done
root@server:~# grep -o "Hostname: echo-." curlresponses.txt | sort | uniq -c
    258 Hostname: echo-1
    242 Hostname: echo-2

可以看到,流量几乎是 1 比 1,这正符合我们的配置。我们再次回顾一下配置文件:

    backendRefs:
      - kind: Service
        name: echo-1
        port: 8080
        weight: 50
      - kind: Service
        name: echo-2
        port: 8090
        weight: 50

5.4 流量拆分-- 99%比1%

这一次,我们将权重改为 99 比 1,并重新应用配置。

root@server:~# yq load-balancing-http-route.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: load-balancing-route
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /echo
      backendRefs:
        - kind: Service
          name: echo-1
          port: 8080
          weight: 99
        - kind: Service
          name: echo-2
          port: 8090
          weight: 1
root@server:~# kubectl apply -f load-balancing-http-route.yaml
httproute.gateway.networking.k8s.io/load-balancing-route configured

让我们运行另一个循环,并使用以下命令再次计算回复:

for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses991.txt
done

验证响应是否分散,其中大约 99% 的响应分布到 echo-1,大约 1% 的响应分布到 echo-2

root@server:~# for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/echo" >> curlresponses991.txt
done
root@server:~# grep -o "Hostname: echo-." curlresponses991.txt | sort | uniq -c
    498 Hostname: echo-1
      2 Hostname: echo-2
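作为对照,可以按权重验算一下 500 次请求的期望分布(示意):

```shell
# 示意:按 99:1 的权重估算 500 次请求的期望命中次数
awk 'BEGIN { total=500; w1=99; w2=1; printf "echo-1 = %d, echo-2 = %d\n", total*w1/(w1+w2), total*w2/(w1+w2) }'
```

期望值为 echo-1 约 495 次、echo-2 约 5 次;实测的 498:2 与之相当接近。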

5.5 小测试

×	Ingress API is the long-term replacement for Gateway API
√	One of the benefits of Gateway APIs is that it is role-oriented.
×	The Gateway and HTTPRoute configuration is all defined in a single API resource.
√	Cilium Gateway API requires Kube-Proxy Replacement.
×	Cilium Gateway API does not support L7 HTTP Routing.

6. 测验

6.1 题目

为了结束本实验,我们以一个简单的测验收尾。我们将重用之前创建的服务(称为 echo-1 和 echo-2)。

要成功通过考试,我们需要:

  • 一个可通过 Gateway API 访问的服务,以及
  • 基于 PrefixPath /exam 到达服务的 HTTP 流量
  • 在 echo-1 和 echo-2 之间按 75:25 的比例分配流量:75% 的流量将到达 echo-1 服务,而其余 25% 的流量将到达 echo-2 服务。
  • 检查 /root/exam 文件夹中的 exam-gateway.yaml 和 exam-http-route.yaml 文件。您需要使用正确的值更新 XXXX 字段。
  • 服务监听不同的端口 - 你可以使用 kubectl get svc 检查它们监听的端口,或者查看用于部署这些服务的 echo-servers.yaml 清单。
  • 请记住,您需要将 HTTPRoute 引用到父 Gateway。
  • 确保应用清单。
  • 假设 $GATEWAY 是分配给网关的 IP 地址,`curl --fail -s http://$GATEWAY/exam | grep Hostname` 应返回如下输出:
Hostname: echo-X-aaaaaaa-bbbbb

它返回的主机名就是实际处理请求的 Pod。如果设置正确,echo-1 收到的请求数应约为 echo-2 的 3 倍。

  • 如前所述,Gateway API IP 地址也是自动创建的 LoadBalancer Service 的外部 IP。
  • 检查脚本将检查 curl 是否成功,以及分配给 echo-1 的权重是否正好为 75,而分配给 echo-2 的权重是否设置为 25。

6.2 解题

根据题意配置 exam-gateway.yaml 和 exam-http-route.yaml:

root@server:~# k get svc| grep echo-
echo-1                              ClusterIP      10.96.235.22    <none>           8080/TCP        18m
echo-2                              ClusterIP      10.96.204.162   <none>           8090/TCP        18m
root@server:~# yq exam/exam-gateway.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: exam-gateway
spec:
  gatewayClassName: cilium
  listeners:
    - protocol: HTTP
      port: 80
      name: web-gw-echo
      allowedRoutes:
        namespaces:
          from: Same
root@server:~# yq exam/exam-http-route.yaml 
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: exam-route-1
spec:
  parentRefs:
    - name: exam-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /exam
      backendRefs:
        - kind: Service
          name: echo-1
          port: 8080
          weight: 75
        - kind: Service
          name: echo-2
          port: 8090
          weight: 25

部署gateway和route

root@server:~# k apply -f exam/exam-gateway.yaml
gateway.gateway.networking.k8s.io/exam-gateway created
root@server:~# k apply -f exam/exam-http-route.yaml 
httproute.gateway.networking.k8s.io/exam-route-1 created

测试

获取gateway地址

GATEWAY=$(kubectl get gateway exam-gateway -o jsonpath='{.status.addresses[0].value}')
echo $GATEWAY

测试访问

curl --fail -s http://$GATEWAY/exam | grep Hostname

比例测试

for _ in {1..500}; do
  curl -s -k "http://$GATEWAY/exam" >> exam.txt
done
grep -o "Hostname: echo-." exam.txt | sort | uniq -c

实测结果也符合预期,约为 76% 比 24%。

新徽章GET!
