Installing and configuring ingress-nginx to support HTTPS access
Notes:
1. Kubernetes version: v1.23;
2. The internal test environment has 1 master and 2 worker nodes; ingress-nginx is deployed to node02 using the DaemonSet + HostNetwork + nodeSelector approach, with node02 labeled as the edge node;
3. HTTPS configuration is tested at the end.
Introduction to Ingress
Ingress is a resource object introduced in Kubernetes 1.1. It forwards requests for different URLs to different backend Services, providing HTTP-layer (layer 7) routing. Put simply, an Ingress is an HTTP-level rule for exposing services; you can also think of it as a "Service of Services". An Ingress rule is usually bound to a domain name. It consists of two parts:
- Ingress Controller: the entry gateway for Services. There are many implementations, the most common being Ingress-Nginx; it runs as a pod.
- Ingress policies (the ingress resource in k8s): a set of declarative rules written in yaml (viewable with kubectl get ingress -n <namespace>).
There are three common ways to deploy ingress-controller:
- Deployment + LoadBalancer mode Service
If the ingress runs on a public cloud, this approach is a good fit. Deploy ingress-controller with a Deployment and create a Service of type LoadBalancer targeting those pods. Most public clouds automatically provision a load balancer for a LoadBalancer Service, usually with a public address bound to it; point your DNS at that address and the cluster services are exposed externally. (The corresponding chart values are sketched right after this item.)
Drawback: it requires a paid cloud load-balancing service and does not apply to environments without a load balancer.
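For reference, this mode matches the ingress-nginx chart's defaults; a minimal sketch of the relevant values.yaml fields (values shown only for illustration):
controller:
  kind: Deployment          # run the controller as a Deployment
  service:
    type: LoadBalancer      # the cloud provider provisions an external load balancer for this Service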
- Deployment + NodePort mode Service
Likewise deploy ingress-controller with a Deployment, but create a Service of type NodePort. The ingress is then exposed on a specific port of each cluster node's IP. Because NodePort allocates ports from a high range (30000-32767 by default), a load balancer is usually placed in front to forward requests. This approach suits environments where the host IP addresses are relatively fixed. (A values sketch follows this item.)
Drawbacks:
- Exposing the ingress via NodePort is simple, but it adds an extra NAT hop, which can affect performance under heavy request volume.
- The request URL looks like https://www.xx.com:30076, where 30076 is the nodePort exposed by the Service shown in kubectl get svc -n ingress-nginx.
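With the same chart, this mode only differs in the Service type; a minimal values sketch (the fixed nodePorts below are an illustrative assumption, not required):
controller:
  kind: Deployment
  service:
    type: NodePort
    nodePorts:
      http: 30080     # example fixed port; must fall inside the cluster's NodePort range (default 30000-32767)
      https: 30443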
- DaemonSet + HostNetwork + nodeSelector (recommended)
Use a DaemonSet together with a nodeSelector to deploy ingress-controller onto specific nodes (edge nodes), and use HostNetwork so the pod shares the host node's network directly; services are then reachable on the host's ports 80/443. The node running ingress-controller plays the same role as an edge node in a traditional architecture, such as the nginx server at the entrance of a data center.
Advantage: this gives the shortest request path of the three approaches and better performance than the NodePort mode.
Limitation: because it uses the host node's network and ports directly, each node can run only one ingress-controller pod.
Since this is an internal test environment, the third approach (DaemonSet + HostNetwork + nodeSelector) is used for the deployment below.
The existing test environment has 1 master and 2 worker nodes. We pick node02 as the edge node and give it an edge-node label, so the ingress-controller pods will only run on node02. (In production you would typically use 2 nodes as edge nodes and run keepalived on them to avoid a single point of failure.)
# label node02 as the edge node
kubectl label nodes node02 edgenode=true
# list the labels on all nodes
kubectl get node --show-labels
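To confirm the label took effect, the node can also be filtered by that label (an optional quick check):
kubectl get nodes -l edgenode=true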
Add the helm repo and pull the chart:
# add the helm repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# update the repo
helm repo update
# pull the chart so its values.yaml can be edited
helm pull ingress-nginx/ingress-nginx
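helm pull downloads the chart as a .tgz archive in the current directory; the exact file name depends on the chart version, so a glob is used here as a sketch:
# unpack the chart and edit the values.yaml inside it
tar -zxvf ingress-nginx-*.tgz
cd ingress-nginx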
Edit values.yaml (the lines that need changing are marked with comments below):
commonLabels: {}
controller:
name: controller
image:
registry: k8s.gcr.io  # if k8s.gcr.io is unreachable, this can be switched to a mirror registry (e.g. an Aliyun mirror)
image: ingress-nginx/controller
tag: "v1.1.1"
digest: sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
pullPolicy: IfNotPresent
runAsUser: 101
allowPrivilegeEscalation: true
existingPsp: ""
containerName: controller
containerPort:
http: 80
https: 443
config: {}
configAnnotations: {}
proxySetHeaders: {}
addHeaders: {}
dnsConfig: {}
hostname: {}
dnsPolicy: ClusterFirst
reportNodeInternalIp: false
watchIngressWithoutClass: false
ingressClassByName: false
allowSnippetAnnotations: true
hostNetwork: true  # changed to true so the controller uses the host network
hostPort:
enabled: false
ports:
http: 80
https: 443
electionID: ingress-controller-leader
ingressClassResource:
name: nginx
enabled: true
default: false
controllerValue: "k8s.io/ingress-nginx"
parameters: {}
ingressClass: nginx
podLabels: {}
podSecurityContext: {}
sysctls: {}
publishService:
enabled: true
pathOverride: ""
scope:
enabled: false
namespace: ""
namespaceSelector: ""
configMapNamespace: ""
tcp:
configMapNamespace: ""
annotations: {}
udp:
configMapNamespace: ""
annotations: {}
maxmindLicenseKey: ""
extraArgs: {}
extraEnvs: []
kind: DaemonSet  # changed to DaemonSet so the controller runs as a DaemonSet on the selected nodes
annotations: {}
labels: {}
updateStrategy: {}
minReadySeconds: 0
tolerations: []
affinity: {}
topologySpreadConstraints: []
terminationGracePeriodSeconds: 300
nodeSelector:
kubernetes.io/os: linux
edgenode: 'true'  # the label just added to node02, so the controller runs only on node02
livenessProbe:
httpGet:
path: "/healthz"
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: "/healthz"
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
healthCheckPath: "/healthz"
healthCheckHost: ""
podAnnotations: {}
replicaCount: 1
minAvailable: 1
resources:
requests:
cpu: 100m
memory: 90Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 11
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
behavior: {}
autoscalingTemplate: []
keda:
apiVersion: "keda.sh/v1alpha1"
enabled: false
minReplicas: 1
maxReplicas: 11
pollingInterval: 30
cooldownPeriod: 300
restoreToOriginalReplicaCount: false
scaledObject:
annotations: {}
triggers: []
behavior: {}
enableMimalloc: true
customTemplate:
configMapName: ""
configMapKey: ""
service:
enabled: true
appProtocol: true
annotations: {}
labels: {}
externalIPs: []
loadBalancerSourceRanges: []
enableHttp: true
enableHttps: true
ipFamilyPolicy: "SingleStack"
ipFamilies:
- IPv4
ports:
http: 80
https: 443
targetPorts:
http: http
https: https
type: ClusterIP  # changed to ClusterIP (the chart default is LoadBalancer)
nodePorts:
http: ""
https: ""
tcp: {}
udp: {}
external:
enabled: true
internal:
enabled: false
annotations: {}
loadBalancerSourceRanges: []
extraContainers: []
extraVolumeMounts: []
extraVolumes: []
extraInitContainers: []
extraModules: []
admissionWebhooks:
annotations: {}
enabled: true
failurePolicy: Fail
port: 8443
certificate: "/usr/local/certificates/cert"
key: "/usr/local/certificates/key"
namespaceSelector: {}
objectSelector: {}
labels: {}
existingPsp: ""
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 443
type: ClusterIP
createSecretJob:
resources: {}
patchWebhookJob:
resources: {}
patch:
enabled: true
image:
registry: k8s.gcr.io
image: ingress-nginx/kube-webhook-certgen
tag: v1.1.1
digest: sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
pullPolicy: IfNotPresent
priorityClassName: ""
podAnnotations: {}
nodeSelector:
kubernetes.io/os: linux
tolerations: []
labels: {}
runAsUser: 2000
metrics:
port: 10254
enabled: false
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 10254
type: ClusterIP
serviceMonitor:
enabled: false
additionalLabels: {}
namespace: ""
namespaceSelector: {}
scrapeInterval: 30s
targetLabels: []
relabelings: []
metricRelabelings: []
prometheusRule:
enabled: false
additionalLabels: {}
rules: []
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
priorityClassName: ""
revisionHistoryLimit: 10
defaultBackend:
enabled: true  # changed to true: creates a default backend whose page is returned for requests that match no rule
name: defaultbackend
image:
registry: k8s.gcr.io
image: defaultbackend-amd64
tag: "1.5"
pullPolicy: IfNotPresent
runAsUser: 65534
runAsNonRoot: true
readOnlyRootFilesystem: true
allowPrivilegeEscalation: false
existingPsp: ""
extraArgs: {}
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
extraEnvs: []
port: 8080
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 0
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 5
tolerations: []
affinity: {}
podSecurityContext: {}
containerSecurityContext: {}
podLabels: {}
nodeSelector:
kubernetes.io/os: linux
podAnnotations: {}
replicaCount: 1
minAvailable: 1
resources: {}
extraVolumeMounts: []
extraVolumes: []
autoscaling:
annotations: {}
enabled: false
minReplicas: 1
maxReplicas: 2
targetCPUUtilizationPercentage: 50
targetMemoryUtilizationPercentage: 50
service:
annotations: {}
externalIPs: []
loadBalancerSourceRanges: []
servicePort: 80
type: ClusterIP
priorityClassName: ""
labels: {}
rbac:
create: true
scope: false
podSecurityPolicy:
enabled: false
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
annotations: {}
imagePullSecrets: []
tcp: {}
udp: {}
dhParam:
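As an alternative to maintaining a full values.yaml, the same overrides can be passed on the command line; a sketch of the equivalent flags (run it after creating the namespace in the next step):
helm install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx \
  --set controller.hostNetwork=true \
  --set controller.kind=DaemonSet \
  --set-string controller.nodeSelector.edgenode=true \
  --set controller.service.type=ClusterIP \
  --set defaultBackend.enabled=true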
Install with helm:
# create the ingress-nginx namespace
kubectl create ns ingress-nginx
# install with helm using the edited values.yaml
helm install ingress-nginx ingress-nginx/ingress-nginx -f values.yaml -n ingress-nginx
Check the created resources:
kubectl get all -n ingress-nginx
Two pods are running: one is the ingress-controller and the other is the default backend (defaultbackend).
NAME                                                READY   STATUS    RESTARTS   AGE
pod/ingress-nginx-controller-kqqgj                  1/1     Running   0          21m
pod/ingress-nginx-defaultbackend-7df596dbc9-9c6ws   1/1     Running   0          21m

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/ingress-nginx-controller             ClusterIP   10.106.80.36    <none>        80/TCP,443/TCP   21m
service/ingress-nginx-controller-admission   ClusterIP   10.111.63.107   <none>        443/TCP          21m
service/ingress-nginx-defaultbackend         ClusterIP   10.96.124.173   <none>        80/TCP           21m

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                          AGE
daemonset.apps/ingress-nginx-controller   1         1         1       1            1           edgenode=true,kubernetes.io/os=linux   21m

NAME                                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ingress-nginx-defaultbackend   1/1     1            1           21m

NAME                                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/ingress-nginx-defaultbackend-7df596dbc9   1         1         1       21m
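Because hostNetwork is enabled, the controller should now be listening directly on ports 80/443 of node02; a quick sanity check run on node02 itself (requests matching no ingress rule should get a 404 from the default backend):
# run on node02
ss -lntp | grep -E ':80 |:443 '
curl -I http://127.0.0.1/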
With the ingress-controller deployed, we now deploy a test backend from the nginx image, and then create an ingress resource to expose the test backend's Service through the ingress. Create a test-nginx.yaml defining an nginx pod and its corresponding Service; the Service exposes port 80 as a ClusterIP.
#test-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-test-service
  namespace: nginx-test
spec:
  selector:
    app: nginx-test
  ports:
  - name: http
    port: 80        # port exposed by the Service (referenced by the ingress below)
    targetPort: 80  # port of the backend pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test-deployment
  namespace: nginx-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: nginx:1.15-alpine
        imagePullPolicy: IfNotPresent
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh","-c","echo nginx-test.wdyxgames.com > /usr/share/nginx/html/index.html"]
        ports:
        - name: httpd
          containerPort: 80  # port the pod listens on (nginx default)
Create the test backend pod and Service:
# create the nginx-test namespace
kubectl create ns nginx-test
# create the pod and svc from the yaml
kubectl apply -f test-nginx.yaml
# check the created resources
kubectl get all -n nginx-test
#####
NAME                                       READY   STATUS    RESTARTS   AGE
pod/nginx-test-deployment-fdf785bb-k6xxl   1/1     Running   0          24s

NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/nginx-test-service   ClusterIP   10.97.180.229   <none>        80/TCP    24s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-test-deployment   1/1     1            1           24s

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-test-deployment-fdf785bb   1         1         1       24s
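Before wiring up the ingress, the Service can be checked from inside the cluster; the throwaway busybox pod below is only an illustrative sketch:
kubectl run curl-test -n nginx-test --rm -it --restart=Never --image=busybox:1.28 \
  -- wget -qO- http://nginx-test-service
# expected output: nginx-test.wdyxgames.com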
Now that both the test backend and the ingress-controller are in place, we create an ingress resource to expose the test backend's Service to the outside.
#test-nginx-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: example
spec:
  rules:                              # one ingress can hold multiple rules
  - host: nginx-test.wdyxgames.com    # domain to match (optional, defaults to *); this is the URL used in the browser
    http:
      paths:                          # like nginx locations; one host can have multiple paths, here we match everything
      - backend:
          service:
            name: nginx-test-service  # the svc to proxy to, matching the test backend svc created above
            port:
              number: 80              # the port exposed by that svc
        path: /
        pathType: Prefix
# create the ingress from the file
kubectl apply -f test-nginx-ingress.yaml -n nginx-test
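The ingress and its rules can be inspected, and the routing can also be tested from the command line without editing the hosts file; <node02-ip> below is a placeholder for the edge node's address:
kubectl get ingress -n nginx-test
kubectl describe ingress example -n nginx-test
# test via the edge node without DNS/hosts changes
curl -H "Host: nginx-test.wdyxgames.com" http://<node02-ip>/
# expected output: nginx-test.wdyxgames.com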
Bind hosts entries for testing: point two hostnames at the IP of the node02 edge node, one that is defined in the ingress (nginx-test.wdyxgames.com) and one that is not (nginx-test1.wdyxgames.com), then open them in a browser:
http://nginx-test.wdyxgames.com is served successfully, while http://nginx-test1.wdyxgames.com is not defined in the ingress and therefore gets the page returned by the defaultbackend.
Configuring ingress-nginx with a certificate to support https
- First import the certificate into a k8s secret (if you only need a throwaway certificate for testing, see the sketch right after this command):
kubectl create secret tls wdyxgames-tls --key _.wdyxgames.com.key --cert _.wdyxgames.com.crt -n nginx-test
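If no purchased certificate is at hand, a self-signed wildcard certificate can be generated for this test; a sketch whose file names simply mirror the command above, followed by a check that the secret exists:
# generate a self-signed wildcard certificate (testing only)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout _.wdyxgames.com.key -out _.wdyxgames.com.crt \
  -subj "/CN=*.wdyxgames.com"
# confirm the secret was created
kubectl get secret wdyxgames-tls -n nginx-test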
- Then create an ingress resource file that uses https:
#test-nginx-ingress-https.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: example
spec:
  rules:                              # one ingress can hold multiple rules
  - host: nginx-test.wdyxgames.com    # domain to match (optional, defaults to *); this is the URL used in the browser
    http:
      paths:                          # like nginx locations; one host can have multiple paths, here we match everything
      - backend:
          service:
            name: nginx-test-service  # the svc to proxy to, matching the test backend svc created above
            port:
              number: 80              # the port exposed by that svc
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - nginx-test.wdyxgames.com
    secretName: wdyxgames-tls
Compared with the file above, only the tls section has been added:
# apply the file
kubectl apply -f test-nginx-ingress-https.yaml -n nginx-test
#####
Error from server (BadRequest): error when creating "test-nginx-ingress-https.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "nginx-test.wdyxgames.com" and path "/" is already defined in ingress nginx-test/example
# the error occurs because the earlier http ingress already defines this host/path, so it cannot be created again
# delete the previous http ingress first, then re-run the apply above
kubectl delete -f test-nginx-ingress.yaml -n nginx-test
- Access the site from a browser again: https access now works (a command-line check is sketched below).
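The same can be verified with curl; --resolve pins the hostname to the edge node and -k skips verification in case the certificate is self-signed (<node02-ip> is a placeholder):
curl -vk --resolve nginx-test.wdyxgames.com:443:<node02-ip> https://nginx-test.wdyxgames.com/
# expected body: nginx-test.wdyxgames.com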