This article walks through installing KubeSphere on an existing Kubernetes cluster and troubleshooting the problems that came up along the way.
1. Environment and scope
1.1 The cluster consists of one master node (k8s-master) and two worker nodes. Every server runs the Kubernetes 1.17.3 base environment, both workers have already joined the master, and all three machines can reach each other over the network.
1.2 The stock 3.10.x kernel shipped with CentOS 7.x has bugs that make Docker and Kubernetes unstable; in this install it also left the KubeSphere console unable to log in at the very last step. Upgrade to a 4.4.xx kernel first:
(1) Add the ELRepo repository:
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
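If rpm rejects the package signature, import the ELRepo signing key first (a standard ELRepo setup step, not shown in the original post):
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org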
(2) Install the long-term-support (kernel-lt) kernel from ELRepo (this can take a while depending on network conditions):
yum --enablerepo=elrepo-kernel install -y kernel-lt
(3) Make the new kernel the default boot entry:
grub2-set-default "CentOS Linux (4.4.221-1.el7.elrepo.x86_64) 7 (Core)"
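The menu-entry title above is only an example and must match the kernel that was actually installed. You can list the available entries first with standard GRUB2 tooling on CentOS 7:
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg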
(4) Reboot and check the new kernel version:
reboot
uname -r
1.3 The following components will be installed on the k8s-master node, in this order:
Helm 2.16.2, Tiller 2.16.2, OpenEBS 1.5.0, KubeSphere 3.0.0
2. Install Helm + Tiller
2.1 Check whether the master node carries a taint
kubectl describe node k8s-master | grep Taint
Output:
Taints: node-role.kubernetes.io/master:NoSchedule
2.2 Remove the taint
kubectl taint nodes k8s-master node-role.kubernetes.io/master:NoSchedule-
Output: node/k8s-master untainted
Checking again now shows:
Taints: <none>
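To review the taints on every node in one command (an optional check, not in the original post):
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints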
2.3 Download and install Helm 2.16.2
wget https://get.helm.sh/helm-v2.16.2-linux-amd64.tar.gz
tar -zxvf helm-v2.16.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
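At this point only the client exists, so verify the binary with a client-only version query (Tiller comes next):
helm version -c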
2.4 Install Tiller
By default Helm pulls the Tiller image from storage.googleapis.com. If the machine you are on cannot reach it (blocked), install from the Alibaba mirrors instead:
helm init --upgrade --tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.2 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
If that does not succeed, try the following command:
helm init --service-account=tiller --tiller-image=gcr.io/kubernetes-helm/tiller:v2.16.2 --history-max 300
Wait a moment, then check the pods; once the tiller pod is Running, it worked:
kubectl get pods --all-namespaces
Check the version; if both Client and Server information is printed, the installation succeeded:
helm version
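If helm version hangs on the Server line, inspect Tiller directly; tiller-deploy and the app=helm,name=tiller labels are the Helm v2 defaults:
kubectl -n kube-system get deployment tiller-deploy
kubectl -n kube-system get pods -l app=helm,name=tiller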
2.5 Create RBAC permissions and grant them to Tiller
Create tiller-rbac.yaml with the following content:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
Apply the RBAC manifest:
kubectl apply -f tiller-rbac.yaml
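If Tiller was initialized before this ServiceAccount existed, point the running deployment at it (a sketch of the common fix; on a fresh install, helm init --service-account=tiller wires this up already):
kubectl -n kube-system patch deploy tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'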
3. Install OpenEBS
3.1 The install command from the official docs fails here because the images cannot be downloaded; see https://blog.csdn.net/qq_44986889/article/details/106522841 for background.
Create openebs1.5.0-install.yaml and install with the following manifest:
# This manifest deploys the OpenEBS control plane components, with associated CRs & RBAC rules
# NOTE: On GKE, deploy the openebs-operator.yaml in admin context
# Create the OpenEBS namespace
apiVersion: v1
kind: Namespace
metadata:
name: openebs
---
# Create Maya Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
name: openebs-maya-operator
namespace: openebs
---
# Define Role that allows operations on K8s pods/deployments
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: openebs-maya-operator
rules:
- apiGroups: ["*"]
resources: ["nodes", "nodes/proxy"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["namespaces", "services", "pods", "pods/exec", "deployments", "deployments/finalizers", "replicationcontrollers", "replicasets", "events", "endpoints", "configmaps", "secrets", "jobs", "cronjobs"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["statefulsets", "daemonsets"]
verbs: ["*"]
- apiGroups: ["*"]
resources: ["resourcequotas", "limitranges"]
verbs: ["list", "watch"]
- apiGroups: ["*"]
resources: ["ingresses", "horizontalpodautoscalers", "verticalpodautoscalers", "poddisruptionbudgets", "certificatesigningrequests"]
verbs: ["list", "watch"]
- apiGroups: ["*"]
resources: ["storageclasses", "persistentvolumeclaims", "persistentvolumes"]
verbs: ["*"]
- apiGroups: ["volumesnapshot.external-storage.k8s.io"]
resources: ["volumesnapshots", "volumesnapshotdatas"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
- apiGroups: ["apiextensions.k8s.io"]
resources: ["customresourcedefinitions"]
verbs: [ "get", "list", "create", "update", "delete", "patch"]
- apiGroups: ["*"]
resources: [ "disks", "blockdevices", "blockdeviceclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpoolclusters", "storagepoolclaims", "storagepoolclaims/finalizers", "cstorpoolclusters/finalizers", "storagepools"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "castemplates", "runtasks"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpools", "cstorpools/finalizers", "cstorvolumereplicas", "cstorvolumes", "cstorvolumeclaims"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorpoolinstances", "cstorpoolinstances/finalizers"]
verbs: ["*" ]
- apiGroups: ["*"]
resources: [ "cstorbackups", "cstorrestores", "cstorcompletedbackups"]
verbs: ["*" ]
- apiGroups: ["coordination.k8s.io"]
resources: ["leases"]
verbs: ["get", "watch", "list", "delete", "update", "create"]
- apiGroups: ["admissionregistration.k8s.io"]
resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
verbs: ["get", "create", "list", "delete", "update", "patch"]
- nonResourceURLs: ["/metrics"]
verbs: ["get"]
- apiGroups: ["*"]
resources: [ "upgradetasks"]
verbs: ["*" ]
---
# Bind the Service Account with the Role Privileges.
# TODO: Check if default account also needs to be there
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: openebs-maya-operator
subjects:
- kind: ServiceAccount
name: openebs-maya-operator
namespace: openebs
roleRef:
kind: ClusterRole
name: openebs-maya-operator
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: maya-apiserver
namespace: openebs
labels:
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: maya-apiserver
openebs.io/component-name: maya-apiserver
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
template:
metadata:
labels:
name: maya-apiserver
openebs.io/component-name: maya-apiserver
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: maya-apiserver
imagePullPolicy: IfNotPresent
image: openebs/m-apiserver:1.5.0
ports:
- containerPort: 5656
env:
# OPENEBS_IO_KUBE_CONFIG enables maya api service to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
#value: "/home/ubuntu/.kube/config"
# OPENEBS_IO_K8S_MASTER enables maya api service to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for maya api server version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
#value: "http://172.28.128.3:8080"
# OPENEBS_NAMESPACE provides the namespace of this deployment as an
# environment variable
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
# environment variable
- name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
# OPENEBS_MAYA_POD_NAME provides the name of this pod as
# environment variable
- name: OPENEBS_MAYA_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# If OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG is false then OpenEBS default
# storageclass and storagepool will not be created.
- name: OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
value: "true"
# OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL decides whether default cstor sparse pool should be
# configured as a part of openebs installation.
# If "true" a default cstor sparse pool will be configured, if "false" it will not be configured.
# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
# is set to true
- name: OPENEBS_IO_INSTALL_DEFAULT_CSTOR_SPARSE_POOL
value: "false"
# OPENEBS_IO_CSTOR_TARGET_DIR can be used to specify the hostpath
# to be used for saving the shared content between the side cars
# of cstor volume pod.
# The default path used is /var/openebs/sparse
#- name: OPENEBS_IO_CSTOR_TARGET_DIR
#value: "/var/openebs/sparse"
# OPENEBS_IO_CSTOR_POOL_SPARSE_DIR can be used to specify the hostpath
# to be used for saving the shared content between the side cars
# of cstor pool pod. This ENV is also used to indicate the location
# of the sparse devices.
# The default path used is /var/openebs/sparse
#- name: OPENEBS_IO_CSTOR_POOL_SPARSE_DIR
#value: "/var/openebs/sparse"
# OPENEBS_IO_JIVA_POOL_DIR can be used to specify the hostpath
# to be used for default Jiva StoragePool loaded by OpenEBS
# The default path used is /var/openebs
# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
# is set to true
#- name: OPENEBS_IO_JIVA_POOL_DIR
#value: "/var/openebs"
# OPENEBS_IO_LOCALPV_HOSTPATH_DIR can be used to specify the hostpath
# to be used for default openebs-hostpath storageclass loaded by OpenEBS
# The default path used is /var/openebs/local
# This value takes effect only if OPENEBS_IO_CREATE_DEFAULT_STORAGE_CONFIG
# is set to true
#- name: OPENEBS_IO_LOCALPV_HOSTPATH_DIR
#value: "/var/openebs/local"
- name: OPENEBS_IO_JIVA_CONTROLLER_IMAGE
value: "openebs/jiva:1.5.0"
- name: OPENEBS_IO_JIVA_REPLICA_IMAGE
value: "openebs/jiva:1.5.0"
- name: OPENEBS_IO_JIVA_REPLICA_COUNT
value: "3"
- name: OPENEBS_IO_CSTOR_TARGET_IMAGE
value: "openebs/cstor-istgt:1.5.0"
- name: OPENEBS_IO_CSTOR_POOL_IMAGE
value: "openebs/cstor-pool:1.5.0"
- name: OPENEBS_IO_CSTOR_POOL_MGMT_IMAGE
value: "openebs/cstor-pool-mgmt:1.5.0"
- name: OPENEBS_IO_CSTOR_VOLUME_MGMT_IMAGE
value: "openebs/cstor-volume-mgmt:1.5.0"
- name: OPENEBS_IO_VOLUME_MONITOR_IMAGE
value: "openebs/m-exporter:1.5.0"
- name: OPENEBS_IO_CSTOR_POOL_EXPORTER_IMAGE
value: "openebs/m-exporter:1.5.0"
- name: OPENEBS_IO_HELPER_IMAGE
value: "openebs/linux-utils:1.5.0"
# OPENEBS_IO_ENABLE_ANALYTICS if set to true sends anonymous usage
# events to Google Analytics
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "true"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "openebs-operator"
# OPENEBS_IO_ANALYTICS_PING_INTERVAL can be used to specify the duration (in hours)
# for periodic ping events sent to Google Analytics.
# Default is 24h.
# Minimum is 1h. You can convert this to weekly by setting 168h
#- name: OPENEBS_IO_ANALYTICS_PING_INTERVAL
#value: "24h"
livenessProbe:
exec:
command:
- /usr/local/bin/mayactl
- version
initialDelaySeconds: 30
periodSeconds: 60
readinessProbe:
exec:
command:
- /usr/local/bin/mayactl
- version
initialDelaySeconds: 30
periodSeconds: 60
---
apiVersion: v1
kind: Service
metadata:
name: maya-apiserver-service
namespace: openebs
labels:
openebs.io/component-name: maya-apiserver-svc
spec:
ports:
- name: api
port: 5656
protocol: TCP
targetPort: 5656
selector:
name: maya-apiserver
sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-provisioner
namespace: openebs
labels:
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
template:
metadata:
labels:
name: openebs-provisioner
openebs.io/component-name: openebs-provisioner
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: openebs-provisioner
imagePullPolicy: IfNotPresent
image: openebs/openebs-k8s-provisioner:1.5.0
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
#value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
#value: "/home/ubuntu/.kube/config"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# that provisioner should forward the volume create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
#- name: OPENEBS_MAYA_SERVICE_NAME
#value: "maya-apiserver-apiservice"
livenessProbe:
exec:
command:
- pgrep
- ".*openebs"
initialDelaySeconds: 30
periodSeconds: 60
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-snapshot-operator
namespace: openebs
labels:
name: openebs-snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-snapshot-operator
openebs.io/component-name: openebs-snapshot-operator
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: snapshot-controller
image: openebs/snapshot-controller:1.5.0
imagePullPolicy: IfNotPresent
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
livenessProbe:
exec:
command:
- pgrep
- ".*controller"
initialDelaySeconds: 30
periodSeconds: 60
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# that snapshot controller should forward the snapshot create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
#- name: OPENEBS_MAYA_SERVICE_NAME
#value: "maya-apiserver-apiservice"
- name: snapshot-provisioner
image: openebs/snapshot-provisioner:1.5.0
imagePullPolicy: IfNotPresent
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_MAYA_SERVICE_NAME provides the maya-apiserver K8s service name,
# that snapshot provisioner should forward the clone create/delete requests.
# If not present, "maya-apiserver-service" will be used for lookup.
# This is supported for openebs provisioner version 0.5.3-RC1 onwards
#- name: OPENEBS_MAYA_SERVICE_NAME
#value: "maya-apiserver-apiservice"
livenessProbe:
exec:
command:
- pgrep
- ".*provisioner"
initialDelaySeconds: 30
periodSeconds: 60
---
# This is the node-disk-manager related config.
# It can be used to customize the disks probes and filters
apiVersion: v1
kind: ConfigMap
metadata:
name: openebs-ndm-config
namespace: openebs
labels:
openebs.io/component-name: ndm-config
data:
# udev-probe is default or primary probe which should be enabled to run ndm
# filterconfigs contains configs of filters - in the form of include
# and exclude comma separated strings
node-disk-manager.config: |
probeconfigs:
- key: udev-probe
name: udev probe
state: true
- key: seachest-probe
name: seachest probe
state: false
- key: smart-probe
name: smart probe
state: true
filterconfigs:
- key: os-disk-exclude-filter
name: os disk exclude filter
state: true
exclude: "/,/etc/hosts,/boot"
- key: vendor-filter
name: vendor filter
state: true
include: ""
exclude: "CLOUDBYT,OpenEBS"
- key: path-filter
name: path filter
state: true
include: ""
exclude: "loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md"
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: openebs-ndm
namespace: openebs
labels:
name: openebs-ndm
openebs.io/component-name: ndm
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-ndm
openebs.io/component-name: ndm
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
name: openebs-ndm
openebs.io/component-name: ndm
openebs.io/version: 1.5.0
spec:
# By default the node-disk-manager will be run on all kubernetes nodes
# If you would like to limit this to only some nodes, say the nodes
# that have storage attached, you could label those node and use
# nodeSelector.
#
# e.g. label the storage nodes with - "openebs.io/nodegroup"="storage-node"
# kubectl label node <node-name> "openebs.io/nodegroup"="storage-node"
#nodeSelector:
#"openebs.io/nodegroup": "storage-node"
serviceAccountName: openebs-maya-operator
hostNetwork: true
containers:
- name: node-disk-manager
image: openebs/node-disk-manager-amd64:v0.4.5
imagePullPolicy: Always
securityContext:
privileged: true
volumeMounts:
- name: config
mountPath: /host/node-disk-manager.config
subPath: node-disk-manager.config
readOnly: true
- name: udev
mountPath: /run/udev
- name: procmount
mountPath: /host/proc
readOnly: true
- name: sparsepath
mountPath: /var/openebs/sparse
env:
# namespace in which NDM is installed will be passed to NDM Daemonset
# as environment variable
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# pass hostname as env variable using downward API to the NDM container
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# specify the directory where the sparse files need to be created.
# if not specified, then sparse files will not be created.
- name: SPARSE_FILE_DIR
value: "/var/openebs/sparse"
# Size(bytes) of the sparse file to be created.
- name: SPARSE_FILE_SIZE
value: "10737418240"
# Specify the number of sparse files to be created
- name: SPARSE_FILE_COUNT
value: "0"
livenessProbe:
exec:
command:
- pgrep
- ".*ndm"
initialDelaySeconds: 30
periodSeconds: 60
volumes:
- name: config
configMap:
name: openebs-ndm-config
- name: udev
hostPath:
path: /run/udev
type: Directory
# mount /proc (to access mount file of process 1 of host) inside container
# to read mount-point of disks and partitions
- name: procmount
hostPath:
path: /proc
type: Directory
- name: sparsepath
hostPath:
path: /var/openebs/sparse
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-ndm-operator
namespace: openebs
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-ndm-operator
openebs.io/component-name: ndm-operator
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: node-disk-operator
image: openebs/node-disk-operator-amd64:v0.4.5
imagePullPolicy: Always
readinessProbe:
exec:
command:
- stat
- /tmp/operator-sdk-ready
initialDelaySeconds: 4
periodSeconds: 10
failureThreshold: 1
env:
- name: WATCH_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# the service account of the ndm-operator pod
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: OPERATOR_NAME
value: "node-disk-operator"
- name: CLEANUP_JOB_IMAGE
value: "openebs/linux-utils:1.5.0"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-admission-server
namespace: openebs
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
openebs.io/version: 1.5.0
spec:
replicas: 1
strategy:
type: Recreate
rollingUpdate: null
selector:
matchLabels:
app: admission-webhook
template:
metadata:
labels:
app: admission-webhook
openebs.io/component-name: admission-webhook
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: admission-webhook
image: openebs/admission-server:1.5.0
imagePullPolicy: IfNotPresent
args:
- -alsologtostderr
- -v=2
- 2>&1
env:
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: ADMISSION_WEBHOOK_NAME
value: "openebs-admission-server"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: openebs-localpv-provisioner
namespace: openebs
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 1.5.0
spec:
selector:
matchLabels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: openebs-localpv-provisioner
openebs.io/component-name: openebs-localpv-provisioner
openebs.io/version: 1.5.0
spec:
serviceAccountName: openebs-maya-operator
containers:
- name: openebs-provisioner-hostpath
imagePullPolicy: Always
image: openebs/provisioner-localpv:1.5.0
env:
# OPENEBS_IO_K8S_MASTER enables openebs provisioner to connect to K8s
# based on this address. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_K8S_MASTER
#value: "http://10.128.0.12:8080"
# OPENEBS_IO_KUBE_CONFIG enables openebs provisioner to connect to K8s
# based on this config. This is ignored if empty.
# This is supported for openebs provisioner version 0.5.2 onwards
#- name: OPENEBS_IO_KUBE_CONFIG
#value: "/home/ubuntu/.kube/config"
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: OPENEBS_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# OPENEBS_SERVICE_ACCOUNT provides the service account of this pod as
# environment variable
- name: OPENEBS_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
- name: OPENEBS_IO_ENABLE_ANALYTICS
value: "true"
- name: OPENEBS_IO_INSTALLER_TYPE
value: "openebs-operator"
- name: OPENEBS_IO_HELPER_IMAGE
value: "openebs/linux-utils:1.5.0"
livenessProbe:
exec:
command:
- pgrep
- ".*localpv"
initialDelaySeconds: 30
periodSeconds: 60
---
3.2 Apply the manifest
kubectl apply -f openebs1.5.0-install.yaml
3.3 Set openebs-hostpath as the default StorageClass
Wait until every pod in the openebs namespace has been created and is Running.
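A convenient way to watch them come up (press Ctrl-C to stop watching):
kubectl get pods -n openebs -w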
List the StorageClasses:
kubectl get sc
Set it as the default:
kubectl patch storageclass openebs-hostpath -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Run kubectl get sc again; openebs-hostpath should now be marked (default).
3.4 Re-apply the master taint
kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule
4. Install KubeSphere
4.1 Minimal installation from the released YAML
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
You can also download the YAML files and apply them locally.
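For example, the same two files applied from a local copy:
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
wget https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
kubectl apply -f kubesphere-installer.yaml -f cluster-configuration.yaml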
4.2 After applying, follow the installer logs
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
4.3 After a fairly long installation, the log ends with a success message and the default account; the install is complete.
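To see where the console is exposed (ks-console is a NodePort service, 30880 by default), then log in with the account printed in the success banner:
kubectl get svc/ks-console -n kubesphere-system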
5. Troubleshooting console errors on port 30880
If accessing the KubeSphere console on port 30880 produces an error, investigate as follows:
5.1 Check that all pods are running
kubectl get pods --all-namespaces
5.2 Check the state of ks-controller-manager; the command must specify the namespace
kubectl describe pod ks-controller-manager-69b7489b55-sbkr4 --namespace kubesphere-system
Its status should be Running.
5.3 Inspect the ks-controller-manager logs; here they revealed timeout exceptions
kubectl logs ks-controller-manager-69b7489b55-sbkr4 --namespace kubesphere-system
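The pod-name hash differs per cluster; assuming the installer's standard app=ks-controller-manager label, you can select the pod by label instead of copying its name:
kubectl -n kubesphere-system logs -l app=ks-controller-manager --tail=100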
The log showed timeout errors. They went away only after upgrading the CentOS kernel to 4.4.xx, which is exactly why this article recommends upgrading the kernel at the start.
Reference thread: https://kubesphere.com.cn/forum/d/2373-ks300