How to Install and Upgrade OushuDB
If you are using the Oushu Lava public cloud, or private cloud 2.0+, you can deploy OushuDB automatically through the Lava UI; for details see: http://oushu.io/docs/ch/lava-...
If you do not use Oushu Lava and only want to deploy OushuDB on its own, install it by following the steps in this section.
First, on oushum1, edit /usr/local/hawq/etc/slaves and write the hostnames of all OushuDB segment nodes into it. In this installation, oushus1 and oushus2 should be written into slaves, so its content is:
oushus1
oushus2
Install hawq on the other nodes:

hawq ssh -h oushum2 -e "yum install -y hawq"
hawq ssh -f slaves -e "yum install -y hawq"

On the oushum1 node, add the following to the configuration file /etc/sysctl.conf:
kernel.shmmax = 1000000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 200000
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 10000 65535
net.core.netdev_max_backlog = 200000
net.netfilter.nf_conntrack_max = 524288
fs.nr_open = 3000000
kernel.threads-max = 798720
kernel.pid_max = 798720
# increase network
net.core.rmem_max = 2097152
net.core.wmem_max = 2097152
net.core.somaxconn = 4096
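The two shared-memory values above are related: kernel.shmall is counted in pages while kernel.shmmax is in bytes, and shmall must be large enough to cover at least one maximum-size segment. A minimal sketch of that sanity check, assuming a 4 KiB page size (check `getconf PAGE_SIZE` on your hosts):

```shell
#!/bin/sh
# Sanity-check the shared-memory settings from /etc/sysctl.conf above.
# kernel.shmall is in pages, kernel.shmmax is in bytes (page size assumed 4 KiB).
PAGE_SIZE=4096
SHMMAX=1000000000        # bytes, from kernel.shmmax
SHMALL=4000000000        # pages, from kernel.shmall
total_bytes=$((SHMALL * PAGE_SIZE))
if [ "$total_bytes" -ge "$SHMMAX" ]; then
    echo "OK: shmall ($total_bytes bytes) covers shmmax ($SHMMAX bytes)"
else
    echo "WARNING: shmall is smaller than shmmax"
fi
```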

Copy the /etc/sysctl.conf configuration file from oushum1 to all nodes:
hawq scp -r -f hostfile /etc/sysctl.conf =:/etc/
On oushum1, use "hawq ssh" to run the following command so that the system settings in /etc/sysctl.conf take effect on all nodes:
hawq ssh -f hostfile -e "sysctl -p"

On oushum1, create the file /etc/security/limits.d/gpadmin.conf:
* soft nofile 1048576
* hard nofile 1048576
* soft nproc 131072
* hard nproc 131072
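Each line in the limits file follows the `<domain> <type> <item> <value>` layout; `*` applies the limit to all users, including gpadmin. A small sketch that reads the values back out of a stand-in copy of the file (the /tmp path is only for illustration), which also makes the field layout explicit:

```shell
#!/bin/sh
# Recreate the limits file in /tmp (illustrative path) and parse it back,
# field by field: domain, type (soft/hard), item (nofile/nproc), value.
cat > /tmp/gpadmin.conf <<'EOF'
* soft nofile 1048576
* hard nofile 1048576
* soft nproc 131072
* hard nproc 131072
EOF
soft_nofile=$(awk '$2 == "soft" && $3 == "nofile" { print $4 }' /tmp/gpadmin.conf)
hard_nproc=$(awk '$2 == "hard" && $3 == "nproc" { print $4 }' /tmp/gpadmin.conf)
echo "soft nofile = $soft_nofile, hard nproc = $hard_nproc"
```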

Copy the /etc/security/limits.d/gpadmin.conf configuration file from oushum1 to all nodes:
hawq scp -r -f hostfile /etc/security/limits.d/gpadmin.conf =:/etc/security/limits.d

On oushum1, create /hawq/default_filespace on Hadoop and grant gpadmin ownership:
sudo -u hdfs hdfs dfs -mkdir -p /hawq/default_filespace
sudo -u hdfs hdfs dfs -chown -R gpadmin /hawq

On oushum1, create mhostfile, recording the hostnames of all hawq master and standby master nodes, similar to hostfile:
touch mhostfile
mhostfile content:
oushum1
oushum2
On oushum1, create shostfile, recording the hostnames of all hawq segment nodes, similar to hostfile:
touch shostfile
shostfile content:
oushus1
oushus2
On oushum1, use "hawq ssh" to create the master metadata directory and temporary file directories on the master and standby nodes, and grant gpadmin ownership:
# Create the master metadata directory
hawq ssh -f mhostfile -e 'mkdir -p /data1/hawq/masterdd'
# Create the temporary file directories
hawq ssh -f mhostfile -e 'mkdir -p /data1/hawq/tmp'
hawq ssh -f mhostfile -e 'mkdir -p /data2/hawq/tmp'
hawq ssh -f mhostfile -e 'chown -R gpadmin:gpadmin /data1/hawq'
hawq ssh -f mhostfile -e 'chown -R gpadmin:gpadmin /data2/hawq'
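Conceptually, `hawq ssh -f mhostfile -e CMD` just runs CMD on every host listed in the file. A dry-run sketch of that fan-out, with echo standing in for the real ssh invocation and an illustrative /tmp path:

```shell
#!/bin/sh
# Emulate "hawq ssh -f mhostfile -e 'mkdir -p /data1/hawq/masterdd'" as a dry run:
# print the per-host command instead of executing it over ssh.
cat > /tmp/mhostfile <<'EOF'
oushum1
oushum2
EOF
cmds=$(while read -r host; do
    echo "ssh $host \"mkdir -p /data1/hawq/masterdd\""
done < /tmp/mhostfile)
echo "$cmds"
```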

On oushum1, use "hawq ssh" to create the segment metadata directory and temporary file directories on all segments, and grant gpadmin ownership:
# Create the segment metadata directory
hawq ssh -f shostfile -e 'mkdir -p /data1/hawq/segmentdd'
# Create the temporary file directories
hawq ssh -f shostfile -e 'mkdir -p /data1/hawq/tmp'
hawq ssh -f shostfile -e 'mkdir -p /data2/hawq/tmp'
hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data1/hawq'
hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data2/hawq'

On oushum1, switch to the hawq administrative user; all hawq-related configuration files must be edited with this user's permissions:
su - gpadmin
Edit /usr/local/hawq/etc/hdfs-client.xml (as in hdfs-site.xml, first uncomment the HA properties):
<property>
    <name>dfs.nameservices</name>
    <value>oushu</value>
</property>
<property>
    <name>dfs.ha.namenodes.oushu</name>
    <value>nn1,nn2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.oushu.nn1</name>
    <value>oushum2:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.oushu.nn2</name>
    <value>oushum1:9000</value>
</property>
<property>
    <name>dfs.namenode.http-address.oushu.nn1</name>
    <value>oushum2:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.oushu.nn2</name>
    <value>oushum1:50070</value>
</property>
...
<property>
    <name>dfs.domain.socket.path</name>
    <value>/var/lib/hadoop-hdfs/dn_socket</value>
    <description>
        Optional. This is a path to a UNIX domain socket that will be used for
        communication between the DataNode and local HDFS clients.
        If the string "_PORT" is present in this path, it will be replaced by
        the TCP port of the DataNode.
    </description>
</property>
...

On oushum1, edit /usr/local/hawq/etc/hawq-site.xml. Note: the "oushu" prefix in hawq_dfs_url is the value of dfs.nameservices configured in hdfs-client.xml, and the value of magma_nodes_url is best taken from the first two lines of /usr/local/hawq/etc/slaves:
<property>
    <name>hawq_master_address_host</name>
    <value>oushum1</value>
</property>
...
<property>
    <name>hawq_standby_address_host</name>
    <value>oushum2</value>
    <description>The host name of hawq standby master.</description>
</property>
...
<property>
    <name>hawq_dfs_url</name>
    <value>oushu/hawq/default_filespace</value>
    <description>URL for accessing HDFS.</description>
</property>
<property>
    <name>magma_nodes_url</name>
    <value>oushus1:6666,oushus2:6666</value>
    <description>urls for accessing magma.</description>
</property>
<property>
    <name>hawq_master_directory</name>
    <value>/data1/hawq/masterdd</value>
    <description>The directory of hawq master.</description>
</property>
<property>
    <name>hawq_segment_directory</name>
    <value>/data1/hawq/segmentdd</value>
    <description>The directory of hawq segment.</description>
</property>
<property>
    <name>hawq_master_temp_directory</name>
    <value>/data1/hawq/tmp,/data2/hawq/tmp</value>
    <description>The temporary directory reserved for hawq master. NOTE: please DO NOT add " " between directories.</description>
</property>
<property>
    <name>hawq_segment_temp_directory</name>
    <value>/data1/hawq/tmp,/data2/hawq/tmp</value>
    <description>The temporary directory reserved for hawq segment. NOTE: please DO NOT add " " between directories.</description>
</property>
<property>
    <name>default_storage</name>
    <value>hdfs</value>
    <description>Sets the default storage when creating table.</description>
</property>
<property>
    <name>hawq_init_with_hdfs</name>
    <value>true</value>
    <description>Choose whether init cluster with hdfs.</description>
</property>
...
<property>
    <name>hawq_rm_yarn_address</name>
    <value>oushum1:8032</value>
    <description>The address of YARN resource manager server.</description>
</property>
<property>
    <name>hawq_rm_yarn_scheduler_address</name>
    <value>oushum1:8030</value>
    <description>The address of YARN scheduler server.</description>
</property>
...
<property>
    <name>hawq_rm_yarn_app_name</name>
    <value>hawq</value>
    <description>The application name to register hawq resource manager in YARN.</description>
</property>
...
<property>
    <name>hawq_re_cgroup_hierarchy_name</name>
    <value>hawq</value>
    <description>The name of the hierarchy to accommodate CGroup directories/files for resource enforcement. For example, /sys/fs/cgroup/cpu/hawq for the CPU sub-system.</description>
</property>
...
OushuDB 4.0 adds separate configuration and start/stop support for Magma. When using the magma service, on oushum1, use "hawq ssh" to create the node data directories on all slave nodes and grant gpadmin ownership:
hawq ssh -f shostfile -e 'mkdir -p /data1/hawq/magma_segmentdd'
hawq ssh -f shostfile -e 'mkdir -p /data2/hawq/magma_segmentdd'
hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data1/hawq'
hawq ssh -f shostfile -e 'chown -R gpadmin:gpadmin /data2/hawq'
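As noted above, the prefix of hawq_dfs_url must equal the dfs.nameservices value from hdfs-client.xml. A sketch of that consistency check against a stand-in copy of the file (the /tmp path and the sed extraction are illustrative; they assume the `<name>`/`<value>` pair sits on consecutive lines, as formatted above):

```shell
#!/bin/sh
# Extract dfs.nameservices from a stand-in hdfs-client.xml fragment and compare
# it with the prefix (text before the first "/") of hawq_dfs_url.
cat > /tmp/hdfs-client-fragment.xml <<'EOF'
<property>
    <name>dfs.nameservices</name>
    <value>oushu</value>
</property>
EOF
nameservice=$(sed -n '/<name>dfs.nameservices<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' /tmp/hdfs-client-fragment.xml)
hawq_dfs_url="oushu/hawq/default_filespace"
if [ "${hawq_dfs_url%%/*}" = "$nameservice" ]; then
    echo "OK: hawq_dfs_url prefix matches nameservice '$nameservice'"
fi
```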

Then edit the configuration file /usr/local/hawq/etc/magma-site.xml:
<property>
    <name>nodes_file</name>
    <value>slaves</value>
    <description>The magma nodes file name at GPHOME/etc</description>
</property>
<property>
    <name>node_data_directory</name>
    <value>file:///data1/hawq/magma_segmentdd,file:///data2/hawq/magma_segmentdd</value>
    <description>The data directory for magma node</description>
</property>
<property>
    <name>node_log_directory</name>
    <value>~/hawq-data-directory/segmentdd/pg_log</value>
    <description>The log directory for magma node</description>
</property>
<property>
    <name>node_address_port</name>
    <value>6666</value>
    <description>The port magma node listening</description>
</property>
<property>
    <name>magma_range_number</name>
    <value>2</value>
</property>
<property>
    <name>magma_replica_number</name>
    <value>3</value>
</property>
<property>
    <name>magma_datadir_capacity</name>
    <value>3</value>
</property>

On oushum1, switch back to the root user:
su - root
Copy the configuration files in /usr/local/hawq/etc on oushum1 to all nodes:
source /usr/local/hawq/greenplum_path.sh
hawq scp -r -f hostfile /usr/local/hawq/etc =:/usr/local/hawq

On oushum1, switch to the gpadmin user and create hhostfile:
su - gpadmin
source /usr/local/hawq/greenplum_path.sh  # set the hawq environment variables
touch hhostfile

The hhostfile file records the hostnames of all OushuDB nodes, as follows:
oushum1
oushum2
oushus1
oushus2
Log in to each machine as the root user and change the gpadmin user's password:
sudo echo 'password' | sudo passwd --stdin gpadmin

Exchange ssh keys for the gpadmin user, entering the gpadmin password of each node when prompted:
su - gpadmin
source /usr/local/hawq/greenplum_path.sh  # set the hawq environment variables
hawq ssh-exkeys -f hhostfile
On oushum1, with gpadmin user permissions, initialize the OushuDB cluster; when prompted "Continue with HAWQ init", enter Y:
hawq init cluster                 # OushuDB 4.0 does not start the magma service by default
hawq init cluster --with_magma    # new in OushuDB 4.0; not supported in 3.X
# The --with_magma option is new in OushuDB 4.0, and only the hawq init|start|stop cluster commands accept it.

Note:
When initializing the OushuDB cluster, make sure that masterdd and segmentdd under the created /data*/hawq/ directories are empty, and that /hawq/default_filespace created on Hadoop is empty.
Also, if hawq init cluster fails, you can first run the command below to stop the hawq cluster, clear the directories, find the root cause, and then re-initialize:
hawq stop cluster
# On the OushuDB master node, following the configuration of this installation, clear all hawq directories and recreate the hawq subdirectories with:
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/masterdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/segmentdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/magma_masterdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data1/hawq/magma_segmentdd/*'
hawq ssh -f hhostfile -e 'rm -fr /data2/hawq/magma_segmentdd/*'
# On the HDFS namenode, clear /hawq/default_filespace with the command below; if /hawq/default_filespace contains user data, back it up first to avoid loss:
hdfs dfs -rm -f -r /hawq/default_filespace/*
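The cleanup steps above can be wrapped in a small helper; sketched here with a dry-run flag so the destructive rm is only printed, not executed (the DRY_RUN flag and the script itself are hypothetical, not part of OushuDB):

```shell
#!/bin/sh
# Hypothetical cleanup helper: list (or remove) the contents of every hawq data
# directory named in the steps above. DRY_RUN=1 only prints what would be removed.
DRY_RUN=1
cleaned=""
for d in /data1/hawq/masterdd /data1/hawq/segmentdd \
         /data1/hawq/magma_masterdd /data1/hawq/magma_segmentdd \
         /data2/hawq/magma_segmentdd; do
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: rm -fr $d/*"
    else
        rm -fr "$d"/*    # destructive: clears the directory contents
    fi
    cleaned="$cleaned $d"
done
```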

You should also check that the HDFS parameters are configured correctly, preferably as the gpadmin user. With incorrect parameters, HDFS may sometimes start normally but will fail under high load.
su - gpadmin
source /usr/local/hawq/greenplum_path.sh
hawq check -f hostfile --hadoop /usr/hdp/current/hadoop-client/ --hdfs-ha
Check that OushuDB is running normally:
su - gpadmin
source /usr/local/hawq/greenplum_path.sh
psql -d postgres
select * from gp_segment_configuration;  -- confirm all nodes are in the up state
create table t(i int);
insert into t select generate_series(1,1000);
select count(*) from t;
