
fix bug: update action would change the user of the definition.

baoliang committed 6 years ago
Commit cba13a1fee
2 changed files with 38 additions and 395 deletions
  1. docs/zh_CN/后端部署文档.md: +37 −394
  2. install.sh: +1 −1

+ 37 - 394
docs/zh_CN/后端部署文档.md

@@ -86,370 +86,11 @@ escheduler  ALL=(ALL)       NOPASSWD: NOPASSWD: ALL
 #Default requiretty
 ```
 
-## Configuration file description
-
-```
-Note: the configuration files are located under target/escheduler-{version}/conf
-```
-
-### escheduler-alert
-
-Configures the email alert settings
-
-
-* alert.properties 
-
-```
-# QQ mail is used as an example here; if you use a different mail provider, change the corresponding settings
-#alert type is EMAIL/SMS
-alert.type=EMAIL
-
-# mail server configuration
-mail.protocol=SMTP
-mail.server.host=smtp.exmail.qq.com
-mail.server.port=25
-mail.sender=xxxxxxx@qq.com
-mail.passwd=xxxxxxx
-
-# xls file path; create it manually before use if it does not exist
-xls.file.path=/opt/xls
-```
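To sanity-check these mail settings before starting the alert server, a minimal reachability test (assuming an OpenBSD-style `nc`; host and port are the example values from the block above):

```
# confirm the SMTP host/port from alert.properties are reachable
nc -zv smtp.exmail.qq.com 25 && echo "SMTP reachable" || echo "check mail.server.host / mail.server.port"
```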
-
-
-
-
-### escheduler-common
-
-General configuration: task queue selection and address settings, and common file directory settings
-
-- common/common.properties
-
-```
-#task queue implementation, default "zookeeper"
-escheduler.queue.impl=zookeeper
-
-# user data directory path; configure it yourself and make sure the directory exists and has read/write permissions
-data.basedir.path=/tmp/escheduler
-
-# directory path for user data downloads; configure it yourself and make sure the directory exists and has read/write permissions
-data.download.basedir.path=/tmp/escheduler/download
-
-# process execution directory; configure it yourself and make sure the directory exists and has read/write permissions
-process.exec.basepath=/tmp/escheduler/exec
-
-# data base dir; resource files will be stored under this HDFS path. Configure it yourself and make sure the directory exists on HDFS with read/write permissions. "/escheduler" is recommended
-data.store2hdfs.basepath=/escheduler
-
-# whether HDFS is enabled
-hdfs.startup.state=true
-
-# system env path; configure it yourself and make sure the directory and file exist with read/write/execute permissions
-escheduler.env.path=/opt/.escheduler_env.sh
-escheduler.env.py=/opt/escheduler_env.py
-
-#resource.view.suffixs
-resource.view.suffixs=txt,log,sh,conf,cfg,py,java,sql,hql,xml
-
-# development state? default "false"
-development.state=false
-```
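The three local directories above must exist with the right ownership before the services start; a minimal sketch, assuming the deployment user is named `escheduler` as elsewhere in this guide:

```
# create the local working directories from common.properties and give them
# to the deployment user (assumed to be "escheduler")
mkdir -p /tmp/escheduler/download /tmp/escheduler/exec
chown -R escheduler:escheduler /tmp/escheduler
```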
-
-
-
-Environment variables for SHELL tasks
-
-```
-Note: the file is located under target/escheduler-{version}/conf/env; this is the environment the Worker loads when executing tasks
-```
-
-.escheduler_env.sh 
-```
-export HADOOP_HOME=/opt/soft/hadoop
-export HADOOP_CONF_DIR=/opt/soft/hadoop/etc/hadoop
-export SPARK_HOME1=/opt/soft/spark1
-export SPARK_HOME2=/opt/soft/spark2
-export PYTHON_HOME=/opt/soft/python
-export JAVA_HOME=/opt/soft/java
-export HIVE_HOME=/opt/soft/hive
-	
-export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$PATH
-```
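A quick way to confirm the file loads cleanly and puts the expected binaries on PATH (the paths are the example values above):

```
# source the env file in a fresh shell and confirm the expected tools resolve
bash -c 'source /opt/.escheduler_env.sh && which hadoop spark-submit python java hive'
```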
-
-
-
-
-Environment variables for Python tasks
-
-```
-Note: the file is located under target/escheduler-{version}/conf/env
-```
-
-escheduler_env.py
-```
-import os
-
-HADOOP_HOME="/opt/soft/hadoop"
-SPARK_HOME1="/opt/soft/spark1"
-SPARK_HOME2="/opt/soft/spark2"
-PYTHON_HOME="/opt/soft/python"
-JAVA_HOME="/opt/soft/java"
-HIVE_HOME="/opt/soft/hive"
-PATH=os.environ['PATH']
-PATH="%s/bin:%s/bin:%s/bin:%s/bin:%s/bin:%s/bin:%s"%(HIVE_HOME,HADOOP_HOME,SPARK_HOME1,SPARK_HOME2,JAVA_HOME,PYTHON_HOME,PATH)
-
-os.putenv('PATH','%s'%PATH)	
-```
-
-
-
-Hadoop configuration file
-
-- common/hadoop/hadoop.properties
-
-```
-# HA or single namenode; for namenode HA, copy core-site.xml and hdfs-site.xml into the conf directory
-fs.defaultFS=hdfs://mycluster:8020
-
-# resourcemanager HA: set the RM IPs here; leave it empty for a single resourcemanager
-yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-
-# If it is a single resourcemanager, you only need to configure one host name. If it is resourcemanager HA, the default configuration is fine
-yarn.application.status.address=http://ark1:8088/ws/v1/cluster/apps/%s
-
-```
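At runtime the `%s` in `yarn.application.status.address` is replaced with a YARN application id; a sketch of the resulting request (the application id here is a made-up example):

```
# query the status of a YARN application the way the server does,
# substituting an application id for the %s placeholder
app_id="application_1560000000000_0001"   # made-up example id
curl -s "http://ark1:8088/ws/v1/cluster/apps/${app_id}"
```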
-
-
-
-Quartz scheduler configuration file
-
-- quartz.properties
-
-```
-#============================================================================
-# Configure Main Scheduler Properties
-#============================================================================
-org.quartz.scheduler.instanceName = EasyScheduler
-org.quartz.scheduler.instanceId = AUTO
-org.quartz.scheduler.makeSchedulerThreadDaemon = true
-org.quartz.jobStore.useProperties = false
-
-#============================================================================
-# Configure ThreadPool
-#============================================================================
-
-org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
-org.quartz.threadPool.makeThreadsDaemons = true
-org.quartz.threadPool.threadCount = 25
-org.quartz.threadPool.threadPriority = 5
-
-#============================================================================
-# Configure JobStore
-#============================================================================
- 
-org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
-org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
-org.quartz.jobStore.tablePrefix = QRTZ_
-org.quartz.jobStore.isClustered = true
-org.quartz.jobStore.misfireThreshold = 60000
-org.quartz.jobStore.clusterCheckinInterval = 5000
-org.quartz.jobStore.dataSource = myDs
-
-#============================================================================
-# Configure Datasources  
-#============================================================================
- 
-org.quartz.dataSource.myDs.driver = com.mysql.jdbc.Driver
-org.quartz.dataSource.myDs.URL = jdbc:mysql://192.168.xx.xx:3306/escheduler?characterEncoding=utf8&useSSL=false
-org.quartz.dataSource.myDs.user = xx
-org.quartz.dataSource.myDs.password = xx
-org.quartz.dataSource.myDs.maxConnections = 10
-org.quartz.dataSource.myDs.validationQuery = select 1
-```
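Because the clustered JobStore keeps its state in MySQL, the `QRTZ_` tables must exist before the first start; one way to confirm, using the placeholder credentials above:

```
# list the Quartz tables; an empty result means the schema has not been imported yet
mysql -h192.168.xx.xx -uxx -pxx escheduler -e "SHOW TABLES LIKE 'QRTZ_%';"
```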
-
-
-
-ZooKeeper configuration file
-
-
-- zookeeper.properties
-
-```
-#zookeeper cluster
-zookeeper.quorum=192.168.xx.xx:2181,192.168.xx.xx:2181,192.168.xx.xx:2181
-
-#escheduler root directory
-zookeeper.escheduler.root=/escheduler
-
-#zookeeper server directory
-zookeeper.escheduler.dead.servers=/escheduler/dead-servers
-zookeeper.escheduler.masters=/escheduler/masters
-zookeeper.escheduler.workers=/escheduler/workers
-
-#zookeeper lock directory
-zookeeper.escheduler.lock.masters=/escheduler/lock/masters
-zookeeper.escheduler.lock.workers=/escheduler/lock/workers
-
-#escheduler failover directory
-zookeeper.escheduler.lock.masters.failover=/escheduler/lock/failover/masters
-zookeeper.escheduler.lock.workers.failover=/escheduler/lock/failover/workers
-
-#zookeeper session/connection timeouts and retry settings
-zookeeper.session.timeout=300
-zookeeper.connection.timeout=300
-zookeeper.retry.sleep=1000
-zookeeper.retry.maxtime=5
-
-```
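To verify the quorum is reachable and inspect the escheduler root node, a sketch using ZooKeeper's built-in tools (`ruok` is a standard four-letter command; the non-interactive `zkCli.sh` form may vary by version):

```
# ask a quorum member whether it is serving; a healthy node answers "imok"
echo ruok | nc 192.168.xx.xx 2181

# list the escheduler root node once masters/workers have registered
zkCli.sh -server 192.168.xx.xx:2181 ls /escheduler
```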
-
-
-
-### escheduler-dao
-
-DAO data source configuration
-
-- dao/data_source.properties
-
-```
-# base spring data source configuration
-spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
-spring.datasource.driver-class-name=com.mysql.jdbc.Driver
-spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/escheduler?characterEncoding=UTF-8
-spring.datasource.username=xx
-spring.datasource.password=xx
-
-# connection configuration
-spring.datasource.initialSize=5
-# min connection number
-spring.datasource.minIdle=5
-# max connection number
-spring.datasource.maxActive=50
-
-# max wait time to get a connection, in milliseconds. if maxWait is configured, fair locks are enabled by default and concurrency efficiency decreases.
-# If necessary, unfair locks can be used by configuring the useUnfairLock attribute to true.
-spring.datasource.maxWait=60000
-
-# milliseconds for check to close free connections
-spring.datasource.timeBetweenEvictionRunsMillis=60000
-
-# interval at which the destroy thread checks connections; a physical connection is closed if its idle time is greater than or equal to minEvictableIdleTimeMillis
-spring.datasource.timeBetweenConnectErrorMillis=60000
-
-# the longest time a connection remains idle without being evicted, in milliseconds
-spring.datasource.minEvictableIdleTimeMillis=300000
-
-#the SQL used to check whether a connection is valid; it must be a query statement. If validationQuery is null, testOnBorrow, testOnReturn, and testWhileIdle will not work.
-spring.datasource.validationQuery=SELECT 1
-#timeout for the connection validation query, in seconds
-spring.datasource.validationQueryTimeout=3
-
-# when applying for a connection, if the connection has been idle longer than timeBetweenEvictionRunsMillis,
-# validationQuery is executed to check whether the connection is valid
-spring.datasource.testWhileIdle=true
-
-#execute validation to check if the connection is valid when applying for a connection
-spring.datasource.testOnBorrow=true
-#execute validation to check if the connection is valid when the connection is returned
-spring.datasource.testOnReturn=false
-spring.datasource.defaultAutoCommit=true
-spring.datasource.keepAlive=true
-
-# enable PSCache and set the PSCache size per connection
-spring.datasource.poolPreparedStatements=true
-spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
-```
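A quick connectivity check that runs the same `validationQuery` the pool uses, with the placeholder credentials above:

```
# run the pool's validationQuery by hand against the configured datasource
mysql -h192.168.xx.xx -uxx -pxx escheduler -e "SELECT 1;"
```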
-
-
-
-### escheduler-server
-
-Master configuration file
-
-- master.properties
-
-```
-# master execute thread num
-master.exec.threads=100
-
-# master execute task number in parallel
-master.exec.task.number=20
-
-# master heartbeat interval
-master.heartbeat.interval=10
-
-# master commit task retry times
-master.task.commit.retryTimes=5
-
-# master commit task interval
-master.task.commit.interval=100
-
-
-# the master can only take work while the cpu load average is below this value. default value: the number of cpu cores * 2
-master.max.cpuload.avg=10
-
-# the master can only take work while available memory is above this reserved value. default value: physical memory * 1/10, in GB.
-master.reserved.memory=1
-```
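A sketch of the health gate these two values express: the master only takes work while the load average stays below `master.max.cpuload.avg` and available memory stays above `master.reserved.memory` (reads standard Linux /proc interfaces; values are the examples above):

```
# health gate used by the master (values from master.properties above)
max_cpuload_avg=10    # master.max.cpuload.avg
reserved_memory=1     # master.reserved.memory, in GB

loadavg=$(awk '{print $1}' /proc/loadavg)
avail_gb=$(awk '/MemAvailable/ {printf "%d", $2/1024/1024}' /proc/meminfo)

if awk -v l="$loadavg" -v m="$max_cpuload_avg" 'BEGIN{exit !(l < m)}' \
   && [ "$avail_gb" -gt "$reserved_memory" ]; then
    echo "master can take work"
else
    echo "master would refuse work (load=$loadavg, available=${avail_gb}G)"
fi
```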
-
-
-
-Worker configuration file
-
-- worker.properties
-
-```
-# worker execute thread num
-worker.exec.threads=100
-
-# worker heartbeat interval
-worker.heartbeat.interval=10
-
-# number of tasks the worker fetches at a time
-worker.fetch.task.num = 10
-
-
-# the worker can only take work while the cpu load average is below this value. default value: the number of cpu cores * 2
-worker.max.cpuload.avg=10
-
-# the worker can only take work while available memory is above this reserved value. default value: physical memory * 1/6, in GB.
-worker.reserved.memory=1
-```
-
-
-
-### escheduler-api
-
-Web (API server) configuration file
-
-- application.properties
-
-```
-# server port
-server.port=12345
-
-# session config
-server.session.timeout=7200
-
-server.context-path=/escheduler/
-
-# file size limit for upload
-spring.http.multipart.max-file-size=1024MB
-spring.http.multipart.max-request-size=1024MB
-
-# post content
-server.max-http-post-size=5000000
-```
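Once the api-server is running, it should answer on the configured port and context path; a quick check (localhost is an assumption, substitute the API host):

```
# the API should respond once api-server is started
curl -i http://localhost:12345/escheduler/
```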
-
-
-
 ## Pseudo-distributed deployment
 
-### 1. Create the deployment user
-
-	See **Create the deployment user** above
-
 ### 2. Create the HDFS root path according to actual needs
 
-	Based on the **hdfs.startup.state** setting in **common/common.properties**, decide whether HDFS is enabled; if it is, create the HDFS root path and change its **owner** to the **deployment user**; otherwise skip this step
+	Based on the **hdf.startup.states** setting in **common/common.properties**, decide whether HDFS is enabled; if it is, create the HDFS root path and change its **owner** to the **deployment user**; otherwise skip this step
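A minimal sketch of this step, assuming HDFS is enabled, the recommended root path `/escheduler`, and a deployment user named `escheduler`:

```
# create the HDFS root path and hand it over to the deployment user
hdfs dfs -mkdir -p /escheduler
hdfs dfs -chown -R escheduler:escheduler /escheduler
```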
 
 ### 3. Build the project
 
@@ -465,40 +106,6 @@ server.max-http-post-size=5000000
 
 - Copy the two environment variable files **.escheduler_env.sh** and **escheduler_env.py** into the directories configured by **escheduler.env.path** and **escheduler.env.py** in **common/common.properties**, and change their **owner** to the **deployment user**
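A minimal sketch of this step, assuming the example paths from common.properties, a deployment user named `escheduler`, and that it is run from the extracted `target/escheduler-{version}` directory:

```
# copy the environment files to the locations configured in common/common.properties
cp conf/env/.escheduler_env.sh /opt/.escheduler_env.sh
cp conf/env/escheduler_env.py  /opt/escheduler_env.py
chown escheduler:escheduler /opt/.escheduler_env.sh /opt/escheduler_env.py
```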
 
-### 6. Start and stop the services
-
-* Start/stop Master
-
-```
-sh ./bin/escheduler-daemon.sh start master-server
-sh ./bin/escheduler-daemon.sh stop master-server
-```
-
-* Start/stop Worker
-
-```
-sh ./bin/escheduler-daemon.sh start worker-server
-sh ./bin/escheduler-daemon.sh stop worker-server
-```
-
-* Start/stop Api
-
-```
-sh ./bin/escheduler-daemon.sh start api-server
-sh ./bin/escheduler-daemon.sh stop api-server
-```
-* Start/stop Logger
-
-```
-sh ./bin/escheduler-daemon.sh start logger-server
-sh ./bin/escheduler-daemon.sh stop logger-server
-```
-* Start/stop Alert
-
-```
-sh ./bin/escheduler-daemon.sh start alert-server
-sh ./bin/escheduler-daemon.sh stop alert-server
-```
 
 
 
@@ -546,6 +153,42 @@ sh ./bin/escheduler-daemon.sh stop alert-server
 
    - Note: the version number (1.0.0) in `tar -zxvf $workDir/../escheduler-1.0.0.tar.gz -C $installPath` inside scp_hosts.sh must be replaced manually with the corresponding version before execution
     
+    
+### 7. Start and stop the services
+
+* Start/stop Master
+
+```
+sh ./bin/escheduler-daemon.sh start master-server
+sh ./bin/escheduler-daemon.sh stop master-server
+```
+
+* Start/stop Worker
+
+```
+sh ./bin/escheduler-daemon.sh start worker-server
+sh ./bin/escheduler-daemon.sh stop worker-server
+```
+
+* Start/stop Api
+
+```
+sh ./bin/escheduler-daemon.sh start api-server
+sh ./bin/escheduler-daemon.sh stop api-server
+```
+* Start/stop Logger
+
+```
+sh ./bin/escheduler-daemon.sh start logger-server
+sh ./bin/escheduler-daemon.sh stop logger-server
+```
+* Start/stop Alert
+
+```
+sh ./bin/escheduler-daemon.sh start alert-server
+sh ./bin/escheduler-daemon.sh stop alert-server
+```
+    
 ## Service monitoring
 
 The monitor_server.py script monitors the master and worker services and restarts them when they go down
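One way to run it periodically is a cron entry; a sketch assuming a five-minute interval and an illustrative install path:

```
# crontab entry: run the monitor every 5 minutes so crashed master/worker
# services are restarted (the script and log paths here are illustrative)
*/5 * * * * python /data1_1T/escheduler/script/monitor_server.py >> /tmp/escheduler-monitor.log 2>&1
```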

+ 1 - 1
install.sh

@@ -211,7 +211,7 @@ mailPassword="xxxxxxxxxx"
 xlsFilePath="/opt/xls"
 
 # conf/config/install_config.conf配置
-# Installation path
+# Installation path; it must not be the same as the current directory (pwd)
 installPath="/data1_1T/escheduler"
 
 # Deployment user