
remove docs (#1191)

* add ConnectionFactoryTest and ConnectionFactory read datasource from application.yml

* .escheduler_env.sh to dolphinscheduler_env.sh

* dao yml assembly to conf directory

* table name modify

* entity title table  name modify

* logback log name modify

* running through the big process

* running through the big process error modify

* logback log name modify

* data_source.properties rename

* logback log name modify

* install.sh optimization

* install.sh optimization

* command count modify

* command state update

* countCommandState sql update

* countCommandState sql update

* remove application.yml file

* master.properties modify

* install.sh modify

* install.sh modify

* api server startup modify

* the current user quits and the session is completely emptied. bug fix

* remove pom package resources

* checkQueueNameExist method update

* checkQueueExist

* install.sh error output update

* signOut error update

* ProcessDao is null bug fix

* install.sh add mail.user

* request url variables replace

* process define import bug fix

* process define import export bug fix

* processdefine import export bug fix

* down log suffix format modify

* import export process define contains crontab error bug fix

* add Flink local mode

* ProcessDao is null bug fix

* loadAverage display problem bug fix

* MasterServer rename Server

* rollback .env

* rollback .env

* MasterServer rename Server

* the task is abnormal and task is running bug fix

* owners and administrators can delete

* dockerfile optimization

* dockerfile optimization

* dockerfile optimization

* remove application-alert.properties

* task log print worker log bug fix

* remove .escheduler_env.sh

* change dockerfile email address

* dockerfile dao application.properties and install.sh modify

* application.properties modify

* application.properties modify

* dockerfile startup.sh modify

* remove docs

* nginx conf modify

* dockerfile application.properties modify
qiaozhanwei committed 5 years ago
commit 96713ec0e3
100 changed files with 2 additions and 3667 deletions
  1. dockerfile/conf/dolphinscheduler/conf/application.properties (+1 -1)
  2. dockerfile/conf/nginx/dolphinscheduler.conf (+1 -1)
  3. docs/en_US/1.0.1-release.md (+0 -16)
  4. docs/en_US/1.0.2-release.md (+0 -49)
  5. docs/en_US/1.0.3-release.md (+0 -30)
  6. docs/en_US/1.0.4-release.md (+0 -2)
  7. docs/en_US/1.0.5-release.md (+0 -2)
  8. docs/en_US/1.1.0-release.md (+0 -55)
  9. docs/en_US/EasyScheduler Proposal.md (+0 -299)
  10. docs/en_US/EasyScheduler-FAQ.md (+0 -284)
  11. docs/en_US/README.md (+0 -96)
  12. docs/en_US/SUMMARY.md (+0 -50)
  13. docs/en_US/architecture-design.md (+0 -316)
  14. docs/en_US/backend-deployment.md (+0 -207)
  15. docs/en_US/backend-development.md (+0 -48)
  16. docs/en_US/book.json (+0 -23)
  17. docs/en_US/frontend-deployment.md (+0 -115)
  18. docs/en_US/frontend-development.md (+0 -650)
  19. docs/en_US/images/auth-project.png (BIN)
  20. docs/en_US/images/complement.png (BIN)
  21. docs/en_US/images/depend-b-and-c.png (BIN)
  22. docs/en_US/images/depend-last-tuesday.png (BIN)
  23. docs/en_US/images/depend-week.png (BIN)
  24. docs/en_US/images/save-definition.png (BIN)
  25. docs/en_US/images/save-global-parameters.png (BIN)
  26. docs/en_US/images/start-process.png (BIN)
  27. docs/en_US/images/timing.png (BIN)
  28. docs/en_US/quick-start.md (+0 -53)
  29. docs/en_US/system-manual.md (+0 -699)
  30. docs/en_US/upgrade.md (+0 -39)
  31. docs/zh_CN/1.0.1-release.md (+0 -16)
  32. docs/zh_CN/1.0.2-release.md (+0 -49)
  33. docs/zh_CN/1.0.3-release.md (+0 -30)
  34. docs/zh_CN/1.0.4-release.md (+0 -28)
  35. docs/zh_CN/1.0.5-release.md (+0 -23)
  36. docs/zh_CN/1.1.0-release.md (+0 -63)
  37. docs/zh_CN/EasyScheduler-FAQ.md (+0 -287)
  38. docs/zh_CN/README.md (+0 -66)
  39. docs/zh_CN/SUMMARY.md (+0 -47)
  40. docs/zh_CN/book.json (+0 -23)
  41. docs/zh_CN/images/addtenant.png (BIN)
  42. docs/zh_CN/images/architecture.jpg (BIN)
  43. docs/zh_CN/images/auth_project.png (BIN)
  44. docs/zh_CN/images/auth_user.png (BIN)
  45. docs/zh_CN/images/cdh_hive_error.png (BIN)
  46. docs/zh_CN/images/complement.png (BIN)
  47. docs/zh_CN/images/complement_data.png (BIN)
  48. docs/zh_CN/images/create-queue.png (BIN)
  49. docs/zh_CN/images/dag1.png (BIN)
  50. docs/zh_CN/images/dag2.png (BIN)
  51. docs/zh_CN/images/dag3.png (BIN)
  52. docs/zh_CN/images/dag4.png (BIN)
  53. docs/zh_CN/images/dag_examples_cn.jpg (BIN)
  54. docs/zh_CN/images/dag_examples_en.jpg (BIN)
  55. docs/zh_CN/images/decentralization.png (BIN)
  56. docs/zh_CN/images/definition_create.png (BIN)
  57. docs/zh_CN/images/definition_edit.png (BIN)
  58. docs/zh_CN/images/definition_list.png (BIN)
  59. docs/zh_CN/images/depend-node.png (BIN)
  60. docs/zh_CN/images/depend-node2.png (BIN)
  61. docs/zh_CN/images/depend-node3.png (BIN)
  62. docs/zh_CN/images/dependent_edit.png (BIN)
  63. docs/zh_CN/images/dependent_edit2.png (BIN)
  64. docs/zh_CN/images/dependent_edit3.png (BIN)
  65. docs/zh_CN/images/dependent_edit4.png (BIN)
  66. docs/zh_CN/images/distributed_lock.png (BIN)
  67. docs/zh_CN/images/distributed_lock_procss.png (BIN)
  68. docs/zh_CN/images/fault-tolerant.png (BIN)
  69. docs/zh_CN/images/fault-tolerant_master.png (BIN)
  70. docs/zh_CN/images/fault-tolerant_worker.png (BIN)
  71. docs/zh_CN/images/favicon.ico (BIN)
  72. docs/zh_CN/images/file-manage.png (BIN)
  73. docs/zh_CN/images/file_create.png (BIN)
  74. docs/zh_CN/images/file_detail.png (BIN)
  75. docs/zh_CN/images/file_rename.png (BIN)
  76. docs/zh_CN/images/file_upload.png (BIN)
  77. docs/zh_CN/images/gant-pic.png (BIN)
  78. docs/zh_CN/images/gantt.png (BIN)
  79. docs/zh_CN/images/global_parameter.png (BIN)
  80. docs/zh_CN/images/grpc.png (BIN)
  81. docs/zh_CN/images/hive_edit.png (BIN)
  82. docs/zh_CN/images/hive_edit2.png (BIN)
  83. docs/zh_CN/images/hive_kerberos.png (BIN)
  84. docs/zh_CN/images/instance-detail.png (BIN)
  85. docs/zh_CN/images/instance-list.png (BIN)
  86. docs/zh_CN/images/lack_thread.png (BIN)
  87. docs/zh_CN/images/local_parameter.png (BIN)
  88. docs/zh_CN/images/login.jpg (BIN)
  89. docs/zh_CN/images/login.png (BIN)
  90. docs/zh_CN/images/logo.png (BIN)
  91. docs/zh_CN/images/logout.png (BIN)
  92. docs/zh_CN/images/mail_edit.png (BIN)
  93. docs/zh_CN/images/master-jk.png (BIN)
  94. docs/zh_CN/images/master.png (BIN)
  95. docs/zh_CN/images/master2.png (BIN)
  96. docs/zh_CN/images/master_slave.png (BIN)
  97. docs/zh_CN/images/master_worker_lack_res.png (BIN)
  98. docs/zh_CN/images/mr_edit.png (BIN)
  99. docs/zh_CN/images/mr_java.png (BIN)
  100. docs/zh_CN/images/mysql-jk.png (+0 -0)

+ 1 - 1
dockerfile/conf/dolphinscheduler/conf/application.properties

@@ -27,7 +27,7 @@ spring.datasource.timeBetweenConnectErrorMillis=60000
 spring.datasource.minEvictableIdleTimeMillis=300000
 
 #the SQL used to check whether the connection is valid requires a query statement. If validation Query is null, testOnBorrow, testOnReturn, and testWhileIdle will not work.
-spring.datasource.validationQuery=SELECT 1 FROM DUAL
+spring.datasource.validationQuery=SELECT 1
 
 #check whether the connection is valid for timeout, in seconds
 spring.datasource.validationQueryTimeout=3
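
For context, the hunk above swaps the Oracle/MySQL-style `SELECT 1 FROM DUAL` for the portable `SELECT 1`, which PostgreSQL also accepts. A minimal sketch of verifying the property after such an edit (the file path and contents here are illustrative, not taken from the repo):

```shell
# Write a stub properties file resembling the one in the diff (illustrative path).
cat > /tmp/application.properties <<'EOF'
spring.datasource.validationQuery=SELECT 1
spring.datasource.validationQueryTimeout=3
EOF

# Extract the validation query; 'SELECT 1' is valid on MySQL and PostgreSQL,
# while 'SELECT 1 FROM DUAL' would fail on PostgreSQL.
query=$(grep '^spring.datasource.validationQuery=' /tmp/application.properties | cut -d= -f2-)
if [ "$query" = "SELECT 1" ]; then
  echo "validation query is portable"
else
  echo "non-portable validation query: $query"
fi
```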

+ 1 - 1
dockerfile/conf/nginx/dolphinscheduler.conf

@@ -4,7 +4,7 @@ server {
     #charset koi8-r;
     #access_log  /var/log/nginx/host.access.log  main;
     location / {
-        root   /opt/easyscheduler_source/dolphinscheduler-ui/dist;
+        root   /opt/dolphinscheduler_source/dolphinscheduler-ui/dist;
         index  index.html index.html;
     }
     location /dolphinscheduler {
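
The nginx hunk renames the UI root from the old `easyscheduler_source` path to `dolphinscheduler_source`. A hedged sketch of applying the same rename to an existing conf file (the temporary file path is hypothetical; on a real host the file would live under `/etc/nginx/conf.d/` and you would follow up with `nginx -t`):

```shell
# Stub conf carrying the pre-rename path (illustrative location).
cat > /tmp/dolphinscheduler.conf <<'EOF'
        root   /opt/easyscheduler_source/dolphinscheduler-ui/dist;
EOF

# Rewrite the old source path in place (GNU sed -i).
sed -i 's#/opt/easyscheduler_source/#/opt/dolphinscheduler_source/#' /tmp/dolphinscheduler.conf

# Confirm the rename took effect.
grep 'root' /tmp/dolphinscheduler.conf
```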

+ 0 - 16
docs/en_US/1.0.1-release.md

@@ -1,16 +0,0 @@
-Easy Scheduler Release 1.0.1
-===
-Easy Scheduler 1.0.1 is the second version in the 1.x series. The update is as follows:
-
-- 1,outlook TLS email support
-- 2,servlet and protobuf jar conflict resolution
-- 3,create a tenant and establish a Linux user at the same time
-- 4,the re-run time is negative
-- 5,stand-alone and cluster can be deployed with one click of install.sh
-- 6,queue support interface added
-- 7,escheduler.t_escheduler_queue added create_time and update_time fields
-
-
-
-
-

File diff suppressed because it is too large
+ 0 - 49
docs/en_US/1.0.2-release.md


+ 0 - 30
docs/en_US/1.0.3-release.md

@@ -1,30 +0,0 @@
-Easy Scheduler Release 1.0.3
-===
-Easy Scheduler 1.0.3 is the fourth version in the 1.x series.
-
-Enhanced:
-===
--  [[EasyScheduler-482]](https://github.com/analysys/EasyScheduler/issues/482)sql task mail header added support for custom variables
--  [[EasyScheduler-483]](https://github.com/analysys/EasyScheduler/issues/483)sql task failed to send mail, then this sql task is failed
--  [[EasyScheduler-484]](https://github.com/analysys/EasyScheduler/issues/484)modify the replacement rule of the custom variable in the sql task, and support the replacement of multiple single quotes and double quotes.
--   [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/485)when creating a resource file, increase the verification that the resource file already exists on hdfs
-
-Repair:
-===
--  [[EasyScheduler-198]](https://github.com/analysys/EasyScheduler/issues/198) the process definition list is sorted according to the timing status and update time
--  [[EasyScheduler-419]](https://github.com/analysys/EasyScheduler/issues/419)  fixes online creation of files, hdfs file is not created, but returns successfully
--  [[EasyScheduler-481] ](https://github.com/analysys/EasyScheduler/issues/481)fixes the problem that the job does not exist at the same time.
--  [[EasyScheduler-425]](https://github.com/analysys/EasyScheduler/issues/425) kills the kill of its child process when killing the task
--  [[EasyScheduler-422]](https://github.com/analysys/EasyScheduler/issues/422) fixed an issue where the update time and size were not updated when updating resource files
--  [[EasyScheduler-431]](https://github.com/analysys/EasyScheduler/issues/431) fixed an issue where deleting a tenant failed if hdfs was not started when the tenant was deleted
--  [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/486) the shell process exits, the yarn state is not final and waits for judgment.
-
-Thank:
-===
-Last but not least, no new version was born without the contributions of the following partners:
-
-Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, 
-feloxx, coding-now, hymzcn, nysyxxg, chgxtony 
-
-And many enthusiastic partners in the WeChat group! Thank you very much!
-

+ 0 - 2
docs/en_US/1.0.4-release.md

@@ -1,2 +0,0 @@
-# 1.0.4 release
-

+ 0 - 2
docs/en_US/1.0.5-release.md

@@ -1,2 +0,0 @@
-# 1.0.5 release
-

+ 0 - 55
docs/en_US/1.1.0-release.md

@@ -1,55 +0,0 @@
-Easy Scheduler Release 1.1.0
-===
-Easy Scheduler 1.1.0 is the first release in the 1.1.x series.
-
-New features:
-===
-- [[EasyScheduler-391](https://github.com/analysys/EasyScheduler/issues/391)] run a process under a specified tenement user
-- [[EasyScheduler-288](https://github.com/analysys/EasyScheduler/issues/288)] feature/qiye_weixin
-- [[EasyScheduler-189](https://github.com/analysys/EasyScheduler/issues/189)] security support such as Kerberos
-- [[EasyScheduler-398](https://github.com/analysys/EasyScheduler/issues/398)]administrator, with tenants (install.sh set default tenant), can create resources, projects and data sources (limited to one administrator)
-- [[EasyScheduler-293](https://github.com/analysys/EasyScheduler/issues/293)]click on the parameter selected when running the process, there is no place to view, no save
-- [[EasyScheduler-401](https://github.com/analysys/EasyScheduler/issues/401)]timing is easy to time every second. After the timing is completed, you can display the next trigger time on the page.
-- [[EasyScheduler-493](https://github.com/analysys/EasyScheduler/pull/493)]add datasource kerberos auth and FAQ modify and add resource upload s3
-
-
-Enhanced:
-===
-- [[EasyScheduler-227](https://github.com/analysys/EasyScheduler/issues/227)] upgrade spring-boot to 2.1.x and spring to 5.x
-- [[EasyScheduler-434](https://github.com/analysys/EasyScheduler/issues/434)] number of worker nodes zk and mysql are inconsistent
-- [[EasyScheduler-435](https://github.com/analysys/EasyScheduler/issues/435)]authentication of the mailbox format
-- [[EasyScheduler-441](https://github.com/analysys/EasyScheduler/issues/441)] prohibits running nodes from joining completed node detection
-- [[EasyScheduler-400](https://github.com/analysys/EasyScheduler/issues/400)] Home page, queue statistics are not harmonious, command statistics have no data
-- [[EasyScheduler-395](https://github.com/analysys/EasyScheduler/issues/395)] For fault-tolerant recovery processes, the status cannot be ** is running
-- [[EasyScheduler-529](https://github.com/analysys/EasyScheduler/issues/529)] optimize poll task from zookeeper
-- [[EasyScheduler-242](https://github.com/analysys/EasyScheduler/issues/242)]worker-server node gets task performance problem
-- [[EasyScheduler-352](https://github.com/analysys/EasyScheduler/issues/352)]worker grouping, queue consumption problem
-- [[EasyScheduler-461](https://github.com/analysys/EasyScheduler/issues/461)]view data source parameters, need to encrypt account password information
-- [[EasyScheduler-396](https://github.com/analysys/EasyScheduler/issues/396)]Dockerfile optimization, and associated Dockerfile and github to achieve automatic mirroring
-- [[EasyScheduler-389](https://github.com/analysys/EasyScheduler/issues/389)]service monitor cannot find the change of master/worker
-- [[EasyScheduler-511](https://github.com/analysys/EasyScheduler/issues/511)]support recovery process from stop/kill nodes.
-- [[EasyScheduler-399](https://github.com/analysys/EasyScheduler/issues/399)]HadoopUtils specifies user actions instead of **Deploying users
-
-Repair:
-===
-- [[EasyScheduler-394](https://github.com/analysys/EasyScheduler/issues/394)] When the master&worker is deployed on the same machine, if the master&worker service is restarted, the previously scheduled tasks cannot be scheduled.
-- [[EasyScheduler-469](https://github.com/analysys/EasyScheduler/issues/469)]Fix naming errors,monitor page
-- [[EasyScheduler-392](https://github.com/analysys/EasyScheduler/issues/392)]Feature request: fix email regex check
-- [[EasyScheduler-405](https://github.com/analysys/EasyScheduler/issues/405)]timed modification/addition page, start time and end time cannot be the same
-- [[EasyScheduler-517](https://github.com/analysys/EasyScheduler/issues/517)]complement - subworkflow - time parameter 
-- [[EasyScheduler-532](https://github.com/analysys/EasyScheduler/issues/532)] python node does not execute the problem
-- [[EasyScheduler-543](https://github.com/analysys/EasyScheduler/issues/543)]optimize datasource connection params safety
-- [[EasyScheduler-569](https://github.com/analysys/EasyScheduler/issues/569)] timed tasks can't really stop
-- [[EasyScheduler-463](https://github.com/analysys/EasyScheduler/issues/463)]mailbox verification does not support very suffixed mailboxes
-
-
-
-
-Thank:
-===
-Last but not least, no new version was born without the contributions of the following partners:
-
-Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, chgxtony, Stanfan, lfyee, thisnew, hujiang75277381, sunnyingit, lgbo-ustc, ivivi, lzy305, JackIllkid, telltime, lipengbo2018, wuchunfu, telltime
-
-And many enthusiastic partners in the WeChat group! Thank you very much!
-

File diff suppressed because it is too large
+ 0 - 299
docs/en_US/EasyScheduler Proposal.md


File diff suppressed because it is too large
+ 0 - 284
docs/en_US/EasyScheduler-FAQ.md


File diff suppressed because it is too large
+ 0 - 96
docs/en_US/README.md


+ 0 - 50
docs/en_US/SUMMARY.md

@@ -1,50 +0,0 @@
-# Summary
-
-* [Instruction](README.md)
-
-* Frontend Deployment
-    * [Preparations](frontend-deployment.md#Preparations)
-    * [Deployment](frontend-deployment.md#Deployment)
-    * [FAQ](frontend-deployment.md#FAQ)
-    
-* Backend Deployment
-    * [Preparations](backend-deployment.md#Preparations)
-    * [Deployment](backend-deployment.md#Deployment)
-    
-* [Quick Start](quick-start.md#Quick Start)
-
-* System Use Manual
-    * [Operational Guidelines](system-manual.md#Operational Guidelines)
-    * [Security](system-manual.md#Security)
-    * [Monitor center](system-manual.md#Monitor center)
-    * [Task Node Type and Parameter Setting](system-manual.md#Task Node Type and Parameter Setting)
-    * [System parameter](system-manual.md#System parameter)
-    
-* [Architecture Design](architecture-design.md)
-
-* Front-end development
-    * [Development environment](frontend-development.md#Development environment)
-    * [Project directory structure](frontend-development.md#Project directory structure)
-    * [System function module](frontend-development.md#System function module)
-    * [Routing and state management](frontend-development.md#Routing and state management)
-    * [specification](frontend-development.md#specification)
-    * [interface](frontend-development.md#interface)
-    * [Extended development](frontend-development.md#Extended development)
-    
-* Backend development documentation
-    * [Environmental requirements](backend-development.md#Environmental requirements)
-    * [Project compilation](backend-development.md#Project compilation)
-* [Interface documentation](http://52.82.13.76:8888/dolphinscheduler/doc.html?language=en_US&lang=en)
-* FAQ
-    * [FAQ](EasyScheduler-FAQ.md)
-* EasyScheduler upgrade documentation
-    * [upgrade documentation](upgrade.md)
-* History release notes
-    * [1.1.0 release](1.1.0-release.md)
-    * [1.0.5 release](1.0.5-release.md)
-    * [1.0.4 release](1.0.4-release.md)
-    * [1.0.3 release](1.0.3-release.md)
-    * [1.0.2 release](1.0.2-release.md)
-    * [1.0.1 release](1.0.1-release.md)
-    * [1.0.0 release]
-

File diff suppressed because it is too large
+ 0 - 316
docs/en_US/architecture-design.md


+ 0 - 207
docs/en_US/backend-deployment.md

@@ -1,207 +0,0 @@
-# Backend Deployment Document
-
-There are two deployment modes for the backend: 
-
-- automatic deployment  
-- source code compile and then deployment
-
-## Preparations
-
-Download the latest version of the installation package, download address: [gitee download](https://gitee.com/easyscheduler/EasyScheduler/attach_files/) or [github download](https://github.com/apache/incubator-dolphinscheduler/releases), download dolphinscheduler-backend-x.x.x.tar.gz(back-end referred to as dolphinscheduler-backend),dolphinscheduler-ui-x.x.x.tar.gz(front-end referred to as dolphinscheduler-ui)
-
-
-
-#### Preparations 1: Installation of basic software (self-installation of required items)
-
- * [Mysql](http://geek.analysys.cn/topic/124) (5.5+) :  Mandatory
- * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  Mandatory
- * [ZooKeeper](https://www.jianshu.com/p/de90172ea680)(3.4.6+) :Mandatory
- * [Hadoop](https://blog.csdn.net/Evankaka/article/details/51612437)(2.6+) :Optionally, if you need to use the resource upload function, MapReduce task submission needs to configure Hadoop (uploaded resource files are currently stored on Hdfs)
- * [Hive](https://staroon.pro/2017/12/09/HiveInstall/)(1.2.1) :   Optional, hive task submission needs to be installed
- * Spark(1.x,2.x) :  Optional, Spark task submission needs to be installed
- * PostgreSQL(8.2.15+) : Optional, required only for PostgreSQL stored procedure tasks
-
-```
- Note: Easy Scheduler itself does not rely on Hadoop, Hive, Spark, PostgreSQL, but only calls their Client to run the corresponding tasks.
-```
-
-#### Preparations 2: Create deployment users
-
-- Deployment users are created on all machines that require deployment scheduling, because the worker service executes jobs with `sudo -u {linux-user}`, so deployment users need password-free sudo privileges.
-
-```
-vi /etc/sudoers
-
-# For example, the deployment user is an dolphinscheduler account
-dolphinscheduler  ALL=(ALL)       NOPASSWD: ALL
-
-# And you need to comment out the Default requiretty line
-#Default requiretty
-```
-
-#### Preparations 3: SSH Secret-Free Configuration
-Configure password-free SSH login from the deployment machine to the other installation machines. If you also install easyscheduler on the deployment machine itself, configure password-free SSH login to localhost as well.
-
-- [Connect the host and other machines SSH](http://geek.analysys.cn/topic/113)
-
-#### Preparations 4: database initialization
-
-* Create databases and accounts
-
-    Execute the following command to create database and account
-    
-    ```
-    CREATE DATABASE dolphinscheduler DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
-    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'%' IDENTIFIED BY '{password}';
-    GRANT ALL PRIVILEGES ON dolphinscheduler.* TO '{user}'@'localhost' IDENTIFIED BY '{password}';
-    flush privileges;
-    ```
-
-* creates tables and imports basic data
-    Modify the following attributes in ./conf/dao/data_source.properties
-
-    ```
-        spring.datasource.url
-        spring.datasource.username
-        spring.datasource.password
-    ```
-    
-    Execute scripts for creating tables and importing basic data
-    
-    ```
-    sh ./script/create-dolphinscheduler.sh
-    ```
-
-#### Preparations 5: Modify the deployment directory permissions and operation parameters
-
-     instruction of dolphinscheduler-backend directory 
-
-```directory
-bin : Basic service startup script
-conf : Project Profile
-lib : The project relies on jar packages, including individual module jars and third-party jars
-script :  Cluster Start, Stop and Service Monitor Start and Stop scripts
-sql : The project relies on SQL files
-install.sh :  One-click deployment script
-```
-
-- Modify permissions (please modify the 'deployUser' to the corresponding deployment user) so that the deployment user has operational privileges on the dolphinscheduler-backend directory
-
-    `sudo chown -R deployUser:deployUser dolphinscheduler-backend`
-
-- Modify the `.dolphinscheduler_env.sh` environment variable in the conf/env/directory
-
-- Modify deployment parameters (depending on your server and business situation):
-
- - Modify the parameters in **install.sh** to replace the values required by your business
-   - MonitorServerState switch variable, added in version 1.0.3, controls whether to start the self-start script (monitor master, worker status, if off-line will start automatically). The default value of "false" means that the self-start script is not started, and if it needs to start, it is changed to "true".
-   - 'hdfsStartupSate' switch variable controls whether to start hdfs
-      The default value of "false" means not to start hdfs
-      Change the variable to 'true' if you want to use hdfs, you also need to create the hdfs root path by yourself, that 'hdfsPath' in install.sh.
-
- - If you use hdfs-related functions, you need to copy**hdfs-site.xml** and **core-site.xml** to the conf directory
-
-
-## Deployment
-Automated deployment is recommended, and experienced partners can use source deployment as well.
-
-### Automated Deployment
-
-- Install zookeeper tools
-
-   `pip install kazoo`
-
-- Switch to deployment user, one-click deployment
-
-    `sh install.sh` 
-
-- Use the `jps` command to check if the services are started (`jps` comes from `Java JDK`)
-
-```aidl
-    MasterServer         ----- Master Service
-    WorkerServer         ----- Worker Service
-    LoggerServer         ----- Logger Service
-    ApiApplicationServer ----- API Service
-    AlertServer          ----- Alert Service
-```
-
-If all services are normal, the automatic deployment is successful
-
-
-After successful deployment, the log can be viewed and stored in a specified folder.
-
-```logPath
- logs/
-    ├── dolphinscheduler-alert-server.log
-    ├── dolphinscheduler-master-server.log
-    |—— dolphinscheduler-worker-server.log
-    |—— dolphinscheduler-api-server.log
-    |—— dolphinscheduler-logger-server.log
-```
-
-### Compile source code to deploy
-
-After downloading the release version of the source package, unzip it into the root directory
-
-* Execute the compilation command:
-
-```
- mvn -U clean package assembly:assembly -Dmaven.test.skip=true
-```
-
-* View directory
-
-After normal compilation, ./target/dolphinscheduler-{version}/ is generated in the current directory
-
-
-### Start-and-stop services commonly used in systems (for service purposes, please refer to System Architecture Design for details)
-
-* stop all services in the cluster
-  
-   ` sh ./bin/stop-all.sh`
-   
-* start all services in the cluster
-  
-   ` sh ./bin/start-all.sh`
-
-* start and stop one master server
-
-```master
-sh ./bin/dolphinscheduler-daemon.sh start master-server
-sh ./bin/dolphinscheduler-daemon.sh stop master-server
-```
-
-* start and stop one worker server
-
-```worker
-sh ./bin/dolphinscheduler-daemon.sh start worker-server
-sh ./bin/dolphinscheduler-daemon.sh stop worker-server
-```
-
-* start and stop api server
-
-```Api
-sh ./bin/dolphinscheduler-daemon.sh start api-server
-sh ./bin/dolphinscheduler-daemon.sh stop api-server
-```
-* start and stop logger server
-
-```Logger
-sh ./bin/dolphinscheduler-daemon.sh start logger-server
-sh ./bin/dolphinscheduler-daemon.sh stop logger-server
-```
-* start and stop alert server
-
-```Alert
-sh ./bin/dolphinscheduler-daemon.sh start alert-server
-sh ./bin/dolphinscheduler-daemon.sh stop alert-server
-```
-
-## Database Upgrade
-Database upgrade is a function added in version 1.0.2. The database can be upgraded automatically by executing the following command:
-
-```upgrade
-sh ./script/upgrade-dolphinscheduler.sh
-```
-
-
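
The removed deployment guide verifies startup by eyeballing `jps` output for five services. That check can be sketched as a small script; the `jps` output below is simulated for illustration, and on a real host you would capture it with `services=$(jps)` instead:

```shell
# Simulated jps output (PIDs are made up); replace with: services=$(jps)
services='12001 MasterServer
12002 WorkerServer
12003 LoggerServer
12004 ApiApplicationServer
12005 AlertServer'

# Check every expected DolphinScheduler service appears.
missing=0
for s in MasterServer WorkerServer LoggerServer ApiApplicationServer AlertServer; do
  echo "$services" | grep -q "$s" || { echo "missing: $s"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all services running"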

+ 0 - 48
docs/en_US/backend-development.md

@@ -1,48 +0,0 @@
-# Backend development documentation
-
-## Environmental requirements
-
- * [Mysql](http://geek.analysys.cn/topic/124) (5.5+) :  Must be installed
- * [JDK](https://www.oracle.com/technetwork/java/javase/downloads/index.html) (1.8+) :  Must be installed
- * [ZooKeeper](https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper)(3.4.6+) :Must be installed
- * [Maven](http://maven.apache.org/download.cgi)(3.3+) :Must be installed
-
-Because the dolphinscheduler-rpc module in EasyScheduler uses Grpc, you need to use Maven to compile the generated classes.
-For those who are not familiar with maven, please refer to: [maven in five minutes](http://maven.apache.org/guides/getting-started/maven-in-five-minutes.html)(3.3+)
-
-http://maven.apache.org/install.html
-
-## Project compilation
-After importing the EasyScheduler source code into the development tools such as Idea, first convert to the Maven project (right click and select "Add Framework Support")
-
-* Execute the compile command:
-
-```
- mvn -U clean package assembly:assembly -Dmaven.test.skip=true
-```
-
-* View directory
-
-After normal compilation, it will generate ./target/dolphinscheduler-{version}/ in the current directory.
-
-```
-    bin
-    conf
-    lib
-    script
-    sql
-    install.sh
-```
-
-- Description
-
-```
-bin : basic service startup script
-conf : project configuration file
-lib : the project depends on the jar package, including the various module jars and third-party jars
-script : cluster start, stop, and service monitoring start and stop scripts
-sql : project depends on sql file
-install.sh : one-click deployment script
-```
-
-   

+ 0 - 23
docs/en_US/book.json

@@ -1,23 +0,0 @@
-{
-  "title": "EasyScheduler",
-  "author": "",
-  "description": "Scheduler",
-  "language": "en-US",
-  "gitbook": "3.2.3",
-  "styles": {
-    "website": "./styles/website.css"
-  },
-  "structure": {
-    "readme": "README.md"
-  },
-  "plugins":[
-    "expandable-chapters",
-    "insert-logo-link"
-  ],
-  "pluginsConfig": {
-    "insert-logo-link": {
-      "src": "http://geek.analysys.cn/static/upload/236/2019-03-29/379450b4-7919-4707-877c-4d33300377d4.png",
-      "url": "https://github.com/analysys/EasyScheduler"
-    }
-  }
-}

+ 0 - 115
docs/en_US/frontend-deployment.md

@@ -1,115 +0,0 @@
-# frontend-deployment
-
-The front-end has three deployment modes: automated deployment, manual deployment and compiled source deployment.
-
-
-
-## Preparations
-
-#### Download the installation package
-
-Please download the latest version of the installation package, download address: [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files/)
-
-After downloading dolphinscheduler-ui-x.x.x.tar.gz, decompress it with `tar -zxvf dolphinscheduler-ui-x.x.x.tar.gz ./` and enter the `dolphinscheduler-ui` directory
-
-
-
-
-## Deployment
-
-Automated deployment is recommended for either of the following two ways
-
-### Automated Deployment
-
-Edit the installation file`vi install-dolphinscheduler-ui.sh` in the` dolphinscheduler-ui` directory
-
-Change the front-end access port and the back-end proxy interface address
-
-```
-# Configure the front-end access port
-esc_proxy="8888"
-
-# Configure proxy back-end interface
-esc_proxy_port="http://192.168.xx.xx:12345"
-```
-
->Front-end automatic deployment based on Linux system `yum` operation, before deployment, please install and update`yum`
-
-under this directory, execute`./install-dolphinscheduler-ui.sh` 
-
-
-### Manual Deployment
-
-Install epel source `yum install epel-release -y`
-
-Install Nginx `yum install nginx -y`
-
-
-> ####  Nginx configuration file address
-
-```
-/etc/nginx/conf.d/default.conf
-```
-
-> ####  Configuration information (self-modifying)
-
-```
-server {
-    listen       8888;# access port
-    server_name  localhost;
-    #charset koi8-r;
-    #access_log  /var/log/nginx/host.access.log  main;
-    location / {
-        root   /xx/dist; # the dist directory address decompressed by the front end above (self-modifying)
-        index  index.html index.html;
-    }
-    location /dolphinscheduler {
-        proxy_pass http://192.168.xx.xx:12345; # interface address (self-modifying)
-        proxy_set_header Host $host;
-        proxy_set_header X-Real-IP $remote_addr;
-        proxy_set_header x_real_ipP $remote_addr;
-        proxy_set_header remote_addr $remote_addr;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_http_version 1.1;
-        proxy_connect_timeout 4s;
-        proxy_read_timeout 30s;
-        proxy_send_timeout 12s;
-        proxy_set_header Upgrade $http_upgrade;
-        proxy_set_header Connection "upgrade";
-    }
-    #error_page  404              /404.html;
-    # redirect server error pages to the static page /50x.html
-    #
-    error_page   500 502 503 504  /50x.html;
-    location = /50x.html {
-        root   /usr/share/nginx/html;
-    }
-}
-```
-
-> ####  Restart the Nginx service
-
-```
-systemctl restart nginx
-```
-
-#### nginx command
-
-- enable `systemctl enable nginx`
-
-- restart `systemctl restart nginx`
-
-- status `systemctl status nginx`
-
-
-## FAQ
-#### Upload file size limit
-
-Edit the configuration file `vi /etc/nginx/nginx.conf`
-
-```
-# change upload size
-client_max_body_size 1024m
-```
-
-
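
The FAQ above raises nginx's upload limit via `client_max_body_size`. A sketch of verifying that setting in a config file (the temporary path is illustrative; the 1024m value mirrors the removed doc):

```shell
# Stub nginx.conf fragment with the raised upload limit (illustrative path).
cat > /tmp/nginx.conf <<'EOF'
http {
    client_max_body_size 1024m;
}
EOF

# Report the configured limit; a real deployment would then run 'nginx -t'
# and 'systemctl restart nginx' as the removed doc describes.
grep -o 'client_max_body_size [0-9]*m' /tmp/nginx.conf
```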

+ 0 - 650
docs/en_US/frontend-development.md

@@ -1,650 +0,0 @@
-# Front-end development documentation
-
-### Technical selection
-```
-Vue mvvm framework
-
-Es6 ECMAScript 6.0
-
-Ans-ui Analysys-ui
-
-D3  Visual Library Chart Library
-
-Jsplumb connection plugin library
-
-Lodash high performance JavaScript utility library
-```
-
-
-### Development environment
-
-- #### Node installation
-Node package download (note version 8.9.4) `https://nodejs.org/download/release/v8.9.4/` 
-
-
-- #### Front-end project construction
-From the command line, `cd` into the `dolphinscheduler-ui` project directory and execute `npm install` to pull the project dependency packages.
-
-> If `npm install` is very slow, you can switch to the Taobao npm mirror:
-
-> Install cnpm with `npm install -g cnpm --registry=https://registry.npm.taobao.org`
-
-> Then run `cnpm install`
-
-
-- Create a new `.env` file to configure the interface that interacts with the backend
-
-Create a new `.env` file in the `dolphinscheduler-ui` directory, add the IP address and port of the backend service to the file, and use it to interact with the backend. The contents of the `.env` file are as follows:
-```
-# Proxy interface address (modified by yourself)
-API_BASE = http://192.168.xx.xx:12345
-
-# If you need to access the project with ip, you can remove the "#" (example)
-#DEV_HOST = 192.168.xx.xx
-```
-
-> #####  ! ! ! Special attention here. If pulling the dependency packages fails with a "node-sass error", run the following command and then retry the install.
-```
-npm install node-sass --unsafe-perm // install the node-sass dependency separately
-```
-
-- #### Development environment operation
-- `npm start` runs the project development environment (after startup, visit http://localhost:8888/#/)
-
-
-#### Front-end project release
-
-- `npm run build` project packaging (after packaging, the root directory will create a folder called dist for publishing Nginx online)
-
-Run the `npm run build` command to generate the packaged folder (dist)
-
-Copy it to the corresponding directory on the server (the front-end service static page storage directory)
-
-Visit address `http://localhost:8888/#/`
-
-
-#### Start with node and daemon under Linux
-
-Install pm2 `npm install -g pm2`
-
-Execute `pm2 start npm -- run dev` in the `dolphinscheduler-ui` root directory to start the project
-
-#### command
-
-- Start `pm2 start npm -- run dev`
-
-- Stop `pm2 stop npm`
-
-- delete `pm2 delete npm`
-
-- Status  `pm2 list`
-
-```
-
-[root@localhost dolphinscheduler-ui]# pm2 start npm -- run dev
-[PM2] Applying action restartProcessId on app [npm](ids: 0)
-[PM2] [npm](0) ✓
-[PM2] Process successfully started
-┌──────────┬────┬─────────┬──────┬──────┬────────┬─────────┬────────┬─────┬──────────┬──────┬──────────┐
-│ App name │ id │ version │ mode │ pid  │ status │ restart │ uptime │ cpu │ mem      │ user │ watching │
-├──────────┼────┼─────────┼──────┼──────┼────────┼─────────┼────────┼─────┼──────────┼──────┼──────────┤
-│ npm      │ 0  │ N/A     │ fork │ 6168 │ online │ 31      │ 0s     │ 0%  │ 5.6 MB   │ root │ disabled │
-└──────────┴────┴─────────┴──────┴──────┴────────┴─────────┴────────┴─────┴──────────┴──────┴──────────┘
- Use `pm2 show <id|name>` to get more details about an app
-
-```
-
-
-### Project directory structure
-
-`build` some webpack configurations for packaging and development environment projects
-
-`node_modules` development environment node dependency package
-
-`src` project required documents
-
-`src => combo` project third-party resource localization `npm run combo` specific view `build/combo.js`
-
-`src => font` Font icon library; icons can be added by visiting https://www.iconfont.cn Note: the font library is our own secondary development; re-import your own library in `src/sass/common/_font.scss`
-
-`src => images` public image storage
-
-`src => js` js/vue
-
-`src => lib` internal components of the company (company component library can be deleted after open source)
-
-`src => sass` sass file One page corresponds to a sass file
-
-`src => view` page file One page corresponds to an html file
-
-```
-> Projects are developed using vue single page application (SPA)
-- All page entry files are in the `src/js/conf/${ corresponding page filename => home} index.js` entry file
-- The corresponding sass file is in `src/sass/conf/${corresponding page filename => home}/index.scss`
-- The corresponding html file is in `src/view/${corresponding page filename => home}/index.html`
-```
-
-Public modules and utils: `src/js/module`
-
-`components` => internal project common components
-
-`download` => download component
-
-`echarts` => chart component
-
-`filter` => filter and vue pipeline
-
-`i18n` => internationalization
-
-`io` => io request encapsulation based on axios
-
-`mixin` => vue mixin public part for disabled operation
-
-`permissions` => permission operation
-
-`util` => tool
-
-
-### System function module
-
-Home  => `http://localhost:8888/#/home`
-
-Project Management => `http://localhost:8888/#/projects/list`
-```
-| Project Home
-| Workflow
-  - Workflow definition
-  - Workflow instance
-  - Task instance
-```
-
-Resource Management => `http://localhost:8888/#/resource/file`
-```
-| File Management
-| udf Management
-  - Resource Management
-  - Function management
-```
-
-Data Source Management => `http://localhost:8888/#/datasource/list`
-
-Security Center => `http://localhost:8888/#/security/tenant`
-```
-| Tenant Management
-| User Management
-| Alarm Group Management
-  - master
-  - worker
-```
-
-User Center => `http://localhost:8888/#/user/account`
-
-
-## Routing and state management
-
-The project `src/js/conf/home` is divided into
-
-`pages` => route to page directory
-```
- The page file corresponding to the routing address
-```
-
-`router` => route management
-```
-vue router, the entry file index.js in each page will be registered. Specific operations: https://router.vuejs.org/zh/
-```
-
-`store` => status management
-```
-The page corresponding to each route has a state management file divided into:
-
-actions => mapActions => Details:https://vuex.vuejs.org/zh/guide/actions.html
-
-getters => mapGetters => Details:https://vuex.vuejs.org/zh/guide/getters.html
-
-index => entrance
-mutations => mapMutations => Details:https://vuex.vuejs.org/zh/guide/mutations.html
-
-state => mapState => Details:https://vuex.vuejs.org/zh/guide/state.html
-
-Specific action:https://vuex.vuejs.org/zh/
-
-```
-
-
-## specification
-## Vue specification
-##### 1.Component name
-Component names use multiple words joined with a hyphen (-), to avoid conflicts with HTML tags and give a clearer structure.
-```
-// positive example
-export default {
-    name: 'page-article-item'
-}
-```
-
-##### 2.Component files
-The internal common component of the `src/js/module/components` project writes the folder name with the same name as the file name. The subcomponents and util tools that are split inside the common component are placed in the internal `_source` folder of the component.
-```
-└── components
-    ├── header
-        ├── header.vue
-        └── _source
-            └── nav.vue
-            └── util.js
-    ├── conditions
-        ├── conditions.vue
-        └── _source
-            └── search.vue
-            └── util.js
-```
-
-##### 3.Prop
-When you define a Prop, always name it in camelCase, and use kebab-case (words joined with a hyphen) when assigning values in the parent component. This follows each language's conventions: HTML attributes are case-insensitive, so hyphenated names are friendlier there, while camelCase is more natural in JavaScript.
-
-```
-// Vue
-props: {
-    articleStatus: Boolean
-}
-// HTML
-<article-item :article-status="true"></article-item>
-```
-
-The definition of Prop should specify its type, defaults, and validation as much as possible.
-
-Example:
-
-```
-props: {
-    attrM: Number,
-    attrA: {
-        type: String,
-        required: true
-    },
-    attrZ: {
-        type: Object,
-        //  The default value of the array/object should be returned by a factory function
-        default: function () {
-            return {
-                msg: 'achieve you and me'
-            }
-        }
-    },
-    attrE: {
-        type: String,
-        validator: function (v) {
-            return !(['success', 'fail'].indexOf(v) === -1) 
-        }
-    }
-}
-```
-
-##### 4.v-for
-When performing v-for traversal, you should always bring a key value to make rendering more efficient when updating the DOM.
-```
-<ul>
-    <li v-for="item in list" :key="item.id">
-        {{ item.title }}
-    </li>
-</ul>
-```
-
-v-for should be avoided on the same element as v-if (`for example: <li>`), because v-for has a higher priority than v-if. To avoid invalid calculation and rendering, put the v-if on the container's parent element instead.
-```
-<ul v-if="showList">
-    <li v-for="item in list" :key="item.id">
-        {{ item.title }}
-    </li>
-</ul>
-```
-
-##### 5.v-if / v-else-if / v-else
-If the elements controlled by the same group of v-if logic are of the same type, Vue reuses the identical parts for more efficient element switching (`such as: value`). To avoid unwanted side effects of this reuse, add a key to the identical elements to distinguish them.
-```
-<div v-if="hasData" key="mazey-data">
-    <span>{{ mazeyData }}</span>
-</div>
-<div v-else key="mazey-none">
-    <span>no data</span>
-</div>
-```
-
-##### 6.Instruction abbreviation
-To unify the specification, always use the directive shorthand. Writing `v-bind` and `v-on` in full is not wrong; this is only a uniform convention.
-```
-<input :value="mazeyUser" @click="verifyUser">
-```
-
-##### 7.Top-level element order of single file components
-Styles are bundled into one file, so styles defined in a single vue file also take effect on elements with the same class name in other files. Therefore every component should have a top-level class name before it is created.
-Note: The sass plugin has been added to the project, so sass syntax can be written directly in a single vue file.
-For uniformity and ease of reading, the top-level elements should be placed in the order `<template>`, `<script>`, `<style>`.
-
-```
-<template>
-  <div class="test-model">
-    test
-  </div>
-</template>
-<script>
-  export default {
-    name: "test",
-    data() {
-      return {}
-    },
-    props: {},
-    methods: {},
-    watch: {},
-    beforeCreate() {
-    },
-    created() {
-    },
-    beforeMount() {
-    },
-    mounted() {
-    },
-    beforeUpdate() {
-    },
-    updated() {
-    },
-    beforeDestroy() {
-    },
-    destroyed() {
-    },
-    computed: {},
-    components: {},
-  }
-</script>
-
-<style lang="scss" rel="stylesheet/scss">
-  .test-model {
-
-  }
-</style>
-
-```
-
-
-## JavaScript specification
-
-##### 1.var / let / const
-It is recommended to no longer use var; use let / const instead, preferring const. Every variable must be declared before use, except functions defined with function, which can be placed anywhere thanks to hoisting.
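A minimal sketch of the rule (all names here are illustrative):

```javascript
// const for bindings that are never reassigned
const API_BASE = 'http://192.168.xx.xx:12345'

// let when reassignment is needed
let retryCount = 0
retryCount += 1

// function declarations are hoisted, so they may be called before their definition
console.log(double(retryCount)) // prints 2

function double (n) {
  return n * 2
}
```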
-
-##### 2.quotes
-```
-const foo = 'after division'
-const bar = `${foo}, front-end engineer`
-```
-
-##### 3.function
-Anonymous functions uniformly use arrow functions. When there are multiple parameters/return values, prefer object destructuring assignment.
-```
-function getPersonInfo ({name, sex}) {
-    // ...
-    return {name, sex}
-}
-```
-Function names are uniformly camelCase. A name beginning with a capital letter is a constructor; names beginning with a lowercase letter are ordinary functions, and the new operator should not be used on ordinary functions.
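A short sketch of the naming convention (the names are illustrative):

```javascript
// Constructor: capitalized, used with new
function Task (name) {
  this.name = name
}

// Ordinary function: lowercase camelCase, never called with new
function formatTaskName (task) {
  return `task: ${task.name}`
}

const shell = new Task('shell')
console.log(formatTaskName(shell)) // prints "task: shell"
```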
-
-##### 4.object
-```
-const foo = {a: 0, b: 1}
-const bar = JSON.parse(JSON.stringify(foo))
-
-const foo = {a: 0, b: 1}
-const bar = {...foo, c: 2}
-
-const foo = {a: 3}
-Object.assign(foo, {b: 4})
-
-const myMap = new Map([])
-for (let [key, value] of myMap.entries()) {
-    // ...
-}
-```
-
-##### 5.module
-Unified management of project modules using import / export.
-```
-// lib.js
-export default {}
-
-// app.js
-import app from './lib'
-```
-
-Import is placed at the top of the file.
-
-If the module has only one output value, use `export default`; otherwise use named exports.
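A sketch of the multiple-output case (file names are illustrative):

```
// util.js: multiple output values, so named exports
export const trim = (s) => s.trim()
export const isEmpty = (s) => s.length === 0

// app.js
import { trim, isEmpty } from './util'
```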
-
-## HTML / CSS
-
-##### 1.Label
-
-Do not write the type attribute when referencing external CSS or JavaScript. HTML5 defaults to text/css and text/javascript, so there is no need to specify them.
-```
-<link rel="stylesheet" href="//www.test.com/css/test.css">
-<script src="//www.test.com/js/test.js"></script>
-```
-
-##### 2.Naming
-Class and ID names should be semantic, so the purpose is clear from the name alone; multiple words are joined with a hyphen.
-```
-// positive example
-.test-header{
-    font-size: 20px;
-}
-```
-
-##### 3.Attribute abbreviation
-CSS attributes use abbreviations as much as possible to improve the efficiency and ease of understanding of the code.
-
-```
-// counter example
-border-width: 1px;
-border-style: solid;
-border-color: #ccc;
-
-// positive example
-border: 1px solid #ccc;
-```
-
-##### 4.Document type
-
-The HTML5 standard should always be used.
-
-```
-<!DOCTYPE html>
-```
-
-##### 5.Notes
-A block comment should be written to a module file.
-```
-/**
-* @module mazey/api
-* @author Mazey <mazey@mazey.net>
-* @description test.
-* */
-```
-
-
-## interface
-
-##### All interfaces are returned as Promise
-Note that a non-zero code means failure and should be handled in catch
-
-```
-const test = () => {
-  return new Promise((resolve, reject) => {
-    resolve({
-      a:1
-    })
-  })
-}
-
-// transfer
-test().then(res => {
-  console.log(res)
-  // {a:1}
-})
-```
-
-Normal return
-```
-{
-  code: 0,
-  data: {},
-  msg: 'success'
-}
-```
-
-Error return
-```
-{
-  code: 10000,
-  data: {},
-  msg: 'failed'
-}
-```
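The zero/non-zero convention above can be folded into the Promise layer; a minimal sketch (the `request` helper is illustrative, not the project's actual io wrapper):

```javascript
// Resolve when code is 0, reject otherwise, matching the convention above
const request = (response) => {
  return new Promise((resolve, reject) => {
    if (response.code === 0) {
      resolve(response.data)
    } else {
      reject(new Error(response.msg))
    }
  })
}

// transfer
request({ code: 0, data: { a: 1 }, msg: 'success' })
  .then(data => console.log(data)) // { a: 1 }
  .catch(err => console.log(err.message))
```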
-
-##### Related interface path
-
-dag related interface `src/js/conf/home/store/dag/actions.js`
-
-Data Source Center Related Interfaces  `src/js/conf/home/store/datasource/actions.js`
-
-Project Management Related Interfaces `src/js/conf/home/store/projects/actions.js`
-
-Resource Center Related Interfaces `src/js/conf/home/store/resource/actions.js`
-
-Security Center Related Interfaces `src/js/conf/home/store/security/actions.js`
-
-User Center Related Interfaces `src/js/conf/home/store/user/actions.js`
-
-
-
-## Extended development
-
-##### 1.Add node
-
-(1) First place the node's icon in the `src/js/conf/home/pages/dag/img` folder, named `toolbar_${English node type name defined in the backend, e.g. SHELL}.png`
-
-(2)  Find the `tasksType` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
-```
-'DEPENDENT': {  //  The background definition node type English name is used as the key value
-  desc: 'DEPENDENT',  // tooltip desc
-  color: '#2FBFD8'  // The color represented is mainly used for tree and gantt
-}
-```
-
-(3)  Add a `${node type (lowercase)}.vue` file in `src/js/conf/home/pages/dag/_source/formModel/tasks`. The contents of the components related to the current node are written here. Every node component must have a function `_verification ()`; after verification succeeds, the relevant data of the current component is emitted to the parent component.
-```
-/**
- * Verification
-*/
-  _verification () {
-    // datasource subcomponent verification
-    if (!this.$refs.refDs._verifDatasource()) {
-      return false
-    }
-
-    // verification function
-    if (!this.method) {
-      this.$message.warning(`${i18n.$t('Please enter method')}`)
-      return false
-    }
-
-    // localParams subcomponent validation
-    if (!this.$refs.refLocalParams._verifProp()) {
-      return false
-    }
-    // store
-    this.$emit('on-params', {
-      type: this.type,
-      datasource: this.datasource,
-      method: this.method,
-      localParams: this.localParams
-    })
-    return true
-  }
-```
-
-(4) Common components used inside the node component are under `_source`, and `commcon.js` is used to configure public data.
-
-##### 2.Increase the status type
-
-(1) Find the `tasksState` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
-
-```
- 'WAITTING_DEPEND': {  // backend-defined state type, used as the key on the front end
-  id: 11,  // front-end definition id is used as a sort
-  desc: `${i18n.$t('waiting for dependency')}`,  // tooltip desc
-  color: '#5101be',  // The color represented is mainly used for tree and gantt
-  icoUnicode: '&#xe68c;',  // font icon
-  isSpin: false  // whether to rotate (requires code judgment)
-}
-```
-
-##### 3.Add the action bar tool
-(1)  Find the `toolOper` object in `src/js/conf/home/pages/dag/_source/config.js` and add it to it.
-```
-{
-  code: 'pointer',  // tool identifier
-  icon: '&#xe781;',  // tool icon
-  disable: disable,  // disable
-  desc: `${i18n.$t('Drag node and selected item')}`  // tooltip desc
-}
-```
-
-(2) Tool classes are returned as a constructor  `src/js/conf/home/pages/dag/_source/plugIn`
-
-`downChart.js`  =>  dag image download processing
-
-`dragZoom.js`  =>  mouse zoom effect processing
-
-`jsPlumbHandle.js`  =>  drag and drop line processing
-
-`util.js`  =>   belongs to the `plugIn` tool class
-
-
-The operation is handled in the `src/js/conf/home/pages/dag/_source/dag.js` => `toolbarEvent` event.
-
-
-##### 4.Add a routing page
-
-(1) First add a routing address in route management: `src/js/conf/home/router/index.js`
-```
-{
-  path: '/test',  // routing address
-  name: 'test',  // alias
-  component: resolve => require(['../pages/test/index'], resolve),  // route corresponding component entry file
-  meta: {
-    title: `${i18n.$t('test')} - EasyScheduler`  // title display
-  }
-},
-```
-
-(2) Create a `test` folder in `src/js/conf/home/pages` and create an `index.vue` entry file in the folder.
-
-    This will give you direct access to `http://localhost:8888/#/test`
-
-
-##### 5.Increase the preset mailbox
-
-In `src/lib/localData/email.js`, maintain the preset email addresses that the start and timed email address inputs use for auto-completion.
-```
-export default ["test@analysys.com.cn","test1@analysys.com.cn","test3@analysys.com.cn"]
-```
-
-##### 6.Authority management and disabled state processing
-
-Permissions use the userType field (`"ADMIN_USER/GENERAL_USER"`) returned by the backend `getUserInfo` interface to control whether page operation buttons are `disabled`.
-
-specific operation:`src/js/module/permissions/index.js`
-
-disabled processing:`src/js/module/mixin/disabledState.js`
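As a minimal sketch of the idea (the helper name and exact rule are illustrative; the real logic lives in the two files above):

```javascript
// Decide whether an operation button should be rendered disabled,
// based on the userType field returned by getUserInfo
const isDisabled = (userType) => {
  // illustrative rule: only ADMIN_USER may operate
  return userType !== 'ADMIN_USER'
}

console.log(isDisabled('ADMIN_USER'))   // false
console.log(isDisabled('GENERAL_USER')) // true
```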
-

BIN
docs/en_US/images/auth-project.png


BIN
docs/en_US/images/complement.png


BIN
docs/en_US/images/depend-b-and-c.png


BIN
docs/en_US/images/depend-last-tuesday.png


BIN
docs/en_US/images/depend-week.png


BIN
docs/en_US/images/save-definition.png


BIN
docs/en_US/images/save-global-parameters.png


BIN
docs/en_US/images/start-process.png


BIN
docs/en_US/images/timing.png


+ 0 - 53
docs/en_US/quick-start.md

@@ -1,53 +0,0 @@
-# Quick Start
-
-* Administrator user login
-
-  > Address:192.168.xx.xx:8888  Username and password:admin/dolphinscheduler123
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/48329107/61701549-ee738000-ad70-11e9-8d75-87ce04a0152f.png" width="60%" />
- </p>
-
-* Create queue
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/48329107/61701943-896c5a00-ad71-11e9-99b8-a279762f1bc8.png" width="60%" />
- </p>
-
-  * Create tenant
-      <p align="center">
-    <img src="https://user-images.githubusercontent.com/48329107/61702051-bb7dbc00-ad71-11e9-86e1-1c328cafe916.png" width="60%" />
-  </p>
-
-  * Creating Ordinary Users
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61704402-3517a900-ad76-11e9-865a-6325041d97e2.png" width="60%" />
- </p>
-
-  * Create an alarm group
-
- <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61704553-845dd980-ad76-11e9-85f1-05f33111409e.png" width="60%" />
-  </p>
-
-  * Log in with regular users
-  > Click the username in the upper right corner, choose "Exit", and log in again as the ordinary user.
-
-  * Project Management - > Create Project - > Click on Project Name
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61704688-dd2d7200-ad76-11e9-82ee-0833b16bd88f.png" width="60%" />
- </p>
-
-  * Click Workflow Definition - > Create Workflow Definition - > Online Process Definition
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61705638-c425c080-ad78-11e9-8619-6c21b61a24c9.png" width="60%" />
- </p>
-
-  * Running Process Definition - > Click Workflow Instance - > Click Process Instance Name - > Double-click Task Node - > View Task Execution Log
-
- <p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61705356-34801200-ad78-11e9-8d60-9b7494231028.png" width="60%" />
-</p>
-
-

+ 0 - 699
docs/en_US/system-manual.md

@@ -1,699 +0,0 @@
-# System Use Manual
-
-## Operational Guidelines
-
-### Create a project
-
-  - Click "Project - > Create Project", enter project name,  description, and click "Submit" to create a new project.
-  - Click on the project name to enter the project home page.
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61776719-2ee50380-ae2e-11e9-9d11-41de8907efb5.png" width="60%" />
- </p>
-
-> Project Home Page contains task status statistics, process status statistics.
-
- - Task State Statistics: It refers to the statistics of the number of tasks to be run, failed, running, completed and succeeded in a given time frame.
- - Process State Statistics: It refers to the statistics of the number of waiting, failing, running, completing and succeeding process instances in a specified time range.
- - Process Definition Statistics: The process definition created by the user and the process definition granted by the administrator to the user are counted.
-
-
-### Creating Process definitions
-  - Go to the project home page, click "Process definitions" and enter the list page of process definition.
-  - Click "Create process" to create a new process definition.
-  - Drag the "SHELL" node to the canvas and add a shell task.
-  - Fill in the Node Name, Description, and Script fields.
-  - Selecting "task priority" will give priority to high-level tasks in the execution queue. Tasks with the same priority will be executed in the first-in-first-out order.
-  - Timeout alarm. Fill in "Overtime Time". When the task execution time exceeds the overtime, it can alarm and fail over time.
-  - Fill in "Custom Parameters" and refer to [Custom Parameters](#Custom Parameters)
-    <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61778402-42459e00-ae31-11e9-96c6-8fd7fed8fed2.png" width="60%" />
-      </p>
-  - Add execution order between nodes: click "line connection". As shown in the figure, when task 1 finishes, task 2 and task 3 are executed simultaneously.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61778247-f98de500-ae30-11e9-8f11-cce0530c3ff2.png" width="60%" />
- </p>
-
-  - Delete dependencies: Click on the arrow icon to "drag nodes and select items", select the connection line, click on the delete icon to delete dependencies between nodes.
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61778800-052ddb80-ae32-11e9-8ac0-4f13466d3515.png" width="60%" />
- </p>
-
-  - Click "Save", enter the name of the process definition, the description of the process definition, and set the global parameters.
-
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/save-definition.png" width="60%" />
- </p>
-
-  - For other types of nodes, refer to [task node types and parameter settings](#task node types and parameter settings)
-
-### Execution process definition
-  - **The process definition of the off-line state can be edited, but not run**, so the on-line workflow is the first step.
-  > Click on the Process definition, return to the list of process definitions, click on the icon "online", online process definition.
-
-  > Before setting workflow offline, the timed tasks in timed management should be offline, so that the definition of workflow can be set offline successfully. 
-
-  - Click "Run" to execute the process. Description of operation parameters:
-    * Failure strategy: **the strategy for other parallel task nodes when a task node fails**. "Continue" means the other task nodes execute normally; "End" means terminating all running tasks and the entire process.
-    * Notification strategy:When the process is over, send process execution information notification mail according to the process status.
-    * Process priority: The priority of process running is divided into five levels:the highest, the high, the medium, the low, and the lowest . High-level processes are executed first in the execution queue, and processes with the same priority are executed first in first out order.
-    * Worker group: This process can only be executed in a specified machine group. By default it can be executed on any worker.
-    * Notification group: When the process ends or fault tolerance occurs, process information is sent to all members of the notification group by mail.
-    * Recipient: Enter the mailbox and press Enter key to save. When the process ends and fault tolerance occurs, an alert message is sent to the recipient list.
-    * Cc: Enter the mailbox and press Enter key to save. When the process is over and fault-tolerant occurs, alarm messages are copied to the copier list.
-    
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/start-process.png" width="60%" />
- </p>
-
-  * Complement: To implement the workflow definition of a specified date, you can select the time range of the complement (currently only support for continuous days), such as the data from May 1 to May 10, as shown in the figure:
-  
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/complement.png" width="60%" />
- </p>
-
-> Complement execution mode includes serial execution and parallel execution. In serial mode, the complement will be executed sequentially from May 1 to May 10. In parallel mode, the tasks from May 1 to May 10 will be executed simultaneously.
-
-### Timing Process Definition
-  - Create Timing: "Process Definition - > Timing"
-  - Choose the start-stop time. Within that range the timer fires normally; beyond that range, no more timed workflow instances will be produced.
-  
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/timing.png" width="60%" />
- </p>
-
-  - Add a timer to be executed once a day at 5:00 a.m. as shown below:
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61781968-d9adef80-ae37-11e9-9e90-3d9f0b3eb998.png" width="60%" />
- </p>
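The timer itself is a crontab-style expression; assuming the Quartz syntax used by the scheduler (seconds field first), a timer firing once a day at 5:00 a.m. would look like:

```
0 0 5 * * ? *
```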
-
-  - Timer online: **a newly created timer is offline. You need to click "Timing Management -> online" for it to work properly.**
-
-### View process instances
-  > Click on "Process Instances" to view the list of process instances.
-
-  > Click on the process name to see the status of task execution.
-
-  <p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61855837-6ff31b80-aef3-11e9-8464-2fb5773709df.png" width="60%" />
- </p>
-
-  > Click on the task node, click "View Log" to view the task execution log.
-
-  <p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61783070-bdab4d80-ae39-11e9-9ada-355614fbb7f7.png" width="60%" />
- </p>
-
- > Click on the task instance node, click **View History** to view the list of task instances that the process instance runs.
-
- <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61783240-05ca7000-ae3a-11e9-8c10-591a7635834a.png" width="60%" />
-  </p>
-
-
-  > Operations on workflow instances:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61783291-21357b00-ae3a-11e9-837c-fc3d85404410.png" width="60%" />
-</p>
-
-  * Edit: A terminated process can be edited. When saving after editing, you can choose whether to update the process definition.
-  * Rerun: A process that has been terminated can be re-executed.
-  * Recovery failure: For a failed process, a recovery failure operation can be performed, starting at the failed node.
-  * Stop: Stop the running process; the background will first `kill` the worker process, then perform `kill -9`.
-  * Pause:The running process can be **suspended**, the system state becomes **waiting to be executed**, waiting for the end of the task being executed, and suspending the next task to be executed.
-  * Restore pause: **The suspended process** can be restored and run directly from the suspended node
-  * Delete: Delete process instances and task instances under process instances
-  * Gantt diagram: The vertical axis of Gantt diagram is the topological ordering of task instances under a process instance, and the horizontal axis is the running time of task instances, as shown in the figure:
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61783596-aa4cb200-ae3a-11e9-9798-e795f80dae96.png" width="60%" />
-</p>
-
-### View task instances
-  > Click on "Task Instance" to enter the Task List page and query the performance of the task.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61783544-91dc9780-ae3a-11e9-9dca-dfd901f1fe83.png" width="60%" />
-</p>
-
-  > Click "View Log" in the action column to view the log of task execution.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61783441-60fc6280-ae3a-11e9-8631-963dcf78467b.png" width="60%" />
-</p>
-
-### Create data source
-  > Data Source Center supports MySQL, POSTGRESQL, HIVE and Spark data sources.
-
-#### Create and edit MySQL data source
-
-  - Click on "Datasource - > Create Datasources" to create different types of datasources according to requirements.
-- Datasource: Select MYSQL
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter the IP to connect to MySQL
-- Port: Enter the port to connect MySQL
-- User name: Set the username to connect to MySQL
-- Password: Set the password to connect to MySQL
-- Database name: Enter the name of the database connecting MySQL
-- Jdbc connection parameters: parameter settings for MySQL connections, filled in as JSON
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61783812-129b9380-ae3b-11e9-9b9c-77870371c5f3.png" width="60%" />
- </p>
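The Jdbc connection parameters field takes a plain JSON object; an illustrative example for MySQL (the exact keys depend on the JDBC driver):

```
{"useUnicode":"true","characterEncoding":"UTF-8"}
```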
-
-  > Click "Test Connect" to test whether the data source can be successfully connected.
-
-#### Create and edit POSTGRESQL data source
-
-- Datasource: Select POSTGRESQL
-- Datasource Name: Name of Input Data Source
-- Description: Description of input data sources
-- IP: Enter IP to connect to POSTGRESQL
-- Port: Input port to connect POSTGRESQL
-- Username: Set the username to connect to POSTGRESQL
-- Password: Set the password to connect to POSTGRESQL
-- Database name: Enter the name of the database connecting to POSTGRESQL
-- Jdbc connection parameters: parameter settings for POSTGRESQL connections, filled in as JSON
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61783968-60180080-ae3b-11e9-91b7-36d49246a205.png" width="60%" />
- </p>
-
-#### Create and edit HIVE data source
-
-1.Connect with HiveServer 2
-
- <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61784129-b9802f80-ae3b-11e9-8a27-7be23e0953be.png" width="60%" />
-  </p>
-
-- Datasource: Select HIVE
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter IP to connect to HIVE
-- Port: Input port to connect to HIVE
-- Username: Set the username to connect to HIVE
-- Password: Set the password to connect to HIVE
-- Database Name: Enter the name of the database connecting to HIVE
-- Jdbc connection parameters: parameter settings for HIVE connections, filled in as JSON
-
-2. Connect using HiveServer2 HA (ZooKeeper) mode
-
- <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61784420-3dd2b280-ae3c-11e9-894a-5b896863d37a.png" width="60%" />
-  </p>
-
-
-Note: If **Kerberos** is turned on, you need to fill in **Principal**
-<p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61784847-0adcee80-ae3d-11e9-8ac7-ba8a13aef90c.png" width="60%" />
-  </p>
-
-
-
-
-#### Create and edit SPARK data source
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/48329107/61853431-7af77d00-aeee-11e9-8e2e-95ba6cea43c8.png" width="60%" />
- </p>
-
-- Datasource: Select Spark
-- Datasource Name: Name of Input Datasource
-- Description: Description of input datasources
-- IP: Enter the IP to connect to Spark
-- Port: Input port to connect Spark
-- Username: Set the username to connect to Spark
-- Password: Set the password to connect to Spark
-- Database name: Enter the name of the database connecting to Spark
-- Jdbc Connection Parameters: Parameter settings for Spark Connections, filled in as JSON
-
-
-
-Note: If **Kerberos** is turned on, you need to fill in **Principal**
-
-<p align="center">
-    <img src="https://user-images.githubusercontent.com/48329107/61853668-0709a480-aeef-11e9-8960-92107dd1a9ca.png" width="60%" />
-  </p>
-
-### Upload Resources
-  - Upload resource files and UDF functions. All uploaded files and resources will be stored on HDFS, so the following configuration items are required:
-
-```
-conf/common/common.properties
-    -- hdfs.startup.state=true
-conf/common/hadoop.properties  
-    -- fs.defaultFS=hdfs://xxxx:8020  
-    -- yarn.resourcemanager.ha.rm.ids=192.168.xx.xx,192.168.xx.xx
-    -- yarn.application.status.address=http://xxxx:8088/ws/v1/cluster/apps/%s
-```
-
-#### File Manage
-
-  > It is the management of various resource files, including creating basic txt/log/sh/conf files, uploading jar packages and other types of files, editing, downloading, deleting and other operations.
-  >
-  >
-  > <p align="center">
-  >  <img src="https://user-images.githubusercontent.com/53217792/61785274-ed5c5480-ae3d-11e9-8461-2178f49b228d.png" width="60%" />
-  > </p>
-
-  * Create file
- > File formats support the following types: txt, log, sh, conf, cfg, py, java, sql, xml, hql
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61841049-f133b980-aec5-11e9-8ac8-db97cdccc599.png" width="60%" />
- </p>
-
-  * Upload Files
-
-> Upload files: click the Upload button or drag the file to the upload area; the file name field is automatically filled with the name of the uploaded file.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61841179-73bc7900-aec6-11e9-8780-28756e684754.png" width="60%" />
- </p>
-
-
-  * File View
-
-> For viewable file types, click on the file name to view file details
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61841247-9cdd0980-aec6-11e9-9f6f-0a7dd145f865.png" width="60%" />
- </p>
-
-  * Download files
-
-> You can download a file by clicking the download button in the top right corner of the file details, or by clicking the download button in the file list.
-
-  * File rename
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61841322-f47b7500-aec6-11e9-93b1-b00328e7b69e.png" width="60%" />
- </p>
-
-#### Delete
->  File list -> click the Delete button to delete the specified file
-
-#### Resource management
-  > Resource management is similar to file management. The difference is that resource management is for uploading UDF functions, while file management is for uploading user programs, scripts and configuration files.
-
-  * Upload UDF resources
-  > The same as uploading files.
-
-#### Function management
-
-  * Create UDF Functions
-  > Click "Create UDF Function", enter parameters of udf function, select UDF resources, and click "Submit" to create udf function.
-  >
-  >
-  >
-  > Currently only temporary UDF functions for HIVE are supported
-  >
-  > 
-  >
-  > - UDF function name: the name used when calling the UDF function
-  > - Package name: the full class path of the UDF function
-  > - Parameter: input parameters used to annotate the function
-  > - Database name: reserved field for creating permanent UDF functions
-  > - UDF resources: set the resource file corresponding to the created UDF function
-  >
-  > 
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61841562-c6e2fb80-aec7-11e9-9481-4202d63dab6f.png" width="60%" />
- </p>
-
-## Security
-
-  - The security module provides queue management, tenant management, user management, alert group management, worker group management, token management and other functions. It can also authorize resources, data sources, projects, etc.
-- Administrator login, default username/password: admin/dolphinscheduler123
-
-
-
-### Create queues
-
-
-
-  - Queues are used to execute spark, mapreduce and other programs, which require the use of "queue" parameters.
-- "Security" - > "Queue Manage" - > "Create Queue" 
-     <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61841945-078f4480-aec9-11e9-92fb-05b6f42f07d6.png" width="60%" />
-  </p>
-
-
-### Create Tenants
-  - The tenant corresponds to a Linux account, which is used by the worker server to submit jobs. If Linux does not have this user, the worker creates the account when executing the task.
-  - Tenant Code: **the tenant code is the Linux account and must be unique.**
-
- <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61842372-8042d080-aeca-11e9-8c54-e3dee583eeff.png" width="60%" />
-  </p>
-
-### Create Ordinary Users
-  -  User types are **ordinary users** and **administrator users**.
-    * Administrators have **authorization and user management** privileges, and no privileges to **create project and process-defined operations**.
-    * Ordinary users can **create projects and create, edit, and execute process definitions**.
-    * Note: **If the user switches the tenant, all resources under the tenant will be copied to the switched new tenant.**
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61842461-da439600-aeca-11e9-98e3-f8327dbafa60.png" width="60%" />
- </p>
-
-### Create alarm group
-  * The alarm group is a parameter set at start-up. After the process is finished, the status of the process and other information will be sent to the alarm group by mail.
-  * Create and edit alarm groups
-    <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61842553-34445b80-aecb-11e9-84a8-3cc66b6c6135.png" width="60%" />
-    </p>
-
-### Create Worker Group
-  - Worker group provides a mechanism for tasks to run on a specified worker. Administrators create worker groups, which can be specified in task nodes and operation parameters. If the specified grouping is deleted or no grouping is specified, the task will run on any worker.
-- A worker group can contain multiple IP addresses (**aliases cannot be used**), separated by **English commas**
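-
-For illustration, a worker group entry might look like this (the group name and IP addresses are hypothetical):
-
-```
-group name: hadoop-workers
-IP list:    192.168.xx.10,192.168.xx.11,192.168.xx.12
-```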
-
-  <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61842630-6b1a7180-aecb-11e9-8988-b4444de16b36.png" width="60%" />
-  </p>
-
-### Token manage
-  - Since the back-end interfaces require a login check, token management provides a way to operate the system by calling the interfaces directly.
-- Call examples:
-
-```java
-    // Token call example. Requires Apache HttpComponents HttpClient on the classpath:
-    // import java.util.ArrayList;
-    // import java.util.List;
-    // import org.apache.http.NameValuePair;
-    // import org.apache.http.client.entity.UrlEncodedFormEntity;
-    // import org.apache.http.client.methods.CloseableHttpResponse;
-    // import org.apache.http.client.methods.HttpPost;
-    // import org.apache.http.impl.client.CloseableHttpClient;
-    // import org.apache.http.impl.client.HttpClients;
-    // import org.apache.http.message.BasicNameValuePair;
-    // import org.apache.http.util.EntityUtils;
-
-    /**
-     * test token
-     */
-    public void doPOSTParam() throws Exception {
-        // create HttpClient
-        CloseableHttpClient httpclient = HttpClients.createDefault();
-
-        // create http post request
-        HttpPost httpPost = new HttpPost("http://127.0.0.1:12345/dolphinscheduler/projects/create");
-        httpPost.setHeader("token", "123");
-        // set parameters
-        List<NameValuePair> parameters = new ArrayList<NameValuePair>();
-        parameters.add(new BasicNameValuePair("projectName", "qzw"));
-        parameters.add(new BasicNameValuePair("desc", "qzw"));
-        UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(parameters);
-        httpPost.setEntity(formEntity);
-        CloseableHttpResponse response = null;
-        try {
-            // execute
-            response = httpclient.execute(httpPost);
-            // response status code 200
-            if (response.getStatusLine().getStatusCode() == 200) {
-                String content = EntityUtils.toString(response.getEntity(), "UTF-8");
-                System.out.println(content);
-            }
-        } finally {
-            if (response != null) {
-                response.close();
-            }
-            httpclient.close();
-        }
-    }
-```
-
-### Grant authority
-  - Granting permissions includes project permissions, resource permissions, datasource permissions, UDF Function permissions.
-> Administrators can authorize projects, resources, data sources and UDF functions that were not created by the ordinary user. Since projects, resources, data sources and UDF functions are all authorized in the same way, project authorization is introduced as an example.
-
-> Note: For projects created by the user himself, the user has all permissions, so such projects are not shown in the project list or the selected-project list.
-
-  - 1. Click the authorization button of the designated user as follows:
-    <p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61843204-71a9e880-aecd-11e9-83ad-365d7bf99375.png" width="60%" />
- </p>
-
-- 2. Select the project button to authorize the project
-
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/auth-project.png" width="60%" />
- </p>
-
-### Monitor center
-  - Service management mainly monitors and displays the health status and basic information of each service in the system.
-
-#### Master monitor
-  - Mainly related information about master.
-<p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61843245-8edeb700-aecd-11e9-9916-ea50080e7d08.png" width="60%" />
- </p>
-
-#### Worker monitor
-  - Mainly related information of worker.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61843277-ae75df80-aecd-11e9-9667-b9f1615b6f3b.png" width="60%" />
- </p>
-
-#### Zookeeper monitor
-  - Mainly the configuration information of each worker and master in ZooKeeper.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61843323-c64d6380-aecd-11e9-8392-1ca9b84cd794.png" width="60%" />
- </p>
-
-#### MySQL monitor
-  - Mainly the health status of MySQL.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61843358-e11fd800-aecd-11e9-86d1-9490e48dc955.png" width="60%" />
- </p>
-
-## Task Node Type and Parameter Setting
-
-### Shell
-
-  - When the worker executes a shell node, it generates a temporary shell script, which is executed by a Linux user with the same name as the tenant.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SHELL.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61843728-6788e980-aecf-11e9-8006-241a7ec5024b.png" width="60%" />
- </p>
-
-- Node name: The node name in a process definition is unique
-- Run flag: Identify whether the node can be scheduled properly, and if it does not need to be executed, you can turn on the forbidden execution switch.
-- Description : Describes the function of the node
-- Number of failed retries: the number of times a failed task is resubmitted; supports drop-down selection and manual entry
-- Failure retry interval: the interval between resubmissions of a failed task; supports drop-down selection and manual entry
-- Script: User-developed SHELL program
-- Resources: A list of resource files that need to be invoked in a script
-- Custom parameters: user-defined parameters local to the SHELL task that replace the ${variable} placeholders in the script
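-
-A minimal sketch of custom parameter replacement in a shell task (the parameter name bizdate is hypothetical; the scheduler substitutes ${bizdate} before the script runs):
-
-```
-Script (as authored):  echo "processing data for ${bizdate}"
-Custom parameter:      bizdate = 20190801
-Executed as:           echo "processing data for 20190801"
-```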
-
-### SUB_PROCESS
-  - The sub-process node executes an external workflow definition as a task node.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SUB_PROCESS.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61843799-adde4880-aecf-11e9-846e-f1696107029f.png" width="60%" />
- </p>
-
-- Node name: The node name in a process definition is unique
-- Run flag: Identify whether the node is scheduled properly
-- Description: Describes the function of the node
-- Sub-node: select the process definition of the sub-process; you can jump to the selected process definition by clicking "Enter sub-node" in the upper right corner.
-
-### DEPENDENT
-
-  - Dependent nodes are **dependent checking nodes**. For example, process A depends on the successful execution of process B yesterday, and the dependent node checks whether process B has a successful execution instance yesterday.
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_DEPENDENT.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61844369-be8fbe00-aed1-11e9-965d-ddb9aeeba9db.png" width="60%" />
- </p>
-
-  > Dependent nodes provide logical judgment functions, such as checking whether yesterday's B process was successful or whether the C process was successfully executed.
-
-  <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/depend-b-and-c.png" width="80%" />
- </p>
-
-  > For example, process A is a weekly task and process B and C are daily tasks. Task A requires that task B and C be successfully executed every day of the last week, as shown in the figure:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/depend-week.png" width="80%" />
- </p>
-
-  > If the weekly task A also needs to be executed successfully last Tuesday:
-
- <p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/depend-last-tuesday.png" width="80%" />
- </p>
-
-###  PROCEDURE
-  - The procedure is executed according to the selected data source.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PROCEDURE.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61844464-1af2dd80-aed2-11e9-9486-6cf1b8585aa5.png" width="60%" />
- </p>
-
-- Datasource: The data source type of stored procedure supports MySQL and POSTGRESQL, and chooses the corresponding data source.
-- Method: The method name of the stored procedure
-- Custom parameters: Custom parameter types of stored procedures support IN and OUT, and data types support nine data types: VARCHAR, INTEGER, LONG, FLOAT, DOUBLE, DATE, TIME, TIMESTAMP and BOOLEAN.
-
-### SQL
-  - Execute non-query SQL functionality
-    <p align="center">
-      <img src="https://user-images.githubusercontent.com/53217792/61850397-d7569e80-aee6-11e9-9da0-c4d96deaa8a1.png" width="60%" />
- </p>
-
-  - Executing the query SQL function, you can choose to send mail in the form of tables and attachments to the designated recipients.
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SQL.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61850594-4d5b0580-aee7-11e9-9c9e-1934c91962b9.png" width="60%" />
- </p>
-
-- Datasource: Select the corresponding datasource
-- sql type: supports query and non-query. A query is a SELECT statement that returns a result set; you can specify that the notification mail is sent as a table, an attachment, or a table plus attachment. Non-query returns no result set and is for UPDATE, DELETE and INSERT operations.
-- sql parameter: input parameter format is key1 = value1; key2 = value2...
-- sql statement: SQL statement
-- UDF function: For HIVE type data sources, you can refer to UDF functions created in the resource center, other types of data sources do not support UDF functions for the time being.
-- Custom parameters: the custom parameter types and data types are the same as for the stored procedure task type. The difference is that the custom parameters of the SQL task type replace the ${variable} placeholders in the SQL statement.
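-
-For illustration (the table and parameter names below are hypothetical), a SQL statement with a custom parameter might be entered as:
-
-```
-sql statement:    SELECT * FROM orders WHERE dt = '${bizdate}'
-custom parameter: bizdate = 20190801
-executed as:      SELECT * FROM orders WHERE dt = '20190801'
-```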
-
-
-
-### SPARK 
-
-  - Through the SPARK node, Spark programs can be executed directly. For the spark node, the worker uses `spark-submit` to submit tasks.
-
-> Drag the   ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_SPARK.png)  task node in the toolbar onto the palette and double-click the task node as follows:
->
-> 
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/48329107/61852935-3d462480-aeed-11e9-8241-415314bfc2e5.png" width="60%" />
- </p>
-
-- Program Type: Support JAVA, Scala and Python
-- Class of the main function: The full path of Main Class, the entry to the Spark program
-- Main jar package: the jar package of the Spark program
-- Deployment: support three modes: yarn-cluster, yarn-client, and local
-- Driver cores: the number of driver cores and the driver memory can be set
-- Executor number: the number of executors, executor memory and executor cores can be set
-- Command Line Parameters: Setting the input parameters of Spark program to support the replacement of custom parameter variables.
-- Other parameters: support --jars, --files, --archives, --conf format
-- Resource: If a resource file is referenced in other parameters, you need to select the specified resource.
-- Custom parameters: user-defined parameters local to the SPARK task that replace the ${variable} placeholders in the script
-
-Note: JAVA and Scala are only used for identification; there is no difference. For Spark programs developed in Python there is no main function class, and everything else is the same.
-
-### MapReduce(MR)
-  - Using MR nodes, MR programs can be executed directly. For MR nodes, the worker submits tasks using `hadoop jar`
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_MR.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
- 1. JAVA program
-
- <p align="center">
-    <img src="https://user-images.githubusercontent.com/53217792/61851102-91023f00-aee8-11e9-9ac0-dbe588d860c2.png" width="60%" />
-  </p>
-
-- Class of the main function: The full path of the MR program's entry Main Class
-- Program Type: Select JAVA Language
-- Main jar package: the jar package of the MR program
-- Command Line Parameters: Setting the input parameters of MR program to support the replacement of custom parameter variables
-- Other parameters: support -D, -files, -libjars, -archives format
-- Resource: If a resource file is referenced in other parameters, you need to select the specified resource.
-- Custom parameters: user-defined parameters local to the MR task that replace the ${variable} placeholders in the script
-
-2. Python program
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61851224-f3f3d600-aee8-11e9-8862-435220bbda93.png" width="60%" />
- </p>
-
-- Program Type: Select Python Language
-- Main jar package: the jar package for running the Python MR program
-- Other parameters: support -D, -mapper, -reducer, -input, -output format, where user-defined parameters can be set, such as:
-- -mapper "mapper.py 1" -file mapper.py -reducer reducer.py -file reducer.py -input /journey/words.txt -output /journey/out/mr/${currentTimeMillis}
-- Here "mapper.py 1" after -mapper contains two parameters: the first parameter is mapper.py and the second parameter is 1.
-- Resource: If a resource file is referenced in other parameters, you need to select the specified resource.
-- Custom parameters: user-defined parameters local to the MR task that replace the ${variable} placeholders in the script
-
-### Python
-  - With Python nodes, Python scripts can be executed directly. For Python nodes, the worker uses `python **` to submit tasks.
-
-
-
-
-> Drag the ![PNG](https://analysys.github.io/easyscheduler_docs/images/toolbar_PYTHON.png) task node in the toolbar onto the palette and double-click the task node as follows:
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61851959-daec2480-aeea-11e9-83fd-3e00a030cb84.png" width="60%" />
- </p>
-
-- Script: User-developed Python program
-- Resource: A list of resource files that need to be invoked in a script
-- Custom parameters: user-defined parameters local to the Python task that replace the ${variable} placeholders in the script
-
-### System parameter
-
-<table>
-    <tr><th>variable</th><th>meaning</th></tr>
-    <tr>
-        <td>${system.biz.date}</td>
-        <td>One day before the scheduled time of the routine instance, in yyyyMMdd format; when supplementing data, this date + 1</td>
-    </tr>
-    <tr>
-        <td>${system.biz.curdate}</td>
-        <td>The scheduled time of the routine instance, in yyyyMMdd format; when supplementing data, this date + 1</td>
-    </tr>
-    <tr>
-        <td>${system.datetime}</td>
-        <td>The scheduled time of the routine instance, in yyyyMMddHHmmss format; when supplementing data, this date + 1</td>
-    </tr>
-</table>
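-
-For example, if an instance is scheduled for 2019-08-02 00:00:00, the system parameters would, under the behavior described above, render as:
-
-```
-${system.biz.date}    -> 20190801
-${system.biz.curdate} -> 20190802
-${system.datetime}    -> 20190802000000
-```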
-
-
-### Time Customization Parameters
-
-> Custom variable names are supported in code; the declaration is ${variable name}. It can refer to "system parameters" or specify "constants".
-
-> When we define the benchmark variable as $[...], [yyyyMMddHHmmss] can be decomposed and combined arbitrarily, such as: $[yyyyMMdd], $[HHmmss], $[yyyy-MM-dd], etc.
-
-> Can also do this:
->
-> 
-
-- N years later: $[add_months(yyyyMMdd,12*N)]
-- N years earlier: $[add_months(yyyyMMdd,-12*N)]
-- N months later: $[add_months(yyyyMMdd,N)]
-- N months earlier: $[add_months(yyyyMMdd,-N)]
-- N weeks later: $[yyyyMMdd+7*N]
-- N weeks earlier: $[yyyyMMdd-7*N]
-- N days later: $[yyyyMMdd+N]
-- N days earlier: $[yyyyMMdd-N]
-- N hours later: $[HHmmss+N/24]
-- N hours earlier: $[HHmmss-N/24]
-- N minutes later: $[HHmmss+N/24/60]
-- N minutes earlier: $[HHmmss-N/24/60]
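-
-For example, for a run at 2019-08-02 10:00:00 these expressions would, under the rules above, evaluate to:
-
-```
-$[yyyyMMdd]               -> 20190802
-$[add_months(yyyyMMdd,1)] -> 20190902
-$[yyyyMMdd+7*1]           -> 20190809
-$[HHmmss-1/24]            -> 090000
-```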
-
-
-
-### User-defined parameters
-
-> User-defined parameters are divided into global parameters and local parameters. Global parameters are the global parameters passed when the process definition and process instance are saved. Global parameters can be referenced by local parameters of any task node in the whole process.
-
-> For example:
-<p align="center">
-   <img src="https://analysys.github.io/easyscheduler_docs/images/save-global-parameters.png" width="60%" />
- </p>
-
-> global_bizdate is a global parameter, referring to system parameters.
-
-<p align="center">
-   <img src="https://user-images.githubusercontent.com/53217792/61857313-78992100-aef6-11e9-9ba3-521c6ca33ce3.png" width="60%" />
- </p>
-
-> In tasks, local_param_bizdate refers to the global parameter via ${global_bizdate}. In scripts, the value of local_param_bizdate can be referenced via ${local_param_bizdate}, or the value of local_param_bizdate can be set directly.
-
-
-

+ 0 - 39
docs/en_US/upgrade.md

@@ -1,39 +0,0 @@
-
-# EasyScheduler upgrade documentation
-
-## 1. Back up the previous version of the files and database
-
-## 2. Stop all services of dolphinscheduler
-
- `sh ./script/stop-all.sh`
-
-## 3. Download the new version of the installation package
-
-- [gitee](https://gitee.com/easyscheduler/EasyScheduler/attach_files), download the latest version of the front and back installation packages (backend referred to as dolphinscheduler-backend, front end referred to as dolphinscheduler-ui)
-- The following upgrade operations need to be performed in the new version of the directory
-
-## 4. Database upgrade
-- Modify the following properties in conf/dao/data_source.properties
-
-```
-    spring.datasource.url
-    spring.datasource.username
-    spring.datasource.password
-```
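-
-For example (the host, database name and credentials below are placeholders):
-
-```
-    spring.datasource.url=jdbc:mysql://192.168.xx.xx:3306/dolphinscheduler?characterEncoding=UTF-8
-    spring.datasource.username=xx
-    spring.datasource.password=xx
-```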
-
-- Execute database upgrade script
-
-`sh ./script/upgrade-dolphinscheduler.sh`
-
-## 5. Backend service upgrade
-
-- Modify the content of the install.sh configuration and execute the upgrade script
-  
-  `sh install.sh`
-
-## 6. Frontend service upgrade
-
-- Overwrite the previous version of the dist directory
-- Restart the nginx service
-  
-    `systemctl restart nginx`

+ 0 - 16
docs/zh_CN/1.0.1-release.md

@@ -1,16 +0,0 @@
-Easy Scheduler Release 1.0.1
-===
-Easy Scheduler 1.0.1 is the second release in the 1.x series. The updates are as follows:
-
-- 1. Support for sending mail via Outlook TLS
-- 2. Resolved the jar conflict between servlet and protobuf
-- 3. Create the corresponding Linux user when creating a tenant
-- 4. Fixed negative rerun times
-- 5. Both standalone and cluster can be deployed with one click via install.sh
-- 6. Queues can be added through the UI
-- 7. Added create_time and update_time fields to escheduler.t_escheduler_queue
-
-
-
-
-

File diff suppressed because it is too large
+ 0 - 49
docs/zh_CN/1.0.2-release.md


+ 0 - 30
docs/zh_CN/1.0.3-release.md

@@ -1,30 +0,0 @@
-Easy Scheduler Release 1.0.3
-===
-Easy Scheduler 1.0.3 is the fourth release in the 1.x series.
-
-Enhancements:
-===
--  [[EasyScheduler-482]](https://github.com/analysys/EasyScheduler/issues/482) Added support for custom variables in the mail subject of SQL tasks
--  [[EasyScheduler-483]](https://github.com/analysys/EasyScheduler/issues/483) If sending mail fails in a SQL task, the SQL task is marked as failed
--  [[EasyScheduler-484]](https://github.com/analysys/EasyScheduler/issues/484) Modified the replacement rules for custom variables in SQL tasks to support replacing multiple single and double quotes
--  [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/485) When creating a resource file, verify whether the resource file already exists on HDFS
-
-Fixes:
-===
--  [[EasyScheduler-198]](https://github.com/analysys/EasyScheduler/issues/198) Sort the process definition list by schedule status and update time
--  [[EasyScheduler-419]](https://github.com/analysys/EasyScheduler/issues/419) Fixed creating a file online returning success even though the HDFS file was not created
--  [[EasyScheduler-481]](https://github.com/analysys/EasyScheduler/issues/481) Fixed the problem that a schedule cannot be taken offline when the job does not exist
--  [[EasyScheduler-425]](https://github.com/analysys/EasyScheduler/issues/425) When killing a task, also kill its child processes
--  [[EasyScheduler-422]](https://github.com/analysys/EasyScheduler/issues/422) Fixed the update time and size not being updated when updating a resource file
--  [[EasyScheduler-431]](https://github.com/analysys/EasyScheduler/issues/431) Fixed deleting a tenant failing when HDFS is not started
--  [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/486) When the shell process exits, wait until the yarn state is terminal
-
-Thanks:
-===
-Last but most important, without the contributions of the following partners there would be no new release:
-
-Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, 
-feloxx, coding-now, hymzcn, nysyxxg, chgxtony 
-
-And many enthusiastic partners in the WeChat group! Thank you very much!
-

+ 0 - 28
docs/zh_CN/1.0.4-release.md

@@ -1,28 +0,0 @@
-Easy Scheduler Release 1.0.4
-===
-Easy Scheduler 1.0.4 is the fifth release in the 1.x series.
-
-**Fixes**:
--  [[EasyScheduler-198]](https://github.com/analysys/EasyScheduler/issues/198) Sort the process definition list by schedule status and update time
--  [[EasyScheduler-419]](https://github.com/analysys/EasyScheduler/issues/419) Fixed creating a file online returning success even though the HDFS file was not created
--  [[EasyScheduler-481]](https://github.com/analysys/EasyScheduler/issues/481) Fixed the problem that a schedule cannot be taken offline when the job does not exist
--  [[EasyScheduler-425]](https://github.com/analysys/EasyScheduler/issues/425) When killing a task, also kill its child processes
--  [[EasyScheduler-422]](https://github.com/analysys/EasyScheduler/issues/422) Fixed the update time and size not being updated when updating a resource file
--  [[EasyScheduler-431]](https://github.com/analysys/EasyScheduler/issues/431) Fixed deleting a tenant failing when HDFS is not started
--  [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/486) When the shell process exits, wait until the yarn state is terminal
-
-**Enhancements**:
--  [[EasyScheduler-482]](https://github.com/analysys/EasyScheduler/issues/482) Added support for custom variables in the mail subject of SQL tasks
--  [[EasyScheduler-483]](https://github.com/analysys/EasyScheduler/issues/483) If sending mail fails in a SQL task, the SQL task is marked as failed
--  [[EasyScheduler-484]](https://github.com/analysys/EasyScheduler/issues/484) Modified the replacement rules for custom variables in SQL tasks to support replacing multiple single and double quotes
--  [[EasyScheduler-485]](https://github.com/analysys/EasyScheduler/issues/485) When creating a resource file, verify whether the resource file already exists on HDFS
-
-
-Thanks:
-===
-Last but most important, without the contributions of the following partners there would be no new release (in no particular order):
-
-Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, 
-feloxx, coding-now, hymzcn, nysyxxg, chgxtony, lfyee, Crossoverrr, gj-zhang, sunnyingit, xianhu, zhengqiangtan
-
-And many enthusiastic partners in the WeChat/DingTalk groups! Thank you very much!

+ 0 - 23
docs/zh_CN/1.0.5-release.md

@@ -1,23 +0,0 @@
-Easy Scheduler Release 1.0.5
-===
-Easy Scheduler 1.0.5 is the sixth release in the 1.x series.
-
-Enhancements:
-===
-- [[EasyScheduler-597]](https://github.com/analysys/EasyScheduler/issues/597) child process cannot extend father's receivers and cc
-
-Fixes
-===
-- [[EasyScheduler-516]](https://github.com/analysys/EasyScheduler/issues/516) The task instance of MR cannot stop in some cases
-- [[EasyScheduler-594]](https://github.com/analysys/EasyScheduler/issues/594) The process (parent and child processes) still exists after a soft kill of the task
-
-
-Thanks:
-===
-Last but most important, without the contributions of the following partners there would be no new release:
-
-Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, feloxx, coding-now, hymzcn, nysyxxg, chgxtony, gj-zhang, xianhu, sunnyingit,
-zhengqiangtan, chinashenkai
-
-And many enthusiastic partners in the WeChat group! Thank you very much!
-

+ 0 - 63
docs/zh_CN/1.1.0-release.md

@@ -1,63 +0,0 @@
-Easy Scheduler Release 1.1.0
-===
-Easy Scheduler 1.1.0是1.1.x系列中的第一个版本。
-
-新特性:
-===
-- [[EasyScheduler-391](https://github.com/analysys/EasyScheduler/issues/391)] run a process under a specified tenement user
-- [[EasyScheduler-288](https://github.com/analysys/EasyScheduler/issues/288)] Feature/qiye_weixin
-- [[EasyScheduler-189](https://github.com/analysys/EasyScheduler/issues/189)] Kerberos等安全支持
-- [[EasyScheduler-398](https://github.com/analysys/EasyScheduler/issues/398)]管理员,有租户(install.sh设置默认租户),可以创建资源、项目和数据源(限制有一个管理员)
-- [[EasyScheduler-293](https://github.com/analysys/EasyScheduler/issues/293)]点击运行流程时候选择的参数,没有地方可查看,也没有保存
-- [[EasyScheduler-401](https://github.com/analysys/EasyScheduler/issues/401)]定时很容易定时每秒一次,定时完成以后可以在页面显示一下下次触发时间
-- [[EasyScheduler-493](https://github.com/analysys/EasyScheduler/pull/493)]add datasource kerberos auth and FAQ modify and add resource upload s3
-
-
-Enhancements:
-===
-- [[EasyScheduler-227](https://github.com/analysys/EasyScheduler/issues/227)] Upgrade Spring Boot to 2.1.x and Spring to 5.x
-- [[EasyScheduler-434](https://github.com/analysys/EasyScheduler/issues/434)] The number of worker nodes is inconsistent between ZooKeeper and MySQL
-- [[EasyScheduler-435](https://github.com/analysys/EasyScheduler/issues/435)] Email format validation
-- [[EasyScheduler-441](https://github.com/analysys/EasyScheduler/issues/441)] Include forbidden-to-run nodes in the completed-node check
-- [[EasyScheduler-400](https://github.com/analysys/EasyScheduler/issues/400)] On the home page, the queue statistics are inconsistent and the command statistics show no data
-- [[EasyScheduler-395](https://github.com/analysys/EasyScheduler/issues/395)] For fault-tolerance-recovered processes, the state must not be **running**
-- [[EasyScheduler-529](https://github.com/analysys/EasyScheduler/issues/529)] Optimize polling tasks from ZooKeeper
-- [[EasyScheduler-242](https://github.com/analysys/EasyScheduler/issues/242)] Performance issue when worker-server nodes fetch tasks
-- [[EasyScheduler-352](https://github.com/analysys/EasyScheduler/issues/352)] Worker grouping and queue consumption issues
-- [[EasyScheduler-461](https://github.com/analysys/EasyScheduler/issues/461)] When viewing data source parameters, the account and password information must be encrypted
-- [[EasyScheduler-396](https://github.com/analysys/EasyScheduler/issues/396)] Optimize the Dockerfile and link it with GitHub to build images automatically
-- [[EasyScheduler-389](https://github.com/analysys/EasyScheduler/issues/389)] The service monitor cannot detect master/worker changes
-- [[EasyScheduler-511](https://github.com/analysys/EasyScheduler/issues/511)] Support recovering a process from stopped/killed nodes
-- [[EasyScheduler-399](https://github.com/analysys/EasyScheduler/issues/399)] HadoopUtils should operate as a specified user rather than the deployment user
-- [[EasyScheduler-378](https://github.com/analysys/EasyScheduler/issues/378)] Email regex matching
-- [[EasyScheduler-625](https://github.com/analysys/EasyScheduler/issues/625)] EasyScheduler shell call reports "task instance not set host"
-- [[EasyScheduler-622](https://github.com/analysys/EasyScheduler/issues/622)] Session error when the front end is deployed on Kubernetes and the back end on a big data cluster
-
-Bug fixes:
-===
-- [[EasyScheduler-394](https://github.com/analysys/EasyScheduler/issues/394)] When master and worker are deployed on the same machine, restarting the master and worker services prevents previously scheduled tasks from being scheduled again
-- [[EasyScheduler-469](https://github.com/analysys/EasyScheduler/issues/469)] Fix naming errors on the monitor page
-- [[EasyScheduler-392](https://github.com/analysys/EasyScheduler/issues/392)] Feature request: fix the email regex check
-- [[EasyScheduler-405](https://github.com/analysys/EasyScheduler/issues/405)] On the schedule add/edit page, the start time and end time cannot be the same
-- [[EasyScheduler-517](https://github.com/analysys/EasyScheduler/issues/517)] Complement data - sub-workflow - time parameter
-- [[EasyScheduler-532](https://github.com/analysys/EasyScheduler/issues/532)] Python nodes fail to execute
-- [[EasyScheduler-543](https://github.com/analysys/EasyScheduler/issues/543)] Improve the safety of data source connection parameters
-- [[EasyScheduler-569](https://github.com/analysys/EasyScheduler/issues/569)] Scheduled tasks cannot be truly stopped
-- [[EasyScheduler-463](https://github.com/analysys/EasyScheduler/issues/463)] Email validation does not support addresses with uncommon suffixes
-- [[EasyScheduler-650](https://github.com/analysys/EasyScheduler/issues/650)] Creating a Hive data source without a principal causes the connection to fail
-- [[EasyScheduler-641](https://github.com/analysys/EasyScheduler/issues/641)] Cellphone numbers in the 199 telecom segment are not supported when creating a user
-- [[EasyScheduler-627](https://github.com/analysys/EasyScheduler/issues/627)] Logs of parallel SQL node tasks in the same workflow get mixed together
-- [[EasyScheduler-655](https://github.com/analysys/EasyScheduler/issues/655)] When deploying a Spark task with a non-empty tenant queue, an empty queue name is set
-- [[EasyScheduler-667](https://github.com/analysys/EasyScheduler/issues/667)] HivePreparedStatement cannot print the actual SQL executed
-
-
-
-
-Thanks:
-===
-Last but not least, this release would not have been possible without the contributions of the following partners:
-
-Baoqi, jimmy201602, samz406, petersear, millionfor, hyperknob, fanguanqun, yangqinlong, qq389401879, chgxtony, Stanfan, lfyee, thisnew, hujiang75277381, sunnyingit, lgbo-ustc, ivivi, lzy305, JackIllkid, telltime, lipengbo2018, wuchunfu, chenyuan9028, zhangzhipeng621, 307526982, crazycarry
-
-As well as the many enthusiastic partners in our WeChat group. Many thanks to all of you!
-

File diff suppressed because it is too large
+ 0 - 287
docs/zh_CN/EasyScheduler-FAQ.md


+ 0 - 66
docs/zh_CN/README.md

@@ -1,66 +0,0 @@
-Easy Scheduler
-============
-[![License](https://img.shields.io/badge/license-Apache%202-4EB1BA.svg)](https://www.apache.org/licenses/LICENSE-2.0.html)
-
-> Easy Scheduler for Big Data
-
-**Design features:** a distributed, easily extensible visual DAG workflow task scheduling system, dedicated to solving the intricate dependencies in data processing pipelines so that the scheduler works `out of the box` for data processing.
-Its main goals are as follows:
- - Associate tasks according to their dependencies in a DAG graph, with real-time visual monitoring of task status
- - Support rich task types: Shell, MR, Spark, SQL (MySQL, PostgreSQL, Hive, Spark SQL), Python, Sub_Process, Procedure, and more
- - Support scheduled, dependency-based, and manual workflow runs, manual pause/stop/resume, failure retry/alerting, recovery from a specified failed node, and killing tasks
- - Support workflow priority, task priority, task failover, and task timeout alerting/failure
- - Support workflow-level global parameters and node-level custom parameter settings
- - Support online upload/download and management of resource files, plus online file creation and editing
- - Support online task log viewing and scrolling, and online log download
- - Implement cluster HA: decentralized Master and Worker clusters via ZooKeeper
- - Support online viewing of `Master/Worker` CPU load, memory, and CPU usage
- - Support tree/Gantt views of workflow run history, plus task state and process state statistics
- - Support data complement (backfill)
- - Support multi-tenancy
- - Support internationalization
- - And more features waiting to be explored
-
-### Comparison with similar scheduling systems
-
-![Scheduler comparison](http://geek.analysys.cn/static/upload/47/2019-03-01/9609ca82-cf8b-4d91-8dc0-0e2805194747.jpeg)
-
-### System screenshots
-
-![](http://geek.analysys.cn/static/upload/221/2019-03-29/0a9dea80-fb02-4fa5-a812-633b67035ffc.jpeg)
-
-![](http://geek.analysys.cn/static/upload/221/2019-04-01/83686def-a54f-4169-8cae-77b1f8300cc1.png)
-
-![](http://geek.analysys.cn/static/upload/221/2019-03-29/83c937c7-1793-4d7a-aa28-b98460329fe0.jpeg)
-
-### Documentation
-
-- <a href="https://analysys.github.io/easyscheduler_docs_cn/后端部署文档.html" target="_blank">Back-end deployment document</a>
-
-- <a href="https://analysys.github.io/easyscheduler_docs_cn/前端部署文档.html" target="_blank">Front-end deployment document</a>
-
-- [**User manual**](https://analysys.github.io/easyscheduler_docs_cn/系统使用手册.html?_blank "User manual")
-
-- [**Upgrade document**](https://analysys.github.io/easyscheduler_docs_cn/升级文档.html?_blank "Upgrade document")
-
-- <a href="http://52.82.13.76:8888" target="_blank">Online demo</a>
-
-For more documentation, see the <a href="https://analysys.github.io/easyscheduler_docs_cn/" target="_blank">EasyScheduler online documentation (Chinese)</a>
-
-### Thanks
-
-- Easy Scheduler uses many excellent open-source projects, such as Google's guava, guice, and grpc, netty, Alibaba's bonecp, quartz, and the many open-source projects of Apache.
-It is only by standing on the shoulders of these projects that Easy Scheduler became possible, and we are deeply grateful to all the open-source software we use. We also hope to be not only beneficiaries of open source but contributors as well,
-so we decided to contribute Easy Scheduler back and committed to maintaining it long term. We also hope that partners with the same passion and belief in open source will join us and contribute to the community!
-
-### Help
-The fastest way to get a response from our developers is to submit an issue, or add our WeChat: 510570367
- 
- 
-
-
-
-
-
-
-

+ 0 - 47
docs/zh_CN/SUMMARY.md

@@ -1,47 +0,0 @@
-# Summary
-
-* [EasyScheduler introduction](README.md)
-* Front-end deployment document
-    * [Preparation](前端部署文档.md#1、准备工作)
-    * [Deployment](前端部署文档.md#2、部署)
-    * [FAQ](前端部署文档.md#前端常见问题)
-* Back-end deployment document
-    * [Preparation](后端部署文档.md#1、准备工作)
-    * [Deployment](后端部署文档.md#2、部署)
-* [Quick start](快速上手.md#快速上手)
-* User manual
-    * [Quick start](系统使用手册.md#快速上手)
-    * [Operation guide](系统使用手册.md#操作指南)
-    * [Security center (permission system)](系统使用手册.md#安全中心(权限系统))
-    * [Monitor center](系统使用手册.md#监控中心)
-    * [Task node types and parameter settings](系统使用手册.md#任务节点类型和参数设置)
-    * [System parameters](系统使用手册.md#系统参数)
-* [System architecture design](系统架构设计.md#系统架构设计)
-* Front-end development document
-    * [Development environment setup](前端开发文档.md#开发环境搭建)
-    * [Project directory structure](前端开发文档.md#项目目录结构)
-    * [System function modules](前端开发文档.md#系统功能模块)
-    * [Routing and state management](前端开发文档.md#路由和状态管理)
-    * [Conventions](前端开发文档.md#规范)
-    * [API](前端开发文档.md#接口)
-    * [Extension development](前端开发文档.md#扩展开发)
-* Back-end development document
-    * [Development environment setup](后端开发文档.md#项目编译)
-    * [Custom task plugin document](任务插件开发.md#任务插件开发)
-
-* [API documentation](http://52.82.13.76:8888/dolphinscheduler/doc.html?language=zh_CN&lang=cn)
-* FAQ
-    * [FAQ](EasyScheduler-FAQ.md)
-* System version upgrade document
-    * [Version upgrade](升级文档.md)
-* Release notes
-    * [1.1.0 release](1.1.0-release.md)
-    * [1.0.5 release](1.0.5-release.md)
-    * [1.0.4 release](1.0.4-release.md)
-    * [1.0.3 release](1.0.3-release.md)
-    * [1.0.2 release](1.0.2-release.md)
-    * [1.0.1 release](1.0.1-release.md)
-    * [1.0.0 release, official open source]
-    
-    
-    

+ 0 - 23
docs/zh_CN/book.json

@@ -1,23 +0,0 @@
-{
-  "title": "Scheduling System - EasyScheduler",
-  "author": "",
-  "description": "Scheduling system",
-  "language": "zh-hans",
-  "gitbook": "3.2.3",
-  "styles": {
-    "website": "./styles/website.css"
-  },
-  "structure": {
-    "readme": "README.md"
-  },
-  "plugins":[
-    "expandable-chapters",
-    "insert-logo-link"
-  ],
-  "pluginsConfig": {
-    "insert-logo-link": {
-      "src": "http://geek.analysys.cn/static/upload/236/2019-03-29/379450b4-7919-4707-877c-4d33300377d4.png",
-      "url": "https://github.com/analysys/EasyScheduler"
-    }
-  }
-}

BIN
docs/zh_CN/images/addtenant.png


BIN
docs/zh_CN/images/architecture.jpg


BIN
docs/zh_CN/images/auth_project.png


BIN
docs/zh_CN/images/auth_user.png


BIN
docs/zh_CN/images/cdh_hive_error.png


BIN
docs/zh_CN/images/complement.png


BIN
docs/zh_CN/images/complement_data.png


BIN
docs/zh_CN/images/create-queue.png


BIN
docs/zh_CN/images/dag1.png


BIN
docs/zh_CN/images/dag2.png


BIN
docs/zh_CN/images/dag3.png


BIN
docs/zh_CN/images/dag4.png


BIN
docs/zh_CN/images/dag_examples_cn.jpg


BIN
docs/zh_CN/images/dag_examples_en.jpg


BIN
docs/zh_CN/images/decentralization.png


BIN
docs/zh_CN/images/definition_create.png


BIN
docs/zh_CN/images/definition_edit.png


BIN
docs/zh_CN/images/definition_list.png


BIN
docs/zh_CN/images/depend-node.png


BIN
docs/zh_CN/images/depend-node2.png


BIN
docs/zh_CN/images/depend-node3.png


BIN
docs/zh_CN/images/dependent_edit.png


BIN
docs/zh_CN/images/dependent_edit2.png


BIN
docs/zh_CN/images/dependent_edit3.png


BIN
docs/zh_CN/images/dependent_edit4.png


BIN
docs/zh_CN/images/distributed_lock.png


BIN
docs/zh_CN/images/distributed_lock_procss.png


BIN
docs/zh_CN/images/fault-tolerant.png


BIN
docs/zh_CN/images/fault-tolerant_master.png


BIN
docs/zh_CN/images/fault-tolerant_worker.png


BIN
docs/zh_CN/images/favicon.ico


BIN
docs/zh_CN/images/file-manage.png


BIN
docs/zh_CN/images/file_create.png


BIN
docs/zh_CN/images/file_detail.png


BIN
docs/zh_CN/images/file_rename.png


BIN
docs/zh_CN/images/file_upload.png


BIN
docs/zh_CN/images/gant-pic.png


BIN
docs/zh_CN/images/gantt.png


BIN
docs/zh_CN/images/global_parameter.png


BIN
docs/zh_CN/images/grpc.png


BIN
docs/zh_CN/images/hive_edit.png


BIN
docs/zh_CN/images/hive_edit2.png


BIN
docs/zh_CN/images/hive_kerberos.png


BIN
docs/zh_CN/images/instance-detail.png


BIN
docs/zh_CN/images/instance-list.png


BIN
docs/zh_CN/images/lack_thread.png


BIN
docs/zh_CN/images/local_parameter.png


BIN
docs/zh_CN/images/login.jpg


BIN
docs/zh_CN/images/login.png


BIN
docs/zh_CN/images/logo.png


BIN
docs/zh_CN/images/logout.png


BIN
docs/zh_CN/images/mail_edit.png


BIN
docs/zh_CN/images/master-jk.png


BIN
docs/zh_CN/images/master.png


BIN
docs/zh_CN/images/master2.png


BIN
docs/zh_CN/images/master_slave.png


BIN
docs/zh_CN/images/master_worker_lack_res.png


BIN
docs/zh_CN/images/mr_edit.png


BIN
docs/zh_CN/images/mr_java.png


+ 0 - 0
docs/zh_CN/images/mysql-jk.png


Some files were not shown because too many files changed in this diff