
Merge remote-tracking branch 'upstream/dev-1.1.0' into dev-1.1.0

lenboo 5 years ago
parent commit a386de8f1b
26 changed files with 159 additions and 648 deletions
  1. BIN  docs/zh_CN/images/hive_kerberos.png
  2. BIN  docs/zh_CN/images/sparksql_kerberos.png
  3. +38 -22  docs/zh_CN/系统使用手册.md
  4. +3 -2  escheduler-alert/src/main/java/cn/escheduler/alert/utils/FuncUtils.java
  5. +4 -0  escheduler-api/pom.xml
  6. +1 -0  escheduler-api/src/main/java/cn/escheduler/api/enums/Status.java
  7. +38 -18  escheduler-api/src/main/java/cn/escheduler/api/service/DataSourceService.java
  8. +1 -1  escheduler-api/src/main/java/cn/escheduler/api/service/ProcessDefinitionService.java
  9. +14 -4  escheduler-api/src/main/java/cn/escheduler/api/service/SchedulerService.java
  10. +14 -16  escheduler-api/src/main/java/cn/escheduler/api/service/TenantService.java
  11. +20 -20  escheduler-api/src/main/java/cn/escheduler/api/service/UsersService.java
  12. +3 -1  escheduler-common/src/main/java/cn/escheduler/common/Constants.java
  13. +0 -1  escheduler-common/src/main/java/cn/escheduler/common/zk/AbstractZKClient.java
  14. +4 -0  escheduler-dao/pom.xml
  15. +2 -2  escheduler-dao/src/main/java/cn/escheduler/dao/ProcessDao.java
  16. +5 -3  escheduler-dao/src/main/java/cn/escheduler/dao/upgrade/shell/CreateEscheduler.java
  17. +1 -1  escheduler-dao/src/main/java/cn/escheduler/dao/upgrade/shell/UpgradeEscheduler.java
  18. +2 -1  escheduler-server/src/main/java/cn/escheduler/server/utils/LoggerUtils.java
  19. +0 -2  escheduler-ui/src/js/conf/home/pages/dag/_source/dag.vue
  20. +1 -1  escheduler-ui/src/js/conf/home/pages/monitor/pages/servers/statistics.vue
  21. +0 -5  escheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/timing.vue
  22. +6 -2  escheduler-ui/src/js/conf/home/pages/security/pages/users/_source/createUser.vue
  23. +2 -1  install.sh
  24. +0 -365  sql/escheduler.sql
  25. +0 -179  sql/quartz.sql
  26. +0 -1  sql/upgrade/1.1.0_schema/mysql/escheduler_dml.sql

BIN
docs/zh_CN/images/hive_kerberos.png


BIN
docs/zh_CN/images/sparksql_kerberos.png


+ 38 - 22
docs/zh_CN/系统使用手册.md

@@ -60,7 +60,7 @@
 ### Run a process definition
   - **A process definition that has not been put online can be edited, but it cannot be run**, so put the workflow online first
   > Click the workflow definition to return to the process definition list, then click the "Online" icon to put the workflow definition online.
-  
+
   > Before taking a workflow "offline", first take its schedules offline in schedule management; only then can the workflow definition be taken offline successfully

  - Click "Run" to execute the workflow. Description of the run parameters:
@@ -98,28 +98,28 @@
 
 ### View process instances
   > Click "Workflow Instance" to view the list of process instances.
-  
+
   > Click a workflow name to view the execution status of its tasks.
-  
+
   <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/instance-detail.png" width="60%" />
 </p>
 
   > Click a task node, then click "View Log" to view the task execution log.
-  
+
   <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/task-log.png" width="60%" />
 </p>
- 
+
 > Click a task instance node, then click **View History** to see the list of runs of this task instance within the process instance
- 
+
 <p align="center">
    <img src="https://analysys.github.io/EasyScheduler/zh_CN/images/task_history.png" width="60%" />
  </p>
 
 
  > Operations on a workflow instance:
-  
+
<p align="center">
   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/instance-list.png" width="60%" />
</p>
@@ -165,7 +165,7 @@
  - Password: set the password for connecting to MySQL
  - Database name: enter the name of the MySQL database to connect to
  - Jdbc connection parameters: parameter settings for the MySQL connection, filled in as JSON
-  
+
<p align="center">
   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mysql_edit.png" width="60%" />
 </p>
@@ -191,7 +191,7 @@
 #### Create and edit a HIVE datasource
 
 1. Connect using HiveServer2
- 
+
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_edit.png" width="60%" />
  </p>
@@ -207,12 +207,20 @@
  - Jdbc connection parameters: parameter settings for the HIVE connection, filled in as JSON
 
 2. Connect using HiveServer2 HA via Zookeeper
- 
+
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_edit2.png" width="60%" />
  </p>
 
 
+Note: if **kerberos** is enabled, the **Principal** must be filled in
+<p align="center">
+    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/hive_kerberos.png" width="60%" />
+  </p>
+
+
+
+
 #### Create and edit a Spark datasource
 
 <p align="center">
@@ -229,9 +237,17 @@
 - Database name: enter the name of the Spark database to connect to
 - Jdbc connection parameters: parameter settings for the Spark connection, filled in as JSON
 
+
+
+Note: if **kerberos** is enabled, the **Principal** must be filled in
+
+<p align="center">
+    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/sparksql_kerberos.png" width="60%" />
+  </p>
+
 ### Upload resources
   - Upload resource files and UDF functions; all uploaded files and resources are stored on HDFS, so the following configuration items are required:
-  
+
```
conf/common/common.properties
    -- hdfs.startup.state=true
@@ -242,7 +258,7 @@ conf/common/hadoop.properties
```
 
 #### File management
-  
+
   > Management of all kinds of resource files, including creating basic txt/log/sh/conf files, uploading jars and other file types, and editing, downloading, and deleting them.
  <p align="center">
  <img src="https://analysys.github.io/easyscheduler_docs_cn/images/file-manage.png" width="60%" />
@@ -287,7 +303,7 @@ conf/common/hadoop.properties
 
 #### Resource management
   > Resource management is similar to file management; the difference is that resource management uploads UDF functions, whereas file management uploads user programs, scripts, and configuration files
- 
+
   * Upload UDF resources
   > Same as uploading files.
 
@@ -303,7 +319,7 @@ conf/common/hadoop.properties
  - Parameters: used to annotate the input parameters of the function
  - Database name: reserved field, for creating permanent UDF functions
  - UDF resource: set the resource file corresponding to the created UDF
-  
+
<p align="center">
   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/udf_edit.png" width="60%" />
 </p>
@@ -312,7 +328,7 @@ conf/common/hadoop.properties
 
   - The security center is available only to administrator accounts; it provides queue management, tenant management, user management, alert group management, worker grouping, token management, and so on, and can also grant authorization on resources, datasources, projects, etc.
   - Administrator login, default username/password: admin/escheduler123
-  
+
 ### Create a queue
   - Queues are used when running spark, mapreduce, and similar programs that require a "queue" parameter.
   - "Security Center" -> "Queue Management" -> "Create Queue"
@@ -357,7 +373,7 @@ conf/common/hadoop.properties
 ### Token management
   - Because the backend interfaces have a login check, token management provides a way to perform all kinds of operations on the system by calling those interfaces.
   - Invocation example:
-  
+
```token invocation example
    /**
     * test token
@@ -477,15 +493,15 @@ conf/common/hadoop.properties
 
 ### Dependent (DEPENDENT) node
   - A dependent node is a **dependency check node**. For example, if process A depends on yesterday's run of process B succeeding, the dependent node checks whether process B had a successfully executed instance yesterday.
-  
+
 > Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_DEPENDENT.png) task node from the toolbar onto the canvas and double-click the task node, as shown below:
 
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/dependent_edit.png" width="60%" />
  </p>
-  
+
   > The dependent node provides logical judgment, for example checking whether yesterday's process B succeeded, or whether process C executed successfully.
-  
+
   <p align="center">
   <img src="https://analysys.github.io/easyscheduler_docs_cn/images/depend-node.png" width="80%" />
 </p>
@@ -536,7 +552,7 @@ conf/common/hadoop.properties
 
 ### SPARK node
   - The SPARK node executes SPARK programs directly; for spark nodes, the worker submits the task using `spark-submit`
-  
+
 > Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_SPARK.png) task node from the toolbar onto the canvas and double-click the task node, as shown below:
 
 <p align="center">
@@ -563,7 +579,7 @@ conf/common/hadoop.properties
 > Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_MR.png) task node from the toolbar onto the canvas and double-click the task node, as shown below:
 
  1. JAVA program
- 
+
 <p align="center">
    <img src="https://analysys.github.io/easyscheduler_docs_cn/images/mr_java.png" width="60%" />
  </p>
@@ -592,7 +608,7 @@ conf/common/hadoop.properties
 
 ### Python node
   - The python node executes python scripts directly; for python nodes, the worker submits the task using `python **`.
-  
+
 
 > Drag the ![PNG](https://analysys.github.io/easyscheduler_docs_cn/images/toolbar_PYTHON.png) task node from the toolbar onto the canvas and double-click the task node, as shown below:
 

+ 3 - 2
escheduler-alert/src/main/java/cn/escheduler/alert/utils/FuncUtils.java

@@ -22,10 +22,11 @@ public class FuncUtils {
         StringBuilder sb = new StringBuilder();
         boolean first = true;
         for (String item : list) {
-            if (first)
+            if (first) {
                 first = false;
-            else
+            } else {
                 sb.append(split);
+            }
             sb.append(item);
         }
         return sb.toString();
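
For context, the helper reads as below once the braces are in; the enclosing signature is an assumption (the hunk shows only the loop body), modeled on a typical `mkString(list, separator)` join:

```java
// Hypothetical reconstruction of the join helper after this change; the
// method name and parameter types are assumed, not shown by the hunk.
public static String mkString(Iterable<String> list, String split) {
    StringBuilder sb = new StringBuilder();
    boolean first = true;
    for (String item : list) {
        if (first) {
            first = false;       // no separator before the first element
        } else {
            sb.append(split);    // separator between subsequent elements
        }
        sb.append(item);
    }
    return sb.toString();
}
```

Behavior is unchanged; the braces only make the separator logic harder to break in future edits.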

+ 4 - 0
escheduler-api/pom.xml

@@ -47,6 +47,10 @@
           <groupId>org.springframework.boot</groupId>
           <artifactId>spring-boot-starter-tomcat</artifactId>
         </exclusion>
+        <exclusion>
+          <artifactId>log4j-to-slf4j</artifactId>
+          <groupId>org.apache.logging.log4j</groupId>
+        </exclusion>
       </exclusions>
     </dependency>
 

+ 1 - 0
escheduler-api/src/main/java/cn/escheduler/api/enums/Status.java

@@ -163,6 +163,7 @@ public enum Status {
     BATCH_DELETE_PROCESS_INSTANCE_BY_IDS_ERROR(10117,"batch delete process instance by ids {0} error"),
     PREVIEW_SCHEDULE_ERROR(10139,"preview schedule error"),
     PARSE_TO_CRON_EXPRESSION_ERROR(10140,"parse cron to cron expression error"),
+    SCHEDULE_START_TIME_END_TIME_SAME(10141,"The start time must not be the same as the end"),
 
 
     UDF_FUNCTION_NOT_EXIST(20001, "UDF function not found"),

+ 38 - 18
escheduler-api/src/main/java/cn/escheduler/api/service/DataSourceService.java

@@ -17,16 +17,14 @@
 package cn.escheduler.api.service;
 
 import cn.escheduler.api.enums.Status;
-import cn.escheduler.api.utils.CheckUtils;
 import cn.escheduler.api.utils.Constants;
 import cn.escheduler.api.utils.PageInfo;
 import cn.escheduler.api.utils.Result;
 import cn.escheduler.common.enums.DbType;
-import cn.escheduler.common.enums.ResUploadType;
 import cn.escheduler.common.enums.UserType;
 import cn.escheduler.common.job.db.*;
 import cn.escheduler.common.utils.CommonUtils;
-import cn.escheduler.common.utils.PropertyUtils;
+import cn.escheduler.common.utils.JSONUtils;
 import cn.escheduler.dao.mapper.DataSourceMapper;
 import cn.escheduler.dao.mapper.DatasourceUserMapper;
 import cn.escheduler.dao.mapper.ProjectMapper;
@@ -48,7 +46,6 @@ import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.util.*;
 
-import static cn.escheduler.common.utils.PropertyUtils.getBoolean;
 import static cn.escheduler.common.utils.PropertyUtils.getString;
 
 /**
@@ -67,17 +64,13 @@ public class DataSourceService extends BaseService{
     public static final String PRINCIPAL = "principal";
     public static final String DATABASE = "database";
     public static final String USER_NAME = "userName";
-    public static final String PASSWORD = "password";
+    public static final String PASSWORD = cn.escheduler.common.Constants.PASSWORD;
     public static final String OTHER = "other";
 
-    @Autowired
-    private ProjectMapper projectMapper;
 
     @Autowired
     private DataSourceMapper dataSourceMapper;
 
-    @Autowired
-    private ProjectService projectService;
 
     @Autowired
     private DatasourceUserMapper datasourceUserMapper;
@@ -296,13 +289,37 @@ public class DataSourceService extends BaseService{
      * @return
      */
     private List<DataSource> getDataSources(User loginUser, String searchVal, Integer pageSize, PageInfo pageInfo) {
+        List<DataSource> dataSourceList = null;
         if (isAdmin(loginUser)) {
-            return dataSourceMapper.queryAllDataSourcePaging(searchVal, pageInfo.getStart(), pageSize);
+            dataSourceList = dataSourceMapper.queryAllDataSourcePaging(searchVal, pageInfo.getStart(), pageSize);
+        }else{
+            dataSourceList = dataSourceMapper.queryDataSourcePaging(loginUser.getId(), searchVal,
+                    pageInfo.getStart(), pageSize);
+        }
+
+        handlePasswd(dataSourceList);
+
+        return dataSourceList;
+    }
+
+
+    /**
+     * handle datasource connection password for safety
+     * @param dataSourceList
+     */
+    private void handlePasswd(List<DataSource> dataSourceList) {
+
+        for (DataSource dataSource : dataSourceList) {
+
+            String connectionParams  = dataSource.getConnectionParams();
+            JSONObject  object = JSONObject.parseObject(connectionParams);
+            object.put(cn.escheduler.common.Constants.PASSWORD, cn.escheduler.common.Constants.XXXXXX);
+            dataSource.setConnectionParams(JSONUtils.toJson(object));
+
         }
-        return dataSourceMapper.queryDataSourcePaging(loginUser.getId(), searchVal,
-                pageInfo.getStart(), pageSize);
     }
 
+
     /**
      * get datasource total num
      *
@@ -501,7 +518,10 @@ public class DataSourceService extends BaseService{
         parameterMap.put(Constants.JDBC_URL, jdbcUrl);
         parameterMap.put(Constants.USER, userName);
         parameterMap.put(Constants.PASSWORD, password);
-        parameterMap.put(Constants.PRINCIPAL,principal);
+        if (CommonUtils.getKerberosStartupState() &&
+                (type == DbType.HIVE || type == DbType.SPARK)){
+            parameterMap.put(Constants.PRINCIPAL,principal);
+        }
         if (other != null && !"".equals(other)) {
             Map map = JSONObject.parseObject(other, new TypeReference<LinkedHashMap<String, String>>() {
             });
@@ -660,13 +680,13 @@ public class DataSourceService extends BaseService{
      */
     private String[] getHostsAndPort(String address) {
         String[] result = new String[2];
-        String[] tmpArray = address.split("//");
+        String[] tmpArray = address.split(cn.escheduler.common.Constants.DOUBLE_SLASH);
         String hostsAndPorts = tmpArray[tmpArray.length - 1];
-        StringBuilder hosts = new StringBuilder("");
-        String[] hostPortArray = hostsAndPorts.split(",");
-        String port = hostPortArray[0].split(":")[1];
+        StringBuilder hosts = new StringBuilder();
+        String[] hostPortArray = hostsAndPorts.split(cn.escheduler.common.Constants.COMMA);
+        String port = hostPortArray[0].split(cn.escheduler.common.Constants.COLON)[1];
         for (String hostPort : hostPortArray) {
-            hosts.append(hostPort.split(":")[0]).append(",");
+            hosts.append(hostPort.split(cn.escheduler.common.Constants.COLON)[0]).append(cn.escheduler.common.Constants.COMMA);
         }
         hosts.deleteCharAt(hosts.length() - 1);
         result[0] = hosts.toString();
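
To make the new `handlePasswd` step concrete, here is a minimal sketch of the masking on a sample connection string; it uses fastjson's `JSONObject` (which this service already imports) and inlines the `Constants.PASSWORD` / `Constants.XXXXXX` values. The sample JSON is made up:

```java
import com.alibaba.fastjson.JSONObject;

public class MaskPasswordDemo {
    public static void main(String[] args) {
        // hypothetical stored connection params for a datasource
        String connectionParams =
                "{\"address\":\"jdbc:mysql://192.168.xx.xx:3306\",\"user\":\"root\",\"password\":\"secret\"}";
        JSONObject object = JSONObject.parseObject(connectionParams);
        object.put("password", "******");   // same effect as Constants.PASSWORD -> Constants.XXXXXX
        // paging queries now return {"address":"...","user":"root","password":"******"}
        System.out.println(object.toJSONString());
    }
}
```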

+ 1 - 1
escheduler-api/src/main/java/cn/escheduler/api/service/ProcessDefinitionService.java

@@ -490,7 +490,7 @@ public class ProcessDefinitionService extends BaseDAGService {
                     // set status
                     schedule.setReleaseState(ReleaseState.OFFLINE);
                     scheduleMapper.update(schedule);
-                    deleteSchedule(project.getId(), id);
+                    deleteSchedule(project.getId(), schedule.getId());
                 }
                 break;
             }

+ 14 - 4
escheduler-api/src/main/java/cn/escheduler/api/service/SchedulerService.java

@@ -119,6 +119,11 @@ public class SchedulerService extends BaseService {
         scheduleObj.setProcessDefinitionName(processDefinition.getName());
 
         ScheduleParam scheduleParam = JSONUtils.parseObject(schedule, ScheduleParam.class);
+        if (DateUtils.differSec(scheduleParam.getStartTime(),scheduleParam.getEndTime()) == 0) {
+            logger.warn("The start time must not be the same as the end");
+            putMsg(result,Status.SCHEDULE_START_TIME_END_TIME_SAME);
+            return result;
+        }
         scheduleObj.setStartTime(scheduleParam.getStartTime());
         scheduleObj.setEndTime(scheduleParam.getEndTime());
         if (!org.quartz.CronExpression.isValidExpression(scheduleParam.getCrontab())) {
@@ -205,6 +210,11 @@ public class SchedulerService extends BaseService {
         // updateProcessInstance param
         if (StringUtils.isNotEmpty(scheduleExpression)) {
             ScheduleParam scheduleParam = JSONUtils.parseObject(scheduleExpression, ScheduleParam.class);
+            if (DateUtils.differSec(scheduleParam.getStartTime(),scheduleParam.getEndTime()) == 0) {
+                logger.warn("The start time must not be the same as the end");
+                putMsg(result,Status.SCHEDULE_START_TIME_END_TIME_SAME);
+                return result;
+            }
             schedule.setStartTime(scheduleParam.getStartTime());
             schedule.setEndTime(scheduleParam.getEndTime());
             if (!org.quartz.CronExpression.isValidExpression(scheduleParam.getCrontab())) {
@@ -446,14 +456,14 @@ public class SchedulerService extends BaseService {
     /**
      * delete schedule
      */
-    public static void deleteSchedule(int projectId, int processId) throws RuntimeException{
-        logger.info("delete schedules of project id:{}, flow id:{}", projectId, processId);
+    public static void deleteSchedule(int projectId, int scheduleId) throws RuntimeException{
+        logger.info("delete schedules of project id:{}, schedule id:{}", projectId, scheduleId);
 
-        String jobName = QuartzExecutors.buildJobName(processId);
+        String jobName = QuartzExecutors.buildJobName(scheduleId);
         String jobGroupName = QuartzExecutors.buildJobGroupName(projectId);
 
         if(!QuartzExecutors.getInstance().deleteJob(jobName, jobGroupName)){
-            logger.warn("set offline failure:projectId:{},processId:{}",projectId,processId);
+            logger.warn("set offline failure:projectId:{},scheduleId:{}",projectId,scheduleId);
             throw new RuntimeException(String.format("set offline failure"));
         }
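
The new start/end validation, reduced to a standalone sketch; `DateUtils.differSec` is assumed (consistently with how the hunk uses it) to return the difference between two dates in seconds, so a zero difference means the schedule window is empty:

```java
import java.util.Date;

public class ScheduleWindowCheck {
    // hypothetical stand-in for cn.escheduler.common.utils.DateUtils.differSec
    static long differSec(Date d1, Date d2) {
        return Math.abs(d1.getTime() - d2.getTime()) / 1000;
    }

    // mirrors the guard added in setSchedule/updateSchedule: identical
    // start and end times are rejected before the cron is even validated
    static boolean isEmptyScheduleWindow(Date start, Date end) {
        return differSec(start, end) == 0;
    }
}
```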
 

+ 14 - 16
escheduler-api/src/main/java/cn/escheduler/api/service/TenantService.java

@@ -239,24 +239,25 @@ public class TenantService extends BaseService{
     if (PropertyUtils.getResUploadStartupState()){
       String tenantPath = HadoopUtils.getHdfsDataBasePath() + "/" + tenant.getTenantCode();
 
-      String resourcePath = HadoopUtils.getHdfsDir(tenant.getTenantCode());
-      FileStatus[] fileStatus = HadoopUtils.getInstance().listFileStatus(resourcePath);
-      if (fileStatus.length > 0) {
-        putMsg(result, Status.HDFS_TERANT_RESOURCES_FILE_EXISTS);
-        return result;
-      }
-      fileStatus = HadoopUtils.getInstance().listFileStatus(HadoopUtils.getHdfsUdfDir(tenant.getTenantCode()));
-      if (fileStatus.length > 0) {
-        putMsg(result, Status.HDFS_TERANT_UDFS_FILE_EXISTS);
-        return result;
-      }
+      if (HadoopUtils.getInstance().exists(tenantPath)){
+        String resourcePath = HadoopUtils.getHdfsDir(tenant.getTenantCode());
+        FileStatus[] fileStatus = HadoopUtils.getInstance().listFileStatus(resourcePath);
+        if (fileStatus.length > 0) {
+          putMsg(result, Status.HDFS_TERANT_RESOURCES_FILE_EXISTS);
+          return result;
+        }
+        fileStatus = HadoopUtils.getInstance().listFileStatus(HadoopUtils.getHdfsUdfDir(tenant.getTenantCode()));
+        if (fileStatus.length > 0) {
+          putMsg(result, Status.HDFS_TERANT_UDFS_FILE_EXISTS);
+          return result;
+        }
 
-      HadoopUtils.getInstance().delete(tenantPath, true);
+        HadoopUtils.getInstance().delete(tenantPath, true);
+      }
     }
 
     tenantMapper.deleteById(id);
     putMsg(result, Status.SUCCESS);
-    
     return result;
   }
 
@@ -269,9 +270,6 @@ public class TenantService extends BaseService{
   public Map<String, Object> queryTenantList(User loginUser) {
 
     Map<String, Object> result = new HashMap<>(5);
-//    if (checkAdmin(loginUser, result)) {
-//      return result;
-//    }
 
     List<Tenant> resourceList = tenantMapper.queryAllTenant();
     result.put(Constants.DATA_LIST, resourceList);
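
The shape of the tenant fix, reduced to its essentials: the HDFS directory is inspected and deleted only if it exists, so deleting a tenant that never touched HDFS no longer fails. The `HadoopUtils` calls come from the hunk; the wrapper method below is a simplified sketch that elides the resource/UDF emptiness checks:

```java
// Simplified sketch of the guarded cleanup (error handling elided)
void deleteTenantDirIfPresent(String tenantPath) throws Exception {
    if (HadoopUtils.getInstance().exists(tenantPath)) {   // the new guard
        HadoopUtils.getInstance().delete(tenantPath, true);
    }
    // if the directory never existed there is nothing to clean up,
    // and the tenant row can still be deleted from the database
}
```

The same exists-guard pattern is applied to the old tenant's resource paths in `UsersService` below.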

+ 20 - 20
escheduler-api/src/main/java/cn/escheduler/api/service/UsersService.java

@@ -245,35 +245,35 @@ public class UsersService extends BaseService {
             Tenant newTenant = tenantMapper.queryById(tenantId);
             if (newTenant != null) {
                 // if hdfs startup
-                if (PropertyUtils.getResUploadStartupState()){
+                if (PropertyUtils.getResUploadStartupState() && oldTenant != null){
                     String newTenantCode = newTenant.getTenantCode();
                     String oldResourcePath = HadoopUtils.getHdfsDataBasePath() + "/" + oldTenant.getTenantCode() + "/resources";
                     String oldUdfsPath = HadoopUtils.getHdfsUdfDir(oldTenant.getTenantCode());
 
+                    if (HadoopUtils.getInstance().exists(oldResourcePath)){
+                        String newResourcePath = HadoopUtils.getHdfsDataBasePath() + "/" + newTenantCode + "/resources";
+                        String newUdfsPath = HadoopUtils.getHdfsUdfDir(newTenantCode);
 
-                    String newResourcePath = HadoopUtils.getHdfsDataBasePath() + "/" + newTenantCode + "/resources";
-                    String newUdfsPath = HadoopUtils.getHdfsUdfDir(newTenantCode);
-
-                    //file resources list
-                    List<Resource> fileResourcesList = resourceMapper.queryResourceCreatedByUser(userId, 0);
-                    if (CollectionUtils.isNotEmpty(fileResourcesList)) {
-                        for (Resource resource : fileResourcesList) {
-                            HadoopUtils.getInstance().copy(oldResourcePath + "/" + resource.getAlias(), newResourcePath, false, true);
+                        //file resources list
+                        List<Resource> fileResourcesList = resourceMapper.queryResourceCreatedByUser(userId, 0);
+                        if (CollectionUtils.isNotEmpty(fileResourcesList)) {
+                            for (Resource resource : fileResourcesList) {
+                                HadoopUtils.getInstance().copy(oldResourcePath + "/" + resource.getAlias(), newResourcePath, false, true);
+                            }
                         }
-                    }
 
-                    //udf resources
-                    List<Resource> udfResourceList = resourceMapper.queryResourceCreatedByUser(userId, 1);
-                    if (CollectionUtils.isNotEmpty(udfResourceList)) {
-                        for (Resource resource : udfResourceList) {
-                            HadoopUtils.getInstance().copy(oldUdfsPath + "/" + resource.getAlias(), newUdfsPath, false, true);
+                        //udf resources
+                        List<Resource> udfResourceList = resourceMapper.queryResourceCreatedByUser(userId, 1);
+                        if (CollectionUtils.isNotEmpty(udfResourceList)) {
+                            for (Resource resource : udfResourceList) {
+                                HadoopUtils.getInstance().copy(oldUdfsPath + "/" + resource.getAlias(), newUdfsPath, false, true);
+                            }
                         }
-                    }
-
-                    //Delete the user from the old tenant directory
-                    String oldUserPath = HadoopUtils.getHdfsDataBasePath() + "/" + oldTenant.getTenantCode() + "/home/" + userId;
-                    HadoopUtils.getInstance().delete(oldUserPath, true);
 
+                        //Delete the user from the old tenant directory
+                        String oldUserPath = HadoopUtils.getHdfsDataBasePath() + "/" + oldTenant.getTenantCode() + "/home/" + userId;
+                        HadoopUtils.getInstance().delete(oldUserPath, true);
+                    }
 
                     //create user in the new tenant directory
                     String newUserPath = HadoopUtils.getHdfsDataBasePath() + "/" + newTenant.getTenantCode() + "/home/" + user.getId();

+ 3 - 1
escheduler-common/src/main/java/cn/escheduler/common/Constants.java

@@ -337,7 +337,7 @@ public final class Constants {
     /**
      * email regex
      */
-    public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^[a-zA-Z0-9_-]+@[a-zA-Z0-9_-]+(\\.[a-zA-Z0-9_-]+)+$");
+    public static final Pattern REGEX_MAIL_NAME = Pattern.compile("^([a-z0-9A-Z]+[-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");
 
     /**
      * read permission
@@ -489,6 +489,8 @@ public final class Constants {
     public static final String TASK_RECORD_PWD = "task.record.datasource.password";
 
     public static final String DEFAULT = "Default";
+    public static final String PASSWORD = "password";
+    public static final String XXXXXX = "******";
 
     public static  String TASK_RECORD_TABLE_HIVE_LOG = "eamp_hive_log_hd";
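
The practical effect of the new `REGEX_MAIL_NAME` is easiest to see on a few inputs; this snippet is purely illustrative and the sample addresses are made up:

```java
import java.util.regex.Pattern;

public class MailRegexDemo {
    static final Pattern NEW_REGEX = Pattern.compile(
            "^([a-z0-9A-Z]+[-|\\.]?)+[a-z0-9A-Z]@([a-z0-9A-Z]+(-[a-z0-9A-Z]+)?\\.)+[a-zA-Z]{2,}$");

    public static void main(String[] args) {
        // a dotted local part, rejected by the old pattern, now passes
        System.out.println(NEW_REGEX.matcher("first.last@example.com").matches());  // true
        // underscores, which the old pattern accepted, are no longer allowed
        System.out.println(NEW_REGEX.matcher("first_last@example.com").matches()); // false
    }
}
```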
 

+ 0 - 1
escheduler-common/src/main/java/cn/escheduler/common/zk/AbstractZKClient.java

@@ -314,7 +314,6 @@ public abstract class AbstractZKClient {
 				childrenList = zkClient.getChildren().forPath(masterZNodeParentPath);
 			}
 		} catch (Exception e) {
-//			logger.warn(e.getMessage());
 			if(!e.getMessage().contains("java.lang.IllegalStateException: instance must be started")){
 				logger.warn(e.getMessage(),e);
 			}

+ 4 - 0
escheduler-dao/pom.xml

@@ -37,6 +37,10 @@
 					<groupId>org.apache.tomcat</groupId>
 					<artifactId>tomcat-jdbc</artifactId>
 				</exclusion>
+				<exclusion>
+					<artifactId>log4j-to-slf4j</artifactId>
+					<groupId>org.apache.logging.log4j</groupId>
+				</exclusion>
 			</exclusions>
 		</dependency>
 		<dependency>

+ 2 - 2
escheduler-dao/src/main/java/cn/escheduler/dao/ProcessDao.java

@@ -1011,11 +1011,11 @@ public class ProcessDao extends AbstractBaseDao {
     }
 
     /**
-     * ${processInstancePriority}_${processInstanceId}_${taskInstancePriority}_${taskId}
+     * ${processInstancePriority}_${processInstanceId}_${taskInstancePriority}_${taskId}_${task executed by ip1},${ip2}...
      *
      * The tasks with the highest priority are selected by comparing the priorities of the above four levels from high to low.
      *
-     * process instance priority_process instance id_task priority_task id       high <- low
+     * process instance priority_process instance id_task priority_task id_ips of the machines that executed the task (ip1,ip2...)          high <- low
      *
      * @param taskInstance
      * @return
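
A hypothetical illustration of the queue-key layout this comment documents; the hunk only changes the Javadoc, so the builder below is an assumption based on the described format, not code from ProcessDao:

```java
// e.g. "1_304_2_5170_192.168.xx.10,192.168.xx.11"
// (process instance priority, process instance id, task priority,
//  task id, then the ips of the workers that may execute the task)
static String buildTaskZkInfo(int processInstancePriority, int processInstanceId,
                              int taskInstancePriority, int taskId, String workerIps) {
    return processInstancePriority + "_" + processInstanceId + "_"
            + taskInstancePriority + "_" + taskId + "_" + workerIps;
}
```

Comparing keys field by field from left to right, as the comment says, picks the highest-priority task first.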

+ 5 - 3
escheduler-dao/src/main/java/cn/escheduler/dao/upgrade/shell/CreateEscheduler.java

@@ -30,13 +30,15 @@ public class CreateEscheduler {
 
 	public static void main(String[] args) {
 		EschedulerManager eschedulerManager = new EschedulerManager();
-		eschedulerManager.initEscheduler();
-		logger.info("init escheduler finished");
+
 		try {
+			eschedulerManager.initEscheduler();
+			logger.info("init escheduler finished");
 			eschedulerManager.upgradeEscheduler();
 			logger.info("upgrade escheduler finished");
+			logger.info("create escheduler success");
 		} catch (Exception e) {
-			logger.error("upgrade escheduler failed",e);
+			logger.error("create escheduler failed",e);
 		}
 
 	}

+ 1 - 1
escheduler-dao/src/main/java/cn/escheduler/dao/upgrade/shell/UpgradeEscheduler.java

@@ -30,7 +30,7 @@ public class UpgradeEscheduler {
 		EschedulerManager eschedulerManager = new EschedulerManager();
 		try {
 			eschedulerManager.upgradeEscheduler();
-			logger.info("upgrade escheduler finished");
+			logger.info("upgrade escheduler success");
 		} catch (Exception e) {
 			logger.error(e.getMessage(),e);
 			logger.info("Upgrade escheduler failed");

+ 2 - 1
escheduler-server/src/main/java/cn/escheduler/server/utils/LoggerUtils.java

@@ -16,6 +16,7 @@
  */
 package cn.escheduler.server.utils;
 
+import cn.escheduler.common.Constants;
 import org.slf4j.Logger;
 
 import java.util.ArrayList;
@@ -31,7 +32,7 @@ public class LoggerUtils {
     /**
      * rules for extracting application ID
      */
-    private static final Pattern APPLICATION_REGEX = Pattern.compile("\\d+_\\d+");
+    private static final Pattern APPLICATION_REGEX = Pattern.compile(Constants.APPLICATION_REGEX);
 
     /**
      *  build job id
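
For reference, a sketch of what the relocated pattern does; it assumes `Constants.APPLICATION_REGEX` keeps the previous inline value `\d+_\d+` (the hunk moves the literal into Constants without showing it), which extracts the numeric part of a YARN application id from a log line:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AppIdExtractDemo {
    // assumed to equal the old inline literal "\\d+_\\d+"
    static final Pattern APPLICATION_REGEX = Pattern.compile("\\d+_\\d+");

    public static void main(String[] args) {
        String logLine = "Submitted application application_1548381297012_0042";
        Matcher m = APPLICATION_REGEX.matcher(logLine);
        if (m.find()) {
            System.out.println("application_" + m.group()); // application_1548381297012_0042
        }
    }
}
```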

+ 0 - 2
escheduler-ui/src/js/conf/home/pages/dag/_source/dag.vue

@@ -459,8 +459,6 @@
       'tasks': {
         deep: true,
         handler (o) {
-          console.log('+++++ save dag params +++++')
-          console.log(o)
 
           // Edit state does not allow deletion of node a...
           this.setIsEditDag(true)

+ 1 - 1
escheduler-ui/src/js/conf/home/pages/monitor/pages/servers/statistics.vue

@@ -16,7 +16,7 @@
         <div class="col-md-3">
           <div class="text-num-model text">
             <div class="title">
-              <span >{{$t('failure command number')}}}</span>
+              <span >{{$t('failure command number')}}</span>
             </div>
             <div class="value-p">
               <b :style="{color:color[1]}"> {{commandCountData.errorCount}}</b>

+ 0 - 5
escheduler-ui/src/js/conf/home/pages/projects/pages/definition/pages/list/_source/timing.vue

@@ -254,11 +254,6 @@
 
                 this.store.dispatch(api, searchParams).then(res => {
                   this.previewTimes = res
-                  if (this.previewTimes.length) {
-                    resolve()
-                  } else {
-                    reject(new Error(0))
-                  }
                 })
               }
             },

+ 6 - 2
escheduler-ui/src/js/conf/home/pages/security/pages/users/_source/createUser.vue

@@ -131,7 +131,8 @@
         }
       },
       _verification () {
-        let regEmail = /^([a-zA-Z0-9]+[_|\-|\.]?)*[a-zA-Z0-9]+@([a-zA-Z0-9]+[_|\-|\.]?)*[a-zA-Z0-9]+\.[a-zA-Z]{2,3}$/ // eslint-disable-line
+        let regEmail = /^([a-zA-Z0-9]+[_|\-|\.]?)*[a-zA-Z0-9]+@([a-zA-Z0-9]+[_|\-|\.]?)*[a-zA-Z0-9]+\.[a-zA-Z]{2,}$/ // eslint-disable-line
+
         // Mobile phone number regular
         let regPhone = /^1(3|4|5|6|7|8)\d{9}$/; // eslint-disable-line
 
@@ -184,7 +185,10 @@
       _getTenantList () {
         return new Promise((resolve, reject) => {
           this.store.dispatch('security/getTenantList').then(res => {
-            this.tenantList = _.map(res, v => {
+            let arr = _.filter(res, (o) => {
+              return o.id !== -1
+            })
+            this.tenantList = _.map(arr, v => {
               return {
                 id: v.id,
                 code: v.tenantName

+ 2 - 1
install.sh

@@ -134,7 +134,7 @@ s3Endpoint="http://192.168.199.91:9010"
 s3AccessKey="A3DXS30FO22544RE"
 s3SecretKey="OloCLq3n+8+sdPHUhJ21XrSxTC+JK"
 
-# resourcemanager HA configuration; if there is a single resourcemanager, leave this empty
+# resourcemanager HA configuration; if there is a single resourcemanager, set yarnHaIps=""
 yarnHaIps="192.168.xx.xx,192.168.xx.xx"
 
 # If there is a single resourcemanager, only one hostname needs to be configured; for resourcemanager HA the default configuration is fine
@@ -144,6 +144,7 @@ singleYarnIp="ark1"
 hdfsPath="/escheduler"
 
 # User with permission to create directories under the HDFS root path /
+# Note: if kerberos is enabled, simply set hdfsRootUser=""
 hdfsRootUser="hdfs"
 
 # common 配置

+ 0 - 365
sql/escheduler.sql

@@ -1,436 +0,0 @@
-/*
-Navicat MySQL Data Transfer
-
-Source Server         : xx.xx
-Source Server Version : 50725
-Source Host           : 192.168.xx.xx:3306
-Source Database       : escheduler
-
-Target Server Type    : MYSQL
-Target Server Version : 50725
-File Encoding         : 65001
-
-Date: 2019-03-23 11:47:30
-*/
-
-SET FOREIGN_KEY_CHECKS=0;
-
-DROP TABLE IF EXISTS `t_escheduler_alert`;
-CREATE TABLE `t_escheduler_alert` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `title` varchar(64) DEFAULT NULL COMMENT 'message title',
-  `show_type` tinyint(4) DEFAULT NULL COMMENT 'send format: 0 TABLE, 1 TEXT',
-  `content` text COMMENT 'message content (email or SMS; email is stored as a JSON map, SMS as a string)',
-  `alert_type` tinyint(4) DEFAULT NULL COMMENT '0 email, 1 SMS',
-  `alert_status` tinyint(4) DEFAULT '0' COMMENT '0 pending, 1 sent successfully, 2 failed',
-  `log` text COMMENT 'execution log',
-  `alertgroup_id` int(11) DEFAULT NULL COMMENT 'alert group id',
-  `receivers` text COMMENT 'recipients',
-  `receivers_cc` text COMMENT 'cc recipients',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_alertgroup`;
-CREATE TABLE `t_escheduler_alertgroup` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `group_name` varchar(255) DEFAULT NULL COMMENT 'group name',
-  `group_type` tinyint(4) DEFAULT NULL COMMENT 'group type (0 email, 1 SMS...)',
-  `desc` varchar(255) DEFAULT NULL COMMENT 'remark',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_command`;
-CREATE TABLE `t_escheduler_command` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type: 0 start workflow, 1 execute from current node, 2 recover fault-tolerant workflow, 3 resume paused process, 4 execute from failed node, 5 complement data, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread',
-  `process_definition_id` int(11) DEFAULT NULL COMMENT 'process definition id',
-  `command_param` text COMMENT 'command parameters (JSON format)',
-  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'node dependency type: 0 current node, 1 run forward, 2 run backward',
-  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy: 0 end, 1 continue',
-  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type: 0 none, 1 on process success, 2 on process failure, 3 on both success and failure',
-  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
-  `schedule_time` datetime DEFAULT NULL COMMENT 'expected run time',
-  `start_time` datetime DEFAULT NULL COMMENT 'start time',
-  `executor_id` int(11) DEFAULT NULL COMMENT 'executor user id',
-  `dependence` varchar(255) DEFAULT NULL COMMENT 'dependence field',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest, 1 High, 2 Medium, 3 Low, 4 Lowest',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_datasource`;
-CREATE TABLE `t_escheduler_datasource` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `name` varchar(64) NOT NULL COMMENT 'datasource name',
-  `note` varchar(256) DEFAULT NULL COMMENT 'description',
-  `type` tinyint(4) NOT NULL COMMENT 'datasource type: 0 mysql, 1 postgresql, 2 hive, 3 spark',
-  `user_id` int(11) NOT NULL COMMENT 'creator user id',
-  `connection_params` text NOT NULL COMMENT 'connection parameters (JSON format)',
-  `create_time` datetime NOT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_master_server`;
-CREATE TABLE `t_escheduler_master_server` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `host` varchar(45) DEFAULT NULL COMMENT 'ip',
-  `port` int(11) DEFAULT NULL COMMENT 'process number',
-  `zk_directory` varchar(64) DEFAULT NULL COMMENT 'zk registration directory',
-  `res_info` varchar(256) DEFAULT NULL COMMENT 'cluster resource info, JSON format {"cpu":xxx,"memroy":xxx}',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `last_heartbeat_time` datetime DEFAULT NULL COMMENT 'last heartbeat time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_process_definition`;
-CREATE TABLE `t_escheduler_process_definition` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `name` varchar(255) DEFAULT NULL COMMENT 'process definition name',
-  `version` int(11) DEFAULT NULL COMMENT 'process definition version',
-  `release_state` tinyint(4) DEFAULT NULL COMMENT 'release state of the process definition: 0 offline, 1 online',
-  `project_id` int(11) DEFAULT NULL COMMENT 'project id',
-  `user_id` int(11) DEFAULT NULL COMMENT 'user id of the process definition owner',
-  `process_definition_json` longtext COMMENT 'process definition JSON string',
-  `desc` text COMMENT 'process definition description',
-  `global_params` text COMMENT 'global parameters',
-  `flag` tinyint(4) DEFAULT NULL COMMENT 'whether the process is available: 0 not available, 1 available',
-  `locations` text COMMENT 'node location info',
-  `connects` text COMMENT 'node connection info',
-  `receivers` text COMMENT 'recipients',
-  `receivers_cc` text COMMENT 'cc recipients',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`),
-  KEY `process_definition_index` (`project_id`,`id`) USING BTREE
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_process_instance`;
-CREATE TABLE `t_escheduler_process_instance` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `name` varchar(255) DEFAULT NULL COMMENT 'process instance name',
-  `process_definition_id` int(11) DEFAULT NULL COMMENT 'process definition id',
-  `state` tinyint(4) DEFAULT NULL COMMENT 'process instance state: 0 submitted, 1 running, 2 preparing to pause, 3 paused, 4 preparing to stop, 5 stopped, 6 failed, 7 succeeded, 8 needs fault tolerance, 9 kill, 10 waiting for thread, 11 waiting for dependency to complete',
-  `recovery` tinyint(4) DEFAULT NULL COMMENT 'process instance fault-tolerance flag: 0 normal, 1 needs fault-tolerant restart',
-  `start_time` datetime DEFAULT NULL COMMENT 'process instance start time',
-  `end_time` datetime DEFAULT NULL COMMENT 'process instance end time',
-  `run_times` int(11) DEFAULT NULL COMMENT 'process instance run count',
-  `host` varchar(45) DEFAULT NULL COMMENT 'host running the process instance',
-  `command_type` tinyint(4) DEFAULT NULL COMMENT 'command type: 0 start workflow, 1 execute from current node, 2 recover fault-tolerant workflow, 3 resume paused process, 4 execute from failed node, 5 complement data, 6 schedule, 7 rerun, 8 pause, 9 stop, 10 resume waiting thread',
-  `command_param` text COMMENT 'command parameters (JSON format)',
-  `task_depend_type` tinyint(4) DEFAULT NULL COMMENT 'node dependency type: 0 current node, 1 run forward, 2 run backward',
-  `max_try_times` tinyint(4) DEFAULT '0' COMMENT 'max retry times',
-  `failure_strategy` tinyint(4) DEFAULT '0' COMMENT 'failure strategy: 0 end on failure, 1 continue on failure',
-  `warning_type` tinyint(4) DEFAULT '0' COMMENT 'warning type: 0 none, 1 on process success, 2 on process failure, 3 on both success and failure',
-  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
-  `schedule_time` datetime DEFAULT NULL COMMENT 'expected run time',
-  `command_start_time` datetime DEFAULT NULL COMMENT 'command start time',
-  `global_params` text COMMENT 'global parameters (snapshot of the process definition parameters)',
-  `process_instance_json` longtext COMMENT 'process instance JSON (copy of the process definition JSON)',
-  `flag` tinyint(4) DEFAULT '1' COMMENT 'availability flag: 1 available, 0 not available',
-  `update_time` timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
-  `is_sub_process` int(11) DEFAULT '0' COMMENT 'whether it is a sub-workflow: 1 yes, 0 no',
-  `executor_id` int(11) NOT NULL COMMENT 'command executor',
-  `locations` text COMMENT 'node location info',
-  `connects` text COMMENT 'node connection info',
-  `history_cmd` text COMMENT 'history commands, recording all operations on the process instance',
-  `dependence_schedule_times` text COMMENT 'estimated schedule times of dependent nodes',
-  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest, 1 High, 2 Medium, 3 Low, 4 Lowest',
-  PRIMARY KEY (`id`),
-  KEY `process_instance_index` (`process_definition_id`,`id`) USING BTREE,
-  KEY `start_time_index` (`start_time`) USING BTREE
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_project`;
-CREATE TABLE `t_escheduler_project` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `name` varchar(100) DEFAULT NULL COMMENT 'project name',
-  `desc` varchar(200) DEFAULT NULL COMMENT 'project description',
-  `user_id` int(11) DEFAULT NULL COMMENT 'owner user id',
-  `flag` tinyint(4) DEFAULT '1' COMMENT 'availability flag: 1 available, 0 not available',
-  `create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'create time',
-  `update_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'update time',
-  PRIMARY KEY (`id`),
-  KEY `user_id_index` (`user_id`) USING BTREE
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_queue`;
-CREATE TABLE `t_escheduler_queue` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `queue_name` varchar(64) DEFAULT NULL COMMENT 'queue name',
-  `queue` varchar(64) DEFAULT NULL COMMENT 'yarn queue name',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_relation_datasource_user`;
-CREATE TABLE `t_escheduler_relation_datasource_user` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `user_id` int(11) NOT NULL COMMENT 'user id',
-  `datasource_id` int(11) DEFAULT NULL COMMENT 'datasource id',
-  `perm` int(11) DEFAULT '1' COMMENT 'permission',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_relation_process_instance`;
-CREATE TABLE `t_escheduler_relation_process_instance` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `parent_process_instance_id` int(11) DEFAULT NULL COMMENT 'parent process instance id',
-  `parent_task_instance_id` int(11) DEFAULT NULL COMMENT 'parent task instance id',
-  `process_instance_id` int(11) DEFAULT NULL COMMENT 'child process instance id',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_relation_project_user`;
-CREATE TABLE `t_escheduler_relation_project_user` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `user_id` int(11) NOT NULL COMMENT 'user id',
-  `project_id` int(11) DEFAULT NULL COMMENT 'project id',
-  `perm` int(11) DEFAULT '1' COMMENT 'permission',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`),
-  KEY `user_id_index` (`user_id`) USING BTREE
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_relation_resources_user`;
-CREATE TABLE `t_escheduler_relation_resources_user` (
-  `id` int(11) NOT NULL AUTO_INCREMENT,
-  `user_id` int(11) NOT NULL COMMENT 'user id',
-  `resources_id` int(11) DEFAULT NULL COMMENT 'resource id',
-  `perm` int(11) DEFAULT '1' COMMENT 'permission',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_relation_udfs_user`;
-CREATE TABLE `t_escheduler_relation_udfs_user` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `user_id` int(11) NOT NULL COMMENT 'user id',
-  `udf_id` int(11) DEFAULT NULL COMMENT 'udf id',
-  `perm` int(11) DEFAULT '1' COMMENT 'permission',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_relation_user_alertgroup`;
-CREATE TABLE `t_escheduler_relation_user_alertgroup` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `alertgroup_id` int(11) DEFAULT NULL COMMENT 'alert group id',
-  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_resources`;
-CREATE TABLE `t_escheduler_resources` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `alias` varchar(64) DEFAULT NULL COMMENT 'alias',
-  `file_name` varchar(64) DEFAULT NULL COMMENT 'file name',
-  `desc` varchar(256) DEFAULT NULL COMMENT 'description',
-  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
-  `type` tinyint(4) DEFAULT NULL COMMENT 'resource type: 0 FILE, 1 UDF',
-  `size` bigint(20) DEFAULT NULL COMMENT 'resource size',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_schedules`;
-CREATE TABLE `t_escheduler_schedules` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `process_definition_id` int(11) NOT NULL COMMENT 'process definition id',
-  `start_time` datetime NOT NULL COMMENT 'schedule start time',
-  `end_time` datetime NOT NULL COMMENT 'schedule end time',
-  `crontab` varchar(256) NOT NULL COMMENT 'crontab expression',
-  `failure_strategy` tinyint(4) NOT NULL COMMENT 'failure strategy: 0 end, 1 continue',
-  `user_id` int(11) NOT NULL COMMENT 'user id',
-  `release_state` tinyint(4) NOT NULL COMMENT 'state: 0 offline, 1 online',
-  `warning_type` tinyint(4) NOT NULL COMMENT 'warning type: 0 none, 1 on process success, 2 on process failure, 3 on both success and failure',
-  `warning_group_id` int(11) DEFAULT NULL COMMENT 'warning group id',
-  `process_instance_priority` int(11) DEFAULT NULL COMMENT 'process instance priority: 0 Highest, 1 High, 2 Medium, 3 Low, 4 Lowest',
-  `create_time` datetime NOT NULL COMMENT 'create time',
-  `update_time` datetime NOT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_session`;
-CREATE TABLE `t_escheduler_session` (
-  `id` varchar(64) NOT NULL COMMENT 'primary key',
-  `user_id` int(11) DEFAULT NULL COMMENT 'user id',
-  `ip` varchar(45) DEFAULT NULL COMMENT 'login ip',
-  `last_login_time` datetime DEFAULT NULL COMMENT 'last login time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_task_instance`;
-CREATE TABLE `t_escheduler_task_instance` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `name` varchar(255) DEFAULT NULL COMMENT 'task name',
-  `task_type` varchar(64) DEFAULT NULL COMMENT 'task type',
-  `process_definition_id` int(11) DEFAULT NULL COMMENT 'process definition id',
-  `process_instance_id` int(11) DEFAULT NULL COMMENT 'process instance id',
-  `task_json` longtext COMMENT 'task node JSON',
-  `state` tinyint(4) DEFAULT NULL COMMENT 'task instance state: 0 submitted, 1 running, 2 preparing to pause, 3 paused, 4 preparing to stop, 5 stopped, 6 failed, 7 succeeded, 8 needs fault tolerance, 9 kill, 10 waiting for thread, 11 waiting for dependency to complete',
-  `submit_time` datetime DEFAULT NULL COMMENT 'task submit time',
-  `start_time` datetime DEFAULT NULL COMMENT 'task start time',
-  `end_time` datetime DEFAULT NULL COMMENT 'task end time',
-  `host` varchar(45) DEFAULT NULL COMMENT 'host executing the task',
-  `execute_path` varchar(200) DEFAULT NULL COMMENT 'task execution path',
-  `log_path` varchar(200) DEFAULT NULL COMMENT 'task log path',
-  `alert_flag` tinyint(4) DEFAULT NULL COMMENT 'whether to alert',
-  `retry_times` int(4) DEFAULT '0' COMMENT 'retry times',
-  `pid` int(4) DEFAULT NULL COMMENT 'process pid',
-  `app_link` varchar(255) DEFAULT NULL COMMENT 'yarn app id',
-  `flag` tinyint(4) DEFAULT '1' COMMENT 'availability flag: 0 not available, 1 available',
-  `retry_interval` int(4) DEFAULT NULL COMMENT 'retry interval',
-  `max_retry_times` int(2) DEFAULT NULL COMMENT 'max retry times',
-  `task_instance_priority` int(11) DEFAULT NULL COMMENT 'task instance priority: 0 Highest, 1 High, 2 Medium, 3 Low, 4 Lowest',
-  PRIMARY KEY (`id`),
-  KEY `process_instance_id` (`process_instance_id`) USING BTREE,
-  KEY `task_instance_index` (`process_definition_id`,`process_instance_id`) USING BTREE,
-  CONSTRAINT `foreign_key_instance_id` FOREIGN KEY (`process_instance_id`) REFERENCES `t_escheduler_process_instance` (`id`) ON DELETE CASCADE
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_tenant`;
-CREATE TABLE `t_escheduler_tenant` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `tenant_code` varchar(64) DEFAULT NULL COMMENT 'tenant code',
-  `tenant_name` varchar(64) DEFAULT NULL COMMENT 'tenant name',
-  `desc` varchar(256) DEFAULT NULL COMMENT 'description',
-  `queue_id` int(11) DEFAULT NULL COMMENT 'queue id',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_udfs`;
-CREATE TABLE `t_escheduler_udfs` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `user_id` int(11) NOT NULL COMMENT 'user id',
-  `func_name` varchar(100) NOT NULL COMMENT 'UDF function name',
-  `class_name` varchar(255) NOT NULL COMMENT 'class name',
-  `type` tinyint(4) NOT NULL COMMENT 'UDF function type',
-  `arg_types` varchar(255) DEFAULT NULL COMMENT 'parameters',
-  `database` varchar(255) DEFAULT NULL COMMENT 'database name',
-  `desc` varchar(255) DEFAULT NULL COMMENT 'description',
-  `resource_id` int(11) NOT NULL COMMENT 'resource id',
-  `resource_name` varchar(255) NOT NULL COMMENT 'resource name',
-  `create_time` datetime NOT NULL COMMENT 'create time',
-  `update_time` datetime NOT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_user`;
-CREATE TABLE `t_escheduler_user` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'user id',
-  `user_name` varchar(64) DEFAULT NULL COMMENT 'user name',
-  `user_password` varchar(64) DEFAULT NULL COMMENT 'user password',
-  `user_type` tinyint(4) DEFAULT NULL COMMENT 'user type: 0 admin, 1 ordinary user',
-  `email` varchar(64) DEFAULT NULL COMMENT 'email',
-  `phone` varchar(11) DEFAULT NULL COMMENT 'phone',
-  `tenant_id` int(11) DEFAULT NULL COMMENT '0 for admin; tenant id for ordinary users',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `update_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`),
-  UNIQUE KEY `user_name_unique` (`user_name`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-DROP TABLE IF EXISTS `t_escheduler_worker_server`;
-CREATE TABLE `t_escheduler_worker_server` (
-  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'primary key',
-  `host` varchar(45) DEFAULT NULL COMMENT 'ip',
-  `port` int(11) DEFAULT NULL COMMENT 'process number',
-  `zk_directory` varchar(64) CHARACTER SET utf8 COLLATE utf8_bin DEFAULT NULL COMMENT 'zk registration directory',
-  `res_info` varchar(255) DEFAULT NULL COMMENT 'cluster resource info, JSON format {"cpu":xxx,"memroy":xxx}',
-  `create_time` datetime DEFAULT NULL COMMENT 'create time',
-  `last_heartbeat_time` datetime DEFAULT NULL COMMENT 'update time',
-  PRIMARY KEY (`id`)
-) ENGINE=InnoDB DEFAULT CHARSET=utf8;
-
-INSERT INTO `t_escheduler_user` VALUES ('1', 'admin', '055a97b5fcd6d120372ad1976518f371', '0', 'xxx@qq.com', 'xxxx', '0', '2018-03-27 15:48:50', '2018-10-24 17:40:22');
-INSERT INTO `t_escheduler_alertgroup` VALUES (1, 'escheduler管理员告警组', '0', 'escheduler管理员告警组','2018-11-29 10:20:39', '2018-11-29 10:20:39');
-INSERT INTO `t_escheduler_relation_user_alertgroup` VALUES ('1', '1', '1', '2018-11-29 10:22:33', '2018-11-29 10:22:33');
-
-INSERT INTO `t_escheduler_queue` VALUES ('1', 'default', 'default');
-
-

+ 0 - 179
sql/quartz.sql

@@ -1,179 +0,0 @@
- #
- # In your Quartz properties file, you'll need to set 
- # org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
- #
- #
- # By: Ron Cordell - roncordell
- #  I didn't see this anywhere, so I thought I'd post it here. This is the script from Quartz to create the tables in a MySQL database, modified to use INNODB instead of MYISAM.
- 
- DROP TABLE IF EXISTS QRTZ_FIRED_TRIGGERS;
- DROP TABLE IF EXISTS QRTZ_PAUSED_TRIGGER_GRPS;
- DROP TABLE IF EXISTS QRTZ_SCHEDULER_STATE;
- DROP TABLE IF EXISTS QRTZ_LOCKS;
- DROP TABLE IF EXISTS QRTZ_SIMPLE_TRIGGERS;
- DROP TABLE IF EXISTS QRTZ_SIMPROP_TRIGGERS;
- DROP TABLE IF EXISTS QRTZ_CRON_TRIGGERS;
- DROP TABLE IF EXISTS QRTZ_BLOB_TRIGGERS;
- DROP TABLE IF EXISTS QRTZ_TRIGGERS;
- DROP TABLE IF EXISTS QRTZ_JOB_DETAILS;
- DROP TABLE IF EXISTS QRTZ_CALENDARS;
- 
- CREATE TABLE QRTZ_JOB_DETAILS(
- SCHED_NAME VARCHAR(120) NOT NULL,
- JOB_NAME VARCHAR(200) NOT NULL,
- JOB_GROUP VARCHAR(200) NOT NULL,
- DESCRIPTION VARCHAR(250) NULL,
- JOB_CLASS_NAME VARCHAR(250) NOT NULL,
- IS_DURABLE VARCHAR(1) NOT NULL,
- IS_NONCONCURRENT VARCHAR(1) NOT NULL,
- IS_UPDATE_DATA VARCHAR(1) NOT NULL,
- REQUESTS_RECOVERY VARCHAR(1) NOT NULL,
- JOB_DATA BLOB NULL,
- PRIMARY KEY (SCHED_NAME,JOB_NAME,JOB_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_TRIGGERS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- TRIGGER_NAME VARCHAR(200) NOT NULL,
- TRIGGER_GROUP VARCHAR(200) NOT NULL,
- JOB_NAME VARCHAR(200) NOT NULL,
- JOB_GROUP VARCHAR(200) NOT NULL,
- DESCRIPTION VARCHAR(250) NULL,
- NEXT_FIRE_TIME BIGINT(13) NULL,
- PREV_FIRE_TIME BIGINT(13) NULL,
- PRIORITY INTEGER NULL,
- TRIGGER_STATE VARCHAR(16) NOT NULL,
- TRIGGER_TYPE VARCHAR(8) NOT NULL,
- START_TIME BIGINT(13) NOT NULL,
- END_TIME BIGINT(13) NULL,
- CALENDAR_NAME VARCHAR(200) NULL,
- MISFIRE_INSTR SMALLINT(2) NULL,
- JOB_DATA BLOB NULL,
- PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
- FOREIGN KEY (SCHED_NAME,JOB_NAME,JOB_GROUP)
- REFERENCES QRTZ_JOB_DETAILS(SCHED_NAME,JOB_NAME,JOB_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_SIMPLE_TRIGGERS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- TRIGGER_NAME VARCHAR(200) NOT NULL,
- TRIGGER_GROUP VARCHAR(200) NOT NULL,
- REPEAT_COUNT BIGINT(7) NOT NULL,
- REPEAT_INTERVAL BIGINT(12) NOT NULL,
- TIMES_TRIGGERED BIGINT(10) NOT NULL,
- PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
- FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
- REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_CRON_TRIGGERS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- TRIGGER_NAME VARCHAR(200) NOT NULL,
- TRIGGER_GROUP VARCHAR(200) NOT NULL,
- CRON_EXPRESSION VARCHAR(120) NOT NULL,
- TIME_ZONE_ID VARCHAR(80),
- PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
- FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
- REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_SIMPROP_TRIGGERS
-   (          
-     SCHED_NAME VARCHAR(120) NOT NULL,
-     TRIGGER_NAME VARCHAR(200) NOT NULL,
-     TRIGGER_GROUP VARCHAR(200) NOT NULL,
-     STR_PROP_1 VARCHAR(512) NULL,
-     STR_PROP_2 VARCHAR(512) NULL,
-     STR_PROP_3 VARCHAR(512) NULL,
-     INT_PROP_1 INT NULL,
-     INT_PROP_2 INT NULL,
-     LONG_PROP_1 BIGINT NULL,
-     LONG_PROP_2 BIGINT NULL,
-     DEC_PROP_1 NUMERIC(13,4) NULL,
-     DEC_PROP_2 NUMERIC(13,4) NULL,
-     BOOL_PROP_1 VARCHAR(1) NULL,
-     BOOL_PROP_2 VARCHAR(1) NULL,
-     PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
-     FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP) 
-     REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_BLOB_TRIGGERS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- TRIGGER_NAME VARCHAR(200) NOT NULL,
- TRIGGER_GROUP VARCHAR(200) NOT NULL,
- BLOB_DATA BLOB NULL,
- PRIMARY KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP),
- INDEX (SCHED_NAME,TRIGGER_NAME, TRIGGER_GROUP),
- FOREIGN KEY (SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP)
- REFERENCES QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_CALENDARS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- CALENDAR_NAME VARCHAR(200) NOT NULL,
- CALENDAR BLOB NOT NULL,
- PRIMARY KEY (SCHED_NAME,CALENDAR_NAME))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_PAUSED_TRIGGER_GRPS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- TRIGGER_GROUP VARCHAR(200) NOT NULL,
- PRIMARY KEY (SCHED_NAME,TRIGGER_GROUP))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_FIRED_TRIGGERS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- ENTRY_ID VARCHAR(95) NOT NULL,
- TRIGGER_NAME VARCHAR(200) NOT NULL,
- TRIGGER_GROUP VARCHAR(200) NOT NULL,
- INSTANCE_NAME VARCHAR(200) NOT NULL,
- FIRED_TIME BIGINT(13) NOT NULL,
- SCHED_TIME BIGINT(13) NOT NULL,
- PRIORITY INTEGER NOT NULL,
- STATE VARCHAR(16) NOT NULL,
- JOB_NAME VARCHAR(200) NULL,
- JOB_GROUP VARCHAR(200) NULL,
- IS_NONCONCURRENT VARCHAR(1) NULL,
- REQUESTS_RECOVERY VARCHAR(1) NULL,
- PRIMARY KEY (SCHED_NAME,ENTRY_ID))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_SCHEDULER_STATE (
- SCHED_NAME VARCHAR(120) NOT NULL,
- INSTANCE_NAME VARCHAR(200) NOT NULL,
- LAST_CHECKIN_TIME BIGINT(13) NOT NULL,
- CHECKIN_INTERVAL BIGINT(13) NOT NULL,
- PRIMARY KEY (SCHED_NAME,INSTANCE_NAME))
- ENGINE=InnoDB;
- 
- CREATE TABLE QRTZ_LOCKS (
- SCHED_NAME VARCHAR(120) NOT NULL,
- LOCK_NAME VARCHAR(40) NOT NULL,
- PRIMARY KEY (SCHED_NAME,LOCK_NAME))
- ENGINE=InnoDB;
- 
- CREATE INDEX IDX_QRTZ_J_REQ_RECOVERY ON QRTZ_JOB_DETAILS(SCHED_NAME,REQUESTS_RECOVERY);
- CREATE INDEX IDX_QRTZ_J_GRP ON QRTZ_JOB_DETAILS(SCHED_NAME,JOB_GROUP);
- 
- CREATE INDEX IDX_QRTZ_T_J ON QRTZ_TRIGGERS(SCHED_NAME,JOB_NAME,JOB_GROUP);
- CREATE INDEX IDX_QRTZ_T_JG ON QRTZ_TRIGGERS(SCHED_NAME,JOB_GROUP);
- CREATE INDEX IDX_QRTZ_T_C ON QRTZ_TRIGGERS(SCHED_NAME,CALENDAR_NAME);
- CREATE INDEX IDX_QRTZ_T_G ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_GROUP);
- CREATE INDEX IDX_QRTZ_T_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_STATE);
- CREATE INDEX IDX_QRTZ_T_N_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP,TRIGGER_STATE);
- CREATE INDEX IDX_QRTZ_T_N_G_STATE ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_GROUP,TRIGGER_STATE);
- CREATE INDEX IDX_QRTZ_T_NEXT_FIRE_TIME ON QRTZ_TRIGGERS(SCHED_NAME,NEXT_FIRE_TIME);
- CREATE INDEX IDX_QRTZ_T_NFT_ST ON QRTZ_TRIGGERS(SCHED_NAME,TRIGGER_STATE,NEXT_FIRE_TIME);
- CREATE INDEX IDX_QRTZ_T_NFT_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME);
- CREATE INDEX IDX_QRTZ_T_NFT_ST_MISFIRE ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME,TRIGGER_STATE);
- CREATE INDEX IDX_QRTZ_T_NFT_ST_MISFIRE_GRP ON QRTZ_TRIGGERS(SCHED_NAME,MISFIRE_INSTR,NEXT_FIRE_TIME,TRIGGER_GROUP,TRIGGER_STATE);
- 
- CREATE INDEX IDX_QRTZ_FT_TRIG_INST_NAME ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,INSTANCE_NAME);
- CREATE INDEX IDX_QRTZ_FT_INST_JOB_REQ_RCVRY ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,INSTANCE_NAME,REQUESTS_RECOVERY);
- CREATE INDEX IDX_QRTZ_FT_J_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,JOB_NAME,JOB_GROUP);
- CREATE INDEX IDX_QRTZ_FT_JG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,JOB_GROUP);
- CREATE INDEX IDX_QRTZ_FT_T_G ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,TRIGGER_NAME,TRIGGER_GROUP);
- CREATE INDEX IDX_QRTZ_FT_TG ON QRTZ_FIRED_TRIGGERS(SCHED_NAME,TRIGGER_GROUP);
- 
- commit; 

+ 0 - 1
sql/upgrade/1.1.0_schema/mysql/escheduler_dml.sql

@@ -1 +0,0 @@
-INSERT INTO `t_escheduler_version` (`version`) VALUES ('1.1.0');