# XFlee / real_time_monitor_system

![](https://gitee.com/TimeWentby18/real_time_monitor_system/raw/master/images/monitor.gif)

Demo video: https://v.superbed.cn/play/62427aa527f86abb2a345c42

The environment setup steps follow:

### 0. Prerequisites

Install a virtual machine with VMware running **CentOS 7**, configure the network, and **disable the firewall!!!**

Recommended VM specs: **4 CPU cores, 6 GB RAM, 30 GB disk** or more.

Install vim.

Create a regular user named pan.
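On CentOS 7 the prerequisites above can be satisfied as follows (a sketch, assuming you run it as a sudo-capable user; pick your own password for pan):

```sh
# stop the firewall and keep it off across reboots
sudo systemctl stop firewalld
sudo systemctl disable firewalld

# install vim
sudo yum install -y vim

# create the regular user pan and let it use sudo via the wheel group
sudo useradd pan
sudo passwd pan
sudo usermod -aG wheel pan
```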

### 1. Install the JDK

① Extract

[pan@monitor ~]$ ll
total 220780
-rw-r--r-- 1 pan root   9046177 Mar  16 16:16 apache-maven-3.8.4-bin.tar.gz
-rw-r--r-- 1 pan root 194042837 Mar  16 16:16 jdk-8u202-linux-x64.tar.gz
-rw-r--r-- 1 pan root   1062124 Mar  16 16:16 nginx-1.20.2.tar.gz
-rw-r--r-- 1 pan root  21918532 Mar  16 16:16 node-v16.14.0-linux-x64.tar.xz
[pan@monitor ~]$ sudo mkdir /usr/local/java
[pan@monitor ~]$ sudo mv jdk-8u202-linux-x64.tar.gz /usr/local/java
[pan@monitor ~]$ cd /usr/local/java
[pan@monitor java]$ ll
total 189496
-rw-r--r-- 1 pan root 194042837 Mar  16 16:16 jdk-8u202-linux-x64.tar.gz
[pan@monitor java]$ sudo tar -zxvf jdk-8u202-linux-x64.tar.gz
...
[pan@monitor java]$ ll
total 189496
drwxr-xr-x. 7  10  143       245 Dec 16  2018 jdk1.8.0_202
-rw-r--r--. 1 pan root 194042837 Mar 16 16:16 jdk-8u202-linux-x64.tar.gz
[pan@monitor java]$ sudo rm -f jdk-8u202-linux-x64.tar.gz

② Configure environment variables

sudo vim /etc/profile

Append at the end of the file:

export JAVA_HOME=/usr/local/java/jdk1.8.0_202
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}

Save and exit, then reload the profile:

source /etc/profile

③ Verify

[pan@monitor java]$ java -version
java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode)

### 2. Install Maven

① Extract

[pan@monitor java]$ cd ~
[pan@monitor ~]$ sudo mkdir /usr/local/maven
[pan@monitor ~]$ sudo mv apache-maven-3.8.4-bin.tar.gz /usr/local/maven/
[pan@monitor ~]$ cd /usr/local/maven/
[pan@monitor maven]$ ll
total 8836
-rw-r--r--. 1 pan root 9046177 Mar 16 16:16 apache-maven-3.8.4-bin.tar.gz
[pan@monitor maven]$ sudo tar -zxvf apache-maven-3.8.4-bin.tar.gz
...
[pan@monitor maven]$ ll
total 8836
drwxr-xr-x. 6 root root      99 Mar 16 16:35 apache-maven-3.8.4
-rw-r--r--. 1 pan  root 9046177 Mar 16 16:16 apache-maven-3.8.4-bin.tar.gz
[pan@monitor maven]$ sudo rm -f apache-maven-3.8.4-bin.tar.gz 

② Configure environment variables

sudo vim /etc/profile

Append at the end of the file:

export MAVEN_HOME=/usr/local/maven/apache-maven-3.8.4
export PATH=$MAVEN_HOME/bin:$PATH

Save and exit, then reload the profile:

source /etc/profile

③ Verify

[pan@monitor maven]$ mvn -v
Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537)
Maven home: /usr/local/maven/apache-maven-3.8.4
Java version: 1.8.0_202, vendor: Oracle Corporation, runtime: /usr/local/java/jdk1.8.0_202/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-1160.el7.x86_64", arch: "amd64", family: "unix"

④ Add a domestic mirror to speed up downloads (optional)

[pan@monitor maven]$ cd apache-maven-3.8.4/conf/
[pan@monitor conf]$ sudo vim settings.xml

Add the Aliyun mirror inside the <mirrors> tag:

<mirror>
  <id>alimaven</id>
  <mirrorOf>central</mirrorOf>
  <name>aliyun maven</name>
  <url>https://maven.aliyun.com/nexus/content/repositories/central/</url>
</mirror>

### 3. Install nginx

① Install the build dependencies for nginx

# install gcc
sudo yum install -y gcc-c++
# install PCRE and pcre-devel
sudo yum install -y pcre pcre-devel
# install zlib
sudo yum install -y zlib zlib-devel
# install OpenSSL
sudo yum install -y openssl openssl-devel

② Extract

cd ~
tar -zxvf nginx-1.20.2.tar.gz
sudo mv nginx-1.20.2 /usr/local/nginx
rm -f nginx-1.20.2.tar.gz

③ Compile and install

cd /usr/local/nginx/
./configure --prefix=/usr/local/nginx --with-http_gzip_static_module --with-http_ssl_module
make
make install
mkdir logs
chmod 700 logs

④ Start nginx

cd /usr/local/nginx/sbin/
sudo ./nginx

If http://<host-IP>:80 is unreachable, check whether the build produced errors and whether the **firewall is disabled**.
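Two quick local checks, assuming nginx was started as above:

```sh
# nginx should answer with an HTTP header block
curl -I http://localhost

# something should be listening on port 80
sudo ss -lntp | grep ':80'
```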

### 4. Install Node.js

① Extract

cd ~
tar -xvf node-v16.14.0-linux-x64.tar.xz
sudo mv node-v16.14.0-linux-x64 /usr/local/nodejs
rm -f node-v16.14.0-linux-x64.tar.xz

② Configure environment variables

sudo vim /etc/profile

Append at the end of the file:

export PATH=$PATH:/usr/local/nodejs/bin

Save and exit, then reload the profile:

source /etc/profile

③ Verify

[pan@monitor ~]$ node -v
v16.14.0
[pan@monitor ~]$ npm -v
8.3.1

④ Set the Taobao npm registry mirror to speed up downloads (optional)

npm config set registry https://registry.npmmirror.com  # the old registry.npm.taobao.org has since been retired

### 5. Install Docker

① Remove old versions

sudo yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

② Install required system tools

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

③ Add the package repository

sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

④ Install Docker CE

sudo yum makecache fast
sudo yum -y install docker-ce

⑤ Start the Docker service

sudo service docker start
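On CentOS 7 the systemctl equivalent works too, and also lets you enable Docker on boot:

```sh
sudo systemctl start docker
sudo systemctl enable docker
```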

⑥ Verify

[pan@monitor ~]$ sudo docker version
Client: Docker Engine - Community
 Version:           20.10.13
 API version:       1.41
 Go version:        go1.16.15
 Git commit:        a224086
 Built:             Thu Mar 10 14:09:51 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.13
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.15
  Git commit:       906f57f
  Built:            Thu Mar 10 14:08:16 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.5.10
  GitCommit:        2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc
 runc:
  Version:          1.0.3
  GitCommit:        v1.0.3-0-gf46b6ba
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

⑦ Configure a registry mirror (optional)

Console: https://cr.console.aliyun.com/cn-heyuan/instances/mirrors

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

**Replace the mirror address with your own: look up the xxxxxx value in the Aliyun console.**

### 6. Install docker-compose

Install as the root user, otherwise you will hit permission errors:

curl -L https://get.daocloud.io/docker/compose/releases/download/v2.2.3/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Verify:

[root@monitor ~]# docker-compose -v
Docker Compose version v2.2.3

### 7. Deploy a ZooKeeper cluster with docker-compose

Create a bridge network for the containers to communicate on:

sudo docker network create -d bridge flink_network

Prepare the docker-compose directory:

mkdir docker-zookeeper
cd docker-zookeeper/
vim docker-compose.yml

docker-compose.yml contents:

version: '3'

networks:
  default:
    name: flink_network

services:
  zoo1:
    image: zookeeper
    restart: always
    container_name: zoo1
    hostname: zoo1
    ports:
      - 2181:2181
    volumes:
      - ./data/zoo1/data:/data
      - ./data/zoo1/datalog:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo2:
    image: zookeeper
    restart: always
    container_name: zoo2
    hostname: zoo2
    ports:
      - 2182:2181
    volumes:
      - ./data/zoo2/data:/data
      - ./data/zoo2/datalog:/datalog
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

  zoo3:
    image: zookeeper
    restart: always
    container_name: zoo3
    hostname: zoo3
    ports:
      - 2183:2181
    volumes:
      - ./data/zoo3/data:/data
      - ./data/zoo3/datalog:/datalog
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181

### 8. Deploy a Hadoop cluster with docker-compose

mkdir docker-hadoop
cd docker-hadoop
touch hadoop.env
vim docker-compose.yml

docker-compose.yml contents:

version: "3"

networks:
  default:
    name: flink_network

services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    hostname: namenode
    restart: always
    ports:
      - 9870:9870
      - 9000:9000
    volumes:
      - ./hadoop/dfs/name:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=hadoop
    env_file:
      - ./hadoop.env

  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      - ./hadoop/dfs/data1:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env

  datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    restart: always
    volumes:
      - ./hadoop/dfs/data2:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env

  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
    container_name: resourcemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864"
    env_file:
      - ./hadoop.env

  nodemanager:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env

  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
    container_name: historyserver
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
    volumes:
      - ./hadoop/yarn/timeline:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env

Edit the configuration file hadoop.env:

vim hadoop.env

Write the following content:

CORE_CONF_fs_defaultFS=hdfs://namenode:9000
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*
CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec

HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false
HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false

YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_scheduler_class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___mb=8192
YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___vcores=4
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_resourcemanager_address=resourcemanager:8032
YARN_CONF_yarn_resourcemanager_scheduler_address=resourcemanager:8030
YARN_CONF_yarn_resourcemanager_resource__tracker_address=resourcemanager:8031
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_timeline___service_hostname=historyserver
YARN_CONF_mapreduce_map_output_compress=true
YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
YARN_CONF_yarn_nodemanager_resource_memory___mb=8192
YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle

MAPRED_CONF_mapreduce_framework_name=yarn
MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
MAPRED_CONF_mapreduce_map_memory_mb=4096
MAPRED_CONF_mapreduce_reduce_memory_mb=8192
MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
MAPRED_CONF_yarn_app_mapreduce_am_env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
MAPRED_CONF_mapreduce_map_env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
MAPRED_CONF_mapreduce_reduce_env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/

### 9. Deploy a Flink high-availability (HA) cluster with docker-compose

(The Dockerfile installs from GitHub, which may be unreliable; installing directly from the image file is recommended.)

**① Build the Flink HA cluster image from a Dockerfile:**

mkdir flink-build
cd flink-build/
vim Dockerfile

Dockerfile contents:

###############################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################

FROM openjdk:8-jre

# Install dependencies
RUN set -ex; \
  apt-get update; \
  apt-get -y install libsnappy1v5 gettext-base libjemalloc-dev; \
  rm -rf /var/lib/apt/lists/*

# Grab gosu for easy step-down from root
ENV GOSU_VERSION 1.11
RUN set -ex; \
  wget -nv -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)"; \
  wget -nv -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc"; \
  export GNUPGHOME="$(mktemp -d)"; \
  for server in ha.pool.sks-keyservers.net $(shuf -e \
                          hkp://p80.pool.sks-keyservers.net:80 \
                          keyserver.ubuntu.com \
                          hkp://keyserver.ubuntu.com:80 \
                          pgp.mit.edu) ; do \
      gpg --batch --keyserver "$server" --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && break || : ; \
  done && \
  gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
  gpgconf --kill all; \
  rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
  chmod +x /usr/local/bin/gosu; \
  gosu nobody true

# Configure Flink version
ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.5/flink-1.13.5-bin-scala_2.12.tgz \
    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.5/flink-1.13.5-bin-scala_2.12.tgz.asc \
    GPG_KEY=CCFA852FD039380AB3EC36D08C3FB007FE60DEFA \
    CHECK_GPG=false

# Prepare environment
ENV FLINK_HOME=/opt/flink
ENV PATH=$FLINK_HOME/bin:$PATH
RUN groupadd --system --gid=9999 flink && \
    useradd --system --home-dir $FLINK_HOME --uid=9999 --gid=flink flink
WORKDIR $FLINK_HOME

# Install Flink
RUN set -ex; \
  wget -nv -O flink.tgz "$FLINK_TGZ_URL"; \
  \
  if [ "$CHECK_GPG" = "true" ]; then \
    wget -nv -O flink.tgz.asc "$FLINK_ASC_URL"; \
    export GNUPGHOME="$(mktemp -d)"; \
    for server in ha.pool.sks-keyservers.net $(shuf -e \
                            hkp://p80.pool.sks-keyservers.net:80 \
                            keyserver.ubuntu.com \
                            hkp://keyserver.ubuntu.com:80 \
                            pgp.mit.edu) ; do \
        gpg --batch --keyserver "$server" --recv-keys "$GPG_KEY" && break || : ; \
    done && \
    gpg --batch --verify flink.tgz.asc flink.tgz; \
    gpgconf --kill all; \
    rm -rf "$GNUPGHOME" flink.tgz.asc; \
  fi; \
  \
  tar -xf flink.tgz --strip-components=1; \
  rm flink.tgz; \
  \
  chown -R flink:flink .; \
  \
  mkdir /entrypoint \
  && chmod 777 /entrypoint

# Configure container
COPY docker-entrypoint.sh /entrypoint
ENTRYPOINT ["/entrypoint/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]

Create the entrypoint script and make it executable:

vim docker-entrypoint.sh
chmod 777 docker-entrypoint.sh

docker-entrypoint.sh contents:

#!/usr/bin/env bash

###############################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
###############################################################################

JOB_MANAGER_RPC_ADDRESS=${JOB_MANAGER_RPC_ADDRESS:-$(hostname -f)}
CONF_FILE="${FLINK_HOME}/conf/flink-conf.yaml"
master=$(hostname -f)

drop_privs_cmd() {
    if [ $(id -u) != 0 ]; then
        # Don't need to drop privs if EUID != 0
        return
    elif [ -x /sbin/su-exec ]; then
        # Alpine
        echo su-exec flink
    else
        # Others
        echo gosu flink
    fi
}

if [ "$1" = "help" ]; then
    echo "Usage: $(basename "$0") (jobmanager|taskmanager|help)"
    exit 0
elif [ "$1" = "jobmanager" ]; then
    shift 1
    echo "Starting Job Manager"

    if grep -E "^jobmanager\.rpc\.address:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}/g" "${CONF_FILE}"
    else
        echo "jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}" >> "${CONF_FILE}"
    fi

    if grep -E "^blob\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/blob\.server\.port:.*/blob.server.port: 6124/g" "${CONF_FILE}"
    else
        echo "blob.server.port: 6124" >> "${CONF_FILE}"
    fi

    if grep -E "^query\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/query\.server\.port:.*/query.server.port: 6125/g" "${CONF_FILE}"
    else
        echo "query.server.port: 6125" >> "${CONF_FILE}"
    fi

    if [ -n "${FLINK_PROPERTIES}" ]; then
        echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}"
    fi
    envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}"

    echo "config file: " && grep '^[^\n#]' "${CONF_FILE}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start-foreground "$@"
elif [ "$1" = "taskmanager" ]; then
    shift 1
    echo "Starting Task Manager"

    TASK_MANAGER_NUMBER_OF_TASK_SLOTS=${TASK_MANAGER_NUMBER_OF_TASK_SLOTS:-$(grep -c ^processor /proc/cpuinfo)}

    if grep -E "^jobmanager\.rpc\.address:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}/g" "${CONF_FILE}"
    else
        echo "jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}" >> "${CONF_FILE}"
    fi

    if grep -E "^taskmanager\.numberOfTaskSlots:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/taskmanager\.numberOfTaskSlots:.*/taskmanager.numberOfTaskSlots: ${TASK_MANAGER_NUMBER_OF_TASK_SLOTS}/g" "${CONF_FILE}"
    else
        echo "taskmanager.numberOfTaskSlots: ${TASK_MANAGER_NUMBER_OF_TASK_SLOTS}" >> "${CONF_FILE}"
    fi

    if grep -E "^blob\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/blob\.server\.port:.*/blob.server.port: 6124/g" "${CONF_FILE}"
    else
        echo "blob.server.port: 6124" >> "${CONF_FILE}"
    fi

    if grep -E "^query\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/query\.server\.port:.*/query.server.port: 6125/g" "${CONF_FILE}"
    else
        echo "query.server.port: 6125" >> "${CONF_FILE}"
    fi

    if [ -n "${FLINK_PROPERTIES}" ]; then
        echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}"
    fi
    envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}"

    echo "config file: " && grep '^[^\n#]' "${CONF_FILE}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start-foreground "$@"

elif [ "$1" = "jobmanagerHA" ]; then
    shift 1
    echo "Starting job Manager HA"

    if grep -E "^jobmanager\.rpc\.address:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}/g" "${CONF_FILE}"
    else
        echo "jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}" >> "${CONF_FILE}"
    fi

    if grep -E "^blob\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/blob\.server\.port:.*/blob.server.port: 6124/g" "${CONF_FILE}"
    else
        echo "blob.server.port: 6124" >> "${CONF_FILE}"
    fi

    if grep -E "^query\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/query\.server\.port:.*/query.server.port: 6125/g" "${CONF_FILE}"
    else
        echo "query.server.port: 6125" >> "${CONF_FILE}"
    fi

    if [ -n "${FLINK_PROPERTIES}" ]; then
        echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}"
    fi
    envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}"

    echo "config file: " && grep '^[^\n#]' "${CONF_FILE}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start-foreground ${master} 8081 "$@"
fi

exec "$@"

Create the build script and make it executable:

vim build.sh
chmod +x build.sh

build.sh contents:

#!/bin/sh
sudo docker build -t "flink-ha:1.13.5-2.12" .

Build the image:

./build.sh

(If the build errors out and cannot complete, use method ② instead.)

**② Create the Flink HA image directly from an image file:**

Download the image file flink-ha-1.13.5-2.12.docker via the provided link and put it in the flink-build directory:

[pan@monitor flink-build]$ ll
total 617836
-rw-r--r--. 1 pan root        50 Mar 16 17:24 build.sh
-rw-r--r--. 1 pan root      5490 Mar 16 17:24 docker-entrypoint.sh
-rw-r--r--. 1 pan root      3680 Mar 16 17:24 Dockerfile
-rw-r--r--. 1 pan root 632644608 Mar 16 17:26 flink-ha-1.13.5-2.12.docker

Then load the image:

[pan@monitor flink-build]$ sudo docker load -i flink-ha-1.13.5-2.12.docker
0b0f2f2f5279: Loading layer  129.1MB/129.1MB
6398d5cccd2c: Loading layer   11.3MB/11.3MB
bed676ceab7a: Loading layer  19.31MB/19.31MB
2a807c7d273c: Loading layer  12.31MB/12.31MB
077f1da1f776: Loading layer  3.584kB/3.584kB
e2648222da93: Loading layer  108.2MB/108.2MB
8eb98ed999ea: Loading layer  5.458MB/5.458MB
1322f726cb72: Loading layer    2.3MB/2.3MB
0c06ab48594c: Loading layer  3.254MB/3.254MB
b43bfb02b0ea: Loading layer  2.048kB/2.048kB
4e6cb678dcca: Loading layer  341.3MB/341.3MB
6d59091f84ce: Loading layer   7.68kB/7.68kB
Loaded image: flink-ha:1.13.5-2.12
[pan@monitor flink-build]$ sudo docker image ls
REPOSITORY   TAG           IMAGE ID       CREATED       SIZE
flink-ha     1.13.5-2.12   79740047399e   2 weeks ago   625MB

Compose the cluster:

mkdir docker-flink
cd docker-flink
vim docker-compose.yml

docker-compose.yml contents:

version: '3'

networks:
  default:
    name: flink_network

services:
  jobmanager1:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: jobmanager1
    hostname: jobmanager1
    ports:
      - 8081:8081
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: jobmanagerHA

  jobmanager2:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: jobmanager2
    hostname: jobmanager2
    ports:
      - 8082:8081
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: jobmanagerHA

  taskmanager1:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: taskmanager1
    hostname: taskmanager1
    depends_on:
      - jobmanager1
      - jobmanager2
    links:
      - jobmanager1
      - jobmanager2
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: taskmanager

  taskmanager2:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: taskmanager2
    hostname: taskmanager2
    depends_on:
      - jobmanager1
      - jobmanager2
    links:
      - jobmanager1
      - jobmanager2
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: taskmanager

Create the mount directories:

[pan@monitor docker-flink]$ mkdir conf lib

Edit the configuration file flink-conf.yaml:

[pan@monitor docker-flink]$ vim conf/flink-conf.yaml

flink-conf.yaml contents:

################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
# limitations under the License.
################################################################################


#==============================================================================
# Common
#==============================================================================

# The external address of the host on which the JobManager runs and can be
# reached by the TaskManagers and any clients which want to connect. This setting
# is only used in Standalone mode and may be overwritten on the JobManager side
# by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
# In high availability mode, if you use the bin/start-cluster.sh script and setup
# the conf/masters file, this will be taken care of automatically. Yarn/Mesos
# automatically configure the host name based on the hostname of the node where the
# JobManager runs.

jobmanager.rpc.address: jobmanager1

# The RPC port where the JobManager is reachable.

jobmanager.rpc.port: 6123


# The total process memory size for the JobManager.
#
# Note this accounts for all memory usage within the JobManager process, including JVM metaspace and other overhead.

jobmanager.memory.process.size: 1024m


# The total process memory size for the TaskManager.
#
# Note this accounts for all memory usage within the TaskManager process, including JVM metaspace and other overhead.

taskmanager.memory.process.size: 1024m

# To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
# It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
#
# taskmanager.memory.flink.size: 1280m

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.

taskmanager.numberOfTaskSlots: 2

# The parallelism used for programs that did not specify and other parallelism.

parallelism.default: 1

# The default file system scheme and authority.
# 
# By default file paths without scheme are interpreted relative to the local
# root file system 'file:///'. Use this to override the default and interpret
# relative paths relative to a different file system,
# for example 'hdfs://mynamenode:12345'
#
# fs.default-scheme

#==============================================================================
# High Availability
#==============================================================================

# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#
high-availability: zookeeper

# The path where metadata for master recovery is persisted. While ZooKeeper stores
# the small ground truth for checkpoint and leader election, this location stores
# the larger objects, like persisted dataflow graphs.
# 
# Must be a durable file system that is accessible from all nodes
# (like HDFS, S3, Ceph, nfs, ...) 
#
high-availability.storageDir: hdfs://namenode:9000/flink/ha/

# The list of ZooKeeper quorum peers that coordinate the high-availability
# setup. This must be a list of the form:
# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#
high-availability.zookeeper.quorum: zoo1:2181,zoo2:2181,zoo3:2181


# ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
# It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
# The default value is "open" and it can be changed to "creator" if ZK security is enabled
#
# high-availability.zookeeper.client.acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled.
#
# Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
# <class-name-of-factory>.
#
state.backend: filesystem

# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
state.checkpoints.dir: hdfs://namenode:9000/flink-checkpoints

# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints

# Flag to enable/disable incremental checkpoints for backends that
# support incremental checkpoints (like the RocksDB state backend). 
#
# state.backend.incremental: false

# The failover strategy, i.e., how the job computation recovers from task failures.
# Only restart tasks that may have been affected by the task failure, which typically includes
# downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.

jobmanager.execution.failover-strategy: region

#==============================================================================
# Rest & web frontend
#==============================================================================

# The port to which the REST client connects to. If rest.bind-port has
# not been specified, then the server will bind to this port as well.
#
#rest.port: 8081

# The address to which the REST client will connect to
#
#rest.address: 0.0.0.0

# Port range for the REST and web server to bind to.
#
#rest.bind-port: 8080-8090

# The address that the REST & web server binds to
#
#rest.bind-address: 0.0.0.0

# Flag to specify whether job submission is enabled from the web-based
# runtime monitor. Uncomment to disable.

#web.submit.enable: false

#==============================================================================
# Advanced
#==============================================================================

# Override the directories for temporary files. If not specified, the
# system-specific Java temporary directory (java.io.tmpdir property) is taken.
#
# For framework setups on Yarn or Mesos, Flink will automatically pick up the
# containers' temp directories without any need for configuration.
#
# Add a delimited list for multiple directories, using the system directory
# delimiter (colon ':' on unix) or a comma, e.g.:
#     /data1/tmp:/data2/tmp:/data3/tmp
#
# Note: Each directory entry is read from and written to by a different I/O
# thread. You can include the same directory multiple times in order to create
# multiple I/O threads against that directory. This is for example relevant for
# high-throughput RAIDs.
#
# io.tmp.dirs: /tmp

# The classloading resolve order. Possible values are 'child-first' (Flink's default)
# and 'parent-first' (Java's default).
#
# Child first classloading allows users to use different dependency/library
# versions in their application than those in the classpath. Switching back
# to 'parent-first' may help with debugging dependency issues.
#
# classloader.resolve-order: child-first

# The amount of memory going to the network stack. These numbers usually need 
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
# 
# taskmanager.memory.network.fraction: 0.1
# taskmanager.memory.network.min: 64mb
# taskmanager.memory.network.max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# The below configure how Kerberos credentials are provided. A keytab will be used instead of
# a ticket cache if the keytab path and principal are set.

# security.kerberos.login.use-ticket-cache: true
# security.kerberos.login.keytab: /path/to/kerberos/keytab
# security.kerberos.login.principal: flink-user

# The configuration below defines which JAAS login contexts

# security.kerberos.login.contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# Below configurations are applicable if ZK ensemble is configured for security

# Override below configuration to provide custom ZK service name if configured
# zookeeper.sasl.service-name: zookeeper

# The configuration below must match one of the values set in "security.kerberos.login.contexts"
# zookeeper.sasl.login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)

# Directory to upload completed jobs to. Add this directory to the list of
# monitored directories of the HistoryServer as well (see below).
#jobmanager.archive.fs.dir: hdfs:///completed-jobs/

# The address under which the web-based HistoryServer listens.
#historyserver.web.address: 0.0.0.0

# The port under which the web-based HistoryServer listens.
#historyserver.web.port: 8082

# Comma separated list of directories to monitor for completed jobs.
#historyserver.archive.fs.dir: hdfs:///completed-jobs/

# Interval in milliseconds for refreshing the monitored directories.
#historyserver.archive.fs.refresh-interval: 10000

blob.server.port: 6124
query.server.port: 6125

Put the Hadoop dependency flink-shaded-hadoop-2-uber-2.8.3-10.0.jar into the lib directory:

[pan@monitor docker-flink]$ cd lib/
[pan@monitor lib]$ ll
total 42304
-rw-r--r--. 1 pan root 43317025 Mar 16 17:24 flink-shaded-hadoop-2-uber-2.8.3-10.0.jar

### 10. Start the Flink HA cluster

Fix Docker permissions for the regular user:

  • Add the user to the docker group: sudo usermod -aG docker pan (log out and back in for this to take effect)
  • Or relax access to the socket: sudo chmod 666 /var/run/docker.sock

① Start the ZooKeeper containers

cd docker-zookeeper/
docker-compose up -d
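You can check that the ensemble formed by querying each node's status; one node should report leader and the other two follower (zkServer.sh is on the PATH of the official zookeeper image):

```sh
for node in zoo1 zoo2 zoo3; do
  docker exec "$node" zkServer.sh status
done
```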

② Start the Hadoop containers

cd docker-hadoop/
docker-compose up -d

By default this starts 2 DataNodes and 1 NodeManager (a quick HDFS health check follows the scaling notes below).

To scale out, you can start N NodeManagers with the scale option:

docker-compose up --scale nodemanager=N -d

Starting N DataNodes, however, requires editing docker-compose.yml:

datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      - ./hadoop/dfs/data1:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env

datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    restart: always
    volumes:
      - ./hadoop/dfs/data2:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env

# and so on for datanode3 and beyond
datanode3:
	...

# then add the new datanodes to the SERVICE_PRECONDITION of resourcemanager, nodemanager and historyserver
resourcemanager:
    ...
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 datanode3:9864 ..."
    ...

nodemanager:
    ...
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 datanode3:9864 ... resourcemanager:8088"
    ...

historyserver:
    ...
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 datanode3:9864 ... resourcemanager:8088"

Note: each DataNode must mount its own data directory; e.g. datanode1's volumes must be set to ./hadoop/dfs/data1:/hadoop/dfs/data, while datanode2's volumes must be set to ./hadoop/dfs/data2:/hadoop/dfs/data.
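Before moving on, it is worth confirming that the NameNode sees all DataNodes (a quick check; the hdfs binary is on the PATH of the bde2020 images):

```sh
# should report "Live datanodes (2)" with the default layout
docker exec namenode hdfs dfsadmin -report | grep 'Live datanodes'
```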

③ Start the Flink HA containers

cd docker-flink/
docker-compose up -d

④ Verify that everything started

[pan@monitor docker-flink]$ docker ps
CONTAINER ID   IMAGE                                                    COMMAND                  CREATED          STATUS                        PORTS                                                                                  NAMES
f1801f2ef035   flink-ha:1.13.5-2.12                                     "/entrypoint/docker-…"   41 seconds ago   Up 36 seconds                 6123/tcp, 8081/tcp                                                                     taskmanager1
d81c77392652   flink-ha:1.13.5-2.12                                     "/entrypoint/docker-…"   41 seconds ago   Up 36 seconds                 6123/tcp, 8081/tcp                                                                     taskmanager2
15bedddbfd49   flink-ha:1.13.5-2.12                                     "/entrypoint/docker-…"   41 seconds ago   Up 36 seconds                 6123/tcp, 8081/tcp                                                                     taskmanager3
15dc634cd676   flink-ha:1.13.5-2.12                                     "/entrypoint/docker-…"   42 seconds ago   Up 38 seconds                 6123/tcp, 0.0.0.0:8082->8081/tcp, :::8082->8081/tcp                                    jobmanager2
91a8f0a024c4   flink-ha:1.13.5-2.12                                     "/entrypoint/docker-…"   42 seconds ago   Up 38 seconds                 6123/tcp, 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp                                    jobmanager1
8dd0dafefa76   bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8       "/entrypoint.sh /run…"   2 minutes ago    Up 2 minutes (healthy)        8042/tcp                                                                               docker-hadoop-nodemanager-1
20c308d43594   bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8   "/entrypoint.sh /run…"   2 minutes ago    Up About a minute (healthy)   8088/tcp                                                                               resourcemanager
1fa878977b09   bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8     "/entrypoint.sh /run…"   2 minutes ago    Up 2 minutes (healthy)        8188/tcp                                                                               historyserver
cd07cbea1837   bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8          "/entrypoint.sh /run…"   2 minutes ago    Up 2 minutes (healthy)        9864/tcp                                                                               datanode1
09b1beef7d33   bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8          "/entrypoint.sh /run…"   2 minutes ago    Up 2 minutes (healthy)        0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9870->9870/tcp, :::9870->9870/tcp   namenode
3c954daf4062   bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8          "/entrypoint.sh /run…"   2 minutes ago    Up 2 minutes (healthy)        9864/tcp                                                                               datanode2
24aac3957ad8   bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8       "/entrypoint.sh /run…"   2 minutes ago    Up 2 minutes (healthy)        8042/tcp                                                                               docker-hadoop-nodemanager-2
a60f1554b6bc   zookeeper                                                "/docker-entrypoint.…"   3 minutes ago    Up 3 minutes                  2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp, :::2182->2181/tcp                zoo2
d858a4026074   zookeeper                                                "/docker-entrypoint.…"   3 minutes ago    Up 3 minutes                  2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp                zoo1
dc7f3fa0de62   zookeeper                                                "/docker-entrypoint.…"   3 minutes ago    Up 3 minutes                  2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp, :::2183->2181/tcp                zoo3

Open ports 9870 and 8081/8082 in a browser to check the web UIs.
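The same check can be scripted with curl; /overview is an endpoint of Flink's monitoring REST API:

```sh
# NameNode web UI should return 200
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9870

# Flink cluster overview: taskmanagers, slots, running jobs
curl -s http://localhost:8081/overview
```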

### 11. Install MySQL 5.7 with Docker

Pull the official mysql:5.7 image:

docker pull mysql:5.7

Create the local mount directories for MySQL:

mkdir -p ~/mysql/data ~/mysql/logs ~/mysql/conf

Create a *.cnf file (any name works) in ~/mysql/conf:

vim ~/mysql/conf/my.cnf

my.cnf contents (enable binlog):

[mysqld]
server-id = 12345
log-bin = mysql-bin
binlog_format = ROW
binlog_row_image  = FULL
expire_logs_days  = 10

Create the container:

docker run \
-d \
--name mysql \
-p 3306:3306 \
--restart unless-stopped \
-v ~/mysql/conf:/etc/mysql/conf.d \
-v ~/mysql/logs:/logs \
-v ~/mysql/data:/var/lib/mysql \
-e MYSQL_ROOT_PASSWORD=root \
mysql:5.7
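Once the container is up, you can confirm that binlog is actually enabled, using the root password set above:

```sh
docker exec mysql mysql -uroot -proot -e "SHOW VARIABLES LIKE 'log_bin';"
# expected: log_bin | ON
```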

Table creation statements:

CREATE DATABASE log_db;

USE log_db;

CREATE TABLE `comment_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `product_id` varchar(30) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `point` int(11) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='product comment log table';

CREATE TABLE `action_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `ip` varchar(30) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `action` varchar(10) NOT NULL,
  `status` varchar(10) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='user action log table';

CREATE TABLE `order_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `order_id` bigint(20) NOT NULL,
  `type` varchar(10) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `product_id` varchar(30) NOT NULL,
  `number` int(11) NOT NULL,
  `price` int(11) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='order log table';

CREATE TABLE `visit_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `website_id` varchar(30) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `ip` varchar(30) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='site visit log table';
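One way to run this DDL without installing a MySQL client on the host is to pipe it into the container (a sketch; init.sql is a hypothetical local file holding the statements above):

```sh
docker exec -i mysql mysql -uroot -proot < init.sql
```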

### 12. Install Redis with Docker

Pull the official redis:6.2.6 image:

docker pull redis:6.2.6

Create the local mount directories for Redis:

mkdir -p ~/redis/conf ~/redis/data

Create the Redis configuration file:

touch ~/redis/conf/redis.conf

Starting from the default redis.conf (you can get it by downloading redis-6.2.6.tar.gz from the official site and extracting it), change the following settings:

| Setting | Effect |
| --- | --- |
| appendonly yes | enable Redis persistence (default no: all data is kept in memory and lost on restart) |
| requirepass 123456 | set a password; this project uses 123456 |
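If you would rather not fetch the full default file, a minimal redis.conf containing only these two settings also works, since Redis falls back to built-in defaults for everything else:

```sh
cat > ~/redis/conf/redis.conf <<'EOF'
appendonly yes
requirepass 123456
EOF
```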

Create and start the Redis container:

docker run \
-d \
--name redis \
-p 6379:6379 \
--restart unless-stopped \
-v ~/redis/data:/data \
-v ~/redis/conf/redis.conf:/etc/redis/redis.conf \
redis:6.2.6 \
redis-server /etc/redis/redis.conf
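Verify through the container with redis-cli, authenticating with the password configured above:

```sh
docker exec -it redis redis-cli -a 123456 ping
# expected: PONG
```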

### 13. Deploy the frontend

Note: the frontend uses the Baidu Maps API, so the map will not render without internet access, and you need to apply for your own API key in the Baidu Maps console and replace the key used by this project (tutorial: https://lbsyun.baidu.com/index.php?title=FAQ/obtainAK).

Replace the API key:

vim monitor-ui/src/components/action/OnlineMonitorMap.vue
# on line 205, replace the key inside the quotes of loadBMap("xxxxxxx").then with the key you applied for

Build the project:

cd monitor-ui/
vim src/api/websocket.js
# change the IP address in let ws = new WebSocket('ws://xxx.xxx.xxx.xxx:8888/websocket') to the host IP
npm install
npm run build
sudo mv dist /usr/local/nginx/html/

Configure nginx:

sudo vim /usr/local/nginx/conf/nginx.conf

Edit the server block in nginx.conf:

server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   /usr/local/nginx/html/dist;
            index  index.html index.htm;
        }

Reload nginx:

cd /usr/local/nginx/sbin/
sudo ./nginx -s reload

Visit the host IP in a browser (port 80 by default) and check that the page renders correctly.

### 14. Deploy the backend

Build the project and run it in the background:

cd monitor-backend/
vim src/resources/application.properties
# change the IP address after spring.redis.host to the host IP
mvn package
cd target/
nohup java -jar monitor_backend-0.0.1-SNAPSHOT.jar &
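nohup sends the service output to nohup.out by default; tail it to confirm the Spring Boot application came up:

```sh
tail -f nohup.out
```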

### 15. Submit the Flink jobs

Package the Flink jobs into jars:

cd flink-tasks/
mvn package

Submit the jars from target/ via the web UI (port 8081 or 8082) or the command line (a CLI sketch follows the parameter list below) and start the jobs.

Required parameter:

-ip  host IP address (i.e. the address of MySQL and Redis)

Optional parameters (each has a default; adjust to your situation):

comment_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar (main class CommentMonitor):

  -w   tumbling-window size, i.e. the interval for counting positive/negative comments (default 5 s)
  -nc  count threshold for the malicious negative-comment alert (default 10)
  -nt  time window for the malicious negative-comment alert (default 5 s)
  -pc  count threshold for the fake positive-comment alert (default 10)
  -pt  time window for the fake positive-comment alert (default 5 s)

visit_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar (main class VisitMonitor):

  -w   tumbling-window size, i.e. the interval for counting PV/UV/IP visits (default 2 s)
  -pc  count threshold for the PV alert (default 26)
  -pt  time window for the PV alert (default 2 s)
  -uc  count threshold for the frequent-user-access alert (default 5)
  -ut  time window for the frequent-user-access alert (default 2 s)
  -ic  count threshold for the frequent-IP-access alert (default 5)
  -it  time window for the frequent-IP-access alert (default 2 s)

order_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar (main class OrderMonitor):

  -w   tumbling-window size, i.e. the interval for counting successful/expired/failed orders (default 1 s)
  -c   count threshold for the order-scalping alert (default 7)
  -t   time window for the order-scalping alert (default 5 s)

action_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar (main class ActionMonitor):

  -ot  update interval of the online-user map (default 5 s)
  -lc  count threshold for the frequent-login-failure alert (default 4)
  -lt  time window for the frequent-login-failure alert (default 5 s)
  -rc  count threshold for the frequent-registration alert (default 14)
  -rt  time window for the frequent-registration alert (default 5 s)
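For the command-line route, one option is to run the Flink client inside a JobManager container (a sketch: it assumes you copy the jar into the container first, 192.168.1.100 stands in for your host IP, and the -c value comes from the main-class list above; if the jar's manifest already declares the main class, -c can be omitted):

```sh
# copy the packaged job into the running JobManager container
docker cp target/comment_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar jobmanager1:/tmp/

# submit it with the Flink CLI; -ip is the job's own parameter, not a Flink flag
docker exec jobmanager1 flink run -c CommentMonitor \
  /tmp/comment_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar -ip 192.168.1.100
```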

### 16. Generate random test data

Build the project:

cd data-generate/
mvn package

Move all the jars from target/ into the generator-jars directory:

[pan@monitor generator-jars]$ ll
total 4944
-rw-r--r--. 1 pan root 1008860 Apr  3 17:26 action_log_generator-1.0-SNAPSHOT.jar
-rw-r--r--. 1 pan root 1008351 Apr  3 17:26 comment_log_generator-1.0-SNAPSHOT.jar
-rwxr-xr-x. 1 pan root    3844 Apr  3 17:26 data-generator.sh
-rw-r--r--. 1 pan root 1007247 Apr  3 17:26 data_reset-1.0-SNAPSHOT.jar
-rw-r--r--. 1 pan root 1012416 Apr  3 17:26 order_log_generator-1.0-SNAPSHOT.jar
-rw-r--r--. 1 pan root 1008916 Apr  3 17:26 visit_log_generator-1.0-SNAPSHOT.jar

Write a shell script, data-generator.sh, that starts and stops the jars in bulk:

#!/bin/bash
# bash required: the script uses [[ ]] tests

export COMMENT=comment_log_generator-1.0-SNAPSHOT.jar
export VISIT=visit_log_generator-1.0-SNAPSHOT.jar
export ORDER=order_log_generator-1.0-SNAPSHOT.jar
export ACTION=action_log_generator-1.0-SNAPSHOT.jar
export RESET=data_reset-1.0-SNAPSHOT.jar

export IP=$2
export USERNAME=$3
export PASSWORD=$4   # renamed from PWD: PWD is a shell builtin and must not be overwritten

case "$1" in

start)
        ## check that all arguments were supplied
        if [[ -z $2 || -z $3 || -z $4 ]]
        then
            echo "Arguments must not be empty. Usage: ./data-generator.sh start ip username password"
            exit 0
        else
            echo "database ip: $IP"
            echo "database username: $USERNAME"
            echo "database password: $PASSWORD"
        fi
        ## start COMMENT
        echo "--------starting COMMENT--------------"
        nohup java -jar $COMMENT $IP $USERNAME $PASSWORD >/dev/null 2>&1 &
        COMMENT_pid=`sudo lsof $COMMENT | grep "mem" | awk '{print $2}'`
        until [ -n "$COMMENT_pid" ]
            do
              COMMENT_pid=`sudo lsof $COMMENT | grep "mem" | awk '{print $2}'`
            done
        echo "COMMENT pid is $COMMENT_pid"
        echo "--------COMMENT started--------------"
		
        ## start VISIT
        echo "--------starting VISIT--------------"
        nohup java -jar $VISIT $IP $USERNAME $PASSWORD >/dev/null 2>&1 &
        VISIT_pid=`sudo lsof $VISIT | grep "mem" | awk '{print $2}'`
        until [ -n "$VISIT_pid" ]
            do
              VISIT_pid=`sudo lsof $VISIT | grep "mem" | awk '{print $2}'`
            done
        echo "VISIT pid is $VISIT_pid"
        echo "--------VISIT started--------------"

        ## start ORDER
        echo "--------starting ORDER--------------"
        nohup java -jar $ORDER $IP $USERNAME $PASSWORD >/dev/null 2>&1 &
        ORDER_pid=`sudo lsof $ORDER | grep "mem" | awk '{print $2}'`
        until [ -n "$ORDER_pid" ]
            do
              ORDER_pid=`sudo lsof $ORDER | grep "mem" | awk '{print $2}'`
            done
        echo "ORDER pid is $ORDER_pid"
        echo "--------ORDER started--------------"

        ## start ACTION
        echo "--------starting ACTION--------------"
        nohup java -jar $ACTION $IP $USERNAME $PASSWORD >/dev/null 2>&1 &
        ACTION_pid=`sudo lsof $ACTION | grep "mem" | awk '{print $2}'`
        until [ -n "$ACTION_pid" ]
            do
              ACTION_pid=`sudo lsof $ACTION | grep "mem" | awk '{print $2}'`
            done
        echo "ACTION pid is $ACTION_pid"
        echo "--------ACTION started--------------"
		
        echo "===startAll success==="
        ;;
 
stop)
        P_ID=`sudo lsof $COMMENT | grep "mem" | awk '{print $2}'`
        if [ "$P_ID" == "" ]; then
            echo "===COMMENT process does not exist or was already stopped==="
        else
            kill -9 $P_ID
            echo "COMMENT killed successfully"
        fi

        P_ID=`sudo lsof $VISIT | grep "mem" | awk '{print $2}'`
        if [ "$P_ID" == "" ]; then
            echo "===VISIT process does not exist or was already stopped==="
        else
            kill -9 $P_ID
            echo "VISIT killed successfully"
        fi

        P_ID=`sudo lsof $ORDER | grep "mem" | awk '{print $2}'`
        if [ "$P_ID" == "" ]; then
            echo "===ORDER process does not exist or was already stopped==="
        else
            kill -9 $P_ID
            echo "ORDER killed successfully"
        fi

        P_ID=`sudo lsof $ACTION | grep "mem" | awk '{print $2}'`
        if [ "$P_ID" == "" ]; then
            echo "===ACTION process does not exist or was already stopped==="
        else
            kill -9 $P_ID
            echo "ACTION killed successfully"
        fi

        echo "===stopAll success==="
        ;;

reset)
        echo "--------clearing data--------------"
        java -jar $RESET $IP $USERNAME $PASSWORD >/dev/null 2>&1
        echo "--------data cleared--------------"

esac
exit 0

Install the lsof command (the script depends on it):

sudo yum install -y lsof

Start generating test data:

cd generator-jars
chmod +x data-generator.sh
# start generating; arguments: database IP address, username and password
./data-generator.sh start <host-IP> root root
# stop generating
./data-generator.sh stop

The generated data is written continuously into the corresponding MySQL tables.

Once the whole pipeline is up, the frontend should now display live data.
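A quick way to confirm that data is flowing is to count rows directly, reusing the MySQL container and credentials from step 11:

```sh
docker exec mysql mysql -uroot -proot -e \
  "SELECT 'comment', COUNT(*) FROM log_db.comment_log
   UNION ALL SELECT 'visit',  COUNT(*) FROM log_db.visit_log
   UNION ALL SELECT 'order',  COUNT(*) FROM log_db.order_log
   UNION ALL SELECT 'action', COUNT(*) FROM log_db.action_log;"
```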

Product comment log table comment_log:

| Column | Type | Primary key | Constraints | Description |
| --- | --- | --- | --- | --- |
| log_id | bigint(20) | yes | not null, auto-increment | log ID |
| ts | bigint(20) | no | not null | comment timestamp |
| product_id | varchar(30) | no | not null | reviewed product ID |
| user_name | varchar(30) | no | not null | username |
| point | int(11) | no | not null | rating score |

Site visit log table visit_log:

| Column | Type | Primary key | Constraints | Description |
| --- | --- | --- | --- | --- |
| log_id | bigint(20) | yes | not null, auto-increment | log ID |
| ts | bigint(20) | no | not null | visit timestamp |
| website_id | varchar(30) | no | not null | visited site ID |
| user_name | varchar(30) | no | not null | username |
| ip | varchar(30) | no | not null | IP address |

Order log table order_log:

| Column | Type | Primary key | Constraints | Description |
| --- | --- | --- | --- | --- |
| log_id | bigint(20) | yes | not null, auto-increment | log ID |
| ts | bigint(20) | no | not null | log timestamp |
| order_id | bigint(20) | no | not null | order ID |
| type | varchar(10) | no | not null | log type (created, paid, failed) |
| user_name | varchar(30) | no | not null | username |
| product_id | varchar(30) | no | not null | product ID |
| number | int(11) | no | not null | purchase quantity |
| price | int(11) | no | not null | purchase price |

User action log table action_log:

| Column | Type | Primary key | Constraints | Description |
| --- | --- | --- | --- | --- |
| log_id | bigint(20) | yes | not null, auto-increment | log ID |
| ts | bigint(20) | no | not null | log timestamp |
| ip | varchar(30) | no | not null | IP address |
| user_name | varchar(30) | no | not null | username |
| action | varchar(10) | no | not null | action type (login, register) |
| status | varchar(10) | no | not null | result (success, failure) |

### 17. Clear data (data reset)

After the system has been running for several days, the data volume grows large enough to degrade performance; this can be resolved by clearing the database data.

Data-clearing command:

cd generator-jars
./data-generator.sh reset <host-IP> root root
![](https://gitee.com/TimeWentby18/real_time_monitor_system/raw/master/images/monitor.gif) 系统演示视频:https://v.superbed.cn/play/62427aa527f86abb2a345c42 以下为环境搭建过程: ### 0、前置环境 用VMware安装好虚拟机,系统版本为**centos7**,配置好网络,**关闭防火墙!!!** 虚拟机推荐配置**4核CPU,6G内存,30G硬盘**以上 安装vim 新建一个普通用户pan ### 1、安装jdk ①解压 ```sh [pan@monitor ~]$ ll total 220780 -rw-r--r-- 1 pan root 9046177 Mar 16 16:16 apache-maven-3.8.4-bin.tar.gz -rw-r--r-- 1 pan root 194042837 Mar 16 16:16 jdk-8u202-linux-x64.tar.gz -rw-r--r-- 1 pan root 1062124 Mar 16 16:16 nginx-1.20.2.tar.gz -rw-r--r-- 1 pan root 21918532 Mar 16 16:16 node-v16.14.0-linux-x64.tar.xz [pan@monitor ~]$ sudo mkdir /usr/local/java [pan@monitor ~]$ sudo mv jdk-8u202-linux-x64.tar.gz /usr/local/java [pan@monitor ~]$ cd /usr/local/java [pan@monitor java]$ ll total 189496 -rw-r--r-- 1 pan root 194042837 Mar 16 16:16 jdk-8u202-linux-x64.tar.gz [pan@monitor java]$ sudo tar -zxvf jdk-8u202-linux-x64.tar.gz ... [pan@monitor java]$ ll total 189496 drwxr-xr-x. 7 10 143 245 Dec 16 2018 jdk1.8.0_202 -rw-r--r--. 1 pan root 194042837 Mar 16 16:16 jdk-8u202-linux-x64.tar.gz [pan@monitor java]$ sudo rm -f jdk-8u202-linux-x64.tar.gz ``` ②配置环境变量 ```sh sudo vim /etc/profile ``` 文件末尾加上: ```sh export JAVA_HOME=/usr/local/java/jdk1.8.0_202 export JRE_HOME=${JAVA_HOME}/jre export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin export PATH=$PATH:${JAVA_PATH} ``` 保存退出: ```sh source /etc/profile ``` ③验证 ```sh [pan@monitor java]$ java -version java version "1.8.0_202" Java(TM) SE Runtime Environment (build 1.8.0_202-b08) Java HotSpot(TM) 64-Bit Server VM (build 25.202-b08, mixed mode) ``` ### 2、安装maven ①解压 ```sh [pan@monitor java]$ cd ~ [pan@monitor ~]$ sudo mkdir /usr/local/maven [pan@monitor ~]$ sudo mv apache-maven-3.8.4-bin.tar.gz /usr/local/maven/ [pan@monitor ~]$ cd /usr/local/maven/ [pan@monitor maven]$ ll total 8836 -rw-r--r--. 1 pan root 9046177 Mar 16 16:16 apache-maven-3.8.4-bin.tar.gz [pan@monitor maven]$ sudo tar -zxvf apache-maven-3.8.4-bin.tar.gz ... [pan@monitor maven]$ ll total 8836 drwxr-xr-x. 6 root root 99 Mar 16 16:35 apache-maven-3.8.4 -rw-r--r--. 
1 pan root 9046177 Mar 16 16:16 apache-maven-3.8.4-bin.tar.gz [pan@monitor maven]$ sudo rm -f apache-maven-3.8.4-bin.tar.gz ``` ②配置环境变量 ```sh sudo vim /etc/profile ``` 文件末尾加上: ```sh export MAVEN_HOME=/usr/local/maven/apache-maven-3.8.4 export PATH=$MAVEN_HOME/bin:$PATH ``` 保存退出: ```sh source /etc/profile ``` ③验证 ```sh [pan@monitor maven]$ mvn -v Apache Maven 3.8.4 (9b656c72d54e5bacbed989b64718c159fe39b537) Maven home: /usr/local/maven/apache-maven-3.8.4 Java version: 1.8.0_202, vendor: Oracle Corporation, runtime: /usr/local/java/jdk1.8.0_202/jre Default locale: en_US, platform encoding: UTF-8 OS name: "linux", version: "3.10.0-1160.el7.x86_64", arch: "amd64", family: "unix" ``` ④添加国内镜像,加快下载速度(可选) ```sh [pan@monitor maven]$ cd apache-maven-3.8.4/conf/ [pan@monitor conf]$ sudo vim settings.xml ``` 在mirrors标签中添加阿里镜像: ```xml <mirror> <id>alimaven</id> <mirrorOf>central</mirrorOf> <name>aliyun maven</name> <url>http://maven.aliyun.com/nexus/content/repositories/central/</url> </mirror> ``` ### 3、安装nginx ①配置nginx安装所需的环境 ```sh # 安装gcc sudo yum install -y gcc-c++ # 安装PCRE pcre-devel sudo yum install -y pcre pcre-devel # 安装zlib sudo yum install -y zlib zlib-devel # 安装Open SSL sudo yum install -y openssl openssl-devel ``` ②解压 ```sh cd ~ tar -zxvf nginx-1.20.2.tar.gz sudo mv nginx-1.20.2 /usr/local/nginx rm -f nginx-1.20.2.tar.gz ``` ③编译安装 ```sh cd /usr/local/nginx/ ./configure --prefix=/usr/local/nginx --with-http_gzip_static_module --with-http_ssl_module make make install mkdir logs chmod 700 logs ``` ④启动nginx ```sh cd /usr/local/nginx/sbin/ sudo ./nginx ``` 若无法访问到本机IP:80,则检查编译过程是否出错,是否已**关闭防火墙** ### 4、安装nodejs ①解压 ```sh cd ~ tar -xvf node-v16.14.0-linux-x64.tar.xz sudo mv node-v16.14.0-linux-x64 /usr/local/nodejs rm -f node-v16.14.0-linux-x64.tar.xz ``` ②配置环境变量 ```sh sudo vim /etc/profile ``` 文件末尾加上: ```sh export PATH=$PATH:/usr/local/nodejs/bin ``` 退出保存: ``` source /etc/profile ``` ③验证 ```sh [pan@monitor ~]$ node -v v16.14.0 [pan@monitor ~]$ npm -v 8.3.1 ``` ④设置淘宝镜像,加快下载速度(可选) ```sh npm config set registry https://registry.npm.taobao.org ``` ### 5、安装docker ①卸载旧版本 ```sh sudo yum remove docker \ docker-client \ docker-client-latest \ docker-common \ docker-latest \ docker-latest-logrotate \ docker-logrotate \ docker-engine ``` ②安装必要的一些系统工具 ```sh sudo yum install -y yum-utils device-mapper-persistent-data lvm2 ``` ③添加软件源信息 ```sh sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo ``` ④安装Docker-CE ```sh sudo yum makecache fast sudo yum -y install docker-ce ``` ⑤开启Docker服务 ```sh sudo service docker start ``` ⑥验证 ```sh [pan@monitor ~]$ sudo docker version Client: Docker Engine - Community Version: 20.10.13 API version: 1.41 Go version: go1.16.15 Git commit: a224086 Built: Thu Mar 10 14:09:51 2022 OS/Arch: linux/amd64 Context: default Experimental: true Server: Docker Engine - Community Engine: Version: 20.10.13 API version: 1.41 (minimum version 1.12) Go version: go1.16.15 Git commit: 906f57f Built: Thu Mar 10 14:08:16 2022 OS/Arch: linux/amd64 Experimental: false containerd: Version: 1.5.10 GitCommit: 2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc runc: Version: 1.0.3 GitCommit: v1.0.3-0-gf46b6ba docker-init: Version: 0.19.0 GitCommit: de40ad0 ``` ⑦配置镜像加速器(可选) 控制台https://cr.console.aliyun.com/cn-heyuan/instances/mirrors ```sh mkdir -p /etc/docker tee /etc/docker/daemon.json <<-'EOF' { "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"] } EOF sudo systemctl daemon-reload sudo systemctl restart docker ``` **镜像地址请替换成自己的,去阿里云控制台找对应的镜像地址xxxxxx** ### 
### 6、安装docker-compose

请使用root用户进行安装,否则权限不足

```sh
curl -L https://get.daocloud.io/docker/compose/releases/download/v2.2.3/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```

验证

```sh
[root@monitor ~]# docker-compose -v
Docker Compose version v2.2.3
```

### 7、docker-compose部署zookeeper集群

创建容器间互联的网络

```sh
sudo docker network create -d bridge flink_network
```

将docker-compose相关的文件夹准备好

```sh
mkdir docker-zookeeper
cd docker-zookeeper/
vim docker-compose.yml
```

docker-compose.yml内容:

```yaml
version: '3'
networks:
  default:
    name: flink_network
services:
  zoo1:
    image: zookeeper
    restart: always
    container_name: zoo1
    hostname: zoo1
    ports:
      - 2181:2181
    volumes:
      - ./data/zoo1/data:/data
      - ./data/zoo1/datalog:/datalog
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo2:
    image: zookeeper
    restart: always
    container_name: zoo2
    hostname: zoo2
    ports:
      - 2182:2181
    volumes:
      - ./data/zoo2/data:/data
      - ./data/zoo2/datalog:/datalog
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
  zoo3:
    image: zookeeper
    restart: always
    container_name: zoo3
    hostname: zoo3
    ports:
      - 2183:2181
    volumes:
      - ./data/zoo3/data:/data
      - ./data/zoo3/datalog:/datalog
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
```

### 8、docker-compose部署hadoop集群

```sh
mkdir docker-hadoop
cd docker-hadoop
touch hadoop.env
vim docker-compose.yml
```

docker-compose.yml内容:

```yaml
version: "3"
networks:
  default:
    name: flink_network
services:
  namenode:
    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8
    container_name: namenode
    hostname: namenode
    restart: always
    ports:
      - 9870:9870
      - 9000:9000
    volumes:
      - ./hadoop/dfs/name:/hadoop/dfs/name
    environment:
      - CLUSTER_NAME=hadoop
    env_file:
      - ./hadoop.env
  datanode1:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode1
    restart: always
    volumes:
      - ./hadoop/dfs/data1:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
  datanode2:
    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8
    container_name: datanode2
    restart: always
    volumes:
      - ./hadoop/dfs/data2:/hadoop/dfs/data
    environment:
      SERVICE_PRECONDITION: "namenode:9870"
    env_file:
      - ./hadoop.env
  resourcemanager:
    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8
    container_name: resourcemanager
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864"
    env_file:
      - ./hadoop.env
  nodemanager:
    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
    env_file:
      - ./hadoop.env
  historyserver:
    image: bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8
    container_name: historyserver
    restart: always
    environment:
      SERVICE_PRECONDITION: "namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 resourcemanager:8088"
    volumes:
      - ./hadoop/yarn/timeline:/hadoop/yarn/timeline
    env_file:
      - ./hadoop.env
```
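编排文件较长,容易敲错缩进。可以在启动前先校验一下语法,docker-compose会解析文件并输出最终配置(命令仅为示意):

```sh
# 在docker-hadoop目录下校验docker-compose.yml是否合法
docker-compose config
```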
编辑配置文件hadoop.env:

```sh
vim hadoop.env
```

写入以下内容:

```sh
CORE_CONF_fs_defaultFS=hdfs://namenode:9000
CORE_CONF_hadoop_http_staticuser_user=root
CORE_CONF_hadoop_proxyuser_hue_hosts=*
CORE_CONF_hadoop_proxyuser_hue_groups=*
CORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec

HDFS_CONF_dfs_webhdfs_enabled=true
HDFS_CONF_dfs_permissions_enabled=false
HDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false

YARN_CONF_yarn_log___aggregation___enable=true
YARN_CONF_yarn_log_server_url=http://historyserver:8188/applicationhistory/logs/
YARN_CONF_yarn_resourcemanager_recovery_enabled=true
YARN_CONF_yarn_resourcemanager_store_class=org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore
YARN_CONF_yarn_resourcemanager_scheduler_class=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___mb=8192
YARN_CONF_yarn_scheduler_capacity_root_default_maximum___allocation___vcores=4
YARN_CONF_yarn_resourcemanager_fs_state___store_uri=/rmstate
YARN_CONF_yarn_resourcemanager_system___metrics___publisher_enabled=true
YARN_CONF_yarn_resourcemanager_hostname=resourcemanager
YARN_CONF_yarn_resourcemanager_address=resourcemanager:8032
YARN_CONF_yarn_resourcemanager_scheduler_address=resourcemanager:8030
YARN_CONF_yarn_resourcemanager_resource__tracker_address=resourcemanager:8031
YARN_CONF_yarn_timeline___service_enabled=true
YARN_CONF_yarn_timeline___service_generic___application___history_enabled=true
YARN_CONF_yarn_timeline___service_hostname=historyserver
YARN_CONF_mapreduce_map_output_compress=true
YARN_CONF_mapred_map_output_compress_codec=org.apache.hadoop.io.compress.SnappyCodec
YARN_CONF_yarn_nodemanager_resource_memory___mb=8192
YARN_CONF_yarn_nodemanager_resource_cpu___vcores=8
YARN_CONF_yarn_nodemanager_disk___health___checker_max___disk___utilization___per___disk___percentage=98.5
YARN_CONF_yarn_nodemanager_remote___app___log___dir=/app-logs
YARN_CONF_yarn_nodemanager_aux___services=mapreduce_shuffle

MAPRED_CONF_mapreduce_framework_name=yarn
MAPRED_CONF_mapred_child_java_opts=-Xmx4096m
MAPRED_CONF_mapreduce_map_memory_mb=4096
MAPRED_CONF_mapreduce_reduce_memory_mb=8192
MAPRED_CONF_mapreduce_map_java_opts=-Xmx3072m
MAPRED_CONF_mapreduce_reduce_java_opts=-Xmx6144m
MAPRED_CONF_yarn_app_mapreduce_am_env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
MAPRED_CONF_mapreduce_map_env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
MAPRED_CONF_mapreduce_reduce_env=HADOOP_MAPRED_HOME=/opt/hadoop-3.2.1/
```
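hadoop.env中的变量名会由镜像的入口脚本翻译成对应的XML配置项。按bde2020系列镜像的一般约定(具体请以镜像文档为准):前缀决定写入哪个文件(CORE_CONF_→core-site.xml、HDFS_CONF_→hdfs-site.xml、YARN_CONF_→yarn-site.xml、MAPRED_CONF_→mapred-site.xml),其余部分中单下划线`_`还原为`.`,双下划线`__`还原为`_`,三下划线`___`还原为`-`。例如:

```sh
# 以下映射仅为示意
# YARN_CONF_yarn_log___aggregation___enable=true
#   -> yarn-site.xml 中的 yarn.log-aggregation-enable=true
# CORE_CONF_fs_defaultFS=hdfs://namenode:9000
#   -> core-site.xml 中的 fs.defaultFS=hdfs://namenode:9000
```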
### 9、docker-compose部署flink高可用(HA)集群

(由于Dockerfile使用了GitHub进行安装,可能不稳定,建议直接使用镜像文件安装)

**①利用Dockerfile创建flink高可用集群镜像:**

```sh
mkdir flink-build
cd flink-build/
vim Dockerfile
```

Dockerfile内容:

```dockerfile
###############################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
###############################################################################

FROM openjdk:8-jre

# Install dependencies
RUN set -ex; \
  apt-get update; \
  apt-get -y install libsnappy1v5 gettext-base libjemalloc-dev; \
  rm -rf /var/lib/apt/lists/*

# Grab gosu for easy step-down from root
ENV GOSU_VERSION 1.11
RUN set -ex; \
  wget -nv -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture)"; \
  wget -nv -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$(dpkg --print-architecture).asc"; \
  export GNUPGHOME="$(mktemp -d)"; \
  for server in ha.pool.sks-keyservers.net $(shuf -e \
                          hkp://p80.pool.sks-keyservers.net:80 \
                          keyserver.ubuntu.com \
                          hkp://keyserver.ubuntu.com:80 \
                          pgp.mit.edu) ; do \
      gpg --batch --keyserver "$server" --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4 && break || : ; \
  done && \
  gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
  gpgconf --kill all; \
  rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
  chmod +x /usr/local/bin/gosu; \
  gosu nobody true

# Configure Flink version
ENV FLINK_TGZ_URL=https://www.apache.org/dyn/closer.cgi?action=download&filename=flink/flink-1.13.5/flink-1.13.5-bin-scala_2.12.tgz \
    FLINK_ASC_URL=https://www.apache.org/dist/flink/flink-1.13.5/flink-1.13.5-bin-scala_2.12.tgz.asc \
    GPG_KEY=CCFA852FD039380AB3EC36D08C3FB007FE60DEFA \
    CHECK_GPG=false

# Prepare environment
ENV FLINK_HOME=/opt/flink
ENV PATH=$FLINK_HOME/bin:$PATH
RUN groupadd --system --gid=9999 flink && \
    useradd --system --home-dir $FLINK_HOME --uid=9999 --gid=flink flink
WORKDIR $FLINK_HOME

# Install Flink
RUN set -ex; \
  wget -nv -O flink.tgz "$FLINK_TGZ_URL"; \
  \
  if [ "$CHECK_GPG" = "true" ]; then \
    wget -nv -O flink.tgz.asc "$FLINK_ASC_URL"; \
    export GNUPGHOME="$(mktemp -d)"; \
    for server in ha.pool.sks-keyservers.net $(shuf -e \
                            hkp://p80.pool.sks-keyservers.net:80 \
                            keyserver.ubuntu.com \
                            hkp://keyserver.ubuntu.com:80 \
                            pgp.mit.edu) ; do \
        gpg --batch --keyserver "$server" --recv-keys "$GPG_KEY" && break || : ; \
    done && \
    gpg --batch --verify flink.tgz.asc flink.tgz; \
    gpgconf --kill all; \
    rm -rf "$GNUPGHOME" flink.tgz.asc; \
  fi; \
  \
  tar -xf flink.tgz --strip-components=1; \
  rm flink.tgz; \
  \
  chown -R flink:flink .; \
  \
  mkdir /entrypoint \
  && chmod 777 /entrypoint

# Configure container
COPY docker-entrypoint.sh /entrypoint
ENTRYPOINT ["/entrypoint/docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
```

```sh
vim docker-entrypoint.sh
chmod 777 docker-entrypoint.sh
```

docker-entrypoint.sh内容:

```sh
#!/usr/bin/env bash
###############################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
###############################################################################

JOB_MANAGER_RPC_ADDRESS=${JOB_MANAGER_RPC_ADDRESS:-$(hostname -f)}
CONF_FILE="${FLINK_HOME}/conf/flink-conf.yaml"
master=$(hostname -f)

drop_privs_cmd() {
    if [ $(id -u) != 0 ]; then
        # Don't need to drop privs if EUID != 0
        return
    elif [ -x /sbin/su-exec ]; then
        # Alpine
        echo su-exec flink
    else
        # Others
        echo gosu flink
    fi
}

if [ "$1" = "help" ]; then
    echo "Usage: $(basename "$0") (jobmanager|taskmanager|help)"
    exit 0
elif [ "$1" = "jobmanager" ]; then
    shift 1
    echo "Starting Job Manager"

    if grep -E "^jobmanager\.rpc\.address:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}/g" "${CONF_FILE}"
    else
        echo "jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}" >> "${CONF_FILE}"
    fi

    if grep -E "^blob\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/blob\.server\.port:.*/blob.server.port: 6124/g" "${CONF_FILE}"
    else
        echo "blob.server.port: 6124" >> "${CONF_FILE}"
    fi

    if grep -E "^query\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/query\.server\.port:.*/query.server.port: 6125/g" "${CONF_FILE}"
    else
        echo "query.server.port: 6125" >> "${CONF_FILE}"
    fi

    if [ -n "${FLINK_PROPERTIES}" ]; then
        echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}"
    fi
    envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}"

    echo "config file: " && grep '^[^\n#]' "${CONF_FILE}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start-foreground "$@"
elif [ "$1" = "taskmanager" ]; then
    shift 1
    echo "Starting Task Manager"

    TASK_MANAGER_NUMBER_OF_TASK_SLOTS=${TASK_MANAGER_NUMBER_OF_TASK_SLOTS:-$(grep -c ^processor /proc/cpuinfo)}

    if grep -E "^jobmanager\.rpc\.address:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}/g" "${CONF_FILE}"
    else
        echo "jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}" >> "${CONF_FILE}"
    fi

    if grep -E "^taskmanager\.numberOfTaskSlots:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/taskmanager\.numberOfTaskSlots:.*/taskmanager.numberOfTaskSlots: ${TASK_MANAGER_NUMBER_OF_TASK_SLOTS}/g" "${CONF_FILE}"
    else
        echo "taskmanager.numberOfTaskSlots: ${TASK_MANAGER_NUMBER_OF_TASK_SLOTS}" >> "${CONF_FILE}"
    fi

    if grep -E "^blob\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/blob\.server\.port:.*/blob.server.port: 6124/g" "${CONF_FILE}"
    else
        echo "blob.server.port: 6124" >> "${CONF_FILE}"
    fi

    if grep -E "^query\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/query\.server\.port:.*/query.server.port: 6125/g" "${CONF_FILE}"
    else
        echo "query.server.port: 6125" >> "${CONF_FILE}"
    fi

    if [ -n "${FLINK_PROPERTIES}" ]; then
        echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}"
    fi
    envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}"

    echo "config file: " && grep '^[^\n#]' "${CONF_FILE}"
    exec $(drop_privs_cmd) "$FLINK_HOME/bin/taskmanager.sh" start-foreground "$@"
elif [ "$1" = "jobmanagerHA" ]; then
    shift 1
    echo "Starting job Manager HA"

    if grep -E "^jobmanager\.rpc\.address:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}/g" "${CONF_FILE}"
    else
        echo "jobmanager.rpc.address: ${JOB_MANAGER_RPC_ADDRESS}" >> "${CONF_FILE}"
    fi

    if grep -E "^blob\.server\.port:.*" "${CONF_FILE}" > /dev/null; then
        sed -i -e "s/blob\.server\.port:.*/blob.server.port: 6124/g" "${CONF_FILE}"
    else
        echo "blob.server.port: 6124" >> "${CONF_FILE}"
    fi
"${CONF_FILE}" fi if grep -E "^query\.server\.port:.*" "${CONF_FILE}" > /dev/null; then sed -i -e "s/query\.server\.port:.*/query.server.port: 6125/g" "${CONF_FILE}" else echo "query.server.port: 6125" >> "${CONF_FILE}" fi if [ -n "${FLINK_PROPERTIES}" ]; then echo "${FLINK_PROPERTIES}" >> "${CONF_FILE}" fi envsubst < "${CONF_FILE}" > "${CONF_FILE}.tmp" && mv "${CONF_FILE}.tmp" "${CONF_FILE}" echo "config file: " && grep '^[^\n#]' "${CONF_FILE}" exec $(drop_privs_cmd) "$FLINK_HOME/bin/jobmanager.sh" start-foreground ${master} 8081 "$@" fi exec "$@" ``` ```sh vim build.sh chmod +x build.sh ``` build.sh内容: ```sh #!/bin/sh sudo docker build -t "flink-ha:1.13.5-2.12" . ``` 开始创建镜像: ```sh ./build.sh ``` **(如果报错无法完成创建过程,请使用方法②)** **②直接使用镜像文件创建Flink HA镜像:** 通过<a href ="https://pxh990415-my.sharepoint.com/personal/panxh_pxh990415_onmicrosoft_com/_layouts/52/download.aspx?share=ETFcYDMROARJtxkeG_ZXDboBBZZPJ3bclhuExxrLaNmK-A">该链接</a>下载镜像文件flink-ha-1.13.5-2.12.docker并放到flink-build目录中 ``` [pan@monitor flink-build]$ ll total 617836 -rw-r--r--. 1 pan root 50 Mar 16 17:24 build.sh -rw-r--r--. 1 pan root 5490 Mar 16 17:24 docker-entrypoint.sh -rw-r--r--. 1 pan root 3680 Mar 16 17:24 Dockerfile -rw-r--r--. 1 pan root 632644608 Mar 16 17:26 flink-ha-1.13.5-2.12.docker ``` 然后载入镜像: ```sh [pan@monitor flink-build]$ sudo docker load -i flink-ha-1.13.5-2.12.docker 0b0f2f2f5279: Loading layer 129.1MB/129.1MB 6398d5cccd2c: Loading layer 11.3MB/11.3MB bed676ceab7a: Loading layer 19.31MB/19.31MB 2a807c7d273c: Loading layer 12.31MB/12.31MB 077f1da1f776: Loading layer 3.584kB/3.584kB e2648222da93: Loading layer 108.2MB/108.2MB 8eb98ed999ea: Loading layer 5.458MB/5.458MB 1322f726cb72: Loading layer 2.3MB/2.3MB 0c06ab48594c: Loading layer 3.254MB/3.254MB b43bfb02b0ea: Loading layer 2.048kB/2.048kB 4e6cb678dcca: Loading layer 341.3MB/341.3MB 6d59091f84ce: Loading layer 7.68kB/7.68kB Loaded image: flink-ha:1.13.5-2.12 [pan@monitor flink-build]$ sudo docker image ls REPOSITORY TAG IMAGE ID CREATED SIZE flink-ha 1.13.5-2.12 79740047399e 2 weeks ago 625MB ``` 编排集群: ```sh mkdir docker-flink cd docker-flink vim docker-compose.yml ``` docker-compose.yml内容: ```yaml version: '3' networks: default: name: flink_network services: jobmanager1: image: flink-ha:1.13.5-2.12 restart: always container_name: jobmanager1 hostname: jobmanager1 ports: - 8081:8081 external_links: - zoo1 - zoo2 - zoo3 volumes: - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar command: jobmanagerHA jobmanager2: image: flink-ha:1.13.5-2.12 restart: always container_name: jobmanager2 hostname: jobmanager2 ports: - 8082:8081 external_links: - zoo1 - zoo2 - zoo3 volumes: - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar command: jobmanagerHA taskmanager1: image: flink-ha:1.13.5-2.12 restart: always container_name: taskmanager1 hostname: taskmanager1 depends_on: - jobmanager1 - jobmanager2 links: - jobmanager1 - jobmanager2 external_links: - zoo1 - zoo2 - zoo3 volumes: - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar command: taskmanager taskmanager2: image: flink-ha:1.13.5-2.12 restart: always container_name: taskmanager2 hostname: taskmanager2 depends_on: - jobmanager1 - jobmanager2 links: - jobmanager1 - jobmanager2 
编排集群:

```sh
mkdir docker-flink
cd docker-flink
vim docker-compose.yml
```

docker-compose.yml内容:

```yaml
version: '3'
networks:
  default:
    name: flink_network
services:
  jobmanager1:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: jobmanager1
    hostname: jobmanager1
    ports:
      - 8081:8081
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: jobmanagerHA
  jobmanager2:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: jobmanager2
    hostname: jobmanager2
    ports:
      - 8082:8081
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: jobmanagerHA
  taskmanager1:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: taskmanager1
    hostname: taskmanager1
    depends_on:
      - jobmanager1
      - jobmanager2
    links:
      - jobmanager1
      - jobmanager2
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: taskmanager
  taskmanager2:
    image: flink-ha:1.13.5-2.12
    restart: always
    container_name: taskmanager2
    hostname: taskmanager2
    depends_on:
      - jobmanager1
      - jobmanager2
    links:
      - jobmanager1
      - jobmanager2
    external_links:
      - zoo1
      - zoo2
      - zoo3
    volumes:
      - ./conf/flink-conf.yaml:/opt/flink/conf/flink-conf.yaml
      - ./lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar:/opt/flink/lib/flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
    command: taskmanager
```

创建挂载目录:

```sh
[pan@monitor docker-flink]$ mkdir conf lib
```

编辑配置文件flink-conf.yaml:

```sh
[pan@monitor docker-flink]$ vim conf/flink-conf.yaml
```

flink-conf.yaml内容:

```yaml
################################################################################
#  Licensed to the Apache Software Foundation (ASF) under one
#  or more contributor license agreements.  See the NOTICE file
#  distributed with this work for additional information
#  regarding copyright ownership.  The ASF licenses this file
#  to you under the Apache License, Version 2.0 (the
#  "License"); you may not use this file except in compliance
#  with the License.  You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
#  Unless required by applicable law or agreed to in writing, software
#  distributed under the License is distributed on an "AS IS" BASIS,
#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
#  See the License for the specific language governing permissions and
#  limitations under the License.
################################################################################

#==============================================================================
# Common
#==============================================================================

# The external address of the host on which the JobManager runs and can be
# reached by the TaskManagers and any clients which want to connect. This setting
# is only used in Standalone mode and may be overwritten on the JobManager side
# by specifying the --host <hostname> parameter of the bin/jobmanager.sh executable.
# In high availability mode, if you use the bin/start-cluster.sh script and setup
# the conf/masters file, this will be taken care of automatically. Yarn/Mesos
# automatically configure the host name based on the hostname of the node where the
# JobManager runs.

jobmanager.rpc.address: jobmanager1

# The RPC port where the JobManager is reachable.

jobmanager.rpc.port: 6123

# The total process memory size for the JobManager.
#
# Note this accounts for all memory usage within the JobManager process,
# including JVM metaspace and other overhead.

jobmanager.memory.process.size: 1024m

# The total process memory size for the TaskManager.
#
# Note this accounts for all memory usage within the TaskManager process,
# including JVM metaspace and other overhead.

taskmanager.memory.process.size: 1024m

# To exclude JVM metaspace and overhead, please, use total Flink memory size instead of 'taskmanager.memory.process.size'.
# It is not recommended to set both 'taskmanager.memory.process.size' and Flink memory.
#
# taskmanager.memory.flink.size: 1280m

# The number of task slots that each TaskManager offers. Each slot runs one parallel pipeline.

taskmanager.numberOfTaskSlots: 2

# The parallelism used for programs that did not specify and other parallelism.

parallelism.default: 1

# The default file system scheme and authority.
#
# By default file paths without scheme are interpreted relative to the local
# root file system 'file:///'. Use this to override the default and interpret
# relative paths relative to a different file system,
# for example 'hdfs://mynamenode:12345'
#
# fs.default-scheme

#==============================================================================
# High Availability
#==============================================================================

# The high-availability mode. Possible options are 'NONE' or 'zookeeper'.
#
high-availability: zookeeper

# The path where metadata for master recovery is persisted. While ZooKeeper stores
# the small ground truth for checkpoint and leader election, this location stores
# the larger objects, like persisted dataflow graphs.
#
# Must be a durable file system that is accessible from all nodes
# (like HDFS, S3, Ceph, nfs, ...)
#
high-availability.storageDir: hdfs://namenode:9000/flink/ha/

# The list of ZooKeeper quorum peers that coordinate the high-availability
# setup. This must be a list of the form:
# "host1:clientPort,host2:clientPort,..." (default clientPort: 2181)
#
high-availability.zookeeper.quorum: zoo1:2181,zoo2:2181,zoo3:2181

# ACL options are based on https://zookeeper.apache.org/doc/r3.1.2/zookeeperProgrammers.html#sc_BuiltinACLSchemes
# It can be either "creator" (ZOO_CREATE_ALL_ACL) or "open" (ZOO_OPEN_ACL_UNSAFE)
# The default value is "open" and it can be changed to "creator" if ZK security is enabled
#
# high-availability.zookeeper.client.acl: open

#==============================================================================
# Fault tolerance and checkpointing
#==============================================================================

# The backend that will be used to store operator state checkpoints if
# checkpointing is enabled.
#
# Supported backends are 'jobmanager', 'filesystem', 'rocksdb', or the
# <class-name-of-factory>.
#
state.backend: filesystem

# Directory for checkpoints filesystem, when using any of the default bundled
# state backends.
#
state.checkpoints.dir: hdfs://namenode:9000/flink-checkpoints

# Default target directory for savepoints, optional.
#
# state.savepoints.dir: hdfs://namenode-host:port/flink-savepoints

# Flag to enable/disable incremental checkpoints for backends that
# support incremental checkpoints (like the RocksDB state backend).
#
# state.backend.incremental: false

# The failover strategy, i.e., how the job computation recovers from task failures.
# Only restart tasks that may have been affected by the task failure, which typically includes
# downstream tasks and potentially upstream tasks if their produced data is no longer available for consumption.

jobmanager.execution.failover-strategy: region

#==============================================================================
# Rest & web frontend
#==============================================================================

# The port to which the REST client connects to. If rest.bind-port has
# not been specified, then the server will bind to this port as well.
#
#rest.port: 8081

# The address to which the REST client will connect to
#
#rest.address: 0.0.0.0

# Port range for the REST and web server to bind to.
#
#rest.bind-port: 8080-8090

# The address that the REST & web server binds to
#
#rest.bind-address: 0.0.0.0

# Flag to specify whether job submission is enabled from the web-based
# runtime monitor. Uncomment to disable.
#web.submit.enable: false

#==============================================================================
# Advanced
#==============================================================================

# Override the directories for temporary files. If not specified, the
# system-specific Java temporary directory (java.io.tmpdir property) is taken.
#
# For framework setups on Yarn or Mesos, Flink will automatically pick up the
# containers' temp directories without any need for configuration.
#
# Add a delimited list for multiple directories, using the system directory
# delimiter (colon ':' on unix) or a comma, e.g.:
#     /data1/tmp:/data2/tmp:/data3/tmp
#
# Note: Each directory entry is read from and written to by a different I/O
# thread. You can include the same directory multiple times in order to create
# multiple I/O threads against that directory. This is for example relevant for
# high-throughput RAIDs.
#
# io.tmp.dirs: /tmp

# The classloading resolve order. Possible values are 'child-first' (Flink's default)
# and 'parent-first' (Java's default).
#
# Child first classloading allows users to use different dependency/library
# versions in their application than those in the classpath. Switching back
# to 'parent-first' may help with debugging dependency issues.
#
# classloader.resolve-order: child-first

# The amount of memory going to the network stack. These numbers usually need
# no tuning. Adjusting them may be necessary in case of an "Insufficient number
# of network buffers" error. The default min is 64MB, the default max is 1GB.
#
# taskmanager.memory.network.fraction: 0.1
# taskmanager.memory.network.min: 64mb
# taskmanager.memory.network.max: 1gb

#==============================================================================
# Flink Cluster Security Configuration
#==============================================================================

# Kerberos authentication for various components - Hadoop, ZooKeeper, and connectors -
# may be enabled in four steps:
# 1. configure the local krb5.conf file
# 2. provide Kerberos credentials (either a keytab or a ticket cache w/ kinit)
# 3. make the credentials available to various JAAS login contexts
# 4. configure the connector to use JAAS/SASL

# The below configure how Kerberos credentials are provided. A keytab will be used instead of
# a ticket cache if the keytab path and principal are set.

# security.kerberos.login.use-ticket-cache: true
# security.kerberos.login.keytab: /path/to/kerberos/keytab
# security.kerberos.login.principal: flink-user

# The configuration below defines which JAAS login contexts

# security.kerberos.login.contexts: Client,KafkaClient

#==============================================================================
# ZK Security Configuration
#==============================================================================

# Below configurations are applicable if ZK ensemble is configured for security

# Override below configuration to provide custom ZK service name if configured
# zookeeper.sasl.service-name: zookeeper

# The configuration below must match one of the values set in "security.kerberos.login.contexts"
# zookeeper.sasl.login-context-name: Client

#==============================================================================
# HistoryServer
#==============================================================================

# The HistoryServer is started and stopped via bin/historyserver.sh (start|stop)
# Directory to upload completed jobs to. Add this directory to the list of
# monitored directories of the HistoryServer as well (see below).
#jobmanager.archive.fs.dir: hdfs:///completed-jobs/

# The address under which the web-based HistoryServer listens.
#historyserver.web.address: 0.0.0.0

# The port under which the web-based HistoryServer listens.
#historyserver.web.port: 8082

# Comma separated list of directories to monitor for completed jobs.
#historyserver.archive.fs.dir: hdfs:///completed-jobs/

# Interval in milliseconds for refreshing the monitored directories.
#historyserver.archive.fs.refresh-interval: 10000

blob.server.port: 6124
query.server.port: 6125
```

将hadoop依赖包flink-shaded-hadoop-2-uber-2.8.3-10.0.jar放到lib目录中:

```sh
[pan@monitor docker-flink]$ cd lib/
[pan@monitor lib]$ ll
total 42304
-rw-r--r--. 1 pan root 43317025 Mar 16 17:24 flink-shaded-hadoop-2-uber-2.8.3-10.0.jar
```

### 10、启动Flink HA集群

**解决普通用户使用docker的权限问题:**

- 把普通用户加入到docker组中:`sudo usermod -aG docker pan`(重新登录后生效)
- 修改docker.sock访问权限:`sudo chmod 666 /var/run/docker.sock`

①启动zookeeper容器

```sh
cd docker-zookeeper/
docker-compose up -d
```

②启动hadoop容器

```sh
cd docker-hadoop/
docker-compose up -d
```

默认启动2个DataNode和1个NodeManager。
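hadoop容器的初始化需要一点时间。可以用hdfs dfsadmin确认NameNode已经识别到两个DataNode(bde2020镜像的PATH中一般已包含hadoop命令,以下仅为示意):

```sh
# 在namenode容器内查看HDFS集群状态,Live datanodes应为2
docker exec namenode hdfs dfsadmin -report | grep -A 1 "Live datanodes"
```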
resourcemanager:8088" ``` 注意:不同的datanode要挂载不同的数据目录, 比如datanode1的volumes要设置成./hadoop/dfs/data1:/hadoop/dfs/data, datanode2的volumes要设置成./hadoop/dfs/data2:/hadoop/dfs/data ③启动flink-ha容器 ```sh cd docker-flink/ docker-compose up -d ``` ④验证是否启动成功 ```sh [pan@monitor docker-flink]$ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f1801f2ef035 flink-ha:1.13.5-2.12 "/entrypoint/docker-…" 41 seconds ago Up 36 seconds 6123/tcp, 8081/tcp taskmanager1 d81c77392652 flink-ha:1.13.5-2.12 "/entrypoint/docker-…" 41 seconds ago Up 36 seconds 6123/tcp, 8081/tcp taskmanager2 15bedddbfd49 flink-ha:1.13.5-2.12 "/entrypoint/docker-…" 41 seconds ago Up 36 seconds 6123/tcp, 8081/tcp taskmanager3 15dc634cd676 flink-ha:1.13.5-2.12 "/entrypoint/docker-…" 42 seconds ago Up 38 seconds 6123/tcp, 0.0.0.0:8082->8081/tcp, :::8082->8081/tcp jobmanager2 91a8f0a024c4 flink-ha:1.13.5-2.12 "/entrypoint/docker-…" 42 seconds ago Up 38 seconds 6123/tcp, 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp jobmanager1 8dd0dafefa76 bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up 2 minutes (healthy) 8042/tcp docker-hadoop-nodemanager-1 20c308d43594 bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up About a minute (healthy) 8088/tcp resourcemanager 1fa878977b09 bde2020/hadoop-historyserver:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up 2 minutes (healthy) 8188/tcp historyserver cd07cbea1837 bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up 2 minutes (healthy) 9864/tcp datanode1 09b1beef7d33 bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up 2 minutes (healthy) 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 0.0.0.0:9870->9870/tcp, :::9870->9870/tcp namenode 3c954daf4062 bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up 2 minutes (healthy) 9864/tcp datanode2 24aac3957ad8 bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8 "/entrypoint.sh /run…" 2 minutes ago Up 2 minutes (healthy) 8042/tcp docker-hadoop-nodemanager-2 a60f1554b6bc zookeeper "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2182->2181/tcp, :::2182->2181/tcp zoo2 d858a4026074 zookeeper "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp, 8080/tcp zoo1 dc7f3fa0de62 zookeeper "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 2888/tcp, 3888/tcp, 8080/tcp, 0.0.0.0:2183->2181/tcp, :::2183->2181/tcp zoo3 ``` 访问9870端口和8081/8082端口,查看ui界面 ### 11、docker安装mysql5.7 拉取官方mysql5.7镜像 ```sh docker pull mysql:5.7 ``` 在本地创建mysql的映射目录 ```sh mkdir -p ~/mysql/data ~/mysql/logs ~/mysql/conf ``` 在/root/mysql/conf中创建 *.cnf 文件(叫什么都行) ```sh vim ~/mysql/conf/my.cnf ``` my.cnf内容(开启binlog) ``` [mysqld] server-id = 12345 log-bin = mysql-bin binlog_format = ROW binlog_row_image = FULL expire_logs_days = 10 ``` 创建容器 ```sh docker run \ -d \ --name mysql \ -p 3306:3306 \ --restart unless-stopped \ -v ~/mysql/conf:/etc/mysql/conf.d \ -v ~/mysql/logs:/logs \ -v ~/mysql/data:/var/lib/mysql \ -e MYSQL_ROOT_PASSWORD=root \ mysql:5.7 ``` 建表语句 ```mysql CREATE DATABASE log_db; USE log_db; CREATE TABLE `comment_log` ( `log_id` bigint(20) NOT NULL AUTO_INCREMENT, `ts` bigint(20) NOT NULL, `product_id` varchar(30) NOT NULL, `user_name` varchar(30) NOT NULL, `point` int(11) NOT NULL, PRIMARY KEY (`log_id`) ) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='商品评价日志表'; CREATE TABLE `action_log` ( 
建表语句

```mysql
CREATE DATABASE log_db;

USE log_db;

CREATE TABLE `comment_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `product_id` varchar(30) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `point` int(11) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='商品评价日志表';

CREATE TABLE `action_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `ip` varchar(30) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `action` varchar(10) NOT NULL,
  `status` varchar(10) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='用户行为日志表';

CREATE TABLE `order_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `order_id` bigint(20) NOT NULL,
  `type` varchar(10) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `product_id` varchar(30) NOT NULL,
  `number` int(11) NOT NULL,
  `price` int(11) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='订单日志表';

CREATE TABLE `visit_log` (
  `log_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `ts` bigint(20) NOT NULL,
  `website_id` varchar(30) NOT NULL,
  `user_name` varchar(30) NOT NULL,
  `ip` varchar(30) NOT NULL,
  PRIMARY KEY (`log_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4 COMMENT='站点访问日志表';
```

### 12、docker安装redis

拉取官方redis6.2.6镜像

```sh
docker pull redis:6.2.6
```

在本地创建redis的映射目录

```sh
mkdir -p ~/redis/conf ~/redis/data
```

创建Redis配置文件

```sh
touch ~/redis/conf/redis.conf
```

在redis.conf默认配置内容的基础上修改以下配置(redis.conf可在官网下载redis-6.2.6.tar.gz解压得到):

| **配置项** | **功能** |
| ------------------ | ------------------------------------------------------------ |
| appendonly yes | 启动Redis持久化功能(默认 no,所有信息都存储在内存,重启丢失) |
| requirepass 123456 | 设置密码,本项目设置为123456 |

创建Redis容器并启动

```sh
docker run \
-d \
--name redis \
-p 6379:6379 \
--restart unless-stopped \
-v ~/redis/data:/data \
-v ~/redis/conf/redis.conf:/etc/redis/redis.conf \
redis:6.2.6 \
redis-server /etc/redis/redis.conf
```

### 13、部署前端项目

注意:由于前端使用了百度地图的API,如果不联网是不会显示地图的,而且这部分需要自己去百度地图控制台申请密钥并替换掉本项目的密钥(教程:https://lbsyun.baidu.com/index.php?title=FAQ/obtainAK )

修改密钥

```sh
vim monitor-ui/src/components/action/OnlineMonitorMap.vue
# 找到第205行,将loadBMap("xxxxxxx").then引号中的密钥替换成自己申请的密钥
```

打包项目

```sh
cd monitor-ui/
vim src/api/websocket.js
# 修改let ws = new WebSocket('ws://xxx.xxx.xxx.xxx:8888/websocket')中的IP地址为本机IP地址
npm install
npm run build
sudo mv dist /usr/local/nginx/html/
```

配置nginx

```sh
sudo vim /usr/local/nginx/conf/nginx.conf
```

修改nginx.conf内容:

```
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location / {
        root   /usr/local/nginx/html/dist;
        index  index.html index.htm;
    }
    # (server块的其余内容保持默认即可)
```

重新启动nginx

```sh
cd /usr/local/nginx/sbin/
sudo ./nginx -s reload
```

浏览器访问IP地址(默认为80端口),查看页面是否正常

### 14、部署后端项目

打包项目并后台运行

```sh
cd monitor-backend/
vim src/main/resources/application.properties
# 修改spring.redis.host后面的IP地址为本机IP地址
mvn package
cd target/
nohup java -jar monitor_backend-0.0.1-SNAPSHOT.jar &
```
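后台启动后日志会写入当前目录的nohup.out,可以先确认服务已经监听8888端口(前端websocket连的就是它;命令仅为示意):

```sh
# 查看启动日志
tail -n 50 nohup.out
# 确认8888端口已被监听
ss -lnt | grep 8888
```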
### 15、提交flink任务

将flink任务打成jar包

```sh
cd flink-tasks/
mvn package
```

在UI界面中(端口8081或8082)或命令行提交target中的jar包并启动任务(命令行提交的示例见参数表之后)

**必填参数:**

**-ip 本机IP地址 (即mysql和redis的IP地址)**

选填参数:(有默认值,可根据实际情况进行调整)

| jar包名 | 主类名 | 可选参数 |
| ------------------------------------------------------ | -------------- | ------------------------------------------------ |
| comment_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar | CommentMonitor | -w 滚动窗口的时间,即好评/差评数统计间隔 默认5秒 |
|  |  | -nc 恶意差评警告的数量阈值 默认10个 |
|  |  | -nt 恶意差评警告的时间范围 默认5秒 |
|  |  | -pc 刷好评警告的数量阈值 默认10个 |
|  |  | -pt 刷好评警告的时间范围 默认5秒 |

| jar包名 | 主类名 | 可选参数 |
| ---------------------------------------------------- | ------------ | --------------------------------------------------- |
| visit_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar | VisitMonitor | -w 滚动窗口的时间,即pv/uv/ip访问数统计间隔 默认2秒 |
|  |  | -pc pv监控警告的数量阈值 默认26次 |
|  |  | -pt pv监控警告的时间范围 默认2秒 |
|  |  | -uc 用户频繁访问警告的数量阈值 默认5次 |
|  |  | -ut 用户频繁访问警告的时间范围 默认2秒 |
|  |  | -ic ip频繁访问警告的数量阈值 默认5次 |
|  |  | -it ip频繁访问警告的时间范围 默认2秒 |

| jar包名 | 主类名 | 可选参数 |
| ---------------------------------------------------- | ------------ | ----------------------------------------------------------- |
| order_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar | OrderMonitor | -w 滚动窗口的时间,即成功、过期和失败订单数统计间隔 默认1秒 |
|  |  | -c 刷单警告的数量阈值 默认7个 |
|  |  | -t 刷单警告的时间范围 默认5秒 |

| jar包名 | 主类名 | 可选参数 |
| ----------------------------------------------------- | ------------- | -------------------------------------- |
| action_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar | ActionMonitor | -ot 在线人数地图更新的时间间隔 默认5秒 |
|  |  | -lc 频繁登录失败警告的次数阈值 默认4次 |
|  |  | -lt 频繁登录失败警告的时间范围 默认5秒 |
|  |  | -rc 频繁注册警告的次数阈值 默认14次 |
|  |  | -rt 频繁注册警告的时间范围 默认5秒 |
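如果偏好命令行,也可以借助jobmanager容器里的flink客户端提交(以下命令仅为示意:jar路径、主类名、-ip取值请按实际情况替换,主类若带包名需写全):

```sh
# 把打好的jar拷进jobmanager1容器,再用flink run以分离模式提交
docker cp target/comment_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar jobmanager1:/tmp/
docker exec jobmanager1 flink run -d -c CommentMonitor \
    /tmp/comment_monitor-1.0-SNAPSHOT-jar-with-dependencies.jar -ip 本机IP
```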
### 16、随机生成模拟数据

打包项目

```sh
cd data-generate/
mvn package
```

将target中的jar包都移动到generator-jars文件夹中

```sh
[pan@monitor generator-jars]$ ll
total 4944
-rw-r--r--. 1 pan root 1008860 Apr  3 17:26 action_log_generator-1.0-SNAPSHOT.jar
-rw-r--r--. 1 pan root 1008351 Apr  3 17:26 comment_log_generator-1.0-SNAPSHOT.jar
-rwxr-xr-x. 1 pan root    3844 Apr  3 17:26 data-generator.sh
-rw-r--r--. 1 pan root 1007247 Apr  3 17:26 data_reset-1.0-SNAPSHOT.jar
-rw-r--r--. 1 pan root 1012416 Apr  3 17:26 order_log_generator-1.0-SNAPSHOT.jar
-rw-r--r--. 1 pan root 1008916 Apr  3 17:26 visit_log_generator-1.0-SNAPSHOT.jar
```

编写批量运行和停止jar包的shell脚本data-generator.sh:

```sh
#!/bin/bash
# 脚本使用了[[ ]]等bash语法,shebang用bash而不是sh

export COMMENT=comment_log_generator-1.0-SNAPSHOT.jar
export VISIT=visit_log_generator-1.0-SNAPSHOT.jar
export ORDER=order_log_generator-1.0-SNAPSHOT.jar
export ACTION=action_log_generator-1.0-SNAPSHOT.jar
export RESET=data_reset-1.0-SNAPSHOT.jar
export IP=$2
export USERNAME=$3
# 注意:不要用PWD作变量名(它是shell内置的当前目录变量),这里改名为PASSWD
export PASSWD=$4

case "$1" in
start)
    ## 检查参数是否填写
    if [[ -z $2 || -z $3 || -z $4 ]]
    then
        echo "参数不能为空,格式:./data-generator.sh start ip username password"
        exit 1
    else
        echo "database ip: $IP"
        echo "database username: $USERNAME"
        echo "database password: $PASSWD"
    fi
    ## 启动COMMENT
    echo "--------COMMENT 开始启动--------------"
    nohup java -jar $COMMENT $IP $USERNAME $PASSWD >/dev/null 2>&1 &
    COMMENT_pid=`sudo lsof $COMMENT | grep "mem" | awk '{print $2}'`
    until [ -n "$COMMENT_pid" ]
    do
        COMMENT_pid=`sudo lsof $COMMENT | grep "mem" | awk '{print $2}'`
    done
    echo "COMMENT pid is $COMMENT_pid"
    echo "--------COMMENT 启动成功--------------"
    ## 启动VISIT
    echo "--------VISIT 开始启动--------------"
    nohup java -jar $VISIT $IP $USERNAME $PASSWD >/dev/null 2>&1 &
    VISIT_pid=`sudo lsof $VISIT | grep "mem" | awk '{print $2}'`
    until [ -n "$VISIT_pid" ]
    do
        VISIT_pid=`sudo lsof $VISIT | grep "mem" | awk '{print $2}'`
    done
    echo "VISIT pid is $VISIT_pid"
    echo "--------VISIT 启动成功--------------"
    ## 启动ORDER
    echo "--------ORDER 开始启动--------------"
    nohup java -jar $ORDER $IP $USERNAME $PASSWD >/dev/null 2>&1 &
    ORDER_pid=`sudo lsof $ORDER | grep "mem" | awk '{print $2}'`
    until [ -n "$ORDER_pid" ]
    do
        ORDER_pid=`sudo lsof $ORDER | grep "mem" | awk '{print $2}'`
    done
    echo "ORDER pid is $ORDER_pid"
    echo "--------ORDER 启动成功--------------"
    ## 启动ACTION
    echo "--------ACTION 开始启动--------------"
    nohup java -jar $ACTION $IP $USERNAME $PASSWD >/dev/null 2>&1 &
    ACTION_pid=`sudo lsof $ACTION | grep "mem" | awk '{print $2}'`
    until [ -n "$ACTION_pid" ]
    do
        ACTION_pid=`sudo lsof $ACTION | grep "mem" | awk '{print $2}'`
    done
    echo "ACTION pid is $ACTION_pid"
    echo "--------ACTION 启动成功--------------"
    echo "===startAll success==="
    ;;
stop)
    P_ID=`sudo lsof $COMMENT | grep "mem" | awk '{print $2}'`
    if [ "$P_ID" == "" ]; then
        echo "===COMMENT process not exists or stop success==="
    else
        kill -9 $P_ID
        echo "COMMENT killed success"
    fi
    P_ID=`sudo lsof $VISIT | grep "mem" | awk '{print $2}'`
    if [ "$P_ID" == "" ]; then
        echo "===VISIT process not exists or stop success==="
    else
        kill -9 $P_ID
        echo "VISIT killed success"
    fi
    P_ID=`sudo lsof $ORDER | grep "mem" | awk '{print $2}'`
    if [ "$P_ID" == "" ]; then
        echo "===ORDER process not exists or stop success==="
    else
        kill -9 $P_ID
        echo "ORDER killed success"
    fi
    P_ID=`sudo lsof $ACTION | grep "mem" | awk '{print $2}'`
    if [ "$P_ID" == "" ]; then
        echo "===ACTION process not exists or stop success==="
    else
        kill -9 $P_ID
        echo "ACTION killed success"
    fi
    echo "===stopAll success==="
    ;;
reset)
    echo "--------开始清除数据--------------"
    java -jar $RESET $IP $USERNAME $PASSWD >/dev/null 2>&1
    echo "--------数据清除成功--------------"
esac
exit 0
```

安装lsof命令

```sh
sudo yum install -y lsof
```

开始生成模拟数据

```sh
cd generator-jars
chmod +x data-generator.sh
# 开始生成 参数:数据库IP地址、用户名和密码
./data-generator.sh start 本机IP root root
# 停止生成
./data-generator.sh stop
```

生成的数据将不断写入mysql数据库中相应的表中。**此时,若系统整条链路都跑通,前端界面就能显示数据了**

商品评价日志表comment_log:

| 字段名 | 数据类型 | 主键 | 其他 | 描述 |
| ---------- | ----------- | ---- | ---------- | ---------- |
| log_id | bigint(20) | yes | 非空、自增 | 日志ID |
| ts | bigint(20) | no | 非空 | 评价时间戳 |
| product_id | varchar(30) | no | 非空 | 评价商品ID |
| user_name | varchar(30) | no | 非空 | 用户名 |
| point | int(11) | no | 非空 | 评价分数 |

站点访问日志表visit_log:

| 字段名 | 数据类型 | 主键 | 其他 | 描述 |
| ---------- | ----------- | ---- | ---------- | ---------- |
| log_id | bigint(20) | yes | 非空、自增 | 日志ID |
| ts | bigint(20) | no | 非空 | 访问时间戳 |
| website_id | varchar(30) | no | 非空 | 访问站点ID |
| user_name | varchar(30) | no | 非空 | 用户名 |
| ip | varchar(30) | no | 非空 | IP地址 |

订单日志表order_log:

| 字段名 | 数据类型 | 主键 | 其他 | 描述 |
| ---------- | ----------- | ---- | ---------- | -------------------------- |
| log_id | bigint(20) | yes | 非空、自增 | 日志ID |
| ts | bigint(20) | no | 非空 | 日志时间戳 |
| order_id | bigint(20) | no | 非空 | 订单ID |
| type | varchar(10) | no | 非空 | 日志类型(创建、支付、失败) |
| user_name | varchar(30) | no | 非空 | 用户名 |
| product_id | varchar(30) | no | 非空 | 商品ID |
| number | int(11) | no | 非空 | 购买数量 |
| price | int(11) | no | 非空 | 购买价格 |

用户行为日志表action_log:

| 字段名 | 数据类型 | 主键 | 其他 | 描述 |
| --------- | ----------- | ---- | ---------- | ------------------------ |
| log_id | bigint(20) | yes | 非空、自增 | 日志ID |
| ts | bigint(20) | no | 非空 | 日志时间戳 |
| ip | varchar(30) | no | 非空 | IP地址 |
| user_name | varchar(30) | no | 非空 | 用户名 |
| action | varchar(10) | no | 非空 | 用户操作类型(登录、注册) |
| status | varchar(10) | no | 非空 | 操作结果(成功、失败) |

### 17、清理数据data reset

系统连续运行多天后,数据量会越来越大,造成性能下降,可以通过清除数据库中的日志数据来解决。

数据清除命令:

```sh
cd generator-jars
./data-generator.sh reset 本机IP root root
```
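如果希望定期自动清理,也可以把reset挂到crontab里(仅为示意,脚本路径与数据库凭据请按实际情况替换):

```sh
# crontab -e 后加入:每周日凌晨3点清理一次模拟数据
0 3 * * 0 cd /home/pan/generator-jars && ./data-generator.sh reset 本机IP root root
```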

### 简介

《基于flink的电商业务实时监控系统》:本项目基于flink CDC、CEP等技术实现对电商业务数据的实时监控,主要分为商品评价监控、站点访问监控、订单实时监控和用户行为监控四大模块;UI界面利用各种图表对监控结果进行直观的展示。此外,本项目还利用docker简化了环境搭建过程。