I'm hhg, a graduate student from Nanjing, China.

  • 🏫 School: Hohai University
  • 🌱 Learning: I'm currently learning design patterns, LeetCode, distributed systems, middleware, and so on.
  • 💓 How to reach me: VX (WeChat)
  • 📚 My blog: https://hhgyyds.blog.csdn.net/
  • 💼 Professional skills: my dream

1-1: Environment Description

  • Nacos 2.0.3
  • Nginx latest
  • Seata 1.4.2
  • MySQL 5.7
  • Redis latest
  • SentinelDashboard 1.7.2

1-2: Directory Tree

├── mysql
│   ├── config
│   └── data
│       ├── db2019
│       ├── mysql
│       ├── nacos_devtest
│       ├── performance_schema
│       ├── ry@002dcloud [error opening dir]
│       ├── ry@002dconfig [error opening dir]
│       ├── ry@002dseata [error opening dir]
│       ├── seata
│       ├── seata@002dserver
│       ├── seata_account
│       ├── seata_order
│       ├── seata_storage
│       └── sys
├── nacos
│   ├── cluster-logs
│   │   ├── nacos1
│   │   ├── nacos2
│   │   └── nacos3
│   ├── env
│   └── init.d
├── nginx
│   ├── config
│   ├── data
│   └── log
├── redis
│   └── data
└── seata
    └── config

1-3: Create a Network for the Services

In my experience, a deployment is rarely finished in one pass, and every time you recreate the containers with docker-compose their IPs may change. How do we avoid this problem? We simply assign a fixed IP to each container in the compose file. For example:

version: '2'
services:
   nginx:
      image: nginx:1.13.12
      container_name: nginx
      restart: always
      networks:
         extnetwork:
            ipv4_address: 172.19.0.2
 
networks:
   extnetwork:
      ipam:
         config:
         - subnet: 172.19.0.0/16
           gateway: 172.19.0.1

Here we construct an nginx service, create the network extnetwork with subnet 172.19.0.0/16, and allocate 172.19.0.2 to nginx.
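As a quick sanity check that a pinned address actually falls inside the declared subnet, here is a small bash sketch (the helper `ip_in_subnet16` is my own illustration and only handles /16 masks like the one above):

```shell
#!/bin/bash
# Check whether an IPv4 address lies inside a /16 subnet by
# comparing the first two octets (sufficient for a /16 mask).
ip_in_subnet16() {
  ip=$1; subnet=$2              # e.g. 172.19.0.2  172.19.0.0/16
  net=${subnet%/*}              # drop the /16 suffix
  [ "${ip%.*.*}" = "${net%.*.*}" ]
}

ip_in_subnet16 172.19.0.2 172.19.0.0/16 && echo "172.19.0.2 is inside 172.19.0.0/16"
ip_in_subnet16 172.20.0.2 172.19.0.0/16 || echo "172.20.0.2 is outside 172.19.0.0/16"
```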

1-4: MySQL 5.7

The Nacos cluster deployment depends on MySQL, so we deploy MySQL first.

  • docker-compose.yml
  mysql:
    container_name: royi-mysql
    # the MySQL 5.7 image
    image: mysql:5.7
    environment:
      # password for the MySQL root user
      MYSQL_ROOT_PASSWORD: root
      TZ: Asia/Shanghai
    command:
      --default-authentication-plugin=mysql_native_password
      --character-set-server=utf8mb4
      --collation-server=utf8mb4_general_ci
      --explicit_defaults_for_timestamp=true
      --lower_case_table_names=1
      --max_allowed_packet=128M
    volumes:
      # MySQL data files
      - ./mysql/data:/var/lib/mysql
      # MySQL configuration files
      - ./mysql/config:/etc/mysql/conf.d
    ports:
      - "3306:3306"
    networks:
      ruoyinet:
        ipv4_address: 172.1.0.8
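The `./mysql/config` mount maps to `/etc/mysql/conf.d` inside the container, so settings can also live in a `.cnf` file there instead of `command:` flags. A minimal sketch (the file name `my.cnf` and the idea of moving the flags into it are my own; the values mirror the command above):

```ini
[mysqld]
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
explicit_defaults_for_timestamp=true
lower_case_table_names=1
max_allowed_packet=128M
```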

1-5: Nacos 2.0.3 Cluster Deployment

The main step is deploying the Nacos service; here I use docker-compose to do it.

  • docker-compose.yml
  nacos1:
    hostname: nacos1
    container_name: royi-nacos1
    image: nacos/nacos-server:latest
    volumes:
      # for MySQL 8, the MySQL driver plugin would need to be mounted here
      #- ./nacos/plugins/mysql/:/home/nacos/plugins/mysql/
      # expose the log files on the host
      - ./nacos/cluster-logs/nacos1:/home/nacos/logs
      # expose the configuration file on the host
      - ./nacos/init.d/custom.properties:/home/nacos/init.d/custom.properties
    
    environment:                       # environment variables, equivalent to -e in docker run
      - JVM_XMS=512m
      - JVM_XMX=512m
      - JVM_XMN=128m
      - TZ=Asia/Shanghai
      #- MODE=standalone
    ports:
      - "8848:8848"
      - "9848:9848"
      - "9555:9555"
    env_file:
        # cluster configuration file
      - ./nacos/env/nacos-hostname.env
    #restart: always
    depends_on:
      - mysql
    networks:
      ruoyinet:
        ipv4_address: 172.1.0.7
  nacos2:
    hostname: nacos2
    image: nacos/nacos-server:latest
    container_name: royi-nacos2
    volumes:
      #- ./nacos/plugins/mysql/:/home/nacos/plugins/mysql/
      - ./nacos/cluster-logs/nacos2:/home/nacos/logs
      - ./nacos/init.d/custom.properties:/home/nacos/init.d/custom.properties
    environment:                        # environment variables, equivalent to -e in docker run
      - JVM_XMS=512m
      - JVM_XMX=512m
      - JVM_XMN=128m
      - TZ=Asia/Shanghai
    ports:
      - "8849:8848"
    env_file:
      - ./nacos/env/nacos-hostname.env
    #restart: always
    depends_on:
      - mysql
    networks:
      ruoyinet:
        ipv4_address: 172.1.0.6
  nacos3:
    hostname: nacos3
    image: nacos/nacos-server:latest
    container_name: royi-nacos3
    volumes:
      #- ./nacos/plugins/mysql/:/home/nacos/plugins/mysql/
      - ./nacos/cluster-logs/nacos3:/home/nacos/logs
      - ./nacos/init.d/custom.properties:/home/nacos/init.d/custom.properties
    environment:                      # environment variables, equivalent to -e in docker run
      - JVM_XMS=512m
      - JVM_XMX=512m
      - JVM_XMN=128m
      - TZ=Asia/Shanghai
    ports:
      - "8850:8848"
    env_file:
      - ./nacos/env/nacos-hostname.env
    #restart: always
    depends_on:
      - mysql
    networks:
      ruoyinet:
        ipv4_address: 172.1.0.5
  • custom.properties
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000

management.endpoints.web.exposure.include=*

management.metrics.export.elastic.enabled=false
management.metrics.export.influx.enabled=false

server.tomcat.accesslog.enabled=true
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i

server.tomcat.basedir=

nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**

nacos.core.auth.system.type=nacos
nacos.core.auth.enabled=false
nacos.core.auth.default.token.expire.seconds=18000
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
nacos.core.auth.caching.enabled=true
nacos.core.auth.enable.userAgentAuthWhite=false
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security

nacos.istio.mcp.server.enabled=false
  • nacos-hostname.env
#nacos dev env
PREFER_HOST_MODE=hostname
# Docker container hostnames; these can be replaced with IPs
NACOS_SERVERS=nacos1:8848 nacos2:8848 nacos3:8848
MYSQL_SERVICE_HOST=mysql
# name of the database that stores the Nacos data
MYSQL_SERVICE_DB_NAME=ry-config
# MySQL port
MYSQL_SERVICE_PORT=3306
# MySQL username
MYSQL_SERVICE_USER=root
# MySQL password
MYSQL_SERVICE_PASSWORD=root
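As the comment notes, the hostnames can be swapped for the fixed IPs assigned earlier. A variant of the same file using the IPs from this guide (this is my own rewrite, not from the original setup):

```ini
PREFER_HOST_MODE=ip
NACOS_SERVERS=172.1.0.7:8848 172.1.0.6:8848 172.1.0.5:8848
MYSQL_SERVICE_HOST=172.1.0.8
MYSQL_SERVICE_DB_NAME=ry-config
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_USER=root
MYSQL_SERVICE_PASSWORD=root
```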
  • ry_config_2021xxxx.sql
    Create the database ry-config and import the SQL file.
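One way to perform this step from the MySQL client (a sketch; replace the dump path with wherever your `ry_config_2021xxxx.sql` file actually lives):

```sql
CREATE DATABASE IF NOT EXISTS `ry-config` DEFAULT CHARACTER SET utf8mb4;
USE `ry-config`;
SOURCE /path/to/ry_config_2021xxxx.sql;
```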

Success:
(screenshot omitted)

1-6: Nginx Reverse Proxy for the Nacos Cluster

As we know, Nginx can handle load balancing. Instead of exposing the Nacos ports to users, we only need to expose the Nginx ports. All we have to do is modify nginx.conf and restart the container.

  • docker-compose.yml
  nginx:
    #restart: always
    image: nginx:latest
    container_name: royi-nginx
    ports:
      - "8881:8881"
      - "9881:9881"
      - "443:443"
    environment:
      TZ: Asia/Shanghai
    volumes:
      - ./nginx/config/nginx.conf:/etc/nginx/nginx.conf
      - ./nginx/data/:/usr/share/nginx/html/
      - ./nginx/log/:/var/log/nginx/
    networks:
      ruoyinet:
        ipv4_address: 172.1.0.4
  • nginx.conf
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
	worker_connections 768;
	# multi_accept on;
}

stream {
    upstream nacosGrpc {
      server  172.1.0.7:9848 weight=10; 
      server  172.1.0.6:9848 weight=10; 
      server  172.1.0.5:9848 weight=10; 
    }

    server {
        listen 9881;     
        proxy_pass nacosGrpc;
    }
}

http {

	##
	# Basic Settings
	##

	sendfile on;
	tcp_nopush on;
	tcp_nodelay on;
	keepalive_timeout 65;
	types_hash_max_size 2048;
	# server_tokens off;

	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	##
	# SSL Settings
	##

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
	ssl_prefer_server_ciphers on;

	##
	# Logging Settings
	##

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	##
	# Gzip Settings
	##

	gzip on;

	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    upstream nacos { 
      server  172.1.0.7:8848 weight=10; 
      server  172.1.0.6:8848 weight=10; 
      server  172.1.0.5:8848 weight=10; 
    } 
    server{ 
      listen 8881; 
      server_name  localhost; 
      location / { 
          proxy_pass         http://nacos; 
      } 
    }

	##
	# Virtual Host Configs
	##

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Tips:

  • The stream block proxies the Nacos gRPC port (9848). You must include this config; otherwise the RuoYi modules will fail with connection errors.
  • The upstream IPs must be the ones we allocated to each Nacos container; the services communicate with each other over the Docker network ruoyinet.

Success:
We can open localhost:8881/nacos in Chrome to access Nacos. (screenshot omitted)

1-7: Redis

  • docker-compose.yml
  redis:
     image: redis:latest
     #restart: always
     container_name: ruoyi-redis
     ports:
       - "6379:6379"
     environment:
       TZ: Asia/Shanghai
     volumes:
       - ./redis/redis.conf:/etc/redis/redis.conf 
       - ./redis/data:/data      
     command: redis-server /etc/redis/redis.conf 
     privileged: true
     networks:
       ruoyinet:
         ipv4_address: 172.1.0.3
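Note that the compose file mounts `./redis/redis.conf`, which does not appear in the directory tree above, so it must be created first. A minimal sketch (every value here is my assumption, not from the original setup):

```ini
# minimal redis.conf for this deployment (assumed values)
bind 0.0.0.0
port 6379
appendonly yes
```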

1-8: Seata Deployment Based on Nacos

  • docker-compose.yml
  seata-server-1.4.2:
    image: seataio/seata-server:1.4.2
    container_name: ruoyi-seata
    hostname: seata-server
    ports:
      - "8091:8091"
    environment:
      # port the Seata server listens on
      - SEATA_PORT=8091
      # IP registered with Nacos; clients will reach the Seata server through it.
      # Mind the difference between public and internal IPs.
      - SEATA_IP=127.0.0.1
      - SEATA_CONFIG_NAME=file:/root/seata-config/registry
      - TZ=Asia/Shanghai
    volumes:
    # registry.conf points at the Nacos config center, so only registry.conf needs to be placed in ./seata/config
      - "./seata/config:/root/seata-config"
#    network_mode: "host"
  • registry.conf
registry {
  type = "nacos"
  
  nacos {
  # alias under which the Seata server registers with Nacos; clients call the service by this name
    application = "seata-server"
  # set the Nacos host and port according to your environment
    serverAddr = "172.1.0.4:8881"
  # the namespace created in Nacos
    namespace = "2b19e55b-6325-4d48-87f1-7958f86d0053"
    group = "SEATA_GROUP"
    cluster = "default"
    username = "nacos"
    password = "nacos"
  }
}

config {
  type = "nacos"
  
  nacos {
    # set the Nacos host and port according to your environment
    serverAddr = "172.1.0.4:8881"
    # the namespace created in Nacos
    namespace = "2b19e55b-6325-4d48-87f1-7958f86d0053"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
  # since v1.4.2, all configuration can be read from a single Nacos dataId; just add one dataId entry
    #dataId: "seataServer.properties"
  }
}
  • config.txt
service.vgroupMapping.ruoyi-system-group=default
store.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://172.1.0.8:3306/ry-seata?useUnicode=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

Reference: the official Seata documentation on Nacos-based deployment. However, the seataServer.properties method failed for me, so I used a shell script to work around the problem.

  • shell script
#!/bin/bash
# Copyright 1999-2019 Seata.io Group.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

while getopts ":h:p:g:t:u:w:" opt
do
  case $opt in
  h)
    host=$OPTARG
    ;;
  p)
    port=$OPTARG
    ;;
  g)
    group=$OPTARG
    ;;
  t)
    tenant=$OPTARG
    ;;
  u)
    username=$OPTARG
    ;;
  w)
    password=$OPTARG
    ;;
  ?)
    echo " USAGE OPTION: $0 [-h host] [-p port] [-g group] [-t tenant] [-u username] [-w password] "
    exit 1
    ;;
  esac
done

if [ -z ${host} ]; then
    host=localhost
fi
if [ -z ${port} ]; then
    port=8848
fi
if [ -z ${group} ]; then
    group="SEATA_GROUP"
fi
if [ -z ${tenant} ]; then
    tenant=""
fi
if [ -z ${username} ]; then
    username=""
fi
if [ -z ${password} ]; then
    password=""
fi

nacosAddr=$host:$port
contentType="content-type:application/json;charset=UTF-8"

echo "set nacosAddr=$nacosAddr"
echo "set group=$group"

urlencode() {
  length="${#1}"
  i=0
  while [ $length -gt $i ]; do
    char="${1:$i:1}"
    case $char in
    [a-zA-Z0-9.~_-]) printf '%s' "$char" ;;
    *) printf '%%%02X' "'$char" ;;
    esac
    i=`expr $i + 1`
  done
}

failCount=0
tempLog=$(mktemp -u)
function addConfig() {
  dataId=`urlencode $1`
  content=`urlencode $2`
  curl -X POST -H "${contentType}" "http://$nacosAddr/nacos/v1/cs/configs?dataId=$dataId&group=$group&content=$content&tenant=$tenant&username=$username&password=$password" >"${tempLog}" 2>/dev/null
  if [ -z $(cat "${tempLog}") ]; then
    echo " Please check the cluster status. "
    exit 1
  fi
  if [ "$(cat "${tempLog}")" == "true" ]; then
    echo "Set $1=$2 successfully "
  else
    echo "Set $1=$2 failure "
    failCount=`expr $failCount + 1`
  fi
}

count=0
COMMENT_START="#"
for line in $(cat $(dirname "$PWD")/config.txt | sed s/[[:space:]]//g); do
    if [[ "$line" =~ ^"${COMMENT_START}".*  ]]; then
      continue
    fi
    count=`expr $count + 1`
	  key=${line%%=*}
    value=${line#*=}
	  addConfig "${key}" "${value}"
done

echo "========================================================================="
echo " Complete initialization parameters,  total-count:$count ,  failure-count:$failCount "
echo "========================================================================="

if [ ${failCount} -eq 0 ]; then
	echo " Init nacos config finished, please start seata-server. "
else
	echo " init nacos config fail. "
fi
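The urlencode helper in the script can be exercised on its own; here is a standalone bash copy (slightly hardened with quoted printf arguments):

```shell
#!/bin/bash
# Standalone copy of the script's urlencode helper.
urlencode() {
  length="${#1}"
  i=0
  while [ $length -gt $i ]; do
    char="${1:$i:1}"
    case $char in
    [a-zA-Z0-9.~_-]) printf '%s' "$char" ;;   # unreserved characters pass through
    *) printf '%%%02X' "'$char" ;;            # everything else becomes %XX
    esac
    i=$((i + 1))
  done
}

urlencode "store.db.url=jdbc:mysql://x"; echo   # prints store.db.url%3Djdbc%3Amysql%3A%2F%2Fx
```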

Then run the following command to import config.txt (-g is the Nacos group, -t is the namespace ID):

sh nacos-config.sh -h localhost -p 8848 -g SEATA_GROUP -t 8f86ea8c-2a39-4728-bade-4af9ebece539 -u nacos -w nacos

Success:
(screenshot omitted)

1-9: Sentinel Deployment

  • download sentinel-dashboard-1.7.2.jar
  • use a Dockerfile to build the image.

Dockerfile

FROM openjdk:8
# switch the apt sources to the Aliyun mirrors
RUN sed -i "s/archive.ubuntu./mirrors.aliyun./g" /etc/apt/sources.list
RUN sed -i "s/deb.debian.org/mirrors.aliyun.com/g" /etc/apt/sources.list
RUN sed -i "s/security.debian.org/mirrors.aliyun.com\/debian-security/g" /etc/apt/sources.list

# copy the jar from the build context into the image (COPY would work as well)
ADD sentinel-dashboard-1.7.2.jar sentinel-dashboard-1.7.2.jar

EXPOSE 8080

# program and arguments to run at container start: <ENTRYPOINT> "<CMD>"
ENTRYPOINT ["java","-jar","sentinel-dashboard-1.7.2.jar"]
Build the image and verify it:

docker build -t sentinel-dashboard-1.7.2 .

REPOSITORY                 TAG                  IMAGE ID       CREATED         SIZE
sentinel-dashboard-1.7.2   latest               442c56e77ebe   2 weeks ago     547MB
  • docker-compose.yml
  sentinel-dashboard-1.7.2:
    image: sentinel-dashboard-1.7.2
    container_name: ruoyi-sentinel-dashboard
    environment:
      JAVA_OPTS: "-Dserver.port=8080 -Dcsp.sentinel.dashboard.server=localhost:8080 -Dproject.name=sentinel-dashboard -Djava.security.egd=file:/dev/./urandom -Dcsp.sentinel.api.port=8719"
      TZ: Asia/Shanghai
    ports: # quote the mappings to avoid port-parsing errors; 8080 is the EXPOSEd port in the Dockerfile
      - "58080:8080"
      - "8719:8719"
    volumes:
      - ./sentinel/logs:/root/logs
    networks:
      ruoyinet:
        ipv4_address: 172.1.0.9