Fully Distributed Deployment of Big Data Services - Other Components (Alibaba Cloud Edition)

ZooKeeper

Installation

Official site
Extract

cd /export/server/
tar -zxvf /export/server/apache-zookeeper-3.9.3-bin.tar.gz -C /export/server/

Symlink

ln -s /export/server/apache-zookeeper-3.9.3-bin /export/server/zookeeper

Configuration

cd /export/server/zookeeper/
mkdir zkData

myid

cd zkData
vim myid

Write a single number in it; on node1, write 1.

zoo.cfg

cd /export/server/zookeeper/conf/
cp zoo_sample.cfg zoo.cfg
vim zoo.cfg

Change the dataDir property to:

dataDir=/export/server/zookeeper/zkData

and append the following at the end:

#########cluster#########
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888

Then distribute it to the other nodes (port 2888 carries follower-to-leader traffic; 3888 is used for leader election):

xsync /export/server/zookeeper /export/server/apache-zookeeper-3.9.3-bin

Then, on node2 and node3, change the number in myid to 2 and 3 respectively (or script it as sketched below):

vim /export/server/zookeeper/zkData/myid
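
Since xsync/xcall already rely on passwordless SSH for the hadoop user, the same trick covers the myid edits; a minimal sketch, assuming the node1/node2/node3 hostnames used throughout this guide:

# Write each node's myid in one pass (node1 already holds 1)
for i in 2 3; do
  ssh node$i "echo $i > /export/server/zookeeper/zkData/myid"
done
# Confirm all three values
for i in 1 2 3; do
  echo -n "node$i: "
  ssh node$i "cat /export/server/zookeeper/zkData/myid"
done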

Start

xcall /export/server/zookeeper/bin/zkServer.sh start


# Check the status; there should be one leader and the rest followers
xcall /export/server/zookeeper/bin/zkServer.sh status
# Stop
xcall /export/server/zookeeper/bin/zkServer.sh stop
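
Beyond zkServer.sh status, a quick way to prove the quorum actually serves reads is a one-shot zkCli.sh call against any node; a minimal check, assuming the default client port 2181:

# List the root znode; any reply means the ensemble is serving requests
/export/server/zookeeper/bin/zkCli.sh -server node1:2181 ls /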

Kafka

Official site

Installation

Extract

cd /export/server
tar -zxvf /export/server/kafka_2.13-3.9.1.tgz -C /export/server/

Symlink

ln -s /export/server/kafka_2.13-3.9.1 /export/server/kafka

Configuration

server.properties

cd /export/server/kafka/config/
vim server.properties

Add:

advertised.listeners=PLAINTEXT://node1:9092

Replace:

log.dirs=/export/server/kafka/datas

Replace:

# Comma-separated list of ZooKeeper addresses Kafka connects to
# Each entry has the form hostname:port
# node1, node2 and node3 are the three ZooKeeper node hosts
# /kafka is Kafka's chroot path inside ZooKeeper; it isolates Kafka's metadata so that multiple applications (e.g. Kafka, HBase, Hadoop) sharing one ZooKeeper cluster do not collide.
zookeeper.connect=node1:2181,node2:2181,node3:2181/kafka

After saving and exiting, distribute it:

xsync /export/server/kafka /export/server/kafka_2.13-3.9.1

Then, on the other nodes, adjust broker.id and advertised.listeners:

cd /export/server/kafka/config/
vim server.properties

On node2: broker.id=1, and change advertised.listeners to node2
On node3: broker.id=2, and change advertised.listeners to node3
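
Editing both properties by hand works, but the change is easy to script over SSH; a hedged sketch, assuming server.properties on node2/node3 is still exactly as distributed from node1 (broker.id=0 and advertised.listeners pointing at node1):

ssh node2 "sed -i 's/^broker.id=.*/broker.id=1/; s/node1:9092/node2:9092/' /export/server/kafka/config/server.properties"
ssh node3 "sed -i 's/^broker.id=.*/broker.id=2/; s/node1:9092/node3:9092/' /export/server/kafka/config/server.properties"
# Spot-check the result on every node
for i in 1 2 3; do
  echo "### node$i"
  ssh node$i "grep -E '^(broker.id|advertised.listeners)' /export/server/kafka/config/server.properties"
done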

Environment variables

sudo vim /etc/profile

Append at the end:

export KAFKA_HOME=/export/server/kafka
export PATH=$PATH:$KAFKA_HOME/bin

Distribute it:

sudo xsync /etc/profile

Reload it on each node:

source /etc/profile

Start

xcall /export/server/kafka/bin/kafka-server-start.sh -daemon /export/server/kafka/config/server.properties

Stop

xcall /export/server/kafka/bin/kafka-server-stop.sh

If it does not start successfully, wait a moment and restart it.
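
Once all three brokers stay up, a short end-to-end check with the standard Kafka 3.x CLI confirms replication works across the cluster (with $KAFKA_HOME/bin on the PATH, the scripts can be invoked by name):

# Create a topic that spans all three brokers
kafka-topics.sh --bootstrap-server node1:9092 --create --topic smoke-test --partitions 3 --replication-factor 3
# The output should show partition leaders spread across broker ids 0, 1 and 2
kafka-topics.sh --bootstrap-server node1:9092 --describe --topic smoke-test
# Optional: confirm the /kafka chroot was registered in ZooKeeper
/export/server/zookeeper/bin/zkCli.sh -server node1:2181 ls /kafka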

Flume

Official site

Installation

Extract

cd /export/server
tar -zxvf /export/server/apache-flume-1.11.0-bin.tar.gz -C /export/server/

Symlink

ln -s /export/server/apache-flume-1.11.0-bin /export/server/flume

Configuration

log4j2.xml

cd /export/server/flume/conf/

Replace the &lt;Properties&gt; element with:

  <Properties><Property name="LOG_DIR">/export/server/flume/log</Property></Properties>

Replace the &lt;Root&gt; logger with:

    <Root level="INFO"><AppenderRef ref="LogFile" /><AppenderRef ref="Console" /></Root>
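
Flume runs no standing daemon in this setup, so a quick sanity check is enough; flume-ng version ships with the distribution, and the netcat-to-logger agent below is the classic smoke test (the file name example.conf is just an illustrative choice):

# Verify the install and Java wiring
/export/server/flume/bin/flume-ng version
# Minimal agent: netcat source -> memory channel -> logger sink
cat > /export/server/flume/conf/example.conf <<'EOF'
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.channels.c1.type = memory
a1.sinks.k1.type = logger
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
EOF
/export/server/flume/bin/flume-ng agent -n a1 -c /export/server/flume/conf -f /export/server/flume/conf/example.conf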

DataX

Official site

cd /export/server/
tar -zxvf /export/server/datax.tar.gz -C /export/server/

Self-test

python3 /export/server/datax/bin/datax.py /export/server/datax/job/job.json 

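The bundled job/job.json is a stream-to-stream self-test; for reference, a job of the same shape can be written by hand — a hedged sketch using the stock streamreader/streamwriter plugins (the file path and column values are arbitrary):

cat > /tmp/stream2stream.json <<'EOF'
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [{
      "reader": {
        "name": "streamreader",
        "parameter": {
          "sliceRecordCount": 5,
          "column": [
            { "type": "string", "value": "hello" },
            { "type": "long", "value": 1 }
          ]
        }
      },
      "writer": {
        "name": "streamwriter",
        "parameter": { "print": true }
      }
    }]
  }
}
EOF
python3 /export/server/datax/bin/datax.py /tmp/stream2stream.json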

DolphinScheduler

Download

Official site
Documentation

cd /export/server/
tar -zxvf /export/server/apache-dolphinscheduler-3.1.9-bin.tar.gz -C /export/server/

Prepare the MySQL account

sudo mysql -u root -p
-- To tear down and start over, if needed
DROP DATABASE IF EXISTS dolphinscheduler;
DROP USER IF EXISTS 'dolphinscheduler'@'%';
FLUSH PRIVILEGES;

-- Create the DolphinScheduler metadata database
CREATE DATABASE IF NOT EXISTS dolphinscheduler DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;

-- Create the DolphinScheduler user (remote connections only)
CREATE USER 'dolphinscheduler'@'%' IDENTIFIED BY 'Dolphin!123';

-- Grant the user full privileges on the dolphinscheduler database
GRANT ALL PRIVILEGES ON dolphinscheduler.* TO 'dolphinscheduler'@'%';

-- Reload privileges
FLUSH PRIVILEGES;
EXIT;
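
Because the account only allows remote connections (the '%' host), verify it over TCP rather than through sudo mysql; a minimal check, assuming MySQL listens on node1:3306:

mysql -h node1 -P 3306 -u dolphinscheduler -p'Dolphin!123' \
  -e "SELECT CURRENT_USER(); SHOW DATABASES LIKE 'dolphinscheduler';"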

MySQL driver

# Pick a driver jar (prefer the one under hive, then the one under /export/server)
JAR_SRC=""
for c in /export/server/hive/lib/mysql-connector-j-8.0.33.jar /export/server/mysql-connector-j-8.0.33.jar; do
  [ -f "$c" ] && JAR_SRC="$c" && break
done
[ -z "$JAR_SRC" ] && echo "mysql-connector-j-8.0.33.jar not found" && exit 1
echo "Using driver: $JAR_SRC"

# Copy it into the four services + tools
for comp in api-server master-server worker-server alert-server tools; do
  mkdir -p /export/server/apache-dolphinscheduler-3.1.9-bin/$comp/libs
  cp -f "$JAR_SRC" /export/server/apache-dolphinscheduler-3.1.9-bin/$comp/libs/
done

# Verify the copies succeeded
for comp in api-server master-server worker-server alert-server tools; do
  echo "### $comp"
  ls /export/server/apache-dolphinscheduler-3.1.9-bin/$comp/libs | grep -i "mysql-connector"
done

Configuration

install_env.sh

cd /export/server/apache-dolphinscheduler-3.1.9-bin/bin/env
mv dolphinscheduler_env.sh dolphinscheduler_env.sh.back
mv install_env.sh install_env.sh.back
vim install_env.sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ---------------------------------------------------------
# INSTALL MACHINE
# ---------------------------------------------------------
# A comma separated list of machine hostname or IP would be installed DolphinScheduler,
# including master, worker, api, alert. If you want to deploy in pseudo-distributed
# mode, just write a pseudo-distributed hostname
# Example for hostnames: ips="ds1,ds2,ds3,ds4,ds5", Example for IPs: ips="192.168.8.1,192.168.8.2,192.168.8.3,192.168.8.4,192.168.8.5"
ips=${ips:-"node1,node2,node3"}
# All machines DolphinScheduler will be installed on; here a three-node cluster: node1, node2, node3

# Port of SSH protocol, default value is 22. For now we only support same port in all `ips` machine
# modify it if you use different ssh port
sshPort=${sshPort:-"22"}
# SSH port, kept identical on all three machines; change it if you use a custom port

# A comma separated list of machine hostname or IP would be installed Master server, it
# must be a subset of configuration `ips`.
# Example for hostnames: masters="ds1,ds2", Example for IPs: masters="192.168.8.1,192.168.8.2"
masters=${masters:-"node1,node2"}
# Nodes running the Master role; at least 2 nodes are recommended for high availability

# A comma separated list of machine <hostname>:<workerGroup> or <IP>:<workerGroup>. All hostname or IP must be a
# subset of configuration `ips`, and workerGroup has the default value `default`, but we recommend you declare it behind the hosts
# Example for hostnames: workers="ds1:default,ds2:default,ds3:default", Example for IPs: workers="192.168.8.1:default,192.168.8.2:default,192.168.8.3:default"
workers=${workers:-"node1:default,node2:default,node3:default"}
# Worker roles and groups; all three machines join the default group so scheduling stays balanced

# A comma separated list of machine hostname or IP would be installed Alert server, it
# must be a subset of configuration `ips`.
# Example for hostname: alertServer="ds3", Example for IP: alertServer="192.168.8.3"
alertServer=${alertServer:-"node3"}
# Node running the Alert service; placed on node3

# A comma separated list of machine hostname or IP would be installed API server, it
# must be a subset of configuration `ips`.
# Example for hostname: apiServers="ds1", Example for IP: apiServers="192.168.8.1"
apiServers=${apiServers:-"node1"}
# Node running the API service; placed on node1 (scale out to more nodes if needed)

# The directory to install DolphinScheduler for all machine we config above. It will automatically be created by `install.sh` script if not exists.
# Do not set this configuration same as the current path (pwd). Do not add quotes to it if you are using a relative path.
installPath=${installPath:-"/export/server/dolphinscheduler"}
# Target install directory; install.sh creates it automatically on each node

# The user to deploy DolphinScheduler for all machine we config above. For now user must create by yourself before running `install.sh`
# script. The user needs to have sudo privileges and permissions to operate hdfs. If hdfs is enabled then the root directory needs
# to be created by this user
deployUser=${deployUser:-"hadoop"}
# Deploy user (needs sudo and HDFS permissions); matches the hadoop user used throughout this guide

# The root of zookeeper, for now DolphinScheduler default registry server is zookeeper.
zkRoot=${zkRoot:-"/dolphinscheduler"}
# Root path in the ZooKeeper registry; shared across the three-node ZK cluster

dolphinscheduler_env.sh

vim dolphinscheduler_env.sh
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# JAVA_HOME, will use it to start DolphinScheduler server
export JAVA_HOME=${JAVA_HOME:-/export/server/jdk}
# Java install path; uses the system JAVA_HOME when set, otherwise /export/server/jdk

# Database related configuration, set database type, username and password
export DATABASE=${DATABASE:-mysql}
export SPRING_PROFILES_ACTIVE=${DATABASE}
export SPRING_DATASOURCE_URL=${SPRING_DATASOURCE_URL:-"jdbc:mysql://node1:3306/dolphinscheduler?useUnicode=true&characterEncoding=utf8&useSSL=false&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true&nullCatalogMeansCurrent=true"}
export SPRING_DATASOURCE_USERNAME=${SPRING_DATASOURCE_USERNAME:-dolphinscheduler}
export SPRING_DATASOURCE_PASSWORD=${SPRING_DATASOURCE_PASSWORD:-Dolphin!123}
# Switched to MySQL; the URL uses hostname node1 with timezone and charset configured; username/password match the account created earlier

# DolphinScheduler server related configuration
export SPRING_CACHE_TYPE=${SPRING_CACHE_TYPE:-none}
export SPRING_JACKSON_TIME_ZONE=${SPRING_JACKSON_TIME_ZONE:-Asia/Shanghai}
export MASTER_FETCH_COMMAND_NUM=${MASTER_FETCH_COMMAND_NUM:-10}
# Server settings: caching disabled; Jackson uses the China timezone; the Master fetches 10 commands per round (tune as needed)

# Registry center configuration, determines the type and link of the registry center
export REGISTRY_TYPE=${REGISTRY_TYPE:-zookeeper}
export REGISTRY_ZOOKEEPER_CONNECT_STRING=${REGISTRY_ZOOKEEPER_CONNECT_STRING:-node1:2181,node2:2181,node3:2181}
# Registry is ZooKeeper; connect string for the three-node cluster

# Tasks related configurations, need to change the configuration if you use the related tasks.
export HADOOP_HOME=${HADOOP_HOME:-/export/server/hadoop}
export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-/export/server/hadoop/etc/hadoop}
export SPARK_HOME1=${SPARK_HOME1:-/export/server/spark1}
export SPARK_HOME2=${SPARK_HOME2:-/export/server/spark2}
export PYTHON_HOME=${PYTHON_HOME:-/usr/bin}
export HIVE_HOME=${HIVE_HOME:-/export/server/hive}
export FLINK_HOME=${FLINK_HOME:-/export/server/flink}
export DATAX_HOME=${DATAX_HOME:-/export/server/datax}
export SEATUNNEL_HOME=${SEATUNNEL_HOME:-/export/server/seatunnel}
export CHUNJUN_HOME=${CHUNJUN_HOME:-/export/server/chunjun}
# Dependency paths for the task types; components that are not installed can keep the defaults or be commented out; PYTHON_HOME points at the system /usr/bin so python3 can be invoked directly

export PATH=$HADOOP_HOME/bin:$SPARK_HOME1/bin:$SPARK_HOME2/bin:$PYTHON_HOME/bin:$JAVA_HOME/bin:$HIVE_HOME/bin:$FLINK_HOME/bin:$DATAX_HOME/bin:$SEATUNNEL_HOME/bin:$CHUNJUN_HOME/bin:$PATH
# Prepend each component's bin directory to PATH so tasks can call the corresponding commands directly

Initialize the database

bash /export/server/apache-dolphinscheduler-3.1.9-bin/tools/bin/upgrade-schema.sh 
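
If upgrade-schema.sh exits cleanly, the metadata tables should now exist; a quick hedged check counts them through information_schema:

mysql -h node1 -u dolphinscheduler -p'Dolphin!123' \
  -e "SELECT COUNT(*) AS tables_created FROM information_schema.tables WHERE table_schema='dolphinscheduler';"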

Installation

By default the installed directory is owned by user hadoop, group hadoop, which is not appropriate; change the group to the deploy user's default primary group (bigdata).

cd /export/server/apache-dolphinscheduler-3.1.9-bin/bin

# 1) Back up
mv install.sh install.sh.back

# 2) Insert the group-selection logic right after the line "if [ ! -d $installPath ];then" (multi-line append)
sed -E '/^\s*if \[ ! -d \$installPath \];then\s*$/a\
  # [Patch] Group priority: DEPLOY_GROUP > bigdata (when deployUser=hadoop and the group exists) > deployUser primary group\
  if [ -n "${DEPLOY_GROUP}" ]; then\
    _target_group="${DEPLOY_GROUP}";\
  elif [ "${deployUser}" = "hadoop" ] && getent group bigdata >/dev/null 2>&1; then\
    _target_group="bigdata";\
  else\
    _target_group="$(id -gn ${deployUser} 2>/dev/null || echo ${deployUser})";\
  fi
' install.sh.back > install.sh.tmp

# 3) Replace the chown line precisely (avoiding ambiguous character classes)
#    From: sudo chown -R $deployUser:$deployUser $installPath
#    To:   sudo chown -R $deployUser:${_target_group} $installPath
sed -E \
  's#^([[:space:]]*)sudo[[:space:]]+chown[[:space:]]+-R[[:space:]]+\$deployUser:\$deployUser[[:space:]]+\$installPath#\1sudo chown -R \$deployUser:${_target_group} \$installPath#' \
  install.sh.tmp > install.sh
rm -f install.sh.tmp

# 4) Verify the insertion and the replacement
echo "------ inserted lines around if-block ------"
nl -ba install.sh | sed -n '1,200p' | sed -n '/if \[ ! -d \$installPath \];then/,+8p'
echo "------ chown lines ------"
grep -n "chown -R" install.sh || true

Fix the GC flags: the scripts still use CMS, which has been completely removed in newer JDKs.
Fix the commons-cli problem (the jar is missing from the script's classpath).

cd /export/server/apache-dolphinscheduler-3.1.9-bin/bin

# 1) Back up (if not already done)
[ -f remove-zk-node.sh ] && cp -a remove-zk-node.sh remove-zk-node.sh.back

# 2) Rewrite the script with a clean awk patch
awk '{
  # --- Step 1: fix the GC flags (pure string substitution; other logic untouched) ---
  gsub(/-XX:\+UseConcMarkSweepGC/, "-XX:+UseG1GC");
  gsub(/[[:space:]]-XX:\+CMSParallelRemarkEnabled/, "");
  gsub(/[[:space:]]-XX:\+UseCMSInitiatingOccupancyOnly/, "");
  gsub(/[[:space:]]-XX:CMSInitiatingOccupancyFraction=[0-9]+/, "");
  print $0;
  # --- Step 2: insert commons-cli resolution after DOLPHINSCHEDULER_LIB_JARS is defined ---
  if ($0 ~ /^export[[:space:]]+DOLPHINSCHEDULER_LIB_JARS=/ && done != 1) {
    print "";
    print "# ---- auto resolve commons-cli ----";
    print ": ${ZOOKEEPER_HOME:=/export/server/zookeeper}";
    print "COMMONS_CLI_JAR=\"\"";
    print "for p in \\";
    print "  \"$DOLPHINSCHEDULER_HOME/api-server/libs/commons-cli-*.jar\" \\";
    print "  \"$DOLPHINSCHEDULER_HOME/tools/libs/commons-cli-*.jar\" \\";
    print "  \"$ZOOKEEPER_HOME/lib/commons-cli-*.jar\" \\";
    print "  \"/usr/lib/zookeeper/commons-cli-*.jar\" \\";
    print "  \"/opt/zookeeper/lib/commons-cli-*.jar\"; do";
    print "  for f in $p; do";
    print "    [ -f \"$f\" ] && COMMONS_CLI_JAR=\"$f\" && break";
    print "  done";
    print "  [ -n \"$COMMONS_CLI_JAR\" ] && break";
    print "done";
    print "if [ -z \"$COMMONS_CLI_JAR\" ]; then";
    print "  echo \"[ERROR] commons-cli-*.jar not found; copy one into api-server/libs/ or place it under ZOOKEEPER_HOME/lib\" >&2";
    print "  exit 1";
    print "fi";
    print "export DOLPHINSCHEDULER_LIB_JARS=\"$DOLPHINSCHEDULER_LIB_JARS:$COMMONS_CLI_JAR\"";
    print "# ---- end resolve commons-cli ----";
    print "";
    done=1
  }
}' remove-zk-node.sh.back > remove-zk-node.sh
chmod +x remove-zk-node.sh

bash /export/server/apache-dolphinscheduler-3.1.9-bin/bin/install.sh

Open http://&lt;node1 public IP&gt;:12345/dolphinscheduler/ui in a browser to log in to the web UI.
The default username and password are admin / dolphinscheduler123.
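
To confirm every role came up, the installed directory ships start/stop/status helper scripts (names per the 3.1.x binary layout — verify against your build), and jps gives a plain fallback:

# Per-role status from the installed directory
bash /export/server/dolphinscheduler/bin/status-all.sh
# Each node should list its assigned roles: MasterServer, WorkerServer, ApiApplicationServer, AlertServer
xcall jps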
