feat: initial version

2025-12-03 15:48:44 +08:00
commit b4df26f61d
199 changed files with 23434 additions and 0 deletions

plugins/AutoReply/README.md

@@ -0,0 +1,200 @@
# AutoReply Plugin
An intelligent auto-reply plugin built on a dual-LLM architecture: the bot decides on its own when a group-chat message is worth joining in on.
## Features
- **Dual-LLM architecture**: a small model judges quickly, a large model generates the high-quality reply
- **Multi-dimensional evaluation**: scores each message on 5 dimensions to decide whether to reply
- **Energy system**: automatically throttles reply frequency so the bot is never overly active
- **Per-chat isolation**: every group chat keeps its own independent state
- **Whitelist support**: the plugin's scope can be restricted to selected chats
## How It Works
```
group message → small-model judgment (5-dimension scoring) → above threshold → AIChat is triggered to generate a reply
```
### Judgment Dimensions
1. **Content relevance** (0-10): is the message interesting, valuable, and suitable to reply to?
2. **Reply willingness** (0-10): willingness to reply given the current energy level
3. **Social appropriateness** (0-10): would a reply fit the group's atmosphere?
4. **Timing** (0-10): takes frequency control and the time since the last reply into account
5. **Conversational continuity** (0-10): how closely the message relates to the bot's last reply
## Configuration
### Required
1. **Enable the plugin**: `enabled = true`
2. **Configure the judge-model API**:
```toml
[basic]
judge_api_url = "https://api.openai.com/v1/chat/completions"
judge_api_key = "your-api-key-here"
judge_model = "gpt-4o-mini"  # a small model is recommended
```
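To verify the endpoint and key before enabling the plugin, a standalone check can help. This is a sketch only: it assumes an OpenAI-compatible `/chat/completions` API and reuses the placeholder values from the config above.

```python
# Minimal sanity check for the judge-model endpoint (assumption: an
# OpenAI-compatible /chat/completions API; URL, key, and model are the
# placeholders from the config above).
import asyncio
import aiohttp

async def ping_judge_api():
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer your-api-key-here",
    }
    payload = {
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    }
    async with aiohttp.ClientSession() as session:
        async with session.post(
            "https://api.openai.com/v1/chat/completions",
            headers=headers, json=payload,
            timeout=aiohttp.ClientTimeout(total=30),
        ) as resp:
            body = await resp.json()
            print(resp.status, body["choices"][0]["message"]["content"])

asyncio.run(ping_judge_api())
```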
### 推荐模型
- **OpenAI**: gpt-4o-mini, gpt-3.5-turbo
- **其他兼容API**: 任何支持OpenAI格式的小参数模型
### Optional
```toml
[basic]
reply_threshold = 0.6   # reply threshold; higher means stricter

[energy]
decay_rate = 0.1        # how fast energy drains
recovery_rate = 0.02    # how fast energy recovers

[context]
messages_count = 5      # number of history messages considered during judgment

[rate_limit]
min_interval = 10       # minimum seconds between judgments, to avoid high-frequency calls
skip_if_judging = true  # skip new messages while a judgment is already in flight

[whitelist]
enabled = false         # whether the whitelist is active
chat_list = []          # list of whitelisted group-chat IDs

[weights]
# the judgment weights must sum to 1.0
relevance = 0.25
willingness = 0.20
social = 0.20
timing = 0.15
continuity = 0.20
```
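To make `reply_threshold` concrete: each dimension is scored 0-10, the scores are combined with the weights above, and the weighted sum is divided by 10 to give a 0.0-1.0 overall score that is compared against the threshold. A minimal sketch of the computation as `main.py` performs it:

```python
# Weighted overall score as computed in main.py: dimensions are 0-10,
# so dividing the weighted sum by 10 normalizes the result to 0.0-1.0.
WEIGHTS = {"relevance": 0.25, "willingness": 0.20, "social": 0.20,
           "timing": 0.15, "continuity": 0.20}

def overall_score(scores: dict) -> float:
    return sum(scores.get(k, 0) * w for k, w in WEIGHTS.items()) / 10.0

scores = {"relevance": 8, "willingness": 6, "social": 7, "timing": 5, "continuity": 4}
print(overall_score(scores))          # 0.615
print(overall_score(scores) >= 0.6)   # True -> reply is triggered
```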
## Usage Guide
### 1. Installation
1. Make sure the AIChat plugin is installed
2. Configure the judge-model API in `config.toml`
3. Set `enabled = true` to activate the plugin
4. Optional: configure the whitelist to limit the plugin's scope
### 2. Workflow
1. AutoReply (priority=90) intercepts group messages before AIChat
2. The small model scores the message on the 5 dimensions
3. If the overall score exceeds the threshold, the message is tagged `_auto_reply_triggered`
4. AIChat recognizes the tag, then generates and sends the reply
5. The energy system and reply statistics are updated
### 3. Cooperation with AIChat
- AutoReply only **decides** whether a reply is warranted
- AIChat **generates** the actual reply content
- The two communicate through the `_auto_reply_triggered` tag (see the sketch below)
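AIChat's side of the handshake is not part of this plugin, so the following is only an illustrative sketch; `should_respond` is a hypothetical helper, not AIChat's actual API. The one contract AutoReply relies on is the boolean key on the message dict.

```python
# Hypothetical AIChat-side check (AIChat's real code is not shown in this
# commit): treat a message tagged by AutoReply like an @-mention.
def should_respond(message: dict, at_bot: bool) -> bool:
    return at_bot or message.get('_auto_reply_triggered', False)
```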
## Energy System
- **Range**: 0.1 - 1.0
- **Drain**: energy drops after every proactive reply
- **Recovery**: energy slowly recovers while the bot stays silent
- **Daily reset**: an extra 0.2 energy is restored each day
The energy value feeds into the "reply willingness" dimension, which throttles the reply frequency naturally.
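The update rules themselves are small; this sketch mirrors the state updates in `main.py`, with the default rates from the optional config above.

```python
# Energy bookkeeping as implemented in main.py: replies drain energy,
# silence restores it, the first update of each day adds a 0.2 bonus,
# and the value is always clamped to [0.1, 1.0].
DECAY_RATE = 0.1      # [energy] decay_rate
RECOVERY_RATE = 0.02  # [energy] recovery_rate

def after_reply(energy: float) -> float:
    return max(0.1, energy - DECAY_RATE)

def after_silence(energy: float) -> float:
    return min(1.0, energy + RECOVERY_RATE)

def daily_reset(energy: float) -> float:
    return min(1.0, energy + 0.2)
```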
## Performance Optimizations
### History-Watching Mode (recommended)
The plugin uses a **smart watching mode**: instead of reacting to every single message, it watches the history files for changes:
1. **Periodic check** (`check_interval`):
   - the history files are checked every 5 seconds by default
   - when new user messages are detected, the chat is flagged "pending judgment"
2. **Batching**:
   - several messages are judged together rather than one call per message
   - a judgment only fires when there is an actual conversation
   - the decision is based on the full conversation context
3. **Flow**:
```
user sends a message → AIChat writes the history → the scheduled task detects the change →
chat flagged "pending judgment" → the next message triggers the judgment → the small model is called
```
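At its core the watcher is a per-file size comparison; this is a condensed sketch of what `check_history_changes` in `main.py` does on each tick.

```python
# Condensed from check_history_changes() in main.py: a chat is flagged
# "pending judgment" when its history file gained at least one message
# that was not written by the bot itself.
import json
from pathlib import Path

last_history_size: dict[str, int] = {}
pending_judge: dict[str, bool] = {}

def scan(history_dir: Path, bot_nickname: str) -> None:
    for history_file in history_dir.glob("*.json"):
        chat_id = history_file.stem
        history = json.loads(history_file.read_text(encoding="utf-8"))
        new_messages = history[last_history_size.get(chat_id, 0):]
        if any(m.get("nickname") != bot_nickname for m in new_messages):
            pending_judge[chat_id] = True
        last_history_size[chat_id] = len(history)
```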
### Rate-Limiting Safeguards
To keep high-frequency messages from piling up API calls, several layers of protection are built in (see the sketch after this list):
1. **Minimum judgment interval** (`min_interval`):
   - 10 seconds by default; the same chat is judged at most once per interval
   - faster messages are skipped automatically, so API calls cannot pile up
2. **Debounce** (`skip_if_judging`):
   - new messages are skipped while a judgment is already in flight
   - prevents concurrent calls to the small-model API
3. **Watch mode** (`monitor_mode`):
   - enabled by default; a judgment fires only when a history change was detected
   - avoids triggering a judgment for every single message
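The first two guards boil down to a small gate that runs before any API call; this is condensed from `handle_message` in `main.py`.

```python
# Condensed rate-limit gate from handle_message() in main.py: a judgment
# runs only if none is in flight and min_interval has elapsed for the chat.
import time

judging: dict[str, bool] = {}
last_judge_time: dict[str, float] = {}

def may_judge(chat_id: str, min_interval: float = 10.0) -> bool:
    if judging.get(chat_id, False):
        return False  # debounce: a judgment is already running
    if time.time() - last_judge_time.get(chat_id, 0) < min_interval:
        return False  # too soon since the last judgment
    return True
```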
### Tuning Tips
**If users send messages faster than the plugin can handle**:
- keep `monitor_mode = true` (the default)
- raise `min_interval` to 15-30 seconds
- raise `check_interval` to 10 seconds
- make sure `skip_if_judging = true`
**If you want faster responses**:
- lower `check_interval` to 3 seconds
- lower `min_interval` to 5 seconds
- but keep API cost and load in mind
**If you want to disable watch mode (not recommended)**:
- set `monitor_mode = false`
- every message will then attempt a judgment, which can pile up API calls
## Troubleshooting
1. **The bot never replies**:
   - check that `enabled = true`
   - confirm the judge-model API configuration is correct
   - look at the scores in the logs
   - try lowering `reply_threshold`
2. **The bot replies too often**:
   - raise `reply_threshold`
   - raise `decay_rate` (energy drains faster)
   - lower `recovery_rate` (energy recovers more slowly)
   - raise `min_interval` (longer gap between judgments)
3. **Judgments are inaccurate**:
   - adjust the weight configuration
   - raise `messages_count` to provide more context
   - check whether the judge model is a good fit
4. **High-frequency messages pile up**:
   - raise `min_interval` to 15-30 seconds
   - make sure `skip_if_judging = true`
   - look for "AutoReply skipping message" entries in the logs
## Log Messages
- `🔥 AutoReply triggered`: the judgment passed and a reply is triggered
- `AutoReply not triggered`: the judgment did not pass; no reply is sent
- log lines include the score and the reasoning to ease debugging
## Notes
1. **Priority**: AutoReply's priority (90) must be higher than AIChat's (50)
2. **API cost**: every judgment calls the judge-model API; keep an eye on cost
3. **Whitelist**: enable the plugin in a test group first and expand once it is stable
4. **@-mentions**: messages that @ the bot skip AutoReply and are handled directly by AIChat
## License
This plugin is released under the project's main license.

plugins/AutoReply/main.py

@@ -0,0 +1,528 @@
"""
AutoReply 插件 - 基于双LLM架构的智能自动回复
使用小模型判断是否需要回复通过后触发AIChat插件生成回复
"""
import json
import time
import tomllib
import aiohttp
from pathlib import Path
from datetime import datetime, date
from dataclasses import dataclass
from typing import Dict
from loguru import logger
from utils.plugin_base import PluginBase
from utils.decorators import on_text_message, schedule
try:
from aiohttp_socks import ProxyConnector
PROXY_SUPPORT = True
except ImportError:
PROXY_SUPPORT = False

@dataclass
class JudgeResult:
    """Result of a reply judgment."""
    relevance: float = 0.0
    willingness: float = 0.0
    social: float = 0.0
    timing: float = 0.0
    continuity: float = 0.0
    reasoning: str = ""
    should_reply: bool = False
    overall_score: float = 0.0


@dataclass
class ChatState:
    """Per-chat state."""
    energy: float = 1.0
    last_reply_time: float = 0.0
    last_reset_date: str = ""
    total_messages: int = 0
    total_replies: int = 0

class AutoReply(PluginBase):
    """Intelligent auto-reply plugin."""
    description = "Intelligent auto-reply plugin built on a dual-LLM architecture"
    author = "ShiHao"
    version = "1.0.0"

    def __init__(self):
        super().__init__()
        self.config = None
        self.chat_states: Dict[str, ChatState] = {}
        self.weights = {}
        self.last_judge_time: Dict[str, float] = {}   # last judgment time per chat
        self.judging: Dict[str, bool] = {}            # whether a judgment is in flight per chat
        self.last_history_size: Dict[str, int] = {}   # last seen history length per chat
        self.pending_judge: Dict[str, bool] = {}      # whether a judgment is pending per chat
        self.whitelist_normalized = set()             # normalized whitelist IDs (match history file names)
    async def async_init(self):
        """Asynchronous initialization."""
        config_path = Path(__file__).parent / "config.toml"
        with open(config_path, "rb") as f:
            self.config = tomllib.load(f)
        # Load the weight configuration
        self.weights = {
            "relevance": self.config["weights"]["relevance"],
            "willingness": self.config["weights"]["willingness"],
            "social": self.config["weights"]["social"],
            "timing": self.config["weights"]["timing"],
            "continuity": self.config["weights"]["continuity"]
        }
        # Verify that the weights sum to 1.0; normalize otherwise
        weight_sum = sum(self.weights.values())
        if abs(weight_sum - 1.0) > 1e-6:
            logger.warning(f"Judgment weights do not sum to 1 (current sum: {weight_sum}); normalized automatically")
            self.weights = {k: v / weight_sum for k, v in self.weights.items()}
        # Pre-normalize the whitelist (same normalization as history file names)
        self.whitelist_normalized = {
            self._normalize_chat_id(cid) for cid in self.config.get("whitelist", {}).get("chat_list", [])
        }
        logger.info(f"AutoReply plugin loaded, judge model: {self.config['basic']['judge_model']}")
        logger.info(f"AutoReply config: enabled={self.config['basic']['enabled']}, priority=90")
        logger.info(f"AutoReply watch mode: checking history changes every {self.config.get('rate_limit', {}).get('check_interval', 5)}s")
        logger.warning("⚠️ AutoReply plugin started, waiting for messages...")
    def _normalize_chat_id(self, chat_id: str) -> str:
        """Turn a chat ID into the safe file name used by history files."""
        return (chat_id or "").replace("@", "_").replace(":", "_")

    def _is_chat_allowed(self, raw_chat_id: str) -> bool:
        """Whitelist check; accepts both raw and normalized IDs."""
        if not self.config["whitelist"]["enabled"]:
            return True
        safe_id = self._normalize_chat_id(raw_chat_id)
        return raw_chat_id in self.config["whitelist"]["chat_list"] or safe_id in self.whitelist_normalized
    @schedule('interval', seconds=5)
    async def check_history_changes(self, *args, **kwargs):
        """Periodically check the history files for changes."""
        if not self.config["basic"]["enabled"]:
            logger.debug("[AutoReply] Plugin disabled, skipping check")
            return
        # Check whether watch mode is enabled
        if not self.config.get("rate_limit", {}).get("monitor_mode", True):
            logger.debug("[AutoReply] Watch mode disabled, skipping check")
            return
        try:
            # Locate the AIChat plugin's history directory
            from utils.plugin_manager import PluginManager
            plugin_manager = PluginManager()  # singleton; instantiating returns the shared instance
            aichat_plugin = plugin_manager.plugins.get("AIChat")
            if not aichat_plugin:
                logger.debug("[AutoReply] AIChat plugin not found")
                return
            if not hasattr(aichat_plugin, 'history_dir'):
                logger.debug("[AutoReply] AIChat plugin has no history_dir attribute")
                return
            history_dir = aichat_plugin.history_dir
            if not history_dir.exists():
                logger.debug(f"[AutoReply] History directory does not exist: {history_dir}")
                return
            logger.debug(f"[AutoReply] Checking history directory: {history_dir}")
            # Walk all history files
            for history_file in history_dir.glob("*.json"):
                chat_id = history_file.stem  # the file name is the chat_id
                # Whitelist check
                if self.config["whitelist"]["enabled"]:
                    if chat_id not in self.whitelist_normalized:
                        continue
                try:
                    with open(history_file, "r", encoding="utf-8") as f:
                        history = json.load(f)
                    current_size = len(history)
                    last_size = self.last_history_size.get(chat_id, 0)
                    # Any new messages since the last scan?
                    if current_size > last_size:
                        new_messages = history[last_size:]
                        # Check whether any of them came from someone other than the bot
                        with open("main_config.toml", "rb") as f:
                            main_config = tomllib.load(f)
                        bot_nickname = main_config.get("Bot", {}).get("nickname", "机器人")
                        has_user_message = any(
                            msg.get('nickname') != bot_nickname
                            for msg in new_messages
                        )
                        if has_user_message:
                            logger.debug(f"[AutoReply] New messages detected in chat {chat_id[:20]}...")
                            # Flag the chat as pending judgment
                            self.pending_judge[chat_id] = True
                        # Record the new size
                        self.last_history_size[chat_id] = current_size
                except Exception as e:
                    logger.debug(f"Failed to read history file: {history_file.name}, {e}")
                    continue
        except Exception as e:
            logger.error(f"Failed to check history changes: {e}")
    @on_text_message(priority=90)  # high priority: runs before AIChat
    async def handle_message(self, bot, message: dict):
        """Handle an incoming message."""
        try:
            logger.debug("[AutoReply] Message received, processing")
            # Plugin enabled?
            if not self.config["basic"]["enabled"]:
                logger.debug("AutoReply plugin disabled, skipping")
                return True
            # Group chats only
            is_group = message.get('IsGroup', False)
            if not is_group:
                logger.debug("AutoReply handles group chats only, skipping private message")
                return True
            # In group messages, FromWxid is the chat ID and SenderWxid the sender ID
            from_wxid = message.get('FromWxid')      # group chat ID
            sender_wxid = message.get('SenderWxid')  # sender ID
            chat_id = self._normalize_chat_id(from_wxid)  # normalized ID, matches history file names
            content = (message.get('msg') or message.get('Content', '')).strip()
            # Skip empty messages
            if not content:
                logger.debug("AutoReply skipping empty message")
                return True
            # Whitelist check (use from_wxid as the chat ID)
            if not self._is_chat_allowed(from_wxid):
                logger.debug(f"AutoReply whitelist mode: chat {from_wxid[:20]}... is not whitelisted")
                return True
            # Skip @-mentions so AIChat handles them normally
            if self._is_at_bot(message):
                logger.debug("AutoReply skipping @-mention, handing over to AIChat")
                return True
            # Watch mode: only judge when the chat is flagged as pending
            monitor_mode = self.config.get("rate_limit", {}).get("monitor_mode", True)
            if monitor_mode:
                if not self.pending_judge.get(chat_id, False):
                    logger.debug(f"AutoReply watch mode: chat {from_wxid[:20]}... has no pending flag")
                    return True
                # Clear the pending flag
                self.pending_judge[chat_id] = False
            # Rate limit: skip while a judgment is in flight
            if self.config.get("rate_limit", {}).get("skip_if_judging", True):
                if self.judging.get(chat_id, False):
                    logger.debug(f"AutoReply skipping message: chat {from_wxid[:20]}... judgment in flight")
                    return True
            # Rate limit: enforce the minimum interval between judgments
            min_interval = self.config.get("rate_limit", {}).get("min_interval", 10)
            last_time = self.last_judge_time.get(chat_id, 0)
            current_time = time.time()
            if current_time - last_time < min_interval:
                logger.debug(f"AutoReply skipping message: only {current_time - last_time:.1f}s since the last judgment")
                # In watch mode, re-flag the chat so the next message retries
                if monitor_mode:
                    self.pending_judge[chat_id] = True
                return True
            logger.info(f"AutoReply judging message: {content[:30]}...")
            # Mark the judgment as in flight
            self.judging[chat_id] = True
            self.last_judge_time[chat_id] = current_time
            # Ask the small model whether a reply is warranted
            judge_result = await self._judge_with_small_model(bot, message)
            # Clear the in-flight flag
            self.judging[chat_id] = False
            if judge_result.should_reply:
                logger.info(f"🔥 AutoReply triggered | {from_wxid[:20]}... | score: {judge_result.overall_score:.2f} | {judge_result.reasoning[:50]}")
                # Update state
                self._update_active_state(chat_id, judge_result)
                # Tag the message so AIChat knows it should reply
                message['_auto_reply_triggered'] = True
                return True  # pass on to AIChat
            else:
                logger.debug(f"AutoReply not triggered | {from_wxid[:20]}... | score: {judge_result.overall_score:.2f}")
                self._update_passive_state(chat_id, judge_result)
                return True
        except Exception as e:
            logger.error(f"AutoReply error: {e}")
            import traceback
            logger.error(traceback.format_exc())
            # Clear the in-flight flag even on errors
            if 'chat_id' in locals():
                self.judging[chat_id] = False
            elif 'from_wxid' in locals():
                self.judging[self._normalize_chat_id(from_wxid)] = False
            return True
    def _is_at_bot(self, message: dict) -> bool:
        """Check whether the bot was @-mentioned."""
        content = message.get('Content', '')
        # Normalized messages carry the mention list in the Ats field
        at_list = message.get('Ats', [])
        # Mentioned if the Ats list is non-empty or the content contains an @ marker
        return len(at_list) > 0 or '@' in content
    async def _judge_with_small_model(self, bot, message: dict) -> JudgeResult:
        """Ask the small model whether a reply is warranted."""
        # Normalized messages: FromWxid is the chat ID, SenderWxid the sender, Content the text
        from_wxid = message.get('FromWxid')  # group chat ID
        chat_id = self._normalize_chat_id(from_wxid)
        content = message.get('Content', '')
        sender_wxid = message.get('SenderWxid', '')
        # Load the chat state
        chat_state = self._get_chat_state(chat_id)
        # Fetch the recent history
        recent_messages = await self._get_recent_messages(chat_id)
        last_bot_reply = await self._get_last_bot_reply(chat_id)
        # Build the judgment prompt
        reasoning_part = ""
        if self.config["judge"]["include_reasoning"]:
            reasoning_part = ',\n  "reasoning": "detailed analysis of the reason"'
        judge_prompt = f"""You are the decision system of a group-chat bot. Decide whether the bot should reply proactively.
## Current chat
- Chat ID: {from_wxid}
- Energy level: {chat_state.energy:.1f}/1.0
- Last reply: {self._get_minutes_since_last_reply(chat_id)} minutes ago
## Last {self.config['context']['messages_count']} messages
{recent_messages}
## Bot's last reply
{last_bot_reply if last_bot_reply else "none"}
## Message to judge
Content: {content}
Time: {datetime.now().strftime('%H:%M:%S')}
## Evaluation
Score the following 5 dimensions (0-10 each):
1. **Content relevance** (0-10): is the message interesting, valuable, suitable to reply to?
2. **Reply willingness** (0-10): willingness to reply given the current energy level
3. **Social appropriateness** (0-10): would a reply fit the current group atmosphere?
4. **Timing** (0-10): is this a good moment to reply?
5. **Conversational continuity** (0-10): how closely does the message relate to the last reply?
**Reply threshold**: {self.config['basic']['reply_threshold']}
Respond in JSON:
{{
  "relevance": score,
  "willingness": score,
  "social": score,
  "timing": score,
  "continuity": score{reasoning_part}
}}
**Note: your reply must be a complete JSON object and contain nothing else!**"""
        # Call the small-model API, with retries
        max_retries = self.config["judge"]["max_retries"] + 1
        for attempt in range(max_retries):
            try:
                result = await self._call_judge_api(judge_prompt)
                # Strip code fences before parsing the JSON
                content_text = result.strip()
                if content_text.startswith("```json"):
                    content_text = content_text.replace("```json", "").replace("```", "").strip()
                elif content_text.startswith("```"):
                    content_text = content_text.replace("```", "").strip()
                judge_data = json.loads(content_text)
                # Weighted overall score, normalized to 0.0-1.0
                overall_score = (
                    judge_data.get("relevance", 0) * self.weights["relevance"] +
                    judge_data.get("willingness", 0) * self.weights["willingness"] +
                    judge_data.get("social", 0) * self.weights["social"] +
                    judge_data.get("timing", 0) * self.weights["timing"] +
                    judge_data.get("continuity", 0) * self.weights["continuity"]
                ) / 10.0
                should_reply = overall_score >= self.config["basic"]["reply_threshold"]
                return JudgeResult(
                    relevance=judge_data.get("relevance", 0),
                    willingness=judge_data.get("willingness", 0),
                    social=judge_data.get("social", 0),
                    timing=judge_data.get("timing", 0),
                    continuity=judge_data.get("continuity", 0),
                    reasoning=judge_data.get("reasoning", "") if self.config["judge"]["include_reasoning"] else "",
                    should_reply=should_reply,
                    overall_score=overall_score
                )
            except json.JSONDecodeError as e:
                logger.warning(f"Failed to parse judge-model JSON (attempt {attempt + 1}/{max_retries}): {str(e)}")
                if attempt == max_retries - 1:
                    return JudgeResult(should_reply=False, reasoning="JSON parse failure")
                continue
            except Exception as e:
                logger.error(f"Judge-model error: {e}")
                return JudgeResult(should_reply=False, reasoning=f"error: {str(e)}")
        return JudgeResult(should_reply=False, reasoning="retries exhausted")
    async def _call_judge_api(self, prompt: str) -> str:
        """Call the judge-model API."""
        api_url = self.config["basic"]["judge_api_url"]
        api_key = self.config["basic"]["judge_api_key"]
        model = self.config["basic"]["judge_model"]
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}"
        }
        payload = {
            "model": model,
            "messages": [
                {"role": "system", "content": "You are a professional group-chat reply decision system. You must return results strictly as JSON."},
                {"role": "user", "content": prompt}
            ],
            "temperature": 0.7
        }
        # Optional proxy
        connector = None
        if self.config["proxy"]["enabled"] and PROXY_SUPPORT:
            proxy_type = self.config["proxy"]["type"]
            proxy_host = self.config["proxy"]["host"]
            proxy_port = self.config["proxy"]["port"]
            proxy_url = f"{proxy_type}://{proxy_host}:{proxy_port}"
            connector = ProxyConnector.from_url(proxy_url)
        async with aiohttp.ClientSession(connector=connector) as session:
            async with session.post(api_url, headers=headers, json=payload, timeout=aiohttp.ClientTimeout(total=30)) as response:
                if response.status != 200:
                    raise Exception(f"API call failed: {response.status}")
                result = await response.json()
                return result["choices"][0]["message"]["content"]
    async def _get_recent_messages(self, chat_id: str) -> str:
        """Fetch the recent message history."""
        try:
            # Read the history kept by the AIChat plugin
            from utils.plugin_manager import PluginManager
            plugin_manager = PluginManager()  # singleton; instantiating returns the shared instance
            aichat_plugin = plugin_manager.plugins.get("AIChat")
            if aichat_plugin and hasattr(aichat_plugin, 'history_dir'):
                history_file = aichat_plugin.history_dir / f"{chat_id}.json"
                if history_file.exists():
                    with open(history_file, "r", encoding="utf-8") as f:
                        history = json.load(f)
                    # Keep the last N entries
                    recent = history[-self.config['context']['messages_count']:]
                    messages = []
                    for record in recent:
                        nickname = record.get('nickname', 'unknown')
                        content = record.get('content', '')
                        messages.append(f"{nickname}: {content}")
                    return "\n".join(messages) if messages else "no conversation history"
        except Exception as e:
            logger.debug(f"Failed to fetch message history: {e}")
        return "no conversation history"
    async def _get_last_bot_reply(self, chat_id: str) -> str:
        """Fetch the bot's most recent reply."""
        try:
            from utils.plugin_manager import PluginManager
            plugin_manager = PluginManager()  # singleton; instantiating returns the shared instance
            aichat_plugin = plugin_manager.plugins.get("AIChat")
            if aichat_plugin and hasattr(aichat_plugin, 'history_dir'):
                history_file = aichat_plugin.history_dir / f"{chat_id}.json"
                if history_file.exists():
                    with open(history_file, "r", encoding="utf-8") as f:
                        history = json.load(f)
                    # Scan backwards for the bot's last reply
                    with open("main_config.toml", "rb") as f:
                        main_config = tomllib.load(f)
                    bot_nickname = main_config.get("Bot", {}).get("nickname", "机器人")
                    for record in reversed(history):
                        if record.get('nickname') == bot_nickname:
                            return record.get('content', '')
        except Exception as e:
            logger.debug(f"Failed to fetch the last reply: {e}")
        return None
    def _get_chat_state(self, chat_id: str) -> ChatState:
        """Fetch (and lazily create) the state of a chat."""
        if chat_id not in self.chat_states:
            self.chat_states[chat_id] = ChatState()
        today = date.today().isoformat()
        state = self.chat_states[chat_id]
        if state.last_reset_date != today:
            state.last_reset_date = today
            state.energy = min(1.0, state.energy + 0.2)
        return state

    def _get_minutes_since_last_reply(self, chat_id: str) -> int:
        """Minutes since the bot's last reply in this chat."""
        chat_state = self._get_chat_state(chat_id)
        if chat_state.last_reply_time == 0:
            return 999
        return int((time.time() - chat_state.last_reply_time) / 60)

    def _update_active_state(self, chat_id: str, judge_result: JudgeResult):
        """Update state after a proactive reply."""
        chat_state = self._get_chat_state(chat_id)
        chat_state.last_reply_time = time.time()
        chat_state.total_replies += 1
        chat_state.total_messages += 1
        chat_state.energy = max(0.1, chat_state.energy - self.config["energy"]["decay_rate"])

    def _update_passive_state(self, chat_id: str, judge_result: JudgeResult):
        """Update state after staying silent."""
        chat_state = self._get_chat_state(chat_id)
        chat_state.total_messages += 1
        chat_state.energy = min(1.0, chat_state.energy + self.config["energy"]["recovery_rate"])