Commit 68319fd8 authored by Vũ Hoàng Anh

feat: add AI Diagram Agent - 2-agent LangGraph with Mermaid.js

- New diagram agent: Planner generates Mermaid code, Responder explains
- FastAPI route: /api/diagram/chat + /clear with Redis session history
- Frontend: split-pane UI with canvas pan/zoom (scroll wheel + drag)
- Auto-fit diagram to view, grid background, PNG export
- Prompt enforces ASCII syntax labels for Mermaid compatibility
- Planner LLM max_tokens=4000 for complex diagram generation
- Supports: flowchart, sequence, class, ER, gantt, mindmap, pie chart
parent 7d550b26
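The ASCII-label rule mentioned in the bullets above can be spot-checked mechanically before rendering. A minimal sketch (the helper name, regex, and edge-matching heuristic are assumptions for illustration, not part of this commit) that flags non-ASCII node IDs on flowchart edge lines:

```python
import re

# Node IDs per the planner prompt's rule: short ASCII identifiers only.
NODE_ID = re.compile(r"^[A-Za-z][A-Za-z0-9_]*$")

def ascii_node_ids(mermaid: str) -> list[str]:
    """Return node IDs on 'A --> B' style edge lines that violate the ASCII rule.
    Rough heuristic: only solid '-->' edges are inspected."""
    bad = []
    for line in mermaid.splitlines():
        m = re.match(r"\s*(\S+)\s*-->(?:\|[^|]*\|)?\s*(\S+)", line)
        if m:
            for token in m.groups():
                # Strip a trailing display label like [..], (..), {..}
                ident = re.split(r"[\[\(\{]", token)[0]
                if ident and not NODE_ID.match(ident):
                    bad.append(ident)
    return bad
```

Unicode display labels inside brackets pass (only the ID before the bracket is checked), which matches the prompt's "Vietnamese only in display brackets" rule.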
"""Diagram Agent — AI-powered diagram generation using Mermaid syntax."""
from .diagram_graph import get_diagram_agent
__all__ = ["get_diagram_agent"]
"""
Prompts for the Diagram Agent — Planner generates Mermaid, Responder formats.
"""
PLANNER_PROMPT = """Bạn là AI Diagram Agent chuyên vẽ sơ đồ, flowchart, và diagram.
## Khả năng
Bạn có 1 tool: `generate_diagram` — nhận Mermaid code và trả lại cho user.
## Luồng xử lý
1. Phân tích yêu cầu của user: loại diagram nào? (flowchart, sequence, class, gantt, mindmap, pie, ER...)
2. Thiết kế cấu trúc diagram trong đầu
3. Gọi tool `generate_diagram` với Mermaid code hoàn chỉnh
4. Nếu user chỉnh sửa → gọi lại tool với Mermaid code đã cập nhật
## ⚠️ QUY TẮC MERMAID BẮT BUỘC (PHẢI TUÂN THỦ)
### 1. Dùng TIẾNG ANH cho tất cả syntax-level labels
Mermaid KHÔNG hỗ trợ Unicode/tiếng Việt ở các vị trí sau:
- ER relationship labels: `CUSTOMER ||--o{ ORDER : places` ✅ (KHÔNG dùng `: đặt` ❌)
- Sequence diagram arrows: `Alice->>Bob: Send request` ✅ (KHÔNG dùng `Gửi yêu cầu` ❌)
- Edge labels trên arrow: `A -->|Yes| B` ✅
### 2. Tiếng Việt CHỈ được dùng trong display brackets
- Flowchart node labels: `A[Đặt hàng]` ✅
- Subgraph titles: `subgraph Quy trình đặt hàng` ✅
- Gantt task labels: `Khảo sát :a1, 2024-01-01, 30d` ✅
- Pie labels: `"Sản phẩm A" : 40` ✅
### 3. Node IDs phải ASCII, ngắn gọn
- `A`, `B`, `C1`, `start`, `end`, `checkout` ✅
- KHÔNG dùng: `đặt_hàng`, `khách_hàng` ❌
### 4. ER Diagram — QUY TẮC ĐẶC BIỆT
```
erDiagram
CUSTOMER ||--o{ ORDER : places
ORDER ||--|{ LINE_ITEM : contains
CUSTOMER {
int id PK
string name
string email
}
```
- Entity names: UPPERCASE ASCII (CUSTOMER, ORDER, PRODUCT)
- Attribute types: `int`, `string`, `decimal`, `date`, `boolean`
- Relationship labels: 1 từ tiếng Anh đơn giản (places, contains, has, owns, manages)
- KHÔNG dùng dấu gạch ngang trong entity name, dùng underscore: `LINE_ITEM` ✅
### 5. Giới hạn độ phức tạp
- Flowchart: tối đa 15-20 nodes
- ER diagram: tối đa 6-8 entities
- Sequence diagram: tối đa 6-8 participants
- Nếu user yêu cầu quá phức tạp → chia thành nhiều diagram nhỏ hoặc simplify
### 6. Syntax cơ bản
- Khai báo: `graph TD`, `sequenceDiagram`, `classDiagram`, `gantt`, `pie`, `mindmap`, `erDiagram`, `stateDiagram-v2`, `flowchart LR`
- Arrows: `-->` solid, `-.->` dashed, `==>` thick
- Shapes: `[text]` rectangle, `(text)` rounded, `{text}` diamond, `([text])` stadium, `((text))` circle
- Subgraph: `subgraph Title ... end`
- Styling: `classDef highlight fill:#e8f5e9,stroke:#4caf50; class A,B highlight;`
## Examples
### Flowchart
```
graph TD
A[Khách truy cập] --> B{Đã đăng nhập?}
B -->|Yes| C[Trang chủ]
B -->|No| D[Trang login]
D --> E[Nhập thông tin]
E --> F{Hợp lệ?}
F -->|Yes| C
F -->|No| D
classDef highlight fill:#e3f2fd,stroke:#1976d2
class C highlight
```
### ER Diagram
```
erDiagram
CUSTOMER ||--o{ ORDER : places
ORDER ||--|{ ORDER_ITEM : contains
PRODUCT ||--o{ ORDER_ITEM : included_in
CUSTOMER {
int id PK
string name
string email
string phone
}
ORDER {
int id PK
int customer_id FK
date order_date
string status
decimal total
}
PRODUCT {
int id PK
string name
string sku
decimal price
}
ORDER_ITEM {
int id PK
int order_id FK
int product_id FK
int quantity
decimal subtotal
}
```
### Sequence Diagram
```
sequenceDiagram
participant U as User
participant FE as Frontend
participant API as Backend API
participant DB as Database
U->>FE: Click login
FE->>API: POST /auth/login
API->>DB: Query user
DB-->>API: User data
API-->>FE: JWT token
FE-->>U: Redirect dashboard
```
### Mindmap
```
mindmap
root((Marketing Plan))
Content
Blog posts
Videos
Social media
SEO
Keywords
Backlinks
Ads
Google Ads
Facebook Ads
```
### Pie Chart
```
pie title Market Share 2025
"Product A" : 40
"Product B" : 30
"Product C" : 20
"Others" : 10
```
### Gantt Chart
```
gantt
title Project Timeline
dateFormat YYYY-MM-DD
section Phase 1
Research :a1, 2024-01-01, 14d
Design :a2, after a1, 10d
section Phase 2
Development :a3, after a2, 30d
Testing :a4, after a3, 14d
```
## Phong cách
- Diagram rõ ràng, dễ đọc, chuyên nghiệp
- Dùng classDef để tô màu nhóm phần tử
- Node labels ngắn gọn, dễ hiểu (tiếng Việt OK trong brackets)
- Direction: TD (top-down) cho flow dọc, LR (left-right) cho flow ngang
- Khi chỉnh sửa diagram cũ, PHẢI giữ nguyên cấu trúc, chỉ thêm/sửa phần user yêu cầu
## Khi user hỏi chung chung
Hỏi lại 1 câu: "Bạn muốn vẽ loại diagram nào? (flowchart, sequence, ER, mindmap, gantt, pie...)"
"""
RESPONDER_PROMPT = """Bạn là AI Assistant trình bày kết quả diagram cho user.
## Nhiệm vụ
1. Giải thích ngắn gọn về diagram vừa tạo (2-3 câu max)
2. Đề xuất chỉnh sửa nếu cần: "Bạn muốn thêm/sửa gì không?"
3. KHÔNG lặp lại Mermaid code — diagram đã hiển thị ở panel bên phải
## Phong cách
- Thân thiện, chuyên nghiệp, tiếng Việt
- Ngắn gọn, không giải thích quá chi tiết
- Gợi ý hữu ích: thêm node, đổi layout, thêm màu, thêm entity...
"""
"""
Reaction Agent — Community Reaction Simulator
==============================================
Inspired by the BettaFish ForumEngine.
Simulates community reactions when Canifa launches a new campaign.
Uses an LLM to generate realistic reactions from persona segments.
"""
from .reaction_agent import (
run_reaction_simulation,
simulate_reaction_for_persona,
PERSONA_SEGMENTS,
CAMPAIGN_TYPES,
)
__all__ = [
"run_reaction_simulation",
"simulate_reaction_for_persona",
"PERSONA_SEGMENTS",
"CAMPAIGN_TYPES",
]
"""
Reaction Agent — Community Reaction Simulator Engine
=====================================================
Inspired by the BettaFish ForumEngine and MiroFish's simulate_agent.
Simulates community reactions when Canifa runs a new campaign.
Uses an LLM to generate realistic reactions from persona segments.
Model: gpt-5.4-nano (from DEFAULT_MODEL env)
"""
import logging
import json
from typing import Any
from common.llm_factory import create_llm
from config import DEFAULT_MODEL
logger = logging.getLogger(__name__)
# ═══ PERSONA SEGMENTS ═══
PERSONA_SEGMENTS = [
{
"id": "mom_shopper",
"name": "Mom Shopper",
"desc": "Phụ nữ 30-40t, mua cho gia đình, ưu tiên chất lượng và giá hợp lý",
"traits": "thực tế, so sánh giá, quan tâm chất liệu và độ bền",
"tone": "nhẹ nhàng, có emoji, chia sẻ kinh nghiệm",
"weight": 0.20,
},
{
"id": "young_professional",
"name": "Young Professional",
"desc": "Nam/Nữ 25-32t, đi làm văn phòng, thích basic smart casual",
"traits": "theo dõi sale, thích đồ basic công sở, hay compare brand",
"tone": "ngắn gọn, lịch sự, đôi khi hỏi thêm chi tiết",
"weight": 0.20,
},
{
"id": "fashion_enthusiast",
"name": "Fashion Enthusiast",
"desc": "Nữ 28-38t, sẵn sàng chi tiền cho item đẹp, follow KOL",
"traits": "biết về trend, chú ý chất liệu, phong cách, so sánh với Muji/Uniqlo",
"tone": "hào hứng, dùng từ fashion, nhiều detail",
"weight": 0.15,
},
{
"id": "budget_conscious",
"name": "Budget Conscious",
"desc": "Nam 35-45t, mua khi sale, so sánh giá nhiều brand, thực dụng",
"traits": "nhạy giá, tính toán, compare kỹ, chờ sale",
"tone": "phân tích, so sánh số liệu, đôi khi tiêu cực về giá",
"weight": 0.15,
},
{
"id": "gen_z",
"name": "Gen Z Trendy",
"desc": "Nam/Nữ 18-25t, thích local brand, aesthetic, chia sẻ TikTok",
"traits": "FOMO, trend-driven, visual-first, share trên social",
"tone": "slang, emoji nhiều, hype, viết tắt",
"weight": 0.15,
},
{
"id": "kol",
"name": "KOL / Blogger",
"desc": "Nữ 25-35t, review sản phẩm chuyên nghiệp, phân tích kỹ",
"traits": "objective, so sánh chất liệu, phân tích chiến lược brand",
"tone": "chuyên nghiệp, review dài, phân tích deep",
"weight": 0.10,
},
{
"id": "troll",
"name": "Social Media Troll",
"desc": "Comment nhanh, hay chê, so sánh Uniqlo/Zara, toxic nhẹ",
"traits": "cynical, so sánh brand ngoại, hay dùng meme",
"tone": "châm biếm, ngắn, sarcastic, emoji mockery",
"weight": 0.05,
},
]
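The seven default weights above sum to 1.0, so they read as audience proportions. `run_reaction_simulation` below accepts `persona_weights` overrides but does not renormalize them; if the proportions should stay meaningful after an override, a renormalizing variant could look like this sketch (`apply_weight_overrides` is a hypothetical helper, not in this commit):

```python
def apply_weight_overrides(segments: list[dict], overrides: dict[str, float]) -> list[dict]:
    """Hypothetical helper: apply weight overrides to copied segment dicts,
    then renormalize so the weights still sum to 1.0."""
    out = [dict(s) for s in segments]  # copy so module-level defaults stay untouched
    for seg in out:
        if seg["id"] in overrides:
            seg["weight"] = overrides[seg["id"]]
    total = sum(s["weight"] for s in out) or 1.0
    return [{**s, "weight": s["weight"] / total} for s in out]
```

Copying each dict also avoids mutating the shared `PERSONA_SEGMENTS` entries across calls.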
CAMPAIGN_TYPES = {
"product_launch": "Ra mắt sản phẩm mới",
"promotion": "Chương trình khuyến mãi / Flash Sale",
"collection": "Ra mắt Bộ sưu tập (BST)",
"price_change": "Thay đổi / Tăng giá",
"social_post": "Bài đăng Social Media",
"collab": "Hợp tác thương hiệu (Collab)",
}
REACTION_PROMPT = """Bạn là một người Việt Nam thực sự trên mạng xã hội.
Persona: {persona_name} — {persona_desc}
Tính cách: {persona_traits}
Giọng văn: {persona_tone}
Canifa vừa đăng một {campaign_type_label}:
---
{campaign_content}
---
Hãy viết MỘT comment phản ứng tự nhiên như trên Facebook/TikTok.
Trả về JSON (không markdown):
{{
"comment": "nội dung comment",
"sentiment": "positive" | "neutral" | "negative",
"likes_estimate": <number 1-200>,
"shares_estimate": <number 0-50>,
"key_concern": "vấn đề chính quan tâm (nếu có, null nếu không)"
}}
"""
async def simulate_reaction_for_persona(
persona: dict,
campaign_type: str,
campaign_content: str,
model_name: str | None = None,
) -> dict[str, Any]:
"""Generate a single persona's reaction to a campaign."""
model = model_name or DEFAULT_MODEL
llm = create_llm(model, streaming=False)
prompt = REACTION_PROMPT.format(
persona_name=persona["name"],
persona_desc=persona["desc"],
persona_traits=persona["traits"],
persona_tone=persona["tone"],
campaign_type_label=CAMPAIGN_TYPES.get(campaign_type, campaign_type),
campaign_content=campaign_content,
)
try:
response = await llm.ainvoke([
{"role": "system", "content": "You are a social media comment generator. Always respond in valid JSON."},
{"role": "user", "content": prompt},
])
raw = response.content
if isinstance(raw, list):
raw = "".join(str(c.get("text", c) if isinstance(c, dict) else c) for c in raw)
data = json.loads(raw.strip().removeprefix("```json").removesuffix("```").strip())
return {
"persona_id": persona["id"],
"persona_name": persona["name"],
"segment": persona["name"],
**data,
}
except Exception as e:
logger.error(f"❌ [{persona['name']}] Reaction gen failed: {e}")
return {
"persona_id": persona["id"],
"persona_name": persona["name"],
"segment": persona["name"],
"comment": f"[Error generating reaction: {e}]",
"sentiment": "neutral",
"likes_estimate": 0,
"shares_estimate": 0,
"key_concern": None,
}
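The parse step above tolerates a response wrapped in a markdown "```json" fence. Pulled out as a standalone function for illustration (note a bare "```" fence without the `json` tag would still fail, matching the original expression's behavior):

```python
import json

def parse_llm_json(raw: str) -> dict:
    # Same expression as in simulate_reaction_for_persona: strip an optional
    # leading "```json" and trailing "```" before parsing.
    return json.loads(raw.strip().removeprefix("```json").removesuffix("```").strip())
```

`str.removeprefix`/`str.removesuffix` require Python 3.9+, consistent with the `dict[str, Any]` annotations used throughout these modules.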
async def run_reaction_simulation(
campaign_type: str,
campaign_content: str,
persona_weights: dict[str, float] | None = None,
model_name: str | None = None,
) -> dict[str, Any]:
"""
Run full community reaction simulation.
Args:
campaign_type: Type of campaign (product_launch, promotion, etc.)
campaign_content: The campaign text content
persona_weights: Optional weight overrides for each persona segment
model_name: LLM model override
Returns:
Full simulation results with reactions, sentiment, recommendations
"""
reactions = []
# Copy each segment dict so weight overrides don't mutate the module-level
# PERSONA_SEGMENTS (list.copy() alone is shallow and would share the dicts).
segments_used = [dict(seg) for seg in PERSONA_SEGMENTS]
# Apply weight overrides
if persona_weights:
for seg in segments_used:
if seg["id"] in persona_weights:
seg["weight"] = persona_weights[seg["id"]]
# Generate reactions
for persona in segments_used:
logger.info(f"🎭 Simulating: {persona['name']}...")
result = await simulate_reaction_for_persona(
persona, campaign_type, campaign_content, model_name
)
reactions.append(result)
# Calculate sentiment
pos = sum(1 for r in reactions if r.get("sentiment") == "positive")
neu = sum(1 for r in reactions if r.get("sentiment") == "neutral")
neg = sum(1 for r in reactions if r.get("sentiment") == "negative")
total = len(reactions)
sentiment = {
"positive": round(pos / total * 100) if total else 0,
"neutral": round(neu / total * 100) if total else 0,
"negative": round(neg / total * 100) if total else 0,
"total_reactions": total,
}
# Generate recommendations
recommendations = _generate_recommendations(reactions, sentiment, campaign_type)
return {
"status": "success",
"campaign_type": campaign_type,
"reactions": reactions,
"sentiment": sentiment,
"recommendations": recommendations,
}
def _generate_recommendations(
reactions: list[dict], sentiment: dict, campaign_type: str
) -> list[dict]:
"""Generate actionable recommendations based on reactions."""
recos = []
neg_pct = sentiment.get("negative", 0)
pos_pct = sentiment.get("positive", 0)
if neg_pct >= 40:
recos.append({
"icon": "🚨", "severity": "high",
"text": "Tỷ lệ tiêu cực CAO (>40%). Cân nhắc điều chỉnh nội dung hoặc delay launch.",
})
elif neg_pct >= 20:
recos.append({
"icon": "⚠️", "severity": "medium",
"text": "Có phản ứng tiêu cực đáng kể. Chuẩn bị response template cho team CS.",
})
neg_concerns = [r.get("key_concern") for r in reactions
if r.get("sentiment") == "negative" and r.get("key_concern")]
if neg_concerns:
recos.append({
"icon": "💡", "severity": "info",
"text": f"Concerns chính: {', '.join(neg_concerns[:3])}",
})
if pos_pct >= 60:
recos.append({
"icon": "🚀", "severity": "positive",
"text": "Tín hiệu tốt! Chiến dịch có tiềm năng viral. Chuẩn bị stock + hạ tầng đơn hàng.",
})
recos.append({
"icon": "📊", "severity": "info",
"text": "Ưu tiên kênh TikTok/Instagram nếu Gen Z phản ứng tích cực, Facebook nếu Mom Shopper positive.",
})
return recos
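One caveat in the sentiment dict built above: rounding each bucket independently lets the three percentages drift away from 100. A largest-remainder variant keeps them summing exactly (`percent_split` is a hypothetical name, not part of this module):

```python
def percent_split(counts: list[int]) -> list[int]:
    """Largest-remainder rounding: integer percentages that always sum to 100."""
    total = sum(counts) or 1
    raw = [c / total * 100 for c in counts]
    floors = [int(x) for x in raw]
    shortfall = 100 - sum(floors)
    # Hand the missing points to the buckets with the largest fractional parts.
    order = sorted(range(len(raw)), key=lambda i: raw[i] - floors[i], reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return floors
```

With only 7 reactions the drift is cosmetic, but it shows up in any UI that displays the three percentages side by side.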
""",
<parameter name="Description">Backend reaction agent that uses LLM to simulate community reactions to campaigns. Follows the same pattern as the existing simulate_agent.
"""
Simulate Agent — MiroFish-Inspired Conversion Testing Engine
=============================================================
Modules:
- persona_generator: LLM-based realistic customer persona generation
- simulation_runner: Core loop: persona chat → bot reply → evaluate
- evaluator: Conversion measurement + insight accuracy scoring
Flow:
1. persona_generator creates N personas (demographics + behavior + triggers)
2. simulation_runner runs each persona through N chat turns with the chatbot
3. evaluator measures conversion status + insight accuracy + product match
4. synthesis aggregates everything into a Conversion Report
Model: gpt-5.4-nano (from DEFAULT_MODEL env)
"""
from .persona_generator import CanifaPersona, generate_personas, CANIFA_ARCHETYPES
from .simulation_runner import run_simulation
from .evaluator import evaluate_conversation, synthesize_results
__all__ = [
"CanifaPersona",
"generate_personas",
"CANIFA_ARCHETYPES",
"run_simulation",
"evaluate_conversation",
"synthesize_results",
]
"""
Evaluator — Conversion Measurement + Insight Accuracy
=======================================================
Evaluates the chatbot from the persona's point of view:
A. Quality scores (1-5)
B. Conversion status (converted/interested/dropped)
C. Insight accuracy (what % of the persona the bot identified correctly)
D. Product relevance (what % of the suggested products fit the persona)
Model: gpt-5.4-nano
"""
import json
import logging
from typing import Any
from common.llm_factory import create_llm
from config import DEFAULT_MODEL
logger = logging.getLogger(__name__)
SIM_MODEL = DEFAULT_MODEL
# ═══════════════════════════════════════════════════════════════
# EVALUATE PROMPT
# ═══════════════════════════════════════════════════════════════
EVAL_SYSTEM = """Bạn là UX evaluator đánh giá chatbot bán hàng thời trang CANIFA.
Bạn quan sát cuộc chat giữa 1 khách giả lập (persona) và chatbot.
Đánh giá từ GÓC NHÌN PERSONA:
A. SCORES (1.0-5.0):
- clarity: Bot dễ hiểu?
- helpfulness: Giải quyết được vấn đề?
- naturalness: Tự nhiên?
- task_completion: Khách đạt mục đích?
- satisfaction: Hài lòng?
B. CONVERSION:
- conversion_status: "converted" | "interested" | "dropped"
+ converted = persona nói "mua", "đặt", "lấy cái này"
+ interested = hứng thú nhưng chưa quyết
+ dropped = mất hứng, từ chối, bot không phù hợp
- conversion_reason: Lý do 1 câu tiếng Việt
C. INSIGHT ACCURACY (nếu có):
- insight_accuracy: 0-100 (bot nhận đúng persona %)
- insight_details: Bot hiểu đúng/sai gì
D. PRODUCT:
- product_match: 0-100 (SP phù hợp persona %)
- product_details: SP hợp/không hợp
E. ANALYSIS:
- key_wins: 1-2 điểm mạnh (array, tiếng Việt)
- key_fails: 1-2 điểm yếu (array, tiếng Việt)
- prompt_fix: 1 instruction fix vấn đề lớn nhất (tiếng Việt)
Trả ONLY valid JSON."""
async def evaluate_conversation(
persona_name: str,
persona_age: int,
persona_archetype: str,
chat_style: str,
budget: str,
shopping_for: str,
style_preference: str,
conversion_trigger: str,
drop_trigger: str,
conversation: list[dict[str, str]],
model_name: str | None = None,
) -> dict[str, Any]:
"""
Evaluate 1 conversation → scores + conversion + insight accuracy.
Returns dict with: scores, conversion_status, conversion_reason,
insight_accuracy, product_match, key_wins, key_fails, prompt_fix
"""
model = model_name or SIM_MODEL
conv_text = "\n".join(
f"{'Persona' if m['role'] == 'user' else 'Chatbot'}: {m['content']}"
for m in conversation
)
eval_input = f"""Persona: {persona_name} ({persona_age} tuổi, {persona_archetype})
Chat style: {chat_style}
Budget: {budget}
Mua cho: {shopping_for}
Style: {style_preference}
CONVERSION CRITERIA:
- MUA khi: {conversion_trigger}
- BỎ khi: {drop_trigger}
Conversation ({len(conversation)} messages):
{conv_text}
Evaluate and determine CONVERSION STATUS."""
llm = create_llm(model, streaming=False, json_mode=True)
# Retry
for attempt in range(3):
try:
response = await llm.ainvoke([
{"role": "system", "content": EVAL_SYSTEM},
{"role": "user", "content": eval_input},
])
raw = response.content
if isinstance(raw, list):
raw = "".join(str(c.get("text", c) if isinstance(c, dict) else c) for c in raw)
return json.loads(raw)
except Exception as e:
logger.warning(f"⚠️ Evaluate fail (attempt {attempt+1}/3): {e}")
logger.error(f"❌ Evaluate failed for {persona_name}")
return {
"scores": {"clarity": 0, "helpfulness": 0, "naturalness": 0, "task_completion": 0, "satisfaction": 0},
"conversion_status": "error",
"conversion_reason": "Evaluation failed",
"key_wins": [],
"key_fails": ["Evaluation error"],
"prompt_fix": "",
}
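The `conv_text` flattening above is the contract between the runner and the evaluator: role-tagged lines, persona turns first. As a standalone sketch:

```python
def to_transcript(conversation: list[dict[str, str]]) -> str:
    # Same formatting as conv_text in evaluate_conversation:
    # "user" turns become "Persona:", everything else "Chatbot:".
    return "\n".join(
        f"{'Persona' if m['role'] == 'user' else 'Chatbot'}: {m['content']}"
        for m in conversation
    )
```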
# ═══════════════════════════════════════════════════════════════
# SYNTHESIZE PROMPT
# ═══════════════════════════════════════════════════════════════
SYNTHESIS_SYSTEM = """Bạn là senior UX researcher. Phân tích kết quả simulation nhiều personas.
Tạo BÁO CÁO CONVERSION:
1. conversion_summary:
- total_personas, converted, interested, dropped, conversion_rate
2. segment_analysis: Array of {archetype, conversion_status, avg_score, issue}
3. worst_personas: Tên persona điểm thấp nhất
4. systemic_issues: 2-3 vấn đề hệ thống (tiếng Việt)
5. top_fixes: Array of {title, instruction, impacts}
6. insight_accuracy_avg: Trung bình %
7. overall_score: 1.0-5.0
8. summary: 2-3 câu tóm executive (tiếng Việt)
Trả ONLY valid JSON."""
async def synthesize_results(
results: list[dict[str, Any]],
chatbot_prompt: str = "",
model_name: str | None = None,
) -> dict[str, Any]:
"""
Aggregate results from all personas into a Conversion Report.
"""
model = model_name or SIM_MODEL
summaries = []
for r in results:
scores = r.get("scores", {})
ov = sum(scores.values()) / max(len(scores), 1)
summaries.append(
f"{r.get('persona_name', '?')} ({r.get('archetype', '?')}): "
f"overall {ov:.1f}/5 | "
f"conversion: {r.get('conversion_status', '?')} | "
f"reason: {r.get('conversion_reason', '?')}\n"
f" Scores: {json.dumps(scores)}\n"
f" Insight accuracy: {r.get('insight_accuracy', '?')}%\n"
f" Product match: {r.get('product_match', '?')}%\n"
f" Wins: {'; '.join(r.get('key_wins', []))}\n"
f" Fails: {'; '.join(r.get('key_fails', []))}"
)
synth_input = f"""Chatbot prompt (trích): "{chatbot_prompt[:1000]}"
Results across {len(results)} personas:
{chr(10).join(summaries)}
Provide CONVERSION REPORT and top fixes."""
llm = create_llm(model, streaming=False, json_mode=True)
for attempt in range(3):
try:
response = await llm.ainvoke([
{"role": "system", "content": SYNTHESIS_SYSTEM},
{"role": "user", "content": synth_input},
])
raw = response.content
if isinstance(raw, list):
raw = "".join(str(c.get("text", c) if isinstance(c, dict) else c) for c in raw)
return json.loads(raw)
except Exception as e:
logger.warning(f"⚠️ Synthesize fail (attempt {attempt+1}/3): {e}")
logger.error("❌ Synthesize failed")
return {"error": "Synthesis failed after 3 attempts"}
"""
Canifa Persona Generator
=========================
Inspired by MiroFish OasisProfileGenerator, but roughly 90% lighter.
MiroFish: entities from a Knowledge Graph → ~2000-word profiles → social network simulation
Canifa: direct LLM generation → 100-150-word profiles → chatbot testing
Model: gpt-5.4-nano (DEFAULT_MODEL)
"""
import json
import logging
import random
from dataclasses import dataclass
from typing import Any
from common.llm_factory import create_llm
from config import DEFAULT_MODEL
logger = logging.getLogger(__name__)
# Model — use gpt-5.4-nano from env (DEFAULT_MODEL)
SIM_MODEL = DEFAULT_MODEL
# ═══════════════════════════════════════════════════════════════
# DATA MODEL
# ═══════════════════════════════════════════════════════════════
@dataclass
class CanifaPersona:
"""Canifa customer persona — lightweight version of MiroFish OasisAgentProfile."""
# Identity
name: str
age: int
gender: str # "male" | "female"
job: str
income_range: str # "5-10tr" | "10-20tr"
mbti: str
# Shopping context
shopping_for: str # "Bản thân" | "Con gái 3 tuổi"
budget: str # "200k-500k"
occasion: str # "Đi làm" | "Đi chơi"
style_preference: str
# Chat behavior
chat_style: str # How the persona chats, in one or two sentences
system_prompt: str # System prompt for the LLM to role-play this persona
# Conversion criteria
conversion_trigger: str # Condition under which the persona BUYS
drop_trigger: str # Condition under which the persona WALKS AWAY
# Metadata
persona_id: int = 0
archetype: str = ""
def to_dict(self) -> dict[str, Any]:
return {
"persona_id": self.persona_id,
"name": self.name,
"age": self.age,
"gender": self.gender,
"job": self.job,
"income_range": self.income_range,
"mbti": self.mbti,
"archetype": self.archetype,
"shopping_for": self.shopping_for,
"budget": self.budget,
"occasion": self.occasion,
"style_preference": self.style_preference,
"chat_style": self.chat_style,
"system_prompt": self.system_prompt,
"conversion_trigger": self.conversion_trigger,
"drop_trigger": self.drop_trigger,
}
# ═══════════════════════════════════════════════════════════════
# ARCHETYPES — Typical Canifa customers
# ═══════════════════════════════════════════════════════════════
CANIFA_ARCHETYPES = [
"Mẹ bỉm sữa — mua đồ cho con nhỏ, cẩn thận, so sánh giá",
"GenZ TikToker — thích trend, chat nhanh, dùng slang/emoji",
"Nhân viên văn phòng nữ — mua đồ đi làm, thanh lịch, budget vừa",
"Ông chú IT — mua quà cho vợ/con gái, không rành thời trang",
"Sinh viên tiết kiệm — budget thấp, săn sale, hỏi nhiều",
"Chị em công sở — mua theo nhóm, hay hỏi ý kiến, thích combo",
"Anh trai gym — tìm đồ thể thao/polo, quan tâm chất liệu",
"Bà ngoại — mua cho cháu, không quen chat, hỏi đơn giản",
"Fashionista — biết nhiều brand, kén chọn, so sánh Canifa vs Uniqlo",
"Khách vãng lai — vào hỏi 1 câu rồi đi, test bot giữ chân",
]
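When no archetypes are passed in, `_build_generate_prompt` below samples from this list without replacement, capped at the list length. Behavior sketch with stand-in values (the pool here is illustrative, the real call uses `CANIFA_ARCHETYPES`):

```python
import random

pool = ["Mom shopper", "GenZ TikToker", "Office worker", "Student"]
rng = random.Random(42)  # seeded only to make the sketch reproducible
# Asking for 6 personas from a pool of 4: min() caps the sample size.
picked = rng.sample(pool, min(6, len(pool)))
```

`random.sample` guarantees no duplicate archetypes, so each generated persona gets a distinct customer type until the pool is exhausted.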
# ═══════════════════════════════════════════════════════════════
# PROMPTS
# ═══════════════════════════════════════════════════════════════
_GENERATE_SYSTEM = """Bạn là chuyên gia tạo persona khách hàng cho thương hiệu thời trang CANIFA Việt Nam.
Tạo persona CHÂN THỰC, NGẮN GỌN dùng cho test chatbot bán hàng.
Trả về JSON array. KHÔNG giải thích."""
def _build_generate_prompt(count: int, archetypes: list[str] | None = None) -> str:
if not archetypes:
archetypes = random.sample(CANIFA_ARCHETYPES, min(count, len(CANIFA_ARCHETYPES)))
archetypes_str = "\n".join(f" {i+1}. {a}" for i, a in enumerate(archetypes))
return f"""Tạo {count} persona khách hàng CANIFA:
{archetypes_str}
Mỗi persona là 1 JSON object:
- "name": Tên Việt Nam (VD: "Chị Lan", "Bé Vy")
- "age": Số tuổi (18-65)
- "gender": "male" hoặc "female"
- "job": Nghề nghiệp
- "income_range": Thu nhập/tháng (VD: "8-12tr")
- "mbti": MBTI type
- "archetype": Tên nhóm khách
- "shopping_for": Mua cho ai
- "budget": Ngân sách lần này (VD: "300k-600k")
- "occasion": Dịp mua
- "style_preference": Phong cách thích
- "chat_style": Cách chat 1-2 câu
- "conversion_trigger": Điều kiện MUA
- "drop_trigger": Điều kiện BỎ ĐI
Trả JSON array [{count} objects]. Đa dạng tuổi, giới tính, thu nhập."""
def _build_roleplay_prompt(p: dict) -> str:
"""Build system prompt để LLM đóng vai persona."""
return f"""Bạn đóng vai khách hàng tên {p['name']}, {p['age']} tuổi, {p['job']}.
PERSONA:
- Giới tính: {"Nữ" if p['gender'] == 'female' else "Nam"}
- Thu nhập: {p['income_range']}/tháng | MBTI: {p['mbti']}
- Nhóm: {p['archetype']}
MUA SẮM:
- Mua cho: {p['shopping_for']} | Budget: {p['budget']}
- Dịp: {p['occasion']} | Style: {p['style_preference']}
CÁCH CHAT: {p['chat_style']}
QUY TẮC:
1. Chat ĐÚNG tính cách, budget, mục đích
2. SP phù hợp → hứng thú, hỏi thêm
3. SP không hợp → từ chối nhẹ hoặc hỏi cái khác
4. MUA khi: {p['conversion_trigger']}
5. BỎ khi: {p['drop_trigger']}
6. Chat tự nhiên, ngắn. KHÔNG nói "tôi là AI"."""
# ═══════════════════════════════════════════════════════════════
# GENERATOR
# ═══════════════════════════════════════════════════════════════
async def generate_personas(
count: int = 5,
archetypes: list[str] | None = None,
model_name: str | None = None,
) -> list[CanifaPersona]:
"""
Generate N personas with the LLM.
Retry pattern adapted from MiroFish (up to 3 attempts).
"""
count = max(1, min(count, 20))
model = model_name or SIM_MODEL
logger.info(f"🎭 Generating {count} personas | model={model}")
llm = create_llm(model, streaming=False, json_mode=True)
prompt = _build_generate_prompt(count, archetypes)
# Retry (MiroFish pattern)
for attempt in range(3):
try:
response = await llm.ainvoke([
{"role": "system", "content": _GENERATE_SYSTEM},
{"role": "user", "content": prompt},
])
raw = response.content
if isinstance(raw, list):
raw = "".join(str(c.get("text", c) if isinstance(c, dict) else c) for c in raw)
data = json.loads(raw)
# Handle wrapped response
if isinstance(data, dict):
for key in ("personas", "data", "results", "items"):
if key in data and isinstance(data[key], list):
data = data[key]
break
else:
data = [data]
# Convert to CanifaPersona
personas = []
for i, p in enumerate(data[:count]):
try:
persona = CanifaPersona(
persona_id=i + 1,
name=p.get("name", f"Persona_{i+1}"),
age=int(p.get("age", 25)),
gender=p.get("gender", "female"),
job=p.get("job", "Không rõ"),
income_range=p.get("income_range", "10-15tr"),
mbti=p.get("mbti", "ISFJ"),
archetype=p.get("archetype", "Khách vãng lai"),
shopping_for=p.get("shopping_for", "Bản thân"),
budget=p.get("budget", "300k-700k"),
occasion=p.get("occasion", "Đi chơi"),
style_preference=p.get("style_preference", "Basic"),
chat_style=p.get("chat_style", "Chat bình thường"),
system_prompt=_build_roleplay_prompt(p),
conversion_trigger=p.get("conversion_trigger", "SP phù hợp"),
drop_trigger=p.get("drop_trigger", "Không tìm được SP"),
)
personas.append(persona)
except Exception as e:
logger.warning(f"⚠️ Skip persona {i}: {e}")
logger.info(f"✅ Generated {len(personas)} personas")
return personas
except json.JSONDecodeError as e:
logger.warning(f"⚠️ JSON fail (attempt {attempt+1}/3): {e}")
except Exception as e:
logger.warning(f"⚠️ Generate fail (attempt {attempt+1}/3): {e}")
raise RuntimeError("Failed to generate personas after 3 attempts")
"""
Simulation Runner — Core Engine
=================================
Runs the full loop: persona → chat → evaluate → report
Adapted from MiroFish simulation_runner.py:
✅ Background simulation with progress tracking
✅ Retry/backoff on API calls
✅ Per-persona conversation logging
Model: gpt-5.4-nano
"""
import logging
from typing import Any
import httpx
from common.llm_factory import create_llm
from config import DEFAULT_MODEL
from .evaluator import evaluate_conversation
from .persona_generator import CanifaPersona, generate_personas
logger = logging.getLogger(__name__)
SIM_MODEL = DEFAULT_MODEL
DEFAULT_CHATBOT_URL = "http://172.16.2.207:5000/api/agent/chat-dev"
async def _generate_persona_message(
persona: CanifaPersona,
turn: int,
conversation: list[dict[str, str]],
) -> str:
"""Persona AI tạo tin nhắn tiếp theo."""
llm = create_llm(SIM_MODEL, streaming=False)
if turn == 0:
messages = [
{"role": "system", "content": persona.system_prompt},
{"role": "user", "content": (
f"Bắt đầu chat với chatbot thời trang Canifa. "
f"Gửi tin nhắn đầu tiên như {persona.name}. "
f"1 tin nhắn ngắn, tự nhiên. KHÔNG giải thích."
)},
]
else:
last_bot = conversation[-1]["content"] if conversation else ""
messages = [
{"role": "system", "content": persona.system_prompt},
*conversation,
{"role": "user", "content": (
f'Chatbot vừa trả lời: "{last_bot[:500]}"\n\n'
f"Tiếp tục chat như {persona.name}. 1 tin nhắn ngắn."
)},
]
response = await llm.ainvoke(messages)
raw = response.content
if isinstance(raw, list):
raw = "".join(str(c.get("text", c) if isinstance(c, dict) else c) for c in raw)
return raw.strip()
async def _send_to_chatbot(
message: str,
device_id: str,
chatbot_url: str,
) -> dict[str, Any]:
"""Forward message to chatbot API with retry."""
max_retries = 2
for attempt in range(max_retries + 1):
try:
async with httpx.AsyncClient(timeout=60) as client:
resp = await client.post(
chatbot_url,
json={"user_query": message, "device_id": device_id},
headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
data = resp.json()
return {
"reply": data.get("ai_response", ""),
"product_ids": data.get("product_ids", []),
"user_insight": data.get("user_insight"),
"trace_id": data.get("trace_id", ""),
}
except Exception as e:
if attempt < max_retries:
logger.warning(f"⚠️ Chatbot retry {attempt+1}: {e}")
else:
logger.error(f"❌ Chatbot failed after {max_retries+1} attempts: {e}")
return {
"reply": "[Chatbot không phản hồi]",
"product_ids": [],
"user_insight": None,
"trace_id": "",
}
async def _simulate_one_persona(
persona: CanifaPersona,
turns: int,
chatbot_url: str,
) -> dict[str, Any]:
"""
Run the simulation for one persona:
1. Generate message → 2. Send to bot → 3. Repeat N turns → 4. Evaluate
"""
conversation: list[dict[str, str]] = []
products_recommended: list = []
device_id = f"sim-{persona.persona_id}"
logger.info(f"💬 [{persona.name}] Starting {turns} turns...")
for turn in range(turns):
# 1. Persona generates message
try:
user_msg = await _generate_persona_message(persona, turn, conversation)
except Exception as e:
logger.error(f"❌ [{persona.name}] Message gen failed turn {turn}: {e}")
break
conversation.append({"role": "user", "content": user_msg})
# 2. Send to chatbot
bot_result = await _send_to_chatbot(user_msg, device_id, chatbot_url)
bot_reply = bot_result["reply"]
if bot_result["product_ids"]:
products_recommended.extend(bot_result["product_ids"])
conversation.append({"role": "assistant", "content": bot_reply})
logger.info(
f" Turn {turn+1}: {persona.name}: {user_msg[:40]}... "
f"→ Bot: {bot_reply[:40]}..."
)
# 3. Evaluate
logger.info(f"📊 [{persona.name}] Evaluating...")
eval_result = await evaluate_conversation(
persona_name=persona.name,
persona_age=persona.age,
persona_archetype=persona.archetype,
chat_style=persona.chat_style,
budget=persona.budget,
shopping_for=persona.shopping_for,
style_preference=persona.style_preference,
conversion_trigger=persona.conversion_trigger,
drop_trigger=persona.drop_trigger,
conversation=conversation,
)
scores = eval_result.get("scores", {})
avg_score = sum(scores.values()) / max(len(scores), 1)
logger.info(
f"✅ [{persona.name}] {eval_result.get('conversion_status', '?')} | "
f"Score: {avg_score:.1f}/5"
)
return {
"persona_name": persona.name,
"archetype": persona.archetype,
"age": persona.age,
"gender": persona.gender,
"budget": persona.budget,
"shopping_for": persona.shopping_for,
"turns": len(conversation) // 2,
"conversation": conversation,
"products_recommended": products_recommended[:10],
**eval_result,
}
async def run_simulation(
persona_count: int = 3,
turns_per_persona: int = 5,
archetypes: list[str] | None = None,
chatbot_url: str | None = None,
model_name: str | None = None,
) -> dict[str, Any]:
"""
🔥 Run full automated simulation.
Flow:
1. Generate N personas (gpt-5.4-nano)
2. Each persona chats with chatbot
3. Evaluate each conversation
4. Return conversion summary
Args:
persona_count: Number of personas (1-20)
turns_per_persona: Chat turns per persona (1-10)
archetypes: Optional customer archetypes
chatbot_url: Chatbot API URL override
model_name: LLM model override
Returns:
Full simulation report with conversion summary
"""
url = chatbot_url or DEFAULT_CHATBOT_URL
model = model_name or SIM_MODEL
results: list[dict[str, Any]] = []
errors: list[dict[str, str]] = []
# ─── Step 1: Generate Personas ───
logger.info(f"🎭 Step 1: Generating {persona_count} personas | model={model}")
try:
personas = await generate_personas(
count=persona_count,
archetypes=archetypes,
model_name=model,
)
except Exception as e:
logger.error(f"❌ Persona generation failed: {e}")
return {"status": "error", "message": f"Persona generation failed: {e}"}
logger.info(f"✅ Generated {len(personas)} personas")
# ─── Step 2+3: Simulate each persona ───
for persona in personas:
try:
result = await _simulate_one_persona(persona, turns_per_persona, url)
results.append(result)
except Exception as e:
logger.error(f"❌ [{persona.name}] Simulation failed: {e}", exc_info=True)
errors.append({"persona": persona.name, "error": str(e)})
# ─── Step 4: Conversion Summary ───
converted = sum(1 for r in results if r.get("conversion_status") == "converted")
interested = sum(1 for r in results if r.get("conversion_status") == "interested")
dropped = sum(1 for r in results if r.get("conversion_status") == "dropped")
total = len(results)
conversion_summary = {
"total": total,
"converted": converted,
"interested": interested,
"dropped": dropped,
"conversion_rate": f"{converted / max(total, 1) * 100:.0f}%",
"interest_rate": f"{(converted + interested) / max(total, 1) * 100:.0f}%",
}
# Average scores
all_scores = [r.get("scores", {}) for r in results if r.get("scores")]
if all_scores:
avg_scores = {}
for key in all_scores[0]:
vals = [s.get(key, 0) for s in all_scores]
avg_scores[key] = round(sum(vals) / len(vals), 1)
conversion_summary["avg_scores"] = avg_scores
logger.info(
f"📊 SIMULATION COMPLETE: "
f"{converted}/{total} converted ({conversion_summary['conversion_rate']}) | "
f"Errors: {len(errors)}"
)
return {
"status": "success",
"conversion_summary": conversion_summary,
"persona_results": results,
"errors": errors,
"meta": {
"personas_generated": len(personas),
"personas_tested": total,
"turns_per_persona": turns_per_persona,
"chatbot_url": url,
"model": model,
},
}
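As a standalone check of the Step 4 arithmetic, with a hypothetical result list in the shape `_simulate_one_persona` returns:

```python
# Hypothetical persona outcomes, in the shape _simulate_one_persona returns.
results = [
    {"conversion_status": "converted"},
    {"conversion_status": "interested"},
    {"conversion_status": "converted"},
    {"conversion_status": "dropped"},
]

converted = sum(1 for r in results if r.get("conversion_status") == "converted")
interested = sum(1 for r in results if r.get("conversion_status") == "interested")
total = len(results)

# max(total, 1) avoids ZeroDivisionError if every persona simulation failed.
conversion_rate = f"{converted / max(total, 1) * 100:.0f}%"
interest_rate = f"{(converted + interested) / max(total, 1) * 100:.0f}%"
print(conversion_rate, interest_rate)  # 50% 75%
```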
"""
Diagram Agent API — FastAPI route for AI diagram generation.
Uses the 2-agent DiagramGraph (Planner → Tool → Responder).
History stored in Redis, auto-expires after 30 minutes.
"""
import json
import logging
from pydantic import BaseModel
from fastapi import APIRouter
from fastapi.responses import JSONResponse
from langchain_core.messages import AIMessage, HumanMessage
from agent.diagram_agent.diagram_graph import get_diagram_agent
from common.cache import redis_cache
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/diagram", tags=["Diagram Agent"])
HISTORY_KEY_PREFIX = "diagram:hist:"
HISTORY_TTL = 1800 # 30 min
class DiagramChatRequest(BaseModel):
query: str
session_id: str | None = None
# ─── Helpers: serialize/deserialize LangChain messages ↔ Redis ───
def _serialize_messages(messages: list) -> str:
data = []
for msg in messages:
content = msg.content if isinstance(msg.content, str) else str(msg.content)
if isinstance(msg, HumanMessage):
data.append({"role": "human", "content": content})
elif isinstance(msg, AIMessage):
data.append({"role": "ai", "content": content})
return json.dumps(data, ensure_ascii=False)
def _deserialize_messages(raw: str) -> list:
try:
data = json.loads(raw)
except Exception:
return []
messages = []
for item in data:
if item["role"] == "human":
messages.append(HumanMessage(content=item["content"]))
elif item["role"] == "ai":
messages.append(AIMessage(content=item["content"]))
return messages
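A LangChain-free sketch of the same round-trip, using stand-in dataclasses for `HumanMessage`/`AIMessage` to show the JSON shape stored in Redis (the stand-in class names are illustrative, not the real LangChain types):

```python
import json
from dataclasses import dataclass

@dataclass
class Human:   # stand-in for langchain_core.messages.HumanMessage
    content: str

@dataclass
class AI:      # stand-in for langchain_core.messages.AIMessage
    content: str

def serialize(messages) -> str:
    data = [{"role": "human" if isinstance(m, Human) else "ai", "content": m.content}
            for m in messages]
    return json.dumps(data, ensure_ascii=False)

def deserialize(raw: str) -> list:
    return [(Human if item["role"] == "human" else AI)(content=item["content"])
            for item in json.loads(raw)]

history = [Human("Vẽ flowchart đăng nhập"), AI("Đây là sơ đồ đăng nhập...")]
assert deserialize(serialize(history)) == history  # lossless round-trip
```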
async def _load_history(session_id: str) -> list:
try:
client = redis_cache.get_client()
if not client:
return []
raw = await client.get(f"{HISTORY_KEY_PREFIX}{session_id}")
if raw:
return _deserialize_messages(raw)
except Exception as e:
logger.warning("Redis load history error: %s", e)
return []
async def _save_history(session_id: str, messages: list):
try:
client = redis_cache.get_client()
if not client:
return
trimmed = messages[-30:]
raw = _serialize_messages(trimmed)
await client.setex(f"{HISTORY_KEY_PREFIX}{session_id}", HISTORY_TTL, raw)
except Exception as e:
logger.warning("Redis save history error: %s", e)
async def _clear_history(session_id: str):
try:
client = redis_cache.get_client()
if client:
await client.delete(f"{HISTORY_KEY_PREFIX}{session_id}")
except Exception as e:
logger.warning("Redis clear history error: %s", e)
# ─── Endpoints ───
@router.post("/chat", summary="Chat with Diagram Agent")
async def diagram_chat(req: DiagramChatRequest):
"""Send a message to the Diagram Agent. Returns AI response + Mermaid diagram data."""
query = req.query.strip()
if not query:
return JSONResponse(status_code=400, content={"status": "error", "message": "Empty query"})
session_id = req.session_id
try:
agent = get_diagram_agent()
history = []
if session_id:
history = await _load_history(session_id)
result = await agent.chat(query, history=history if history else None)
if session_id:
history.append(HumanMessage(content=query))
if result.get("response"):
history.append(AIMessage(content=result["response"]))
await _save_history(session_id, history)
return {
"status": "success",
"response": result["response"],
"elapsed_ms": result["elapsed_ms"],
"agent_path": result["agent_path"],
"tool_calls": result["tool_calls"],
"pipeline": result.get("pipeline", []),
"diagram": result.get("diagram"),
"session_id": session_id,
"history_count": len(history),
}
except Exception as e:
logger.error(f"❌ Diagram Agent error: {e}", exc_info=True)
return JSONResponse(status_code=500, content={"status": "error", "message": str(e)})
@router.post("/clear", summary="Clear diagram chat history")
async def diagram_clear(req: DiagramChatRequest):
if req.session_id:
await _clear_history(req.session_id)
return {"status": "success", "message": "History cleared"}
"""
Reaction Simulator Route — BettaFish-Inspired Community Reaction Simulator
===========================================================================
API layer for the reaction_agent package.
Endpoints:
GET /segments → List persona segments
GET /campaign-types → List campaign types
POST /simulate → Run full reaction simulation
POST /simulate-mock → Return mock data (no LLM needed)
"""
import logging
from typing import Any
from fastapi import APIRouter
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from agent.reaction_agent import (
CAMPAIGN_TYPES,
PERSONA_SEGMENTS,
run_reaction_simulation,
)
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/reaction-simulator", tags=["Reaction Simulator"])
# ═══ REQUEST MODELS ═══
class SimulateRequest(BaseModel):
campaign_type: str = "product_launch"
campaign_content: str
persona_weights: dict[str, float] | None = None
model_name: str | None = None
# ═══ ENDPOINTS ═══
@router.get("/segments")
async def get_segments():
"""List all available persona segments."""
return JSONResponse(content={
"segments": [
{"id": s["id"], "name": s["name"], "desc": s["desc"], "weight": s["weight"]}
for s in PERSONA_SEGMENTS
]
})
@router.get("/campaign-types")
async def get_campaign_types():
"""List all campaign types."""
return JSONResponse(content={"types": CAMPAIGN_TYPES})
@router.post("/simulate")
async def simulate_reactions(req: SimulateRequest):
"""Run full LLM-powered reaction simulation."""
logger.info(f"🔮 Reaction simulation: type={req.campaign_type}")
try:
result = await run_reaction_simulation(
campaign_type=req.campaign_type,
campaign_content=req.campaign_content,
persona_weights=req.persona_weights,
model_name=req.model_name,
)
return JSONResponse(content=result)
except Exception as e:
logger.error(f"❌ Simulation failed: {e}", exc_info=True)
return JSONResponse(
status_code=500,
content={"status": "error", "message": str(e)},
)
@router.post("/simulate-mock")
async def simulate_mock(req: SimulateRequest):
"""Return mock reaction data (no LLM required)."""
mock_reactions = _get_mock_reactions(req.campaign_type)
pos = sum(1 for r in mock_reactions if r["sentiment"] == "positive")
neu = sum(1 for r in mock_reactions if r["sentiment"] == "neutral")
neg = sum(1 for r in mock_reactions if r["sentiment"] == "negative")
total = len(mock_reactions)
return JSONResponse(content={
"status": "success",
"mode": "mock",
"campaign_type": req.campaign_type,
"reactions": mock_reactions,
"sentiment": {
"positive": round(pos / total * 100),
"neutral": round(neu / total * 100),
"negative": round(neg / total * 100),
"total_reactions": total,
},
"recommendations": [
{"icon": "📊", "severity": "info", "text": "Mock mode — kết nối LLM để có phản ứng realistic hơn."},
],
})
def _get_mock_reactions(campaign_type: str) -> list[dict[str, Any]]:
"""Static mock reactions for demo."""
base = [
{"persona_id": "mom_shopper", "persona_name": "Nguyễn Thị Mai", "segment": "Mom Shopper",
"comment": "Sản phẩm này phù hợp cho gia đình quá!", "sentiment": "positive",
"likes_estimate": 38, "shares_estimate": 6, "key_concern": None},
{"persona_id": "young_professional", "persona_name": "Trần Văn Hùng", "segment": "Young Professional",
"comment": "Thiết kế ổn, cần xem thực tế.", "sentiment": "neutral",
"likes_estimate": 21, "shares_estimate": 3, "key_concern": "thiết kế thực tế"},
{"persona_id": "fashion_enthusiast", "persona_name": "Lê Phương Anh", "segment": "Fashion Enthusiast",
"comment": "Đúng trend luôn! Must have!", "sentiment": "positive",
"likes_estimate": 54, "shares_estimate": 14, "key_concern": None},
{"persona_id": "budget_conscious", "persona_name": "Phạm Minh Tuấn", "segment": "Budget Conscious",
"comment": "Giá hơi cao, cần cân nhắc.", "sentiment": "neutral",
"likes_estimate": 16, "shares_estimate": 1, "key_concern": "giá cả"},
{"persona_id": "gen_z", "persona_name": "Hoàng Thúy Linh", "segment": "Gen Z Trendy",
"comment": "Vibe chill ghê 🔥 sắm ngay!", "sentiment": "positive",
"likes_estimate": 62, "shares_estimate": 19, "key_concern": None},
{"persona_id": "kol", "persona_name": "Fashion Blogger", "segment": "KOL / Blogger",
"comment": "Canifa đang đi đúng hướng. Waiting full review.", "sentiment": "positive",
"likes_estimate": 102, "shares_estimate": 28, "key_concern": None},
{"persona_id": "troll", "persona_name": "Random Commenter", "segment": "Social Media Troll",
"comment": "Lại marketing, scroll qua thôi 🥱", "sentiment": "negative",
"likes_estimate": 11, "shares_estimate": 1, "key_concern": "marketing fatigue"},
]
return base
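With the seven mock reactions above (4 positive, 2 neutral, 1 negative), the sentiment block in `simulate_mock` rounds to the figures below. Note that rounded shares are not guaranteed to sum to exactly 100 in general:

```python
# Sentiment labels taken from the seven mock reactions above.
mock_sentiments = ["positive", "neutral", "positive", "neutral",
                   "positive", "positive", "negative"]
total = len(mock_sentiments)
pos = round(mock_sentiments.count("positive") / total * 100)
neu = round(mock_sentiments.count("neutral") / total * 100)
neg = round(mock_sentiments.count("negative") / total * 100)
print(pos, neu, neg)  # 57 29 14
```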
"""
User Insight Dashboard API
Scan Redis for all active user insights and return them.
"""
import json
import logging
from fastapi import APIRouter, Request
from fastapi.responses import JSONResponse
from common.cache import redis_cache
logger = logging.getLogger(__name__)
router = APIRouter(prefix="/api/user-insights", tags=["User Insights"])
INSIGHT_PREFIX = "identity_key_insight:"
HISTORY_PREFIX = "identity_key_history:"
@router.get("/all", summary="Get all user insights from Redis")
async def get_all_insights(request: Request):
"""
Scan Redis for all identity_key_insight:* keys and return parsed insights.
Each insight contains the 6-layer UserInsight structure from the chatbot.
"""
try:
client = redis_cache.get_client()
if not client:
return {"status": "error", "message": "Redis not available", "insights": []}
# Scan for all insight keys
insight_keys = []
async for key in client.scan_iter(match=f"{INSIGHT_PREFIX}*", count=100):
if isinstance(key, bytes):
key = key.decode("utf-8")
insight_keys.append(key)
if not insight_keys:
return {"status": "success", "insights": [], "count": 0}
# Fetch all values
results = []
for key in insight_keys:
try:
raw = await client.get(key)
if not raw:
continue
identity_key = key.removeprefix(INSIGHT_PREFIX)  # safer than str.replace, which would strip every occurrence
# Parse insight JSON
if isinstance(raw, bytes):
raw = raw.decode("utf-8")
insight_data = None
try:
insight_data = json.loads(raw)
except json.JSONDecodeError:
insight_data = {"raw": raw}
# Get TTL to estimate when it was created
ttl = await client.ttl(key)
# Try to get conversation history count
msg_count = 0
history_key = f"{HISTORY_PREFIX}{identity_key}"
try:
history_raw = await client.get(history_key)
if history_raw:
if isinstance(history_raw, bytes):
history_raw = history_raw.decode("utf-8")
history = json.loads(history_raw)
if isinstance(history, list):
msg_count = len(history)
except Exception:
pass
results.append({
"identity_key": identity_key,
"insight": insight_data,
"ttl_seconds": ttl,
"message_count": msg_count,
})
except Exception as e:
logger.warning(f"Error processing key {key}: {e}")
continue
# Sort by TTL descending (newest first = highest TTL remaining)
results.sort(key=lambda x: x.get("ttl_seconds", 0), reverse=True)
return {
"status": "success",
"insights": results,
"count": len(results),
}
except Exception as e:
logger.error(f"Error in get_all_insights: {e}", exc_info=True)
return JSONResponse(
status_code=500,
content={"status": "error", "message": str(e), "insights": []},
)
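Sorting by remaining TTL in descending order approximates newest-first, because every insight key starts from the same TTL and counts down from the moment it was written. A sketch with hypothetical entries:

```python
# Hypothetical scan results; ttl_seconds is what client.ttl(key) returned.
results = [
    {"identity_key": "user-a", "ttl_seconds": 120},
    {"identity_key": "user-b", "ttl_seconds": 3500},  # written most recently
    {"identity_key": "user-c", "ttl_seconds": 900},
]

# Same sort as in get_all_insights: highest remaining TTL first.
results.sort(key=lambda x: x.get("ttl_seconds", 0), reverse=True)
print([r["identity_key"] for r in results])  # ['user-b', 'user-c', 'user-a']
```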
@router.get("/{identity_key}", summary="Get single user insight")
async def get_single_insight(identity_key: str):
"""Get insight for a specific identity key."""
try:
client = redis_cache.get_client()
if not client:
return {"status": "error", "message": "Redis not available"}
raw = await client.get(f"{INSIGHT_PREFIX}{identity_key}")
if not raw:
return {"status": "not_found", "identity_key": identity_key}
if isinstance(raw, bytes):
raw = raw.decode("utf-8")
try:
insight = json.loads(raw)
except json.JSONDecodeError:
insight = {"raw": raw}
# Get conversation history
history = []
try:
hist_raw = await client.get(f"{HISTORY_PREFIX}{identity_key}")
if hist_raw:
if isinstance(hist_raw, bytes):
hist_raw = hist_raw.decode("utf-8")
history = json.loads(hist_raw)
except Exception:
pass
ttl = await client.ttl(f"{INSIGHT_PREFIX}{identity_key}")
return {
"status": "success",
"identity_key": identity_key,
"insight": insight,
"history": history,
"ttl_seconds": ttl,
}
except Exception as e:
logger.error(f"Error in get_single_insight: {e}", exc_info=True)
return JSONResponse(
status_code=500,
content={"status": "error", "message": str(e)},
)
@router.delete("/{identity_key}", summary="Delete user insight")
async def delete_insight(identity_key: str):
"""Delete insight for a specific identity key."""
try:
client = redis_cache.get_client()
if not client:
return {"status": "error", "message": "Redis not available"}
deleted = await client.delete(f"{INSIGHT_PREFIX}{identity_key}")
return {"status": "success", "deleted": bool(deleted)}
except Exception as e:
return JSONResponse(status_code=500, content={"status": "error", "message": str(e)})
@@ -38,7 +38,10 @@ from api.experiment_log_route import router as experiment_log_router
from api.auth_route import router as auth_router
from api.product_desc_route import router as product_desc_router
from api.bulk_ops_route import router as bulk_ops_router
from api.user_insight_route import router as user_insight_router
from api.reaction_simulator_route import router as reaction_simulator_router
from common.cache import redis_cache
from common.event_bus import event_bus
from common.middleware import middleware_manager
from config import PORT, REDIS_CACHE_TURN_ON
@@ -72,10 +75,13 @@ app = FastAPI(
@app.on_event("startup")
async def startup_event():
"""Initialize Redis cache and start background workers."""
"""Initialize Redis cache, EventBus, and start background workers."""
await redis_cache.initialize()
logger.info("✅ Redis cache initialized")
# Start FastStream EventBus
await event_bus.start()
# Start report worker if Redis is available
if REDIS_CACHE_TURN_ON and redis_cache.get_client():
from agent.report_agent.report_queue import report_worker_loop
@@ -86,11 +92,14 @@ async def startup_event():
@app.on_event("shutdown")
async def shutdown_event():
"""Cleanup resources before exit to prevent connection leaks during hot-reload."""
# Stop FastStream EventBus
await event_bus.stop()
from common.db_pool import db_pool
if db_pool:
db_pool.close_all()
logger.info("🛑 Postgres Connection Pool nicely closed")
# Optional: If you want to also clean up StarRocks you can do it here
try:
from common.starrocks_connection import StarRocksConnection
@@ -172,6 +181,12 @@ from api.ai_tag_search import router as tag_search_router
app.include_router(tag_search_router) # Tag Search Agent
from api.lead_flow_route import router as lead_flow_router
app.include_router(lead_flow_router) # Lead Stage AI (Experiment)
app.include_router(user_insight_router) # User Insight Dashboard
app.include_router(reaction_simulator_router) # Reaction Simulator
from api.canifa_product_api import router as canifa_product_router
app.include_router(canifa_product_router) # Canifa Product Proxy (GraphQL)
from api.ai_diagram_route import router as diagram_router
app.include_router(diagram_router) # AI Diagram Agent
if __name__ == "__main__":
/**
* auth.js — Centralized Auth Guard for Canifa Admin
* ===================================================
*
 * FLEXIBLE AUTH ON/OFF:
 *
 * Option 1: Edit the constant below
 *   const AUTH_ENABLED = false;   // disable auth
 *
 * Option 2: URL param
 *   /static/main.html?noauth=1    // bypass auth for this page load
 *
 * Option 3: localStorage
 *   localStorage.setItem('canifa_auth_disabled', 'true'); // keep auth off
 *   localStorage.removeItem('canifa_auth_disabled');      // turn it back on
 *
 * Option 4: Quick console toggle
 *   window.CANIFA_AUTH.disable(); // off + reload
 *   window.CANIFA_AUTH.enable();  // on + reload
 *   window.CANIFA_AUTH.status();  // show current state
 *
 * USAGE: Add to the <head> of every page that needs auth:
* <script src="/static/auth.js"></script>
*/
(function () {
'use strict';
// ═══════════════════════════════════════════════════
// CONFIG — Set to false to disable auth entirely
// ═══════════════════════════════════════════════════
const AUTH_ENABLED = false; // ← false = dev mode (no login), true = production
// ═══════════════════════════════════════════════════
// HELPER: Check if auth should be enforced
// ═══════════════════════════════════════════════════
function isAuthRequired() {
// Master switch
if (!AUTH_ENABLED) return false;
// URL param bypass: ?noauth=1
const params = new URLSearchParams(window.location.search);
if (params.get('noauth') === '1') return false;
// localStorage bypass
if (localStorage.getItem('canifa_auth_disabled') === 'true') return false;
return true;
}
// ═══════════════════════════════════════════════════
// CORE: Get current auth state
// ═══════════════════════════════════════════════════
function getToken() {
return localStorage.getItem('canifa_token') || null;
}
function getUser() {
try {
return JSON.parse(localStorage.getItem('canifa_user') || 'null');
} catch {
return null;
}
}
function isLoggedIn() {
return !!getToken() && !!getUser();
}
// ═══════════════════════════════════════════════════
// AUTH GUARD — Redirect to login if needed
// ═══════════════════════════════════════════════════
function guard() {
if (!isAuthRequired()) {
console.log('🔓 Auth DISABLED — skipping guard');
// Inject a fake user if none exists (so the sidebar doesn't break)
if (!getUser()) {
localStorage.setItem('canifa_user', JSON.stringify({
username: 'dev',
role: 'user',
id: 'dev-mode',
}));
localStorage.setItem('canifa_token', 'dev-bypass-token');
}
return true; // allow access
}
if (!isLoggedIn()) {
const redirect = encodeURIComponent(window.location.href);
window.location.replace('/static/login.html?redirect=' + redirect);
return false; // blocked
}
return true; // authenticated
}
// ═══════════════════════════════════════════════════
// ADMIN REDIRECT — For pages that need admin role
// ═══════════════════════════════════════════════════
function guardAdmin() {
if (!isAuthRequired()) return true;
if (!guard()) return false;
const user = getUser();
if (!user || user.role !== 'admin') {
window.location.replace('/static/main.html');
return false;
}
return true;
}
// ═══════════════════════════════════════════════════
// LOGOUT
// ═══════════════════════════════════════════════════
function logout() {
localStorage.removeItem('canifa_token');
localStorage.removeItem('canifa_user');
window.location.replace('/static/login.html');
}
// ═══════════════════════════════════════════════════
// AUTH HEADER — For fetch() calls
// ═══════════════════════════════════════════════════
function authHeaders(extra = {}) {
const token = getToken();
const headers = { 'Content-Type': 'application/json', ...extra };
if (token) headers['Authorization'] = 'Bearer ' + token;
return headers;
}
// ═══════════════════════════════════════════════════
// UI HELPERS — Update sidebar user info
// ═══════════════════════════════════════════════════
function updateSidebarUser() {
try {
const user = getUser();
if (!user) return;
const nameEl = document.getElementById('userName');
const avatarEl = document.getElementById('userAvatar');
if (nameEl && user.username) nameEl.textContent = user.username;
if (avatarEl && user.username) avatarEl.textContent = user.username.charAt(0).toUpperCase();
} catch (e) {
console.warn('updateSidebarUser:', e);
}
}
// ═══════════════════════════════════════════════════
// CONSOLE API — For quick toggling in DevTools
// ═══════════════════════════════════════════════════
window.CANIFA_AUTH = {
enable() {
localStorage.removeItem('canifa_auth_disabled');
console.log('🔒 Auth ENABLED. Reloading...');
location.reload();
},
disable() {
localStorage.setItem('canifa_auth_disabled', 'true');
console.log('🔓 Auth DISABLED. Reloading...');
location.reload();
},
status() {
const required = isAuthRequired();
const loggedIn = isLoggedIn();
const user = getUser();
console.table({
'AUTH_ENABLED (hardcode)': AUTH_ENABLED,
'localStorage bypass': localStorage.getItem('canifa_auth_disabled') === 'true',
'Effective': required ? '🔒 ON' : '🔓 OFF',
'Logged in': loggedIn,
'User': user?.username || 'none',
'Role': user?.role || 'none',
'Token': getToken() ? '✅ exists' : '❌ missing',
});
},
// Getters
token: getToken,
user: getUser,
isLoggedIn,
isAuthRequired,
};
// ═══════════════════════════════════════════════════
// EXPORTS (for pages that import this)
// ═══════════════════════════════════════════════════
window.canifaAuth = {
guard,
guardAdmin,
logout,
getToken,
getUser,
isLoggedIn,
isAuthRequired,
authHeaders,
updateSidebarUser,
};
// Log status on load
const mode = isAuthRequired() ? '🔒 Auth ON' : '🔓 Auth OFF (dev mode)';
console.log(`[auth.js] ${mode} | User: ${getUser()?.username || 'none'}`);
})();