AI Messaging System - Visualization Tool
An interactive Streamlit-based visualization and experimentation tool for the AI Messaging System v2. This tool enables non-technical users to generate, visualize, analyze, and improve personalized message campaigns with integrated A/B testing capabilities and comprehensive historical tracking.
Purpose
- Generate personalized messages with various configurations
- Run A/B tests with parallel processing and side-by-side comparison
- Visualize and analyze messages in real-time across all campaign stages
- Provide detailed feedback with message header/body tracking
- Track improvements and trends across historical experiments
- Cloud-native architecture with Snowflake integration
- Ready for deployment on HuggingFace Spaces
Architecture Overview
Core Philosophy: In-Memory + Cloud Persistence
The system operates on a hybrid architecture:
- In-Memory Operations: All active experiments run in session_state (no local files)
- Cloud Persistence: Data stored in Snowflake for long-term analytics
- On-Demand Loading: Historical data loaded from Snowflake when needed
- One-Click Storage: Results persisted with a single button click
Why This Architecture?
- HuggingFace Ready: No local file dependencies
- Fast Operations: In-memory processing for real-time feedback
- Scalable: Snowflake handles unlimited historical data
- Clean Separation: Current experiments vs. historical data
- Versioned Configs: Automatic configuration versioning in Snowflake
Directory Structure
visualization/
├── app.py                           # Main entry point with authentication & brand selection
├── pages/                           # Multi-page Streamlit application
│   ├── 1_Campaign_Builder.py        # Campaign configuration & generation (with A/B testing)
│   ├── 2_Message_Viewer.py          # Message browsing & feedback (A/B aware)
│   ├── 4_Analytics.py               # Performance metrics for CURRENT experiment
│   └── 5_Historical_Analytics.py    # Historical experiments from Snowflake
├── utils/                           # Utility modules
│   ├── __init__.py
│   ├── auth.py                      # Authentication logic
│   ├── config_manager.py            # Configuration loading from Snowflake
│   ├── db_manager.py                # Snowflake database operations (NEW)
│   ├── experiment_runner.py         # Parallel experiment execution (NEW)
│   ├── session_feedback_manager.py  # In-memory feedback management (NEW)
│   └── theme.py                     # Brand-specific theming
├── data/                            # Local data storage (configs cached here)
│   └── UI_users/                    # Pre-loaded user lists (100 users per brand)
│       ├── drumeo_users.csv
│       ├── pianote_users.csv
│       ├── guitareo_users.csv
│       └── singeo_users.csv
├── requirements.txt                 # Python dependencies
├── README.md                        # This file
├── IMPLEMENTATION_COMPLETE.md       # Refactoring details & progress
└── ARCHITECTURE_REFACTOR_GUIDE.md   # Technical refactoring guide
Deprecated Files (No Longer Used)
These files are legacy and no longer part of the active codebase:
- utils/data_loader.py: replaced by session_state loading
- utils/feedback_manager.py: replaced by SessionFeedbackManager
- data/configs/: configs now cached locally but loaded from Snowflake
- data/feedback/: feedback now in session_state → Snowflake
- ai_messaging_system_v2/Data/ui_output/: no more file outputs
Snowflake Database Schema
Tables
1. MESSAGING_SYSTEM_V2.UI.CONFIGS
CONFIG_NAME VARCHAR -- Configuration identifier
CONFIG_FILE VARIANT -- JSON configuration
CONFIG_VERSION INTEGER -- Auto-incrementing version
BRAND VARCHAR -- Brand name (drumeo, pianote, etc.)
CREATED_AT TIMESTAMP -- Creation timestamp
2. MESSAGING_SYSTEM_V2.UI.EXPERIMENT_METADATA
EXPERIMENT_ID VARCHAR -- Unique experiment identifier
CONFIG_NAME VARCHAR -- Configuration used
BRAND VARCHAR -- Brand name
CAMPAIGN_NAME VARCHAR -- Campaign identifier
STAGE INTEGER -- Stage number (1-11)
LLM_MODEL VARCHAR -- Model used (gpt-4o-mini, gemini-2.5-flash-lite, etc.)
TOTAL_MESSAGES INTEGER -- Messages generated in this stage
TOTAL_USERS INTEGER -- Unique users in this stage
PLATFORM VARCHAR -- Platform (push, email, etc.)
PERSONALIZATION BOOLEAN -- Personalization enabled
INVOLVE_RECSYS BOOLEAN -- Recommendations enabled
RECSYS_CONTENTS ARRAY -- Recommendation types
SEGMENT_INFO VARCHAR -- Segment description
CAMPAIGN_INSTRUCTIONS VARCHAR -- Campaign-wide instructions
PER_MESSAGE_INSTRUCTIONS VARCHAR -- Stage-specific instructions
START_TIME TIMESTAMP -- Experiment start time
END_TIME TIMESTAMP -- Experiment end time (optional)
3. MESSAGING_SYSTEM_V2.UI.FEEDBACKS
EXPERIMENT_ID VARCHAR -- Links to EXPERIMENT_METADATA
USER_ID INTEGER -- User who received the message
STAGE INTEGER -- Stage number
FEEDBACK_TYPE VARCHAR -- 'reject' (only type currently)
REJECTION_REASON VARCHAR -- Reason category key
REJECTION_TEXT VARCHAR -- Custom text explanation
MESSAGE_HEADER VARCHAR -- Full message header
MESSAGE_BODY VARCHAR -- Full message body
CAMPAIGN_NAME VARCHAR -- Campaign identifier
BRAND VARCHAR -- Brand name
CONFIG_NAME VARCHAR -- Configuration used
TIMESTAMP TIMESTAMP -- Feedback submission time
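As a shape reference, one stage's metadata record might be assembled like this before insertion. This is a hedged sketch: the values are illustrative, and the exact dict shape expected by store_experiment_metadata() is an assumption based on the columns above.

```python
from datetime import datetime, timezone

# Illustrative record mirroring the EXPERIMENT_METADATA columns above.
# Values are made up for the example, not real data.
metadata_record = {
    "EXPERIMENT_ID": "drumeo_20260114_1234",
    "CONFIG_NAME": "drumeo_re_engagement_test",
    "BRAND": "drumeo",
    "CAMPAIGN_NAME": "Re-engagement",
    "STAGE": 1,
    "LLM_MODEL": "gemini-2.5-flash-lite",
    "TOTAL_MESSAGES": 25,
    "TOTAL_USERS": 25,
    "PLATFORM": "push",
    "PERSONALIZATION": True,
    "INVOLVE_RECSYS": True,
    "RECSYS_CONTENTS": ["workout", "course"],
    "SEGMENT_INFO": "Students inactive for 3+ days",
    "CAMPAIGN_INSTRUCTIONS": "Keep messages encouraging.",
    "PER_MESSAGE_INSTRUCTIONS": "",
    "START_TIME": datetime.now(timezone.utc),
    "END_TIME": None,  # optional per the schema
}
```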
Getting Started
Prerequisites
- Python 3.9+
- Snowflake account and credentials
- OpenAI/Google AI API keys
- AI Messaging System v2 installed in parent directory
- Plotly for data visualization
Installation
# Install dependencies
pip install -r requirements.txt
# Set up environment variables (.env file)
SNOWFLAKE_USER=your_user
SNOWFLAKE_PASSWORD=your_password
SNOWFLAKE_ACCOUNT=your_account
SNOWFLAKE_ROLE=your_role
SNOWFLAKE_DATABASE=MESSAGING_SYSTEM_V2
SNOWFLAKE_WAREHOUSE=your_warehouse
SNOWFLAKE_SCHEMA=UI
# LLM API keys
OPENAI_API_KEY=your_key
GOOGLE_API_KEY=your_key
Running the Application
# From the visualization directory
cd visualization
streamlit run app.py
First-Time Setup
- Ensure the .env file exists with Snowflake credentials
- Verify user CSV files exist in data/UI_users/ for each brand
- Create Snowflake tables using the schema above (or let the system auto-create them)
- Upload initial configurations to Snowflake (optional - can create in UI)
- Login with authorized email and access token
Complete Data Flow
Session State Architecture
All active data lives in Streamlit's session_state:
# Single Experiment Mode
st.session_state.ui_log_data # DataFrame: Generated messages
st.session_state.current_experiment_id # String: Experiment identifier
st.session_state.current_experiment_metadata # List[Dict]: Metadata per stage
st.session_state.current_feedbacks # List[Dict]: Feedback records
# AB Testing Mode
st.session_state.ui_log_data_a # DataFrame: Experiment A messages
st.session_state.ui_log_data_b # DataFrame: Experiment B messages
st.session_state.experiment_a_id # String: Experiment A identifier
st.session_state.experiment_b_id # String: Experiment B identifier
st.session_state.experiment_a_metadata # List[Dict]: A's metadata
st.session_state.experiment_b_metadata # List[Dict]: B's metadata
st.session_state.feedbacks_a # List[Dict]: A's feedback
st.session_state.feedbacks_b # List[Dict]: B's feedback
# Configuration
st.session_state.campaign_config # Dict: Single mode config
st.session_state.campaign_config_a # Dict: AB mode config A
st.session_state.campaign_config_b # Dict: AB mode config B
st.session_state.configs_cache # Dict: Cached configs from Snowflake
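Experiment identifiers tie all of this state together. The exact ID scheme isn't documented here; judging from the example ID "drumeo_20260114_1234" used later in this README, one plausible generator (make_experiment_id is a hypothetical helper, not the app's actual function) is:

```python
from datetime import datetime
from typing import Optional

def make_experiment_id(brand: str, now: Optional[datetime] = None) -> str:
    """Build an id like 'drumeo_20260114_1234'.

    Hypothetical helper: assumes a '{brand}_{YYYYMMDD}_{HHMM}' scheme,
    inferred from example ids in this document.
    """
    now = now or datetime.now()
    return f"{brand}_{now:%Y%m%d}_{now:%H%M}"

# A fixed timestamp gives a deterministic id:
print(make_experiment_id("drumeo", datetime(2026, 1, 14, 12, 34)))  # drumeo_20260114_1234
```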
End-to-End Workflow
┌───────────────────────────────────────────────────────────────────
│ 1. CAMPAIGN BUILDER
├───────────────────────────────────────────────────────────────────
│ • Load configs from Snowflake (cached in session_state)
│ • User selects/modifies configuration
│ • Sample users from CSV files
│ • Generate messages (stored in session_state.ui_log_data)
│ • Track metadata (session_state.current_experiment_metadata)
└───────────────────────────────────────────────────────────────────
                                 ↓
┌───────────────────────────────────────────────────────────────────
│ 2. MESSAGE VIEWER
├───────────────────────────────────────────────────────────────────
│ • Load messages from session_state.ui_log_data
│ • Display in user-centric or stage-centric views
│ • User provides feedback (reject messages)
│ • Feedback stored in session_state.current_feedbacks
│ • [BUTTON] Store Results to Snowflake
│     → Writes metadata to EXPERIMENT_METADATA table
│     → Writes feedback to FEEDBACKS table
└───────────────────────────────────────────────────────────────────
                                 ↓
┌───────────────────────────────────────────────────────────────────
│ 3. ANALYTICS
├───────────────────────────────────────────────────────────────────
│ • Load current experiment from session_state
│ • Calculate metrics using SessionFeedbackManager
│ • Show overall performance, stage analysis, rejection reasons
│ • Support AB testing side-by-side comparison
└───────────────────────────────────────────────────────────────────
                                 ↓
┌───────────────────────────────────────────────────────────────────
│ 4. HISTORICAL ANALYTICS
├───────────────────────────────────────────────────────────────────
│ • [BUTTON] Load Historical Data from Snowflake
│ • Query EXPERIMENT_METADATA + FEEDBACKS tables
│ • Calculate aggregate metrics across all experiments
│ • Show trends: rejection rates over time
│ • Compare configurations: which performs best
│ • Filter by date range
└───────────────────────────────────────────────────────────────────
Pages Overview
Page 0: Home & Authentication (app.py)
Purpose: Login, brand selection, config loading, and navigation hub
Features:
- Email and token-based authentication
- Brand selection (Drumeo, Pianote, Guitareo, Singeo)
- Config loading from Snowflake on startup
- Brand-specific theming applied throughout app
- Navigation guide and quick start instructions
- Current experiment status overview
Technical Details:
- Session-based authentication
- Brand selection persists via st.session_state.selected_brand
- Configs cached in st.session_state.configs_cache on first load
- Dynamic theming using utils/theme.py
- Environment variables loaded from the .env file
Key Functions:
def load_configs_from_snowflake(brand):
    """Load all configs for brand from Snowflake, cache in session_state."""
    session = create_snowflake_session()
    config_manager = ConfigManager(session)
    configs = config_manager.load_configs_from_snowflake(brand)
    st.session_state.configs_cache = configs
    session.close()
Page 1: Campaign Builder
Purpose: Create and run message generation campaigns with integrated A/B testing
Architecture: In-memory experiment execution with parallel AB testing
Key Features
Configuration Management:
- Loads configs from the cached session_state.configs_cache
- Real-time config editing in the UI
- Save configs to Snowflake with auto-versioning
- Quick save or save-as-new options
A/B Testing Toggle:
- Single Experiment Mode (default): One campaign
- A/B Testing Mode: Two parallel experiments for comparison
User Sampling:
- Random sampling from brand-specific CSV files
- 1-25 users selectable
- Same users used for both AB experiments (fair comparison)
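The sampling contract matters for A/B fairness: draw once, then reuse the same draw for both arms. A stdlib sketch of that rule (the app itself samples pandas DataFrames from the brand CSVs; sample_users here is a hypothetical stand-in, and the fixed seed is an assumption for reproducibility):

```python
import random

def sample_users(user_ids, n, seed=42):
    """Draw n users once; the caller reuses the result for both arms.

    Sketch only: the real sampling lives in Campaign Builder and
    operates on DataFrames, not plain lists.
    """
    rng = random.Random(seed)
    pool = list(user_ids)
    return rng.sample(pool, min(n, len(pool)))

sampled = sample_users(range(100), n=10)
# Pass the SAME sample to both experiments for a fair comparison:
users_for_a = sampled
users_for_b = sampled
```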
Parallel Execution:
- Uses ExperimentRunner.run_ab_test_parallel()
- Threading for simultaneous A/B generation
- Separate Snowflake sessions per experiment (no conflicts)
- Console logging for thread progress
- Results stored directly in session_state
Technical Details:
Single Mode Generation:
# ExperimentRunner handles all generation logic
runner = ExperimentRunner(brand=brand, system_config=config)
success, ui_log_data, metadata = runner.run_single_experiment(
    config=campaign_config,
    sampled_users_df=users_df,
    experiment_id=experiment_id,
    create_session_func=create_snowflake_session,
    progress_container=st.container()
)

# Store in session_state
st.session_state.ui_log_data = ui_log_data
st.session_state.current_experiment_metadata = metadata
st.session_state.current_experiment_id = experiment_id
AB Mode Generation:
# Parallel execution in threads
results = runner.run_ab_test_parallel(
    config_a=campaign_config_a,
    config_b=campaign_config_b,
    sampled_users_df=users_df,
    experiment_a_id=experiment_a_id,
    experiment_b_id=experiment_b_id,
    create_session_func=create_snowflake_session
)

# Store both results
st.session_state.ui_log_data_a = results['a']['ui_log_data']
st.session_state.ui_log_data_b = results['b']['ui_log_data']
st.session_state.experiment_a_metadata = results['a']['metadata']
st.session_state.experiment_b_metadata = results['b']['metadata']
Configuration Saving to Snowflake:
# Save with auto-versioning
session = create_snowflake_session()
db_manager = UIDatabaseManager(session)
success = db_manager.save_config(
    config_name=config_name,
    config_data=campaign_config,
    brand=brand
)
session.close()
Thread-Safe Progress Handling:
- Main thread: Full Streamlit UI updates
- Worker threads: Console logging only
- Dummy progress bars prevent Streamlit errors in threads
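The "dummy progress bar" idea above can be sketched as a no-op stand-in that worker threads call instead of real Streamlit widgets. This is illustrative only; the actual class in the codebase and its interface may differ:

```python
import threading

class DummyProgress:
    """Stand-in for st.progress()/st.empty() inside worker threads.

    Worker threads can't touch Streamlit's UI, so they call this
    console-logging object instead (sketch of the pattern described above).
    """
    def progress(self, value, text=None):
        print(f"[worker] progress={value} {text or ''}".strip())

    def write(self, msg):
        print(f"[worker] {msg}")

def worker(progress):
    # Simulated generation loop reporting progress safely off the main thread
    for pct in (0.5, 1.0):
        progress.progress(pct, text="generating...")

t = threading.Thread(target=worker, args=(DummyProgress(),))
t.start()
t.join()
```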
Page 2: Message Viewer
Purpose: Browse, search, and evaluate generated messages with A/B testing awareness
Architecture: Loads from session_state, feedback stored in-memory
Key Features
Automatic AB Detection:
def detect_ab_testing_mode():
    """Detect AB mode from session_state."""
    return (
        'ui_log_data_a' in st.session_state and
        'ui_log_data_b' in st.session_state and
        st.session_state.ui_log_data_a is not None and
        st.session_state.ui_log_data_b is not None
    )
Message Loading:
# Single mode
def get_single_experiment_data():
    if 'ui_log_data' in st.session_state:
        return st.session_state.ui_log_data
    return None

# AB mode - loads both dataframes
messages_a_df = st.session_state.ui_log_data_a
messages_b_df = st.session_state.ui_log_data_b
Feedback System:
- In-memory storage using SessionFeedbackManager
- Rejection categories: Poor header, Poor body, Grammar, Emoji, Recommendation issues, Similar to previous, etc.
- Stores full message header and body with feedback
- Undo rejection capability
# Add feedback
SessionFeedbackManager.add_feedback(
experiment_id=experiment_id,
user_id=user_id,
stage=stage,
feedback_type="reject",
rejection_reason="poor_header",
rejection_text="Too generic",
message_header=header,
message_body=body,
campaign_name=campaign_name,
brand=brand,
config_name=config_name,
feedback_list_key="current_feedbacks" # or "feedbacks_a", "feedbacks_b"
)
Store Results to Snowflake (CRITICAL FEATURE):
Located after the message viewing section; the button appears prominently.
if st.button("Store Results to Snowflake"):
    session = create_snowflake_session()
    db_manager = UIDatabaseManager(session)

    # Single mode
    if not ab_mode:
        # Store metadata
        for meta in st.session_state.current_experiment_metadata:
            db_manager.store_experiment_metadata(meta)
        # Store feedback
        for feedback in st.session_state.current_feedbacks:
            db_manager.store_feedback(feedback)
    # AB mode
    else:
        # Store both experiments
        for meta in st.session_state.experiment_a_metadata:
            db_manager.store_experiment_metadata(meta)
        for meta in st.session_state.experiment_b_metadata:
            db_manager.store_experiment_metadata(meta)
        for feedback in st.session_state.feedbacks_a:
            db_manager.store_feedback(feedback)
        for feedback in st.session_state.feedbacks_b:
            db_manager.store_feedback(feedback)

    session.close()
    st.success("Results stored to Snowflake successfully!")
    st.balloons()
View Modes:
- User-Centric: All stages for each user
- Stage-Centric: All users for each stage
- Filters: Stage selection, keyword search, pagination
Page 3: Analytics Dashboard
Purpose: Visualize performance metrics for CURRENT experiment only
Architecture: Loads from session_state, uses SessionFeedbackManager
Key Features
Data Loading:
# Single mode
def get_single_experiment_data():
    return st.session_state.ui_log_data

# AB mode
detect_ab_testing_mode()  # Returns True if AB data exists
messages_a_df = st.session_state.ui_log_data_a
messages_b_df = st.session_state.ui_log_data_b
Feedback Stats Calculation:
# Single mode
feedback_stats = SessionFeedbackManager.get_feedback_stats(
    experiment_id=experiment_id,
    total_messages=len(messages_df),
    feedback_list_key="current_feedbacks"
)

# AB mode
feedback_stats_a = SessionFeedbackManager.get_feedback_stats(
    experiment_a_id,
    total_messages=len(messages_a_df),
    feedback_list_key="feedbacks_a"
)
feedback_stats_b = SessionFeedbackManager.get_feedback_stats(
    experiment_b_id,
    total_messages=len(messages_b_df),
    feedback_list_key="feedbacks_b"
)
Metrics Displayed:
- Overall: Total messages, rejection rate, feedback count
- Stage-by-Stage: Performance breakdown per stage
- Rejection Reasons: Pie charts and bar charts
- AB Comparison: Side-by-side metrics with winner determination
Export Options:
- Export current messages to CSV
- Export current feedback to CSV
- Export analytics summary to CSV
Important: Analytics page shows ONLY the current in-memory experiment. For historical data, use Historical Analytics.
Page 4: Historical Analytics
Purpose: Track all past experiments and analyze trends from Snowflake
Architecture: Button-triggered Snowflake queries
Key Features
Load Button:
if st.button("Load Historical Data from Snowflake"):
    session = create_snowflake_session()
    db_manager = UIDatabaseManager(session)
    # Load experiment summary with JOIN
    experiments_df = db_manager.get_experiment_summary(brand=brand)
    st.session_state['historical_experiments'] = experiments_df
    st.session_state['historical_data_loaded'] = True
    session.close()
SQL Query Example:
SELECT
m.EXPERIMENT_ID,
m.CONFIG_NAME,
m.CAMPAIGN_NAME,
m.BRAND,
MIN(m.START_TIME) as start_time,
SUM(m.TOTAL_MESSAGES) as total_messages,
MAX(m.TOTAL_USERS) as total_users,
COUNT(DISTINCT m.STAGE) as total_stages,
COUNT(f.FEEDBACK_TYPE) as total_rejects,
(COUNT(f.FEEDBACK_TYPE) * 100.0 / NULLIF(SUM(m.TOTAL_MESSAGES), 0)) as rejection_rate
FROM MESSAGING_SYSTEM_V2.UI.EXPERIMENT_METADATA m
LEFT JOIN MESSAGING_SYSTEM_V2.UI.FEEDBACKS f
ON m.EXPERIMENT_ID = f.EXPERIMENT_ID
WHERE m.BRAND = :brand
GROUP BY m.EXPERIMENT_ID, m.CONFIG_NAME, m.CAMPAIGN_NAME, m.BRAND
ORDER BY start_time DESC
Visualizations:
- Experiments summary table
- Rejection rate trend over time (line chart)
- Performance comparison by configuration (bar chart)
- Best/worst performing configs
Filters:
- Date range filtering
- Automatic refresh button
Export:
- Export summary to CSV
- Note: Detailed feedback export coming soon (use SQL queries for now)
Utility Modules
utils/db_manager.py - UIDatabaseManager
Purpose: All Snowflake database operations
Key Methods:
class UIDatabaseManager:
    def __init__(self, session: Session):
        """Initialize with a Snowflake session."""

    def save_config(self, config_name, config_data, brand):
        """Save config with auto-versioning."""

    def load_config(self, config_name, brand, version=None):
        """Load a specific config version."""

    def store_experiment_metadata(self, metadata: dict):
        """Insert a metadata record."""

    def store_feedback(self, feedback: dict):
        """Insert a feedback record."""

    def get_experiment_summary(self, brand=None, start_date=None, end_date=None):
        """Get aggregated experiment metrics with a JOIN."""

    def close(self):
        """Close the Snowflake session."""
Usage Pattern:
# Always use a context-like pattern
session = create_snowflake_session()
db_manager = UIDatabaseManager(session)
try:
    # Do operations
    db_manager.save_config(...)
    db_manager.store_feedback(...)
finally:
    db_manager.close()  # or session.close()
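The try/finally pattern above can be tightened into a small context manager. A sketch, assuming create_snowflake_session as the session factory; snowflake_session is a hypothetical wrapper, not part of the codebase:

```python
from contextlib import contextmanager

@contextmanager
def snowflake_session(create_session_func):
    """Yield a session and guarantee it is closed afterwards."""
    session = create_session_func()
    try:
        yield session
    finally:
        session.close()

# Usage (names as used elsewhere in this README):
# with snowflake_session(create_snowflake_session) as session:
#     UIDatabaseManager(session).save_config(...)
```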
utils/config_manager.py - ConfigManager
Purpose: Configuration loading and caching
Key Methods:
class ConfigManager:
    def __init__(self, session: Session):
        """Initialize with a Snowflake session."""

    def load_configs_from_snowflake(self, brand: str) -> Dict:
        """Load all configs for a brand; returns {name: config_data}."""

    def get_latest_version(self, config_name: str, brand: str) -> int:
        """Get the latest version number."""
Caching Strategy:
- Configs loaded once on app startup
- Cached in st.session_state.configs_cache
- Format: {"config_name": {...config_data...}, ...}
- No re-querying of Snowflake during the session
utils/experiment_runner.py - ExperimentRunner
Purpose: Execute experiments with proper session management
Key Methods:
class ExperimentRunner:
    def run_single_experiment(
        self, config, sampled_users_df, experiment_id,
        create_session_func, progress_container
    ):
        """Run one experiment, all stages sequentially."""

    def run_ab_test_parallel(
        self, config_a, config_b, sampled_users_df,
        experiment_a_id, experiment_b_id, create_session_func
    ):
        """Run two experiments in parallel threads."""
Thread-Safe Design:
- Each thread gets its own Snowflake session
- Progress updates handled safely:
  - Main thread: full Streamlit UI
  - Worker threads: console logging with dummy UI objects
- No Streamlit context errors
Implementation:
# Thread function
def run_experiment(exp_key, config, exp_id):
    try:
        success, data, metadata = self.run_single_experiment(
            config=config,
            sampled_users_df=users_df,
            experiment_id=exp_id,
            create_session_func=create_session_func,
            progress_container=None  # None = threaded mode
        )
        results[exp_key] = {'success': success, 'ui_log_data': data, 'metadata': metadata}
    except Exception as e:
        results[exp_key] = {'success': False, 'error': str(e)}

# Start threads
thread_a = threading.Thread(target=run_experiment, args=('a', config_a, exp_a_id))
thread_b = threading.Thread(target=run_experiment, args=('b', config_b, exp_b_id))
thread_a.start()
thread_b.start()
thread_a.join()
thread_b.join()
utils/session_feedback_manager.py - SessionFeedbackManager
Purpose: In-memory feedback management
Static Methods (no instance needed):
@staticmethod
def add_feedback(experiment_id, user_id, stage, feedback_type,
                 rejection_reason, rejection_text, message_header,
                 message_body, campaign_name, brand, config_name,
                 feedback_list_key):
    """Add feedback to the session_state list."""

@staticmethod
def get_feedback_stats(experiment_id, total_messages, feedback_list_key):
    """Calculate aggregate stats from the feedback list."""

@staticmethod
def get_stage_feedback_stats(experiment_id, messages_df, feedback_list_key):
    """Calculate per-stage stats."""

@staticmethod
def get_rejection_reason_label(reason_key):
    """Map a reason key to its display label."""
Feedback List Keys:
- "current_feedbacks": single experiment mode
- "feedbacks_a": AB mode, experiment A
- "feedbacks_b": AB mode, experiment B
Usage:
# Add feedback
SessionFeedbackManager.add_feedback(
    experiment_id="drumeo_20260114_1234",
    user_id=12345,
    stage=1,
    feedback_type="reject",
    rejection_reason="poor_header",
    rejection_text="Too generic",
    message_header="Your next lesson",
    message_body="Check it out...",
    campaign_name="Re-engagement",
    brand="drumeo",
    config_name="drumeo_re_engagement_test",
    feedback_list_key="current_feedbacks"
)

# Get stats
stats = SessionFeedbackManager.get_feedback_stats(
    experiment_id="drumeo_20260114_1234",
    total_messages=100,
    feedback_list_key="current_feedbacks"
)
# Returns: {'total_feedback': 10, 'total_rejects': 10,
#           'reject_rate': 10.0, 'rejection_reasons': {...}}
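The stats shown above can be reproduced with a small pure function. This is a sketch of the plausible computation only; the real method reads st.session_state rather than taking a list, and its exact field handling may differ:

```python
from collections import Counter

def compute_feedback_stats(feedbacks, experiment_id, total_messages):
    """Aggregate reject feedback for one experiment (illustrative sketch)."""
    rejects = [f for f in feedbacks
               if f["experiment_id"] == experiment_id and f["feedback_type"] == "reject"]
    return {
        "total_feedback": len(rejects),
        "total_rejects": len(rejects),
        "reject_rate": 100.0 * len(rejects) / total_messages if total_messages else 0.0,
        "rejection_reasons": dict(Counter(f["rejection_reason"] for f in rejects)),
    }

# 10 rejects out of 100 messages -> 10.0% reject rate
feedbacks = [
    {"experiment_id": "exp1", "feedback_type": "reject", "rejection_reason": "poor_header"}
    for _ in range(10)
]
stats = compute_feedback_stats(feedbacks, "exp1", total_messages=100)
print(stats["reject_rate"])  # 10.0
```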
Brand Theming
Theming automatically adjusts based on selected brand:
| Brand | Primary Color | Sidebar BG | Accent | Emoji |
|---|---|---|---|---|
| Base | Gold | Dark Gold | Gold | 🎵 |
| Drumeo | Light Blue | Dark Blue | Blue | 🥁 |
| Pianote | Light Red | Dark Red | Red | 🎹 |
| Guitareo | Light Green | Dark Green | Green | 🎸 |
| Singeo | Light Purple | Dark Purple | Purple | 🎤 |
Implementation: utils/theme.py
def get_brand_theme(brand):
    """Returns the theme dictionary."""

def apply_theme(brand):
    """Applies CSS via st.markdown."""

def get_brand_emoji(brand):
    """Returns the brand emoji."""
Key Implementation Details
Configuration File Structure
{
  "brand": "drumeo",
  "campaign_type": "re_engagement",
  "campaign_name": "UI-Test-Campaign",
  "campaign_instructions": "Keep messages encouraging and motivational.",
  "1": {
    "stage": 1,
    "model": "gemini-2.5-flash-lite",
    "personalization": true,
    "involve_recsys_result": true,
    "recsys_contents": ["workout", "course", "quick_tips"],
    "specific_content_id": null,
    "segment_info": "Students inactive for 3+ days",
    "instructions": "",
    "sample_examples": "Header: Your next lesson\nMessage: Check it out!",
    "identifier_column": "user_id",
    "platform": "push"
  },
  "2": {
    "stage": 2,
    "model": "gpt-4o-mini",
    "personalization": true,
    "involve_recsys_result": true,
    "recsys_contents": ["song"],
    "specific_content_id": 12345,
    "segment_info": "Students inactive for 7+ days",
    "instructions": "Focus on easy songs",
    "sample_examples": "Header: Let's jam!\nMessage: Try this song!",
    "identifier_column": "user_id",
    "platform": "push"
  }
}
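A small validation helper can catch malformed configs before a run. This is an illustrative sketch, not part of the codebase; it assumes stage blocks are the numeric string keys ("1", "2", ...) shown above, and the required-key list is inferred from the example:

```python
# Keys each stage block appears to carry in the example config above
# (assumption; the real schema may differ).
REQUIRED_STAGE_KEYS = {
    "stage", "model", "personalization", "involve_recsys_result",
    "recsys_contents", "segment_info", "instructions",
    "sample_examples", "identifier_column", "platform",
}

def validate_campaign_config(config: dict) -> list:
    """Return a list of problems found in a config dict (hypothetical helper)."""
    problems = []
    stage_keys = [k for k in config if k.isdigit()]
    if not stage_keys:
        problems.append("no stage blocks found")
    for k in stage_keys:
        missing = REQUIRED_STAGE_KEYS - set(config[k])
        if missing:
            problems.append(f"stage {k} missing: {sorted(missing)}")
    return problems

config = {"brand": "drumeo", "1": {"stage": 1, "model": "gpt-4o-mini"}}
print(validate_campaign_config(config))
```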
Rejection Reason Categories
REJECTION_REASONS = {
    "poor_header": "Poor Header",
    "poor_body": "Poor Body/Content",
    "grammar_issues": "Grammar Issues",
    "emoji_problems": "Emoji Problems",
    "recommendation_issues": "Recommendation Issues",
    "wrong_information": "Wrong/Inaccurate Information",
    "tone_issues": "Tone Issues",
    "similarity": "Similar To Previous Header/Messages",
    "other": "Other"
}
Environment Variables Required
# Snowflake
SNOWFLAKE_USER=your_user
SNOWFLAKE_PASSWORD=your_password
SNOWFLAKE_ACCOUNT=your_account
SNOWFLAKE_ROLE=your_role
SNOWFLAKE_DATABASE=MESSAGING_SYSTEM_V2
SNOWFLAKE_WAREHOUSE=your_warehouse
SNOWFLAKE_SCHEMA=UI
# LLM APIs
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=AIza...
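A quick startup check can fail fast when a variable is unset. Hypothetical helper (missing_env_vars is not part of the codebase; the app itself loads these via python-dotenv):

```python
import os

REQUIRED_ENV_VARS = [
    "SNOWFLAKE_USER", "SNOWFLAKE_PASSWORD", "SNOWFLAKE_ACCOUNT",
    "SNOWFLAKE_ROLE", "SNOWFLAKE_DATABASE", "SNOWFLAKE_WAREHOUSE",
    "SNOWFLAKE_SCHEMA", "OPENAI_API_KEY", "GOOGLE_API_KEY",
]

def missing_env_vars(env=os.environ):
    """Return the required variables that are unset or empty."""
    return [name for name in REQUIRED_ENV_VARS if not env.get(name)]

missing = missing_env_vars()
if missing:
    print(f"Missing environment variables: {', '.join(missing)}")
```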
Development Guide
Adding a New Page
- Create pages/{number}_{Name}.py
- Follow the standard structure:
import streamlit as st
import os
from pathlib import Path
from dotenv import load_dotenv

# Load .env
env_path = Path(__file__).parent.parent.parent / '.env'
if env_path.exists():
    load_dotenv(env_path)

from utils.auth import check_authentication
from utils.theme import apply_theme, get_brand_emoji

st.set_page_config(page_title="New Page", page_icon="📄", layout="wide")

if not check_authentication():
    st.error("Please login first")
    st.stop()

if "selected_brand" not in st.session_state:
    st.error("Please select a brand first")
    st.stop()

brand = st.session_state.selected_brand
apply_theme(brand)

# Your content here
- Update app.py navigation
- Test with all brands
Adding a New Feedback Category
- Update utils/session_feedback_manager.py:

REJECTION_REASONS = {
    # ... existing
    "new_reason": "New Reason Label"
}
- Automatically appears in UI
Adding a New Brand
- Create
data/UI_users/{brand}_users.csv(100 users, must haveUSER_IDcolumn) - Add to
utils/theme.py:
BRAND_THEMES["newbrand"] = {
"primary": "#COLOR",
"accent": "#COLOR",
"sidebar_bg": "#DARK_COLOR",
"text": "#FFFFFF"
}
BRAND_EMOJIS["newbrand"] = "π"
- Update app.py:

brands = ["drumeo", "pianote", "guitareo", "singeo", "newbrand"]
brand_labels["newbrand"] = "New Brand"
- Create default config in Snowflake or via UI
Debugging Tips
Common Issues:
"No messages found" in Message Viewer:
- Check that st.session_state.ui_log_data exists
- Verify generation completed in Campaign Builder
- Look for errors in the terminal
Snowflake connection errors:
- Verify the .env file exists and is loaded
- Check that credentials are correct
- Test the connection with create_snowflake_session()
AB test AttributeError:
- Fixed in latest version
- Ensure ExperimentRunner uses thread-safe progress handling
Config save errors with quotes:
- Fixed: now uses write_pandas() instead of raw SQL
- Handles JSON with apostrophes correctly
Feedback not in Analytics:
- Check that st.session_state.current_feedbacks has data
- Verify the correct feedback_list_key is used
- Check that experiment_id matches
Debugging Code:
# Debug session state
with st.expander("Debug Info"):
    st.write("Session State Keys:", list(st.session_state.keys()))
    if 'ui_log_data' in st.session_state:
        st.write("Messages shape:", st.session_state.ui_log_data.shape)
    if 'current_feedbacks' in st.session_state:
        st.write("Feedback count:", len(st.session_state.current_feedbacks))
Key Design Decisions
1. In-Memory + Cloud Hybrid
Decision: Use session_state for active data, Snowflake for persistence.
Rationale:
- Fast in-memory operations
- No local file dependencies (HuggingFace ready)
- Scalable historical storage
- Clean separation: current vs. historical
2. One-Click Storage
Decision: Single "Store Results" button to persist everything.
Rationale:
- Simple user workflow
- Explicit persistence action
- User controls when data is saved
- No auto-save surprises
3. Config Caching
Decision: Load all configs once, cache in session_state.
Rationale:
- Reduces Snowflake queries
- Faster config switching
- Session-scoped cache (fresh on page load)
- No stale data issues
4. Thread-Safe AB Testing
Decision: Separate Snowflake sessions per thread, console logging.
Rationale:
- Prevents session conflicts
- Streamlit UI only in main thread
- Clean error handling
- Production-ready parallel execution
5. Versioned Configurations
Decision: Auto-increment the version on every config save.
Rationale:
- Full audit trail
- Can rollback to previous versions
- Supports experimentation
- No data loss
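The versioning rule itself fits in a few lines. A sketch over an in-memory store (the real implementation presumably derives the next version from MAX(CONFIG_VERSION) in the CONFIGS table; next_config_version is a hypothetical helper):

```python
def next_config_version(store, config_name, brand):
    """Compute the next auto-increment version for (config_name, brand)."""
    versions = [v for (name, b, v) in store if name == config_name and b == brand]
    return max(versions, default=0) + 1

# In-memory stand-in for the CONFIGS table, keyed by (name, brand, version)
store = {
    ("welcome", "drumeo", 1): {"campaign_name": "v1"},
    ("welcome", "drumeo", 2): {"campaign_name": "v2"},
}
print(next_config_version(store, "welcome", "drumeo"))   # 3
print(next_config_version(store, "welcome", "pianote"))  # 1 (first version)
```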
6. Button-Triggered Historical Loading
Decision: Historical Analytics loads on button click, not automatically.
Rationale:
- User controls when to query Snowflake
- Avoids unnecessary queries
- Faster page load
- Clear user action
7. SessionFeedbackManager Static Methods
Decision: All methods static, no instance needed.
Rationale:
- Simpler API
- Works directly with session_state
- No state to manage
- Cleaner code
Deployment Guide
HuggingFace Spaces Deployment
Requirements:
- No local file dependencies ✅
- Environment variables for secrets ✅
- Snowflake connectivity ✅
- CSV files in repo (data/UI_users/) ✅
Steps:
- Push code to a GitHub/HuggingFace repo
- Include the data/UI_users/ CSV files
- Set environment variables in Space settings:
  - All SNOWFLAKE_* variables
  - All API keys
- Run: streamlit run app.py
- Verify the Snowflake connection works
Files to Exclude:
- .env (use Space secrets instead)
- Local cache directories
- Test data
Support & Resources
Contact:
- Technical Support: danial@musora.com
Related Documentation:
- Main System: ai_messaging_system_v2/README.md
- UI Mode Guide: ai_messaging_system_v2/UI_MODE_GUIDE.md
- Implementation Details: visualization/IMPLEMENTATION_COMPLETE.md
- Refactoring Guide: visualization/ARCHITECTURE_REFACTOR_GUIDE.md
Useful Links:
- Streamlit Documentation: https://docs.streamlit.io
- Snowflake Python Connector: https://docs.snowflake.com/en/developer-guide/python-connector/python-connector
- Plotly Charts: https://plotly.com/python/
System Status
Completion: 100% ✅
Completed Components:
- ✅ Database Layer (db_manager.py)
- ✅ Config Manager (config_manager.py)
- ✅ Session Feedback Manager (session_feedback_manager.py)
- ✅ Experiment Runner (experiment_runner.py)
- ✅ app.py: authentication & config loading
- ✅ Campaign Builder: generation & AB testing
- ✅ Message Viewer: viewing & feedback
- ✅ Analytics: current experiment metrics
- ✅ Historical Analytics: Snowflake integration
Recent Fixes:
- ✅ Configuration save error (JSON escaping): fixed with write_pandas()
- ✅ AB testing AttributeError (__enter__): fixed with thread-safe design
- ✅ Historical Analytics Snowflake connection: fixed to use .env
Ready For:
- ✅ Production use
- ✅ HuggingFace deployment
- ✅ End-to-end testing
- ✅ Team onboarding
Built with ❤️ for the Musora team
Last Updated: 2026-01-14
Version: 2.0 (Refactored Architecture)