# Testing Documentation for VAMGUARD_TITAN / TIA-ARCHITECT-CORE

## Overview

This document provides comprehensive information about the test suite for the VAMGUARD_TITAN repository, including test coverage, how to run tests, and testing best practices.

## Test Structure

```text
tests/
├── __init__.py                           # Test package initialization
├── conftest.py                           # Pytest fixtures and configuration
├── test_genesis_boiler.py                # Tests for genesis_boiler.py
├── test_worker_watchdog.py               # Tests for worker_watchdog.py
├── test_self_healing_worker.py           # Tests for self_healing_worker.py
├── test_apps_script_toolbox.py           # Tests for apps_script_toolbox.py
├── test_download_citadel_omega_models.py # Tests for download scripts
└── test_app.py                           # Tests for Streamlit app
```

## Test Coverage

### Module Coverage

| Module | Coverage | Test Cases | Status |
|--------|----------|------------|--------|
| genesis_boiler.py | ~95% | 25+ | ✅ Complete |
| worker_watchdog.py | ~90% | 30+ | ✅ Complete |
| self_healing_worker.py | ~90% | 35+ | ✅ Complete |
| apps_script_toolbox.py | ~85% | 20+ | ✅ Complete |
| download_citadel_omega_models.py | ~80% | 15+ | ✅ Complete |
| app.py | ~75% | 25+ | ✅ Complete |

### Coverage by Component

#### GenesisBoiler (genesis_boiler.py)

- ✅ Initialization
- ✅ Territory auditing
- ✅ File consolidation (tarball creation)
- ✅ Error handling (OSError, PermissionError, IOError)
- ✅ Path validation
- ✅ Multiple source handling
- ✅ Non-existent path handling

#### WorkerWatchdog (worker_watchdog.py)

- ✅ Initialization and configuration
- ✅ File hash calculation (SHA256)
- ✅ Change detection (new, modified, deleted files)
- ✅ Self-healing trigger
- ✅ Workflow health checking
- ✅ State persistence (save/load)
- ✅ Continuous monitoring
- ✅ Template change detection
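To illustrate the SHA256-based change detection listed above, here is a minimal sketch of the approach: snapshot each file's digest, then diff two snapshots. The function names are illustrative, not the actual `worker_watchdog.py` API.

```python
import hashlib
from pathlib import Path


def file_hash(path: Path) -> str:
    """Return the SHA256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def detect_changes(old: dict, new: dict) -> dict:
    """Compare two {path: hash} snapshots and classify the differences."""
    return {
        "new": sorted(set(new) - set(old)),
        "deleted": sorted(set(old) - set(new)),
        "modified": sorted(p for p in set(old) & set(new) if old[p] != new[p]),
    }
```

Hashing in fixed-size chunks keeps memory use flat even on large files, which is why the tests cover both small and multi-megabyte inputs.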

#### SelfHealingWorker (self_healing_worker.py)

- ✅ Script health checking
- ✅ Python script validation (AST parsing)
- ✅ Bash script validation
- ✅ Import checking
- ✅ Auto-repair (shebang, imports, permissions)
- ✅ Backup creation
- ✅ Health reporting
- ✅ Full healing workflow
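The Python-validation step above relies on AST parsing: a script is considered syntactically healthy if it parses. A minimal sketch of that check (the real `self_healing_worker.py` implementation may differ) is:

```python
import ast


def validate_python_source(source: str) -> tuple:
    """Return (is_valid, message) by attempting to parse the source as Python."""
    try:
        ast.parse(source)
        return True, "ok"
    except SyntaxError as exc:
        return False, f"syntax error at line {exc.lineno}: {exc.msg}"
```

Parsing with `ast` validates syntax without executing the script, which is what makes this safe to run against arbitrary repository files.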

#### AppsScriptToolbox (apps_script_toolbox.py)

- ✅ Worker initialization
- ✅ Connection verification
- ✅ Identity strike reports
- ✅ Full archive audits
- ✅ Worker status dashboard
- ✅ Error handling

#### Download Scripts

- ✅ Model downloading
- ✅ Registry creation
- ✅ Path management
- ✅ Error handling
- ✅ Already-downloaded detection

#### Streamlit App (app.py)

- ✅ Configuration structure
- ✅ Environment variables
- ✅ Data directory management
- ✅ UI component structure
- ✅ Models registry integration
- ✅ Workers constellation
- ✅ RAG system references
- ✅ Tools and utilities

## Running Tests

### Prerequisites

```bash
# Install main dependencies
pip install -r requirements.txt

# Install test dependencies
pip install -r requirements-test.txt
```

### Run All Tests

```bash
# Run all tests with coverage
pytest -v --cov=. --cov-report=term-missing

# Run all tests with HTML coverage report
pytest -v --cov=. --cov-report=html

# Run specific test file
pytest tests/test_genesis_boiler.py -v

# Run specific test class
pytest tests/test_genesis_boiler.py::TestGenesisBoilerInit -v

# Run specific test
pytest tests/test_genesis_boiler.py::TestGenesisBoilerInit::test_init_default_values -v
```

### Test Markers

Tests are marked with the following markers:

- `@pytest.mark.unit` - Unit tests
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.slow` - Slow-running tests
- `@pytest.mark.requires_network` - Tests requiring network access
- `@pytest.mark.requires_hf_token` - Tests requiring a HuggingFace token

```bash
# Run only unit tests
pytest -v -m unit

# Run only integration tests
pytest -v -m integration

# Skip slow tests
pytest -v -m "not slow"

# Skip network-dependent tests
pytest -v -m "not requires_network"
```
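Custom markers must be registered with pytest or each use raises a `PytestUnknownMarkWarning`. Assuming the project keeps its pytest configuration in a `pytest.ini` (it may equally live in `pyproject.toml` or `setup.cfg`), the registration would look like:

```ini
# pytest.ini (illustrative; adjust to wherever this repo keeps pytest config)
[pytest]
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow-running tests
    requires_network: Tests requiring network access
    requires_hf_token: Tests requiring a HuggingFace token
```

Running `pytest --markers` lists everything registered, which is a quick way to verify the configuration is picked up.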

### Coverage Reports

```bash
# Generate terminal coverage report
coverage run -m pytest
coverage report

# Generate HTML coverage report (open htmlcov/index.html in a browser)
coverage html

# Generate XML coverage report (for CI/CD)
coverage xml
```

## Test Fixtures

### Common Fixtures (from conftest.py)

- `temp_dir` - Creates a temporary directory for testing
- `mock_env_vars` - Mocks environment variables
- `sample_python_file` - Creates a sample Python file
- `sample_directory_structure` - Creates a directory structure with files
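For orientation, the first two fixtures above could be defined roughly as follows. This is a hedged sketch, not the actual `conftest.py`; the token value matches the usage example below, but the real implementations may differ.

```python
import shutil
import tempfile
from pathlib import Path

import pytest


@pytest.fixture
def temp_dir():
    """Yield a temporary directory as a Path, removed after the test."""
    path = Path(tempfile.mkdtemp())
    yield path
    shutil.rmtree(path, ignore_errors=True)


@pytest.fixture
def mock_env_vars(monkeypatch):
    """Set test environment variables for the duration of a single test."""
    monkeypatch.setenv("HF_TOKEN", "test_token_123")
```

Using `yield` plus explicit cleanup (or `monkeypatch`, which undoes itself) keeps each test isolated from the next.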

### Usage Example

```python
import os


def test_with_temp_dir(temp_dir):
    """Test using the temp_dir fixture"""
    test_file = temp_dir / "test.txt"
    test_file.write_text("content")
    assert test_file.exists()


def test_with_mock_env(mock_env_vars):
    """Test using mocked environment variables"""
    assert os.getenv("HF_TOKEN") == "test_token_123"
```

## Writing New Tests

### Test Structure

```python
"""
Module docstring explaining what is being tested
"""
import sys
from pathlib import Path
from unittest.mock import Mock, patch

import pytest

# Add the repository root to sys.path if needed
sys.path.insert(0, str(Path(__file__).parent.parent))

from module_to_test import ClassToTest


class TestClassName:
    """Test class with a descriptive name"""

    def test_specific_functionality(self):
        """Test with a clear description"""
        # Arrange
        obj = ClassToTest()

        # Act
        result = obj.method()

        # Assert
        assert result == expected_value
```

### Best Practices

1. **Descriptive Names**: Use clear, descriptive test names
2. **Arrange-Act-Assert**: Structure tests with clear sections
3. **One Assertion Per Test**: Focus each test on one behavior
4. **Use Fixtures**: Reuse common setup code via fixtures
5. **Mock External Dependencies**: Use mocks for external services
6. **Test Edge Cases**: Include error conditions and edge cases
7. **Document Tests**: Add docstrings explaining what is being tested

## Continuous Integration

Tests run automatically on:

- Push to `main`, `develop`, or `claude/*` branches
- Pull requests to `main`
- Manual workflow dispatch

### CI/CD Pipeline

1. **Test Job**: Runs tests on Python 3.10, 3.11, 3.12, and 3.13
2. **Lint Job**: Runs ruff, black, and isort
3. **Coverage Upload**: Uploads coverage to Codecov
4. **Artifacts**: Saves HTML coverage reports

## Areas for Future Improvement

### Missing Test Coverage

1. **Integration Tests**
   - End-to-end workflow tests
   - Multi-component integration tests
   - Real HuggingFace API tests (with token)
2. **Performance Tests**
   - Large file handling
   - Memory usage
   - Execution time benchmarks
3. **UI Tests**
   - Streamlit component testing
   - UI interaction tests
   - Visual regression tests
4. **Network Tests**
   - API endpoint tests
   - Model download tests (requires network)
   - GitHub API integration tests

### Recommendations

1. **Increase Coverage**
   - Add edge case tests
   - Test error recovery paths
   - Add boundary condition tests
2. **Add Integration Tests**
   - Test complete workflows
   - Test component interactions
   - Test with real data
3. **Performance Testing**
   - Add benchmarks for critical paths
   - Profile memory usage
   - Add load testing
4. **Documentation**
   - Add more test examples
   - Document testing patterns
   - Create a testing guide

## Test Metrics

### Current Status (as of 2026-04-14)

- **Total Test Files**: 7
- **Total Test Cases**: 150+
- **Overall Coverage**: ~85%
- **Lines Covered**: ~1,800+
- **Branch Coverage**: ~70%

### Coverage Goals

- **Target Coverage**: 90%
- **Minimum Coverage**: 80%
- **Critical Modules**: 95%+
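These thresholds can be enforced mechanically rather than by convention. Assuming coverage.py / pytest-cov is in use (as the commands above suggest), a fail-under setting makes the run exit non-zero when coverage drops below the minimum:

```ini
# .coveragerc (illustrative; the same setting can live in pyproject.toml)
[report]
fail_under = 80
```

The equivalent one-off check on the command line is `pytest --cov=. --cov-fail-under=80`, which is convenient for wiring the minimum into CI.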

## Troubleshooting

### Common Issues

1. **Import errors**

   ```bash
   # Ensure all dependencies are installed
   pip install -r requirements.txt -r requirements-test.txt
   ```

2. **Path issues**

   ```python
   # Use absolute paths in tests
   test_path = Path(__file__).parent.parent / "file.py"
   ```

3. **Fixture not found**
   - Ensure `conftest.py` is in the `tests/` directory
   - Check that the fixture name matches

4. **Mock not working**

   ```python
   # Use the correct patch target
   with patch("module.function") as mock_func:
       ...  # test code
   ```
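"Use the correct patch target" means patching the name where it is *looked up*, not where it was originally defined: a module that does `from requests import get` must have `get` patched in its own namespace. A self-contained illustration (the `downloader` module here is hypothetical, built in-line so the example runs on its own):

```python
import types
from unittest.mock import patch

# Stand-in for a hypothetical module that did `from requests import get`
# at import time: `get` is then a name in the module's own namespace.
downloader = types.ModuleType("downloader")
downloader.get = lambda url: "real response"  # the imported name
exec("def fetch(url):\n    return get(url)\n", downloader.__dict__)

# Patch the name where it is looked up (downloader.get), not where it
# was defined (requests.get):
with patch.object(downloader, "get", return_value="mocked"):
    assert downloader.fetch("http://example.com") == "mocked"

# Outside the context manager the original function is restored:
assert downloader.fetch("http://example.com") == "real response"
```

Patching `requests.get` instead would leave `downloader.get` untouched, which is the usual reason a mock "doesn't work".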


## Contributing

When adding new code:

1. Write tests first (TDD approach)
2. Ensure at least 80% coverage
3. Run the full test suite before committing
4. Update this documentation if needed

## Contact

For questions about testing:

- Review existing tests for examples
- Check the pytest documentation
- Create an issue for test-specific questions