# Testing Documentation for VAMGUARD_TITAN / TIA-ARCHITECT-CORE
## Overview
This document provides comprehensive information about the test suite for the VAMGUARD_TITAN repository, including test coverage, how to run tests, and testing best practices.
## Test Structure
```
tests/
├── __init__.py # Test package initialization
├── conftest.py # Pytest fixtures and configuration
├── test_genesis_boiler.py # Tests for genesis_boiler.py
├── test_worker_watchdog.py # Tests for worker_watchdog.py
├── test_self_healing_worker.py # Tests for self_healing_worker.py
├── test_apps_script_toolbox.py # Tests for apps_script_toolbox.py
├── test_download_citadel_omega_models.py # Tests for download scripts
└── test_app.py # Tests for Streamlit app
```
## Test Coverage
### Module Coverage
| Module | Coverage | Test Cases | Status |
|--------|----------|------------|--------|
| genesis_boiler.py | ~95% | 25+ | ✅ Complete |
| worker_watchdog.py | ~90% | 30+ | ✅ Complete |
| self_healing_worker.py | ~90% | 35+ | ✅ Complete |
| apps_script_toolbox.py | ~85% | 20+ | ✅ Complete |
| download_citadel_omega_models.py | ~80% | 15+ | ✅ Complete |
| app.py | ~75% | 25+ | ✅ Complete |
### Coverage by Component
#### GenesisBoiler (genesis_boiler.py)
- ✅ Initialization
- ✅ Territory auditing
- ✅ File consolidation (tarball creation)
- ✅ Error handling (OSError, PermissionError, IOError)
- ✅ Path validation
- ✅ Multiple source handling
- ✅ Non-existent path handling
#### WorkerWatchdog (worker_watchdog.py)
- ✅ Initialization and configuration
- ✅ File hash calculation (SHA256)
- ✅ Change detection (new, modified, deleted files)
- ✅ Self-healing trigger
- ✅ Workflow health checking
- ✅ State persistence (save/load)
- ✅ Continuous monitoring
- ✅ Template change detection
#### SelfHealingWorker (self_healing_worker.py)
- ✅ Script health checking
- ✅ Python script validation (AST parsing)
- ✅ Bash script validation
- ✅ Import checking
- ✅ Auto-repair (shebang, imports, permissions)
- ✅ Backup creation
- ✅ Health reporting
- ✅ Full healing workflow
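The AST-based validation can be illustrated like this; `check_python_syntax` is a hypothetical helper name, not necessarily the worker's real method:

```python
import ast


def check_python_syntax(source: str) -> bool:
    """Return True when the source parses as valid Python."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False
```

Tests feed it both well-formed and deliberately broken snippets and assert on the boolean result.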
#### AppsScriptToolbox (apps_script_toolbox.py)
- ✅ Worker initialization
- ✅ Connection verification
- ✅ Identity strike reports
- ✅ Full archive audits
- ✅ Worker status dashboard
- ✅ Error handling
#### Download Scripts
- ✅ Model downloading
- ✅ Registry creation
- ✅ Path management
- ✅ Error handling
- ✅ Already-downloaded detection
#### Streamlit App (app.py)
- ✅ Configuration structure
- ✅ Environment variables
- ✅ Data directory management
- ✅ UI component structure
- ✅ Models registry integration
- ✅ Workers constellation
- ✅ RAG system references
- ✅ Tools and utilities
## Running Tests
### Prerequisites
```bash
# Install main dependencies
pip install -r requirements.txt
# Install test dependencies
pip install -r requirements-test.txt
```
### Run All Tests
```bash
# Run all tests with coverage
pytest -v --cov=. --cov-report=term-missing
# Run all tests with HTML coverage report
pytest -v --cov=. --cov-report=html
# Run specific test file
pytest tests/test_genesis_boiler.py -v
# Run specific test class
pytest tests/test_genesis_boiler.py::TestGenesisBoilerInit -v
# Run specific test
pytest tests/test_genesis_boiler.py::TestGenesisBoilerInit::test_init_default_values -v
```
### Test Markers
Tests are marked with the following markers:
- `@pytest.mark.unit` - Unit tests
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.slow` - Slow-running tests
- `@pytest.mark.requires_network` - Tests requiring network access
- `@pytest.mark.requires_hf_token` - Tests requiring HuggingFace token
```bash
# Run only unit tests
pytest -v -m unit
# Run only integration tests
pytest -v -m integration
# Skip slow tests
pytest -v -m "not slow"
# Skip network-dependent tests
pytest -v -m "not requires_network"
```
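For pytest to treat these markers as known (and for `--strict-markers` not to fail), they must be registered. A `pytest.ini` sketch, assuming the repository does not already register them elsewhere (e.g. in `pyproject.toml`):

```ini
[pytest]
markers =
    unit: Unit tests
    integration: Integration tests
    slow: Slow-running tests
    requires_network: Tests requiring network access
    requires_hf_token: Tests requiring a HuggingFace token
```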
### Coverage Reports
```bash
# Generate coverage report
coverage run -m pytest
coverage report
# Generate HTML coverage report
coverage html
# Open htmlcov/index.html in browser
# Generate XML coverage report (for CI/CD)
coverage xml
```
## Test Fixtures
### Common Fixtures (from conftest.py)
- `temp_dir` - Creates a temporary directory for testing
- `mock_env_vars` - Mocks environment variables
- `sample_python_file` - Creates a sample Python file
- `sample_directory_structure` - Creates a directory structure with files
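A rough sketch of how such fixtures are commonly defined in `conftest.py`; the repository's actual implementations may differ:

```python
import shutil
import tempfile
from pathlib import Path

import pytest


@pytest.fixture
def temp_dir():
    """Yield a temporary directory as a Path, removed after the test."""
    path = Path(tempfile.mkdtemp())
    yield path
    shutil.rmtree(path, ignore_errors=True)


@pytest.fixture
def mock_env_vars(monkeypatch):
    """Set placeholder environment variables for one test."""
    monkeypatch.setenv("HF_TOKEN", "test_token_123")
```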
### Usage Example
```python
import os


def test_with_temp_dir(temp_dir):
    """Test using temp_dir fixture"""
    test_file = temp_dir / "test.txt"
    test_file.write_text("content")
    assert test_file.exists()


def test_with_mock_env(mock_env_vars):
    """Test using mocked environment variables"""
    assert os.getenv("HF_TOKEN") == "test_token_123"
```
## Writing New Tests
### Test Structure
```python
"""
Module docstring explaining what is being tested
"""
import sys
from pathlib import Path
from unittest.mock import Mock, patch

import pytest

# Add parent to path if needed
sys.path.insert(0, str(Path(__file__).parent.parent))

from module_to_test import ClassToTest


class TestClassName:
    """Test class with descriptive name"""

    def test_specific_functionality(self):
        """Test with clear description"""
        # Arrange
        obj = ClassToTest()
        # Act
        result = obj.method()
        # Assert
        assert result == expected_value
```
### Best Practices
1. **Descriptive Names**: Use clear, descriptive test names
2. **Arrange-Act-Assert**: Structure tests with clear sections
3. **One Assertion Per Test**: Focus each test on one behavior
4. **Use Fixtures**: Reuse common setup code via fixtures
5. **Mock External Dependencies**: Use mocks for external services
6. **Test Edge Cases**: Include error conditions and edge cases
7. **Document Tests**: Add docstrings explaining what is being tested
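Practices 2 and 5 together look like this in miniature; `fetch_status` and its client are hypothetical names, not repository code:

```python
from unittest.mock import Mock


def fetch_status(client) -> str:
    """Toy function under test: asks an external client for a status."""
    return client.get("/status").strip()


def test_fetch_status_strips_whitespace():
    # Arrange: mock the external client instead of calling a real service
    client = Mock()
    client.get.return_value = " ok \n"
    # Act
    result = fetch_status(client)
    # Assert: one behavior per test
    assert result == "ok"
    client.get.assert_called_once_with("/status")
```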
## Continuous Integration
Tests run automatically on:
- Push to `main`, `develop`, or `claude/*` branches
- Pull requests to `main`
- Manual workflow dispatch
### CI/CD Pipeline
1. **Test Job**: Runs tests on Python 3.10, 3.11, 3.12, 3.13
2. **Lint Job**: Runs ruff, black, isort
3. **Coverage Upload**: Uploads coverage to Codecov
4. **Artifacts**: Saves HTML coverage reports
## Areas for Future Improvement
### Missing Test Coverage
1. **Integration Tests**
- End-to-end workflow tests
- Multi-component integration tests
- Real HuggingFace API tests (with token)
2. **Performance Tests**
- Large file handling
- Memory usage
- Execution time benchmarks
3. **UI Tests**
- Streamlit component testing
- UI interaction tests
- Visual regression tests
4. **Network Tests**
- API endpoint tests
- Model download tests (requires network)
- GitHub API integration tests
### Recommendations
1. **Increase Coverage**
- Add edge case tests
- Test error recovery paths
- Add boundary condition tests
2. **Add Integration Tests**
- Test complete workflows
- Test component interactions
- Test with real data
3. **Performance Testing**
- Add benchmarks for critical paths
- Memory profiling
- Load testing
4. **Documentation**
- Add more test examples
- Document testing patterns
- Create testing guide
## Test Metrics
### Current Status (as of 2026-04-14)
- **Total Test Files**: 7
- **Total Test Cases**: 150+
- **Overall Coverage**: ~85%
- **Lines Covered**: ~1,800
- **Branch Coverage**: ~70%
### Coverage Goals
- **Target Coverage**: 90%
- **Minimum Coverage**: 80%
- **Critical Modules**: 95%+
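The minimum can be enforced automatically; a hypothetical `.coveragerc` fragment (the same threshold could instead be passed via pytest-cov's `--cov-fail-under` flag):

```ini
[report]
# Fail `coverage report` when total coverage drops below the stated minimum
fail_under = 80
```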
## Troubleshooting
### Common Issues
1. **Import Errors**
```bash
# Ensure all dependencies are installed
pip install -r requirements.txt -r requirements-test.txt
```
2. **Path Issues**
```python
# Use absolute paths in tests
test_path = Path(__file__).parent.parent / "file.py"
```
3. **Fixture Not Found**
```python
# Ensure conftest.py is in tests directory
# Check fixture name matches
```
4. **Mock Not Working**
```python
# Patch the target where it is looked up (in the module under test),
# not where it is defined
with patch('module.function') as mock_func:
    ...  # test code using mock_func
```
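A common cause is patching a name where it is defined rather than where it is looked up. A self-contained illustration using only stdlib names (the code under test is a generic stand-in, not repository code):

```python
import os.path
from unittest.mock import patch


def disk_file_exists(path: str) -> bool:
    """Stand-in for code under test that consults os.path.exists."""
    return os.path.exists(path)


# Patching os.path.exists intercepts the lookup disk_file_exists performs,
# so the nonexistent path appears to exist inside the context manager
with patch("os.path.exists", return_value=True):
    assert disk_file_exists("/no/such/file") is True
```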
## Resources
- [Pytest Documentation](https://docs.pytest.org/)
- [Coverage.py Documentation](https://coverage.readthedocs.io/)
- [Python Testing Best Practices](https://docs.python-guide.org/writing/tests/)
- [Mock Documentation](https://docs.python.org/3/library/unittest.mock.html)
## Contributing
When adding new code:
1. Write tests first (TDD approach)
2. Ensure minimum 80% coverage
3. Run full test suite before committing
4. Update this documentation if needed
## Contact
For questions about testing:
- Review existing tests for examples
- Check pytest documentation
- Create an issue for test-specific questions