
# Testing Strategy

This document outlines the testing strategy for the Calejo Control Adapter project.

## Test Directory Structure

```
tests/
├── unit/                    # Unit tests - test individual components in isolation
├── integration/             # Integration tests - test components working together
├── e2e/                     # End-to-end tests - require external services (mocks)
├── fixtures/                # Test fixtures and data
├── utils/                   # Test utilities
└── mock_services/           # Mock SCADA and optimizer services
```

## Test Categories

### 1. Unit Tests (`tests/unit/`)

- **Purpose:** Test individual functions, classes, and modules in isolation
- **Dependencies:** None or minimal (mocked dependencies)
- **Execution:** `pytest tests/unit/`
- **Examples:** Database clients, configuration validation, business logic (see the sketch after this list)
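
To make the pattern concrete, here is a minimal unit-test sketch. The `DatabaseClient` shown is a hypothetical stand-in, defined inline so the example runs on its own; in a real test you would import the project's actual class and mock only its connection:

```python
# tests/unit/test_database_client.py
from unittest.mock import MagicMock

import pytest


class DatabaseClient:
    """Hypothetical stand-in for the project's real client, included
    only to keep this example self-contained."""

    def __init__(self, connection):
        self._conn = connection

    def get_setpoint(self, tag):
        row = self._conn.fetch_one("SELECT value FROM setpoints WHERE tag = ?", tag)
        if row is None:
            raise KeyError(tag)
        return row["value"]


def test_get_setpoint_returns_row_value():
    # The connection is fully mocked, so no database is needed.
    fake_conn = MagicMock()
    fake_conn.fetch_one.return_value = {"value": 42.0}

    assert DatabaseClient(fake_conn).get_setpoint("pump_1") == 42.0
    fake_conn.fetch_one.assert_called_once()


def test_get_setpoint_unknown_tag_raises():
    # Error scenario: an unknown tag should raise, not return None silently.
    fake_conn = MagicMock()
    fake_conn.fetch_one.return_value = None

    with pytest.raises(KeyError):
        DatabaseClient(fake_conn).get_setpoint("no_such_tag")
```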

### 2. Integration Tests (`tests/integration/`)

- **Purpose:** Test how components work together
- **Dependencies:** May require a database, but not external services
- **Execution:** `pytest tests/integration/`
- **Examples:** Database integration, protocol handlers working together (see the sketch after this list)
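
A sketch of the idea, using a throwaway SQLite database via pytest's built-in `tmp_path` fixture. The table schema here is illustrative, not the project's real one:

```python
# tests/integration/test_database_roundtrip.py
import sqlite3

import pytest


@pytest.fixture
def db_conn(tmp_path):
    """Provide a throwaway on-disk database for each test."""
    conn = sqlite3.connect(tmp_path / "test.db")
    conn.execute("CREATE TABLE setpoints (tag TEXT PRIMARY KEY, value REAL)")
    yield conn
    conn.close()


def test_setpoint_roundtrip(db_conn):
    """A value written through one path should be readable through another."""
    db_conn.execute("INSERT INTO setpoints VALUES (?, ?)", ("pump_1", 42.0))
    db_conn.commit()

    row = db_conn.execute(
        "SELECT value FROM setpoints WHERE tag = ?", ("pump_1",)
    ).fetchone()
    assert row == (42.0,)
```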

### 3. End-to-End Tests (`tests/e2e/`)

- **Purpose:** Test complete workflows with external services
- **Dependencies:** Require the mock SCADA and optimizer services
- **Execution:** Use the dedicated runner scripts
- **Examples:** Complete SCADA-to-optimizer workflows (see the sketch after this list)
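
For orientation, an e2e test might look like the sketch below. The endpoint paths and payloads are assumptions for illustration; real tests must match whatever the mock services actually expose, and they still need the dedicated runners to start those services first:

```python
# tests/e2e/test_scada_to_optimizer.py
import requests

SCADA_URL = "http://localhost:8081"      # mock SCADA (see Mock Services below)
OPTIMIZER_URL = "http://localhost:8082"  # mock optimizer


def test_scada_measurement_reaches_optimizer():
    """Push a measurement into the mock SCADA, check the optimizer saw it."""
    # Hypothetical endpoints; adapt to the actual mock service API.
    resp = requests.post(
        f"{SCADA_URL}/measurements",
        json={"tag": "pump_1", "value": 42.0},
        timeout=5,
    )
    assert resp.status_code == 200

    resp = requests.get(f"{OPTIMIZER_URL}/inputs/pump_1", timeout=5)
    assert resp.status_code == 200
    assert resp.json()["value"] == 42.0
```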

### 4. Mock Services (`tests/mock_services/`)

- **Purpose:** Simulate external SCADA and optimizer services
- **Usage:** Started by the e2e test runners (a minimal example follows this list)
- **Ports:** SCADA (8081), Optimizer (8082)
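
The pattern can be as small as a stdlib HTTP server that records what it receives. This is an illustrative sketch, not the project's actual mock implementation:

```python
# tests/mock_services/mock_scada.py
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RECEIVED = []  # measurements posted by tests, kept in memory


class MockScadaHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Record the posted JSON body and acknowledge it.
        length = int(self.headers.get("Content-Length", 0))
        RECEIVED.append(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep test output quiet


if __name__ == "__main__":
    # Port 8081 matches the convention listed above.
    HTTPServer(("localhost", 8081), MockScadaHandler).serve_forever()
```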

## Test Runners

### For E2E Tests (Mock-Dependent)

```bash
# Starts mock services and runs e2e tests
./scripts/run-reliable-e2e-tests.py

# Quick mock service verification
./scripts/test-mock-services.sh

# Full test environment setup
./scripts/setup-test-environment.sh
```

### For Unit and Integration Tests

```bash
# Run all unit tests
pytest tests/unit/

# Run all integration tests
pytest tests/integration/

# Run a specific test file
pytest tests/unit/test_database_client.py
```

## Deployment Testing

### Current Strategy

- **Deployment Script:** `deploy/ssh/deploy-remote.sh`
- **Purpose:** Deploy to the production server (95.111.206.155)
- **Testing:** Manual verification after deployment
- **Separation:** Deployment is kept separate from automated testing

To add automated deployment testing:

1. Create a `tests/deployment/` directory
2. Add smoke tests that verify the deployment (see the sketch after this list)
3. Run these tests after each deployment
4. Consider using a staging environment for pre-production testing
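
A smoke test can stay very small; a sketch, assuming the deployed adapter exposes a `/health` endpoint (adapt the URL and path to the real service):

```python
# tests/deployment/test_smoke.py
import os

import requests

# Hypothetical; point this at the deployed service's real address.
BASE_URL = os.environ.get("DEPLOY_URL", "http://localhost:8080")


def test_service_is_up():
    """The deployed adapter should answer its health check promptly."""
    resp = requests.get(f"{BASE_URL}/health", timeout=10)
    assert resp.status_code == 200
```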

## Test Execution Guidelines

### When to Run Which Tests

- **Local development:** Run unit tests frequently
- **Before commits:** Run unit and integration tests
- **Before deployment:** Run all tests, including e2e
- **CI/CD pipeline:** Run all test categories

### Mock Service Usage

- E2E tests require the mock services to be running
- Use the dedicated runners, which manage the service lifecycle
- Don't run e2e tests directly with `pytest`; they will fail without the mocks (see the guard sketch after this list)
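
One way to make that failure mode explicit is a session-scoped fixture that skips the whole e2e run when the mocks are unreachable; a sketch, assuming the ports listed above:

```python
# tests/e2e/conftest.py
import socket

import pytest

MOCK_PORTS = {"SCADA": 8081, "optimizer": 8082}


@pytest.fixture(scope="session", autouse=True)
def require_mock_services():
    """Skip e2e tests with a clear message if the mock services are down."""
    for name, port in MOCK_PORTS.items():
        try:
            socket.create_connection(("localhost", port), timeout=1).close()
        except OSError:
            pytest.skip(
                f"mock {name} service not running on port {port}; "
                "use the dedicated e2e runner scripts"
            )
```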

## Adding New Tests

1. **Unit tests:** Add to `tests/unit/`
2. **Integration tests:** Add to `tests/integration/`
3. **E2E tests:** Add to `tests/e2e/` and update the runners if needed
4. **Mock services:** Add to `tests/mock_services/` if new services are needed

## Best Practices

- Keep tests fast and isolated
- Use fixtures for common setup
- Mock external dependencies in unit tests
- Write descriptive test names
- Cover both the happy path and error scenarios
- Use retry logic for flaky network operations (see the sketch after this list)
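
On the last point, a small retry helper keeps the policy in one place; a minimal sketch (in practice a library such as `tenacity` can do the same job):

```python
import time
from functools import wraps


def retry(attempts=3, delay=0.5, exceptions=(ConnectionError, TimeoutError)):
    """Retry a flaky operation a few times, with a fixed delay between tries."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == attempts:
                        raise  # out of attempts: let the failure surface
                    time.sleep(delay)
        return wrapper
    return decorator


@retry(attempts=3)
def fetch_status():
    ...  # e.g. a call to a mock service that occasionally times out
```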