# Contributing Tests
This guide explains the testing requirements for contributing code to Gofannon.
## Pull Request Requirements
Every PR must include:
- ✅ Tests for all new code
- ✅ Tests for modified code (if changing behavior)
- ✅ 95% minimum coverage on changed files
- ✅ All tests passing (unit + integration)
- ✅ No lint errors
## PR Testing Checklist
Before submitting your PR, verify:
- I've written unit tests for new functions/components
- I've written integration tests for new endpoints/features
- All tests pass locally
- Coverage is ≥95% on files I modified
- Tests follow the project's testing patterns
- Test names clearly describe what they test
- No tests are skipped or commented out
- CI/CD checks are passing
## Writing Tests for Your Changes

### For New Features
- Write tests first (TDD approach recommended)
- Cover the happy path - normal successful execution
- Cover edge cases - empty input, null values, boundaries
- Cover error cases - validation failures, exceptions
- Test interactions - how your feature works with other features (see the sketch after this list)
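A minimal pytest sketch of that coverage pattern, using a hypothetical `parse_quantity` helper (the function, module path, and error type are illustrative, not part of Gofannon):

```python
import pytest

from myapp.parsing import parse_quantity  # hypothetical module


def test_parse_quantity_returns_count_for_valid_input():
    """Happy path: normal successful execution."""
    assert parse_quantity("3 items") == 3

def test_parse_quantity_returns_zero_for_empty_input():
    """Edge case: empty input at the boundary."""
    assert parse_quantity("") == 0

def test_parse_quantity_rejects_negative_counts():
    """Error case: validation failure raises."""
    with pytest.raises(ValueError):
        parse_quantity("-2 items")
```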
### For Bug Fixes
- Write a failing test that reproduces the bug (as sketched below)
- Fix the bug
- Verify the test passes
- Add additional tests for related edge cases
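As a sketch, a regression test for the null-email bug described later in this guide could look like this; the `create_user` and `ValidationError` names are stand-ins for wherever the bug actually lives:

```python
import pytest

from myapp.users import ValidationError, create_user  # stand-in names


def test_create_user_with_null_email_raises_error():
    """Reproduces the bug: fails before the fix, passes after it."""
    with pytest.raises(ValidationError):
        create_user(email=None)
```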
### For Refactoring
- Ensure existing tests pass before starting
- Don't modify test expectations (behavior shouldn't change)
- Add tests if coverage decreased
- Verify all tests still pass after refactoring
## Test Coverage Rules

### You Are Responsible For
- Files you create: 95% minimum coverage
- Files you modify: Maintain or improve existing coverage
- New functions/methods: 100% coverage (no exceptions)
### Coverage Exceptions

Only exclude from coverage:
- Type checking blocks (`if TYPE_CHECKING:`)
- Abstract methods (`@abstractmethod`)
- Main blocks (`if __name__ == "__main__":`)
- Explicitly unreachable code (`pragma: no cover`)
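A short sketch of what these allowed exclusions look like in Python. Note that coverage.py honors `pragma: no cover` comments by default, while the other patterns are typically excluded through your coverage tool's configuration (e.g. `exclude_lines`); the class and function below are illustrative:

```python
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING

if TYPE_CHECKING:  # only evaluated by static type checkers, never at runtime
    from myapp.models import Agent  # hypothetical import

class AgentStore(ABC):
    @abstractmethod
    def save(self, agent: "Agent") -> None:  # no body for coverage to measure
        ...

def classify(value: int) -> str:
    if value >= 0:
        return "non-negative"
    if value < 0:
        return "negative"
    raise AssertionError("unreachable")  # pragma: no cover

if __name__ == "__main__":  # manual entry point, not exercised by tests
    print(classify(1))
```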
Never exclude:
- Business logic
- Error handling
- Validation code
- Utility functions
## Test Organization

### File Naming
```
# Backend (Python)
tests/unit/test_<module_name>.py
tests/integration/test_<feature_name>.py

# Frontend (JavaScript)
src/components/ComponentName.test.jsx
src/utils/utilityName.test.js
```
### Test Structure
```python
# Backend
class TestFeatureName:
    """Test suite for FeatureName."""

    def test_feature_does_something_when_condition(self):
        """Test that feature does X when Y happens."""
        # Arrange
        # Act
        # Assert
        ...
```

```javascript
// Frontend
describe('ComponentName', () => {
  describe('when prop X is true', () => {
    it('renders element Y', () => {
      // Arrange, Act, Assert
    });
  });
});
```
## Common Scenarios

### Adding a New API Endpoint
Required Tests:
- Unit test for the route handler function
- Unit tests for any new service methods
- Integration test for the full HTTP request/response
- Test authentication/authorization
- Test validation errors
- Test success case with valid data
Example:
```python
# tests/unit/test_routes.py
import pytest

def test_create_agent_validates_input(mock_db):
    with pytest.raises(ValidationError):
        create_agent(CreateAgentRequest(name=""))  # Empty name

def test_create_agent_saves_to_database(mock_db):
    agent = create_agent(CreateAgentRequest(name="Test"))
    mock_db.save.assert_called_once()
```

```python
# tests/integration/test_agent_endpoints.py
def test_create_agent_endpoint(client):
    response = client.post("/agents", json={"name": "Test"})
    assert response.status_code == 201
    assert response.json()["name"] == "Test"
```
### Adding a New React Component
Required Tests:
- Test component renders with required props
- Test user interactions (clicks, typing, etc.)
- Test conditional rendering
- Test prop validation/defaults
- Test error states
- Test accessibility
Example:
```jsx
// ActionCard.test.jsx
describe('ActionCard', () => {
  it('renders with required props', () => {
    render(<ActionCard {...requiredProps} />);
    expect(screen.getByText('Title')).toBeInTheDocument();
  });

  it('calls onClick when clicked', async () => {
    const onClick = vi.fn();
    render(<ActionCard {...requiredProps} onClick={onClick} />);
    await userEvent.click(screen.getByRole('button'));
    expect(onClick).toHaveBeenCalled();
  });

  it('shows error message when prop is invalid', () => {
    render(<ActionCard {...requiredProps} title="" />);
    expect(screen.getByText('Title is required')).toBeInTheDocument();
  });
});
```
### Modifying Existing Code
- Run existing tests to verify they still pass
- Update tests if behavior changed (document why in PR)
- Add new tests for new behavior/edge cases
- Ensure coverage doesn't decrease
## Running Tests Locally

### Before Creating PR
```bash
# 1. Run all unit tests
cd webapp
pnpm test:unit

# 2. Check coverage
pnpm test:coverage

# 3. Run integration tests if you changed API/services
pnpm test:integration

# 4. Run lint
cd packages/webui
pnpm lint

# 5. Verify everything passes
cd ../..
pnpm test
```
### During PR Review
If CI fails:
- Check GitHub Actions logs
- Reproduce failure locally
- Fix the issue
- Re-run tests locally
- Push fix
## Code Review Focus
Reviewers will check:
- Tests cover new/modified code
- Test names are descriptive
- Tests are independent (no shared state)
- Appropriate test type (unit vs integration)
- Mocks used correctly in unit tests
- Edge cases covered
- No flaky tests (random failures)
- Coverage meets 95% threshold
## Examples of Good PRs

### Example 1: New Feature

Title: Add user allowance reset endpoint

Tests Added:
- `test_reset_allowance_sets_to_monthly_limit` (unit)
- `test_reset_allowance_clears_usage_history` (unit)
- `test_reset_allowance_endpoint_returns_updated_user` (integration)
- `test_reset_allowance_requires_authentication` (integration)

Coverage: 98% on modified files
### Example 2: Bug Fix

Title: Fix user creation with missing email

Tests Added:
- `test_create_user_without_email_uses_default` (unit)
- `test_create_user_with_null_email_raises_error` (unit)

Before: Bug allowed null emails
After: Bug fixed, tests verify correct behavior

Coverage: 100% on `user_service.py`
## Getting Help

### Test Writing Help
- Review existing tests in similar files
- Check the Unit Testing Guide
- Ask in team chat or PR comments
### Coverage Issues

- Run `pnpm test:coverage` to see uncovered lines
- Focus on testing the red lines in the coverage report
- Review Coverage Requirements
### CI Failures
- Check GitHub Actions logs for error details
- Reproduce locally with same command from CI
- Check if it's a timing issue (flaky test)
## Common Mistakes to Avoid

### 1. Not Testing Error Cases
```python
# Bad - only tests success
def test_create_user():
    user = create_user("test@example.com")
    assert user.email == "test@example.com"
```

```python
# Good - tests error cases too
def test_create_user_with_invalid_email():
    with pytest.raises(ValidationError):
        create_user("invalid-email")

def test_create_user_with_duplicate_email():
    create_user("test@example.com")
    with pytest.raises(DuplicateEmailError):
        create_user("test@example.com")
```
### 2. Testing Implementation Instead of Behavior
```jsx
// Bad - tests implementation detail
it('calls setState with correct value', () => {
  const setState = vi.spyOn(component, 'setState');
  component.handleClick();
  expect(setState).toHaveBeenCalledWith({ clicked: true });
});
```

```jsx
// Good - tests visible behavior
it('shows success message after clicking', async () => {
  render(<Component />);
  await userEvent.click(screen.getByRole('button'));
  expect(screen.getByText('Success!')).toBeInTheDocument();
});
```
### 3. Shared State Between Tests
```python
# Bad - shared state
user = User(_id="test-123")

def test_update_email():
    user.email = "new@example.com"  # Modifies shared object!

def test_update_name():
    user.name = "New Name"  # Depends on previous test!
```

```python
# Good - isolated tests
def test_update_email():
    user = User(_id="test-123")
    user.email = "new@example.com"

def test_update_name():
    user = User(_id="test-123")
    user.name = "New Name"
```
### 4. Skipping Integration Tests
```python
# Bad - only unit tests for new endpoint
def test_create_agent_saves_to_db(mock_db):
    create_agent(data, mock_db)
    mock_db.save.assert_called()
```

```python
# Good - also has integration test
def test_create_agent_endpoint_e2e(client):
    response = client.post("/agents", json=valid_data)
    assert response.status_code == 201

    # Verify it's actually in the database
    agent = client.get(f"/agents/{response.json()['id']}")
    assert agent.json()["name"] == valid_data["name"]
```
## PR Approval Criteria
Your PR will be approved when:
- ✅ All CI checks pass
- ✅ Coverage is ≥95%
- ✅ Tests are well-written and maintainable
- ✅ Code review feedback addressed
- ✅ Documentation updated (if needed)