Waldur Testing Guide
Test Writing Best Practices
1. Understand Actual System Behavior
- Always verify actual behavior before writing tests - Don't assume how the system should work
- Test what the system actually does, not what you think it should do
- Example: Basic permission queries don't automatically filter expired roles
2. Use Existing Fixtures and Factories
- Always use established fixtures - Don't invent your own role names
- Use `CustomerRole.SUPPORT`, not `CustomerRole.MANAGER` (which doesn't exist)
- Use `fixtures.ProjectFixture()` for consistent test setup with proper relationships
- Use `factories.UserFactory()` for creating test users with proper defaults
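A minimal setup sketch, assuming the usual `waldur_core.structure.tests` and `waldur_core.permissions.fixtures` import paths and the `add_user` signature shown (adjust to the actual code):

```python
from rest_framework import test

# import paths are illustrative - adjust to the actual module layout
from waldur_core.permissions.fixtures import CustomerRole
from waldur_core.structure.tests import factories, fixtures


class SupportAccessTest(test.APITestCase):
    def setUp(self):
        # ProjectFixture wires up a customer, a project, and related users
        self.fixture = fixtures.ProjectFixture()
        self.user = factories.UserFactory()
        # grant an existing role; CustomerRole.MANAGER would fail here
        # because no such role is defined (add_user signature is assumed)
        self.fixture.customer.add_user(self.user, CustomerRole.SUPPORT)
```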
3. Error Handling Reality Check
- Test for actual exceptions, not ideal ones
- If the system raises `AttributeError` for missing attributes, test for `AttributeError`
- Only test for `PermissionDenied` when the system actually catches and converts errors
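A sketch of asserting the real exception, using a self-contained stand-in for the permission lookup (the helper name is illustrative):

```python
from unittest import mock

from rest_framework import test


def resolve_permission_scope(resource):
    # illustrative stand-in for permission code that walks a nested path
    return resource.project.customer


class ErrorRealityTest(test.APITestCase):
    def test_missing_nested_attribute(self):
        # mock.Mock(spec=[]) exposes no attributes at all, so the lookup
        # fails exactly the way it does on a malformed resource
        broken_resource = mock.Mock(spec=[])
        # assert the exception the code actually raises; asserting
        # PermissionDenied here would only be right if the system caught
        # and converted the error, which it does not
        with self.assertRaises(AttributeError):
            resolve_permission_scope(broken_resource)
```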
4. Mock Objects for Complex Testing
- Use Mock objects effectively for nested permission paths
- Create realistic mock structures: `mock_resource.project.customer = self.customer`
- Test permission factory with multiple source paths: `["direct_customer", "project.customer"]`
- Mock objects help test complex scenarios without database overhead
5. Time-Based Testing Patterns
- Understand explicit vs implicit time checking
- Basic `has_permission()` doesn't check expiration times automatically
- Test boundary conditions: exact expiration time, microseconds past expiration
- Create roles with `timezone.now() ± timedelta()` for realistic time testing
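A boundary-condition sketch, assuming `add_user` accepts an `expiration_time` keyword (adjust to the actual signature):

```python
from datetime import timedelta

from django.utils import timezone
from rest_framework import test

# import paths are illustrative
from waldur_core.permissions.fixtures import CustomerRole
from waldur_core.structure.tests import factories, fixtures


class RoleExpirationTest(test.APITestCase):
    def setUp(self):
        self.fixture = fixtures.ProjectFixture()
        self.user = factories.UserFactory()

    def test_role_expired_a_microsecond_ago(self):
        # boundary condition: expiration just barely in the past
        expired = timezone.now() - timedelta(microseconds=1)
        self.fixture.customer.add_user(
            self.user, CustomerRole.SUPPORT, expiration_time=expired
        )
        # per item 1: assert what basic permission queries actually
        # return for expired roles, not what would seem ideal
```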
6. Test Base Class Selection
Choose the right test base class for each test:
- Default: `test.APITestCase` - uses transaction rollback, much faster
- Use `test.APITransactionTestCase` only when:
  - `transaction.on_commit()` callbacks must fire (e.g., Celery task dispatch)
  - `IntegrityError` is deliberately triggered (breaks TestCase's wrapping transaction)
  - Threading or multi-process database access is needed
  - `responses.start()` in `setUp` is used for class-wide HTTP mocking (it leaks across TestCase classes)
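A minimal sketch of a justified use: `transaction.on_commit()` callbacks only fire when the test actually commits, which `APITestCase` never does:

```python
from django.db import transaction
from rest_framework import test


class OnCommitDispatchTest(test.APITransactionTestCase):
    """Justified: on_commit() callbacks never fire inside APITestCase's
    wrapping transaction, so TransactionTestCase semantics are required."""

    def test_callback_fires_after_commit(self):
        fired = []
        with transaction.atomic():
            transaction.on_commit(lambda: fired.append("dispatched"))
            # still inside the transaction - nothing has fired yet
            self.assertEqual(fired, [])
        # the atomic block committed, so the callback has run
        self.assertEqual(fired, ["dispatched"])
```

On Django 3.2 and later, `TestCase.captureOnCommitCallbacks(execute=True)` can often keep such a test on the faster `APITestCase` instead.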
A CI lint job (`scripts/analyze_transaction_test_cases.py --ci --baseline N`) enforces this - adding new unjustified `APITransactionTestCase` classes will fail the pipeline. The baseline is lowered as classes are migrated.
7. Performance Testing Considerations
- Include query optimization tests where appropriate
- Use `override_settings(DEBUG=True)` to count database queries
- Test with multiple users/roles to ensure performance doesn't degrade
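A query-count sketch; the endpoint and threshold are illustrative:

```python
from django.db import connection, reset_queries
from django.test import override_settings
from rest_framework import test


class ProjectListQueryCountTest(test.APITestCase):
    @override_settings(DEBUG=True)
    def test_list_query_count(self):
        # DEBUG=True makes Django record executed SQL in connection.queries
        reset_queries()
        self.client.get("/api/projects/")  # illustrative endpoint
        # pick a threshold that matches the view; the point is to fail
        # loudly when an N+1 query pattern sneaks in
        self.assertLess(len(connection.queries), 10)
```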
8. System Role Protection
- Test that system roles work correctly even when modified
- System roles like `CustomerRole.OWNER` should maintain functionality
- Test that role modifications don't break core functionality
- Verify that predefined roles have expected permissions
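A hedged sketch of verifying a predefined role; the import paths, `add_user` call, and endpoint are assumptions to adapt:

```python
from rest_framework import status, test

# import paths are illustrative
from waldur_core.permissions.fixtures import CustomerRole
from waldur_core.structure.tests import factories, fixtures


class PredefinedRoleTest(test.APITestCase):
    def test_owner_role_grants_customer_access(self):
        fixture = fixtures.ProjectFixture()
        owner = factories.UserFactory()
        # CustomerRole.OWNER is a predefined system role
        fixture.customer.add_user(owner, CustomerRole.OWNER)
        self.client.force_authenticate(owner)
        response = self.client.get("/api/customers/")  # illustrative endpoint
        self.assertEqual(response.status_code, status.HTTP_200_OK)
```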
9. Edge Case Testing
- Test None values, missing attributes, and circular references
- Handle `AttributeError` when accessing missing nested attributes
- Test with inactive users, deleted roles, removed permissions
- Verify behavior with complex nested object hierarchies
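An inactive-user sketch; per item 1, the asserted status should match what the system actually returns (the factory kwarg and endpoint are assumptions):

```python
from rest_framework import status, test

from waldur_core.structure.tests import factories  # illustrative path


class InactiveUserEdgeCaseTest(test.APITestCase):
    def test_inactive_user_request(self):
        inactive = factories.UserFactory(is_active=False)
        self.client.force_authenticate(inactive)
        response = self.client.get("/api/projects/")  # illustrative endpoint
        # verify the behavior the system actually exhibits for inactive
        # users (401 vs 403 vs an empty list), rather than assuming one
        self.assertIn(
            response.status_code,
            (status.HTTP_401_UNAUTHORIZED, status.HTTP_403_FORBIDDEN),
        )
```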
10. HTTP Mocking Patterns
Preferred: `@responses.activate` per method - fully isolated, no cleanup needed:
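A sketch of the pattern (the URL and payload are illustrative):

```python
import responses
from rest_framework import test


class PerMethodHttpMockTest(test.APITestCase):
    @responses.activate
    def test_external_call(self):
        # mocks registered here exist only for this method; the
        # decorator tears everything down on exit
        responses.add(
            responses.GET,
            "https://example.com/api/status",
            json={"status": "ok"},
            status=200,
        )
        # ... exercise the code that performs the HTTP request ...
```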
Class-wide mocking with `responses.start()` - requires `APITransactionTestCase`:
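A sketch of the class-wide variant, with explicit cleanup in `tearDown`:

```python
import responses
from rest_framework import test


class ClassWideHttpMockTest(test.APITransactionTestCase):
    def setUp(self):
        super().setUp()
        responses.start()
        responses.add(
            responses.GET,
            "https://example.com/api/status",  # illustrative URL
            json={"status": "ok"},
            status=200,
        )

    def tearDown(self):
        # stop and reset explicitly so mocked responses cannot leak
        # into whatever test class runs next
        responses.stop()
        responses.reset()
        super().tearDown()
```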
Using `responses.start()` in `setUp` with `APITestCase` causes leaked mock state across test classes, because `TestCase` doesn't fully reset process-level state between classes.
11. Multiple Inheritance Pitfall
When combining `APITransactionTestCase` with a mixin that extends `APITestCase`, Python's MRO can silently break `TransactionTestCase` behavior:
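A sketch of the pitfall and its fix:

```python
from rest_framework import test


class HttpMockingMixin(test.APITestCase):
    """A mixin that (problematically) extends APITestCase."""


# BROKEN: the MRO is BrokenTest -> HttpMockingMixin -> APITestCase ->
# TestCase -> APITransactionTestCase -> ..., so TestCase's wrapping
# transaction wins and TransactionTestCase semantics silently vanish
# (e.g. transaction.on_commit() callbacks never fire).
class BrokenTest(HttpMockingMixin, test.APITransactionTestCase):
    pass


# FIX: keep mixins free of test-case base classes and name the
# test case exactly once in the final class.
class SafeHttpMockingMixin:
    """Plain mixin with no test-case base class."""


class WorkingTest(SafeHttpMockingMixin, test.APITransactionTestCase):
    pass
```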
12. OpenStack Backend Test Patterns
When writing standalone backend tests that don't inherit from `BaseBackendTestCase`:
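A minimal sketch of the setup; the patch target is hypothetical and must point at the module where the backend actually looks up its OpenStack client:

```python
from unittest import mock

from rest_framework import test


class StandaloneBackendTest(test.APITestCase):
    def setUp(self):
        super().setUp()
        # patch the client factory so no real OpenStack API is contacted;
        # the target below is a hypothetical stand-in
        patcher = mock.patch("waldur_openstack.backend.get_keystone_session")
        self.mocked_session = patcher.start()
        # addCleanup guarantees the patch stops even when a test fails,
        # which manual setUp/tearDown pairs do not
        self.addCleanup(patcher.stop)

    def test_backend_uses_mocked_session(self):
        # construct the backend under test here and assert that it talks
        # to self.mocked_session instead of a live cloud
        self.assertFalse(self.mocked_session.called)
```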
Test Guidelines
- Test behavior, not implementation
- One assertion per test when possible
- Clear test names describing scenario
- Use existing test utilities/helpers
- Tests should be deterministic
Debugging Complex Systems
When fixing performance or accuracy issues:
- Isolate the problem:
  - Run individual failing tests to understand specific issues
  - Use `pytest -v -s` for verbose output with print statements
  - Check if multiple tests fail for the same underlying reason
- Understand test expectations:
  - Read test comments carefully - they often explain intended behavior
  - Check if tests expect specific error types
  - Look for conflicting expectations between test suites
- Fix systematically:
  - Fix one root cause at a time
  - After each fix, run the full test suite to check for regressions
  - Update related tests for consistency when changing behavior
- API changes require test updates:
  - When changing function signatures or default parameters, expect test failures
  - Update tests for consistency rather than reverting functional improvements
  - Document parameter behavior changes clearly