Apex Testing
Generate Apex test classes with proper coverage. AI writes the scaffolding, test data, and assertions so you can focus on the logic.
Use this skill when the user needs Apex test execution and failure analysis: running tests, checking coverage, interpreting failures, improving coverage, and managing a disciplined test-fix loop for Salesforce code.
When This Skill Owns the Task
Use sf-testing when the work involves:
- `sf apex run test` workflows
- Apex unit-test failures
- code coverage analysis
- identifying uncovered lines and missing test scenarios
- structured test-fix loops for Apex code
Delegate elsewhere when the user is:
- writing or refactoring production Apex → sf-apex
- testing Agentforce agents → sf-ai-agentforce-testing
- testing LWC with Jest → sf-lwc
Required Context to Gather First
Ask for or infer:
- target org alias
- desired test scope: single class, specific methods, suite, or local tests
- coverage threshold expectation
- whether the user wants diagnosis only or a test-fix loop
- whether related test data factories already exist
Recommended Workflow
1. Discover test scope
Identify:
- existing test classes
- target production classes
- test data factories / setup helpers
2. Run the smallest useful test set first
Start narrow when debugging a failure; widen only after the fix is stable.
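Narrowing scope with the sf CLI might look like this (a sketch; `my-org`, `AccountServiceTest`, and the method name are placeholder values, not part of this skill):

```shell
# Debugging: run just the failing method
sf apex run test --tests AccountServiceTest.testBulkInsert \
  --target-org my-org --result-format human --code-coverage --wait 10

# Fix looks stable: widen to the whole class
sf apex run test --class-names AccountServiceTest \
  --target-org my-org --result-format human --code-coverage --wait 10

# Regression pass: all local tests in the org
sf apex run test --test-level RunLocalTests \
  --target-org my-org --result-format human --code-coverage --wait 30
```

Each run emits the per-method results and, with `--code-coverage`, the coverage table used in the analysis step below.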
3. Analyze results
Focus on:
- failing methods
- exception types and stack traces
- uncovered lines / weak coverage areas
- whether failures indicate bad test data, brittle assertions, or broken production logic
4. Run a disciplined fix loop
When the issue is code or test quality:
- delegate code fixes to sf-apex when needed
- add or improve tests
- rerun focused tests before broader regression
5. Improve coverage intentionally
Cover:
- positive path
- negative / exception path
- bulk path (201+ records where appropriate, enough to cross the 200-record trigger chunk boundary)
- callout or async path when relevant
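Covering those paths deliberately might look like the sketch below. `AccountService.activate` and the `Status__c` field are hypothetical stand-ins, not part of this skill:

```apex
@IsTest
private class AccountServiceTest {
    @IsTest
    static void testPositivePath() {
        Account a = new Account(Name = 'Acme');
        insert a;
        Test.startTest();
        AccountService.activate(new List<Account>{ a }); // hypothetical method
        Test.stopTest();
        Account result = [SELECT Status__c FROM Account WHERE Id = :a.Id];
        Assert.areEqual('Active', result.Status__c, 'Account should be activated');
    }

    @IsTest
    static void testNegativePath() {
        // Empty input should fail loudly, not silently succeed
        try {
            AccountService.activate(new List<Account>());
            Assert.fail('Expected an exception for empty input');
        } catch (IllegalArgumentException e) {
            Assert.isTrue(e.getMessage().contains('empty'),
                'Exception message should explain the failure');
        }
    }

    @IsTest
    static void testBulkPath() {
        // 201 records forces at least two 200-record trigger chunks
        List<Account> accts = new List<Account>();
        for (Integer i = 0; i < 201; i++) {
            accts.add(new Account(Name = 'Bulk ' + i));
        }
        insert accts;
        Test.startTest();
        AccountService.activate(accts);
        Test.stopTest();
        Assert.areEqual(201,
            [SELECT COUNT() FROM Account WHERE Status__c = 'Active']);
    }
}
```

Each method asserts a concrete outcome rather than merely exercising lines, which is what turns coverage percentage into actual confidence.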
High-Signal Rules
- default to `SeeAllData=false`
- every test should assert meaningful outcomes
- test bulk behavior, not just single-record happy paths
- use factories / `@TestSetup` when they improve clarity and speed
- pair `Test.startTest()` with `Test.stopTest()` when async behavior matters
- do not hide flaky org dependencies inside tests
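The factory and async rules above can combine as in this sketch; `OrderQueueable` is a hypothetical job used only for illustration:

```apex
@IsTest
private class OrderQueueableTest {
    @TestSetup
    static void makeData() {
        // Built once per test class run; SeeAllData defaults to false
        insert new Account(Name = 'Test Co');
    }

    @IsTest
    static void testEnqueuedJobRuns() {
        Account a = [SELECT Id FROM Account WHERE Name = 'Test Co' LIMIT 1];
        Test.startTest();
        System.enqueueJob(new OrderQueueable(a.Id)); // hypothetical queueable
        Test.stopTest(); // forces the async job to finish before assertions
        Assert.areEqual(1,
            [SELECT COUNT() FROM Order WHERE AccountId = :a.Id],
            'Queueable should have created exactly one order');
    }
}
```

Asserting after `Test.stopTest()` is what makes the async path deterministic instead of a hidden flaky dependency.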
Output Format
When finishing, report in this order:
Suggested shape:
Test run: <scope>
Org: <alias>
Result: <passed / partial / failed>
Coverage: <percent / key classes>
Issues: <highest-signal failures>
Next step: <fix class, add test, rerun scope, or widen regression>
Cross-Skill Integration
| Need | Delegate to | Reason |
|---|---|---|
| fix production code or author tests | sf-apex | code generation and repair |
| create bulk / edge-case data | sf-data | realistic test datasets |
| deploy updated tests | sf-deploy | rollout |
| inspect detailed runtime logs | sf-debug | deeper failure analysis |
| Score | Meaning |
|---|---|
| 108+ | strong production-grade test confidence |
| 96–107 | good test suite with minor gaps |
| 84–95 | acceptable but strengthen coverage / assertions |
| < 84 | below standard; revise before relying on it |