SalesforceSkills

Apex Testing

Generate Apex test classes with proper coverage. AI writes the scaffolding, test data, and assertions so you can focus on the logic.

Skill Details


  • Version: v1.1.0
  • Author: Jag Valaiyapathy
  • License: MIT
  • Sections: 8

Works with

  • Claude Code
  • Cursor
  • Windsurf

Use this skill when the user needs Apex test execution and failure analysis: running tests, checking coverage, interpreting failures, improving coverage, and managing a disciplined test-fix loop for Salesforce code.

When This Skill Owns the Task

Use sf-testing when the work involves:

  • sf apex run test workflows
  • Apex unit-test failures
  • code coverage analysis
  • identifying uncovered lines and missing test scenarios
  • structured test-fix loops for Apex code

Delegate elsewhere when the task is primarily fixing production code, creating test data, deploying, or deep runtime-log analysis; the Reference Map below lists the right skill for each.

Required Context to Gather First

Ask for or infer:

  • target org alias
  • desired test scope: single class, specific methods, suite, or local tests
  • coverage threshold expectation
  • whether the user wants diagnosis only or a test-fix loop
  • whether related test data factories already exist

1. Discover test scope

Identify:

  • existing test classes
  • target production classes
  • test data factories / setup helpers
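
One minimal shape for the test data factory this step looks for — a sketch only; the class and method names are illustrative, not part of the skill:

```apex
// Illustrative factory shape; TestDataFactory and createAccounts are
// assumed names, not defined by this skill.
@isTest
public class TestDataFactory {
    // Build, and optionally insert, Accounts for single-record and bulk tests.
    public static List<Account> createAccounts(Integer count, Boolean doInsert) {
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < count; i++) {
            accounts.add(new Account(Name = 'Test Account ' + i));
        }
        if (doInsert) {
            insert accounts;
        }
        return accounts;
    }
}
```

A factory like this keeps record setup in one place, so later coverage work changes one method instead of every test.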

2. Run the smallest useful test set first

Start narrow when debugging a failure; widen only after the fix is stable.

3. Analyze results

Focus on:

  • failing methods
  • exception types and stack traces
  • uncovered lines / weak coverage areas
  • whether failures indicate bad test data, brittle assertions, or broken production logic
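
Whether a failure points at a brittle assertion or broken production logic often shows in the assertion itself. A fragment sketching the distinction (the Opportunity field and message are illustrative):

```apex
// Brittle: couples the test to an incidental value, with no message,
// so a failure says nothing about what actually regressed.
System.assertEquals(3, results.size());

// Meaningful: asserts the business outcome and explains the intent.
Assert.areEqual('Closed Won', opp.StageName,
    'Approved discount should move the Opportunity to Closed Won');
```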

4. Run a disciplined fix loop

When the issue is code or test quality:

  • delegate code fixes to sf-apex when needed
  • add or improve tests
  • rerun focused tests before broader regression

5. Improve coverage intentionally

Cover:

  • positive path
  • negative / exception path
  • bulk path (251+ records where appropriate)
  • callout or async path when relevant
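
The bulk and negative paths above can be sketched as follows; AccountService.normalizeName is an assumed method name, not something this skill defines:

```apex
@isTest
private class AccountServiceTest {
    @isTest
    static void testBulkPath() {
        // 251 records forces at least two trigger invocations (batches of 200).
        List<Account> accounts = new List<Account>();
        for (Integer i = 0; i < 251; i++) {
            accounts.add(new Account(Name = 'Bulk ' + i));
        }

        Test.startTest();
        insert accounts;
        Test.stopTest();

        Assert.areEqual(251,
            [SELECT COUNT() FROM Account WHERE Name LIKE 'Bulk %'],
            'All records should survive the bulk insert');
    }

    @isTest
    static void testNegativePath() {
        // Exception path: the hypothetical method should reject null input.
        try {
            AccountService.normalizeName(null);
            Assert.fail('Expected IllegalArgumentException for null input');
        } catch (IllegalArgumentException e) {
            Assert.isTrue(e.getMessage() != null, 'Exception should carry a message');
        }
    }
}
```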

High-Signal Rules

  • default to SeeAllData=false
  • every test should assert meaningful outcomes
  • test bulk behavior, not just single-record happy paths
  • use factories / @TestSetup when they improve clarity and speed
  • pair Test.startTest() with Test.stopTest() when async behavior matters
  • do not hide flaky org dependencies inside tests
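
The startTest/stopTest rule matters because Test.stopTest() forces queued async work to finish synchronously, so assertions can run against its results. A sketch, assuming a Queueable named RecalculateScoresJob and a custom field Score__c:

```apex
@isTest
static void testAsyncRecalculation() {
    Account acct = new Account(Name = 'Async Target');
    insert acct;

    Test.startTest();
    // RecalculateScoresJob is an assumed Queueable, not part of this skill.
    System.enqueueJob(new RecalculateScoresJob(acct.Id));
    Test.stopTest(); // the queued job completes here, before the assertions

    acct = [SELECT Score__c FROM Account WHERE Id = :acct.Id];
    Assert.isNotNull(acct.Score__c, 'Async job should have populated the score');
}
```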

Output Format

When finishing, report in this order:

1. What tests were run
2. Pass/fail summary
3. Coverage result
4. Root-cause findings
5. Fix or next-run recommendation

Suggested shape:

Test run: <scope>
Org: <alias>
Result: <passed / partial / failed>
Coverage: <percent / key classes>
Issues: <highest-signal failures>
Next step: <fix class, add test, rerun scope, or widen regression>

Cross-Skill Integration

Reference Map

Need | Delegate to | Reason
fix production code or author tests | sf-apex | code generation and repair
create bulk / edge-case data | sf-data | realistic test datasets
deploy updated tests | sf-deploy | rollout
inspect detailed runtime logs | sf-debug | deeper failure analysis

Score Guide

Score | Meaning
108+ | strong production-grade test confidence
96–107 | good test suite with minor gaps
84–95 | acceptable but strengthen coverage / assertions
< 84 | below standard; revise before relying on it
