# Mission Control Test Cases
I need you to validate my Mission Control app against a set of test cases. I’ll give you one category at a time. For each test case, I need you to inspect the actual code — not run the app, not guess, not assume it works because the file exists.
## What to check per test case
For each test case in the category I share, do the following:
1. **Find the relevant code** — trace the full path: frontend component → API call → Lambda handler → DynamoDB operation → response → frontend rendering. Name the files you inspected.
2. **Verify the logic matches the expected outcome** — read the code and confirm it actually does what the test case expects. Check conditionals, calculations, edge cases. Don’t just confirm the function exists — confirm the implementation is correct.
3. **Check for gaps** — look for:
- Missing error handling (what happens when the API call fails? when DynamoDB returns empty? when input is null?)
- Missing loading states (does the component show something while fetching?)
- Missing empty states (what renders when there’s no data?)
- Off-by-one errors in calculations (habit percentages, point brackets, level thresholds)
- Type mismatches between frontend expectations and Lambda responses
- Hardcoded values that should come from config
- Race conditions (double-click submit, concurrent mutations)
- Missing React Query invalidation after mutations (stale UI)
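As a concrete illustration of the race-condition bullet above, here is a minimal sketch (all names are hypothetical, not from the actual codebase) of the kind of in-flight guard whose absence lets a double-click fire two mutations:

```typescript
// Hypothetical double-submit guard: while one call is in flight,
// further calls are dropped instead of starting a second mutation.
function makeSingleFlight<T>(fn: () => Promise<T>): () => Promise<T | undefined> {
  let inFlight = false;
  return async () => {
    if (inFlight) return undefined; // ignore the duplicate click
    inFlight = true;
    try {
      return await fn();
    } finally {
      inFlight = false;
    }
  };
}

// Simulated triple-click: only the first invocation should run.
async function demo(): Promise<number> {
  let calls = 0;
  const submit = makeSingleFlight(async () => {
    calls += 1;
    await new Promise((resolve) => setTimeout(resolve, 10));
  });
  await Promise.all([submit(), submit(), submit()]);
  return calls; // 1 if guarded, 3 if not
}
```

A submit handler with neither a guard like this nor a disabled-while-pending button is a candidate for ⚠️ PARTIAL under this bullet.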
4. **Grade each test case** with one of:
- ✅ PASS — Code correctly implements the expected behavior. State confidence level.
- ⚠️ PARTIAL — Mostly works but has a specific deficiency. Describe exactly what’s wrong or missing.
- ❌ FAIL — Code does not implement this, or the implementation is broken. Explain why.
- 🔍 UNTESTABLE — Can’t verify from code alone (e.g., visual appearance, animation timing). Note what would need manual testing.
5. **Flag anything suspicious** — even if it’s not in the test case. If you see a bug, a potential crash, a missing `await`, a wrong table name, a missing environment variable, or a Lambda without proper CORS headers, say so.
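To make the CORS and error-handling checks concrete, here is a hedged sketch of the shape to look for; the helper names and header set are assumptions, not the app's real code. A handler that returns without these headers, or lets a DynamoDB rejection escape uncaught, is worth flagging:

```typescript
// Hypothetical Lambda response helper: every response, success or
// failure, carries the CORS headers the frontend needs.
type LambdaResponse = { statusCode: number; headers: Record<string, string>; body: string };

const CORS_HEADERS: Record<string, string> = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "Content-Type,Authorization",
};

function respond(statusCode: number, payload: unknown): LambdaResponse {
  return { statusCode, headers: CORS_HEADERS, body: JSON.stringify(payload) };
}

// The DynamoDB call is injected here so the sketch stays self-contained.
// Dropping the `await` below would let rejections escape the try/catch.
async function handler(fetchItems: () => Promise<unknown[]>): Promise<LambdaResponse> {
  try {
    const items = await fetchItems();
    return respond(200, { items });
  } catch {
    return respond(500, { message: "Internal error" });
  }
}
```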
## Output format
For each test case, give me:
### {number} — {test case name} [{priority}]
**Verdict: {✅/⚠️/❌/🔍}**
**Files inspected:**
- path/to/file.tsx (lines X-Y)
- path/to/lambda/index.js (lines X-Y)
**Analysis:**
{What you found. Be specific. Quote code if relevant. If PARTIAL or FAIL, explain exactly what's wrong and what the fix would be.}
After all test cases in the category, give me:
## Category Summary
- Total: X
- ✅ Pass: X
- ⚠️ Partial: X
- ❌ Fail: X
- 🔍 Untestable: X
## Critical Issues (fix these first)
{Numbered list of the most important problems found, with file paths}
## Quick Wins (easy fixes)
{Numbered list of small fixes that would close multiple test cases}
## Ground rules
- Read the actual code. Do not hallucinate file contents. If a file doesn’t exist, say so — that’s a FAIL.
- Trace the full stack. A frontend component that calls an API endpoint that doesn’t exist is a FAIL even if the component looks correct.
- Check SAM template. If a Lambda is referenced but not defined in template.yaml, or a table is missing, that’s a FAIL.
- Check environment variables. If a Lambda reads `process.env.SOME_TABLE` but it’s not in the SAM globals or function config, that’s a FAIL.
- Be honest about uncertainty. If you’re not sure, say so. Don’t mark things as PASS when you can’t verify them.
- Don’t fix anything yet. This is audit only. I’ll decide what to fix after seeing the full picture.
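The environment-variable ground rule above can be illustrated with a small sketch (the variable name `MISSIONS_TABLE` is made up): a Lambda that fails fast on a missing variable makes an absent SAM definition obvious, whereas reading `process.env.SOME_TABLE` directly silently passes `undefined` into the DynamoDB client:

```typescript
// Hypothetical fail-fast lookup for required environment variables.
// The env record is passed in so the sketch is testable in isolation.
function requireEnv(name: string, env: Record<string, string | undefined>): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At module load in a real Lambda this would be:
//   const TABLE_NAME = requireEnv("MISSIONS_TABLE", process.env);
// so a missing SAM Globals/function-config entry fails at cold start.
```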