A QA team might use TeamCopilot for bug triage, test-case generation, and recurring validation tasks.

Example setup

  • A skill for bug triage rules and severity definitions
  • A skill for how to write reproducible test cases
  • A workflow that runs a recurring validation or smoke-check routine
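
The setup above might map to a folder layout like this (a hypothetical sketch; the exact structure depends on how your TeamCopilot workspace organizes skills and workflows):

```
skills/
  qa-bug-triage/
    SKILL.md
  qa-test-cases/
    SKILL.md
workflows/
  staging-smoke-test/
    run.py
    smoke_config.json
```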

Example requests

  • “Classify this bug using our severity framework.”
  • “Turn this issue into a reproducible QA test case.”
  • “Run the staging smoke-test workflow.”

Why this works

The skills standardize how defects are described and prioritized. The workflow gives QA a reliable way to run the same checks repeatedly.

Example SKILL.md

Below is an example QA skill for triaging incoming bugs consistently.

---
name: qa-bug-triage
description: Classify bugs, identify missing reproduction details, and recommend next QA actions.
---

# QA Bug Triage

Use this skill when the user shares a defect report, test failure, or issue that needs triage.

## Triage goals

- determine severity
- identify missing reproduction details
- separate product issues from environment issues
- recommend next steps for QA and engineering

## Severity framework

- Critical: production outage, data loss, security issue, or core workflow blocked for many users
- High: major functionality broken with no reasonable workaround
- Medium: meaningful issue with workaround available or limited scope
- Low: cosmetic issue, edge case, or low-impact bug

## Required checks

1. Confirm the affected environment.
2. Confirm expected behavior and actual behavior.
3. Check whether reproduction steps are complete.
4. Note whether the issue is consistent or intermittent.
5. Identify any logs, screenshots, or payloads still needed.

## Output format

When triaging a bug:

1. Give the proposed severity.
2. Explain why.
3. List missing information.
4. Rewrite the bug report in a clearer format.
5. Suggest next QA or engineering actions.

## Constraints

- Do not claim root cause unless evidence supports it.
- If reproduction details are weak, say so explicitly.
- Distinguish clearly between observed facts and hypotheses.

Example workflow run.py

Below is an example smoke-check workflow that reads expected endpoints from a JSON file and validates that they respond successfully.

import argparse
import json
import sys
from pathlib import Path
from urllib import request, error


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="Path to smoke test config JSON")
    return parser.parse_args()


def load_config(config_path: Path):
    with config_path.open("r", encoding="utf-8") as handle:
        return json.load(handle)


def check_url(url: str, timeout: int):
    """Return (status_code, error). status_code is None if the request never completed."""
    req = request.Request(url, method="GET")
    try:
        with request.urlopen(req, timeout=timeout) as response:
            return response.status, None
    except error.HTTPError as exc:
        # HTTP errors still carry a status code (e.g. 404, 500).
        return exc.code, str(exc)
    except Exception as exc:  # noqa: BLE001
        # DNS failures, timeouts, connection refused, etc. produce no status code.
        return None, str(exc)


def main():
    args = parse_args()
    config_path = Path(args.config)

    if not config_path.exists():
        raise SystemExit(f"Config file not found: {config_path}")

    config = load_config(config_path)
    timeout = int(config.get("timeout_seconds", 10))
    checks = config.get("checks", [])

    if not checks:
        print("No checks configured.")
        return

    failures = []
    print("Smoke Test Results")
    print("==================")

    for check in checks:
        name = check["name"]
        url = check["url"]
        status, err = check_url(url, timeout)

        if status and 200 <= status < 400:
            print(f"[PASS] {name}: {url} -> {status}")
        else:
            print(f"[FAIL] {name}: {url} -> {status or 'error'}")
            if err:
                print(f"       {err}")
            failures.append(name)

    print()
    print(f"Total checks: {len(checks)}")
    print(f"Failures: {len(failures)}")

    if failures:
        sys.exit(1)


if __name__ == "__main__":
    main()
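
The script above expects a config file with a `timeout_seconds` value and a list of `checks`, each carrying a `name` and `url`. A minimal example (the endpoints here are placeholders, not real URLs):

```json
{
  "timeout_seconds": 10,
  "checks": [
    {"name": "homepage", "url": "https://staging.example.com/"},
    {"name": "health", "url": "https://staging.example.com/healthz"}
  ]
}
```

Run it with `python run.py --config smoke_config.json`. Because the script exits non-zero when any check fails, the workflow is easy to wire into CI or a recurring schedule.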