Compare commits

..

12 Commits

Author SHA1 Message Date
Ashwin Bhat
d090d03da6 Reapply "feat: send additional_permissions in token exchange request (#859)" (#864)
This reverts commit 231bd75b71.
2026-01-27 08:50:24 -08:00
Rani Halabi
fe72061e16 feat: add actor-based comment filtering to GitHub data fetching (#812)
- Introduced `include_comments_by_actor` and `exclude_comments_by_actor` inputs in action.yml to allow filtering of comments based on actor usernames.
- Updated context parsing to handle new input fields.
- Implemented `filterCommentsByActor` function to filter comments according to specified inclusion and exclusion patterns.
- Modified `fetchGitHubData` to apply actor filters when retrieving comments from pull requests and issues.
- Added comprehensive tests for the new filtering functionality.

This enhancement provides more control over which comments are processed based on the actor, improving the flexibility of the workflow.
2026-01-27 07:48:10 -08:00
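
To make the semantics above concrete, here is a minimal, illustrative TypeScript sketch of the filtering rules this commit describes: an empty include list keeps all actors, `*[bot]` matches any bot account, and exclusion wins when an actor appears in both lists. `matchesPattern` and `keepComment` are stand-ins for the `actorMatchesPattern` and `shouldIncludeCommentByActor` helpers added in the actor-filter diff further down, not the action's actual exports.

```typescript
// Illustrative sketch only; the real helpers live in src/github/utils/actor-filter.ts.
type Comment = { author: { login: string }; body: string };

// "*[bot]" matches any account ending in "[bot]"; otherwise require an exact username match.
function matchesPattern(actor: string, pattern: string): boolean {
  if (actor === pattern) return true;
  return pattern === "*[bot]" && actor.endsWith("[bot]");
}

// Empty include list means "include everyone"; exclusion takes priority on conflicts.
function keepComment(actor: string, include: string[], exclude: string[]): boolean {
  if (exclude.some((p) => matchesPattern(actor, p))) return false;
  if (include.length === 0) return true;
  return include.some((p) => matchesPattern(actor, p));
}

const comments: Comment[] = [
  { author: { login: "octocat" }, body: "please fix the tests" },
  { author: { login: "renovate[bot]" }, body: "dependency bump" },
];

// exclude_comments_by_actor: "*[bot]"  -> drops the renovate[bot] comment
console.log(comments.filter((c) => keepComment(c.author.login, [], ["*[bot]"])));
```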
Ashwin Bhat
231bd75b71 Revert "feat: send additional_permissions in token exchange request (#859)" (#864)
This reverts commit 0c704179b5.
2026-01-26 21:40:04 -08:00
GitHub Actions
4126f9d975 chore: bump Claude Code to 2.1.20 and Agent SDK to 0.2.20 2026-01-27 01:34:26 +00:00
Arthur
ba45bb9506 chore: upgrade checkout-action to v6 (#862) 2026-01-26 16:25:42 -08:00
Ashwin Bhat
0c704179b5 feat: send additional_permissions in token exchange request (#859)
* feat: send additional_permissions in token exchange request

Parse the ADDITIONAL_PERMISSIONS env var and send it as a JSON body
in the OIDC token exchange request. Permissions are merged on top of
the standard defaults (contents: write, pull_requests: write,
issues: write).

* docs: list specific available additional permissions
2026-01-26 09:02:20 -08:00
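
As a rough sketch of the merge this commit describes, assuming the shape of `parseAdditionalPermissions()` shown in the github-token diff further down (`mergePermissions` here is an illustrative stand-in, not the action's API):

```typescript
// Illustrative sketch; mirrors parseAdditionalPermissions() in the token-exchange diff below.
const DEFAULT_PERMISSIONS: Record<string, string> = {
  contents: "write",
  pull_requests: "write",
  issues: "write",
};

// ADDITIONAL_PERMISSIONS is newline-separated "scope: level" pairs, e.g.
//   actions: read
//   checks: read
function mergePermissions(raw: string | undefined): Record<string, string> | undefined {
  if (!raw?.trim()) return undefined; // nothing extra requested
  const extra: Record<string, string> = {};
  for (const line of raw.split("\n")) {
    const idx = line.indexOf(":");
    if (idx === -1) continue;
    const key = line.slice(0, idx).trim();
    const value = line.slice(idx + 1).trim();
    if (key && value) extra[key] = value;
  }
  if (Object.keys(extra).length === 0) return undefined;
  // Additional permissions are layered on top of the standard defaults.
  return { ...DEFAULT_PERMISSIONS, ...extra };
}

// When present, this object is sent as the JSON body of the OIDC token exchange request.
console.log(mergePermissions("actions: read\nchecks: read"));
```

When nothing extra is requested, the helper returns `undefined` and the token exchange request is sent without a JSON body, as before.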
GitHub Actions
f64219702d chore: bump Claude Code to 2.1.19 and Agent SDK to 0.2.19 2026-01-23 21:55:28 +00:00
GitHub Actions
8341a564b0 chore: bump Claude Code to 2.1.17 and Agent SDK to 0.2.17 2026-01-22 21:49:14 +00:00
GitHub Actions
2804b4174b chore: bump Claude Code to 2.1.16 and Agent SDK to 0.2.16 2026-01-22 20:08:26 +00:00
GitHub Actions
2316a9a8db chore: bump Claude Code to 2.1.15 and Agent SDK to 0.2.15 2026-01-21 22:00:12 +00:00
Ashwin Bhat
49cfcf8107 refactor: remove CLI path, use Agent SDK exclusively (#849)
* refactor: remove CLI path, use Agent SDK exclusively

- Remove CLI-based Claude execution in favor of Agent SDK
- Delete prepareRunConfig, parseAndSetSessionId, parseAndSetStructuredOutputs functions
- Remove named pipe IPC and sanitizeJsonOutput helper
- Remove test-agent-sdk job from test-base-action workflow (SDK is now default)
- Delete run-claude.test.ts and structured-output.test.ts (testing removed CLI code)
- Update CLAUDE.md to remove named pipe references

Co-Authored-By: Claude <noreply@anthropic.com>
Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 2
Claude-Permission-Prompts: 1
Claude-Escapes: 0
Claude-Plan:
<claude-plan>
# Plan: Remove Non-Agent SDK Code Path

## Overview
Since `use_agent_sdk` defaults to `true`, remove the legacy CLI code path entirely from `base-action/src/run-claude.ts`.

## Files to Modify

### 1. `base-action/src/run-claude.ts` - Main Cleanup

**Remove imports:**
- `exec` from `child_process`
- `promisify` from `util`
- `unlink`, `writeFile`, `stat` from `fs/promises` (keep `readFile` - check if needed)
- `createWriteStream` from `fs`
- `spawn` from `child_process`
- `parseShellArgs` from `shell-quote` (still used in `parse-sdk-options.ts`, keep package)

**Remove constants:**
- `execAsync`
- `PIPE_PATH`
- `EXECUTION_FILE` (defined in both files, keep in SDK file)
- `BASE_ARGS`

**Remove types:**
- `PreparedConfig` type (lines 85-89) - only used by `prepareRunConfig()`

**Remove functions:**
- `sanitizeJsonOutput()` (lines 21-68)
- `prepareRunConfig()` (lines 91-125) - also remove export
- `parseAndSetSessionId()` (lines 131-155) - also remove export
- `parseAndSetStructuredOutputs()` (lines 162-197) - also remove export

**Simplify `runClaude()`:**
- Remove `useAgentSdk` flag check and logging (lines 200-204)
- Remove the `if (useAgentSdk)` block, make SDK call direct
- Remove entire CLI path (lines 211-438)
- Resulting function becomes just:
  ```typescript
  export async function runClaude(promptPath: string, options: ClaudeOptions) {
    const parsedOptions = parseSdkOptions(options);
    return runClaudeWithSdk(promptPath, parsedOptions);
  }
  ```

### 2. Delete Test Files

**`base-action/test/run-claude.test.ts`:**
- Delete entire file (only tests `prepareRunConfig()`)

**`base-action/test/structured-output.test.ts`:**
- Delete entire file (only tests `parseAndSetStructuredOutputs()` and `parseAndSetSessionId()`)

### 3. Workflow Update

**`.github/workflows/test-base-action.yml`:**
- Remove `test-agent-sdk` job (lines 120-176) - redundant now

### 4. Documentation Update

**`base-action/CLAUDE.md`:**
- Line 30: Remove "- Named pipes for IPC between prompt input and Claude process"
- Line 57: Remove "- Uses `mkfifo` to create named pipes for prompt input"

## Verification
1. Run `bun run typecheck` to ensure no type errors
2. Run `bun test` to ensure remaining tests pass
3. Run `bun run format` to fix any formatting issues
</claude-plan>

* fix: address PR review comments

- Add session_id output handling in run-claude-sdk.ts (critical)
- Remove unused claudeEnv parameter from ClaudeOptions and index.ts
- Update stale CLI path comment in parse-sdk-options.ts

Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 0
Claude-Permission-Prompts: 0
Claude-Escapes: 0
2026-01-20 16:00:23 -08:00
Ashwin Bhat
e208124d29 chore: bump Bun to 1.3.6 and setup-bun action to v2.1.2 (#848)
Claude-Generated-By: Claude Code (cli/claude=100%)
Claude-Steers: 1
Claude-Permission-Prompts: 5
Claude-Escapes: 1
2026-01-20 14:06:49 -08:00
43 changed files with 652 additions and 862 deletions

View File

@@ -8,7 +8,7 @@ jobs:
   test:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: oven-sh/setup-bun@v2
         with:
@@ -23,7 +23,7 @@ jobs:
   prettier:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: oven-sh/setup-bun@v1
         with:
@@ -38,7 +38,7 @@ jobs:
   typecheck:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: oven-sh/setup-bun@v2
         with:

View File

@@ -13,7 +13,7 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -25,7 +25,7 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -14,7 +14,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0

View File

@@ -32,7 +32,7 @@ jobs:
       next_version: ${{ steps.next_version.outputs.next_version }}
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0
@@ -105,7 +105,7 @@ jobs:
       contents: write
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0
@@ -130,7 +130,7 @@ jobs:
   # environment: production
   # steps:
   #   - name: Checkout base-action repo
-  #     uses: actions/checkout@v5
+  #     uses: actions/checkout@v6
   #     with:
   #       repository: anthropics/claude-code-base-action
   #       token: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}

View File

@@ -116,61 +116,3 @@ jobs:
echo "❌ Execution log file not found" echo "❌ Execution log file not found"
exit 1 exit 1
fi fi
test-agent-sdk:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Test with Agent SDK
id: sdk-test
uses: ./base-action
env:
USE_AGENT_SDK: "true"
with:
prompt: ${{ github.event.inputs.test_prompt || 'List the files in the current directory starting with "package"' }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
allowed_tools: "LS,Read"
- name: Verify SDK output
run: |
OUTPUT_FILE="${{ steps.sdk-test.outputs.execution_file }}"
CONCLUSION="${{ steps.sdk-test.outputs.conclusion }}"
echo "Conclusion: $CONCLUSION"
echo "Output file: $OUTPUT_FILE"
if [ "$CONCLUSION" = "success" ]; then
echo "✅ Action completed successfully with Agent SDK"
else
echo "❌ Action failed with Agent SDK"
exit 1
fi
if [ -f "$OUTPUT_FILE" ]; then
if [ -s "$OUTPUT_FILE" ]; then
echo "✅ Execution log file created successfully with content"
echo "Validating JSON format:"
if jq . "$OUTPUT_FILE" > /dev/null 2>&1; then
echo "✅ Output is valid JSON"
# Verify SDK output contains total_cost_usd (SDK field name)
if jq -e '.[] | select(.type == "result") | .total_cost_usd' "$OUTPUT_FILE" > /dev/null 2>&1; then
echo "✅ SDK output contains total_cost_usd field"
else
echo "❌ SDK output missing total_cost_usd field"
exit 1
fi
echo "Content preview:"
head -c 500 "$OUTPUT_FILE"
else
echo "❌ Output is not valid JSON"
exit 1
fi
else
echo "❌ Execution log file is empty"
exit 1
fi
else
echo "❌ Execution log file not found"
exit 1
fi

View File

@@ -35,6 +35,14 @@ inputs:
     description: "Comma-separated list of usernames to allow without write permissions, or '*' to allow all users. Only works when github_token input is provided. WARNING: Use with extreme caution - this bypasses security checks and should only be used for workflows with very limited permissions (e.g., issue labeling)."
     required: false
     default: ""
+  include_comments_by_actor:
+    description: "Comma-separated list of actor usernames to INCLUDE in comments. Supports wildcards: '*[bot]' matches all bots, 'dependabot[bot]' matches specific bot. Empty (default) includes all actors."
+    required: false
+    default: ""
+  exclude_comments_by_actor:
+    description: "Comma-separated list of actor usernames to EXCLUDE from comments. Supports wildcards: '*[bot]' matches all bots, 'renovate[bot]' matches specific bot. Empty (default) excludes none. If actor is in both lists, exclusion takes priority."
+    required: false
+    default: ""
   # Claude Code configuration
   prompt:
@@ -150,7 +158,7 @@
       if: inputs.path_to_bun_executable == ''
       uses: oven-sh/setup-bun@3d267786b128fe76c2f16a390aa2448b815359f3 # https://github.com/oven-sh/setup-bun/releases/tag/v2.1.2
       with:
-        bun-version: 2.1.1
+        bun-version: 1.3.6
     - name: Setup Custom Bun Path
       if: inputs.path_to_bun_executable != ''
@@ -186,6 +194,8 @@
         OVERRIDE_GITHUB_TOKEN: ${{ inputs.github_token }}
         ALLOWED_BOTS: ${{ inputs.allowed_bots }}
         ALLOWED_NON_WRITE_USERS: ${{ inputs.allowed_non_write_users }}
+        INCLUDE_COMMENTS_BY_ACTOR: ${{ inputs.include_comments_by_actor }}
+        EXCLUDE_COMMENTS_BY_ACTOR: ${{ inputs.exclude_comments_by_actor }}
         GITHUB_RUN_ID: ${{ github.run_id }}
         USE_STICKY_COMMENT: ${{ inputs.use_sticky_comment }}
         DEFAULT_WORKFLOW_TOKEN: ${{ github.token }}
@@ -213,7 +223,7 @@
         # Install Claude Code if no custom executable is provided
         if [ -z "$PATH_TO_CLAUDE_CODE_EXECUTABLE" ]; then
-          CLAUDE_CODE_VERSION="2.1.11"
+          CLAUDE_CODE_VERSION="2.1.20"
           echo "Installing Claude Code v${CLAUDE_CODE_VERSION}..."
           for attempt in 1 2 3; do
             echo "Installation attempt $attempt..."

View File

@@ -27,7 +27,6 @@ This is a GitHub Action that allows running Claude Code within GitHub workflows.
 ### Key Design Patterns
 - Uses Bun runtime for development and execution
-- Named pipes for IPC between prompt input and Claude process
 - JSON streaming output format for execution logs
 - Composite action pattern to orchestrate multiple steps
 - Provider-agnostic design supporting Anthropic API, AWS Bedrock, and Google Vertex AI
@@ -54,7 +53,6 @@ This is a GitHub Action that allows running Claude Code within GitHub workflows.
 ## Important Technical Details
-- Uses `mkfifo` to create named pipes for prompt input
 - Outputs execution logs as JSON to `/tmp/claude-execution-output.json`
 - Timeout enforcement via `timeout` command wrapper
 - Strict TypeScript configuration with Bun-specific settings

View File

@@ -339,7 +339,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0

View File

@@ -99,7 +99,7 @@ runs:
       if: inputs.path_to_bun_executable == ''
       uses: oven-sh/setup-bun@3d267786b128fe76c2f16a390aa2448b815359f3 # https://github.com/oven-sh/setup-bun/releases/tag/v2.1.2
       with:
-        bun-version: 2.1.1
+        bun-version: 1.3.6
     - name: Setup Custom Bun Path
       if: inputs.path_to_bun_executable != ''
@@ -124,7 +124,7 @@ runs:
         PATH_TO_CLAUDE_CODE_EXECUTABLE: ${{ inputs.path_to_claude_code_executable }}
       run: |
         if [ -z "$PATH_TO_CLAUDE_CODE_EXECUTABLE" ]; then
-          CLAUDE_CODE_VERSION="2.1.11"
+          CLAUDE_CODE_VERSION="2.1.20"
           echo "Installing Claude Code v${CLAUDE_CODE_VERSION}..."
           for attempt in 1 2 3; do
             echo "Installation attempt $attempt..."

View File

@@ -6,7 +6,7 @@
     "name": "@anthropic-ai/claude-code-base-action",
     "dependencies": {
       "@actions/core": "^1.10.1",
-      "@anthropic-ai/claude-agent-sdk": "^0.2.11",
+      "@anthropic-ai/claude-agent-sdk": "^0.2.20",
       "shell-quote": "^1.8.3",
     },
     "devDependencies": {
@@ -27,7 +27,7 @@
     "@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
-    "@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.11", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-50q4vfh57HYTUrwukULp3gvSaOZfRF5zukxhWvW6mmUHuD8+Xwhqn59sZhoDz9ZUw6Um+1lUCgWpJw5/TTMn5w=="],
+    "@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.20", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-Q2rJlYC2hEhJRKcOswJrcvm0O6H/uhXkRPAAqbAlFR/jbCWeg6jpyr9iUmVBFUFOBzAWqT2C6KLHiTJ8NySvQg=="],
     "@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],

View File

@@ -11,7 +11,7 @@
   },
   "dependencies": {
     "@actions/core": "^1.10.1",
-    "@anthropic-ai/claude-agent-sdk": "^0.2.11",
+    "@anthropic-ai/claude-agent-sdk": "^0.2.20",
     "shell-quote": "^1.8.3"
   },
   "devDependencies": {

View File

@@ -36,7 +36,6 @@ async function run() {
       mcpConfig: process.env.INPUT_MCP_CONFIG,
       systemPrompt: process.env.INPUT_SYSTEM_PROMPT,
       appendSystemPrompt: process.env.INPUT_APPEND_SYSTEM_PROMPT,
-      claudeEnv: process.env.INPUT_CLAUDE_ENV,
       fallbackModel: process.env.INPUT_FALLBACK_MODEL,
       model: process.env.ANTHROPIC_MODEL,
       pathToClaudeCodeExecutable:

View File

@@ -212,7 +212,7 @@ export function parseSdkOptions(options: ClaudeOptions): ParsedSdkOptions {
   if (process.env.INPUT_ACTION_INPUTS_PRESENT) {
     env.GITHUB_ACTION_INPUTS = process.env.INPUT_ACTION_INPUTS_PRESENT;
   }
-  // Ensure SDK path uses the same entrypoint as the CLI path
+  // Set the entrypoint for Claude Code to identify this as the GitHub Action
   env.CLAUDE_CODE_ENTRYPOINT = "claude-code-github-action";
   // Build system prompt option - default to claude_code preset

View File

@@ -178,6 +178,15 @@ export async function runClaudeWithSdk(
     core.warning(`Failed to write execution file: ${error}`);
   }
+  // Extract and set session_id from system.init message
+  const initMessage = messages.find(
+    (m) => m.type === "system" && "subtype" in m && m.subtype === "init",
+  );
+  if (initMessage && "session_id" in initMessage && initMessage.session_id) {
+    core.setOutput("session_id", initMessage.session_id);
+    core.info(`Set session_id: ${initMessage.session_id}`);
+  }
   if (!resultMessage) {
     core.setOutput("conclusion", "failure");
     core.error("No result message received from Claude");

View File

@@ -1,72 +1,6 @@
import * as core from "@actions/core";
import { exec } from "child_process";
import { promisify } from "util";
import { unlink, writeFile, stat, readFile } from "fs/promises";
import { createWriteStream } from "fs";
import { spawn } from "child_process";
import { parse as parseShellArgs } from "shell-quote";
 import { runClaudeWithSdk } from "./run-claude-sdk";
 import { parseSdkOptions } from "./parse-sdk-options";
const execAsync = promisify(exec);
const PIPE_PATH = `${process.env.RUNNER_TEMP}/claude_prompt_pipe`;
const EXECUTION_FILE = `${process.env.RUNNER_TEMP}/claude-execution-output.json`;
const BASE_ARGS = ["--verbose", "--output-format", "stream-json"];
/**
* Sanitizes JSON output to remove sensitive information when full output is disabled
* Returns a safe summary message or null if the message should be completely suppressed
*/
function sanitizeJsonOutput(
jsonObj: any,
showFullOutput: boolean,
): string | null {
if (showFullOutput) {
// In full output mode, return the full JSON
return JSON.stringify(jsonObj, null, 2);
}
// In non-full-output mode, provide minimal safe output
const type = jsonObj.type;
const subtype = jsonObj.subtype;
// System initialization - safe to show
if (type === "system" && subtype === "init") {
return JSON.stringify(
{
type: "system",
subtype: "init",
message: "Claude Code initialized",
model: jsonObj.model || "unknown",
},
null,
2,
);
}
// Result messages - Always show the final result
if (type === "result") {
// These messages contain the final result and should always be visible
return JSON.stringify(
{
type: "result",
subtype: jsonObj.subtype,
is_error: jsonObj.is_error,
duration_ms: jsonObj.duration_ms,
num_turns: jsonObj.num_turns,
total_cost_usd: jsonObj.total_cost_usd,
permission_denials: jsonObj.permission_denials,
},
null,
2,
);
}
// For any other message types, suppress completely in non-full-output mode
return null;
}
 export type ClaudeOptions = {
   claudeArgs?: string;
   model?: string;
@@ -77,363 +11,11 @@ export type ClaudeOptions = {
   mcpConfig?: string;
   systemPrompt?: string;
   appendSystemPrompt?: string;
-  claudeEnv?: string;
   fallbackModel?: string;
   showFullOutput?: string;
 };
type PreparedConfig = {
claudeArgs: string[];
promptPath: string;
env: Record<string, string>;
};
export function prepareRunConfig(
promptPath: string,
options: ClaudeOptions,
): PreparedConfig {
// Build Claude CLI arguments:
// 1. Prompt flag (always first)
// 2. User's claudeArgs (full control)
// 3. BASE_ARGS (always last, cannot be overridden)
const claudeArgs = ["-p"];
// Parse and add user's custom Claude arguments
if (options.claudeArgs?.trim()) {
const parsed = parseShellArgs(options.claudeArgs);
const customArgs = parsed.filter(
(arg): arg is string => typeof arg === "string",
);
claudeArgs.push(...customArgs);
}
// BASE_ARGS are always appended last (cannot be overridden)
claudeArgs.push(...BASE_ARGS);
const customEnv: Record<string, string> = {};
if (process.env.INPUT_ACTION_INPUTS_PRESENT) {
customEnv.GITHUB_ACTION_INPUTS = process.env.INPUT_ACTION_INPUTS_PRESENT;
}
return {
claudeArgs,
promptPath,
env: customEnv,
};
}
/**
* Parses session_id from execution file and sets GitHub Action output
* Exported for testing
*/
export async function parseAndSetSessionId(
executionFile: string,
): Promise<void> {
try {
const content = await readFile(executionFile, "utf-8");
const messages = JSON.parse(content) as {
type: string;
subtype?: string;
session_id?: string;
}[];
// Find the system.init message which contains session_id
const initMessage = messages.find(
(m) => m.type === "system" && m.subtype === "init",
);
if (initMessage?.session_id) {
core.setOutput("session_id", initMessage.session_id);
core.info(`Set session_id: ${initMessage.session_id}`);
}
} catch (error) {
// Don't fail the action if session_id extraction fails
core.warning(`Failed to extract session_id: ${error}`);
}
}
/**
* Parses structured_output from execution file and sets GitHub Action outputs
* Only runs if --json-schema was explicitly provided in claude_args
* Exported for testing
*/
export async function parseAndSetStructuredOutputs(
executionFile: string,
): Promise<void> {
try {
const content = await readFile(executionFile, "utf-8");
const messages = JSON.parse(content) as {
type: string;
structured_output?: Record<string, unknown>;
}[];
// Search backwards - result is typically last or second-to-last message
const result = messages.findLast(
(m) => m.type === "result" && m.structured_output,
);
if (!result?.structured_output) {
throw new Error(
`--json-schema was provided but Claude did not return structured_output.\n` +
`Found ${messages.length} messages. Result exists: ${!!result}\n`,
);
}
// Set the complete structured output as a single JSON string
// This works around GitHub Actions limitation that composite actions can't have dynamic outputs
const structuredOutputJson = JSON.stringify(result.structured_output);
core.setOutput("structured_output", structuredOutputJson);
core.info(
`Set structured_output with ${Object.keys(result.structured_output).length} field(s)`,
);
} catch (error) {
if (error instanceof Error) {
throw error; // Preserve original error and stack trace
}
throw new Error(`Failed to parse structured outputs: ${error}`);
}
}
 export async function runClaude(promptPath: string, options: ClaudeOptions) {
-  // Feature flag: use SDK path by default, set USE_AGENT_SDK=false to use CLI
-  const useAgentSdk = process.env.USE_AGENT_SDK !== "false";
-  console.log(
-    `Using ${useAgentSdk ? "Agent SDK" : "CLI"} path (USE_AGENT_SDK=${process.env.USE_AGENT_SDK ?? "unset"})`,
-  );
-  if (useAgentSdk) {
-    const parsedOptions = parseSdkOptions(options);
-    return runClaudeWithSdk(promptPath, parsedOptions);
-  }
+  const parsedOptions = parseSdkOptions(options);
+  return runClaudeWithSdk(promptPath, parsedOptions);
const config = prepareRunConfig(promptPath, options);
// Detect if --json-schema is present in claude args
const hasJsonSchema = options.claudeArgs?.includes("--json-schema") ?? false;
// Create a named pipe
try {
await unlink(PIPE_PATH);
} catch (e) {
// Ignore if file doesn't exist
}
// Create the named pipe
await execAsync(`mkfifo "${PIPE_PATH}"`);
// Log prompt file size
let promptSize = "unknown";
try {
const stats = await stat(config.promptPath);
promptSize = stats.size.toString();
} catch (e) {
// Ignore error
}
console.log(`Prompt file size: ${promptSize} bytes`);
// Log custom environment variables if any
const customEnvKeys = Object.keys(config.env).filter(
(key) => key !== "CLAUDE_ACTION_INPUTS_PRESENT",
);
if (customEnvKeys.length > 0) {
console.log(`Custom environment variables: ${customEnvKeys.join(", ")}`);
}
// Log custom arguments if any
if (options.claudeArgs && options.claudeArgs.trim() !== "") {
console.log(`Custom Claude arguments: ${options.claudeArgs}`);
}
// Output to console
console.log(`Running Claude with prompt from file: ${config.promptPath}`);
console.log(`Full command: claude ${config.claudeArgs.join(" ")}`);
// Start sending prompt to pipe in background
const catProcess = spawn("cat", [config.promptPath], {
stdio: ["ignore", "pipe", "inherit"],
});
const pipeStream = createWriteStream(PIPE_PATH);
catProcess.stdout.pipe(pipeStream);
catProcess.on("error", (error) => {
console.error("Error reading prompt file:", error);
pipeStream.destroy();
});
// Use custom executable path if provided, otherwise default to "claude"
const claudeExecutable = options.pathToClaudeCodeExecutable || "claude";
const claudeProcess = spawn(claudeExecutable, config.claudeArgs, {
stdio: ["pipe", "pipe", "inherit"],
env: {
...process.env,
...config.env,
},
});
// Handle Claude process errors
claudeProcess.on("error", (error) => {
console.error("Error spawning Claude process:", error);
pipeStream.destroy();
});
// Determine if full output should be shown
// Show full output if explicitly set to "true" OR if GitHub Actions debug mode is enabled
const isDebugMode = process.env.ACTIONS_STEP_DEBUG === "true";
let showFullOutput = options.showFullOutput === "true" || isDebugMode;
if (isDebugMode && options.showFullOutput !== "false") {
console.log("Debug mode detected - showing full output");
showFullOutput = true;
} else if (!showFullOutput) {
console.log("Running Claude Code (full output hidden for security)...");
console.log(
"Rerun in debug mode or enable `show_full_output: true` in your workflow file for full output.",
);
}
// Capture output for parsing execution metrics
let output = "";
claudeProcess.stdout.on("data", (data) => {
const text = data.toString();
// Try to parse as JSON and handle based on verbose setting
const lines = text.split("\n");
lines.forEach((line: string, index: number) => {
if (line.trim() === "") return;
try {
// Check if this line is a JSON object
const parsed = JSON.parse(line);
const sanitizedOutput = sanitizeJsonOutput(parsed, showFullOutput);
if (sanitizedOutput) {
process.stdout.write(sanitizedOutput);
if (index < lines.length - 1 || text.endsWith("\n")) {
process.stdout.write("\n");
}
}
} catch (e) {
// Not a JSON object
if (showFullOutput) {
// In full output mode, print as is
process.stdout.write(line);
if (index < lines.length - 1 || text.endsWith("\n")) {
process.stdout.write("\n");
}
}
// In non-full-output mode, suppress non-JSON output
}
});
output += text;
});
// Handle stdout errors
claudeProcess.stdout.on("error", (error) => {
console.error("Error reading Claude stdout:", error);
});
// Pipe from named pipe to Claude
const pipeProcess = spawn("cat", [PIPE_PATH]);
pipeProcess.stdout.pipe(claudeProcess.stdin);
// Handle pipe process errors
pipeProcess.on("error", (error) => {
console.error("Error reading from named pipe:", error);
claudeProcess.kill("SIGTERM");
});
// Wait for Claude to finish
const exitCode = await new Promise<number>((resolve) => {
claudeProcess.on("close", (code) => {
resolve(code || 0);
});
claudeProcess.on("error", (error) => {
console.error("Claude process error:", error);
resolve(1);
});
});
// Clean up processes
try {
catProcess.kill("SIGTERM");
} catch (e) {
// Process may already be dead
}
try {
pipeProcess.kill("SIGTERM");
} catch (e) {
// Process may already be dead
}
// Clean up pipe file
try {
await unlink(PIPE_PATH);
} catch (e) {
// Ignore errors during cleanup
}
// Set conclusion based on exit code
if (exitCode === 0) {
// Try to process the output and save execution metrics
try {
await writeFile("output.txt", output);
// Process output.txt into JSON and save to execution file
// Increase maxBuffer from Node.js default of 1MB to 10MB to handle large Claude outputs
const { stdout: jsonOutput } = await execAsync("jq -s '.' output.txt", {
maxBuffer: 10 * 1024 * 1024,
});
await writeFile(EXECUTION_FILE, jsonOutput);
console.log(`Log saved to ${EXECUTION_FILE}`);
} catch (e) {
core.warning(`Failed to process output for execution metrics: ${e}`);
}
core.setOutput("execution_file", EXECUTION_FILE);
// Extract and set session_id
await parseAndSetSessionId(EXECUTION_FILE);
// Parse and set structured outputs only if user provided --json-schema in claude_args
if (hasJsonSchema) {
try {
await parseAndSetStructuredOutputs(EXECUTION_FILE);
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : String(error);
core.setFailed(errorMessage);
core.setOutput("conclusion", "failure");
process.exit(1);
}
}
// Set conclusion to success if we reached here
core.setOutput("conclusion", "success");
} else {
core.setOutput("conclusion", "failure");
// Still try to save execution file if we have output
if (output) {
try {
await writeFile("output.txt", output);
// Increase maxBuffer from Node.js default of 1MB to 10MB to handle large Claude outputs
const { stdout: jsonOutput } = await execAsync("jq -s '.' output.txt", {
maxBuffer: 10 * 1024 * 1024,
});
await writeFile(EXECUTION_FILE, jsonOutput);
core.setOutput("execution_file", EXECUTION_FILE);
} catch (e) {
// Ignore errors when processing output during failure
}
}
process.exit(exitCode);
}
 }

View File

@@ -1,96 +0,0 @@
#!/usr/bin/env bun
import { describe, test, expect } from "bun:test";
import { prepareRunConfig, type ClaudeOptions } from "../src/run-claude";
describe("prepareRunConfig", () => {
test("should prepare config with basic arguments", () => {
const options: ClaudeOptions = {};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should include promptPath", () => {
const options: ClaudeOptions = {};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.promptPath).toBe("/tmp/test-prompt.txt");
});
test("should use provided prompt path", () => {
const options: ClaudeOptions = {};
const prepared = prepareRunConfig("/custom/prompt/path.txt", options);
expect(prepared.promptPath).toBe("/custom/prompt/path.txt");
});
describe("claudeArgs handling", () => {
test("should parse and include custom claude arguments", () => {
const options: ClaudeOptions = {
claudeArgs: "--max-turns 10 --model claude-3-opus-20240229",
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--max-turns",
"10",
"--model",
"claude-3-opus-20240229",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should handle empty claudeArgs", () => {
const options: ClaudeOptions = {
claudeArgs: "",
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should handle claudeArgs with quoted strings", () => {
const options: ClaudeOptions = {
claudeArgs: '--system-prompt "You are a helpful assistant"',
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--system-prompt",
"You are a helpful assistant",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should include json-schema flag when provided", () => {
const options: ClaudeOptions = {
claudeArgs:
'--json-schema \'{"type":"object","properties":{"result":{"type":"boolean"}}}\'',
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toContain("--json-schema");
expect(prepared.claudeArgs).toContain(
'{"type":"object","properties":{"result":{"type":"boolean"}}}',
);
});
});
});

View File

@@ -1,227 +0,0 @@
#!/usr/bin/env bun
import { describe, test, expect, afterEach, beforeEach, spyOn } from "bun:test";
import { writeFile, unlink } from "fs/promises";
import { tmpdir } from "os";
import { join } from "path";
import {
parseAndSetStructuredOutputs,
parseAndSetSessionId,
} from "../src/run-claude";
import * as core from "@actions/core";
// Mock execution file path
const TEST_EXECUTION_FILE = join(tmpdir(), "test-execution-output.json");
// Helper to create mock execution file with structured output
async function createMockExecutionFile(
structuredOutput?: Record<string, unknown>,
includeResult: boolean = true,
): Promise<void> {
const messages: any[] = [
{ type: "system", subtype: "init" },
{ type: "turn", content: "test" },
];
if (includeResult) {
messages.push({
type: "result",
cost_usd: 0.01,
duration_ms: 1000,
structured_output: structuredOutput,
});
}
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
}
// Spy on core functions
let setOutputSpy: any;
let infoSpy: any;
let warningSpy: any;
beforeEach(() => {
setOutputSpy = spyOn(core, "setOutput").mockImplementation(() => {});
infoSpy = spyOn(core, "info").mockImplementation(() => {});
warningSpy = spyOn(core, "warning").mockImplementation(() => {});
});
describe("parseAndSetStructuredOutputs", () => {
afterEach(async () => {
setOutputSpy?.mockRestore();
infoSpy?.mockRestore();
warningSpy?.mockRestore();
try {
await unlink(TEST_EXECUTION_FILE);
} catch {
// Ignore if file doesn't exist
}
});
test("should set structured_output with valid data", async () => {
await createMockExecutionFile({
is_flaky: true,
confidence: 0.85,
summary: "Test looks flaky",
});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
expect(setOutputSpy).toHaveBeenCalledWith(
"structured_output",
'{"is_flaky":true,"confidence":0.85,"summary":"Test looks flaky"}',
);
expect(infoSpy).toHaveBeenCalledWith(
"Set structured_output with 3 field(s)",
);
});
test("should handle arrays and nested objects", async () => {
await createMockExecutionFile({
items: ["a", "b", "c"],
config: { key: "value", nested: { deep: true } },
});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
const callArgs = setOutputSpy.mock.calls[0];
expect(callArgs[0]).toBe("structured_output");
const parsed = JSON.parse(callArgs[1]);
expect(parsed).toEqual({
items: ["a", "b", "c"],
config: { key: "value", nested: { deep: true } },
});
});
test("should handle special characters in field names", async () => {
await createMockExecutionFile({
"test-result": "passed",
"item.count": 10,
"user@email": "test",
});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
const callArgs = setOutputSpy.mock.calls[0];
const parsed = JSON.parse(callArgs[1]);
expect(parsed["test-result"]).toBe("passed");
expect(parsed["item.count"]).toBe(10);
expect(parsed["user@email"]).toBe("test");
});
test("should throw error when result exists but structured_output is undefined", async () => {
const messages = [
{ type: "system", subtype: "init" },
{ type: "result", cost_usd: 0.01, duration_ms: 1000 },
];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await expect(
parseAndSetStructuredOutputs(TEST_EXECUTION_FILE),
).rejects.toThrow(
"--json-schema was provided but Claude did not return structured_output",
);
});
test("should throw error when no result message exists", async () => {
const messages = [
{ type: "system", subtype: "init" },
{ type: "turn", content: "test" },
];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await expect(
parseAndSetStructuredOutputs(TEST_EXECUTION_FILE),
).rejects.toThrow(
"--json-schema was provided but Claude did not return structured_output",
);
});
test("should throw error with malformed JSON", async () => {
await writeFile(TEST_EXECUTION_FILE, "{ invalid json");
await expect(
parseAndSetStructuredOutputs(TEST_EXECUTION_FILE),
).rejects.toThrow();
});
test("should throw error when file does not exist", async () => {
await expect(
parseAndSetStructuredOutputs("/nonexistent/file.json"),
).rejects.toThrow();
});
test("should handle empty structured_output object", async () => {
await createMockExecutionFile({});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
expect(setOutputSpy).toHaveBeenCalledWith("structured_output", "{}");
expect(infoSpy).toHaveBeenCalledWith(
"Set structured_output with 0 field(s)",
);
});
});
describe("parseAndSetSessionId", () => {
afterEach(async () => {
setOutputSpy?.mockRestore();
infoSpy?.mockRestore();
warningSpy?.mockRestore();
try {
await unlink(TEST_EXECUTION_FILE);
} catch {
// Ignore if file doesn't exist
}
});
test("should extract session_id from system.init message", async () => {
const messages = [
{ type: "system", subtype: "init", session_id: "test-session-123" },
{ type: "result", cost_usd: 0.01 },
];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await parseAndSetSessionId(TEST_EXECUTION_FILE);
expect(setOutputSpy).toHaveBeenCalledWith("session_id", "test-session-123");
expect(infoSpy).toHaveBeenCalledWith("Set session_id: test-session-123");
});
test("should handle missing session_id gracefully", async () => {
const messages = [
{ type: "system", subtype: "init" },
{ type: "result", cost_usd: 0.01 },
];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await parseAndSetSessionId(TEST_EXECUTION_FILE);
expect(setOutputSpy).not.toHaveBeenCalled();
});
test("should handle missing system.init message gracefully", async () => {
const messages = [{ type: "result", cost_usd: 0.01 }];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await parseAndSetSessionId(TEST_EXECUTION_FILE);
expect(setOutputSpy).not.toHaveBeenCalled();
});
test("should handle malformed JSON gracefully with warning", async () => {
await writeFile(TEST_EXECUTION_FILE, "{ invalid json");
await parseAndSetSessionId(TEST_EXECUTION_FILE);
expect(setOutputSpy).not.toHaveBeenCalled();
expect(warningSpy).toHaveBeenCalled();
});
test("should handle non-existent file gracefully with warning", async () => {
await parseAndSetSessionId("/nonexistent/file.json");
expect(setOutputSpy).not.toHaveBeenCalled();
expect(warningSpy).toHaveBeenCalled();
});
});

View File

@@ -7,7 +7,7 @@
     "dependencies": {
       "@actions/core": "^1.10.1",
       "@actions/github": "^6.0.1",
-      "@anthropic-ai/claude-agent-sdk": "^0.2.11",
+      "@anthropic-ai/claude-agent-sdk": "^0.2.20",
       "@modelcontextprotocol/sdk": "^1.11.0",
       "@octokit/graphql": "^8.2.2",
       "@octokit/rest": "^21.1.1",
@@ -37,7 +37,7 @@
     "@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
-    "@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.11", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-50q4vfh57HYTUrwukULp3gvSaOZfRF5zukxhWvW6mmUHuD8+Xwhqn59sZhoDz9ZUw6Um+1lUCgWpJw5/TTMn5w=="],
+    "@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.20", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-Q2rJlYC2hEhJRKcOswJrcvm0O6H/uhXkRPAAqbAlFR/jbCWeg6jpyr9iUmVBFUFOBzAWqT2C6KLHiTJ8NySvQg=="],
     "@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],

View File

@@ -172,9 +172,14 @@ jobs:
 **Important Notes**:
-- The GitHub token must have the `actions: read` permission in your workflow
+- The GitHub token must have the corresponding permission in your workflow
 - If the permission is missing, Claude will warn you and suggest adding it
-- Currently, only `actions: read` is supported, but the format allows for future extensions
+- The following additional permissions can be requested beyond the defaults:
+  - `actions: read`
+  - `checks: read`
+  - `discussions: read` or `discussions: write`
+  - `workflows: read` or `workflows: write`
+- Standard permissions (`contents: write`, `pull_requests: write`, `issues: write`) are always included and do not need to be specified
 ## Custom Environment Variables

View File

@@ -127,7 +127,7 @@ For performance, Claude uses shallow clones:
 If you need full history, you can configure this in your workflow before calling Claude in the `actions/checkout` step.
 ```
-- uses: actions/checkout@v5
+- uses: actions/checkout@v6
   depth: 0 # will fetch full repo history
 ```

View File

@@ -35,7 +35,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           fetch-depth: 1
@@ -89,7 +89,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
        with:
           fetch-depth: 1
@@ -153,7 +153,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           fetch-depth: 1
@@ -211,7 +211,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           fetch-depth: 1
@@ -268,7 +268,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           fetch-depth: 1
@@ -344,7 +344,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           fetch-depth: 0
@@ -456,7 +456,7 @@ jobs:
       pull-requests: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           ref: ${{ github.event.pull_request.head.ref }}
           fetch-depth: 0
@@ -513,7 +513,7 @@ jobs:
       security-events: write
       id-token: write
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -22,7 +22,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout code
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           ref: ${{ github.event.workflow_run.head_branch }}
           fetch-depth: 0

View File

@@ -26,7 +26,7 @@ jobs:
       actions: read # Required for Claude to read CI results on PRs
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -15,7 +15,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -14,7 +14,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0

View File

@@ -23,7 +23,7 @@ jobs:
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 2 # Need at least 2 commits to analyze the latest

View File

@@ -16,7 +16,7 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -18,7 +18,7 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -19,7 +19,7 @@ jobs:
       id-token: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 1

View File

@@ -12,7 +12,7 @@
   "dependencies": {
     "@actions/core": "^1.10.1",
     "@actions/github": "^6.0.1",
-    "@anthropic-ai/claude-agent-sdk": "^0.2.11",
+    "@anthropic-ai/claude-agent-sdk": "^0.2.20",
     "@modelcontextprotocol/sdk": "^1.11.0",
     "@octokit/graphql": "^8.2.2",
     "@octokit/rest": "^21.1.1",

View File

@@ -98,6 +98,8 @@ type BaseContext = {
     allowedNonWriteUsers: string;
     trackProgress: boolean;
     includeFixLinks: boolean;
+    includeCommentsByActor: string;
+    excludeCommentsByActor: string;
   };
 };
@@ -156,6 +158,8 @@ export function parseGitHubContext(): GitHubContext {
       allowedNonWriteUsers: process.env.ALLOWED_NON_WRITE_USERS ?? "",
       trackProgress: process.env.TRACK_PROGRESS === "true",
       includeFixLinks: process.env.INCLUDE_FIX_LINKS === "true",
+      includeCommentsByActor: process.env.INCLUDE_COMMENTS_BY_ACTOR ?? "",
+      excludeCommentsByActor: process.env.EXCLUDE_COMMENTS_BY_ACTOR ?? "",
     },
   };

View File

@@ -20,6 +20,10 @@ import type {
 } from "../types";
 import type { CommentWithImages } from "../utils/image-downloader";
 import { downloadCommentImages } from "../utils/image-downloader";
+import {
+  parseActorFilter,
+  shouldIncludeCommentByActor,
+} from "../utils/actor-filter";
 /**
  * Extracts the trigger timestamp from the GitHub webhook payload.
@@ -166,6 +170,35 @@ export function isBodySafeToUse(
   return true;
 }
/**
* Filters comments by actor username based on include/exclude patterns
* @param comments - Array of comments to filter
* @param includeActors - Comma-separated actors to include
* @param excludeActors - Comma-separated actors to exclude
* @returns Filtered array of comments
*/
export function filterCommentsByActor<T extends { author: { login: string } }>(
comments: T[],
includeActors: string = "",
excludeActors: string = "",
): T[] {
const includeParsed = parseActorFilter(includeActors);
const excludeParsed = parseActorFilter(excludeActors);
// No filters = return all
if (includeParsed.length === 0 && excludeParsed.length === 0) {
return comments;
}
return comments.filter((comment) =>
shouldIncludeCommentByActor(
comment.author.login,
includeParsed,
excludeParsed,
),
);
}
 type FetchDataParams = {
   octokits: Octokits;
   repository: string;
@@ -174,6 +207,8 @@ type FetchDataParams = {
   triggerUsername?: string;
   triggerTime?: string;
   originalTitle?: string;
+  includeCommentsByActor?: string;
+  excludeCommentsByActor?: string;
 };
 export type GitHubFileWithSHA = GitHubFile & {
@@ -198,6 +233,8 @@ export async function fetchGitHubData({
   triggerUsername,
   triggerTime,
   originalTitle,
+  includeCommentsByActor,
+  excludeCommentsByActor,
 }: FetchDataParams): Promise<FetchDataResult> {
   const [owner, repo] = repository.split("/");
   if (!owner || !repo) {
@@ -225,9 +262,13 @@ export async function fetchGitHubData({
       const pullRequest = prResult.repository.pullRequest;
       contextData = pullRequest;
       changedFiles = pullRequest.files.nodes || [];
-      comments = filterCommentsToTriggerTime(
-        pullRequest.comments?.nodes || [],
-        triggerTime,
-      );
+      comments = filterCommentsByActor(
+        filterCommentsToTriggerTime(
+          pullRequest.comments?.nodes || [],
+          triggerTime,
+        ),
+        includeCommentsByActor,
+        excludeCommentsByActor,
+      );
       reviewData = pullRequest.reviews || [];
@@ -248,9 +289,13 @@ export async function fetchGitHubData({
       if (issueResult.repository.issue) {
         contextData = issueResult.repository.issue;
-        comments = filterCommentsToTriggerTime(
-          contextData?.comments?.nodes || [],
-          triggerTime,
-        );
+        comments = filterCommentsByActor(
+          filterCommentsToTriggerTime(
+            contextData?.comments?.nodes || [],
+            triggerTime,
+          ),
+          includeCommentsByActor,
+          excludeCommentsByActor,
+        );
         console.log(`Successfully fetched issue #${prNumber} data`);
@@ -318,7 +363,27 @@ export async function fetchGitHubData({
         body: r.body,
       }));
-      // Filter review comments to trigger time
+      // Filter review comments to trigger time and by actor
if (reviewData && reviewData.nodes) {
// Filter reviews by actor
reviewData.nodes = filterCommentsByActor(
reviewData.nodes,
includeCommentsByActor,
excludeCommentsByActor,
);
// Also filter inline review comments within each review
reviewData.nodes.forEach((review) => {
if (review.comments?.nodes) {
review.comments.nodes = filterCommentsByActor(
review.comments.nodes,
includeCommentsByActor,
excludeCommentsByActor,
);
}
});
}
     const allReviewComments =
       reviewData?.nodes?.flatMap((r) => r.comments?.nodes ?? []) ?? [];
     const filteredReviewComments = filterCommentsToTriggerTime(

View File

@@ -16,15 +16,60 @@ async function getOidcToken(): Promise<string> {
   }
 }
-async function exchangeForAppToken(oidcToken: string): Promise<string> {
+const DEFAULT_PERMISSIONS: Record<string, string> = {
contents: "write",
pull_requests: "write",
issues: "write",
};
export function parseAdditionalPermissions():
| Record<string, string>
| undefined {
const raw = process.env.ADDITIONAL_PERMISSIONS;
if (!raw || !raw.trim()) {
return undefined;
}
const additional: Record<string, string> = {};
for (const line of raw.split("\n")) {
const trimmed = line.trim();
if (!trimmed) continue;
const colonIndex = trimmed.indexOf(":");
if (colonIndex === -1) continue;
const key = trimmed.slice(0, colonIndex).trim();
const value = trimmed.slice(colonIndex + 1).trim();
if (key && value) {
additional[key] = value;
}
}
if (Object.keys(additional).length === 0) {
return undefined;
}
return { ...DEFAULT_PERMISSIONS, ...additional };
}
async function exchangeForAppToken(
oidcToken: string,
permissions?: Record<string, string>,
): Promise<string> {
const headers: Record<string, string> = {
Authorization: `Bearer ${oidcToken}`,
};
const fetchOptions: RequestInit = {
method: "POST",
headers,
};
if (permissions) {
headers["Content-Type"] = "application/json";
fetchOptions.body = JSON.stringify({ permissions });
}
const response = await fetch(
"https://api.anthropic.com/api/github/github-app-token-exchange",
fetchOptions,
);
if (!response.ok) {
@@ -89,9 +134,11 @@ export async function setupGitHubToken(): Promise<string> {
const oidcToken = await retryWithBackoff(() => getOidcToken());
console.log("OIDC token successfully obtained");
const permissions = parseAdditionalPermissions();
console.log("Exchanging OIDC token for app token...");
const appToken = await retryWithBackoff(() =>
exchangeForAppToken(oidcToken, permissions),
);
console.log("App token successfully obtained");


@@ -0,0 +1,65 @@
/**
* Parses actor filter string into array of patterns
* @param filterString - Comma-separated actor names (e.g., "user1,user2,*[bot]")
* @returns Array of actor patterns
*/
export function parseActorFilter(filterString: string): string[] {
if (!filterString.trim()) return [];
return filterString
.split(",")
.map((actor) => actor.trim())
.filter((actor) => actor.length > 0);
}
/**
* Checks if an actor matches a pattern
* Supports wildcards: "*[bot]" matches all bots, "dependabot[bot]" matches specific
* @param actor - Actor username to check
* @param pattern - Pattern to match against
* @returns true if actor matches pattern
*/
export function actorMatchesPattern(actor: string, pattern: string): boolean {
// Exact match
if (actor === pattern) return true;
// Wildcard bot pattern: "*[bot]" matches any username ending with [bot]
if (pattern === "*[bot]" && actor.endsWith("[bot]")) return true;
// No match
return false;
}
/**
* Determines if a comment should be included based on actor filters
* @param actor - Comment author username
* @param includeActors - Array of actors to include (empty = include all)
* @param excludeActors - Array of actors to exclude (empty = exclude none)
* @returns true if comment should be included
*/
export function shouldIncludeCommentByActor(
actor: string,
includeActors: string[],
excludeActors: string[],
): boolean {
// Check exclusion first (exclusion takes priority)
if (excludeActors.length > 0) {
for (const pattern of excludeActors) {
if (actorMatchesPattern(actor, pattern)) {
return false; // Excluded
}
}
}
// Check inclusion
if (includeActors.length > 0) {
for (const pattern of includeActors) {
if (actorMatchesPattern(actor, pattern)) {
return true; // Explicitly included
}
}
return false; // Not in include list
}
// No filters or passed all checks
return true;
}
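A brief usage sketch tying these helpers together; the author data is hypothetical, and the import path matches the test file below.

import {
  parseActorFilter,
  shouldIncludeCommentByActor,
} from "../src/github/utils/actor-filter";

// Drop every bot comment, keep human authors.
const include = parseActorFilter("");        // [] -> no include restriction
const exclude = parseActorFilter("*[bot]");  // ["*[bot]"]

const authors = ["alice", "dependabot[bot]", "renovate[bot]"];
const kept = authors.filter((login) =>
  shouldIncludeCommentByActor(login, include, exclude),
);
// -> ["alice"]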


@@ -89,6 +89,8 @@ export const tagMode: Mode = {
triggerUsername: context.actor,
triggerTime,
originalTitle,
includeCommentsByActor: context.inputs.includeCommentsByActor,
excludeCommentsByActor: context.inputs.excludeCommentsByActor,
});
// Setup branch

test/actor-filter.test.ts (new file, 172 lines)

@@ -0,0 +1,172 @@
import { describe, expect, test } from "bun:test";
import {
parseActorFilter,
actorMatchesPattern,
shouldIncludeCommentByActor,
} from "../src/github/utils/actor-filter";
describe("parseActorFilter", () => {
test("parses comma-separated actors", () => {
expect(parseActorFilter("user1,user2,bot[bot]")).toEqual([
"user1",
"user2",
"bot[bot]",
]);
});
test("handles empty string", () => {
expect(parseActorFilter("")).toEqual([]);
});
test("handles whitespace-only string", () => {
expect(parseActorFilter(" ")).toEqual([]);
});
test("trims whitespace", () => {
expect(parseActorFilter(" user1 , user2 ")).toEqual(["user1", "user2"]);
});
test("filters out empty entries", () => {
expect(parseActorFilter("user1,,user2")).toEqual(["user1", "user2"]);
});
test("handles single actor", () => {
expect(parseActorFilter("user1")).toEqual(["user1"]);
});
test("handles wildcard bot pattern", () => {
expect(parseActorFilter("*[bot]")).toEqual(["*[bot]"]);
});
});
describe("actorMatchesPattern", () => {
test("matches exact username", () => {
expect(actorMatchesPattern("john-doe", "john-doe")).toBe(true);
});
test("does not match different username", () => {
expect(actorMatchesPattern("john-doe", "jane-doe")).toBe(false);
});
test("matches wildcard bot pattern", () => {
expect(actorMatchesPattern("dependabot[bot]", "*[bot]")).toBe(true);
expect(actorMatchesPattern("renovate[bot]", "*[bot]")).toBe(true);
expect(actorMatchesPattern("github-actions[bot]", "*[bot]")).toBe(true);
});
test("does not match non-bot with wildcard", () => {
expect(actorMatchesPattern("john-doe", "*[bot]")).toBe(false);
expect(actorMatchesPattern("user-bot", "*[bot]")).toBe(false);
});
test("matches specific bot", () => {
expect(actorMatchesPattern("dependabot[bot]", "dependabot[bot]")).toBe(
true,
);
expect(actorMatchesPattern("renovate[bot]", "renovate[bot]")).toBe(true);
});
test("does not match different specific bot", () => {
expect(actorMatchesPattern("dependabot[bot]", "renovate[bot]")).toBe(false);
});
test("is case sensitive", () => {
expect(actorMatchesPattern("User1", "user1")).toBe(false);
expect(actorMatchesPattern("user1", "User1")).toBe(false);
});
});
describe("shouldIncludeCommentByActor", () => {
test("includes all when no filters", () => {
expect(shouldIncludeCommentByActor("user1", [], [])).toBe(true);
expect(shouldIncludeCommentByActor("bot[bot]", [], [])).toBe(true);
});
test("excludes when in exclude list", () => {
expect(shouldIncludeCommentByActor("bot[bot]", [], ["*[bot]"])).toBe(false);
expect(shouldIncludeCommentByActor("user1", [], ["user1"])).toBe(false);
});
test("includes when not in exclude list", () => {
expect(shouldIncludeCommentByActor("user1", [], ["user2"])).toBe(true);
expect(shouldIncludeCommentByActor("user1", [], ["*[bot]"])).toBe(true);
});
test("includes when in include list", () => {
expect(shouldIncludeCommentByActor("user1", ["user1", "user2"], [])).toBe(
true,
);
expect(shouldIncludeCommentByActor("user2", ["user1", "user2"], [])).toBe(
true,
);
});
test("excludes when not in include list", () => {
expect(shouldIncludeCommentByActor("user3", ["user1", "user2"], [])).toBe(
false,
);
});
test("exclusion takes priority over inclusion", () => {
expect(shouldIncludeCommentByActor("user1", ["user1"], ["user1"])).toBe(
false,
);
expect(
shouldIncludeCommentByActor("bot[bot]", ["*[bot]"], ["*[bot]"]),
).toBe(false);
});
test("handles wildcard in include list", () => {
expect(shouldIncludeCommentByActor("dependabot[bot]", ["*[bot]"], [])).toBe(
true,
);
expect(shouldIncludeCommentByActor("renovate[bot]", ["*[bot]"], [])).toBe(
true,
);
expect(shouldIncludeCommentByActor("user1", ["*[bot]"], [])).toBe(false);
});
test("handles wildcard in exclude list", () => {
expect(shouldIncludeCommentByActor("dependabot[bot]", [], ["*[bot]"])).toBe(
false,
);
expect(shouldIncludeCommentByActor("renovate[bot]", [], ["*[bot]"])).toBe(
false,
);
expect(shouldIncludeCommentByActor("user1", [], ["*[bot]"])).toBe(true);
});
test("handles mixed include and exclude lists", () => {
// Include user1 and user2, but exclude user2
expect(
shouldIncludeCommentByActor("user1", ["user1", "user2"], ["user2"]),
).toBe(true);
expect(
shouldIncludeCommentByActor("user2", ["user1", "user2"], ["user2"]),
).toBe(false);
expect(
shouldIncludeCommentByActor("user3", ["user1", "user2"], ["user2"]),
).toBe(false);
});
test("handles complex bot filtering", () => {
// Include all bots but exclude dependabot
expect(
shouldIncludeCommentByActor(
"renovate[bot]",
["*[bot]"],
["dependabot[bot]"],
),
).toBe(true);
expect(
shouldIncludeCommentByActor(
"dependabot[bot]",
["*[bot]"],
["dependabot[bot]"],
),
).toBe(false);
expect(
shouldIncludeCommentByActor("user1", ["*[bot]"], ["dependabot[bot]"]),
).toBe(false);
});
});


@@ -1,4 +1,4 @@
import { describe, expect, it, jest, test } from "bun:test";
import {
extractTriggerTimestamp,
extractOriginalTitle,
@@ -1100,3 +1100,101 @@ describe("fetchGitHubData integration with time filtering", () => {
);
});
});
describe("filterCommentsByActor", () => {
test("filters out excluded actors", () => {
const comments = [
{ author: { login: "user1" }, body: "comment1" },
{ author: { login: "bot[bot]" }, body: "comment2" },
{ author: { login: "user2" }, body: "comment3" },
];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "", "*[bot]");
expect(filtered).toHaveLength(2);
expect(filtered.map((c: any) => c.author.login)).toEqual([
"user1",
"user2",
]);
});
test("includes only specified actors", () => {
const comments = [
{ author: { login: "user1" }, body: "comment1" },
{ author: { login: "user2" }, body: "comment2" },
{ author: { login: "user3" }, body: "comment3" },
];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "user1,user2", "");
expect(filtered).toHaveLength(2);
expect(filtered.map((c: any) => c.author.login)).toEqual([
"user1",
"user2",
]);
});
test("returns all when no filters", () => {
const comments = [
{ author: { login: "user1" }, body: "comment1" },
{ author: { login: "user2" }, body: "comment2" },
];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "", "");
expect(filtered).toHaveLength(2);
});
test("exclusion takes priority", () => {
const comments = [
{ author: { login: "user1" }, body: "comment1" },
{ author: { login: "user2" }, body: "comment2" },
];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "user1,user2", "user1");
expect(filtered).toHaveLength(1);
expect(filtered[0].author.login).toBe("user2");
});
test("filters multiple bot types", () => {
const comments = [
{ author: { login: "user1" }, body: "comment1" },
{ author: { login: "dependabot[bot]" }, body: "comment2" },
{ author: { login: "renovate[bot]" }, body: "comment3" },
{ author: { login: "user2" }, body: "comment4" },
];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "", "*[bot]");
expect(filtered).toHaveLength(2);
expect(filtered.map((c: any) => c.author.login)).toEqual([
"user1",
"user2",
]);
});
test("filters specific bot only", () => {
const comments = [
{ author: { login: "dependabot[bot]" }, body: "comment1" },
{ author: { login: "renovate[bot]" }, body: "comment2" },
{ author: { login: "user1" }, body: "comment3" },
];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "", "dependabot[bot]");
expect(filtered).toHaveLength(2);
expect(filtered.map((c: any) => c.author.login)).toEqual([
"renovate[bot]",
"user1",
]);
});
test("handles empty comment array", () => {
const comments: any[] = [];
const { filterCommentsByActor } = require("../src/github/data/fetcher");
const filtered = filterCommentsByActor(comments, "user1", "");
expect(filtered).toHaveLength(0);
});
});


@@ -39,6 +39,8 @@ describe("prepareMcpConfig", () => {
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
includeCommentsByActor: "",
excludeCommentsByActor: "",
},
};


@@ -27,6 +27,8 @@ const defaultInputs = {
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
includeCommentsByActor: "",
excludeCommentsByActor: "",
};
const defaultRepository = {
@@ -55,7 +57,12 @@ export const createMockContext = (
};
const mergedInputs = overrides.inputs
? {
...defaultInputs,
...overrides.inputs,
includeCommentsByActor: overrides.inputs.includeCommentsByActor ?? "",
excludeCommentsByActor: overrides.inputs.excludeCommentsByActor ?? "",
}
: defaultInputs;
return { ...baseContext, ...overrides, inputs: mergedInputs };
@@ -79,7 +86,12 @@ export const createMockAutomationContext = (
};
const mergedInputs = overrides.inputs
? {
...defaultInputs,
...overrides.inputs,
includeCommentsByActor: overrides.inputs.includeCommentsByActor ?? "",
excludeCommentsByActor: overrides.inputs.excludeCommentsByActor ?? "",
}
: { ...defaultInputs };
return { ...baseContext, ...overrides, inputs: mergedInputs };


@@ -27,6 +27,8 @@ describe("detectMode with enhanced routing", () => {
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
includeCommentsByActor: "",
excludeCommentsByActor: "",
},
};


@@ -0,0 +1,97 @@
import { describe, expect, test, beforeEach, afterEach } from "bun:test";
import { parseAdditionalPermissions } from "../src/github/token";
describe("parseAdditionalPermissions", () => {
let originalEnv: string | undefined;
beforeEach(() => {
originalEnv = process.env.ADDITIONAL_PERMISSIONS;
});
afterEach(() => {
if (originalEnv === undefined) {
delete process.env.ADDITIONAL_PERMISSIONS;
} else {
process.env.ADDITIONAL_PERMISSIONS = originalEnv;
}
});
test("returns undefined when env var is not set", () => {
delete process.env.ADDITIONAL_PERMISSIONS;
expect(parseAdditionalPermissions()).toBeUndefined();
});
test("returns undefined when env var is empty string", () => {
process.env.ADDITIONAL_PERMISSIONS = "";
expect(parseAdditionalPermissions()).toBeUndefined();
});
test("returns undefined when env var is only whitespace", () => {
process.env.ADDITIONAL_PERMISSIONS = " \n \n ";
expect(parseAdditionalPermissions()).toBeUndefined();
});
test("parses single permission and merges with defaults", () => {
process.env.ADDITIONAL_PERMISSIONS = "actions: read";
expect(parseAdditionalPermissions()).toEqual({
contents: "write",
pull_requests: "write",
issues: "write",
actions: "read",
});
});
test("parses multiple permissions", () => {
process.env.ADDITIONAL_PERMISSIONS = "actions: read\nworkflows: write";
expect(parseAdditionalPermissions()).toEqual({
contents: "write",
pull_requests: "write",
issues: "write",
actions: "read",
workflows: "write",
});
});
test("additional permissions can override defaults", () => {
process.env.ADDITIONAL_PERMISSIONS = "contents: read";
expect(parseAdditionalPermissions()).toEqual({
contents: "read",
pull_requests: "write",
issues: "write",
});
});
test("handles extra whitespace around keys and values", () => {
process.env.ADDITIONAL_PERMISSIONS = " actions : read ";
expect(parseAdditionalPermissions()).toEqual({
contents: "write",
pull_requests: "write",
issues: "write",
actions: "read",
});
});
test("skips empty lines", () => {
process.env.ADDITIONAL_PERMISSIONS =
"actions: read\n\n\nworkflows: write\n\n";
expect(parseAdditionalPermissions()).toEqual({
contents: "write",
pull_requests: "write",
issues: "write",
actions: "read",
workflows: "write",
});
});
test("skips lines without colons", () => {
process.env.ADDITIONAL_PERMISSIONS =
"actions: read\ninvalid line\nworkflows: write";
expect(parseAdditionalPermissions()).toEqual({
contents: "write",
pull_requests: "write",
issues: "write",
actions: "read",
workflows: "write",
});
});
});


@@ -75,6 +75,8 @@ describe("checkWritePermissions", () => {
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
includeCommentsByActor: "",
excludeCommentsByActor: "",
},
});