Compare commits

...

48 Commits

Author SHA1 Message Date
GitHub Actions
8341a564b0 chore: bump Claude Code to 2.1.17 and Agent SDK to 0.2.17 2026-01-22 21:49:14 +00:00
GitHub Actions
2804b4174b chore: bump Claude Code to 2.1.16 and Agent SDK to 0.2.16 2026-01-22 20:08:26 +00:00
GitHub Actions
2316a9a8db chore: bump Claude Code to 2.1.15 and Agent SDK to 0.2.15 2026-01-21 22:00:12 +00:00
Ashwin Bhat
49cfcf8107 refactor: remove CLI path, use Agent SDK exclusively (#849)
* refactor: remove CLI path, use Agent SDK exclusively

- Remove CLI-based Claude execution in favor of Agent SDK
- Delete prepareRunConfig, parseAndSetSessionId, parseAndSetStructuredOutputs functions
- Remove named pipe IPC and sanitizeJsonOutput helper
- Remove test-agent-sdk job from test-base-action workflow (SDK is now default)
- Delete run-claude.test.ts and structured-output.test.ts (testing removed CLI code)
- Update CLAUDE.md to remove named pipe references

Co-Authored-By: Claude <noreply@anthropic.com>
Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 2
Claude-Permission-Prompts: 1
Claude-Escapes: 0
Claude-Plan:
<claude-plan>
# Plan: Remove Non-Agent SDK Code Path

## Overview
Since `use_agent_sdk` defaults to `true`, remove the legacy CLI code path entirely from `base-action/src/run-claude.ts`.

## Files to Modify

### 1. `base-action/src/run-claude.ts` - Main Cleanup

**Remove imports:**
- `exec` from `child_process`
- `promisify` from `util`
- `unlink`, `writeFile`, `stat` from `fs/promises` (keep `readFile` - check if needed)
- `createWriteStream` from `fs`
- `spawn` from `child_process`
- `parseShellArgs` from `shell-quote` (still used in `parse-sdk-options.ts`, keep package)

**Remove constants:**
- `execAsync`
- `PIPE_PATH`
- `EXECUTION_FILE` (defined in both files, keep in SDK file)
- `BASE_ARGS`

**Remove types:**
- `PreparedConfig` type (lines 85-89) - only used by `prepareRunConfig()`

**Remove functions:**
- `sanitizeJsonOutput()` (lines 21-68)
- `prepareRunConfig()` (lines 91-125) - also remove export
- `parseAndSetSessionId()` (lines 131-155) - also remove export
- `parseAndSetStructuredOutputs()` (lines 162-197) - also remove export

**Simplify `runClaude()`:**
- Remove `useAgentSdk` flag check and logging (lines 200-204)
- Remove the `if (useAgentSdk)` block, make SDK call direct
- Remove entire CLI path (lines 211-438)
- Resulting function becomes just:
  ```typescript
  export async function runClaude(promptPath: string, options: ClaudeOptions) {
    const parsedOptions = parseSdkOptions(options);
    return runClaudeWithSdk(promptPath, parsedOptions);
  }
  ```

### 2. Delete Test Files

**`base-action/test/run-claude.test.ts`:**
- Delete entire file (only tests `prepareRunConfig()`)

**`base-action/test/structured-output.test.ts`:**
- Delete entire file (only tests `parseAndSetStructuredOutputs()` and `parseAndSetSessionId()`)

### 3. Workflow Update

**`.github/workflows/test-base-action.yml`:**
- Remove `test-agent-sdk` job (lines 120-176) - redundant now

### 4. Documentation Update

**`base-action/CLAUDE.md`:**
- Line 30: Remove "- Named pipes for IPC between prompt input and Claude process"
- Line 57: Remove "- Uses `mkfifo` to create named pipes for prompt input"

## Verification
1. Run `bun run typecheck` to ensure no type errors
2. Run `bun test` to ensure remaining tests pass
3. Run `bun run format` to fix any formatting issues
</claude-plan>

* fix: address PR review comments

- Add session_id output handling in run-claude-sdk.ts (critical)
- Remove unused claudeEnv parameter from ClaudeOptions and index.ts
- Update stale CLI path comment in parse-sdk-options.ts

Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 0
Claude-Permission-Prompts: 0
Claude-Escapes: 0
Claude-Plan:
<claude-plan>
# Plan: Remove Non-Agent SDK Code Path

## Overview
Since `use_agent_sdk` defaults to `true`, remove the legacy CLI code path entirely from `base-action/src/run-claude.ts`.

## Files to Modify

### 1. `base-action/src/run-claude.ts` - Main Cleanup

**Remove imports:**
- `exec` from `child_process`
- `promisify` from `util`
- `unlink`, `writeFile`, `stat` from `fs/promises` (keep `readFile` - check if needed)
- `createWriteStream` from `fs`
- `spawn` from `child_process`
- `parseShellArgs` from `shell-quote` (still used in `parse-sdk-options.ts`, keep package)

**Remove constants:**
- `execAsync`
- `PIPE_PATH`
- `EXECUTION_FILE` (defined in both files, keep in SDK file)
- `BASE_ARGS`

**Remove types:**
- `PreparedConfig` type (lines 85-89) - only used by `prepareRunConfig()`

**Remove functions:**
- `sanitizeJsonOutput()` (lines 21-68)
- `prepareRunConfig()` (lines 91-125) - also remove export
- `parseAndSetSessionId()` (lines 131-155) - also remove export
- `parseAndSetStructuredOutputs()` (lines 162-197) - also remove export

**Simplify `runClaude()`:**
- Remove `useAgentSdk` flag check and logging (lines 200-204)
- Remove the `if (useAgentSdk)` block, make SDK call direct
- Remove entire CLI path (lines 211-438)
- Resulting function becomes just:
  ```typescript
  export async function runClaude(promptPath: string, options: ClaudeOptions) {
    const parsedOptions = parseSdkOptions(options);
    return runClaudeWithSdk(promptPath, parsedOptions);
  }
  ```

### 2. Delete Test Files

**`base-action/test/run-claude.test.ts`:**
- Delete entire file (only tests `prepareRunConfig()`)

**`base-action/test/structured-output.test.ts`:**
- Delete entire file (only tests `parseAndSetStructuredOutputs()` and `parseAndSetSessionId()`)

### 3. Workflow Update

**`.github/workflows/test-base-action.yml`:**
- Remove `test-agent-sdk` job (lines 120-176) - redundant now

### 4. Documentation Update

**`base-action/CLAUDE.md`:**
- Line 30: Remove "- Named pipes for IPC between prompt input and Claude process"
- Line 57: Remove "- Uses `mkfifo` to create named pipes for prompt input"

## Verification
1. Run `bun run typecheck` to ensure no type errors
2. Run `bun test` to ensure remaining tests pass
3. Run `bun run format` to fix any formatting issues
</claude-plan>
2026-01-20 16:00:23 -08:00
Ashwin Bhat
e208124d29 chore: bump Bun to 1.3.6 and setup-bun action to v2.1.2 (#848)
Claude-Generated-By: Claude Code (cli/claude=100%)
Claude-Steers: 1
Claude-Permission-Prompts: 5
Claude-Escapes: 1
2026-01-20 14:06:49 -08:00
Ashwin Bhat
ba60ef7ba2 Consolidate CI workflows into a single entry point (#836)
* refactor: consolidate CI workflows with ci-all.yml orchestrator

- Add ci-all.yml to orchestrate all CI workflows on push to main
- Update individual workflows to use workflow_call for reusability
- Remove redundant push triggers from individual test workflows
- Update release.yml to trigger on CI All workflow completion
- Auto-release on version bump commits after CI passes

Co-Authored-By: Claude <noreply@anthropic.com>
Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 8
Claude-Permission-Prompts: 1
Claude-Escapes: 0

* address security review comments

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-20 11:58:13 -08:00
GitHub Actions
f3c892ca8d chore: bump Claude Code to 2.1.11 and Agent SDK to 0.2.11 2026-01-17 01:44:05 +00:00
Ashwin Bhat
6e896a06bb fix: ensure SSH signing key has trailing newline (#834)
ssh-keygen requires a trailing newline to parse private keys correctly.
Without it, git signing fails with the confusing error:
'Couldn't load public key: No such file or directory?'

This normalizes the key to always end with a newline before writing.
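A minimal sketch of that normalization (illustrative only, not the action's exact code):

```typescript
// Ensure the SSH private key ends with exactly the trailing newline that
// ssh-keygen needs; append one only if it is missing.
function normalizeSigningKey(key: string): string {
  return key.endsWith("\n") ? key : `${key}\n`;
}
```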
2026-01-16 14:44:22 -08:00
Ashwin Bhat
a017b830c0 chore: comment out release-base-action job in release workflow (#833)
Temporarily disable the release-base-action job that syncs releases
to the claude-code-base-action repository. The job checkout step
and subsequent tag/release creation steps are now commented out.


Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 2
Claude-Permission-Prompts: 2
Claude-Escapes: 0

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-16 14:16:27 -08:00
GitHub Actions
75f52e56b2 chore: bump Claude Code to 2.1.9 and Agent SDK to 0.2.9 2026-01-16 02:18:38 +00:00
Ashwin Bhat
1bbc9e7ff7 fix: add checkHumanActor to agent mode (#826)
Fixes issue #641 where users were getting banned due to rapid successive
Claude runs triggered by the synchronize event.

Changes:
- Add checkHumanActor call to agent mode's prepare() method to reject
  bot-triggered workflows unless explicitly allowed via allowed_bots
- Update checkHumanActor to accept GitHubContext (union type) instead
  of just ParsedGitHubContext
- Add tests for bot rejection/allowance in agent mode

Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 1
Claude-Permission-Prompts: 3
Claude-Escapes: 0
2026-01-15 10:28:46 -08:00
Ashwin Bhat
625ea1519c docs: clarify that Claude does not auto-create PRs by default (#824)
Add a new section to security.md explaining that in the default
configuration, Claude commits to a branch and provides a link for
the user to create the PR themselves, ensuring human oversight.


Claude-Generated-By: Claude Code (cli/claude-opus-4-5=100%)
Claude-Steers: 2
Claude-Permission-Prompts: 2
Claude-Escapes: 0

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-14 15:22:40 -08:00
GitHub Actions
a9171f0ced chore: bump Claude Code to 2.1.7 and Agent SDK to 0.2.7 2026-01-14 00:03:29 +00:00
GitHub Actions
4778aeae4c chore: bump Claude Code to 2.1.6 and Agent SDK to 0.2.6 2026-01-13 02:25:17 +00:00
GitHub Actions
b6e5a9f27a chore: bump Claude Code to 2.1.4 and Agent SDK to 0.2.4 2026-01-11 00:27:43 +00:00
GitHub Actions
5d91d7d217 chore: bump Claude Code to 2.1.3 and Agent SDK to 0.2.3 2026-01-09 23:31:55 +00:00
GitHub Actions
90006bcae7 chore: bump Claude Code to 2.1.2 and Agent SDK to 0.2.2 2026-01-09 00:03:55 +00:00
Alexander Bartash
005436f51d fix: parse ALL --allowed-tools flags, not just the first one (#801)
The parseAllowedTools() function previously used .match() which only
returns the first match. This caused tools specified in subsequent
--allowed-tools flags to be ignored during MCP server initialization.

Changes:
- Add /g flag to regex patterns for global matching
- Use matchAll() to find all occurrences
- Deduplicate tools while preserving order
- Make unquoted pattern not match quoted values

Fixes #800
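An illustrative sketch of the fix (regex patterns simplified; the real parseAllowedTools handles quoting rules beyond this):

```typescript
// Collect values from every --allowed-tools occurrence using global regexes and
// matchAll(), then deduplicate with a Set while keeping first-seen order.
// The unquoted pattern uses a lookahead so it does not re-match quoted values.
function parseAllowedTools(claudeArgs: string): string[] {
  const quoted = /--allowed-tools\s+"([^"]+)"/g;
  const unquoted = /--allowed-tools\s+(?!")(\S+)/g;
  const values = [
    ...claudeArgs.matchAll(quoted),
    ...claudeArgs.matchAll(unquoted),
  ].flatMap((m) => m[1]!.split(","));
  return [...new Set(values.map((t) => t.trim()).filter(Boolean))];
}
```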

 #vibe

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-09 01:36:12 +05:30
Ashwin Bhat
1b8ee3b941 fix: add missing import and update tests for branch template feature (#799)
* fix: add missing import and update tests for branch template feature

- Add missing `import { $ } from 'bun'` in branch.ts
- Add missing `labels` property to pull-request-target.test.ts fixture
- Update branch-template tests to expect 5-word descriptions

* address review feedback: update comment and add truncation test
2026-01-08 07:07:54 +05:30
Cole D
c247cb152d feat: custom branch name templates (#571)
* Add branch-name-template config option

* Logging

* Use branch name template

* Add label to template variables

* Add description template variable

* More concise description for branch_name_template

* Remove more granular time template variables

* Only fetch first label

* Add check for empty template-generated name

* Clean up comments, docstrings

* Merge createBranchTemplateVariables into generateBranchName

* Still replace undefined values

* Fall back to default on duplicate branch

* Parameterize description wordcount

* Remove some over-explanatory comments

* NUM_DESCRIPTION_WORDS: 3 -> 5
2026-01-08 06:47:26 +05:30
GitHub Actions
cefa60067a chore: bump Claude Code to 2.1.1 and Agent SDK to 0.2.1 2026-01-07 21:30:16 +00:00
GitHub Actions
7a708f68fa chore: bump Claude Code to 2.1.0 and Agent SDK to 0.2.0 2026-01-07 20:03:23 +00:00
David Dworken
5da7ba548c feat: add path validation for commit_files MCP tool (#796)
Add validatePathWithinRepo helper to ensure file paths resolve within the repository root directory. This hardens the commit_files tool by validating paths before file operations.

Changes:
- Add src/mcp/path-validation.ts with async path validation using realpath
- Update commit_files to validate all paths before reading files
- Prevent symlink-based path escapes by resolving real paths
- Add comprehensive test coverage including symlink attack scenarios
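A rough sketch of the symlink-safe check this adds; validatePathWithinRepo is the helper name from the commit, but the body below is an illustrative assumption, not the contents of src/mcp/path-validation.ts:

```typescript
import { realpath } from "fs/promises";
import * as path from "path";

// realpath() resolves symlinks, so a path (or symlink) that escapes the
// repository root is rejected before any file is read.
async function validatePathWithinRepo(
  filePath: string,
  repoRoot: string,
): Promise<void> {
  const resolvedRoot = await realpath(repoRoot);
  const resolved = await realpath(path.resolve(resolvedRoot, filePath));
  if (resolved !== resolvedRoot && !resolved.startsWith(resolvedRoot + path.sep)) {
    throw new Error(`Path escapes repository root: ${filePath}`);
  }
}
```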

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-07 10:16:31 -08:00
Ashwin Bhat
964b8355fb fix: use original title from webhook payload instead of fetched title (#793)
* fix: use original title from webhook payload instead of fetched title

- Add extractOriginalTitle() helper to extract title from webhook payload
- Add originalTitle parameter to fetchGitHubData()
- Update tag mode to pass original title from webhook context
- Add tests for extractOriginalTitle and originalTitle parameter

This ensures the title used in prompts is the one that existed when the
trigger event occurred, rather than a potentially modified title fetched
later via GraphQL.
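A minimal sketch of the idea, with an assumed webhook payload shape:

```typescript
// Read the title that existed at trigger time straight from the webhook
// payload instead of refetching it later. The payload shape here is an
// assumption for illustration.
type WebhookPayload = {
  issue?: { title?: string };
  pull_request?: { title?: string };
};

function extractOriginalTitle(payload: WebhookPayload): string | undefined {
  return payload.issue?.title ?? payload.pull_request?.title;
}
```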

* fix: add title sanitization and explicit TOCTOU test

- Apply sanitizeContent() to titles in formatContext() for defense-in-depth
- Add explicit test documenting TOCTOU prevention for title handling
2026-01-07 23:45:12 +05:30
orbisai0security
c83d67a9b9 fix: resolve high vulnerability CVE-2025-66414 (#792)
Automatically generated security fix

Co-authored-by: orbisai0security <orbisai0security@users.noreply.github.com>
2026-01-07 12:53:45 +05:30
Ashwin Bhat
c9ec2b02b4 fix: set CLAUDE_CODE_ENTRYPOINT for SDK path to match CLI path (#791)
Previously, the SDK path would result in the CLI setting the entrypoint
to 'sdk-ts' internally, while the non-SDK (CLI) path would correctly
set it to 'claude-code-github-action' based on the CLAUDE_CODE_ACTION
env var.

This change explicitly sets CLAUDE_CODE_ENTRYPOINT in both:
1. The action.yml env block (for consistency)
2. The SDK options env (to override the CLI's internal default)

The CLI respects pre-set entrypoint values, so this ensures consistent
user agent reporting for both execution paths.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-06 02:10:44 +05:30
Ashwin Bhat
63ea7e3174 fix: prevent orphaned installer processes from blocking retries (#790)
* fix: prevent orphaned installer processes from blocking retries

When the `timeout` command expires during Claude Code installation, it only
kills the direct child bash process, not the grandchild installer processes.
These orphaned processes continue holding a lock file, causing retry attempts
to fail with "another process is currently installing Claude".

Add `--foreground` flag to run the command in a foreground process group so
all child processes are killed on timeout. Add `--kill-after=10` to send
SIGKILL if SIGTERM doesn't terminate processes within 10 seconds.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

   Co-Authored-By: Claude <noreply@anthropic.com>

* fix: apply same timeout fix to root action.yml

🤖 Generated with [Claude Code](https://claude.com/claude-code)

   Co-Authored-By: Claude <noreply@anthropic.com>
2026-01-05 23:01:39 +05:30
Gor Grigoryan
653f9cd7a3 feat: support local plugin marketplace paths (#761)
* feat: support local plugin marketplace paths

Enable installing plugins from local directories in addition to remote
Git URLs. This allows users to use local plugin marketplaces within their
repository without requiring them to be hosted in a separate Git repo.

Example usage:
  plugin_marketplaces: "./my-local-marketplace"
  plugins: "my-plugin@my-local-marketplace"

Supported path formats:
- Relative paths: ./plugins, ../shared-plugins
- Absolute Unix paths: /home/user/plugins
- Absolute Windows paths: C:\Users\user\plugins

Fixes #664

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* support hidden folders

* Revert "support hidden folders"

This reverts commit a55626c9f1.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-05 16:13:32 +05:30
Ashwin Bhat
b17b541bbc feat: send user request as separate content block for slash command support (#785)
* feat: send user request as separate content block for slash command support

When in tag mode with the SDK path, extracts the user's request from the
trigger comment (text after @claude) and sends it as a separate content
block. This enables the CLI to process slash commands like "/review-pr".

- Add extract-user-request utility to parse trigger comments
- Write user request to separate file during prompt generation
- Send multi-block SDKUserMessage when user request file exists
- Add tests for the extraction utility
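A simplified sketch of the extraction using plain string operations, as the later review feedback requests (the real helper is extractUserRequestFromContext; the name and signature here are illustrative):

```typescript
// Take everything after the trigger phrase in the comment body; string
// operations avoid the ReDoS risk a regex could introduce.
function extractUserRequest(
  comment: string,
  trigger = "@claude",
): string | undefined {
  const idx = comment.indexOf(trigger);
  if (idx === -1) return undefined;
  const request = comment.slice(idx + trigger.length).trim();
  return request.length > 0 ? request : undefined;
}

// extractUserRequest("@claude /review-pr") === "/review-pr"
```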

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: address PR feedback

- Fix potential ReDoS vulnerability by using string operations instead of regex
- Remove unused extractUserRequestFromEvent function and tests
- Extract USER_REQUEST_FILENAME to shared constants
- Conditionally log user request based on showFullOutput setting
- Add JSDoc documentation to extractUserRequestFromContext

---------

Co-authored-by: Claude <noreply@anthropic.com>
2026-01-02 17:57:13 -08:00
Ashwin Bhat
7e4bf87b1c feat: add ssh_signing_key input for SSH commit signing (#784)
* feat: add ssh_signing_key input for SSH commit signing

Add a new ssh_signing_key input that allows passing an SSH signing key
for commit signing, as an alternative to the existing use_commit_signing
(which uses GitHub API-based commits).

When ssh_signing_key is provided:
- Git is configured to use SSH signing (gpg.format=ssh, commit.gpgsign=true)
- The key is written to ~/.ssh/claude_signing_key with 0600 permissions
- Git CLI commands are used (not MCP file ops)
- The key is cleaned up in a post step for security

Behavior matrix:
| ssh_signing_key | use_commit_signing | Result |
|-----------------|-------------------|--------|
| not set         | false             | Regular git, no signing |
| not set         | true              | GitHub API (MCP), verified commits |
| set             | false             | Git CLI with SSH signing |
| set             | true              | Git CLI with SSH signing (ssh_signing_key takes precedence) |
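A minimal sketch of the key-writing step under the behavior above (the key path comes from the commit message; the implementation is assumed, not shown in this diff):

```typescript
import { mkdir, writeFile } from "fs/promises";
import * as os from "os";
import * as path from "path";

// Create ~/.ssh with mode 0700 and write the key with mode 0600 in a single
// writeFile call, so there is no window between creation and chmod.
async function writeSigningKey(key: string): Promise<string> {
  const sshDir = path.join(os.homedir(), ".ssh");
  await mkdir(sshDir, { recursive: true, mode: 0o700 });
  const keyPath = path.join(sshDir, "claude_signing_key");
  await writeFile(keyPath, key, { mode: 0o600 });
  return keyPath;
}
```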

* docs: add SSH signing key documentation

- Update security.md with detailed setup instructions for both signing options
- Explain that ssh_signing_key enables full git CLI operations (rebasing, etc.)
- Add ssh_signing_key to inputs table in usage.md
- Update bot_id/bot_name descriptions to note they're needed for verified commits

* fix: address security review feedback for SSH signing

- Write SSH key atomically with mode 0o600 (fixes TOCTOU race condition)
- Create .ssh directory with mode 0o700 (SSH best practices)
- Add input validation for SSH key format
- Remove unused chmod import
- Add tests for validation logic
2026-01-02 10:37:25 -08:00
Aidan Dunlap
154d0de144 feat: add instant "Fix this" links to PR code reviews (#773)
* feat: add "Fix this" links to PR code reviews

When Claude reviews PRs and identifies fixable issues, it now includes
inline links that open Claude Code with the fix request pre-loaded.

Format: [Fix this →](https://claude.ai/code?q=<URI_ENCODED_INSTRUCTIONS>&repo=<REPO>)

This enables one-click fix requests directly from code review comments.
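A small sketch of how such a link can be assembled (illustrative; the query parameters follow the format above):

```typescript
// Build a markdown "Fix this" link that opens Claude Code with the fix
// instructions and repository pre-filled.
function buildFixLink(instructions: string, repo: string): string {
  const q = encodeURIComponent(instructions);
  const r = encodeURIComponent(repo);
  return `[Fix this →](https://claude.ai/code?q=${q}&repo=${r})`;
}
```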

* feat: add include_fix_links input to control Fix this links

Adds a configurable input to enable/disable the "Fix this →" links
in PR code reviews. Defaults to true for backwards compatibility.
2025-12-27 15:29:06 -08:00
GitHub Actions
3ba9f7c8c2 chore: bump Claude Code to 2.0.76 and Agent SDK to 0.1.76 2025-12-23 19:33:03 +00:00
きわみざむらい
e5b07416ea chore: remove unused ci yaml file (#763)
* fix: Replace direct template expansion in bump-claude-code-version workflow

* chore: remove bump-claude-code-version workflow file
2025-12-22 18:59:34 -08:00
Ashwin Bhat
b89827f8d1 fix: update broken link in cloud-providers.md (#758)
Update the AWS Bedrock documentation link to point to the new
code.claude.com domain.

Fixes #756

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Ashwin Bhat <ashwin-ant@users.noreply.github.com>
2025-12-19 15:47:47 -08:00
GitHub Actions
7145c3e051 chore: bump Claude Code to 2.0.74 and Agent SDK to 0.1.74 2025-12-19 22:12:44 +00:00
GitHub Actions
db4548b597 chore: bump Claude Code to 2.0.73 and Agent SDK to 0.1.73 2025-12-19 00:16:27 +00:00
GitHub Actions
0d19335299 chore: bump Claude Code to 2.0.72 and Agent SDK to 0.1.72 2025-12-17 21:59:16 +00:00
Ashwin Bhat
95be46676d fix: set GH_TOKEN alongside GITHUB_TOKEN for gh CLI precedence (#752)
The gh CLI prefers GH_TOKEN over GITHUB_TOKEN. When a calling workflow
sets GH_TOKEN in env, the action's GITHUB_TOKEN was being ignored,
causing the gh CLI to use the wrong token (e.g., the default workflow
token instead of an App token).

This ensures Claude's gh CLI commands use the action's prepared token.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-17 09:54:03 -08:00
Ashwin Bhat
f98c1a5aa8 fix: respect user's --setting-sources in claude_args (#750)
When users specify --setting-sources in claude_args (e.g., '--setting-sources user'),
the action now respects that value instead of overriding it with all three sources.

This fixes an issue where users who wanted to avoid in-repo configs would still
have them loaded, because settingSources was hardcoded to ['user', 'project', 'local'].

Fixes #749
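Sketched behavior, assuming a simple flag check rather than the action's actual argument parsing:

```typescript
// Only apply the default setting sources when the user has not passed
// --setting-sources themselves in claude_args.
function resolveSettingSources(claudeArgs: string): string[] | undefined {
  if (/--setting-sources\b/.test(claudeArgs)) {
    return undefined; // let the user's own flag pass through untouched
  }
  return ["user", "project", "local"]; // previous hardcoded default
}
```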

Co-authored-by: Ashwin Bhat <ashwin-ant@users.noreply.github.com>

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
2025-12-16 15:00:34 -08:00
GitHub Actions
b0c32b65f9 chore: bump Claude Code to 2.0.71 and Agent SDK to 0.1.71 2025-12-16 22:09:42 +00:00
Ashwin Bhat
d7b6d50442 fix: merge multiple --mcp-config flags and support --allowed-tools parsing (#748)
* fix: merge multiple --mcp-config flags instead of overwriting

When users provide their own --mcp-config in claude_args, the action's
built-in MCP servers (github_comment, github_ci, etc.) were being lost
because multiple --mcp-config flags were overwriting each other.

This fix:
- Adds mcp-config to ACCUMULATING_FLAGS to collect all values
- Changes delimiter to null character to avoid conflicts with JSON
- Adds mergeMcpConfigs() to combine mcpServers objects from multiple configs
- Merges inline JSON configs while preserving file path configs

Fixes #745

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Ashwin Bhat <ashwin-ant@users.noreply.github.com>

* fix: support hyphenated --allowed-tools flag and multiple values

The --allowed-tools flag was not being parsed correctly when:
1. Using the hyphenated form (--allowed-tools) instead of camelCase (--allowedTools)
2. Passing multiple space-separated values after a single flag
   (e.g., --allowed-tools "Tool1" "Tool2" "Tool3")

This fix:
- Adds hyphenated variants (allowed-tools, disallowed-tools) to ACCUMULATING_FLAGS
- Updates parsing to consume all consecutive non-flag values for accumulating flags
- Merges values from both camelCase and hyphenated variants

Fixes #746

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Ashwin Bhat <ashwin-ant@users.noreply.github.com>
2025-12-16 13:08:25 -08:00
Ashwin Bhat
f375cabfab chore: update model to claude-opus-4-5 in workflow (#747)
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-16 12:47:41 -08:00
GitHub Actions
9acae263e7 chore: bump Claude Code to 2.0.70 and Agent SDK to 0.1.70 2025-12-15 23:53:03 +00:00
Gor Grigoryan
67bf0594ce feat: add session_id output to enable resuming conversations (#739)
Add a new `session_id` output that exposes the Claude Code session ID,
allowing other workflows or Claude Code instances to resume the
conversation using `--resume <session_id>`.

Changes:
- Add parseAndSetSessionId() function to extract session_id from
  the system.init message in execution output
- Add session_id output to both action.yml and base-action/action.yml
- Add comprehensive tests for the new functionality
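An illustrative sketch of the extraction, assuming the execution log is a JSON array of SDK messages containing a system "init" entry (field names follow the commit description, not verified source):

```typescript
type LogMessage = { type?: string; subtype?: string; session_id?: unknown };

// Find the system/init message in the execution log and read its session_id.
function extractSessionId(executionLog: string): string | undefined {
  const messages = JSON.parse(executionLog) as LogMessage[];
  const init = messages.find((m) => m.type === "system" && m.subtype === "init");
  const id = init?.session_id;
  return typeof id === "string" ? id : undefined;
}
```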

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-14 19:42:54 -08:00
GitHub Actions
b58533dbe0 chore: bump Claude Code version to 2.0.69 2025-12-13 01:00:43 +00:00
GitHub Actions
bda9bf08de chore: bump Claude Code version to 2.0.68 2025-12-12 23:32:49 +00:00
Ashwin Bhat
79b343c094 feat: Make Agent SDK the default execution path (#738)
Change USE_AGENT_SDK to default to true instead of false. The Agent SDK
path is now used by default; set USE_AGENT_SDK=false to use the CLI path.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 13:55:16 -08:00
bogini
609c388361 Fix command injection vulnerability in branch setup (#736)
* fix: Prevent command injection in branch operations

Replace Bun shell template literals with Node.js execFileSync to prevent
command injection attacks via malicious branch names. Branch names from
PR data (headRefName) are now validated against a strict whitelist pattern
before use in git commands.

Changes:
- Add validateBranchName() function with strict character whitelist
- Replace $`git ...` shell templates with execGit() using execFileSync
- Validate all branch names before use in git operations

* fix: Address review comments for branch validation security

- Enhanced execGit JSDoc to explain security benefits of execFileSync
- Added comprehensive branch name validation:
  - Leading dash check (prevents option injection)
  - Control characters and special git characters (~^:?*[\])
  - Leading/trailing period checks
  - Trailing slash and consecutive slash checks
- Added -- separator to git checkout commands
- Added 30 unit tests for validateBranchName covering:
  - Valid branch names
  - Command injection attempts
  - Option injection attempts
  - Path traversal attempts
  - Git-specific invalid patterns
  - Control characters and edge cases

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
2025-12-12 11:30:28 -08:00
63 changed files with 2740 additions and 1028 deletions

View File

@@ -1,132 +0,0 @@
name: Bump Claude Code Version
on:
repository_dispatch:
types: [bump_claude_code_version]
workflow_dispatch:
inputs:
version:
description: "Claude Code version to bump to"
required: true
type: string
permissions:
contents: write
jobs:
bump-version:
name: Bump Claude Code Version
runs-on: ubuntu-latest
environment: release
timeout-minutes: 5
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4
with:
token: ${{ secrets.RELEASE_PAT }}
fetch-depth: 0
- name: Get version from event payload
id: get_version
run: |
# Get version from either repository_dispatch or workflow_dispatch
if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
NEW_VERSION="${CLIENT_PAYLOAD_VERSION}"
else
NEW_VERSION="${INPUT_VERSION}"
fi
# Sanitize the version to avoid issues enabled by problematic characters
NEW_VERSION=$(echo "$NEW_VERSION" | tr -d '`;$(){}[]|&<>' | tr -s ' ' '-')
if [ -z "$NEW_VERSION" ]; then
echo "Error: version not provided"
exit 1
fi
echo "NEW_VERSION=$NEW_VERSION" >> $GITHUB_ENV
echo "new_version=$NEW_VERSION" >> $GITHUB_OUTPUT
env:
INPUT_VERSION: ${{ inputs.version }}
CLIENT_PAYLOAD_VERSION: ${{ github.event.client_payload.version }}
- name: Create branch and update base-action/action.yml
run: |
# Variables
TIMESTAMP=$(date +'%Y%m%d-%H%M%S')
BRANCH_NAME="bump-claude-code-${{ env.NEW_VERSION }}-$TIMESTAMP"
echo "BRANCH_NAME=$BRANCH_NAME" >> $GITHUB_ENV
# Get the default branch
DEFAULT_BRANCH=$(gh api repos/${GITHUB_REPOSITORY} --jq '.default_branch')
echo "DEFAULT_BRANCH=$DEFAULT_BRANCH" >> $GITHUB_ENV
# Get the latest commit SHA from the default branch
BASE_SHA=$(gh api repos/${GITHUB_REPOSITORY}/git/refs/heads/$DEFAULT_BRANCH --jq '.object.sha')
# Create a new branch
gh api \
--method POST \
repos/${GITHUB_REPOSITORY}/git/refs \
-f ref="refs/heads/$BRANCH_NAME" \
-f sha="$BASE_SHA"
# Get the current base-action/action.yml content
ACTION_CONTENT=$(gh api repos/${GITHUB_REPOSITORY}/contents/base-action/action.yml?ref=$DEFAULT_BRANCH --jq '.content' | base64 -d)
# Update the Claude Code version in the npm install command
UPDATED_CONTENT=$(echo "$ACTION_CONTENT" | sed -E "s/(npm install -g @anthropic-ai\/claude-code@)[0-9]+\.[0-9]+\.[0-9]+/\1${{ env.NEW_VERSION }}/")
# Verify the change would be made
if ! echo "$UPDATED_CONTENT" | grep -q "@anthropic-ai/claude-code@${{ env.NEW_VERSION }}"; then
echo "Error: Failed to update Claude Code version in content"
exit 1
fi
# Get the current SHA of base-action/action.yml for the update API call
FILE_SHA=$(gh api repos/${GITHUB_REPOSITORY}/contents/base-action/action.yml?ref=$DEFAULT_BRANCH --jq '.sha')
# Create the updated base-action/action.yml content in base64
echo "$UPDATED_CONTENT" | base64 > action.yml.b64
# Commit the updated base-action/action.yml via GitHub API
gh api \
--method PUT \
repos/${GITHUB_REPOSITORY}/contents/base-action/action.yml \
-f message="chore: bump Claude Code version to ${{ env.NEW_VERSION }}" \
-F content=@action.yml.b64 \
-f sha="$FILE_SHA" \
-f branch="$BRANCH_NAME"
echo "Successfully created branch and updated Claude Code version to ${{ env.NEW_VERSION }}"
env:
GH_TOKEN: ${{ secrets.RELEASE_PAT }}
GITHUB_REPOSITORY: ${{ github.repository }}
- name: Create Pull Request
run: |
# Determine trigger type for PR body
if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
TRIGGER_INFO="repository dispatch event"
else
TRIGGER_INFO="manual workflow dispatch by @${GITHUB_ACTOR}"
fi
# Create PR body with proper YAML escape
printf -v PR_BODY "## Bump Claude Code to ${{ env.NEW_VERSION }}\n\nThis PR updates the Claude Code version in base-action/action.yml to ${{ env.NEW_VERSION }}.\n\n### Changes\n- Updated Claude Code version from current to \`${{ env.NEW_VERSION }}\`\n\n### Triggered by\n- $TRIGGER_INFO\n\n🤖 This PR was automatically created by the bump-claude-code-version workflow."
echo "Creating PR with gh pr create command"
PR_URL=$(gh pr create \
--repo "${GITHUB_REPOSITORY}" \
--title "chore: bump Claude Code version to ${{ env.NEW_VERSION }}" \
--body "$PR_BODY" \
--base "${DEFAULT_BRANCH}" \
--head "${BRANCH_NAME}")
echo "PR created successfully: $PR_URL"
env:
GH_TOKEN: ${{ secrets.RELEASE_PAT }}
GITHUB_REPOSITORY: ${{ github.repository }}
GITHUB_ACTOR: ${{ github.actor }}
DEFAULT_BRANCH: ${{ env.DEFAULT_BRANCH }}
BRANCH_NAME: ${{ env.BRANCH_NAME }}

.github/workflows/ci-all.yml (new file, 37 lines)
View File

@@ -0,0 +1,37 @@
# Orchestrates all CI workflows - runs on PRs, pushes to main, and manual dispatch
# Individual test workflows are called as reusable workflows
name: CI All
on:
  push:
    branches:
      - main
  pull_request:
  workflow_dispatch:
permissions:
  contents: read
jobs:
  ci:
    uses: ./.github/workflows/ci.yml
  test-base-action:
    uses: ./.github/workflows/test-base-action.yml
    secrets: inherit # Required for ANTHROPIC_API_KEY
  test-custom-executables:
    uses: ./.github/workflows/test-custom-executables.yml
    secrets: inherit
  test-mcp-servers:
    uses: ./.github/workflows/test-mcp-servers.yml
    secrets: inherit
  test-settings:
    uses: ./.github/workflows/test-settings.yml
    secrets: inherit
  test-structured-output:
    uses: ./.github/workflows/test-structured-output.yml
    secrets: inherit

View File

@@ -1,9 +1,8 @@
name: CI
on:
push:
branches: [main]
pull_request:
workflow_call:
jobs:
test:

View File

@@ -36,4 +36,4 @@ jobs:
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
claude_args: |
--allowedTools "Bash(bun install),Bash(bun test:*),Bash(bun run format),Bash(bun typecheck)"
--model "claude-opus-4-1-20250805"
--model "claude-opus-4-5"

View File

@@ -8,10 +8,23 @@ on:
required: false
type: boolean
default: false
workflow_run:
workflows: ["CI All"]
types:
- completed
branches:
- main
jobs:
create-release:
runs-on: ubuntu-latest
# Run if: manual dispatch OR (CI All succeeded AND commit is a version bump)
if: |
github.event_name == 'workflow_dispatch' ||
(github.event.workflow_run.conclusion == 'success' &&
github.event.workflow_run.head_branch == 'main' &&
github.event.workflow_run.event == 'push' &&
startsWith(github.event.workflow_run.head_commit.message, 'chore: bump Claude Code to'))
environment: production
permissions:
contents: write
@@ -84,7 +97,8 @@ jobs:
update-major-tag:
needs: create-release
if: ${{ !inputs.dry_run }}
# Skip for dry runs (workflow_run events are never dry runs)
if: github.event_name == 'workflow_run' || !inputs.dry_run
runs-on: ubuntu-latest
environment: production
permissions:
@@ -109,48 +123,48 @@ jobs:
echo "Updated $major_version tag to point to $next_version"
release-base-action:
needs: create-release
if: ${{ !inputs.dry_run }}
runs-on: ubuntu-latest
environment: production
steps:
- name: Checkout base-action repo
uses: actions/checkout@v5
with:
repository: anthropics/claude-code-base-action
token: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
fetch-depth: 0
# - name: Create and push tag
# run: |
# next_version="${{ needs.create-release.outputs.next_version }}"
# git config user.name "github-actions[bot]"
# git config user.email "github-actions[bot]@users.noreply.github.com"
# # Create the version tag
# git tag -a "$next_version" -m "Release $next_version - synced from claude-code-action"
# git push origin "$next_version"
# # Update the beta tag
# git tag -fa beta -m "Update beta tag to ${next_version}"
# git push origin beta --force
# - name: Create GitHub release
# env:
# GH_TOKEN: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
# run: |
# next_version="${{ needs.create-release.outputs.next_version }}"
# # Create the release
# gh release create "$next_version" \
# --repo anthropics/claude-code-base-action \
# --title "$next_version" \
# --notes "Release $next_version - synced from anthropics/claude-code-action" \
# --latest=false
# # Update beta release to be latest
# gh release edit beta \
# --repo anthropics/claude-code-base-action \
# --latest
# release-base-action:
# needs: create-release
# if: ${{ !inputs.dry_run }}
# runs-on: ubuntu-latest
# environment: production
# steps:
# - name: Checkout base-action repo
# uses: actions/checkout@v5
# with:
# repository: anthropics/claude-code-base-action
# token: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
# fetch-depth: 0
#
# - name: Create and push tag
# run: |
# next_version="${{ needs.create-release.outputs.next_version }}"
#
# git config user.name "github-actions[bot]"
# git config user.email "github-actions[bot]@users.noreply.github.com"
#
# # Create the version tag
# git tag -a "$next_version" -m "Release $next_version - synced from claude-code-action"
# git push origin "$next_version"
#
# # Update the beta tag
# git tag -fa beta -m "Update beta tag to ${next_version}"
# git push origin beta --force
#
# - name: Create GitHub release
# env:
# GH_TOKEN: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
# run: |
# next_version="${{ needs.create-release.outputs.next_version }}"
#
# # Create the release
# gh release create "$next_version" \
# --repo anthropics/claude-code-base-action \
# --title "$next_version" \
# --notes "Release $next_version - synced from anthropics/claude-code-action" \
# --latest=false
#
# # Update beta release to be latest
# gh release edit beta \
# --repo anthropics/claude-code-base-action \
# --latest

View File

@@ -1,9 +1,6 @@
name: Test Claude Code Action
on:
push:
branches:
- main
pull_request:
workflow_dispatch:
inputs:
@@ -11,6 +8,7 @@ on:
description: "Test prompt for Claude"
required: false
default: "List the files in the current directory starting with 'package'"
workflow_call:
jobs:
test-inline-prompt:
@@ -118,61 +116,3 @@ jobs:
echo "❌ Execution log file not found"
exit 1
fi
test-agent-sdk:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
- name: Test with Agent SDK
id: sdk-test
uses: ./base-action
env:
USE_AGENT_SDK: "true"
with:
prompt: ${{ github.event.inputs.test_prompt || 'List the files in the current directory starting with "package"' }}
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
allowed_tools: "LS,Read"
- name: Verify SDK output
run: |
OUTPUT_FILE="${{ steps.sdk-test.outputs.execution_file }}"
CONCLUSION="${{ steps.sdk-test.outputs.conclusion }}"
echo "Conclusion: $CONCLUSION"
echo "Output file: $OUTPUT_FILE"
if [ "$CONCLUSION" = "success" ]; then
echo "✅ Action completed successfully with Agent SDK"
else
echo "❌ Action failed with Agent SDK"
exit 1
fi
if [ -f "$OUTPUT_FILE" ]; then
if [ -s "$OUTPUT_FILE" ]; then
echo "✅ Execution log file created successfully with content"
echo "Validating JSON format:"
if jq . "$OUTPUT_FILE" > /dev/null 2>&1; then
echo "✅ Output is valid JSON"
# Verify SDK output contains total_cost_usd (SDK field name)
if jq -e '.[] | select(.type == "result") | .total_cost_usd' "$OUTPUT_FILE" > /dev/null 2>&1; then
echo "✅ SDK output contains total_cost_usd field"
else
echo "❌ SDK output missing total_cost_usd field"
exit 1
fi
echo "Content preview:"
head -c 500 "$OUTPUT_FILE"
else
echo "❌ Output is not valid JSON"
exit 1
fi
else
echo "❌ Execution log file is empty"
exit 1
fi
else
echo "❌ Execution log file not found"
exit 1
fi

View File

@@ -1,11 +1,9 @@
name: Test Custom Executables
on:
push:
branches:
- main
pull_request:
workflow_dispatch:
workflow_call:
jobs:
test-custom-executables:

View File

@@ -1,11 +1,9 @@
name: Test MCP Servers
on:
push:
branches: [main]
pull_request:
branches: [main]
workflow_dispatch:
workflow_call:
jobs:
test-mcp-integration:

View File

@@ -1,11 +1,9 @@
name: Test Settings Feature
on:
push:
branches:
- main
pull_request:
workflow_dispatch:
workflow_call:
jobs:
test-settings-inline-allow:

View File

@@ -1,11 +1,9 @@
name: Test Structured Outputs
on:
push:
branches:
- main
pull_request:
workflow_dispatch:
workflow_call:
permissions:
contents: read

View File

@@ -23,6 +23,10 @@ inputs:
description: "The prefix to use for Claude branches (defaults to 'claude/', use 'claude-' for dash format)"
required: false
default: "claude/"
branch_name_template:
description: "Template for branch naming. Available variables: {{prefix}}, {{entityType}}, {{entityNumber}}, {{timestamp}}, {{sha}}, {{label}}, {{description}}. {{label}} will be first label from the issue/PR, or {{entityType}} as a fallback. {{description}} will be the first 5 words of the issue/PR title in kebab-case. Default: '{{prefix}}{{entityType}}-{{entityNumber}}-{{timestamp}}'"
required: false
default: ""
allowed_bots:
description: "Comma-separated list of allowed bot usernames, or '*' to allow all bots. Empty string (default) allows no bots."
required: false
@@ -81,6 +85,10 @@ inputs:
description: "Enable commit signing using GitHub's commit signature verification. When false, Claude uses standard git commands"
required: false
default: "false"
ssh_signing_key:
description: "SSH private key for signing commits. When provided, git will be configured to use SSH signing. Takes precedence over use_commit_signing."
required: false
default: ""
bot_id:
description: "GitHub user ID to use for git operations (defaults to Claude's bot ID)"
required: false
@@ -93,6 +101,10 @@ inputs:
description: "Force tag mode with tracking comments for pull_request and issue events. Only applicable to pull_request (opened, synchronize, ready_for_review, reopened) and issue (opened, edited, labeled, assigned) events."
required: false
default: "false"
include_fix_links:
description: "Include 'Fix this' links in PR code review feedback that open Claude Code with context to fix the identified issue"
required: false
default: "true"
path_to_claude_code_executable:
description: "Optional path to a custom Claude Code executable. If provided, skips automatic installation and uses this executable instead. WARNING: Using an older version may cause problems if the action begins taking advantage of new Claude Code features. This input is typically not needed unless you're debugging something specific or have unique needs in your environment."
required: false
@@ -127,15 +139,18 @@ outputs:
structured_output:
description: "JSON string containing all structured output fields when --json-schema is provided in claude_args. Use fromJSON() to parse: fromJSON(steps.id.outputs.structured_output).field_name"
value: ${{ steps.claude-code.outputs.structured_output }}
session_id:
description: "The Claude Code session ID that can be used with --resume to continue this conversation"
value: ${{ steps.claude-code.outputs.session_id }}
runs:
using: "composite"
steps:
- name: Install Bun
if: inputs.path_to_bun_executable == ''
uses: oven-sh/setup-bun@735343b667d3e6f658f44d0eca948eb6282f2b76 # https://github.com/oven-sh/setup-bun/releases/tag/v2.0.2
uses: oven-sh/setup-bun@3d267786b128fe76c2f16a390aa2448b815359f3 # https://github.com/oven-sh/setup-bun/releases/tag/v2.1.2
with:
bun-version: 1.2.11
bun-version: 1.3.6
- name: Setup Custom Bun Path
if: inputs.path_to_bun_executable != ''
@@ -167,6 +182,7 @@ runs:
LABEL_TRIGGER: ${{ inputs.label_trigger }}
BASE_BRANCH: ${{ inputs.base_branch }}
BRANCH_PREFIX: ${{ inputs.branch_prefix }}
BRANCH_NAME_TEMPLATE: ${{ inputs.branch_name_template }}
OVERRIDE_GITHUB_TOKEN: ${{ inputs.github_token }}
ALLOWED_BOTS: ${{ inputs.allowed_bots }}
ALLOWED_NON_WRITE_USERS: ${{ inputs.allowed_non_write_users }}
@@ -174,9 +190,11 @@ runs:
USE_STICKY_COMMENT: ${{ inputs.use_sticky_comment }}
DEFAULT_WORKFLOW_TOKEN: ${{ github.token }}
USE_COMMIT_SIGNING: ${{ inputs.use_commit_signing }}
SSH_SIGNING_KEY: ${{ inputs.ssh_signing_key }}
BOT_ID: ${{ inputs.bot_id }}
BOT_NAME: ${{ inputs.bot_name }}
TRACK_PROGRESS: ${{ inputs.track_progress }}
INCLUDE_FIX_LINKS: ${{ inputs.include_fix_links }}
ADDITIONAL_PERMISSIONS: ${{ inputs.additional_permissions }}
CLAUDE_ARGS: ${{ inputs.claude_args }}
ALL_INPUTS: ${{ toJson(inputs) }}
@@ -195,12 +213,13 @@ runs:
# Install Claude Code if no custom executable is provided
if [ -z "$PATH_TO_CLAUDE_CODE_EXECUTABLE" ]; then
CLAUDE_CODE_VERSION="2.0.62"
CLAUDE_CODE_VERSION="2.1.17"
echo "Installing Claude Code v${CLAUDE_CODE_VERSION}..."
for attempt in 1 2 3; do
echo "Installation attempt $attempt..."
if command -v timeout &> /dev/null; then
timeout 120 bash -c "curl -fsSL https://claude.ai/install.sh | bash -s -- $CLAUDE_CODE_VERSION" && break
# Use --foreground to kill entire process group on timeout, --kill-after to send SIGKILL if SIGTERM fails
timeout --foreground --kill-after=10 120 bash -c "curl -fsSL https://claude.ai/install.sh | bash -s -- $CLAUDE_CODE_VERSION" && break
else
curl -fsSL https://claude.ai/install.sh | bash -s -- "$CLAUDE_CODE_VERSION" && break
fi
@@ -244,6 +263,7 @@ runs:
# Model configuration
GITHUB_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
GH_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
NODE_VERSION: ${{ env.NODE_VERSION }}
DETAILED_PERMISSION_MESSAGES: "1"
@@ -293,6 +313,7 @@ runs:
CLAUDE_COMMENT_ID: ${{ steps.prepare.outputs.claude_comment_id }}
GITHUB_RUN_ID: ${{ github.run_id }}
GITHUB_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
GH_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
GITHUB_EVENT_NAME: ${{ github.event_name }}
TRIGGER_COMMENT_ID: ${{ github.event.comment.id }}
CLAUDE_BRANCH: ${{ steps.prepare.outputs.CLAUDE_BRANCH }}
@@ -324,6 +345,12 @@ runs:
echo '```' >> $GITHUB_STEP_SUMMARY
fi
- name: Cleanup SSH signing key
if: always() && inputs.ssh_signing_key != ''
shell: bash
run: |
bun run ${GITHUB_ACTION_PATH}/src/entrypoints/cleanup-ssh-signing.ts
- name: Revoke app token
if: always() && inputs.github_token == '' && steps.prepare.outputs.skipped_due_to_workflow_validation_mismatch != 'true'
shell: bash

View File

@@ -27,7 +27,6 @@ This is a GitHub Action that allows running Claude Code within GitHub workflows.
### Key Design Patterns
- Uses Bun runtime for development and execution
- Named pipes for IPC between prompt input and Claude process
- JSON streaming output format for execution logs
- Composite action pattern to orchestrate multiple steps
- Provider-agnostic design supporting Anthropic API, AWS Bedrock, and Google Vertex AI
@@ -54,7 +53,6 @@ This is a GitHub Action that allows running Claude Code within GitHub workflows.
## Important Technical Details
- Uses `mkfifo` to create named pipes for prompt input
- Outputs execution logs as JSON to `/tmp/claude-execution-output.json`
- Timeout enforcement via `timeout` command wrapper
- Strict TypeScript configuration with Bun-specific settings

View File

@@ -82,6 +82,9 @@ outputs:
structured_output:
description: "JSON string containing all structured output fields when --json-schema is provided in claude_args (use fromJSON() or jq to parse)"
value: ${{ steps.run_claude.outputs.structured_output }}
session_id:
description: "The Claude Code session ID that can be used with --resume to continue this conversation"
value: ${{ steps.run_claude.outputs.session_id }}
runs:
using: "composite"
@@ -94,9 +97,9 @@ runs:
- name: Install Bun
if: inputs.path_to_bun_executable == ''
uses: oven-sh/setup-bun@735343b667d3e6f658f44d0eca948eb6282f2b76 # https://github.com/oven-sh/setup-bun/releases/tag/v2.0.2
uses: oven-sh/setup-bun@3d267786b128fe76c2f16a390aa2448b815359f3 # https://github.com/oven-sh/setup-bun/releases/tag/v2.1.2
with:
bun-version: 1.2.11
bun-version: 1.3.6
- name: Setup Custom Bun Path
if: inputs.path_to_bun_executable != ''
@@ -121,12 +124,13 @@ runs:
PATH_TO_CLAUDE_CODE_EXECUTABLE: ${{ inputs.path_to_claude_code_executable }}
run: |
if [ -z "$PATH_TO_CLAUDE_CODE_EXECUTABLE" ]; then
CLAUDE_CODE_VERSION="2.0.62"
CLAUDE_CODE_VERSION="2.1.17"
echo "Installing Claude Code v${CLAUDE_CODE_VERSION}..."
for attempt in 1 2 3; do
echo "Installation attempt $attempt..."
if command -v timeout &> /dev/null; then
timeout 120 bash -c "curl -fsSL https://claude.ai/install.sh | bash -s -- $CLAUDE_CODE_VERSION" && break
# Use --foreground to kill entire process group on timeout, --kill-after to send SIGKILL if SIGTERM fails
timeout --foreground --kill-after=10 120 bash -c "curl -fsSL https://claude.ai/install.sh | bash -s -- $CLAUDE_CODE_VERSION" && break
else
curl -fsSL https://claude.ai/install.sh | bash -s -- "$CLAUDE_CODE_VERSION" && break
fi

View File

@@ -1,11 +1,12 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "@anthropic-ai/claude-code-base-action",
"dependencies": {
"@actions/core": "^1.10.1",
"@anthropic-ai/claude-agent-sdk": "^0.1.52",
"@anthropic-ai/claude-agent-sdk": "^0.2.17",
"shell-quote": "^1.8.3",
},
"devDependencies": {
@@ -26,7 +27,7 @@
"@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
"@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.1.52", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^3.24.1" } }, "sha512-yF8N05+9NRbqYA/h39jQ726HTQFrdXXp7pEfDNKIJ2c4FdWvEjxBA/8ciZIebN6/PyvGDcbEp3yq2Co4rNpg6A=="],
"@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.17", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-cWEZ3fhPG6beVlZkXPAGYwqoR5zbELPracg+eKQ9UUqlf9m5UmUaaasGSXdVVxgDkjZfl8yoPsHnicuL2GIB1A=="],
"@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],

View File

@@ -11,7 +11,7 @@
},
"dependencies": {
"@actions/core": "^1.10.1",
"@anthropic-ai/claude-agent-sdk": "^0.1.52",
"@anthropic-ai/claude-agent-sdk": "^0.2.17",
"shell-quote": "^1.8.3"
},
"devDependencies": {

View File

@@ -36,7 +36,6 @@ async function run() {
mcpConfig: process.env.INPUT_MCP_CONFIG,
systemPrompt: process.env.INPUT_SYSTEM_PROMPT,
appendSystemPrompt: process.env.INPUT_APPEND_SYSTEM_PROMPT,
claudeEnv: process.env.INPUT_CLAUDE_ENV,
fallbackModel: process.env.INPUT_FALLBACK_MODEL,
model: process.env.ANTHROPIC_MODEL,
pathToClaudeCodeExecutable:

View File

@@ -8,26 +8,47 @@ const MARKETPLACE_URL_REGEX =
/^https:\/\/[a-zA-Z0-9\-._~:/?#[\]@!$&'()*+,;=%]+\.git$/;
/**
* Validates a marketplace URL for security issues
* @param url - The marketplace URL to validate
* @throws {Error} If the URL is invalid
* Checks if a marketplace input is a local path (not a URL)
* @param input - The marketplace input to check
* @returns true if the input is a local path, false if it's a URL
*/
function validateMarketplaceUrl(url: string): void {
const normalized = url.trim();
function isLocalPath(input: string): boolean {
// Local paths start with ./, ../, /, or a drive letter (Windows)
return (
input.startsWith("./") ||
input.startsWith("../") ||
input.startsWith("/") ||
/^[a-zA-Z]:[\\\/]/.test(input)
);
}
/**
* Validates a marketplace URL or local path
* @param input - The marketplace URL or local path to validate
* @throws {Error} If the input is invalid
*/
function validateMarketplaceInput(input: string): void {
const normalized = input.trim();
if (!normalized) {
throw new Error("Marketplace URL cannot be empty");
throw new Error("Marketplace URL or path cannot be empty");
}
// Local paths are passed directly to Claude Code which handles them
if (isLocalPath(normalized)) {
return;
}
// Validate as URL
if (!MARKETPLACE_URL_REGEX.test(normalized)) {
throw new Error(`Invalid marketplace URL format: ${url}`);
throw new Error(`Invalid marketplace URL format: ${input}`);
}
// Additional check for valid URL structure
try {
new URL(normalized);
} catch {
throw new Error(`Invalid marketplace URL: ${url}`);
throw new Error(`Invalid marketplace URL: ${input}`);
}
}
@@ -55,9 +76,9 @@ function validatePluginName(pluginName: string): void {
}
/**
* Parse a newline-separated list of marketplace URLs and return an array of validated URLs
* @param marketplaces - Newline-separated list of marketplace Git URLs
* @returns Array of validated marketplace URLs (empty array if none provided)
* Parse a newline-separated list of marketplace URLs or local paths and return an array of validated entries
* @param marketplaces - Newline-separated list of marketplace Git URLs or local paths
* @returns Array of validated marketplace URLs or paths (empty array if none provided)
*/
function parseMarketplaces(marketplaces?: string): string[] {
const trimmed = marketplaces?.trim();
@@ -66,14 +87,14 @@ function parseMarketplaces(marketplaces?: string): string[] {
return [];
}
// Split by newline and process each URL
// Split by newline and process each entry
return trimmed
.split("\n")
.map((url) => url.trim())
.filter((url) => {
if (url.length === 0) return false;
.map((entry) => entry.trim())
.filter((entry) => {
if (entry.length === 0) return false;
validateMarketplaceUrl(url);
validateMarketplaceInput(entry);
return true;
});
}
@@ -163,26 +184,26 @@ async function installPlugin(
/**
* Adds a Claude Code plugin marketplace
* @param claudeExecutable - Path to the Claude executable
* @param marketplaceUrl - The marketplace Git URL to add
* @param marketplace - The marketplace Git URL or local path to add
* @returns Promise that resolves when the marketplace add command completes
* @throws {Error} If the command fails to execute
*/
async function addMarketplace(
claudeExecutable: string,
marketplaceUrl: string,
marketplace: string,
): Promise<void> {
console.log(`Adding marketplace: ${marketplaceUrl}`);
console.log(`Adding marketplace: ${marketplace}`);
return executeClaudeCommand(
claudeExecutable,
["plugin", "marketplace", "add", marketplaceUrl],
`Failed to add marketplace '${marketplaceUrl}'`,
["plugin", "marketplace", "add", marketplace],
`Failed to add marketplace '${marketplace}'`,
);
}
/**
* Installs Claude Code plugins from a newline-separated list
* @param marketplacesInput - Newline-separated list of marketplace Git URLs
* @param marketplacesInput - Newline-separated list of marketplace Git URLs or local paths
* @param pluginsInput - Newline-separated list of plugin names
* @param claudeExecutable - Path to the Claude executable (defaults to "claude")
* @returns Promise that resolves when all plugins are installed

View File

@@ -12,12 +12,79 @@ export type ParsedSdkOptions = {
};
// Flags that should accumulate multiple values instead of overwriting
const ACCUMULATING_FLAGS = new Set(["allowedTools", "disallowedTools"]);
// Include both camelCase and hyphenated variants for CLI compatibility
const ACCUMULATING_FLAGS = new Set([
"allowedTools",
"allowed-tools",
"disallowedTools",
"disallowed-tools",
"mcp-config",
]);
// Delimiter used to join accumulated flag values
const ACCUMULATE_DELIMITER = "\x00";
type McpConfig = {
mcpServers?: Record<string, unknown>;
};
/**
* Merge multiple MCP config values into a single config.
* Each config can be a JSON string or a file path.
* For JSON strings, mcpServers objects are merged.
* For file paths, they are kept as-is (user's file takes precedence and is used last).
*/
function mergeMcpConfigs(configValues: string[]): string {
const merged: McpConfig = { mcpServers: {} };
let lastFilePath: string | null = null;
for (const config of configValues) {
const trimmed = config.trim();
if (!trimmed) continue;
// Check if it's a JSON string (starts with {) or a file path
if (trimmed.startsWith("{")) {
try {
const parsed = JSON.parse(trimmed) as McpConfig;
if (parsed.mcpServers) {
Object.assign(merged.mcpServers!, parsed.mcpServers);
}
} catch {
// If JSON parsing fails, treat as file path
lastFilePath = trimmed;
}
} else {
// It's a file path - store it to handle separately
lastFilePath = trimmed;
}
}
// If we have file paths, we need to keep the merged JSON and let the file
// be handled separately. Since we can only return one value, merge what we can.
// If there's a file path, we need a different approach - read the file at runtime.
// For now, if there's a file path, we'll stringify the merged config.
// The action prepends its config as JSON, so we can safely merge inline JSON configs.
// If no inline configs were found (all file paths), return the last file path
if (Object.keys(merged.mcpServers!).length === 0 && lastFilePath) {
return lastFilePath;
}
// Note: If user passes a file path, we cannot merge it at parse time since
// we don't have access to the file system here. The action's built-in MCP
// servers are always passed as inline JSON, so they will be merged.
// If user also passes inline JSON, it will be merged.
// If user passes a file path, they should ensure it includes all needed servers.
return JSON.stringify(merged);
}
/**
* Parse claudeArgs string into extraArgs record for SDK pass-through
* The SDK/CLI will handle --mcp-config, --json-schema, etc.
* For allowedTools and disallowedTools, multiple occurrences are accumulated (comma-joined).
* For allowedTools and disallowedTools, multiple occurrences are accumulated (null-char joined).
* Accumulating flags also consume all consecutive non-flag values
* (e.g., --allowed-tools "Tool1" "Tool2" "Tool3" captures all three).
*/
function parseClaudeArgsToExtraArgs(
claudeArgs?: string,
@@ -37,13 +104,25 @@ function parseClaudeArgsToExtraArgs(
// Check if next arg is a value (not another flag)
if (nextArg && !nextArg.startsWith("--")) {
// For accumulating flags, join multiple values with commas
if (ACCUMULATING_FLAGS.has(flag) && result[flag]) {
result[flag] = `${result[flag]},${nextArg}`;
// For accumulating flags, consume all consecutive non-flag values
// This handles: --allowed-tools "Tool1" "Tool2" "Tool3"
if (ACCUMULATING_FLAGS.has(flag)) {
const values: string[] = [];
while (i + 1 < args.length && !args[i + 1]?.startsWith("--")) {
i++;
values.push(args[i]!);
}
const joinedValues = values.join(ACCUMULATE_DELIMITER);
if (result[flag]) {
result[flag] =
`${result[flag]}${ACCUMULATE_DELIMITER}${joinedValues}`;
} else {
result[flag] = joinedValues;
}
} else {
result[flag] = nextArg;
i++; // Skip the value
}
i++; // Skip the value
} else {
result[flag] = null; // Boolean flag
}
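To make the accumulation concrete, here is a minimal sketch (the tool patterns mirror the issue #746 scenario covered in the tests; it assumes `ACCUMULATE_DELIMITER` from this module is in scope):

```typescript
// For claude_args like: --allowed-tools "Bash(git log:*)" "Bash(git diff:*)"
// the parser stores a single NUL-joined entry, so comma-containing patterns
// survive intact until they are split back apart in parseSdkOptions.
const accumulated = ["Bash(git log:*)", "Bash(git diff:*)"].join(ACCUMULATE_DELIMITER);
// accumulated === "Bash(git log:*)\u0000Bash(git diff:*)"
```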
@@ -68,12 +147,23 @@ export function parseSdkOptions(options: ClaudeOptions): ParsedSdkOptions {
// Detect if --json-schema is present (for hasJsonSchema flag)
const hasJsonSchema = "json-schema" in extraArgs;
// Extract and merge allowedTools from both sources:
// Extract and merge allowedTools from all sources:
// 1. From extraArgs (parsed from claudeArgs - contains tag mode's tools)
// - Check both camelCase (--allowedTools) and hyphenated (--allowed-tools) variants
// 2. From options.allowedTools (direct input - may be undefined)
// This prevents duplicate flags from being overwritten when claudeArgs contains --allowedTools
const extraArgsAllowedTools = extraArgs["allowedTools"]
? extraArgs["allowedTools"].split(",").map((t) => t.trim())
const allowedToolsValues = [
extraArgs["allowedTools"],
extraArgs["allowed-tools"],
]
.filter(Boolean)
.join(ACCUMULATE_DELIMITER);
const extraArgsAllowedTools = allowedToolsValues
? allowedToolsValues
.split(ACCUMULATE_DELIMITER)
.flatMap((v) => v.split(","))
.map((t) => t.trim())
.filter(Boolean)
: [];
const directAllowedTools = options.allowedTools
? options.allowedTools.split(",").map((t) => t.trim())
@@ -82,10 +172,21 @@ export function parseSdkOptions(options: ClaudeOptions): ParsedSdkOptions {
...new Set([...extraArgsAllowedTools, ...directAllowedTools]),
];
delete extraArgs["allowedTools"];
delete extraArgs["allowed-tools"];
// Same for disallowedTools
const extraArgsDisallowedTools = extraArgs["disallowedTools"]
? extraArgs["disallowedTools"].split(",").map((t) => t.trim())
// Same for disallowedTools - check both camelCase and hyphenated variants
const disallowedToolsValues = [
extraArgs["disallowedTools"],
extraArgs["disallowed-tools"],
]
.filter(Boolean)
.join(ACCUMULATE_DELIMITER);
const extraArgsDisallowedTools = disallowedToolsValues
? disallowedToolsValues
.split(ACCUMULATE_DELIMITER)
.flatMap((v) => v.split(","))
.map((t) => t.trim())
.filter(Boolean)
: [];
const directDisallowedTools = options.disallowedTools
? options.disallowedTools.split(",").map((t) => t.trim())
@@ -94,12 +195,25 @@ export function parseSdkOptions(options: ClaudeOptions): ParsedSdkOptions {
...new Set([...extraArgsDisallowedTools, ...directDisallowedTools]),
];
delete extraArgs["disallowedTools"];
delete extraArgs["disallowed-tools"];
// Merge multiple --mcp-config values by combining their mcpServers objects
// The action prepends its config (github_comment, github_ci, etc.) as inline JSON,
// and users may provide their own config as inline JSON or file path
if (extraArgs["mcp-config"]) {
const mcpConfigValues = extraArgs["mcp-config"].split(ACCUMULATE_DELIMITER);
if (mcpConfigValues.length > 1) {
extraArgs["mcp-config"] = mergeMcpConfigs(mcpConfigValues);
}
}
// Build custom environment
const env: Record<string, string | undefined> = { ...process.env };
if (process.env.INPUT_ACTION_INPUTS_PRESENT) {
env.GITHUB_ACTION_INPUTS = process.env.INPUT_ACTION_INPUTS_PRESENT;
}
// Set the entrypoint for Claude Code to identify this as the GitHub Action
env.CLAUDE_CODE_ENTRYPOINT = "claude-code-github-action";
// Build system prompt option - default to claude_code preset
let systemPrompt: SdkOptions["systemPrompt"];
@@ -137,10 +251,18 @@ export function parseSdkOptions(options: ClaudeOptions): ParsedSdkOptions {
extraArgs,
env,
// Load settings from all sources to pick up CLI-installed plugins, CLAUDE.md, etc.
settingSources: ["user", "project", "local"],
// Load settings from sources - prefer user's --setting-sources if provided, otherwise use all sources
// This ensures users can override the default behavior (e.g., --setting-sources user to avoid in-repo configs)
settingSources: extraArgs["setting-sources"]
? (extraArgs["setting-sources"].split(
",",
) as SdkOptions["settingSources"])
: ["user", "project", "local"],
};
// Remove setting-sources from extraArgs to avoid passing it twice
delete extraArgs["setting-sources"];
return {
sdkOptions,
showFullOutput,

View File

@@ -1,14 +1,81 @@
import * as core from "@actions/core";
import { readFile, writeFile } from "fs/promises";
import { readFile, writeFile, access } from "fs/promises";
import { dirname, join } from "path";
import { query } from "@anthropic-ai/claude-agent-sdk";
import type {
SDKMessage,
SDKResultMessage,
SDKUserMessage,
} from "@anthropic-ai/claude-agent-sdk";
import type { ParsedSdkOptions } from "./parse-sdk-options";
const EXECUTION_FILE = `${process.env.RUNNER_TEMP}/claude-execution-output.json`;
/** Filename for the user request file, written by prompt generation */
const USER_REQUEST_FILENAME = "claude-user-request.txt";
/**
* Check if a file exists
*/
async function fileExists(path: string): Promise<boolean> {
try {
await access(path);
return true;
} catch {
return false;
}
}
/**
* Creates a prompt configuration for the SDK.
* If a user request file exists alongside the prompt file, returns a multi-block
* SDKUserMessage that enables slash command processing in the CLI.
* Otherwise, returns the prompt as a simple string.
*/
async function createPromptConfig(
promptPath: string,
showFullOutput: boolean,
): Promise<string | AsyncIterable<SDKUserMessage>> {
const promptContent = await readFile(promptPath, "utf-8");
// Check for user request file in the same directory
const userRequestPath = join(dirname(promptPath), USER_REQUEST_FILENAME);
const hasUserRequest = await fileExists(userRequestPath);
if (!hasUserRequest) {
// No user request file - use simple string prompt
return promptContent;
}
// User request file exists - create multi-block message
const userRequest = await readFile(userRequestPath, "utf-8");
if (showFullOutput) {
console.log("Using multi-block message with user request:", userRequest);
} else {
console.log("Using multi-block message with user request (content hidden)");
}
// Create an async generator that yields a single multi-block message
// The context/instructions go first, then the user's actual request last
// This allows the CLI to detect and process slash commands in the user request
async function* createMultiBlockMessage(): AsyncGenerator<SDKUserMessage> {
yield {
type: "user",
session_id: "",
message: {
role: "user",
content: [
{ type: "text", text: promptContent }, // Instructions + GitHub context
{ type: "text", text: userRequest }, // User's request (may be a slash command)
],
},
parent_tool_use_id: null,
};
}
return createMultiBlockMessage();
}
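A small usage sketch (the prompt path is hypothetical; assumes `createPromptConfig` from this module is in scope):

```typescript
// The helper returns either a plain string or an async iterable of SDK user
// messages, so callers can branch on the shape before handing it to the SDK.
const prompt = await createPromptConfig("/tmp/claude-prompts/claude-prompt.txt", false);
if (typeof prompt === "string") {
  console.log(`Plain string prompt (${prompt.length} chars)`);
} else {
  for await (const msg of prompt) {
    console.log("Multi-block user message from role:", msg.message.role);
  }
}
```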
/**
* Sanitizes SDK output to match CLI sanitization behavior
*/
@@ -63,7 +130,8 @@ export async function runClaudeWithSdk(
promptPath: string,
{ sdkOptions, showFullOutput, hasJsonSchema }: ParsedSdkOptions,
): Promise<void> {
const prompt = await readFile(promptPath, "utf-8");
// Create prompt configuration - may be a string or multi-block message
const prompt = await createPromptConfig(promptPath, showFullOutput);
if (!showFullOutput) {
console.log(
@@ -110,6 +178,15 @@ export async function runClaudeWithSdk(
core.warning(`Failed to write execution file: ${error}`);
}
// Extract and set session_id from system.init message
const initMessage = messages.find(
(m) => m.type === "system" && "subtype" in m && m.subtype === "init",
);
if (initMessage && "session_id" in initMessage && initMessage.session_id) {
core.setOutput("session_id", initMessage.session_id);
core.info(`Set session_id: ${initMessage.session_id}`);
}
if (!resultMessage) {
core.setOutput("conclusion", "failure");
core.error("No result message received from Claude");

View File

@@ -1,72 +1,6 @@
import * as core from "@actions/core";
import { exec } from "child_process";
import { promisify } from "util";
import { unlink, writeFile, stat, readFile } from "fs/promises";
import { createWriteStream } from "fs";
import { spawn } from "child_process";
import { parse as parseShellArgs } from "shell-quote";
import { runClaudeWithSdk } from "./run-claude-sdk";
import { parseSdkOptions } from "./parse-sdk-options";
const execAsync = promisify(exec);
const PIPE_PATH = `${process.env.RUNNER_TEMP}/claude_prompt_pipe`;
const EXECUTION_FILE = `${process.env.RUNNER_TEMP}/claude-execution-output.json`;
const BASE_ARGS = ["--verbose", "--output-format", "stream-json"];
/**
* Sanitizes JSON output to remove sensitive information when full output is disabled
* Returns a safe summary message or null if the message should be completely suppressed
*/
function sanitizeJsonOutput(
jsonObj: any,
showFullOutput: boolean,
): string | null {
if (showFullOutput) {
// In full output mode, return the full JSON
return JSON.stringify(jsonObj, null, 2);
}
// In non-full-output mode, provide minimal safe output
const type = jsonObj.type;
const subtype = jsonObj.subtype;
// System initialization - safe to show
if (type === "system" && subtype === "init") {
return JSON.stringify(
{
type: "system",
subtype: "init",
message: "Claude Code initialized",
model: jsonObj.model || "unknown",
},
null,
2,
);
}
// Result messages - Always show the final result
if (type === "result") {
// These messages contain the final result and should always be visible
return JSON.stringify(
{
type: "result",
subtype: jsonObj.subtype,
is_error: jsonObj.is_error,
duration_ms: jsonObj.duration_ms,
num_turns: jsonObj.num_turns,
total_cost_usd: jsonObj.total_cost_usd,
permission_denials: jsonObj.permission_denials,
},
null,
2,
);
}
// For any other message types, suppress completely in non-full-output mode
return null;
}
export type ClaudeOptions = {
claudeArgs?: string;
model?: string;
@@ -77,330 +11,11 @@ export type ClaudeOptions = {
mcpConfig?: string;
systemPrompt?: string;
appendSystemPrompt?: string;
claudeEnv?: string;
fallbackModel?: string;
showFullOutput?: string;
};
type PreparedConfig = {
claudeArgs: string[];
promptPath: string;
env: Record<string, string>;
};
export function prepareRunConfig(
promptPath: string,
options: ClaudeOptions,
): PreparedConfig {
// Build Claude CLI arguments:
// 1. Prompt flag (always first)
// 2. User's claudeArgs (full control)
// 3. BASE_ARGS (always last, cannot be overridden)
const claudeArgs = ["-p"];
// Parse and add user's custom Claude arguments
if (options.claudeArgs?.trim()) {
const parsed = parseShellArgs(options.claudeArgs);
const customArgs = parsed.filter(
(arg): arg is string => typeof arg === "string",
);
claudeArgs.push(...customArgs);
}
// BASE_ARGS are always appended last (cannot be overridden)
claudeArgs.push(...BASE_ARGS);
const customEnv: Record<string, string> = {};
if (process.env.INPUT_ACTION_INPUTS_PRESENT) {
customEnv.GITHUB_ACTION_INPUTS = process.env.INPUT_ACTION_INPUTS_PRESENT;
}
return {
claudeArgs,
promptPath,
env: customEnv,
};
}
/**
* Parses structured_output from execution file and sets GitHub Action outputs
* Only runs if --json-schema was explicitly provided in claude_args
* Exported for testing
*/
export async function parseAndSetStructuredOutputs(
executionFile: string,
): Promise<void> {
try {
const content = await readFile(executionFile, "utf-8");
const messages = JSON.parse(content) as {
type: string;
structured_output?: Record<string, unknown>;
}[];
// Search backwards - result is typically last or second-to-last message
const result = messages.findLast(
(m) => m.type === "result" && m.structured_output,
);
if (!result?.structured_output) {
throw new Error(
`--json-schema was provided but Claude did not return structured_output.\n` +
`Found ${messages.length} messages. Result exists: ${!!result}\n`,
);
}
// Set the complete structured output as a single JSON string
// This works around GitHub Actions limitation that composite actions can't have dynamic outputs
const structuredOutputJson = JSON.stringify(result.structured_output);
core.setOutput("structured_output", structuredOutputJson);
core.info(
`Set structured_output with ${Object.keys(result.structured_output).length} field(s)`,
);
} catch (error) {
if (error instanceof Error) {
throw error; // Preserve original error and stack trace
}
throw new Error(`Failed to parse structured outputs: ${error}`);
}
}
export async function runClaude(promptPath: string, options: ClaudeOptions) {
// Feature flag: use SDK path when USE_AGENT_SDK=true
const useAgentSdk = process.env.USE_AGENT_SDK === "true";
console.log(
`Using ${useAgentSdk ? "Agent SDK" : "CLI"} path (USE_AGENT_SDK=${process.env.USE_AGENT_SDK ?? "unset"})`,
);
if (useAgentSdk) {
const parsedOptions = parseSdkOptions(options);
return runClaudeWithSdk(promptPath, parsedOptions);
}
const config = prepareRunConfig(promptPath, options);
// Detect if --json-schema is present in claude args
const hasJsonSchema = options.claudeArgs?.includes("--json-schema") ?? false;
// Create a named pipe
try {
await unlink(PIPE_PATH);
} catch (e) {
// Ignore if file doesn't exist
}
// Create the named pipe
await execAsync(`mkfifo "${PIPE_PATH}"`);
// Log prompt file size
let promptSize = "unknown";
try {
const stats = await stat(config.promptPath);
promptSize = stats.size.toString();
} catch (e) {
// Ignore error
}
console.log(`Prompt file size: ${promptSize} bytes`);
// Log custom environment variables if any
const customEnvKeys = Object.keys(config.env).filter(
(key) => key !== "CLAUDE_ACTION_INPUTS_PRESENT",
);
if (customEnvKeys.length > 0) {
console.log(`Custom environment variables: ${customEnvKeys.join(", ")}`);
}
// Log custom arguments if any
if (options.claudeArgs && options.claudeArgs.trim() !== "") {
console.log(`Custom Claude arguments: ${options.claudeArgs}`);
}
// Output to console
console.log(`Running Claude with prompt from file: ${config.promptPath}`);
console.log(`Full command: claude ${config.claudeArgs.join(" ")}`);
// Start sending prompt to pipe in background
const catProcess = spawn("cat", [config.promptPath], {
stdio: ["ignore", "pipe", "inherit"],
});
const pipeStream = createWriteStream(PIPE_PATH);
catProcess.stdout.pipe(pipeStream);
catProcess.on("error", (error) => {
console.error("Error reading prompt file:", error);
pipeStream.destroy();
});
// Use custom executable path if provided, otherwise default to "claude"
const claudeExecutable = options.pathToClaudeCodeExecutable || "claude";
const claudeProcess = spawn(claudeExecutable, config.claudeArgs, {
stdio: ["pipe", "pipe", "inherit"],
env: {
...process.env,
...config.env,
},
});
// Handle Claude process errors
claudeProcess.on("error", (error) => {
console.error("Error spawning Claude process:", error);
pipeStream.destroy();
});
// Determine if full output should be shown
// Show full output if explicitly set to "true" OR if GitHub Actions debug mode is enabled
const isDebugMode = process.env.ACTIONS_STEP_DEBUG === "true";
let showFullOutput = options.showFullOutput === "true" || isDebugMode;
if (isDebugMode && options.showFullOutput !== "false") {
console.log("Debug mode detected - showing full output");
showFullOutput = true;
} else if (!showFullOutput) {
console.log("Running Claude Code (full output hidden for security)...");
console.log(
"Rerun in debug mode or enable `show_full_output: true` in your workflow file for full output.",
);
}
// Capture output for parsing execution metrics
let output = "";
claudeProcess.stdout.on("data", (data) => {
const text = data.toString();
// Try to parse as JSON and handle based on verbose setting
const lines = text.split("\n");
lines.forEach((line: string, index: number) => {
if (line.trim() === "") return;
try {
// Check if this line is a JSON object
const parsed = JSON.parse(line);
const sanitizedOutput = sanitizeJsonOutput(parsed, showFullOutput);
if (sanitizedOutput) {
process.stdout.write(sanitizedOutput);
if (index < lines.length - 1 || text.endsWith("\n")) {
process.stdout.write("\n");
}
}
} catch (e) {
// Not a JSON object
if (showFullOutput) {
// In full output mode, print as is
process.stdout.write(line);
if (index < lines.length - 1 || text.endsWith("\n")) {
process.stdout.write("\n");
}
}
// In non-full-output mode, suppress non-JSON output
}
});
output += text;
});
// Handle stdout errors
claudeProcess.stdout.on("error", (error) => {
console.error("Error reading Claude stdout:", error);
});
// Pipe from named pipe to Claude
const pipeProcess = spawn("cat", [PIPE_PATH]);
pipeProcess.stdout.pipe(claudeProcess.stdin);
// Handle pipe process errors
pipeProcess.on("error", (error) => {
console.error("Error reading from named pipe:", error);
claudeProcess.kill("SIGTERM");
});
// Wait for Claude to finish
const exitCode = await new Promise<number>((resolve) => {
claudeProcess.on("close", (code) => {
resolve(code || 0);
});
claudeProcess.on("error", (error) => {
console.error("Claude process error:", error);
resolve(1);
});
});
// Clean up processes
try {
catProcess.kill("SIGTERM");
} catch (e) {
// Process may already be dead
}
try {
pipeProcess.kill("SIGTERM");
} catch (e) {
// Process may already be dead
}
// Clean up pipe file
try {
await unlink(PIPE_PATH);
} catch (e) {
// Ignore errors during cleanup
}
// Set conclusion based on exit code
if (exitCode === 0) {
// Try to process the output and save execution metrics
try {
await writeFile("output.txt", output);
// Process output.txt into JSON and save to execution file
// Increase maxBuffer from Node.js default of 1MB to 10MB to handle large Claude outputs
const { stdout: jsonOutput } = await execAsync("jq -s '.' output.txt", {
maxBuffer: 10 * 1024 * 1024,
});
await writeFile(EXECUTION_FILE, jsonOutput);
console.log(`Log saved to ${EXECUTION_FILE}`);
} catch (e) {
core.warning(`Failed to process output for execution metrics: ${e}`);
}
core.setOutput("execution_file", EXECUTION_FILE);
// Parse and set structured outputs only if user provided --json-schema in claude_args
if (hasJsonSchema) {
try {
await parseAndSetStructuredOutputs(EXECUTION_FILE);
} catch (error) {
const errorMessage =
error instanceof Error ? error.message : String(error);
core.setFailed(errorMessage);
core.setOutput("conclusion", "failure");
process.exit(1);
}
}
// Set conclusion to success if we reached here
core.setOutput("conclusion", "success");
} else {
core.setOutput("conclusion", "failure");
// Still try to save execution file if we have output
if (output) {
try {
await writeFile("output.txt", output);
// Increase maxBuffer from Node.js default of 1MB to 10MB to handle large Claude outputs
const { stdout: jsonOutput } = await execAsync("jq -s '.' output.txt", {
maxBuffer: 10 * 1024 * 1024,
});
await writeFile(EXECUTION_FILE, jsonOutput);
core.setOutput("execution_file", EXECUTION_FILE);
} catch (e) {
// Ignore errors when processing output during failure
}
}
process.exit(exitCode);
}
const parsedOptions = parseSdkOptions(options);
return runClaudeWithSdk(promptPath, parsedOptions);
}
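For context, a hedged sketch of invoking the slimmed-down entry point (the prompt path and arguments are illustrative; assumes `runClaude` is imported from this module):

```typescript
// With the CLI path gone, runClaude just parses the options and delegates to the SDK runner.
await runClaude("/tmp/claude-prompts/claude-prompt.txt", {
  claudeArgs: '--max-turns 10 --allowed-tools "Edit" "Read"',
  showFullOutput: "false",
});
```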

View File

@@ -596,4 +596,111 @@ describe("installPlugins", () => {
{ stdio: "inherit" },
);
});
// Local marketplace path tests
test("should accept local marketplace path with ./", async () => {
const spy = createMockSpawn();
await installPlugins("./my-local-marketplace", "test-plugin");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "./my-local-marketplace"],
{ stdio: "inherit" },
);
expect(spy).toHaveBeenNthCalledWith(
2,
"claude",
["plugin", "install", "test-plugin"],
{ stdio: "inherit" },
);
});
test("should accept local marketplace path with absolute Unix path", async () => {
const spy = createMockSpawn();
await installPlugins("/home/user/my-marketplace", "test-plugin");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "/home/user/my-marketplace"],
{ stdio: "inherit" },
);
});
test("should accept local marketplace path with Windows absolute path", async () => {
const spy = createMockSpawn();
await installPlugins("C:\\Users\\user\\marketplace", "test-plugin");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "C:\\Users\\user\\marketplace"],
{ stdio: "inherit" },
);
});
test("should accept mixed local and remote marketplaces", async () => {
const spy = createMockSpawn();
await installPlugins(
"./local-marketplace\nhttps://github.com/user/remote.git",
"test-plugin",
);
expect(spy).toHaveBeenCalledTimes(3);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "./local-marketplace"],
{ stdio: "inherit" },
);
expect(spy).toHaveBeenNthCalledWith(
2,
"claude",
["plugin", "marketplace", "add", "https://github.com/user/remote.git"],
{ stdio: "inherit" },
);
});
test("should accept local path with ../ (parent directory)", async () => {
const spy = createMockSpawn();
await installPlugins("../shared-plugins/marketplace", "test-plugin");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "../shared-plugins/marketplace"],
{ stdio: "inherit" },
);
});
test("should accept local path with nested directories", async () => {
const spy = createMockSpawn();
await installPlugins("./plugins/my-org/my-marketplace", "test-plugin");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "./plugins/my-org/my-marketplace"],
{ stdio: "inherit" },
);
});
test("should accept local path with dots in directory name", async () => {
const spy = createMockSpawn();
await installPlugins("./my.plugin.marketplace", "test-plugin");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenNthCalledWith(
1,
"claude",
["plugin", "marketplace", "add", "./my.plugin.marketplace"],
{ stdio: "inherit" },
);
});
});

View File

@@ -2,6 +2,6 @@
"name": "mcp-test",
"version": "1.0.0",
"dependencies": {
"@modelcontextprotocol/sdk": "^1.11.0"
"@modelcontextprotocol/sdk": "^1.24.0"
}
}

View File

@@ -108,6 +108,48 @@ describe("parseSdkOptions", () => {
expect(result.sdkOptions.extraArgs?.["allowedTools"]).toBeUndefined();
expect(result.sdkOptions.extraArgs?.["model"]).toBe("claude-3-5-sonnet");
});
test("should handle hyphenated --allowed-tools flag", () => {
const options: ClaudeOptions = {
claudeArgs: '--allowed-tools "Edit,Read,Write"',
};
const result = parseSdkOptions(options);
expect(result.sdkOptions.allowedTools).toEqual(["Edit", "Read", "Write"]);
expect(result.sdkOptions.extraArgs?.["allowed-tools"]).toBeUndefined();
});
test("should accumulate multiple --allowed-tools flags (hyphenated)", () => {
// This is the exact scenario from issue #746
const options: ClaudeOptions = {
claudeArgs:
'--allowed-tools "Bash(git log:*)" "Bash(git diff:*)" "Bash(git fetch:*)" "Bash(gh pr:*)"',
};
const result = parseSdkOptions(options);
expect(result.sdkOptions.allowedTools).toEqual([
"Bash(git log:*)",
"Bash(git diff:*)",
"Bash(git fetch:*)",
"Bash(gh pr:*)",
]);
});
test("should handle mixed camelCase and hyphenated allowedTools flags", () => {
const options: ClaudeOptions = {
claudeArgs: '--allowedTools "Edit,Read" --allowed-tools "Write,Glob"',
};
const result = parseSdkOptions(options);
// Both should be merged - note: order depends on which key is found first
expect(result.sdkOptions.allowedTools).toContain("Edit");
expect(result.sdkOptions.allowedTools).toContain("Read");
expect(result.sdkOptions.allowedTools).toContain("Write");
expect(result.sdkOptions.allowedTools).toContain("Glob");
});
});
describe("disallowedTools merging", () => {
@@ -134,19 +176,129 @@ describe("parseSdkOptions", () => {
});
});
describe("other extraArgs passthrough", () => {
test("should pass through mcp-config in extraArgs", () => {
describe("mcp-config merging", () => {
test("should pass through single mcp-config in extraArgs", () => {
const options: ClaudeOptions = {
claudeArgs: `--mcp-config '{"mcpServers":{}}' --allowedTools "Edit"`,
claudeArgs: `--mcp-config '{"mcpServers":{"server1":{"command":"cmd1"}}}'`,
};
const result = parseSdkOptions(options);
expect(result.sdkOptions.extraArgs?.["mcp-config"]).toBe(
'{"mcpServers":{}}',
'{"mcpServers":{"server1":{"command":"cmd1"}}}',
);
});
test("should merge multiple mcp-config flags with inline JSON", () => {
// Simulates action prepending its config, then user providing their own
const options: ClaudeOptions = {
claudeArgs: `--mcp-config '{"mcpServers":{"github_comment":{"command":"node","args":["server.js"]}}}' --mcp-config '{"mcpServers":{"user_server":{"command":"custom","args":["run"]}}}'`,
};
const result = parseSdkOptions(options);
const mcpConfig = JSON.parse(
result.sdkOptions.extraArgs?.["mcp-config"] as string,
);
expect(mcpConfig.mcpServers).toHaveProperty("github_comment");
expect(mcpConfig.mcpServers).toHaveProperty("user_server");
expect(mcpConfig.mcpServers.github_comment.command).toBe("node");
expect(mcpConfig.mcpServers.user_server.command).toBe("custom");
});
test("should merge three mcp-config flags", () => {
const options: ClaudeOptions = {
claudeArgs: `--mcp-config '{"mcpServers":{"server1":{"command":"cmd1"}}}' --mcp-config '{"mcpServers":{"server2":{"command":"cmd2"}}}' --mcp-config '{"mcpServers":{"server3":{"command":"cmd3"}}}'`,
};
const result = parseSdkOptions(options);
const mcpConfig = JSON.parse(
result.sdkOptions.extraArgs?.["mcp-config"] as string,
);
expect(mcpConfig.mcpServers).toHaveProperty("server1");
expect(mcpConfig.mcpServers).toHaveProperty("server2");
expect(mcpConfig.mcpServers).toHaveProperty("server3");
});
test("should handle mcp-config file path when no inline JSON exists", () => {
const options: ClaudeOptions = {
claudeArgs: `--mcp-config /tmp/user-mcp-config.json`,
};
const result = parseSdkOptions(options);
expect(result.sdkOptions.extraArgs?.["mcp-config"]).toBe(
"/tmp/user-mcp-config.json",
);
});
test("should merge inline JSON configs when file path is also present", () => {
// When action provides inline JSON and user provides a file path,
// the inline JSON configs should be merged (file paths cannot be merged at parse time)
const options: ClaudeOptions = {
claudeArgs: `--mcp-config '{"mcpServers":{"github_comment":{"command":"node"}}}' --mcp-config '{"mcpServers":{"github_ci":{"command":"node"}}}' --mcp-config /tmp/user-config.json`,
};
const result = parseSdkOptions(options);
// The inline JSON configs should be merged
const mcpConfig = JSON.parse(
result.sdkOptions.extraArgs?.["mcp-config"] as string,
);
expect(mcpConfig.mcpServers).toHaveProperty("github_comment");
expect(mcpConfig.mcpServers).toHaveProperty("github_ci");
});
test("should handle mcp-config with other flags", () => {
const options: ClaudeOptions = {
claudeArgs: `--mcp-config '{"mcpServers":{"server1":{}}}' --model claude-3-5-sonnet --mcp-config '{"mcpServers":{"server2":{}}}'`,
};
const result = parseSdkOptions(options);
const mcpConfig = JSON.parse(
result.sdkOptions.extraArgs?.["mcp-config"] as string,
);
expect(mcpConfig.mcpServers).toHaveProperty("server1");
expect(mcpConfig.mcpServers).toHaveProperty("server2");
expect(result.sdkOptions.extraArgs?.["model"]).toBe("claude-3-5-sonnet");
});
test("should handle real-world scenario: action config + user config", () => {
// This is the exact scenario from the bug report
const actionConfig = JSON.stringify({
mcpServers: {
github_comment: {
command: "node",
args: ["github-comment-server.js"],
},
github_ci: { command: "node", args: ["github-ci-server.js"] },
},
});
const userConfig = JSON.stringify({
mcpServers: {
my_custom_server: { command: "python", args: ["server.py"] },
},
});
const options: ClaudeOptions = {
claudeArgs: `--mcp-config '${actionConfig}' --mcp-config '${userConfig}'`,
};
const result = parseSdkOptions(options);
const mcpConfig = JSON.parse(
result.sdkOptions.extraArgs?.["mcp-config"] as string,
);
// All servers should be present
expect(mcpConfig.mcpServers).toHaveProperty("github_comment");
expect(mcpConfig.mcpServers).toHaveProperty("github_ci");
expect(mcpConfig.mcpServers).toHaveProperty("my_custom_server");
});
});
describe("other extraArgs passthrough", () => {
test("should pass through json-schema in extraArgs", () => {
const options: ClaudeOptions = {
claudeArgs: `--json-schema '{"type":"object"}'`,

View File

@@ -1,96 +0,0 @@
#!/usr/bin/env bun
import { describe, test, expect } from "bun:test";
import { prepareRunConfig, type ClaudeOptions } from "../src/run-claude";
describe("prepareRunConfig", () => {
test("should prepare config with basic arguments", () => {
const options: ClaudeOptions = {};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should include promptPath", () => {
const options: ClaudeOptions = {};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.promptPath).toBe("/tmp/test-prompt.txt");
});
test("should use provided prompt path", () => {
const options: ClaudeOptions = {};
const prepared = prepareRunConfig("/custom/prompt/path.txt", options);
expect(prepared.promptPath).toBe("/custom/prompt/path.txt");
});
describe("claudeArgs handling", () => {
test("should parse and include custom claude arguments", () => {
const options: ClaudeOptions = {
claudeArgs: "--max-turns 10 --model claude-3-opus-20240229",
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--max-turns",
"10",
"--model",
"claude-3-opus-20240229",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should handle empty claudeArgs", () => {
const options: ClaudeOptions = {
claudeArgs: "",
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should handle claudeArgs with quoted strings", () => {
const options: ClaudeOptions = {
claudeArgs: '--system-prompt "You are a helpful assistant"',
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toEqual([
"-p",
"--system-prompt",
"You are a helpful assistant",
"--verbose",
"--output-format",
"stream-json",
]);
});
test("should include json-schema flag when provided", () => {
const options: ClaudeOptions = {
claudeArgs:
'--json-schema \'{"type":"object","properties":{"result":{"type":"boolean"}}}\'',
};
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
expect(prepared.claudeArgs).toContain("--json-schema");
expect(prepared.claudeArgs).toContain(
'{"type":"object","properties":{"result":{"type":"boolean"}}}',
);
});
});
});

View File

@@ -1,158 +0,0 @@
#!/usr/bin/env bun
import { describe, test, expect, afterEach, beforeEach, spyOn } from "bun:test";
import { writeFile, unlink } from "fs/promises";
import { tmpdir } from "os";
import { join } from "path";
import { parseAndSetStructuredOutputs } from "../src/run-claude";
import * as core from "@actions/core";
// Mock execution file path
const TEST_EXECUTION_FILE = join(tmpdir(), "test-execution-output.json");
// Helper to create mock execution file with structured output
async function createMockExecutionFile(
structuredOutput?: Record<string, unknown>,
includeResult: boolean = true,
): Promise<void> {
const messages: any[] = [
{ type: "system", subtype: "init" },
{ type: "turn", content: "test" },
];
if (includeResult) {
messages.push({
type: "result",
cost_usd: 0.01,
duration_ms: 1000,
structured_output: structuredOutput,
});
}
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
}
// Spy on core functions
let setOutputSpy: any;
let infoSpy: any;
beforeEach(() => {
setOutputSpy = spyOn(core, "setOutput").mockImplementation(() => {});
infoSpy = spyOn(core, "info").mockImplementation(() => {});
});
describe("parseAndSetStructuredOutputs", () => {
afterEach(async () => {
setOutputSpy?.mockRestore();
infoSpy?.mockRestore();
try {
await unlink(TEST_EXECUTION_FILE);
} catch {
// Ignore if file doesn't exist
}
});
test("should set structured_output with valid data", async () => {
await createMockExecutionFile({
is_flaky: true,
confidence: 0.85,
summary: "Test looks flaky",
});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
expect(setOutputSpy).toHaveBeenCalledWith(
"structured_output",
'{"is_flaky":true,"confidence":0.85,"summary":"Test looks flaky"}',
);
expect(infoSpy).toHaveBeenCalledWith(
"Set structured_output with 3 field(s)",
);
});
test("should handle arrays and nested objects", async () => {
await createMockExecutionFile({
items: ["a", "b", "c"],
config: { key: "value", nested: { deep: true } },
});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
const callArgs = setOutputSpy.mock.calls[0];
expect(callArgs[0]).toBe("structured_output");
const parsed = JSON.parse(callArgs[1]);
expect(parsed).toEqual({
items: ["a", "b", "c"],
config: { key: "value", nested: { deep: true } },
});
});
test("should handle special characters in field names", async () => {
await createMockExecutionFile({
"test-result": "passed",
"item.count": 10,
"user@email": "test",
});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
const callArgs = setOutputSpy.mock.calls[0];
const parsed = JSON.parse(callArgs[1]);
expect(parsed["test-result"]).toBe("passed");
expect(parsed["item.count"]).toBe(10);
expect(parsed["user@email"]).toBe("test");
});
test("should throw error when result exists but structured_output is undefined", async () => {
const messages = [
{ type: "system", subtype: "init" },
{ type: "result", cost_usd: 0.01, duration_ms: 1000 },
];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await expect(
parseAndSetStructuredOutputs(TEST_EXECUTION_FILE),
).rejects.toThrow(
"--json-schema was provided but Claude did not return structured_output",
);
});
test("should throw error when no result message exists", async () => {
const messages = [
{ type: "system", subtype: "init" },
{ type: "turn", content: "test" },
];
await writeFile(TEST_EXECUTION_FILE, JSON.stringify(messages));
await expect(
parseAndSetStructuredOutputs(TEST_EXECUTION_FILE),
).rejects.toThrow(
"--json-schema was provided but Claude did not return structured_output",
);
});
test("should throw error with malformed JSON", async () => {
await writeFile(TEST_EXECUTION_FILE, "{ invalid json");
await expect(
parseAndSetStructuredOutputs(TEST_EXECUTION_FILE),
).rejects.toThrow();
});
test("should throw error when file does not exist", async () => {
await expect(
parseAndSetStructuredOutputs("/nonexistent/file.json"),
).rejects.toThrow();
});
test("should handle empty structured_output object", async () => {
await createMockExecutionFile({});
await parseAndSetStructuredOutputs(TEST_EXECUTION_FILE);
expect(setOutputSpy).toHaveBeenCalledWith("structured_output", "{}");
expect(infoSpy).toHaveBeenCalledWith(
"Set structured_output with 0 field(s)",
);
});
});

View File

@@ -1,12 +1,13 @@
{
"lockfileVersion": 1,
"configVersion": 0,
"workspaces": {
"": {
"name": "@anthropic-ai/claude-code-action",
"dependencies": {
"@actions/core": "^1.10.1",
"@actions/github": "^6.0.1",
"@anthropic-ai/claude-agent-sdk": "^0.1.52",
"@anthropic-ai/claude-agent-sdk": "^0.2.17",
"@modelcontextprotocol/sdk": "^1.11.0",
"@octokit/graphql": "^8.2.2",
"@octokit/rest": "^21.1.1",
@@ -36,7 +37,7 @@
"@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
"@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.1.52", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^3.24.1" } }, "sha512-yF8N05+9NRbqYA/h39jQ726HTQFrdXXp7pEfDNKIJ2c4FdWvEjxBA/8ciZIebN6/PyvGDcbEp3yq2Co4rNpg6A=="],
"@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.17", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-cWEZ3fhPG6beVlZkXPAGYwqoR5zbELPracg+eKQ9UUqlf9m5UmUaaasGSXdVVxgDkjZfl8yoPsHnicuL2GIB1A=="],
"@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],

View File

@@ -7,7 +7,7 @@ You can authenticate with Claude using any of these four methods:
3. Google Vertex AI with OIDC authentication
4. Microsoft Foundry with OIDC authentication
For detailed setup instructions for AWS Bedrock and Google Vertex AI, see the [official documentation](https://docs.anthropic.com/en/docs/claude-code/github-actions#using-with-aws-bedrock-%26-google-vertex-ai).
For detailed setup instructions for AWS Bedrock and Google Vertex AI, see the [official documentation](https://code.claude.com/docs/en/github-actions#for-aws-bedrock:).
**Note**:

View File

@@ -13,6 +13,16 @@
- **No Cross-Repository Access**: Each action invocation is limited to the repository where it was triggered
- **Limited Scope**: The token cannot access other repositories or perform actions beyond the configured permissions
## Pull Request Creation
In its default configuration, **Claude does not create pull requests automatically** when responding to `@claude` mentions. Instead:
- Claude commits code changes to a new branch
- Claude provides a **link to the GitHub PR creation page** in its response
- **The user must click the link and create the PR themselves**, ensuring human oversight before any code is proposed for merging
This design ensures that users retain full control over what pull requests are created and can review the changes before initiating the PR workflow.
## ⚠️ Prompt Injection Risks
**Beware of potential hidden markdown when tagging Claude on untrusted content.** External contributors may include hidden instructions through HTML comments, invisible characters, hidden attributes, or other techniques. The action sanitizes content by stripping HTML comments, invisible characters, markdown image alt text, hidden HTML attributes, and HTML entities, but new bypass techniques may emerge. We recommend reviewing the raw content of all input coming from external contributors before allowing Claude to process it.
@@ -38,7 +48,64 @@ The following permissions are requested but not yet actively used. These will en
## Commit Signing
Commits made by Claude through this action are no longer automatically signed with commit signatures. To enable commit signing set `use_commit_signing: True` in the workflow(s). This ensures the authenticity and integrity of commits, providing a verifiable trail of changes made by the action.
By default, commits made by Claude are unsigned. You can enable commit signing using one of two methods:
### Option 1: GitHub API Commit Signing (use_commit_signing)
This uses GitHub's API to create commits, which automatically signs them as verified from the GitHub App:
```yaml
- uses: anthropics/claude-code-action@main
with:
use_commit_signing: true
```
This is the simplest option and requires no additional setup. However, because it uses the GitHub API instead of the git CLI, it cannot perform complex git operations like rebasing, cherry-picking, or interactive history manipulation.
### Option 2: SSH Signing Key (ssh_signing_key)
This uses an SSH key to sign commits via the git CLI. Use this option when you need both signed commits AND standard git operations (rebasing, cherry-picking, etc.):
```yaml
- uses: anthropics/claude-code-action@main
with:
ssh_signing_key: ${{ secrets.SSH_SIGNING_KEY }}
bot_id: "YOUR_GITHUB_USER_ID"
bot_name: "YOUR_GITHUB_USERNAME"
```
Commits will show as verified and attributed to the GitHub account that owns the signing key.
**Setup steps:**
1. Generate an SSH key pair for signing:
```bash
ssh-keygen -t ed25519 -f ~/.ssh/signing_key -N "" -C "commit signing key"
```
2. Add the **public key** to your GitHub account:
- Go to GitHub → Settings → SSH and GPG keys
- Click "New SSH key"
- Select **Key type: Signing Key** (important)
- Paste the contents of `~/.ssh/signing_key.pub`
3. Add the **private key** to your repository secrets:
- Go to your repo → Settings → Secrets and variables → Actions
- Create a new secret named `SSH_SIGNING_KEY`
- Paste the contents of `~/.ssh/signing_key`
4. Get your GitHub user ID:
```bash
gh api users/YOUR_USERNAME --jq '.id'
```
5. Update your workflow with `bot_id` and `bot_name` matching the account where you added the signing key.
**Note:** If both `ssh_signing_key` and `use_commit_signing` are provided, `ssh_signing_key` takes precedence.
## ⚠️ Authentication Protection

View File

@@ -58,6 +58,7 @@ jobs:
| `claude_code_oauth_token` | Claude Code OAuth token (alternative to anthropic_api_key) | No\* | - |
| `prompt` | Instructions for Claude. Can be a direct prompt or custom template for automation workflows | No | - |
| `track_progress` | Force tag mode with tracking comments. Only works with specific PR/issue events. Preserves GitHub context | No | `false` |
| `include_fix_links` | Include 'Fix this' links in PR code review feedback that open Claude Code with context to fix the identified issue | No | `true` |
| `claude_args` | Additional [arguments to pass directly to Claude CLI](https://docs.claude.com/en/docs/claude-code/cli-reference#cli-flags) (e.g., `--max-turns 10 --model claude-4-0-sonnet-20250805`) | No | "" |
| `base_branch` | The base branch to use for creating new branches (e.g., 'main', 'develop') | No | - |
| `use_sticky_comment` | Use just one comment to deliver PR comments (only applies for pull_request event workflows) | No | `false` |
@@ -70,9 +71,10 @@ jobs:
| `branch_prefix` | The prefix to use for Claude branches (defaults to 'claude/', use 'claude-' for dash format) | No | `claude/` |
| `settings` | Claude Code settings as JSON string or path to settings JSON file | No | "" |
| `additional_permissions` | Additional permissions to enable. Currently supports 'actions: read' for viewing workflow results | No | "" |
| `use_commit_signing` | Enable commit signing using GitHub's commit signature verification. When false, Claude uses standard git commands | No | `false` |
| `bot_id` | GitHub user ID to use for git operations (defaults to Claude's bot ID) | No | `41898282` |
| `bot_name` | GitHub username to use for git operations (defaults to Claude's bot name) | No | `claude[bot]` |
| `use_commit_signing` | Enable commit signing using GitHub's API. Simple but cannot perform complex git operations like rebasing. See [Security](./security.md#commit-signing) | No | `false` |
| `ssh_signing_key` | SSH private key for signing commits. Enables signed commits with full git CLI support (rebasing, etc.). See [Security](./security.md#commit-signing) | No | "" |
| `bot_id` | GitHub user ID to use for git operations (defaults to Claude's bot ID). Required with `ssh_signing_key` for verified commits | No | `41898282` |
| `bot_name` | GitHub username to use for git operations (defaults to Claude's bot name). Required with `ssh_signing_key` for verified commits | No | `claude[bot]` |
| `allowed_bots` | Comma-separated list of allowed bot usernames, or '\*' to allow all bots. Empty string (default) allows no bots | No | "" |
| `allowed_non_write_users` | **⚠️ RISKY**: Comma-separated list of usernames to allow without write permissions, or '\*' for all users. Only works with `github_token` input. See [Security](./security.md) | No | "" |
| `path_to_claude_code_executable` | Optional path to a custom Claude Code executable. Skips automatic installation. Useful for Nix, custom containers, or specialized environments | No | "" |

View File

@@ -12,7 +12,7 @@
"dependencies": {
"@actions/core": "^1.10.1",
"@actions/github": "^6.0.1",
"@anthropic-ai/claude-agent-sdk": "^0.1.52",
"@anthropic-ai/claude-agent-sdk": "^0.2.17",
"@modelcontextprotocol/sdk": "^1.11.0",
"@octokit/graphql": "^8.2.2",
"@octokit/rest": "^21.1.1",

View File

@@ -21,8 +21,12 @@ import type { ParsedGitHubContext } from "../github/context";
import type { CommonFields, PreparedContext, EventData } from "./types";
import { GITHUB_SERVER_URL } from "../github/api/config";
import type { Mode, ModeContext } from "../modes/types";
import { extractUserRequest } from "../utils/extract-user-request";
export type { CommonFields, PreparedContext } from "./types";
/** Filename for the user request file, read by the SDK runner */
const USER_REQUEST_FILENAME = "claude-user-request.txt";
// Tag mode defaults - these tools are needed for tag mode to function
const BASE_ALLOWED_TOOLS = [
"Edit",
@@ -734,7 +738,13 @@ ${eventData.eventName === "issue_comment" || eventData.eventName === "pull_reque
- Reference specific code sections with file paths and line numbers${eventData.isPR ? `\n - AFTER reading files and analyzing code, you MUST call mcp__github_comment__update_claude_comment to post your review` : ""}
- Formulate a concise, technical, and helpful response based on the context.
- Reference specific code with inline formatting or code blocks.
- Include relevant file paths and line numbers when applicable.
- Include relevant file paths and line numbers when applicable.${
eventData.isPR && context.githubContext?.inputs.includeFixLinks
? `
- When identifying issues that could be fixed, include an inline link: [Fix this →](https://claude.ai/code?q=<URI_ENCODED_INSTRUCTIONS>&repo=${context.repository})
The query should be URI-encoded and include enough context for Claude Code to understand and fix the issue (file path, line numbers, branch name, what needs to change).`
: ""
}
- ${eventData.isPR ? `IMPORTANT: Submit your review feedback by updating the Claude comment using mcp__github_comment__update_claude_comment. This will be displayed as your PR review.` : `Remember that this feedback must be posted to the GitHub comment using mcp__github_comment__update_claude_comment.`}
B. For Straightforward Changes:
@@ -841,6 +851,55 @@ f. If you are unable to complete certain steps, such as running a linter or test
return promptContent;
}
/**
* Extracts the user's request from the prepared context and GitHub data.
*
* This is used to send the user's actual command/request as a separate
* content block, enabling slash command processing in the CLI.
*
* @param context - The prepared context containing event data and trigger phrase
* @param githubData - The fetched GitHub data containing issue/PR body content
* @returns The extracted user request text (e.g., "/review-pr" or "fix this bug"),
* or null for assigned/labeled events without an explicit trigger in the body
*
* @example
* // Comment event: "@claude /review-pr" -> returns "/review-pr"
* // Issue body with "@claude fix this" -> returns "fix this"
* // Issue assigned without @claude in body -> returns null
*/
function extractUserRequestFromContext(
context: PreparedContext,
githubData: FetchDataResult,
): string | null {
const { eventData, triggerPhrase } = context;
// For comment events, extract from comment body
if (
"commentBody" in eventData &&
eventData.commentBody &&
(eventData.eventName === "issue_comment" ||
eventData.eventName === "pull_request_review_comment" ||
eventData.eventName === "pull_request_review")
) {
return extractUserRequest(eventData.commentBody, triggerPhrase);
}
// For issue/PR events triggered by body content, extract from the body
if (githubData.contextData?.body) {
const request = extractUserRequest(
githubData.contextData.body,
triggerPhrase,
);
if (request) {
return request;
}
}
// For assigned/labeled events without explicit trigger in body,
// return null to indicate the full context should be used
return null;
}
export async function createPrompt(
mode: Mode,
modeContext: ModeContext,
@@ -889,6 +948,22 @@ export async function createPrompt(
promptContent,
);
// Extract and write the user request separately for SDK multi-block messaging
// This allows the CLI to process slash commands (e.g., "@claude /review-pr")
const userRequest = extractUserRequestFromContext(
preparedContext,
githubData,
);
if (userRequest) {
await writeFile(
`${process.env.RUNNER_TEMP || "/tmp"}/claude-prompts/${USER_REQUEST_FILENAME}`,
userRequest,
);
console.log("===== USER REQUEST =====");
console.log(userRequest);
console.log("========================");
}
// Set allowed tools
const hasActionsReadPermission = false;

View File

@@ -0,0 +1,21 @@
#!/usr/bin/env bun
/**
* Cleanup SSH signing key after action completes
* This is run as a post step for security purposes
*/
import { cleanupSshSigning } from "../github/operations/git-config";
async function run() {
try {
await cleanupSshSigning();
} catch (error) {
// Don't fail the action if cleanup fails, just log it
console.error("Failed to cleanup SSH signing key:", error);
}
}
if (import.meta.main) {
run();
}

View File

@@ -26,6 +26,7 @@ export function collectActionInputsPresence(): void {
max_turns: "",
use_sticky_comment: "false",
use_commit_signing: "false",
ssh_signing_key: "",
};
const allInputsJson = process.env.ALL_INPUTS;

View File

@@ -18,6 +18,11 @@ export const PR_QUERY = `
additions
deletions
state
labels(first: 1) {
nodes {
name
}
}
commits(first: 100) {
totalCount
nodes {
@@ -101,6 +106,11 @@ export const ISSUE_QUERY = `
updatedAt
lastEditedAt
state
labels(first: 1) {
nodes {
name
}
}
comments(first: 100) {
nodes {
id

View File

@@ -88,13 +88,16 @@ type BaseContext = {
labelTrigger: string;
baseBranch?: string;
branchPrefix: string;
branchNameTemplate?: string;
useStickyComment: boolean;
useCommitSigning: boolean;
sshSigningKey: string;
botId: string;
botName: string;
allowedBots: string;
allowedNonWriteUsers: string;
trackProgress: boolean;
includeFixLinks: boolean;
};
};
@@ -143,13 +146,16 @@ export function parseGitHubContext(): GitHubContext {
labelTrigger: process.env.LABEL_TRIGGER ?? "",
baseBranch: process.env.BASE_BRANCH,
branchPrefix: process.env.BRANCH_PREFIX ?? "claude/",
branchNameTemplate: process.env.BRANCH_NAME_TEMPLATE,
useStickyComment: process.env.USE_STICKY_COMMENT === "true",
useCommitSigning: process.env.USE_COMMIT_SIGNING === "true",
sshSigningKey: process.env.SSH_SIGNING_KEY || "",
botId: process.env.BOT_ID ?? String(CLAUDE_APP_BOT_ID),
botName: process.env.BOT_NAME ?? CLAUDE_BOT_LOGIN,
allowedBots: process.env.ALLOWED_BOTS ?? "",
allowedNonWriteUsers: process.env.ALLOWED_NON_WRITE_USERS ?? "",
trackProgress: process.env.TRACK_PROGRESS === "true",
includeFixLinks: process.env.INCLUDE_FIX_LINKS === "true",
},
};

View File

@@ -3,6 +3,8 @@ import type { Octokits } from "../api/client";
import { ISSUE_QUERY, PR_QUERY, USER_QUERY } from "../api/queries/github";
import {
isIssueCommentEvent,
isIssuesEvent,
isPullRequestEvent,
isPullRequestReviewEvent,
isPullRequestReviewCommentEvent,
type ParsedGitHubContext,
@@ -40,6 +42,31 @@ export function extractTriggerTimestamp(
return undefined;
}
/**
* Extracts the original title from the GitHub webhook payload.
* This is the title as it existed when the trigger event occurred.
*
* @param context - Parsed GitHub context from webhook
* @returns The original title string or undefined if not available
*/
export function extractOriginalTitle(
context: ParsedGitHubContext,
): string | undefined {
if (isIssueCommentEvent(context)) {
return context.payload.issue?.title;
} else if (isPullRequestEvent(context)) {
return context.payload.pull_request?.title;
} else if (isPullRequestReviewEvent(context)) {
return context.payload.pull_request?.title;
} else if (isPullRequestReviewCommentEvent(context)) {
return context.payload.pull_request?.title;
} else if (isIssuesEvent(context)) {
return context.payload.issue?.title;
}
return undefined;
}
/**
* Filters comments to only include those that existed in their final state before the trigger time.
* This prevents malicious actors from editing comments after the trigger to inject harmful content.
@@ -146,6 +173,7 @@ type FetchDataParams = {
isPR: boolean;
triggerUsername?: string;
triggerTime?: string;
originalTitle?: string;
};
export type GitHubFileWithSHA = GitHubFile & {
@@ -169,6 +197,7 @@ export async function fetchGitHubData({
isPR,
triggerUsername,
triggerTime,
originalTitle,
}: FetchDataParams): Promise<FetchDataResult> {
const [owner, repo] = repository.split("/");
if (!owner || !repo) {
@@ -354,6 +383,11 @@ export async function fetchGitHubData({
triggerDisplayName = await fetchUserDisplayName(octokits, triggerUsername);
}
// Use the original title from the webhook payload if provided
if (originalTitle !== undefined) {
contextData.title = originalTitle;
}
return {
contextData,
comments,

View File

@@ -14,7 +14,8 @@ export function formatContext(
): string {
if (isPR) {
const prData = contextData as GitHubPullRequest;
return `PR Title: ${prData.title}
const sanitizedTitle = sanitizeContent(prData.title);
return `PR Title: ${sanitizedTitle}
PR Author: ${prData.author.login}
PR Branch: ${prData.headRefName} -> ${prData.baseRefName}
PR State: ${prData.state}
@@ -24,7 +25,8 @@ Total Commits: ${prData.commits.totalCount}
Changed Files: ${prData.files.nodes.length} files`;
} else {
const issueData = contextData as GitHubIssue;
return `Issue Title: ${issueData.title}
const sanitizedTitle = sanitizeContent(issueData.title);
return `Issue Title: ${sanitizedTitle}
Issue Author: ${issueData.author.login}
Issue State: ${issueData.state}`;
}

View File

@@ -7,11 +7,120 @@
*/
import { $ } from "bun";
import { execFileSync } from "child_process";
import * as core from "@actions/core";
import type { ParsedGitHubContext } from "../context";
import type { GitHubPullRequest } from "../types";
import type { Octokits } from "../api/client";
import type { FetchDataResult } from "../data/fetcher";
import { generateBranchName } from "../../utils/branch-template";
/**
* Extracts the first label from GitHub data, or returns undefined if no labels exist
*/
function extractFirstLabel(githubData: FetchDataResult): string | undefined {
const labels = githubData.contextData.labels?.nodes;
return labels && labels.length > 0 ? labels[0]?.name : undefined;
}
/**
* Validates a git branch name against a strict whitelist pattern.
* This prevents command injection by ensuring only safe characters are used.
*
* Valid branch names:
* - Start with alphanumeric character (not dash, to prevent option injection)
* - Contain only alphanumeric, forward slash, hyphen, underscore, or period
* - Do not start or end with a period
* - Do not end with a slash
* - Do not contain '..' (path traversal)
* - Do not contain '//' (consecutive slashes)
* - Do not end with '.lock'
* - Do not contain '@{'
* - Do not contain control characters or special git characters (~^:?*[\])
*/
export function validateBranchName(branchName: string): void {
// Check for empty or whitespace-only names
if (!branchName || branchName.trim().length === 0) {
throw new Error("Branch name cannot be empty");
}
// Check for leading dash (prevents option injection like --help, -x)
if (branchName.startsWith("-")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot start with a dash.`,
);
}
// Check for control characters and special git characters (~^:?*[\])
// eslint-disable-next-line no-control-regex
if (/[\x00-\x1F\x7F ~^:?*[\]\\]/.test(branchName)) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot contain control characters, spaces, or special git characters (~^:?*[\\]).`,
);
}
// Strict whitelist pattern: alphanumeric start, then alphanumeric/slash/hyphen/underscore/period
const validPattern = /^[a-zA-Z0-9][a-zA-Z0-9/_.-]*$/;
if (!validPattern.test(branchName)) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names must start with an alphanumeric character and contain only alphanumeric characters, forward slashes, hyphens, underscores, or periods.`,
);
}
// Check for leading/trailing periods
if (branchName.startsWith(".") || branchName.endsWith(".")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot start or end with a period.`,
);
}
// Check for trailing slash
if (branchName.endsWith("/")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot end with a slash.`,
);
}
// Check for consecutive slashes
if (branchName.includes("//")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot contain consecutive slashes.`,
);
}
// Additional git-specific validations
if (branchName.includes("..")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot contain '..'`,
);
}
if (branchName.endsWith(".lock")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot end with '.lock'`,
);
}
if (branchName.includes("@{")) {
throw new Error(
`Invalid branch name: "${branchName}". Branch names cannot contain '@{'`,
);
}
}
/**
* Executes a git command safely using execFileSync to avoid shell interpolation.
*
* Security: execFileSync passes arguments directly to the git binary without
* invoking a shell, preventing command injection attacks where malicious input
* could be interpreted as shell commands (e.g., branch names containing `;`, `|`, `&&`).
*
* @param args - Git command arguments (e.g., ["checkout", "branch-name"])
*/
function execGit(args: string[]): void {
execFileSync("git", args, { stdio: "inherit" });
}
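Taken together, the intent of the two helpers above can be sketched from a caller's point of view (the import path is illustrative, not taken from the diff): the name is validated against the whitelist first, and `execFileSync` then hands it to git as a single literal argument, so shell metacharacters are never interpreted.

```typescript
import { execFileSync } from "child_process";
import { validateBranchName } from "./branch"; // illustrative relative path

// Sketch only: validate, then pass the name to git as a literal argument.
function checkoutBranchSafely(branchName: string): void {
  validateBranchName(branchName); // throws on "-x", "a;b", "$(id)", spaces, "..", etc.
  execFileSync("git", ["checkout", branchName, "--"], { stdio: "inherit" });
}

// checkoutBranchSafely("claude/issue-123-20250101-1234"); // ok
// checkoutBranchSafely("main; rm -rf /");                 // throws before git ever runs
```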
export type BranchInfo = {
baseBranch: string;
@@ -26,7 +135,7 @@ export async function setupBranch(
): Promise<BranchInfo> {
const { owner, repo } = context.repository;
const entityNumber = context.entityNumber;
const { baseBranch, branchPrefix } = context.inputs;
const { baseBranch, branchPrefix, branchNameTemplate } = context.inputs;
const isPR = context.isPR;
if (isPR) {
@@ -53,14 +162,19 @@ export async function setupBranch(
`PR #${entityNumber}: ${commitCount} commits, using fetch depth ${fetchDepth}`,
);
// Validate branch names before use to prevent command injection
validateBranchName(branchName);
// Execute git commands to checkout PR branch (dynamic depth based on PR size)
await $`git fetch origin --depth=${fetchDepth} ${branchName}`;
await $`git checkout ${branchName} --`;
// Using execFileSync instead of shell template literals for security
execGit(["fetch", "origin", `--depth=${fetchDepth}`, branchName]);
execGit(["checkout", branchName, "--"]);
console.log(`Successfully checked out PR branch for PR #${entityNumber}`);
// For open PRs, we need to get the base branch of the PR
const baseBranch = prData.baseRefName;
validateBranchName(baseBranch);
return {
baseBranch,
@@ -87,17 +201,8 @@ export async function setupBranch(
// Generate branch name for either an issue or closed/merged PR
const entityType = isPR ? "pr" : "issue";
// Create Kubernetes-compatible timestamp: lowercase, hyphens only, shorter format
const now = new Date();
const timestamp = `${now.getFullYear()}${String(now.getMonth() + 1).padStart(2, "0")}${String(now.getDate()).padStart(2, "0")}-${String(now.getHours()).padStart(2, "0")}${String(now.getMinutes()).padStart(2, "0")}`;
// Ensure branch name is Kubernetes-compatible:
// - Lowercase only
// - Alphanumeric with hyphens
// - No underscores
// - Max 50 chars (to allow for prefixes)
const branchName = `${branchPrefix}${entityType}-${entityNumber}-${timestamp}`;
const newBranch = branchName.toLowerCase().substring(0, 50);
// Get the SHA of the source branch to use in template
let sourceSHA: string | undefined;
try {
// Get the SHA of the source branch to verify it exists
@@ -107,8 +212,46 @@ export async function setupBranch(
ref: `heads/${sourceBranch}`,
});
const currentSHA = sourceBranchRef.data.object.sha;
console.log(`Source branch SHA: ${currentSHA}`);
sourceSHA = sourceBranchRef.data.object.sha;
console.log(`Source branch SHA: ${sourceSHA}`);
// Extract first label from GitHub data
const firstLabel = extractFirstLabel(githubData);
// Extract title from GitHub data
const title = githubData.contextData.title;
// Generate branch name using template or default format
let newBranch = generateBranchName(
branchNameTemplate,
branchPrefix,
entityType,
entityNumber,
sourceSHA,
firstLabel,
title,
);
// Check if generated branch already exists on remote
try {
await $`git ls-remote --exit-code origin refs/heads/${newBranch}`.quiet();
// If we get here, branch exists (exit code 0)
console.log(
`Branch '${newBranch}' already exists, falling back to default format`,
);
newBranch = generateBranchName(
undefined, // Force default template
branchPrefix,
entityType,
entityNumber,
sourceSHA,
firstLabel,
title,
);
} catch {
// Branch doesn't exist (non-zero exit code), continue with generated name
}
// For commit signing, defer branch creation to the file ops server
if (context.inputs.useCommitSigning) {
@@ -118,8 +261,9 @@ export async function setupBranch(
// Ensure we're on the source branch
console.log(`Fetching and checking out source branch: ${sourceBranch}`);
await $`git fetch origin ${sourceBranch} --depth=1`;
await $`git checkout ${sourceBranch}`;
validateBranchName(sourceBranch);
execGit(["fetch", "origin", sourceBranch, "--depth=1"]);
execGit(["checkout", sourceBranch, "--"]);
// Set outputs for GitHub Actions
core.setOutput("CLAUDE_BRANCH", newBranch);
@@ -138,11 +282,13 @@ export async function setupBranch(
// Fetch and checkout the source branch first to ensure we branch from the correct base
console.log(`Fetching and checking out source branch: ${sourceBranch}`);
await $`git fetch origin ${sourceBranch} --depth=1`;
await $`git checkout ${sourceBranch}`;
validateBranchName(sourceBranch);
validateBranchName(newBranch);
execGit(["fetch", "origin", sourceBranch, "--depth=1"]);
execGit(["checkout", sourceBranch, "--"]);
// Create and checkout the new branch from the source branch
await $`git checkout -b ${newBranch}`;
execGit(["checkout", "-b", newBranch]);
console.log(
`Successfully created and checked out local branch: ${newBranch}`,

View File

@@ -6,9 +6,14 @@
*/
import { $ } from "bun";
import { mkdir, writeFile, rm } from "fs/promises";
import { join } from "path";
import { homedir } from "os";
import type { GitHubContext } from "../context";
import { GITHUB_SERVER_URL } from "../api/config";
const SSH_SIGNING_KEY_PATH = join(homedir(), ".ssh", "claude_signing_key");
type GitUser = {
login: string;
id: number;
@@ -54,3 +59,55 @@ export async function configureGitAuth(
console.log("Git authentication configured successfully");
}
/**
* Configure git to use SSH signing for commits
* This is an alternative to GitHub API-based commit signing (use_commit_signing)
*/
export async function setupSshSigning(sshSigningKey: string): Promise<void> {
console.log("Configuring SSH signing for commits...");
// Validate SSH key format
if (!sshSigningKey.trim()) {
throw new Error("SSH signing key cannot be empty");
}
if (
!sshSigningKey.includes("BEGIN") ||
!sshSigningKey.includes("PRIVATE KEY")
) {
throw new Error("Invalid SSH private key format");
}
// Create .ssh directory with secure permissions (700)
const sshDir = join(homedir(), ".ssh");
await mkdir(sshDir, { recursive: true, mode: 0o700 });
// Ensure key ends with newline (required for ssh-keygen to parse it)
const normalizedKey = sshSigningKey.endsWith("\n")
? sshSigningKey
: sshSigningKey + "\n";
// Write the signing key atomically with secure permissions (600)
await writeFile(SSH_SIGNING_KEY_PATH, normalizedKey, { mode: 0o600 });
console.log(`✓ SSH signing key written to ${SSH_SIGNING_KEY_PATH}`);
// Configure git to use SSH signing
await $`git config gpg.format ssh`;
await $`git config user.signingkey ${SSH_SIGNING_KEY_PATH}`;
await $`git config commit.gpgsign true`;
console.log("✓ Git configured to use SSH signing for commits");
}
/**
* Clean up the SSH signing key file
* Should be called in the post step for security
*/
export async function cleanupSshSigning(): Promise<void> {
try {
await rm(SSH_SIGNING_KEY_PATH, { force: true });
console.log("✓ SSH signing key cleaned up");
} catch (error) {
console.log("No SSH signing key to clean up");
}
}
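A rough sketch of how these two helpers are meant to bracket a run (the `SSH_SIGNING_KEY` env-var name and the import path are illustrative assumptions, not taken from the diff): the key is installed before commits are made and removed again in the post step.

```typescript
import { setupSshSigning, cleanupSshSigning } from "./git-config"; // illustrative path

async function withSshSigning(run: () => Promise<void>): Promise<void> {
  const key = process.env.SSH_SIGNING_KEY ?? ""; // env-var name is an assumption
  if (key) {
    // Writes ~/.ssh/claude_signing_key with mode 600 and configures
    // gpg.format=ssh, user.signingkey and commit.gpgsign=true.
    await setupSshSigning(key);
  }
  try {
    await run(); // git commits made here are SSH-signed when a key was provided
  } finally {
    await cleanupSshSigning(); // post-step hygiene: remove the key file
  }
}
```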

View File

@@ -63,6 +63,11 @@ export type GitHubPullRequest = {
additions: number;
deletions: number;
state: string;
labels: {
nodes: Array<{
name: string;
}>;
};
commits: {
totalCount: number;
nodes: Array<{
@@ -88,6 +93,11 @@ export type GitHubIssue = {
updatedAt?: string;
lastEditedAt?: string;
state: string;
labels: {
nodes: Array<{
name: string;
}>;
};
comments: {
nodes: GitHubComment[];
};

View File

@@ -6,11 +6,11 @@
*/
import type { Octokit } from "@octokit/rest";
import type { ParsedGitHubContext } from "../context";
import type { GitHubContext } from "../context";
export async function checkHumanActor(
octokit: Octokit,
githubContext: ParsedGitHubContext,
githubContext: GitHubContext,
) {
// Fetch user information from GitHub API
const { data: userData } = await octokit.users.getByUsername({

View File

@@ -4,11 +4,12 @@ import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";
import { readFile, stat } from "fs/promises";
import { join } from "path";
import { resolve } from "path";
import { constants } from "fs";
import fetch from "node-fetch";
import { GITHUB_API_URL } from "../github/api/config";
import { retryWithBackoff } from "../utils/retry";
import { validatePathWithinRepo } from "./path-validation";
type GitHubRef = {
object: {
@@ -213,12 +214,18 @@ server.tool(
throw new Error("GITHUB_TOKEN environment variable is required");
}
const processedFiles = files.map((filePath) => {
if (filePath.startsWith("/")) {
return filePath.slice(1);
}
return filePath;
});
// Validate all paths are within repository root and get full/relative paths
const resolvedRepoDir = resolve(REPO_DIR);
const validatedFiles = await Promise.all(
files.map(async (filePath) => {
const fullPath = await validatePathWithinRepo(filePath, REPO_DIR);
// Calculate the relative path for the git tree entry
// Use the original filePath (normalized) for the git path, not the symlink-resolved path
const normalizedPath = resolve(resolvedRepoDir, filePath);
const relativePath = normalizedPath.slice(resolvedRepoDir.length + 1);
return { fullPath, relativePath };
}),
);
// 1. Get the branch reference (create if doesn't exist)
const baseSha = await getOrCreateBranchRef(
@@ -247,18 +254,14 @@ server.tool(
// 3. Create tree entries for all files
const treeEntries = await Promise.all(
processedFiles.map(async (filePath) => {
const fullPath = filePath.startsWith("/")
? filePath
: join(REPO_DIR, filePath);
validatedFiles.map(async ({ fullPath, relativePath }) => {
// Get the proper file mode based on file permissions
const fileMode = await getFileMode(fullPath);
// Check if file is binary (images, etc.)
const isBinaryFile =
/\.(png|jpg|jpeg|gif|webp|ico|pdf|zip|tar|gz|exe|bin|woff|woff2|ttf|eot)$/i.test(
filePath,
relativePath,
);
if (isBinaryFile) {
@@ -284,7 +287,7 @@ server.tool(
if (!blobResponse.ok) {
const errorText = await blobResponse.text();
throw new Error(
`Failed to create blob for ${filePath}: ${blobResponse.status} - ${errorText}`,
`Failed to create blob for ${relativePath}: ${blobResponse.status} - ${errorText}`,
);
}
@@ -292,7 +295,7 @@ server.tool(
// Return tree entry with blob SHA
return {
path: filePath,
path: relativePath,
mode: fileMode,
type: "blob",
sha: blobData.sha,
@@ -301,7 +304,7 @@ server.tool(
// For text files, include content directly in tree
const content = await readFile(fullPath, "utf-8");
return {
path: filePath,
path: relativePath,
mode: fileMode,
type: "blob",
content: content,
@@ -421,7 +424,9 @@ server.tool(
author: newCommitData.author.name,
date: newCommitData.author.date,
},
files: processedFiles.map((path) => ({ path })),
files: validatedFiles.map(({ relativePath }) => ({
path: relativePath,
})),
tree: {
sha: treeData.sha,
},

View File

@@ -0,0 +1,64 @@
import { realpath } from "fs/promises";
import { resolve, sep } from "path";
/**
* Validates that a file path resolves within the repository root.
* Prevents path traversal attacks via "../" sequences and symlinks.
* @param filePath - The file path to validate (can be relative or absolute)
* @param repoRoot - The repository root directory
* @returns The resolved absolute path (with symlinks resolved) if valid
* @throws Error if the path resolves outside the repository root
*/
export async function validatePathWithinRepo(
filePath: string,
repoRoot: string,
): Promise<string> {
// First resolve the path string (handles .. and . segments)
const initialPath = resolve(repoRoot, filePath);
// Resolve symlinks to get the real path
// This prevents symlink attacks where a link inside the repo points outside
let resolvedRoot: string;
let resolvedPath: string;
try {
resolvedRoot = await realpath(repoRoot);
} catch {
throw new Error(`Repository root '${repoRoot}' does not exist`);
}
try {
resolvedPath = await realpath(initialPath);
} catch {
// File doesn't exist yet - fall back to checking the parent directory
// This handles the case where we're creating a new file
const parentDir = resolve(initialPath, "..");
try {
const resolvedParent = await realpath(parentDir);
if (
resolvedParent !== resolvedRoot &&
!resolvedParent.startsWith(resolvedRoot + sep)
) {
throw new Error(
`Path '${filePath}' resolves outside the repository root`,
);
}
// Parent is valid, return the initial path since file doesn't exist yet
return initialPath;
} catch {
throw new Error(
`Path '${filePath}' resolves outside the repository root`,
);
}
}
// Path must be within repo root (or be the root itself)
if (
resolvedPath !== resolvedRoot &&
!resolvedPath.startsWith(resolvedRoot + sep)
) {
throw new Error(`Path '${filePath}' resolves outside the repository root`);
}
return resolvedPath;
}
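A quick usage sketch of the new helper (paths and import are illustrative): in-repo paths come back as resolved absolute paths, while traversal or symlink escapes throw.

```typescript
import { validatePathWithinRepo } from "./path-validation"; // illustrative path

const repoRoot = process.env.GITHUB_WORKSPACE ?? process.cwd();

// Works even for a file that does not exist yet; the parent directory is validated instead.
const safe = await validatePathWithinRepo("new-file.txt", repoRoot);
console.log(safe);

// Traversal outside the repository root is rejected.
try {
  await validatePathWithinRepo("../../etc/passwd", repoRoot);
} catch (err) {
  console.error((err as Error).message); // "... resolves outside the repository root"
}
```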

View File

@@ -4,7 +4,11 @@ import type { Mode, ModeOptions, ModeResult } from "../types";
import type { PreparedContext } from "../../create-prompt/types";
import { prepareMcpConfig } from "../../mcp/install-mcp-server";
import { parseAllowedTools } from "./parse-tools";
import { configureGitAuth } from "../../github/operations/git-config";
import {
configureGitAuth,
setupSshSigning,
} from "../../github/operations/git-config";
import { checkHumanActor } from "../../github/validation/actor";
import type { GitHubContext } from "../../github/context";
import { isEntityContext } from "../../github/context";
@@ -77,9 +81,36 @@ export const agentMode: Mode = {
return false;
},
async prepare({ context, githubToken }: ModeOptions): Promise<ModeResult> {
async prepare({
context,
octokit,
githubToken,
}: ModeOptions): Promise<ModeResult> {
// Check if actor is human (prevents bot-triggered loops)
await checkHumanActor(octokit.rest, context);
// Configure git authentication for agent mode (same as tag mode)
if (!context.inputs.useCommitSigning) {
// SSH signing takes precedence if provided
const useSshSigning = !!context.inputs.sshSigningKey;
const useApiCommitSigning =
context.inputs.useCommitSigning && !useSshSigning;
if (useSshSigning) {
// Setup SSH signing for commits
await setupSshSigning(context.inputs.sshSigningKey);
// Still configure git auth for push operations (user/email and remote URL)
const user = {
login: context.inputs.botName,
id: parseInt(context.inputs.botId),
};
try {
await configureGitAuth(githubToken, context, user);
} catch (error) {
console.error("Failed to configure git authentication:", error);
// Continue anyway - git operations may still work with default config
}
} else if (!useApiCommitSigning) {
// Use bot_id and bot_name from inputs directly
const user = {
login: context.inputs.botName,

View File

@@ -1,22 +1,33 @@
export function parseAllowedTools(claudeArgs: string): string[] {
// Match --allowedTools or --allowed-tools followed by the value
// Handle both quoted and unquoted values
// Use /g flag to find ALL occurrences, not just the first one
const patterns = [
/--(?:allowedTools|allowed-tools)\s+"([^"]+)"/, // Double quoted
/--(?:allowedTools|allowed-tools)\s+'([^']+)'/, // Single quoted
/--(?:allowedTools|allowed-tools)\s+([^\s]+)/, // Unquoted
/--(?:allowedTools|allowed-tools)\s+"([^"]+)"/g, // Double quoted
/--(?:allowedTools|allowed-tools)\s+'([^']+)'/g, // Single quoted
/--(?:allowedTools|allowed-tools)\s+([^'"\s][^\s]*)/g, // Unquoted (must not start with quote)
];
const tools: string[] = [];
const seen = new Set<string>();
for (const pattern of patterns) {
const match = claudeArgs.match(pattern);
if (match && match[1]) {
// Don't return if the value starts with -- (another flag)
if (match[1].startsWith("--")) {
return [];
for (const match of claudeArgs.matchAll(pattern)) {
if (match[1]) {
// Don't add if the value starts with -- (another flag)
if (match[1].startsWith("--")) {
continue;
}
for (const tool of match[1].split(",")) {
const trimmed = tool.trim();
if (trimmed && !seen.has(trimmed)) {
seen.add(trimmed);
tools.push(trimmed);
}
}
}
return match[1].split(",").map((t) => t.trim());
}
}
return [];
return tools;
}
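Usage sketch of the updated parser (import path illustrative): every `--allowed-tools` / `--allowedTools` occurrence is now collected, not just the first, and the results are deduplicated in order of first appearance.

```typescript
import { parseAllowedTools } from "./parse-tools"; // illustrative path

const claudeArgs =
  "--allowed-tools 'Read,Glob' --allowed-tools 'Glob,Grep' --allowedTools 'Read'";

console.log(parseAllowedTools(claudeArgs));
// -> ["Read", "Glob", "Grep"]
```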

View File

@@ -4,11 +4,15 @@ import { checkContainsTrigger } from "../../github/validation/trigger";
import { checkHumanActor } from "../../github/validation/actor";
import { createInitialComment } from "../../github/operations/comments/create-initial";
import { setupBranch } from "../../github/operations/branch";
import { configureGitAuth } from "../../github/operations/git-config";
import {
configureGitAuth,
setupSshSigning,
} from "../../github/operations/git-config";
import { prepareMcpConfig } from "../../mcp/install-mcp-server";
import {
fetchGitHubData,
extractTriggerTimestamp,
extractOriginalTitle,
} from "../../github/data/fetcher";
import { createPrompt, generateDefaultPrompt } from "../../create-prompt";
import { isEntityContext } from "../../github/context";
@@ -75,6 +79,7 @@ export const tagMode: Mode = {
const commentId = commentData.id;
const triggerTime = extractTriggerTimestamp(context);
const originalTitle = extractOriginalTitle(context);
const githubData = await fetchGitHubData({
octokits: octokit,
@@ -83,13 +88,34 @@ export const tagMode: Mode = {
isPR: context.isPR,
triggerUsername: context.actor,
triggerTime,
originalTitle,
});
// Setup branch
const branchInfo = await setupBranch(octokit, githubData, context);
// Configure git authentication if not using commit signing
if (!context.inputs.useCommitSigning) {
// Configure git authentication
// SSH signing takes precedence if provided
const useSshSigning = !!context.inputs.sshSigningKey;
const useApiCommitSigning =
context.inputs.useCommitSigning && !useSshSigning;
if (useSshSigning) {
// Setup SSH signing for commits
await setupSshSigning(context.inputs.sshSigningKey);
// Still configure git auth for push operations (user/email and remote URL)
const user = {
login: context.inputs.botName,
id: parseInt(context.inputs.botId),
};
try {
await configureGitAuth(githubToken, context, user);
} catch (error) {
console.error("Failed to configure git authentication:", error);
throw error;
}
} else if (!useApiCommitSigning) {
// Use bot_id and bot_name from inputs directly
const user = {
login: context.inputs.botName,
@@ -135,8 +161,9 @@ export const tagMode: Mode = {
...userAllowedMCPTools,
];
// Add git commands when not using commit signing
if (!context.inputs.useCommitSigning) {
// Add git commands when using git CLI (no API commit signing, or SSH signing)
// SSH signing still uses git CLI, just with signing enabled
if (!useApiCommitSigning) {
tagModeTools.push(
"Bash(git add:*)",
"Bash(git commit:*)",
@@ -147,7 +174,7 @@ export const tagMode: Mode = {
"Bash(git rm:*)",
);
} else {
// When using commit signing, use MCP file ops tools
// When using API commit signing, use MCP file ops tools
tagModeTools.push(
"mcp__github_file_ops__commit_files",
"mcp__github_file_ops__delete_files",
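The signing precedence implemented in the agent and tag mode diffs above reduces to two booleans; a minimal sketch of that decision logic (not code from the diff):

```typescript
// Sketch: the same booleans computed in agent.ts and tag.ts above.
function resolveSigningMode(sshSigningKey: string, useCommitSigning: boolean) {
  const useSshSigning = sshSigningKey.length > 0;        // SSH key takes precedence
  const useApiCommitSigning = useCommitSigning && !useSshSigning;
  return {
    useSshSigning,                        // setupSshSigning + git CLI, signed commits
    useApiCommitSigning,                  // MCP file-ops tools (API commit signing)
    useGitCliTools: !useApiCommitSigning, // Bash(git add/commit/push/...) allowed
  };
}

// resolveSigningMode("key", true) -> { useSshSigning: true,  useApiCommitSigning: false, useGitCliTools: true }
// resolveSigningMode("", true)    -> { useSshSigning: false, useApiCommitSigning: true,  useGitCliTools: false }
// resolveSigningMode("", false)   -> { useSshSigning: false, useApiCommitSigning: false, useGitCliTools: true }
```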

View File

@@ -0,0 +1,99 @@
#!/usr/bin/env bun
/**
* Branch name template parsing and variable substitution utilities
*/
const NUM_DESCRIPTION_WORDS = 5;
/**
* Extracts the first 5 words from a title and converts them to kebab-case
*/
function extractDescription(
title: string,
numWords: number = NUM_DESCRIPTION_WORDS,
): string {
if (!title || title.trim() === "") {
return "";
}
return title
.trim()
.split(/\s+/)
.slice(0, numWords) // Only first `numWords` words
.join("-")
.toLowerCase()
.replace(/[^a-z0-9-]/g, "") // Remove non-alphanumeric except hyphens
.replace(/-+/g, "-") // Replace multiple hyphens with single
.replace(/^-|-$/g, ""); // Remove leading/trailing hyphens
}
export interface BranchTemplateVariables {
prefix: string;
entityType: string;
entityNumber: number;
timestamp: string;
sha?: string;
label?: string;
description?: string;
}
/**
* Replaces template variables in a branch name template
* Template format: {{variableName}}
*/
export function applyBranchTemplate(
template: string,
variables: BranchTemplateVariables,
): string {
let result = template;
// Replace each variable
Object.entries(variables).forEach(([key, value]) => {
const placeholder = `{{${key}}}`;
const replacement = value ? String(value) : "";
result = result.replaceAll(placeholder, replacement);
});
return result;
}
/**
* Generates a branch name from the provided `template` and set of `variables`. Uses a default format if the template is empty or produces an empty result.
*/
export function generateBranchName(
template: string | undefined,
branchPrefix: string,
entityType: string,
entityNumber: number,
sha?: string,
label?: string,
title?: string,
): string {
const now = new Date();
const variables: BranchTemplateVariables = {
prefix: branchPrefix,
entityType,
entityNumber,
timestamp: `${now.getFullYear()}${String(now.getMonth() + 1).padStart(2, "0")}${String(now.getDate()).padStart(2, "0")}-${String(now.getHours()).padStart(2, "0")}${String(now.getMinutes()).padStart(2, "0")}`,
sha: sha?.substring(0, 8), // First 8 characters of SHA
label: label || entityType, // Fall back to entityType if no label
description: title ? extractDescription(title) : undefined,
};
if (template?.trim()) {
const branchName = applyBranchTemplate(template, variables);
// Some templates could produce an empty result; validate before using it
if (branchName.trim().length > 0) return branchName;
console.log(
`Branch template '${template}' generated empty result, falling back to default format`,
);
}
const branchName = `${branchPrefix}${entityType}-${entityNumber}-${variables.timestamp}`;
// Kubernetes compatible: lowercase, max 50 chars, alphanumeric and hyphens only
return branchName.toLowerCase().substring(0, 50);
}
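An illustrative call using the `{{variable}}` syntax above (values made up; import path illustrative): the SHA is trimmed to 8 characters, the title becomes a 5-word kebab-case description, and a custom template is neither lowercased nor truncated.

```typescript
import { generateBranchName } from "./branch-template"; // illustrative path

const name = generateBranchName(
  "{{prefix}}{{label}}/{{description}}-{{entityNumber}}",
  "claude/",                  // branchPrefix
  "issue",                    // entityType
  123,                        // entityNumber
  "abcdef123456",             // sha -> "abcdef12"
  "bug",                      // label
  "Fix login bug with OAuth", // title -> "fix-login-bug-with-oauth"
);
console.log(name); // -> "claude/bug/fix-login-bug-with-oauth-123"
```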

View File

@@ -0,0 +1,32 @@
/**
* Extracts the user's request from a trigger comment.
*
* Given a comment like "@claude /review-pr please check the auth module",
* this extracts "/review-pr please check the auth module".
*
* @param commentBody - The full comment body containing the trigger phrase
* @param triggerPhrase - The trigger phrase (e.g., "@claude")
* @returns The user's request (text after the trigger phrase), or null if not found
*/
export function extractUserRequest(
commentBody: string | undefined,
triggerPhrase: string,
): string | null {
if (!commentBody) {
return null;
}
// Use string operations instead of regex for better performance and security
// (avoids potential ReDoS with large comment bodies)
const triggerIndex = commentBody
.toLowerCase()
.indexOf(triggerPhrase.toLowerCase());
if (triggerIndex === -1) {
return null;
}
const afterTrigger = commentBody
.substring(triggerIndex + triggerPhrase.length)
.trim();
return afterTrigger || null;
}
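Usage sketch (import path illustrative): the match is case-insensitive and everything after the trigger phrase is returned, trimmed; with nothing after the trigger the result is null.

```typescript
import { extractUserRequest } from "./extract-user-request"; // illustrative path

console.log(extractUserRequest("Hey team, @Claude can you review this?", "@claude"));
// -> "can you review this?"

console.log(extractUserRequest("@claude", "@claude"));
// -> null (nothing after the trigger)

console.log(extractUserRequest("No trigger here", "@claude"));
// -> null
```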

View File

@@ -0,0 +1,247 @@
#!/usr/bin/env bun
import { describe, it, expect } from "bun:test";
import {
applyBranchTemplate,
generateBranchName,
} from "../src/utils/branch-template";
describe("branch template utilities", () => {
describe("applyBranchTemplate", () => {
it("should replace all template variables", () => {
const template =
"{{prefix}}{{entityType}}-{{entityNumber}}-{{timestamp}}";
const variables = {
prefix: "feat/",
entityType: "issue",
entityNumber: 123,
timestamp: "20240301-1430",
sha: "abcd1234",
};
const result = applyBranchTemplate(template, variables);
expect(result).toBe("feat/issue-123-20240301-1430");
});
it("should handle custom templates with multiple variables", () => {
const template =
"{{prefix}}fix/{{entityType}}_{{entityNumber}}_{{timestamp}}_{{sha}}";
const variables = {
prefix: "claude-",
entityType: "pr",
entityNumber: 456,
timestamp: "20240301-1430",
sha: "abcd1234",
};
const result = applyBranchTemplate(template, variables);
expect(result).toBe("claude-fix/pr_456_20240301-1430_abcd1234");
});
it("should handle templates with missing variables gracefully", () => {
const template = "{{prefix}}{{entityType}}-{{missing}}-{{entityNumber}}";
const variables = {
prefix: "feat/",
entityType: "issue",
entityNumber: 123,
timestamp: "20240301-1430",
};
const result = applyBranchTemplate(template, variables);
expect(result).toBe("feat/issue-{{missing}}-123");
});
});
describe("generateBranchName", () => {
it("should use custom template when provided", () => {
const template = "{{prefix}}custom-{{entityType}}_{{entityNumber}}";
const result = generateBranchName(template, "feature/", "issue", 123);
expect(result).toBe("feature/custom-issue_123");
});
it("should use default format when template is empty", () => {
const result = generateBranchName("", "claude/", "issue", 123);
expect(result).toMatch(/^claude\/issue-123-\d{8}-\d{4}$/);
});
it("should use default format when template is undefined", () => {
const result = generateBranchName(undefined, "claude/", "pr", 456);
expect(result).toMatch(/^claude\/pr-456-\d{8}-\d{4}$/);
});
it("should preserve custom template formatting (no automatic lowercase/truncation)", () => {
const template = "{{prefix}}UPPERCASE_Branch-Name_{{entityNumber}}";
const result = generateBranchName(template, "Feature/", "issue", 123);
expect(result).toBe("Feature/UPPERCASE_Branch-Name_123");
});
it("should not truncate custom template results", () => {
const template =
"{{prefix}}very-long-branch-name-that-exceeds-the-maximum-allowed-length-{{entityNumber}}";
const result = generateBranchName(template, "feature/", "issue", 123);
expect(result).toBe(
"feature/very-long-branch-name-that-exceeds-the-maximum-allowed-length-123",
);
});
it("should apply Kubernetes-compatible transformations to default template only", () => {
const result = generateBranchName(undefined, "Feature/", "issue", 123);
expect(result).toMatch(/^feature\/issue-123-\d{8}-\d{4}$/);
expect(result.length).toBeLessThanOrEqual(50);
});
it("should handle SHA in template", () => {
const template = "{{prefix}}{{entityType}}-{{entityNumber}}-{{sha}}";
const result = generateBranchName(
template,
"fix/",
"pr",
789,
"abcdef123456",
);
expect(result).toBe("fix/pr-789-abcdef12");
});
it("should use label in template when provided", () => {
const template = "{{prefix}}{{label}}/{{entityNumber}}";
const result = generateBranchName(
template,
"feature/",
"issue",
123,
undefined,
"bug",
);
expect(result).toBe("feature/bug/123");
});
it("should fallback to entityType when label template is used but no label provided", () => {
const template = "{{prefix}}{{label}}-{{entityNumber}}";
const result = generateBranchName(template, "fix/", "pr", 456);
expect(result).toBe("fix/pr-456");
});
it("should handle template with both label and entityType", () => {
const template = "{{prefix}}{{label}}-{{entityType}}_{{entityNumber}}";
const result = generateBranchName(
template,
"dev/",
"issue",
789,
undefined,
"enhancement",
);
expect(result).toBe("dev/enhancement-issue_789");
});
it("should use description in template when provided", () => {
const template = "{{prefix}}{{description}}/{{entityNumber}}";
const result = generateBranchName(
template,
"feature/",
"issue",
123,
undefined,
undefined,
"Fix login bug with OAuth",
);
expect(result).toBe("feature/fix-login-bug-with-oauth/123");
});
it("should handle template with multiple variables including description", () => {
const template =
"{{prefix}}{{label}}/{{description}}-{{entityType}}_{{entityNumber}}";
const result = generateBranchName(
template,
"dev/",
"issue",
456,
undefined,
"bug",
"User authentication fails completely",
);
expect(result).toBe(
"dev/bug/user-authentication-fails-completely-issue_456",
);
});
it("should handle description with special characters in template", () => {
const template = "{{prefix}}{{description}}-{{entityNumber}}";
const result = generateBranchName(
template,
"fix/",
"pr",
789,
undefined,
undefined,
"Add: User Registration & Email Validation",
);
expect(result).toBe("fix/add-user-registration-email-789");
});
it("should truncate descriptions to exactly 5 words", () => {
const result = generateBranchName(
"{{prefix}}{{description}}/{{entityNumber}}",
"feature/",
"issue",
999,
undefined,
undefined,
"This is a very long title with many more than five words in it",
);
expect(result).toBe("feature/this-is-a-very-long/999");
});
it("should handle empty description in template", () => {
const template = "{{prefix}}{{description}}-{{entityNumber}}";
const result = generateBranchName(
template,
"test/",
"issue",
101,
undefined,
undefined,
"",
);
expect(result).toBe("test/-101");
});
it("should fallback to default format when template produces empty result", () => {
const template = "{{description}}"; // Will be empty if no title provided
const result = generateBranchName(template, "claude/", "issue", 123);
expect(result).toMatch(/^claude\/issue-123-\d{8}-\d{4}$/);
expect(result.length).toBeLessThanOrEqual(50);
});
it("should fallback to default format when template produces only whitespace", () => {
const template = " {{description}} "; // Will be " " if description is empty
const result = generateBranchName(
template,
"fix/",
"pr",
456,
undefined,
undefined,
"",
);
expect(result).toMatch(/^fix\/pr-456-\d{8}-\d{4}$/);
expect(result.length).toBeLessThanOrEqual(50);
});
});
});

View File

@@ -61,6 +61,7 @@ describe("generatePrompt", () => {
body: "This is a test PR",
author: { login: "testuser" },
state: "OPEN",
labels: { nodes: [] },
createdAt: "2023-01-01T00:00:00Z",
additions: 15,
deletions: 5,
@@ -475,6 +476,7 @@ describe("generatePrompt", () => {
body: "The login form is not working",
author: { login: "testuser" },
state: "OPEN",
labels: { nodes: [] },
createdAt: "2023-01-01T00:00:00Z",
comments: {
nodes: [],

View File

@@ -1,6 +1,7 @@
import { describe, expect, it, jest } from "bun:test";
import {
extractTriggerTimestamp,
extractOriginalTitle,
fetchGitHubData,
filterCommentsToTriggerTime,
filterReviewsToTriggerTime,
@@ -9,6 +10,7 @@ import {
import {
createMockContext,
mockIssueCommentContext,
mockPullRequestCommentContext,
mockPullRequestReviewContext,
mockPullRequestReviewCommentContext,
mockPullRequestOpenedContext,
@@ -63,6 +65,47 @@ describe("extractTriggerTimestamp", () => {
});
});
describe("extractOriginalTitle", () => {
it("should extract title from IssueCommentEvent on PR", () => {
const title = extractOriginalTitle(mockPullRequestCommentContext);
expect(title).toBe("Fix: Memory leak in user service");
});
it("should extract title from PullRequestReviewEvent", () => {
const title = extractOriginalTitle(mockPullRequestReviewContext);
expect(title).toBe("Refactor: Improve error handling in API layer");
});
it("should extract title from PullRequestReviewCommentEvent", () => {
const title = extractOriginalTitle(mockPullRequestReviewCommentContext);
expect(title).toBe("Performance: Optimize search algorithm");
});
it("should extract title from pull_request event", () => {
const title = extractOriginalTitle(mockPullRequestOpenedContext);
expect(title).toBe("Feature: Add user authentication");
});
it("should extract title from issues event", () => {
const title = extractOriginalTitle(mockIssueOpenedContext);
expect(title).toBe("Bug: Application crashes on startup");
});
it("should return undefined for event without title", () => {
const context = createMockContext({
eventName: "issue_comment",
payload: {
comment: {
id: 123,
body: "test",
},
} as any,
});
const title = extractOriginalTitle(context);
expect(title).toBeUndefined();
});
});
describe("filterCommentsToTriggerTime", () => {
const createMockComment = (
createdAt: string,
@@ -945,4 +988,115 @@ describe("fetchGitHubData integration with time filtering", () => {
);
expect(hasPrBodyInMap).toBe(false);
});
it("should use originalTitle when provided instead of fetched title", async () => {
const mockOctokits = {
graphql: jest.fn().mockResolvedValue({
repository: {
pullRequest: {
number: 123,
title: "Fetched Title From GraphQL",
body: "PR body",
author: { login: "author" },
createdAt: "2024-01-15T10:00:00Z",
additions: 10,
deletions: 5,
state: "OPEN",
commits: { totalCount: 1, nodes: [] },
files: { nodes: [] },
comments: { nodes: [] },
reviews: { nodes: [] },
},
},
user: { login: "trigger-user" },
}),
rest: jest.fn() as any,
};
const result = await fetchGitHubData({
octokits: mockOctokits as any,
repository: "test-owner/test-repo",
prNumber: "123",
isPR: true,
triggerUsername: "trigger-user",
originalTitle: "Original Title From Webhook",
});
expect(result.contextData.title).toBe("Original Title From Webhook");
});
it("should use fetched title when originalTitle is not provided", async () => {
const mockOctokits = {
graphql: jest.fn().mockResolvedValue({
repository: {
pullRequest: {
number: 123,
title: "Fetched Title From GraphQL",
body: "PR body",
author: { login: "author" },
createdAt: "2024-01-15T10:00:00Z",
additions: 10,
deletions: 5,
state: "OPEN",
commits: { totalCount: 1, nodes: [] },
files: { nodes: [] },
comments: { nodes: [] },
reviews: { nodes: [] },
},
},
user: { login: "trigger-user" },
}),
rest: jest.fn() as any,
};
const result = await fetchGitHubData({
octokits: mockOctokits as any,
repository: "test-owner/test-repo",
prNumber: "123",
isPR: true,
triggerUsername: "trigger-user",
});
expect(result.contextData.title).toBe("Fetched Title From GraphQL");
});
it("should use original title from webhook even if title was edited after trigger", async () => {
const mockOctokits = {
graphql: jest.fn().mockResolvedValue({
repository: {
pullRequest: {
number: 123,
title: "Edited Title (from GraphQL)",
body: "PR body",
author: { login: "author" },
createdAt: "2024-01-15T10:00:00Z",
lastEditedAt: "2024-01-15T12:30:00Z", // Edited after trigger
additions: 10,
deletions: 5,
state: "OPEN",
commits: { totalCount: 1, nodes: [] },
files: { nodes: [] },
comments: { nodes: [] },
reviews: { nodes: [] },
},
},
user: { login: "trigger-user" },
}),
rest: jest.fn() as any,
};
const result = await fetchGitHubData({
octokits: mockOctokits as any,
repository: "test-owner/test-repo",
prNumber: "123",
isPR: true,
triggerUsername: "trigger-user",
triggerTime: "2024-01-15T12:00:00Z",
originalTitle: "Original Title (from webhook at trigger time)",
});
expect(result.contextData.title).toBe(
"Original Title (from webhook at trigger time)",
);
});
});

View File

@@ -28,6 +28,9 @@ describe("formatContext", () => {
additions: 50,
deletions: 30,
state: "OPEN",
labels: {
nodes: [],
},
commits: {
totalCount: 3,
nodes: [],
@@ -63,6 +66,9 @@ Changed Files: 2 files`,
author: { login: "test-user" },
createdAt: "2023-01-01T00:00:00Z",
state: "OPEN",
labels: {
nodes: [],
},
comments: {
nodes: [],
},

View File

@@ -0,0 +1,77 @@
import { describe, test, expect } from "bun:test";
import { extractUserRequest } from "../src/utils/extract-user-request";
describe("extractUserRequest", () => {
test("extracts text after @claude trigger", () => {
expect(extractUserRequest("@claude /review-pr", "@claude")).toBe(
"/review-pr",
);
});
test("extracts slash command with arguments", () => {
expect(
extractUserRequest(
"@claude /review-pr please check the auth module",
"@claude",
),
).toBe("/review-pr please check the auth module");
});
test("handles trigger phrase with extra whitespace", () => {
expect(extractUserRequest("@claude /review-pr", "@claude")).toBe(
"/review-pr",
);
});
test("handles trigger phrase at start of multiline comment", () => {
const comment = `@claude /review-pr
Please review this PR carefully.
Focus on security issues.`;
expect(extractUserRequest(comment, "@claude")).toBe(
`/review-pr
Please review this PR carefully.
Focus on security issues.`,
);
});
test("handles trigger phrase in middle of text", () => {
expect(
extractUserRequest("Hey team, @claude can you review this?", "@claude"),
).toBe("can you review this?");
});
test("returns null for empty comment body", () => {
expect(extractUserRequest("", "@claude")).toBeNull();
});
test("returns null for undefined comment body", () => {
expect(extractUserRequest(undefined, "@claude")).toBeNull();
});
test("returns null when trigger phrase not found", () => {
expect(extractUserRequest("Please review this PR", "@claude")).toBeNull();
});
test("returns null when only trigger phrase with no request", () => {
expect(extractUserRequest("@claude", "@claude")).toBeNull();
});
test("handles custom trigger phrase", () => {
expect(extractUserRequest("/claude help me", "/claude")).toBe("help me");
});
test("handles trigger phrase with special regex characters", () => {
expect(
extractUserRequest("@claude[bot] do something", "@claude[bot]"),
).toBe("do something");
});
test("is case insensitive", () => {
expect(extractUserRequest("@CLAUDE /review-pr", "@claude")).toBe(
"/review-pr",
);
expect(extractUserRequest("@Claude /review-pr", "@claude")).toBe(
"/review-pr",
);
});
});

View File

@@ -0,0 +1,214 @@
import { describe, expect, it, beforeAll, afterAll } from "bun:test";
import { validatePathWithinRepo } from "../src/mcp/path-validation";
import { resolve } from "path";
import { mkdir, writeFile, symlink, rm, realpath } from "fs/promises";
import { tmpdir } from "os";
describe("validatePathWithinRepo", () => {
// Use a real temp directory for tests that need filesystem access
let testDir: string;
let repoRoot: string;
let outsideDir: string;
// Real paths after symlink resolution (e.g., /tmp -> /private/tmp on macOS)
let realRepoRoot: string;
beforeAll(async () => {
// Create test directory structure
testDir = resolve(tmpdir(), `path-validation-test-${Date.now()}`);
repoRoot = resolve(testDir, "repo");
outsideDir = resolve(testDir, "outside");
await mkdir(repoRoot, { recursive: true });
await mkdir(resolve(repoRoot, "src"), { recursive: true });
await mkdir(outsideDir, { recursive: true });
// Create test files
await writeFile(resolve(repoRoot, "file.txt"), "inside repo");
await writeFile(resolve(repoRoot, "src", "main.js"), "console.log('hi')");
await writeFile(resolve(outsideDir, "secret.txt"), "sensitive data");
// Get real paths after symlink resolution
realRepoRoot = await realpath(repoRoot);
});
afterAll(async () => {
// Cleanup
await rm(testDir, { recursive: true, force: true });
});
describe("valid paths", () => {
it("should accept simple relative paths", async () => {
const result = await validatePathWithinRepo("file.txt", repoRoot);
expect(result).toBe(resolve(realRepoRoot, "file.txt"));
});
it("should accept nested relative paths", async () => {
const result = await validatePathWithinRepo("src/main.js", repoRoot);
expect(result).toBe(resolve(realRepoRoot, "src/main.js"));
});
it("should accept paths with single dot segments", async () => {
const result = await validatePathWithinRepo("./src/main.js", repoRoot);
expect(result).toBe(resolve(realRepoRoot, "src/main.js"));
});
it("should accept paths that use .. but resolve inside repo", async () => {
// src/../file.txt resolves to file.txt which is still inside repo
const result = await validatePathWithinRepo("src/../file.txt", repoRoot);
expect(result).toBe(resolve(realRepoRoot, "file.txt"));
});
it("should accept absolute paths within the repo root", async () => {
const absolutePath = resolve(repoRoot, "file.txt");
const result = await validatePathWithinRepo(absolutePath, repoRoot);
expect(result).toBe(resolve(realRepoRoot, "file.txt"));
});
it("should accept the repo root itself", async () => {
const result = await validatePathWithinRepo(".", repoRoot);
expect(result).toBe(realRepoRoot);
});
it("should handle new files (non-existent) in valid directories", async () => {
const result = await validatePathWithinRepo("src/newfile.js", repoRoot);
// For non-existent files, we validate the parent but return the initial path
// (can't realpath a file that doesn't exist yet)
expect(result).toBe(resolve(repoRoot, "src/newfile.js"));
});
});
describe("path traversal attacks", () => {
it("should reject simple parent directory traversal", async () => {
await expect(
validatePathWithinRepo("../outside/secret.txt", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
});
it("should reject deeply nested parent directory traversal", async () => {
await expect(
validatePathWithinRepo("../../../etc/passwd", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
});
it("should reject traversal hidden within path", async () => {
await expect(
validatePathWithinRepo("src/../../outside/secret.txt", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
});
it("should reject traversal at the end of path", async () => {
await expect(
validatePathWithinRepo("src/../..", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
});
it("should reject absolute paths outside the repo root", async () => {
await expect(
validatePathWithinRepo("/etc/passwd", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
});
it("should reject absolute paths to sibling directories", async () => {
await expect(
validatePathWithinRepo(resolve(outsideDir, "secret.txt"), repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
});
});
describe("symlink attacks", () => {
it("should reject symlinks pointing outside the repo", async () => {
// Create a symlink inside the repo that points to a file outside
const symlinkPath = resolve(repoRoot, "evil-link");
await symlink(resolve(outsideDir, "secret.txt"), symlinkPath);
try {
// The symlink path looks like it's inside the repo, but points outside
await expect(
validatePathWithinRepo("evil-link", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
} finally {
await rm(symlinkPath, { force: true });
}
});
it("should reject symlinks to parent directories", async () => {
// Create a symlink to the parent directory
const symlinkPath = resolve(repoRoot, "parent-link");
await symlink(testDir, symlinkPath);
try {
await expect(
validatePathWithinRepo("parent-link/outside/secret.txt", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
} finally {
await rm(symlinkPath, { force: true });
}
});
it("should accept symlinks that resolve within the repo", async () => {
// Create a symlink inside the repo that points to another file inside
const symlinkPath = resolve(repoRoot, "good-link");
await symlink(resolve(repoRoot, "file.txt"), symlinkPath);
try {
const result = await validatePathWithinRepo("good-link", repoRoot);
// Should resolve to the actual file location
expect(result).toBe(resolve(realRepoRoot, "file.txt"));
} finally {
await rm(symlinkPath, { force: true });
}
});
it("should reject directory symlinks that escape the repo", async () => {
// Create a symlink to outside directory
const symlinkPath = resolve(repoRoot, "escape-dir");
await symlink(outsideDir, symlinkPath);
try {
await expect(
validatePathWithinRepo("escape-dir/secret.txt", repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
} finally {
await rm(symlinkPath, { force: true });
}
});
});
describe("edge cases", () => {
it("should handle empty path (current directory)", async () => {
const result = await validatePathWithinRepo("", repoRoot);
expect(result).toBe(realRepoRoot);
});
it("should handle paths with multiple consecutive slashes", async () => {
const result = await validatePathWithinRepo("src//main.js", repoRoot);
expect(result).toBe(resolve(realRepoRoot, "src/main.js"));
});
it("should handle paths with trailing slashes", async () => {
const result = await validatePathWithinRepo("src/", repoRoot);
expect(result).toBe(resolve(realRepoRoot, "src"));
});
it("should reject prefix attack (repo root as prefix but not parent)", async () => {
// Create a sibling directory with repo name as prefix
const evilDir = repoRoot + "-evil";
await mkdir(evilDir, { recursive: true });
await writeFile(resolve(evilDir, "file.txt"), "evil");
try {
await expect(
validatePathWithinRepo(resolve(evilDir, "file.txt"), repoRoot),
).rejects.toThrow(/resolves outside the repository root/);
} finally {
await rm(evilDir, { recursive: true, force: true });
}
});
it("should throw error for non-existent repo root", async () => {
await expect(
validatePathWithinRepo("file.txt", "/nonexistent/repo"),
).rejects.toThrow(/does not exist/);
});
});
});

View File

@@ -32,11 +32,13 @@ describe("prepareMcpConfig", () => {
branchPrefix: "",
useStickyComment: false,
useCommitSigning: false,
sshSigningKey: "",
botId: String(CLAUDE_APP_BOT_ID),
botName: CLAUDE_BOT_LOGIN,
allowedBots: "",
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
},
};

View File

@@ -20,11 +20,13 @@ const defaultInputs = {
branchPrefix: "claude/",
useStickyComment: false,
useCommitSigning: false,
sshSigningKey: "",
botId: String(CLAUDE_APP_BOT_ID),
botName: CLAUDE_BOT_LOGIN,
allowedBots: "",
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
};
const defaultRepository = {

View File

@@ -145,12 +145,12 @@ describe("Agent Mode", () => {
users: {
getAuthenticated: mock(() =>
Promise.resolve({
data: { login: "test-user", id: 12345 },
data: { login: "test-user", id: 12345, type: "User" },
}),
),
getByUsername: mock(() =>
Promise.resolve({
data: { login: "test-user", id: 12345 },
data: { login: "test-user", id: 12345, type: "User" },
}),
),
},
@@ -187,6 +187,65 @@ describe("Agent Mode", () => {
process.env.GITHUB_REF_NAME = originalRefName;
});
test("prepare method rejects bot actors without allowed_bots", async () => {
const contextWithPrompts = createMockAutomationContext({
eventName: "workflow_dispatch",
});
contextWithPrompts.actor = "claude[bot]";
contextWithPrompts.inputs.allowedBots = "";
const mockOctokit = {
rest: {
users: {
getByUsername: mock(() =>
Promise.resolve({
data: { login: "claude[bot]", id: 12345, type: "Bot" },
}),
),
},
},
} as any;
await expect(
agentMode.prepare({
context: contextWithPrompts,
octokit: mockOctokit,
githubToken: "test-token",
}),
).rejects.toThrow(
"Workflow initiated by non-human actor: claude (type: Bot)",
);
});
test("prepare method allows bot actors when in allowed_bots list", async () => {
const contextWithPrompts = createMockAutomationContext({
eventName: "workflow_dispatch",
});
contextWithPrompts.actor = "dependabot[bot]";
contextWithPrompts.inputs.allowedBots = "dependabot";
const mockOctokit = {
rest: {
users: {
getByUsername: mock(() =>
Promise.resolve({
data: { login: "dependabot[bot]", id: 12345, type: "Bot" },
}),
),
},
},
} as any;
// Should not throw - bot is in allowed list
await expect(
agentMode.prepare({
context: contextWithPrompts,
octokit: mockOctokit,
githubToken: "test-token",
}),
).resolves.toBeDefined();
});
test("prepare method creates prompt file with correct content", async () => {
const contextWithPrompts = createMockAutomationContext({
eventName: "workflow_dispatch",
@@ -199,12 +258,12 @@ describe("Agent Mode", () => {
users: {
getAuthenticated: mock(() =>
Promise.resolve({
data: { login: "test-user", id: 12345 },
data: { login: "test-user", id: 12345, type: "User" },
}),
),
getByUsername: mock(() =>
Promise.resolve({
data: { login: "test-user", id: 12345 },
data: { login: "test-user", id: 12345, type: "User" },
}),
),
},

View File

@@ -20,11 +20,13 @@ describe("detectMode with enhanced routing", () => {
branchPrefix: "claude/",
useStickyComment: false,
useCommitSigning: false,
sshSigningKey: "",
botId: "123456",
botName: "claude-bot",
allowedBots: "",
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
},
};

View File

@@ -35,12 +35,44 @@ describe("parseAllowedTools", () => {
expect(parseAllowedTools("")).toEqual([]);
});
test("handles duplicate --allowedTools flags", () => {
test("handles --allowedTools followed by another --allowedTools flag", () => {
const args = "--allowedTools --allowedTools mcp__github__*";
// Should not match the first one since the value is another flag
// The second --allowedTools is consumed as a value of the first, then skipped.
// This is an edge case with malformed input - returns empty.
expect(parseAllowedTools(args)).toEqual([]);
});
test("parses multiple separate --allowed-tools flags", () => {
const args =
"--allowed-tools 'mcp__context7__*' --allowed-tools 'Read,Glob' --allowed-tools 'mcp__github_inline_comment__*'";
expect(parseAllowedTools(args)).toEqual([
"mcp__context7__*",
"Read",
"Glob",
"mcp__github_inline_comment__*",
]);
});
test("parses multiple --allowed-tools flags on separate lines", () => {
const args = `--model 'claude-haiku'
--allowed-tools 'mcp__context7__*'
--allowed-tools 'Read,Glob,Grep'
--allowed-tools 'mcp__github_inline_comment__create_inline_comment'`;
expect(parseAllowedTools(args)).toEqual([
"mcp__context7__*",
"Read",
"Glob",
"Grep",
"mcp__github_inline_comment__create_inline_comment",
]);
});
test("deduplicates tools from multiple flags", () => {
const args =
"--allowed-tools 'Read,Glob' --allowed-tools 'Glob,Grep' --allowed-tools 'Read'";
expect(parseAllowedTools(args)).toEqual(["Read", "Glob", "Grep"]);
});
test("handles typo --alloedTools", () => {
const args = "--alloedTools mcp__github__*";
expect(parseAllowedTools(args)).toEqual([]);

View File

@@ -68,11 +68,13 @@ describe("checkWritePermissions", () => {
branchPrefix: "claude/",
useStickyComment: false,
useCommitSigning: false,
sshSigningKey: "",
botId: String(CLAUDE_APP_BOT_ID),
botName: CLAUDE_BOT_LOGIN,
allowedBots: "",
allowedNonWriteUsers: "",
trackProgress: false,
includeFixLinks: true,
},
});

View File

@@ -87,6 +87,7 @@ describe("pull_request_target event support", () => {
},
comments: { nodes: [] },
reviews: { nodes: [] },
labels: { nodes: [] },
},
comments: [],
changedFiles: [],

test/ssh-signing.test.ts Normal file
View File

@@ -0,0 +1,291 @@
#!/usr/bin/env bun
import {
describe,
test,
expect,
afterEach,
beforeAll,
afterAll,
} from "bun:test";
import { mkdir, writeFile, rm, readFile, stat } from "fs/promises";
import { join } from "path";
import { tmpdir } from "os";
describe("SSH Signing", () => {
// Use a temp directory for tests
const testTmpDir = join(tmpdir(), "claude-ssh-signing-test");
const testSshDir = join(testTmpDir, ".ssh");
const testKeyPath = join(testSshDir, "claude_signing_key");
const testKey =
"-----BEGIN OPENSSH PRIVATE KEY-----\ntest-key-content\n-----END OPENSSH PRIVATE KEY-----";
beforeAll(async () => {
await mkdir(testTmpDir, { recursive: true });
});
afterAll(async () => {
await rm(testTmpDir, { recursive: true, force: true });
});
afterEach(async () => {
// Clean up test key if it exists
try {
await rm(testKeyPath, { force: true });
} catch {
// Ignore cleanup errors
}
});
describe("setupSshSigning file operations", () => {
test("should write key file atomically with correct permissions", async () => {
// Create the directory with secure permissions (same as setupSshSigning does)
await mkdir(testSshDir, { recursive: true, mode: 0o700 });
// Write key atomically with proper permissions (same as setupSshSigning does)
await writeFile(testKeyPath, testKey, { mode: 0o600 });
// Verify key was written
const keyContent = await readFile(testKeyPath, "utf-8");
expect(keyContent).toBe(testKey);
// Verify permissions (0o600 = 384 in decimal for permission bits only)
const stats = await stat(testKeyPath);
const permissions = stats.mode & 0o777; // Get only permission bits
expect(permissions).toBe(0o600);
});
test("should normalize key to have trailing newline", async () => {
// ssh-keygen requires a trailing newline to parse the key
const keyWithoutNewline =
"-----BEGIN OPENSSH PRIVATE KEY-----\ntest-key-content\n-----END OPENSSH PRIVATE KEY-----";
const keyWithNewline = keyWithoutNewline + "\n";
// Create directory
await mkdir(testSshDir, { recursive: true, mode: 0o700 });
// Normalize the key (same logic as setupSshSigning)
const normalizedKey = keyWithoutNewline.endsWith("\n")
? keyWithoutNewline
: keyWithoutNewline + "\n";
await writeFile(testKeyPath, normalizedKey, { mode: 0o600 });
// Verify the written key ends with newline
const keyContent = await readFile(testKeyPath, "utf-8");
expect(keyContent).toBe(keyWithNewline);
expect(keyContent.endsWith("\n")).toBe(true);
});
test("should not add extra newline if key already has one", async () => {
const keyWithNewline =
"-----BEGIN OPENSSH PRIVATE KEY-----\ntest-key-content\n-----END OPENSSH PRIVATE KEY-----\n";
await mkdir(testSshDir, { recursive: true, mode: 0o700 });
// Normalize the key (same logic as setupSshSigning)
const normalizedKey = keyWithNewline.endsWith("\n")
? keyWithNewline
: keyWithNewline + "\n";
await writeFile(testKeyPath, normalizedKey, { mode: 0o600 });
// Verify no double newline
const keyContent = await readFile(testKeyPath, "utf-8");
expect(keyContent).toBe(keyWithNewline);
expect(keyContent.endsWith("\n\n")).toBe(false);
});
test("should create .ssh directory with secure permissions", async () => {
// Clean up first
await rm(testSshDir, { recursive: true, force: true });
// Create directory with secure permissions (same as setupSshSigning does)
await mkdir(testSshDir, { recursive: true, mode: 0o700 });
// Verify directory exists
const dirStats = await stat(testSshDir);
expect(dirStats.isDirectory()).toBe(true);
// Verify directory permissions
const dirPermissions = dirStats.mode & 0o777;
expect(dirPermissions).toBe(0o700);
});
});
describe("setupSshSigning validation", () => {
test("should reject empty SSH key", () => {
const emptyKey = "";
expect(() => {
if (!emptyKey.trim()) {
throw new Error("SSH signing key cannot be empty");
}
}).toThrow("SSH signing key cannot be empty");
});
test("should reject whitespace-only SSH key", () => {
const whitespaceKey = " \n\t ";
expect(() => {
if (!whitespaceKey.trim()) {
throw new Error("SSH signing key cannot be empty");
}
}).toThrow("SSH signing key cannot be empty");
});
test("should reject invalid SSH key format", () => {
const invalidKey = "not a valid key";
expect(() => {
if (
!invalidKey.includes("BEGIN") ||
!invalidKey.includes("PRIVATE KEY")
) {
throw new Error("Invalid SSH private key format");
}
}).toThrow("Invalid SSH private key format");
});
test("should accept valid SSH key format", () => {
const validKey =
"-----BEGIN OPENSSH PRIVATE KEY-----\nkey-content\n-----END OPENSSH PRIVATE KEY-----";
expect(() => {
if (!validKey.trim()) {
throw new Error("SSH signing key cannot be empty");
}
if (!validKey.includes("BEGIN") || !validKey.includes("PRIVATE KEY")) {
throw new Error("Invalid SSH private key format");
}
}).not.toThrow();
});
});
describe("cleanupSshSigning file operations", () => {
test("should remove the signing key file", async () => {
// Create the key file first
await mkdir(testSshDir, { recursive: true });
await writeFile(testKeyPath, testKey, { mode: 0o600 });
// Verify it exists
const existsBefore = await stat(testKeyPath)
.then(() => true)
.catch(() => false);
expect(existsBefore).toBe(true);
// Clean up (same operation as cleanupSshSigning)
await rm(testKeyPath, { force: true });
// Verify it's gone
const existsAfter = await stat(testKeyPath)
.then(() => true)
.catch(() => false);
expect(existsAfter).toBe(false);
});
test("should not throw if key file does not exist", async () => {
// Make sure file doesn't exist
await rm(testKeyPath, { force: true });
// Should not throw (rm with force: true doesn't throw on missing files)
await expect(rm(testKeyPath, { force: true })).resolves.toBeUndefined();
});
});
});
describe("SSH Signing Mode Detection", () => {
test("sshSigningKey should take precedence over useCommitSigning", () => {
// When both are set, SSH signing takes precedence
const sshSigningKey = "test-key";
const useCommitSigning = true;
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
expect(useSshSigning).toBe(true);
expect(useApiCommitSigning).toBe(false);
});
test("useCommitSigning should work when sshSigningKey is not set", () => {
const sshSigningKey = "";
const useCommitSigning = true;
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
expect(useSshSigning).toBe(false);
expect(useApiCommitSigning).toBe(true);
});
test("neither signing method when both are false/empty", () => {
const sshSigningKey = "";
const useCommitSigning = false;
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
expect(useSshSigning).toBe(false);
expect(useApiCommitSigning).toBe(false);
});
test("git CLI tools should be used when sshSigningKey is set", () => {
// This tests the logic in tag mode for tool selection
const sshSigningKey = "test-key";
const useCommitSigning = true; // Even if this is true
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
// When SSH signing is used, we should use git CLI (not API)
const shouldUseGitCli = !useApiCommitSigning;
expect(shouldUseGitCli).toBe(true);
});
test("MCP file ops should only be used with API commit signing", () => {
// Case 1: API commit signing
{
const sshSigningKey = "";
const useCommitSigning = true;
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
expect(useApiCommitSigning).toBe(true);
}
// Case 2: SSH signing (should NOT use API)
{
const sshSigningKey = "test-key";
const useCommitSigning = true;
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
expect(useApiCommitSigning).toBe(false);
}
// Case 3: No signing (should NOT use API)
{
const sshSigningKey = "";
const useCommitSigning = false;
const useSshSigning = !!sshSigningKey;
const useApiCommitSigning = useCommitSigning && !useSshSigning;
expect(useApiCommitSigning).toBe(false);
}
});
});
describe("Context parsing", () => {
test("sshSigningKey should be parsed from environment", () => {
// Test that context.ts parses SSH_SIGNING_KEY correctly
const testCases = [
{ env: "test-key", expected: "test-key" },
{ env: "", expected: "" },
{ env: undefined, expected: "" },
];
for (const { env, expected } of testCases) {
const result = env || "";
expect(result).toBe(expected);
}
});
});
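Every mode-detection case above derives the same two booleans from the same two inputs. Pulled out as a standalone helper, that selection logic reads as in the sketch below; the function name and return type are illustrative, since the tests inline this logic rather than importing a helper.

```typescript
// Sketch of the signing-mode selection encoded inline in the tests above.
// The function name and return shape are illustrative assumptions.
type SigningMode = {
  useSshSigning: boolean; // sign commits locally via git CLI with an SSH key
  useApiCommitSigning: boolean; // sign commits via API-backed MCP file ops
};

export function resolveSigningMode(
  sshSigningKey: string,
  useCommitSigning: boolean,
): SigningMode {
  // An SSH key always wins: API commit signing applies only when no key is set.
  const useSshSigning = !!sshSigningKey;
  const useApiCommitSigning = useCommitSigning && !useSshSigning;
  return { useSshSigning, useApiCommitSigning };
}

// Example: both inputs set -> SSH signing takes precedence
// resolveSigningMode("test-key", true)
//   => { useSshSigning: true, useApiCommitSigning: false }
```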


@@ -0,0 +1,201 @@
import { describe, expect, it } from "bun:test";
import { validateBranchName } from "../src/github/operations/branch";
describe("validateBranchName", () => {
describe("valid branch names", () => {
it("should accept simple alphanumeric names", () => {
expect(() => validateBranchName("main")).not.toThrow();
expect(() => validateBranchName("feature123")).not.toThrow();
expect(() => validateBranchName("Branch1")).not.toThrow();
});
it("should accept names with hyphens", () => {
expect(() => validateBranchName("feature-branch")).not.toThrow();
expect(() => validateBranchName("fix-bug-123")).not.toThrow();
});
it("should accept names with underscores", () => {
expect(() => validateBranchName("feature_branch")).not.toThrow();
expect(() => validateBranchName("fix_bug_123")).not.toThrow();
});
it("should accept names with forward slashes", () => {
expect(() => validateBranchName("feature/new-thing")).not.toThrow();
expect(() => validateBranchName("user/feature/branch")).not.toThrow();
});
it("should accept names with periods", () => {
expect(() => validateBranchName("v1.0.0")).not.toThrow();
expect(() => validateBranchName("release.1.2.3")).not.toThrow();
});
it("should accept typical branch name formats", () => {
expect(() =>
validateBranchName("claude/issue-123-20250101-1234"),
).not.toThrow();
expect(() => validateBranchName("refs/heads/main")).not.toThrow();
expect(() => validateBranchName("bugfix/JIRA-1234")).not.toThrow();
});
});
describe("command injection attempts", () => {
it("should reject shell command substitution with $()", () => {
expect(() => validateBranchName("$(whoami)")).toThrow();
expect(() => validateBranchName("branch-$(rm -rf /)")).toThrow();
expect(() => validateBranchName("test$(cat /etc/passwd)")).toThrow();
});
it("should reject shell command substitution with backticks", () => {
expect(() => validateBranchName("`whoami`")).toThrow();
expect(() => validateBranchName("branch-`rm -rf /`")).toThrow();
});
it("should reject command chaining with semicolons", () => {
expect(() => validateBranchName("branch; rm -rf /")).toThrow();
expect(() => validateBranchName("test;whoami")).toThrow();
});
it("should reject command chaining with &&", () => {
expect(() => validateBranchName("branch && rm -rf /")).toThrow();
expect(() => validateBranchName("test&&whoami")).toThrow();
});
it("should reject command chaining with ||", () => {
expect(() => validateBranchName("branch || rm -rf /")).toThrow();
expect(() => validateBranchName("test||whoami")).toThrow();
});
it("should reject pipe characters", () => {
expect(() => validateBranchName("branch | cat")).toThrow();
expect(() => validateBranchName("test|grep password")).toThrow();
});
it("should reject redirection operators", () => {
expect(() => validateBranchName("branch > /etc/passwd")).toThrow();
expect(() => validateBranchName("branch < input")).toThrow();
expect(() => validateBranchName("branch >> file")).toThrow();
});
});
describe("option injection attempts", () => {
it("should reject branch names starting with dash", () => {
expect(() => validateBranchName("-x")).toThrow(
/cannot start with a dash/,
);
expect(() => validateBranchName("--help")).toThrow(
/cannot start with a dash/,
);
expect(() => validateBranchName("-")).toThrow(/cannot start with a dash/);
expect(() => validateBranchName("--version")).toThrow(
/cannot start with a dash/,
);
expect(() => validateBranchName("-rf")).toThrow(
/cannot start with a dash/,
);
});
});
describe("path traversal attempts", () => {
it("should reject double dot sequences", () => {
expect(() => validateBranchName("../../../etc")).toThrow();
expect(() => validateBranchName("branch/../secret")).toThrow(/'\.\.'$/);
expect(() => validateBranchName("a..b")).toThrow(/'\.\.'$/);
});
});
describe("git-specific invalid patterns", () => {
it("should reject @{ sequence", () => {
expect(() => validateBranchName("branch@{1}")).toThrow(/@{/);
expect(() => validateBranchName("HEAD@{yesterday}")).toThrow(/@{/);
});
it("should reject .lock suffix", () => {
expect(() => validateBranchName("branch.lock")).toThrow(/\.lock/);
expect(() => validateBranchName("feature.lock")).toThrow(/\.lock/);
});
it("should reject consecutive slashes", () => {
expect(() => validateBranchName("feature//branch")).toThrow(
/consecutive slashes/,
);
expect(() => validateBranchName("a//b//c")).toThrow(
/consecutive slashes/,
);
});
it("should reject trailing slashes", () => {
expect(() => validateBranchName("feature/")).toThrow(
/cannot end with a slash/,
);
expect(() => validateBranchName("branch/")).toThrow(
/cannot end with a slash/,
);
});
it("should reject leading periods", () => {
expect(() => validateBranchName(".hidden")).toThrow();
});
it("should reject trailing periods", () => {
expect(() => validateBranchName("branch.")).toThrow(
/cannot start or end with a period/,
);
});
it("should reject special git refspec characters", () => {
expect(() => validateBranchName("branch~1")).toThrow();
expect(() => validateBranchName("branch^2")).toThrow();
expect(() => validateBranchName("branch:ref")).toThrow();
expect(() => validateBranchName("branch?")).toThrow();
expect(() => validateBranchName("branch*")).toThrow();
expect(() => validateBranchName("branch[0]")).toThrow();
expect(() => validateBranchName("branch\\path")).toThrow();
});
});
describe("control characters and special characters", () => {
it("should reject null bytes", () => {
expect(() => validateBranchName("branch\x00name")).toThrow();
});
it("should reject other control characters", () => {
expect(() => validateBranchName("branch\x01name")).toThrow();
expect(() => validateBranchName("branch\x1Fname")).toThrow();
expect(() => validateBranchName("branch\x7Fname")).toThrow();
});
it("should reject spaces", () => {
expect(() => validateBranchName("branch name")).toThrow();
expect(() => validateBranchName("feature branch")).toThrow();
});
it("should reject newlines and tabs", () => {
expect(() => validateBranchName("branch\nname")).toThrow();
expect(() => validateBranchName("branch\tname")).toThrow();
});
});
describe("empty and whitespace", () => {
it("should reject empty strings", () => {
expect(() => validateBranchName("")).toThrow(/cannot be empty/);
});
it("should reject whitespace-only strings", () => {
expect(() => validateBranchName(" ")).toThrow();
expect(() => validateBranchName("\t\n")).toThrow();
});
});
describe("edge cases", () => {
it("should accept single alphanumeric character", () => {
expect(() => validateBranchName("a")).not.toThrow();
expect(() => validateBranchName("1")).not.toThrow();
});
it("should reject single special characters", () => {
expect(() => validateBranchName(".")).toThrow();
expect(() => validateBranchName("/")).toThrow();
expect(() => validateBranchName("-")).toThrow();
});
});
});
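The actual `validateBranchName` lives in `src/github/operations/branch` and is not shown in this diff. The following is a minimal sketch consistent with the assertions above; the error messages and the exact character allowlist are assumptions chosen to satisfy those tests, not the repository's implementation.

```typescript
// Minimal sketch consistent with the tests above. The real implementation
// may differ; messages and the allowlist here are illustrative assumptions.
export function validateBranchName(branchName: string): void {
  if (!branchName || !branchName.trim()) {
    throw new Error("Branch name cannot be empty");
  }
  if (branchName.startsWith("-")) {
    throw new Error("Branch name cannot start with a dash");
  }
  if (branchName.includes("..")) {
    throw new Error("Branch name cannot contain '..'");
  }
  if (branchName.includes("@{")) {
    throw new Error("Branch name cannot contain '@{'");
  }
  if (branchName.endsWith(".lock")) {
    throw new Error("Branch name cannot end with '.lock'");
  }
  if (branchName.includes("//")) {
    throw new Error("Branch name cannot contain consecutive slashes");
  }
  if (branchName.endsWith("/")) {
    throw new Error("Branch name cannot end with a slash");
  }
  if (branchName.startsWith(".") || branchName.endsWith(".")) {
    throw new Error("Branch name cannot start or end with a period");
  }
  // Allowlist: letters, digits, '.', '_', '/', '-' only. Shell metacharacters,
  // whitespace, control bytes, and git refspec characters (~ ^ : ? * [ \) fail here.
  if (!/^[A-Za-z0-9._\/-]+$/.test(branchName)) {
    throw new Error("Branch name contains invalid characters");
  }
}
```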