Mirror of https://github.com/anthropics/claude-code-action.git, synced 2026-01-23 06:54:13 +08:00
Compare commits: ashwin/tes...v1.0.32 (165 commits)
Commit list: 165 commits, shown by abbreviated SHA only (2804b4174b through 791fcb9fd1); the author, date, and message columns were empty in the mirrored compare view.

.claude/agents/code-quality-reviewer.md (new file, 61 lines)
@@ -0,0 +1,61 @@
---
name: code-quality-reviewer
description: Use this agent when you need to review code for quality, maintainability, and adherence to best practices. Examples:\n\n- After implementing a new feature or function:\n user: 'I've just written a function to process user authentication'\n assistant: 'Let me use the code-quality-reviewer agent to analyze the authentication function for code quality and best practices'\n\n- When refactoring existing code:\n user: 'I've refactored the payment processing module'\n assistant: 'I'll launch the code-quality-reviewer agent to ensure the refactored code maintains high quality standards'\n\n- Before committing significant changes:\n user: 'I've completed the API endpoint implementations'\n assistant: 'Let me use the code-quality-reviewer agent to review the endpoints for proper error handling and maintainability'\n\n- When uncertain about code quality:\n user: 'Can you check if this validation logic is robust enough?'\n assistant: 'I'll use the code-quality-reviewer agent to thoroughly analyze the validation logic'
tools: Glob, Grep, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
---

You are an expert code quality reviewer with deep expertise in software engineering best practices, clean code principles, and maintainable architecture. Your role is to provide thorough, constructive code reviews focused on quality, readability, and long-term maintainability.

When reviewing code, you will:

**Clean Code Analysis:**

- Evaluate naming conventions for clarity and descriptiveness
- Assess function and method sizes for single responsibility adherence
- Check for code duplication and suggest DRY improvements
- Identify overly complex logic that could be simplified
- Verify proper separation of concerns

**Error Handling & Edge Cases:**

- Identify missing error handling for potential failure points
- Evaluate the robustness of input validation
- Check for proper handling of null/undefined values
- Assess edge case coverage (empty arrays, boundary conditions, etc.)
- Verify appropriate use of try-catch blocks and error propagation

**Readability & Maintainability:**

- Evaluate code structure and organization
- Check for appropriate use of comments (avoiding over-commenting obvious code)
- Assess the clarity of control flow
- Identify magic numbers or strings that should be constants
- Verify consistent code style and formatting

**TypeScript-Specific Considerations** (when applicable):

- Prefer `type` over `interface` as per project standards
- Avoid unnecessary use of underscores for unused variables
- Ensure proper type safety and avoid `any` types when possible

**Best Practices:**

- Evaluate adherence to SOLID principles
- Check for proper use of design patterns where appropriate
- Assess performance implications of implementation choices
- Verify security considerations (input sanitization, sensitive data handling)

**Review Structure:**
Provide your analysis in this format:

- Start with a brief summary of overall code quality
- Organize findings by severity (critical, important, minor)
- Provide specific examples with line references when possible
- Suggest concrete improvements with code examples
- Highlight positive aspects and good practices observed
- End with actionable recommendations prioritized by impact

Be constructive and educational in your feedback. When identifying issues, explain why they matter and how they impact code quality. Focus on teaching principles that will improve future code, not just fixing current issues.

If the code is well-written, acknowledge this and provide suggestions for potential enhancements rather than forcing criticism. Always maintain a professional, helpful tone that encourages continuous improvement.

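For illustration only (hypothetical code, not part of this diff): a minimal TypeScript sketch of the TypeScript-specific checks the agent above describes, contrasting an `interface` that leaks `any` with a precise `type` alias.

```ts
// Hypothetical example illustrating the "prefer `type` over `interface`"
// and "avoid `any`" checks; the shapes below are invented for the sketch.

// Pattern the agent would flag: `interface` plus `any` loses type safety.
interface UserLoose {
  id: string;
  settings: any;
}

// Pattern the agent would prefer: `type` aliases with precise shapes.
type UserSettings = {
  theme: "light" | "dark";
  notifications: boolean;
};

type User = {
  id: string;
  settings: UserSettings;
};

function applyTheme(user: User): string {
  // The union type narrows here without casts or runtime checks.
  return user.settings.theme === "dark" ? "#000" : "#fff";
}
```
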
.claude/agents/documentation-accuracy-reviewer.md (new file, 56 lines)
@@ -0,0 +1,56 @@
---
name: documentation-accuracy-reviewer
description: Use this agent when you need to verify that code documentation is accurate, complete, and up-to-date. Specifically use this agent after: implementing new features that require documentation updates, modifying existing APIs or functions, completing a logical chunk of code that needs documentation review, or when preparing code for review/release. Examples: 1) User: 'I just added a new authentication module with several public methods' → Assistant: 'Let me use the documentation-accuracy-reviewer agent to verify the documentation is complete and accurate for your new authentication module.' 2) User: 'Please review the documentation for the payment processing functions I just wrote' → Assistant: 'I'll launch the documentation-accuracy-reviewer agent to check your payment processing documentation.' 3) After user completes a feature implementation → Assistant: 'Now that the feature is complete, I'll use the documentation-accuracy-reviewer agent to ensure all documentation is accurate and up-to-date.'
tools: Glob, Grep, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
---

You are an expert technical documentation reviewer with deep expertise in code documentation standards, API documentation best practices, and technical writing. Your primary responsibility is to ensure that code documentation accurately reflects implementation details and provides clear, useful information to developers.

When reviewing documentation, you will:

**Code Documentation Analysis:**

- Verify that all public functions, methods, and classes have appropriate documentation comments
- Check that parameter descriptions match actual parameter types and purposes
- Ensure return value documentation accurately describes what the code returns
- Validate that examples in documentation actually work with the current implementation
- Confirm that edge cases and error conditions are properly documented
- Check for outdated comments that reference removed or modified functionality

**README Verification:**

- Cross-reference README content with actual implemented features
- Verify installation instructions are current and complete
- Check that usage examples reflect the current API
- Ensure feature lists accurately represent available functionality
- Validate that configuration options documented in README match actual code
- Identify any new features missing from README documentation

**API Documentation Review:**

- Verify endpoint descriptions match actual implementation
- Check request/response examples for accuracy
- Ensure authentication requirements are correctly documented
- Validate parameter types, constraints, and default values
- Confirm error response documentation matches actual error handling
- Check that deprecated endpoints are properly marked

**Quality Standards:**

- Flag documentation that is vague, ambiguous, or misleading
- Identify missing documentation for public interfaces
- Note inconsistencies between documentation and implementation
- Suggest improvements for clarity and completeness
- Ensure documentation follows project-specific standards from CLAUDE.md

**Review Structure:**
Provide your analysis in this format:

- Start with a summary of overall documentation quality
- List specific issues found, categorized by type (code comments, README, API docs)
- For each issue, provide: file/location, current state, recommended fix
- Prioritize issues by severity (critical inaccuracies vs. minor improvements)
- End with actionable recommendations

You will be thorough but focused, identifying genuine documentation issues rather than stylistic preferences. When documentation is accurate and complete, acknowledge this clearly. If you need to examine specific files or code sections to verify documentation accuracy, request access to those resources. Always consider the target audience (developers using the code) and ensure documentation serves their needs effectively.

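As a hypothetical illustration of what "parameter descriptions match actual parameter types and purposes" looks like in practice (this function is invented, not from the repository): a TSDoc comment whose `@param` and `@returns` descriptions track the real signature and behavior.

```ts
/**
 * Parses a retry count from an environment-style string.
 *
 * @param raw - The raw string value, e.g. "3"; may be undefined when the variable is unset.
 * @param fallback - Value returned when `raw` is missing or not a non-negative integer.
 * @returns The parsed non-negative integer, or `fallback`.
 */
function parseRetryCount(raw: string | undefined, fallback: number): number {
  const parsed = Number.parseInt(raw ?? "", 10);
  return Number.isNaN(parsed) || parsed < 0 ? fallback : parsed;
}
```
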
.claude/agents/performance-reviewer.md (new file, 53 lines)
@@ -0,0 +1,53 @@
---
name: performance-reviewer
description: Use this agent when you need to analyze code for performance issues, bottlenecks, and resource efficiency. Examples: After implementing database queries or API calls, when optimizing existing features, after writing data processing logic, when investigating slow application behavior, or when completing any code that involves loops, network requests, or memory-intensive operations.
tools: Glob, Grep, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
---

You are an elite performance optimization specialist with deep expertise in identifying and resolving performance bottlenecks across all layers of software systems. Your mission is to conduct thorough performance reviews that uncover inefficiencies and provide actionable optimization recommendations.

When reviewing code, you will:

**Performance Bottleneck Analysis:**

- Examine algorithmic complexity and identify O(n²) or worse operations that could be optimized
- Detect unnecessary computations, redundant operations, or repeated work
- Identify blocking operations that could benefit from asynchronous execution
- Review loop structures for inefficient iterations or nested loops that could be flattened
- Check for premature optimization vs. legitimate performance concerns

**Network Query Efficiency:**

- Analyze database queries for N+1 problems and missing indexes
- Review API calls for batching opportunities and unnecessary round trips
- Check for proper use of pagination, filtering, and projection in data fetching
- Identify opportunities for caching, memoization, or request deduplication
- Examine connection pooling and resource reuse patterns
- Verify proper error handling that doesn't cause retry storms

**Memory and Resource Management:**

- Detect potential memory leaks from unclosed connections, event listeners, or circular references
- Review object lifecycle management and garbage collection implications
- Identify excessive memory allocation or large object creation in loops
- Check for proper cleanup in cleanup functions, destructors, or finally blocks
- Analyze data structure choices for memory efficiency
- Review file handles, database connections, and other resource cleanup

**Review Structure:**
Provide your analysis in this format:

1. **Critical Issues**: Immediate performance problems requiring attention
2. **Optimization Opportunities**: Improvements that would yield measurable benefits
3. **Best Practice Recommendations**: Preventive measures for future performance
4. **Code Examples**: Specific before/after snippets demonstrating improvements

For each issue identified:

- Specify the exact location (file, function, line numbers)
- Explain the performance impact with estimated complexity or resource usage
- Provide concrete, implementable solutions
- Prioritize recommendations by impact vs. effort

If code appears performant, confirm this explicitly and note any particularly well-optimized sections. Always consider the specific runtime environment and scale requirements when making recommendations.

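A hypothetical TypeScript sketch (not code from this repository) of one before/after pair this agent asks for: an O(n·m) membership check rewritten as an O(n + m) Set lookup.

```ts
// Pattern the agent would flag: nested includes() makes this O(n * m).
function findMissingSlow(required: string[], present: string[]): string[] {
  return required.filter((name) => !present.includes(name));
}

// Suggested rewrite: build a Set once, then each lookup is O(1) on average.
function findMissingFast(required: string[], present: string[]): string[] {
  const presentSet = new Set(present);
  return required.filter((name) => !presentSet.has(name));
}
```
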
.claude/agents/security-code-reviewer.md (new file, 59 lines)
@@ -0,0 +1,59 @@
---
name: security-code-reviewer
description: Use this agent when you need to review code for security vulnerabilities, input validation issues, or authentication/authorization flaws. Examples: After implementing authentication logic, when adding user input handling, after writing API endpoints that process external data, or when integrating third-party libraries. The agent should be called proactively after completing security-sensitive code sections like login systems, data validation layers, or permission checks.
tools: Glob, Grep, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
---

You are an elite security code reviewer with deep expertise in application security, threat modeling, and secure coding practices. Your mission is to identify and prevent security vulnerabilities before they reach production.

When reviewing code, you will:

**Security Vulnerability Assessment**

- Systematically scan for OWASP Top 10 vulnerabilities (injection flaws, broken authentication, sensitive data exposure, XXE, broken access control, security misconfiguration, XSS, insecure deserialization, using components with known vulnerabilities, insufficient logging)
- Identify potential SQL injection, NoSQL injection, and command injection vulnerabilities
- Check for cross-site scripting (XSS) vulnerabilities in any user-facing output
- Look for cross-site request forgery (CSRF) protection gaps
- Examine cryptographic implementations for weak algorithms or improper key management
- Identify potential race conditions and time-of-check-time-of-use (TOCTOU) vulnerabilities

**Input Validation and Sanitization**

- Verify all user inputs are properly validated against expected formats and ranges
- Ensure input sanitization occurs at appropriate boundaries (client-side validation is supplementary, never primary)
- Check for proper encoding when outputting user data
- Validate that file uploads have proper type checking, size limits, and content validation
- Ensure API parameters are validated for type, format, and business logic constraints
- Look for potential path traversal vulnerabilities in file operations

**Authentication and Authorization Review**

- Verify authentication mechanisms use secure, industry-standard approaches
- Check for proper session management (secure cookies, appropriate timeouts, session invalidation)
- Ensure passwords are properly hashed using modern algorithms (bcrypt, Argon2, PBKDF2)
- Validate that authorization checks occur at every protected resource access
- Look for privilege escalation opportunities
- Check for insecure direct object references (IDOR)
- Verify proper implementation of role-based or attribute-based access control

**Analysis Methodology**

1. First, identify the security context and attack surface of the code
2. Map data flows from untrusted sources to sensitive operations
3. Examine each security-critical operation for proper controls
4. Consider both common vulnerabilities and context-specific threats
5. Evaluate defense-in-depth measures

**Review Structure:**
Provide findings in order of severity (Critical, High, Medium, Low, Informational):

- **Vulnerability Description**: Clear explanation of the security issue
- **Location**: Specific file, function, and line numbers
- **Impact**: Potential consequences if exploited
- **Remediation**: Concrete steps to fix the vulnerability with code examples when helpful
- **References**: Relevant CWE numbers or security standards

If no security issues are found, provide a brief summary confirming the review was completed and highlighting any positive security practices observed.

Always consider the principle of least privilege, defense in depth, and fail securely. When uncertain about a potential vulnerability, err on the side of caution and flag it for further investigation.

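For illustration (hypothetical code, not part of the change set): the kind of path-traversal guard the "Input Validation and Sanitization" checklist above asks for, using Node's `path` module to confine file access to a fixed base directory.

```ts
import path from "node:path";

// Resolve the requested file against a fixed base directory and reject
// anything that escapes it (e.g. "../../etc/passwd" or an absolute path).
function resolveUploadPath(baseDir: string, requested: string): string {
  const resolved = path.resolve(baseDir, requested);
  const normalizedBase = path.resolve(baseDir) + path.sep;
  if (!resolved.startsWith(normalizedBase)) {
    throw new Error(`Rejected path outside upload directory: ${requested}`);
  }
  return resolved;
}
```
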
.claude/agents/test-coverage-reviewer.md (new file, 52 lines)
@@ -0,0 +1,52 @@
---
name: test-coverage-reviewer
description: Use this agent when you need to review testing implementation and coverage. Examples: After writing a new feature implementation, use this agent to verify test coverage. When refactoring code, use this agent to ensure tests still adequately cover all scenarios. After completing a module, use this agent to identify missing test cases and edge conditions.
tools: Glob, Grep, Read, WebFetch, TodoWrite, WebSearch, BashOutput, KillBash
model: inherit
---

You are an expert QA engineer and testing specialist with deep expertise in test-driven development, code coverage analysis, and quality assurance best practices. Your role is to conduct thorough reviews of test implementations to ensure comprehensive coverage and robust quality validation.

When reviewing code for testing, you will:

**Analyze Test Coverage:**

- Examine the ratio of test code to production code
- Identify untested code paths, branches, and edge cases
- Verify that all public APIs and critical functions have corresponding tests
- Check for coverage of error handling and exception scenarios
- Assess coverage of boundary conditions and input validation

**Evaluate Test Quality:**

- Review test structure and organization (arrange-act-assert pattern)
- Verify tests are isolated, independent, and deterministic
- Check for proper use of mocks, stubs, and test doubles
- Ensure tests have clear, descriptive names that document behavior
- Validate that assertions are specific and meaningful
- Identify brittle tests that may break with minor refactoring

**Identify Missing Test Scenarios:**

- List untested edge cases and boundary conditions
- Highlight missing integration test scenarios
- Point out uncovered error paths and failure modes
- Suggest performance and load testing opportunities
- Recommend security-related test cases where applicable

**Provide Actionable Feedback:**

- Prioritize findings by risk and impact
- Suggest specific test cases to add with example implementations
- Recommend refactoring opportunities to improve testability
- Identify anti-patterns and suggest corrections

**Review Structure:**
Provide your analysis in this format:

- **Coverage Analysis**: Summary of current test coverage with specific gaps
- **Quality Assessment**: Evaluation of existing test quality with examples
- **Missing Scenarios**: Prioritized list of untested cases
- **Recommendations**: Concrete actions to improve test suite

Be thorough but practical - focus on tests that provide real value and catch actual bugs. Consider the testing pyramid and ensure appropriate balance between unit, integration, and end-to-end tests.

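A hypothetical example of the arrange-act-assert structure and descriptive naming this agent checks for, written with `bun:test` since the repository's CI runs `bun test` (the function under test is invented so the snippet is self-contained).

```ts
import { describe, expect, test } from "bun:test";

// Hypothetical unit under test, defined inline to keep the example runnable.
function clampPercentage(value: number): number {
  return Math.min(100, Math.max(0, value));
}

describe("clampPercentage", () => {
  test("clamps values above the upper boundary to 100", () => {
    // Arrange
    const input = 250;
    // Act
    const result = clampPercentage(input);
    // Assert
    expect(result).toBe(100);
  });

  test("returns in-range values unchanged", () => {
    expect(clampPercentage(42)).toBe(42);
  });
});
```
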
.claude/commands/label-issue.md (new file, 60 lines)
@@ -0,0 +1,60 @@
---
allowed-tools: Bash(gh label list:*),Bash(gh issue view:*),Bash(gh issue edit:*),Bash(gh search:*)
description: Apply labels to GitHub issues
---

You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

Issue Information:

- REPO: ${{ github.repository }}
- ISSUE_NUMBER: ${{ github.event.issue.number }}

TASK OVERVIEW:

1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

2. Next, use gh commands to get context about the issue:

   - Use `gh issue view ${{ github.event.issue.number }}` to retrieve the current issue's details
   - Use `gh search issues` to find similar issues that might provide context for proper categorization
   - You have access to these Bash commands:
     - Bash(gh label list:\*) - to get available labels
     - Bash(gh issue view:\*) - to view issue details
     - Bash(gh issue edit:\*) - to apply labels to the issue
     - Bash(gh search:\*) - to search for similar issues

3. Analyze the issue content, considering:

   - The issue title and description
   - The type of issue (bug report, feature request, question, etc.)
   - Technical areas mentioned
   - Severity or priority indicators
   - User impact
   - Components affected

4. Select appropriate labels from the available labels list provided above:

   - Choose labels that accurately reflect the issue's nature
   - Be specific but comprehensive
   - IMPORTANT: Add a priority label (P1, P2, or P3) based on the label descriptions from gh label list
   - Consider platform labels (android, ios) if applicable
   - If you find similar issues using gh search, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

5. Apply the selected labels:
   - Use `gh issue edit` to apply your selected labels
   - DO NOT post any comments explaining your decision
   - DO NOT communicate directly with users
   - If no labels are clearly applicable, do not apply any labels

IMPORTANT GUIDELINES:

- Be thorough in your analysis
- Only select labels from the provided list above
- DO NOT post any comments to the issue
- Your ONLY action should be to apply labels using gh issue edit
- It's okay to not add any labels if none are clearly applicable

---

.claude/commands/review-pr.md (new file, 20 lines)
@@ -0,0 +1,20 @@
---
allowed-tools: Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*)
description: Review a pull request
---

Perform a comprehensive code review using subagents for key areas:

- code-quality-reviewer
- performance-reviewer
- test-coverage-reviewer
- documentation-accuracy-reviewer
- security-code-reviewer

Instruct each to only provide noteworthy feedback. Once they finish, review the feedback and post only the feedback that you also deem noteworthy.

Provide feedback using inline comments for specific issues.
Use top-level comments for general observations or praise.
Keep feedback concise.

---

.claude/settings.json (new file, 15 lines)
@@ -0,0 +1,15 @@
{
  "hooks": {
    "PostToolUse": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bun run format"
          }
        ],
        "matcher": "Edit|Write|MultiEdit"
      }
    ]
  }
}

.github/workflows/bump-claude-code-version.yml (deleted, 132 lines)
@@ -1,132 +0,0 @@
name: Bump Claude Code Version

on:
  repository_dispatch:
    types: [bump_claude_code_version]
  workflow_dispatch:
    inputs:
      version:
        description: "Claude Code version to bump to"
        required: true
        type: string

permissions:
  contents: write

jobs:
  bump-version:
    name: Bump Claude Code Version
    runs-on: ubuntu-latest
    environment: release
    timeout-minutes: 5
    steps:
      - name: Checkout repository
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4
        with:
          token: ${{ secrets.RELEASE_PAT }}
          fetch-depth: 0

      - name: Get version from event payload
        id: get_version
        run: |
          # Get version from either repository_dispatch or workflow_dispatch
          if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
            NEW_VERSION="${CLIENT_PAYLOAD_VERSION}"
          else
            NEW_VERSION="${INPUT_VERSION}"
          fi

          # Sanitize the version to avoid issues enabled by problematic characters
          NEW_VERSION=$(echo "$NEW_VERSION" | tr -d '`;$(){}[]|&<>' | tr -s ' ' '-')

          if [ -z "$NEW_VERSION" ]; then
            echo "Error: version not provided"
            exit 1
          fi
          echo "NEW_VERSION=$NEW_VERSION" >> $GITHUB_ENV
          echo "new_version=$NEW_VERSION" >> $GITHUB_OUTPUT
        env:
          INPUT_VERSION: ${{ inputs.version }}
          CLIENT_PAYLOAD_VERSION: ${{ github.event.client_payload.version }}

      - name: Create branch and update base-action/action.yml
        run: |
          # Variables
          TIMESTAMP=$(date +'%Y%m%d-%H%M%S')
          BRANCH_NAME="bump-claude-code-${{ env.NEW_VERSION }}-$TIMESTAMP"

          echo "BRANCH_NAME=$BRANCH_NAME" >> $GITHUB_ENV

          # Get the default branch
          DEFAULT_BRANCH=$(gh api repos/${GITHUB_REPOSITORY} --jq '.default_branch')
          echo "DEFAULT_BRANCH=$DEFAULT_BRANCH" >> $GITHUB_ENV

          # Get the latest commit SHA from the default branch
          BASE_SHA=$(gh api repos/${GITHUB_REPOSITORY}/git/refs/heads/$DEFAULT_BRANCH --jq '.object.sha')

          # Create a new branch
          gh api \
            --method POST \
            repos/${GITHUB_REPOSITORY}/git/refs \
            -f ref="refs/heads/$BRANCH_NAME" \
            -f sha="$BASE_SHA"

          # Get the current base-action/action.yml content
          ACTION_CONTENT=$(gh api repos/${GITHUB_REPOSITORY}/contents/base-action/action.yml?ref=$DEFAULT_BRANCH --jq '.content' | base64 -d)

          # Update the Claude Code version in the npm install command
          UPDATED_CONTENT=$(echo "$ACTION_CONTENT" | sed -E "s/(npm install -g @anthropic-ai\/claude-code@)[0-9]+\.[0-9]+\.[0-9]+/\1${{ env.NEW_VERSION }}/")

          # Verify the change would be made
          if ! echo "$UPDATED_CONTENT" | grep -q "@anthropic-ai/claude-code@${{ env.NEW_VERSION }}"; then
            echo "Error: Failed to update Claude Code version in content"
            exit 1
          fi

          # Get the current SHA of base-action/action.yml for the update API call
          FILE_SHA=$(gh api repos/${GITHUB_REPOSITORY}/contents/base-action/action.yml?ref=$DEFAULT_BRANCH --jq '.sha')

          # Create the updated base-action/action.yml content in base64
          echo "$UPDATED_CONTENT" | base64 > action.yml.b64

          # Commit the updated base-action/action.yml via GitHub API
          gh api \
            --method PUT \
            repos/${GITHUB_REPOSITORY}/contents/base-action/action.yml \
            -f message="chore: bump Claude Code version to ${{ env.NEW_VERSION }}" \
            -F content=@action.yml.b64 \
            -f sha="$FILE_SHA" \
            -f branch="$BRANCH_NAME"

          echo "Successfully created branch and updated Claude Code version to ${{ env.NEW_VERSION }}"
        env:
          GH_TOKEN: ${{ secrets.RELEASE_PAT }}
          GITHUB_REPOSITORY: ${{ github.repository }}

      - name: Create Pull Request
        run: |
          # Determine trigger type for PR body
          if [ "${{ github.event_name }}" = "repository_dispatch" ]; then
            TRIGGER_INFO="repository dispatch event"
          else
            TRIGGER_INFO="manual workflow dispatch by @${GITHUB_ACTOR}"
          fi

          # Create PR body with proper YAML escape
          printf -v PR_BODY "## Bump Claude Code to ${{ env.NEW_VERSION }}\n\nThis PR updates the Claude Code version in base-action/action.yml to ${{ env.NEW_VERSION }}.\n\n### Changes\n- Updated Claude Code version from current to \`${{ env.NEW_VERSION }}\`\n\n### Triggered by\n- $TRIGGER_INFO\n\n🤖 This PR was automatically created by the bump-claude-code-version workflow."

          echo "Creating PR with gh pr create command"
          PR_URL=$(gh pr create \
            --repo "${GITHUB_REPOSITORY}" \
            --title "chore: bump Claude Code version to ${{ env.NEW_VERSION }}" \
            --body "$PR_BODY" \
            --base "${DEFAULT_BRANCH}" \
            --head "${BRANCH_NAME}")

          echo "PR created successfully: $PR_URL"
        env:
          GH_TOKEN: ${{ secrets.RELEASE_PAT }}
          GITHUB_REPOSITORY: ${{ github.repository }}
          GITHUB_ACTOR: ${{ github.actor }}
          DEFAULT_BRANCH: ${{ env.DEFAULT_BRANCH }}
          BRANCH_NAME: ${{ env.BRANCH_NAME }}

.github/workflows/ci-all.yml (new file, 37 lines)
@@ -0,0 +1,37 @@
# Orchestrates all CI workflows - runs on PRs, pushes to main, and manual dispatch
# Individual test workflows are called as reusable workflows
name: CI All

on:
  push:
    branches:
      - main
  pull_request:
  workflow_dispatch:

permissions:
  contents: read

jobs:
  ci:
    uses: ./.github/workflows/ci.yml

  test-base-action:
    uses: ./.github/workflows/test-base-action.yml
    secrets: inherit # Required for ANTHROPIC_API_KEY

  test-custom-executables:
    uses: ./.github/workflows/test-custom-executables.yml
    secrets: inherit

  test-mcp-servers:
    uses: ./.github/workflows/test-mcp-servers.yml
    secrets: inherit

  test-settings:
    uses: ./.github/workflows/test-settings.yml
    secrets: inherit

  test-structured-output:
    uses: ./.github/workflows/test-structured-output.yml
    secrets: inherit

.github/workflows/ci.yml (9 lines changed)
@@ -1,15 +1,14 @@
name: CI

on:
  push:
    branches: [main]
  pull_request:
  workflow_call:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@v5

      - uses: oven-sh/setup-bun@v2
        with:
@@ -24,7 +23,7 @@ jobs:
  prettier:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@v5

      - uses: oven-sh/setup-bun@v1
        with:
@@ -39,7 +38,7 @@ jobs:
  typecheck:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@v5

      - uses: oven-sh/setup-bun@v2
        with:

.github/workflows/claude-review.yml (28 lines changed)
@@ -1,33 +1,27 @@
name: Auto review PRs
name: PR Review

on:
  pull_request:
    types: [opened]

jobs:
  auto-review:
  review:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: write
      id-token: write
    runs-on: ubuntu-latest

    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v5
        with:
          fetch-depth: 1

      - name: Auto review PR
        uses: anthropics/claude-code-action@main
      - name: PR Review with Progress Tracking
        uses: anthropics/claude-code-action@v1
        with:
          direct_prompt: |
            Please review this PR. Look at the changes and provide thoughtful feedback on:
            - Code quality and best practices
            - Potential bugs or issues
            - Suggestions for improvements
            - Overall architecture and design decisions
            - Documentation consistency: Verify that README.md and other documentation files are updated to reflect any code changes (especially new inputs, features, or configuration options)

            Be constructive and specific in your feedback. Give inline comments where applicable.
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          allowed_tools: "mcp__github__create_pending_pull_request_review,mcp__github__add_comment_to_pending_review,mcp__github__submit_pending_pull_request_review,mcp__github__get_pull_request_diff"

          prompt: "/review-pr REPO: ${{ github.repository }} PR_NUMBER: ${{ github.event.pull_request.number }}"
          claude_args: |
            --allowedTools "mcp__github_inline_comment__create_inline_comment"

.github/workflows/claude-test.yml (deleted, 38 lines)
@@ -1,38 +0,0 @@
# Test workflow for km-anthropic fork (v1-dev branch)
# This tests the fork implementation, not the main repo
name: Claude Code (Fork Test)

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (
        contains(github.event.issue.body, '@claude') ||
        contains(github.event.issue.title, '@claude')
      ))
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
      id-token: write # Required for OIDC token exchange
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Run Claude Code
        uses: km-anthropic/claude-code-action@v1-dev
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}

.github/workflows/claude.yml (4 lines changed)
@@ -25,7 +25,7 @@ jobs:
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v5
        with:
          fetch-depth: 1

@@ -36,4 +36,4 @@ jobs:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools "Bash(bun install),Bash(bun test:*),Bash(bun run format),Bash(bun typecheck)"
            --model "claude-opus-4-1-20250805"
            --model "claude-opus-4-5"

.github/workflows/issue-triage.yml (91 lines changed)
@@ -14,95 +14,14 @@ jobs:
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        uses: actions/checkout@v5
        with:
          fetch-depth: 0

      - name: Setup GitHub MCP Server
        run: |
          mkdir -p /tmp/mcp-config
          cat > /tmp/mcp-config/mcp-servers.json << 'EOF'
          {
            "mcpServers": {
              "github": {
                "command": "docker",
                "args": [
                  "run",
                  "-i",
                  "--rm",
                  "-e",
                  "GITHUB_PERSONAL_ACCESS_TOKEN",
                  "ghcr.io/github/github-mcp-server:sha-efef8ae"
                ],
                "env": {
                  "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"
                }
              }
            }
          }
          EOF

      - name: Create triage prompt
        run: |
          mkdir -p /tmp/claude-prompts
          cat > /tmp/claude-prompts/triage-prompt.txt << 'EOF'
          You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.

          IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.

          Issue Information:
          - REPO: ${{ github.repository }}
          - ISSUE_NUMBER: ${{ github.event.issue.number }}

          TASK OVERVIEW:

          1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.

          2. Next, use the GitHub tools to get context about the issue:
             - You have access to these tools:
               - mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
               - mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
               - mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
               - mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
               - mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
             - Start by using mcp__github__get_issue to get the issue details

          3. Analyze the issue content, considering:
             - The issue title and description
             - The type of issue (bug report, feature request, question, etc.)
             - Technical areas mentioned
             - Severity or priority indicators
             - User impact
             - Components affected

          4. Select appropriate labels from the available labels list provided above:
             - Choose labels that accurately reflect the issue's nature
             - Be specific but comprehensive
             - Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
             - Consider platform labels (android, ios) if applicable
             - If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.

          5. Apply the selected labels:
             - Use mcp__github__update_issue to apply your selected labels
             - DO NOT post any comments explaining your decision
             - DO NOT communicate directly with users
             - If no labels are clearly applicable, do not apply any labels

          IMPORTANT GUIDELINES:
          - Be thorough in your analysis
          - Only select labels from the provided list above
          - DO NOT post any comments to the issue
          - Your ONLY action should be to apply labels using mcp__github__update_issue
          - It's okay to not add any labels if none are clearly applicable
          EOF

      - name: Run Claude Code for Issue Triage
        uses: anthropics/claude-code-base-action@v1
        uses: anthropics/claude-code-action@main
        with:
          prompt: $(cat /tmp/claude-prompts/triage-prompt.txt)
          prompt: "/label-issue REPO: ${{ github.repository }} ISSUE_NUMBER${{ github.event.issue.number }}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues
            --mcp-config /tmp/mcp-config/mcp-servers.json
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          allowed_non_write_users: "*" # Required for issue triage workflow, if users without repo write access create issues
          github_token: ${{ secrets.GITHUB_TOKEN }}

.github/workflows/release.yml (110 lines changed)
@@ -8,10 +8,23 @@ on:
        required: false
        type: boolean
        default: false
  workflow_run:
    workflows: ["CI All"]
    types:
      - completed
    branches:
      - main

jobs:
  create-release:
    runs-on: ubuntu-latest
    # Run if: manual dispatch OR (CI All succeeded AND commit is a version bump)
    if: |
      github.event_name == 'workflow_dispatch' ||
      (github.event.workflow_run.conclusion == 'success' &&
       github.event.workflow_run.head_branch == 'main' &&
       github.event.workflow_run.event == 'push' &&
       startsWith(github.event.workflow_run.head_commit.message, 'chore: bump Claude Code to'))
    environment: production
    permissions:
      contents: write
@@ -19,7 +32,7 @@ jobs:
      next_version: ${{ steps.next_version.outputs.next_version }}
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v5
        with:
          fetch-depth: 0

@@ -84,14 +97,15 @@ jobs:

  update-major-tag:
    needs: create-release
    if: ${{ !inputs.dry_run }}
    # Skip for dry runs (workflow_run events are never dry runs)
    if: github.event_name == 'workflow_run' || !inputs.dry_run
    runs-on: ubuntu-latest
    environment: production
    permissions:
      contents: write
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        uses: actions/checkout@v5
        with:
          fetch-depth: 0

@@ -109,48 +123,48 @@ jobs:

          echo "Updated $major_version tag to point to $next_version"

  release-base-action:
    needs: create-release
    if: ${{ !inputs.dry_run }}
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Checkout base-action repo
        uses: actions/checkout@v4
        with:
          repository: anthropics/claude-code-base-action
          token: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
          fetch-depth: 0

      # - name: Create and push tag
      #   run: |
      #     next_version="${{ needs.create-release.outputs.next_version }}"

      #     git config user.name "github-actions[bot]"
      #     git config user.email "github-actions[bot]@users.noreply.github.com"

      #     # Create the version tag
      #     git tag -a "$next_version" -m "Release $next_version - synced from claude-code-action"
      #     git push origin "$next_version"

      #     # Update the beta tag
      #     git tag -fa beta -m "Update beta tag to ${next_version}"
      #     git push origin beta --force

      # - name: Create GitHub release
      #   env:
      #     GH_TOKEN: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
      #   run: |
      #     next_version="${{ needs.create-release.outputs.next_version }}"

      #     # Create the release
      #     gh release create "$next_version" \
      #       --repo anthropics/claude-code-base-action \
      #       --title "$next_version" \
      #       --notes "Release $next_version - synced from anthropics/claude-code-action" \
      #       --latest=false

      #     # Update beta release to be latest
      #     gh release edit beta \
      #       --repo anthropics/claude-code-base-action \
      #       --latest
  # release-base-action:
  #   needs: create-release
  #   if: ${{ !inputs.dry_run }}
  #   runs-on: ubuntu-latest
  #   environment: production
  #   steps:
  #     - name: Checkout base-action repo
  #       uses: actions/checkout@v5
  #       with:
  #         repository: anthropics/claude-code-base-action
  #         token: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
  #         fetch-depth: 0
  #
  #     - name: Create and push tag
  #       run: |
  #         next_version="${{ needs.create-release.outputs.next_version }}"
  #
  #         git config user.name "github-actions[bot]"
  #         git config user.email "github-actions[bot]@users.noreply.github.com"
  #
  #         # Create the version tag
  #         git tag -a "$next_version" -m "Release $next_version - synced from claude-code-action"
  #         git push origin "$next_version"
  #
  #         # Update the beta tag
  #         git tag -fa beta -m "Update beta tag to ${next_version}"
  #         git push origin beta --force
  #
  #     - name: Create GitHub release
  #       env:
  #         GH_TOKEN: ${{ secrets.CLAUDE_CODE_BASE_ACTION_PAT }}
  #       run: |
  #         next_version="${{ needs.create-release.outputs.next_version }}"
  #
  #         # Create the release
  #         gh release create "$next_version" \
  #           --repo anthropics/claude-code-base-action \
  #           --title "$next_version" \
  #           --notes "Release $next_version - synced from anthropics/claude-code-action" \
  #           --latest=false
  #
  #         # Update beta release to be latest
  #         gh release edit beta \
  #           --repo anthropics/claude-code-base-action \
  #           --latest

.github/workflows/sync-base-action.yml (4 lines changed)
@@ -94,5 +94,5 @@ jobs:
          echo "✅ Successfully synced \`base-action\` directory to [anthropics/claude-code-base-action](https://github.com/anthropics/claude-code-base-action)" >> $GITHUB_STEP_SUMMARY
          echo "" >> $GITHUB_STEP_SUMMARY
          echo "- **Source commit**: [\`${GITHUB_SHA:0:7}\`](https://github.com/anthropics/claude-code-action/commit/${GITHUB_SHA})" >> $GITHUB_STEP_SUMMARY
          echo "- **Triggered by**: ${{ github.event_name }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Actor**: @${{ github.actor }}" >> $GITHUB_STEP_SUMMARY
          echo "- **Triggered by**: $GITHUB_EVENT_NAME" >> $GITHUB_STEP_SUMMARY
          echo "- **Actor**: @$GITHUB_ACTOR" >> $GITHUB_STEP_SUMMARY

.github/workflows/test-base-action.yml (4 lines changed)
@@ -1,9 +1,6 @@
name: Test Claude Code Action

on:
  push:
    branches:
      - main
  pull_request:
  workflow_dispatch:
    inputs:
@@ -11,6 +8,7 @@ on:
        description: "Test prompt for Claude"
        required: false
        default: "List the files in the current directory starting with 'package'"
  workflow_call:

jobs:
  test-inline-prompt:

.github/workflows/test-custom-executables.yml (changed)
@@ -1,11 +1,9 @@
name: Test Custom Executables

on:
  push:
    branches:
      - main
  pull_request:
  workflow_dispatch:
  workflow_call:

jobs:
  test-custom-executables:

.github/workflows/test-mcp-servers.yml (4 lines changed)
@@ -1,11 +1,9 @@
name: Test MCP Servers

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  workflow_dispatch:
  workflow_call:

jobs:
  test-mcp-integration:

.github/workflows/test-settings.yml (8 lines changed)
@@ -1,11 +1,9 @@
name: Test Settings Feature

on:
  push:
    branches:
      - main
  pull_request:
  workflow_dispatch:
  workflow_call:

jobs:
  test-settings-inline-allow:
@@ -67,7 +65,7 @@ jobs:
        uses: ./base-action
        with:
          prompt: |
            Use Bash to echo "This should not work"
            Run the command `echo $HOME` to check the home directory path
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          settings: |
            {
@@ -163,7 +161,7 @@ jobs:
        uses: ./base-action
        with:
          prompt: |
            Use Bash to echo "This should not work from file"
            Run the command `echo $HOME` to check the home directory path
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          settings: "test-settings.json"

.github/workflows/test-structured-output.yml (new file, 305 lines)
@@ -0,0 +1,305 @@
name: Test Structured Outputs

on:
  pull_request:
  workflow_dispatch:
  workflow_call:

permissions:
  contents: read

jobs:
  test-basic-types:
    name: Test Basic Type Conversions
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

      - name: Test with explicit values
        id: test
        uses: ./base-action
        with:
          prompt: |
            Run this command: echo "test"

            Then return EXACTLY these values:
            - text_field: "hello"
            - number_field: 42
            - boolean_true: true
            - boolean_false: false
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools Bash
            --json-schema '{"type":"object","properties":{"text_field":{"type":"string"},"number_field":{"type":"number"},"boolean_true":{"type":"boolean"},"boolean_false":{"type":"boolean"}},"required":["text_field","number_field","boolean_true","boolean_false"]}'

      - name: Verify outputs
        run: |
          # Parse the structured_output JSON
          OUTPUT='${{ steps.test.outputs.structured_output }}'

          # Test string pass-through
          TEXT_FIELD=$(echo "$OUTPUT" | jq -r '.text_field')
          if [ "$TEXT_FIELD" != "hello" ]; then
            echo "❌ String: expected 'hello', got '$TEXT_FIELD'"
            exit 1
          fi

          # Test number → string conversion
          NUMBER_FIELD=$(echo "$OUTPUT" | jq -r '.number_field')
          if [ "$NUMBER_FIELD" != "42" ]; then
            echo "❌ Number: expected '42', got '$NUMBER_FIELD'"
            exit 1
          fi

          # Test boolean → "true" conversion
          BOOLEAN_TRUE=$(echo "$OUTPUT" | jq -r '.boolean_true')
          if [ "$BOOLEAN_TRUE" != "true" ]; then
            echo "❌ Boolean true: expected 'true', got '$BOOLEAN_TRUE'"
            exit 1
          fi

          # Test boolean → "false" conversion
          BOOLEAN_FALSE=$(echo "$OUTPUT" | jq -r '.boolean_false')
          if [ "$BOOLEAN_FALSE" != "false" ]; then
            echo "❌ Boolean false: expected 'false', got '$BOOLEAN_FALSE'"
            exit 1
          fi

          echo "✅ All basic type conversions correct"

  test-complex-types:
    name: Test Arrays and Objects
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

      - name: Test complex types
        id: test
        uses: ./base-action
        with:
          prompt: |
            Run: echo "ready"

            Return EXACTLY:
            - items: ["apple", "banana", "cherry"]
            - config: {"key": "value", "count": 3}
            - empty_array: []
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools Bash
            --json-schema '{"type":"object","properties":{"items":{"type":"array","items":{"type":"string"}},"config":{"type":"object"},"empty_array":{"type":"array"}},"required":["items","config","empty_array"]}'

      - name: Verify JSON stringification
        run: |
          # Parse the structured_output JSON
          OUTPUT='${{ steps.test.outputs.structured_output }}'

          # Arrays should be JSON stringified
          if ! echo "$OUTPUT" | jq -e '.items | length == 3' > /dev/null; then
            echo "❌ Array not properly formatted"
            echo "$OUTPUT" | jq '.items'
            exit 1
          fi

          # Objects should be JSON stringified
          if ! echo "$OUTPUT" | jq -e '.config.key == "value"' > /dev/null; then
            echo "❌ Object not properly formatted"
            echo "$OUTPUT" | jq '.config'
            exit 1
          fi

          # Empty arrays should work
          if ! echo "$OUTPUT" | jq -e '.empty_array | length == 0' > /dev/null; then
            echo "❌ Empty array not properly formatted"
            echo "$OUTPUT" | jq '.empty_array'
            exit 1
          fi

          echo "✅ All complex types handled correctly"

  test-edge-cases:
    name: Test Edge Cases
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

      - name: Test edge cases
        id: test
        uses: ./base-action
        with:
          prompt: |
            Run: echo "test"

            Return EXACTLY:
            - zero: 0
            - empty_string: ""
            - negative: -5
            - decimal: 3.14
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools Bash
            --json-schema '{"type":"object","properties":{"zero":{"type":"number"},"empty_string":{"type":"string"},"negative":{"type":"number"},"decimal":{"type":"number"}},"required":["zero","empty_string","negative","decimal"]}'

      - name: Verify edge cases
        run: |
          # Parse the structured_output JSON
          OUTPUT='${{ steps.test.outputs.structured_output }}'

          # Zero should be "0", not empty or falsy
          ZERO=$(echo "$OUTPUT" | jq -r '.zero')
          if [ "$ZERO" != "0" ]; then
            echo "❌ Zero: expected '0', got '$ZERO'"
            exit 1
          fi

          # Empty string should be empty (not "null" or missing)
          EMPTY_STRING=$(echo "$OUTPUT" | jq -r '.empty_string')
          if [ "$EMPTY_STRING" != "" ]; then
            echo "❌ Empty string: expected '', got '$EMPTY_STRING'"
            exit 1
          fi

          # Negative numbers should work
          NEGATIVE=$(echo "$OUTPUT" | jq -r '.negative')
          if [ "$NEGATIVE" != "-5" ]; then
            echo "❌ Negative: expected '-5', got '$NEGATIVE'"
            exit 1
          fi

          # Decimals should preserve precision
          DECIMAL=$(echo "$OUTPUT" | jq -r '.decimal')
          if [ "$DECIMAL" != "3.14" ]; then
            echo "❌ Decimal: expected '3.14', got '$DECIMAL'"
            exit 1
          fi

          echo "✅ All edge cases handled correctly"

  test-name-sanitization:
    name: Test Output Name Sanitization
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

      - name: Test special characters in field names
        id: test
        uses: ./base-action
        with:
          prompt: |
            Run: echo "test"
            Return EXACTLY: {test-result: "passed", item_count: 10}
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools Bash
            --json-schema '{"type":"object","properties":{"test-result":{"type":"string"},"item_count":{"type":"number"}},"required":["test-result","item_count"]}'

      - name: Verify sanitized names work
        run: |
          # Parse the structured_output JSON
          OUTPUT='${{ steps.test.outputs.structured_output }}'

          # Hyphens should be preserved in the JSON
          TEST_RESULT=$(echo "$OUTPUT" | jq -r '.["test-result"]')
          if [ "$TEST_RESULT" != "passed" ]; then
            echo "❌ Hyphenated name failed: expected 'passed', got '$TEST_RESULT'"
            exit 1
          fi

          # Underscores should work
          ITEM_COUNT=$(echo "$OUTPUT" | jq -r '.item_count')
          if [ "$ITEM_COUNT" != "10" ]; then
            echo "❌ Underscore name failed: expected '10', got '$ITEM_COUNT'"
            exit 1
          fi

          echo "✅ Name sanitization works"

  test-execution-file-structure:
    name: Test Execution File Format
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4

      - name: Run with structured output
        id: test
        uses: ./base-action
        with:
          prompt: "Run: echo 'complete'. Return: {done: true}"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          claude_args: |
            --allowedTools Bash
            --json-schema '{"type":"object","properties":{"done":{"type":"boolean"}},"required":["done"]}'

      - name: Verify execution file contains structured_output
        run: |
          FILE="${{ steps.test.outputs.execution_file }}"

          # Check file exists
          if [ ! -f "$FILE" ]; then
            echo "❌ Execution file missing"
            exit 1
          fi

          # Check for structured_output field
          if ! jq -e '.[] | select(.type == "result") | .structured_output' "$FILE" > /dev/null; then
            echo "❌ No structured_output in execution file"
            cat "$FILE"
            exit 1
          fi

          # Verify the actual value
          DONE=$(jq -r '.[] | select(.type == "result") | .structured_output.done' "$FILE")
          if [ "$DONE" != "true" ]; then
            echo "❌ Wrong value in execution file"
            exit 1
          fi
echo "✅ Execution file format correct"
|
||||
|
||||
test-summary:
|
||||
name: Summary
|
||||
runs-on: ubuntu-latest
|
||||
needs:
|
||||
- test-basic-types
|
||||
- test-complex-types
|
||||
- test-edge-cases
|
||||
- test-name-sanitization
|
||||
- test-execution-file-structure
|
||||
if: always()
|
||||
steps:
|
||||
- name: Generate Summary
|
||||
run: |
|
||||
echo "# Structured Output Tests (Optimized)" >> $GITHUB_STEP_SUMMARY
|
||||
echo "" >> $GITHUB_STEP_SUMMARY
|
||||
echo "Fast, deterministic tests using explicit prompts" >> $GITHUB_STEP_SUMMARY
|
||||
echo "" >> $GITHUB_STEP_SUMMARY
|
||||
echo "| Test | Result |" >> $GITHUB_STEP_SUMMARY
|
||||
echo "|------|--------|" >> $GITHUB_STEP_SUMMARY
|
||||
echo "| Basic Types | ${{ needs.test-basic-types.result == 'success' && '✅ PASS' || '❌ FAIL' }} |" >> $GITHUB_STEP_SUMMARY
|
||||
echo "| Complex Types | ${{ needs.test-complex-types.result == 'success' && '✅ PASS' || '❌ FAIL' }} |" >> $GITHUB_STEP_SUMMARY
|
||||
echo "| Edge Cases | ${{ needs.test-edge-cases.result == 'success' && '✅ PASS' || '❌ FAIL' }} |" >> $GITHUB_STEP_SUMMARY
|
||||
echo "| Name Sanitization | ${{ needs.test-name-sanitization.result == 'success' && '✅ PASS' || '❌ FAIL' }} |" >> $GITHUB_STEP_SUMMARY
|
||||
echo "| Execution File | ${{ needs.test-execution-file-structure.result == 'success' && '✅ PASS' || '❌ FAIL' }} |" >> $GITHUB_STEP_SUMMARY
|
||||
|
||||
# Check if all passed
|
||||
ALL_PASSED=${{
|
||||
needs.test-basic-types.result == 'success' &&
|
||||
needs.test-complex-types.result == 'success' &&
|
||||
needs.test-edge-cases.result == 'success' &&
|
||||
needs.test-name-sanitization.result == 'success' &&
|
||||
needs.test-execution-file-structure.result == 'success'
|
||||
}}
|
||||
|
||||
if [ "$ALL_PASSED" = "true" ]; then
|
||||
echo "" >> $GITHUB_STEP_SUMMARY
|
||||
echo "## ✅ All Tests Passed" >> $GITHUB_STEP_SUMMARY
|
||||
else
|
||||
echo "" >> $GITHUB_STEP_SUMMARY
|
||||
echo "## ❌ Some Tests Failed" >> $GITHUB_STEP_SUMMARY
|
||||
exit 1
|
||||
fi
|
||||
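Every verify step above consumes `structured_output` the same way a downstream workflow would. As a rough sketch of that pattern outside the test suite — the step id `test` and the `done` field mirror the execution-file test, everything else is illustrative:

      - name: Use structured output
        if: steps.test.outputs.structured_output != ''
        run: |
          # fromJSON() turns the JSON string emitted by the action back into an object
          echo "done=${{ fromJSON(steps.test.outputs.structured_output).done }}"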
@@ -2,7 +2,7 @@

# Claude Code Action

A general-purpose [Claude Code](https://claude.ai/code) action for GitHub PRs and issues that can answer questions and implement code changes. This action intelligently detects when to activate based on your workflow context—whether responding to @claude mentions, issue assignments, or executing automation tasks with explicit prompts. It supports multiple authentication methods including Anthropic direct API, Amazon Bedrock, and Google Vertex AI.
A general-purpose [Claude Code](https://claude.ai/code) action for GitHub PRs and issues that can answer questions and implement code changes. This action intelligently detects when to activate based on your workflow context—whether responding to @claude mentions, issue assignments, or executing automation tasks with explicit prompts. It supports multiple authentication methods including Anthropic direct API, Amazon Bedrock, Google Vertex AI, and Microsoft Foundry.

## Features

@@ -13,6 +13,7 @@ A general-purpose [Claude Code](https://claude.ai/code) action for GitHub PRs an
- 💬 **PR/Issue Integration**: Works seamlessly with GitHub comments and PR reviews
- 🛠️ **Flexible Tool Access**: Access to GitHub APIs and file operations (additional tools can be enabled via configuration)
- 📋 **Progress Tracking**: Visual progress indicators with checkboxes that dynamically update as Claude completes tasks
- 📊 **Structured Outputs**: Get validated JSON results that automatically become GitHub Action outputs for complex automations
- 🏃 **Runs on Your Infrastructure**: The action executes entirely on your own GitHub runner (Anthropic API calls go to your chosen provider)
- ⚙️ **Simplified Configuration**: Unified `prompt` and `claude_args` inputs provide clean, powerful configuration aligned with Claude Code SDK

@@ -29,7 +30,7 @@ This command will guide you through setting up the GitHub app and required secre
**Note**:

- You must be a repository admin to install the GitHub app and add secrets
- This quickstart method is only available for direct Anthropic API users. For AWS Bedrock or Google Vertex AI setup, see [docs/cloud-providers.md](./docs/cloud-providers.md).
- This quickstart method is only available for direct Anthropic API users. For AWS Bedrock, Google Vertex AI, or Microsoft Foundry setup, see [docs/cloud-providers.md](./docs/cloud-providers.md).

## 📚 Solutions & Use Cases

@@ -56,7 +57,7 @@ Each solution includes complete working examples, configuration details, and exp
- [Custom Automations](./docs/custom-automations.md) - Examples of automated workflows and custom prompts
- [Configuration](./docs/configuration.md) - MCP servers, permissions, environment variables, and advanced settings
- [Experimental Features](./docs/experimental.md) - Execution modes and network restrictions
- [Cloud Providers](./docs/cloud-providers.md) - AWS Bedrock and Google Vertex AI setup
- [Cloud Providers](./docs/cloud-providers.md) - AWS Bedrock, Google Vertex AI, and Microsoft Foundry setup
- [Capabilities & Limitations](./docs/capabilities-and-limitations.md) - What Claude can and cannot do
- [Security](./docs/security.md) - Access control, permissions, and commit signing
- [FAQ](./docs/faq.md) - Common questions and troubleshooting

action.yml — 126 changed lines

@@ -23,10 +23,18 @@ inputs:
    description: "The prefix to use for Claude branches (defaults to 'claude/', use 'claude-' for dash format)"
    required: false
    default: "claude/"
  branch_name_template:
    description: "Template for branch naming. Available variables: {{prefix}}, {{entityType}}, {{entityNumber}}, {{timestamp}}, {{sha}}, {{label}}, {{description}}. {{label}} will be first label from the issue/PR, or {{entityType}} as a fallback. {{description}} will be the first 5 words of the issue/PR title in kebab-case. Default: '{{prefix}}{{entityType}}-{{entityNumber}}-{{timestamp}}'"
    required: false
    default: ""
  allowed_bots:
    description: "Comma-separated list of allowed bot usernames, or '*' to allow all bots. Empty string (default) allows no bots."
    required: false
    default: ""
  allowed_non_write_users:
    description: "Comma-separated list of usernames to allow without write permissions, or '*' to allow all users. Only works when github_token input is provided. WARNING: Use with extreme caution - this bypasses security checks and should only be used for workflows with very limited permissions (e.g., issue labeling)."
    required: false
    default: ""

  # Claude Code configuration
  prompt:
@@ -40,7 +48,7 @@ inputs:

  # Auth configuration
  anthropic_api_key:
    description: "Anthropic API key (required for direct API, not needed for Bedrock/Vertex)"
    description: "Anthropic API key (required for direct API, not needed for Bedrock/Vertex/Foundry)"
    required: false
  claude_code_oauth_token:
    description: "Claude Code OAuth token (alternative to anthropic_api_key)"
@@ -56,6 +64,10 @@ inputs:
    description: "Use Google Vertex AI with OIDC authentication instead of direct Anthropic API"
    required: false
    default: "false"
  use_foundry:
    description: "Use Microsoft Foundry with OIDC authentication instead of direct Anthropic API"
    required: false
    default: "false"

  claude_args:
    description: "Additional arguments to pass directly to Claude CLI"
@@ -73,14 +85,26 @@ inputs:
    description: "Enable commit signing using GitHub's commit signature verification. When false, Claude uses standard git commands"
    required: false
    default: "false"
  ssh_signing_key:
    description: "SSH private key for signing commits. When provided, git will be configured to use SSH signing. Takes precedence over use_commit_signing."
    required: false
    default: ""
  bot_id:
    description: "GitHub user ID to use for git operations (defaults to Claude's bot ID)"
    required: false
    default: "41898282" # Claude's bot ID - see src/github/constants.ts
  bot_name:
    description: "GitHub username to use for git operations (defaults to Claude's bot name)"
    required: false
    default: "claude[bot]"
  track_progress:
    description: "Force tag mode with tracking comments for pull_request and issue events. Only applicable to pull_request (opened, synchronize, ready_for_review, reopened) and issue (opened, edited, labeled, assigned) events."
    required: false
    default: "false"
  experimental_allowed_domains:
    description: "Restrict network access to these domains only (newline-separated). If not set, no restrictions are applied. Provider domains are auto-detected."
  include_fix_links:
    description: "Include 'Fix this' links in PR code review feedback that open Claude Code with context to fix the identified issue"
    required: false
    default: ""
    default: "true"
  path_to_claude_code_executable:
    description: "Optional path to a custom Claude Code executable. If provided, skips automatic installation and uses this executable instead. WARNING: Using an older version may cause problems if the action begins taking advantage of new Claude Code features. This input is typically not needed unless you're debugging something specific or have unique needs in your environment."
    required: false
@@ -89,6 +113,18 @@ inputs:
    description: "Optional path to a custom Bun executable. If provided, skips automatic Bun installation and uses this executable instead. WARNING: Using an incompatible version may cause problems if the action requires specific Bun features. This input is typically not needed unless you're debugging something specific or have unique needs in your environment."
    required: false
    default: ""
  show_full_output:
    description: "Show full JSON output from Claude Code. WARNING: This outputs ALL Claude messages including tool execution results which may contain secrets, API keys, or other sensitive information. These logs are publicly visible in GitHub Actions. Only enable for debugging in non-sensitive environments."
    required: false
    default: "false"
  plugins:
    description: "Newline-separated list of Claude Code plugin names to install (e.g., 'code-review@claude-code-plugins\nfeature-dev@claude-code-plugins')"
    required: false
    default: ""
  plugin_marketplaces:
    description: "Newline-separated list of Claude Code plugin marketplace Git URLs to install from (e.g., 'https://github.com/user/marketplace1.git\nhttps://github.com/user/marketplace2.git')"
    required: false
    default: ""

outputs:
  execution_file:
@@ -100,23 +136,31 @@ outputs:
  github_token:
    description: "The GitHub token used by the action (Claude App token if available)"
    value: ${{ steps.prepare.outputs.github_token }}
  structured_output:
    description: "JSON string containing all structured output fields when --json-schema is provided in claude_args. Use fromJSON() to parse: fromJSON(steps.id.outputs.structured_output).field_name"
    value: ${{ steps.claude-code.outputs.structured_output }}
  session_id:
    description: "The Claude Code session ID that can be used with --resume to continue this conversation"
    value: ${{ steps.claude-code.outputs.session_id }}

runs:
  using: "composite"
  steps:
    - name: Install Bun
      if: inputs.path_to_bun_executable == ''
      uses: oven-sh/setup-bun@735343b667d3e6f658f44d0eca948eb6282f2b76 # https://github.com/oven-sh/setup-bun/releases/tag/v2.0.2
      uses: oven-sh/setup-bun@3d267786b128fe76c2f16a390aa2448b815359f3 # https://github.com/oven-sh/setup-bun/releases/tag/v2.1.2
      with:
        bun-version: 1.2.11
        bun-version: 1.3.6

    - name: Setup Custom Bun Path
      if: inputs.path_to_bun_executable != ''
      shell: bash
      env:
        PATH_TO_BUN_EXECUTABLE: ${{ inputs.path_to_bun_executable }}
      run: |
        echo "Using custom Bun executable: ${{ inputs.path_to_bun_executable }}"
        echo "Using custom Bun executable: $PATH_TO_BUN_EXECUTABLE"
        # Add the directory containing the custom executable to PATH
        BUN_DIR=$(dirname "${{ inputs.path_to_bun_executable }}")
        BUN_DIR=$(dirname "$PATH_TO_BUN_EXECUTABLE")
        echo "$BUN_DIR" >> "$GITHUB_PATH"

    - name: Install Dependencies
@@ -138,13 +182,19 @@ runs:
        LABEL_TRIGGER: ${{ inputs.label_trigger }}
        BASE_BRANCH: ${{ inputs.base_branch }}
        BRANCH_PREFIX: ${{ inputs.branch_prefix }}
        BRANCH_NAME_TEMPLATE: ${{ inputs.branch_name_template }}
        OVERRIDE_GITHUB_TOKEN: ${{ inputs.github_token }}
        ALLOWED_BOTS: ${{ inputs.allowed_bots }}
        ALLOWED_NON_WRITE_USERS: ${{ inputs.allowed_non_write_users }}
        GITHUB_RUN_ID: ${{ github.run_id }}
        USE_STICKY_COMMENT: ${{ inputs.use_sticky_comment }}
        DEFAULT_WORKFLOW_TOKEN: ${{ github.token }}
        USE_COMMIT_SIGNING: ${{ inputs.use_commit_signing }}
        SSH_SIGNING_KEY: ${{ inputs.ssh_signing_key }}
        BOT_ID: ${{ inputs.bot_id }}
        BOT_NAME: ${{ inputs.bot_name }}
        TRACK_PROGRESS: ${{ inputs.track_progress }}
        INCLUDE_FIX_LINKS: ${{ inputs.include_fix_links }}
        ADDITIONAL_PERMISSIONS: ${{ inputs.additional_permissions }}
        CLAUDE_ARGS: ${{ inputs.claude_args }}
        ALL_INPUTS: ${{ toJson(inputs) }}
@@ -152,6 +202,8 @@
    - name: Install Base Action Dependencies
      if: steps.prepare.outputs.contains_trigger == 'true'
      shell: bash
      env:
        PATH_TO_CLAUDE_CODE_EXECUTABLE: ${{ inputs.path_to_claude_code_executable }}
      run: |
        echo "Installing base-action dependencies..."
        cd ${GITHUB_ACTION_PATH}/base-action
@@ -160,26 +212,33 @@
        cd -

        # Install Claude Code if no custom executable is provided
        if [ -z "${{ inputs.path_to_claude_code_executable }}" ]; then
          echo "Installing Claude Code..."
          curl -fsSL https://claude.ai/install.sh | bash -s 1.0.103
        if [ -z "$PATH_TO_CLAUDE_CODE_EXECUTABLE" ]; then
          CLAUDE_CODE_VERSION="2.1.16"
          echo "Installing Claude Code v${CLAUDE_CODE_VERSION}..."
          for attempt in 1 2 3; do
            echo "Installation attempt $attempt..."
            if command -v timeout &> /dev/null; then
              # Use --foreground to kill entire process group on timeout, --kill-after to send SIGKILL if SIGTERM fails
              timeout --foreground --kill-after=10 120 bash -c "curl -fsSL https://claude.ai/install.sh | bash -s -- $CLAUDE_CODE_VERSION" && break
            else
              curl -fsSL https://claude.ai/install.sh | bash -s -- "$CLAUDE_CODE_VERSION" && break
            fi
            if [ $attempt -eq 3 ]; then
              echo "Failed to install Claude Code after 3 attempts"
              exit 1
            fi
            echo "Installation failed, retrying..."
            sleep 5
          done
          echo "Claude Code installed successfully"
          echo "$HOME/.local/bin" >> "$GITHUB_PATH"
        else
          echo "Using custom Claude Code executable: ${{ inputs.path_to_claude_code_executable }}"
          echo "Using custom Claude Code executable: $PATH_TO_CLAUDE_CODE_EXECUTABLE"
          # Add the directory containing the custom executable to PATH
          CLAUDE_DIR=$(dirname "${{ inputs.path_to_claude_code_executable }}")
          CLAUDE_DIR=$(dirname "$PATH_TO_CLAUDE_CODE_EXECUTABLE")
          echo "$CLAUDE_DIR" >> "$GITHUB_PATH"
        fi

    - name: Setup Network Restrictions
      if: steps.prepare.outputs.contains_trigger == 'true' && inputs.experimental_allowed_domains != ''
      shell: bash
      run: |
        chmod +x ${GITHUB_ACTION_PATH}/scripts/setup-network-restrictions.sh
        ${GITHUB_ACTION_PATH}/scripts/setup-network-restrictions.sh
      env:
        EXPERIMENTAL_ALLOWED_DOMAINS: ${{ inputs.experimental_allowed_domains }}

    - name: Run Claude Code
      id: claude-code
      if: steps.prepare.outputs.contains_trigger == 'true'
@@ -198,9 +257,13 @@
        INPUT_ACTION_INPUTS_PRESENT: ${{ steps.prepare.outputs.action_inputs_present }}
        INPUT_PATH_TO_CLAUDE_CODE_EXECUTABLE: ${{ inputs.path_to_claude_code_executable }}
        INPUT_PATH_TO_BUN_EXECUTABLE: ${{ inputs.path_to_bun_executable }}
        INPUT_SHOW_FULL_OUTPUT: ${{ inputs.show_full_output }}
        INPUT_PLUGINS: ${{ inputs.plugins }}
        INPUT_PLUGIN_MARKETPLACES: ${{ inputs.plugin_marketplaces }}

        # Model configuration
        GITHUB_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
        GH_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
        NODE_VERSION: ${{ env.NODE_VERSION }}
        DETAILED_PERMISSION_MESSAGES: "1"
@@ -208,14 +271,17 @@
        ANTHROPIC_API_KEY: ${{ inputs.anthropic_api_key }}
        CLAUDE_CODE_OAUTH_TOKEN: ${{ inputs.claude_code_oauth_token }}
        ANTHROPIC_BASE_URL: ${{ env.ANTHROPIC_BASE_URL }}
        ANTHROPIC_CUSTOM_HEADERS: ${{ env.ANTHROPIC_CUSTOM_HEADERS }}
        CLAUDE_CODE_USE_BEDROCK: ${{ inputs.use_bedrock == 'true' && '1' || '' }}
        CLAUDE_CODE_USE_VERTEX: ${{ inputs.use_vertex == 'true' && '1' || '' }}
        CLAUDE_CODE_USE_FOUNDRY: ${{ inputs.use_foundry == 'true' && '1' || '' }}

        # AWS configuration
        AWS_REGION: ${{ env.AWS_REGION }}
        AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
        AWS_SESSION_TOKEN: ${{ env.AWS_SESSION_TOKEN }}
        AWS_BEARER_TOKEN_BEDROCK: ${{ env.AWS_BEARER_TOKEN_BEDROCK }}
        ANTHROPIC_BEDROCK_BASE_URL: ${{ env.ANTHROPIC_BEDROCK_BASE_URL || (env.AWS_REGION && format('https://bedrock-runtime.{0}.amazonaws.com', env.AWS_REGION)) }}

        # GCP configuration
@@ -229,6 +295,13 @@
        VERTEX_REGION_CLAUDE_3_5_SONNET: ${{ env.VERTEX_REGION_CLAUDE_3_5_SONNET }}
        VERTEX_REGION_CLAUDE_3_7_SONNET: ${{ env.VERTEX_REGION_CLAUDE_3_7_SONNET }}

        # Microsoft Foundry configuration
        ANTHROPIC_FOUNDRY_RESOURCE: ${{ env.ANTHROPIC_FOUNDRY_RESOURCE }}
        ANTHROPIC_FOUNDRY_BASE_URL: ${{ env.ANTHROPIC_FOUNDRY_BASE_URL }}
        ANTHROPIC_DEFAULT_SONNET_MODEL: ${{ env.ANTHROPIC_DEFAULT_SONNET_MODEL }}
        ANTHROPIC_DEFAULT_HAIKU_MODEL: ${{ env.ANTHROPIC_DEFAULT_HAIKU_MODEL }}
        ANTHROPIC_DEFAULT_OPUS_MODEL: ${{ env.ANTHROPIC_DEFAULT_OPUS_MODEL }}

    - name: Update comment with job link
      if: steps.prepare.outputs.contains_trigger == 'true' && steps.prepare.outputs.claude_comment_id && always()
      shell: bash
@@ -240,10 +313,11 @@
        CLAUDE_COMMENT_ID: ${{ steps.prepare.outputs.claude_comment_id }}
        GITHUB_RUN_ID: ${{ github.run_id }}
        GITHUB_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
        GH_TOKEN: ${{ steps.prepare.outputs.GITHUB_TOKEN }}
        GITHUB_EVENT_NAME: ${{ github.event_name }}
        TRIGGER_COMMENT_ID: ${{ github.event.comment.id }}
        CLAUDE_BRANCH: ${{ steps.prepare.outputs.CLAUDE_BRANCH }}
        IS_PR: ${{ github.event.issue.pull_request != null || github.event_name == 'pull_request_review_comment' }}
        IS_PR: ${{ github.event.issue.pull_request != null || github.event_name == 'pull_request_target' || github.event_name == 'pull_request_review_comment' }}
        BASE_BRANCH: ${{ steps.prepare.outputs.BASE_BRANCH }}
        CLAUDE_SUCCESS: ${{ steps.claude-code.outputs.conclusion == 'success' }}
        OUTPUT_FILE: ${{ steps.claude-code.outputs.execution_file || '' }}
@@ -271,6 +345,12 @@
        echo '```' >> $GITHUB_STEP_SUMMARY
      fi

    - name: Cleanup SSH signing key
      if: always() && inputs.ssh_signing_key != ''
      shell: bash
      run: |
        bun run ${GITHUB_ACTION_PATH}/src/entrypoints/cleanup-ssh-signing.ts

    - name: Revoke app token
      if: always() && inputs.github_token == '' && steps.prepare.outputs.skipped_due_to_workflow_validation_mismatch != 'true'
      shell: bash
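For context, a minimal sketch of a caller workflow opting into the new Foundry path. The `use_foundry` input and the `ANTHROPIC_FOUNDRY_*` variables come from the diff above; the resource and model names below are placeholders, not values from this repository:

      - name: Run Claude Code via Microsoft Foundry
        uses: anthropics/claude-code-action@v1
        with:
          prompt: "Review this PR for obvious bugs"
          use_foundry: "true"
        env:
          # Placeholder values — point these at your own Foundry deployment
          ANTHROPIC_FOUNDRY_RESOURCE: my-foundry-resource
          ANTHROPIC_DEFAULT_SONNET_MODEL: my-sonnet-deployment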
@@ -27,7 +27,6 @@ This is a GitHub Action that allows running Claude Code within GitHub workflows.

### Key Design Patterns

- Uses Bun runtime for development and execution
- Named pipes for IPC between prompt input and Claude process
- JSON streaming output format for execution logs
- Composite action pattern to orchestrate multiple steps
- Provider-agnostic design supporting Anthropic API, AWS Bedrock, and Google Vertex AI

@@ -50,11 +49,10 @@ This is a GitHub Action that allows running Claude Code within GitHub workflows.

- Unit tests for configuration logic
- Integration tests for prompt preparation
- Full workflow tests in `.github/workflows/test-action.yml`
- Full workflow tests in `.github/workflows/test-base-action.yml`

## Important Technical Details

- Uses `mkfifo` to create named pipes for prompt input
- Outputs execution logs as JSON to `/tmp/claude-execution-output.json`
- Timeout enforcement via `timeout` command wrapper
- Strict TypeScript configuration with Bun-specific settings

|
||||
|
||||
## Inputs
|
||||
|
||||
| Input | Description | Required | Default |
|
||||
| ------------------------- | ------------------------------------------------------------------------------------------------- | -------- | ---------------------------- |
|
||||
| `prompt` | The prompt to send to Claude Code | No\* | '' |
|
||||
| `prompt_file` | Path to a file containing the prompt to send to Claude Code | No\* | '' |
|
||||
| `allowed_tools` | Comma-separated list of allowed tools for Claude Code to use | No | '' |
|
||||
| `disallowed_tools` | Comma-separated list of disallowed tools that Claude Code cannot use | No | '' |
|
||||
| `max_turns` | Maximum number of conversation turns (default: no limit) | No | '' |
|
||||
| `mcp_config` | Path to the MCP configuration JSON file, or MCP configuration JSON string | No | '' |
|
||||
| `settings` | Path to Claude Code settings JSON file, or settings JSON string | No | '' |
|
||||
| `system_prompt` | Override system prompt | No | '' |
|
||||
| `append_system_prompt` | Append to system prompt | No | '' |
|
||||
| `claude_env` | Custom environment variables to pass to Claude Code execution (YAML multiline format) | No | '' |
|
||||
| `model` | Model to use (provider-specific format required for Bedrock/Vertex) | No | 'claude-4-0-sonnet-20250219' |
|
||||
| `anthropic_model` | DEPRECATED: Use 'model' instead | No | 'claude-4-0-sonnet-20250219' |
|
||||
| `fallback_model` | Enable automatic fallback to specified model when default model is overloaded | No | '' |
|
||||
| `anthropic_api_key` | Anthropic API key (required for direct Anthropic API) | No | '' |
|
||||
| `claude_code_oauth_token` | Claude Code OAuth token (alternative to anthropic_api_key) | No | '' |
|
||||
| `use_bedrock` | Use Amazon Bedrock with OIDC authentication instead of direct Anthropic API | No | 'false' |
|
||||
| `use_vertex` | Use Google Vertex AI with OIDC authentication instead of direct Anthropic API | No | 'false' |
|
||||
| `use_node_cache` | Whether to use Node.js dependency caching (set to true only for Node.js projects with lock files) | No | 'false' |
|
||||
| Input | Description | Required | Default |
|
||||
| ------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -------- | ---------------------------- |
|
||||
| `prompt` | The prompt to send to Claude Code | No\* | '' |
|
||||
| `prompt_file` | Path to a file containing the prompt to send to Claude Code | No\* | '' |
|
||||
| `allowed_tools` | Comma-separated list of allowed tools for Claude Code to use | No | '' |
|
||||
| `disallowed_tools` | Comma-separated list of disallowed tools that Claude Code cannot use | No | '' |
|
||||
| `max_turns` | Maximum number of conversation turns (default: no limit) | No | '' |
|
||||
| `mcp_config` | Path to the MCP configuration JSON file, or MCP configuration JSON string | No | '' |
|
||||
| `settings` | Path to Claude Code settings JSON file, or settings JSON string | No | '' |
|
||||
| `system_prompt` | Override system prompt | No | '' |
|
||||
| `append_system_prompt` | Append to system prompt | No | '' |
|
||||
| `claude_env` | Custom environment variables to pass to Claude Code execution (YAML multiline format) | No | '' |
|
||||
| `model` | Model to use (provider-specific format required for Bedrock/Vertex) | No | 'claude-4-0-sonnet-20250219' |
|
||||
| `anthropic_model` | DEPRECATED: Use 'model' instead | No | 'claude-4-0-sonnet-20250219' |
|
||||
| `fallback_model` | Enable automatic fallback to specified model when default model is overloaded | No | '' |
|
||||
| `anthropic_api_key` | Anthropic API key (required for direct Anthropic API) | No | '' |
|
||||
| `claude_code_oauth_token` | Claude Code OAuth token (alternative to anthropic_api_key) | No | '' |
|
||||
| `use_bedrock` | Use Amazon Bedrock with OIDC authentication instead of direct Anthropic API | No | 'false' |
|
||||
| `use_vertex` | Use Google Vertex AI with OIDC authentication instead of direct Anthropic API | No | 'false' |
|
||||
| `use_node_cache` | Whether to use Node.js dependency caching (set to true only for Node.js projects with lock files) | No | 'false' |
|
||||
| `show_full_output` | Show full JSON output (⚠️ May expose secrets - see [security docs](../docs/security.md#️-full-output-security-warning)) | No | 'false'\*\* |
|
||||
|
||||
\*Either `prompt` or `prompt_file` must be provided, but not both.
|
||||
|
||||
\*\*`show_full_output` is automatically enabled when GitHub Actions debug mode is active. See [security documentation](../docs/security.md#️-full-output-security-warning) for important security considerations.
|
||||
|
||||
## Outputs
|
||||
|
||||
| Output | Description |
|
||||
@@ -336,7 +339,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
|
||||
@@ -42,6 +42,10 @@ inputs:
|
||||
description: "Use Google Vertex AI with OIDC authentication instead of direct Anthropic API"
|
||||
required: false
|
||||
default: "false"
|
||||
use_foundry:
|
||||
description: "Use Microsoft Foundry with OIDC authentication instead of direct Anthropic API"
|
||||
required: false
|
||||
default: "false"
|
||||
|
||||
use_node_cache:
|
||||
description: "Whether to use Node.js dependency caching (set to true only for Node.js projects with lock files)"
|
||||
@@ -55,6 +59,18 @@ inputs:
|
||||
description: "Optional path to a custom Bun executable. If provided, skips automatic Bun installation and uses this executable instead. WARNING: Using an incompatible version may cause problems if the action requires specific Bun features. This input is typically not needed unless you're debugging something specific or have unique needs in your environment."
|
||||
required: false
|
||||
default: ""
|
||||
show_full_output:
|
||||
description: "Show full JSON output from Claude Code. WARNING: This outputs ALL Claude messages including tool execution results which may contain secrets, API keys, or other sensitive information. These logs are publicly visible in GitHub Actions. Only enable for debugging in non-sensitive environments."
|
||||
required: false
|
||||
default: "false"
|
||||
plugins:
|
||||
description: "Newline-separated list of Claude Code plugin names to install (e.g., 'code-review@claude-code-plugins\nfeature-dev@claude-code-plugins')"
|
||||
required: false
|
||||
default: ""
|
||||
plugin_marketplaces:
|
||||
description: "Newline-separated list of Claude Code plugin marketplace Git URLs to install from (e.g., 'https://github.com/user/marketplace1.git\nhttps://github.com/user/marketplace2.git')"
|
||||
required: false
|
||||
default: ""
|
||||
|
||||
outputs:
|
||||
conclusion:
|
||||
@@ -63,6 +79,12 @@ outputs:
|
||||
execution_file:
|
||||
description: "Path to the JSON file containing Claude Code execution log"
|
||||
value: ${{ steps.run_claude.outputs.execution_file }}
|
||||
structured_output:
|
||||
description: "JSON string containing all structured output fields when --json-schema is provided in claude_args (use fromJSON() or jq to parse)"
|
||||
value: ${{ steps.run_claude.outputs.structured_output }}
|
||||
session_id:
|
||||
description: "The Claude Code session ID that can be used with --resume to continue this conversation"
|
||||
value: ${{ steps.run_claude.outputs.session_id }}
|
||||
|
||||
runs:
|
||||
using: "composite"
|
||||
@@ -75,17 +97,19 @@ runs:
|
||||
|
||||
- name: Install Bun
|
||||
if: inputs.path_to_bun_executable == ''
|
||||
uses: oven-sh/setup-bun@735343b667d3e6f658f44d0eca948eb6282f2b76 # https://github.com/oven-sh/setup-bun/releases/tag/v2.0.2
|
||||
uses: oven-sh/setup-bun@3d267786b128fe76c2f16a390aa2448b815359f3 # https://github.com/oven-sh/setup-bun/releases/tag/v2.1.2
|
||||
with:
|
||||
bun-version: 1.2.11
|
||||
bun-version: 1.3.6
|
||||
|
||||
- name: Setup Custom Bun Path
|
||||
if: inputs.path_to_bun_executable != ''
|
||||
shell: bash
|
||||
env:
|
||||
PATH_TO_BUN_EXECUTABLE: ${{ inputs.path_to_bun_executable }}
|
||||
run: |
|
||||
echo "Using custom Bun executable: ${{ inputs.path_to_bun_executable }}"
|
||||
echo "Using custom Bun executable: $PATH_TO_BUN_EXECUTABLE"
|
||||
# Add the directory containing the custom executable to PATH
|
||||
BUN_DIR=$(dirname "${{ inputs.path_to_bun_executable }}")
|
||||
BUN_DIR=$(dirname "$PATH_TO_BUN_EXECUTABLE")
|
||||
echo "$BUN_DIR" >> "$GITHUB_PATH"
|
||||
|
||||
- name: Install Dependencies
|
||||
@@ -96,14 +120,32 @@ runs:
|
||||
|
||||
- name: Install Claude Code
|
||||
shell: bash
|
||||
env:
|
||||
PATH_TO_CLAUDE_CODE_EXECUTABLE: ${{ inputs.path_to_claude_code_executable }}
|
||||
run: |
|
||||
if [ -z "${{ inputs.path_to_claude_code_executable }}" ]; then
|
||||
echo "Installing Claude Code..."
|
||||
curl -fsSL https://claude.ai/install.sh | bash -s 1.0.103
|
||||
if [ -z "$PATH_TO_CLAUDE_CODE_EXECUTABLE" ]; then
|
||||
CLAUDE_CODE_VERSION="2.1.16"
|
||||
echo "Installing Claude Code v${CLAUDE_CODE_VERSION}..."
|
||||
for attempt in 1 2 3; do
|
||||
echo "Installation attempt $attempt..."
|
||||
if command -v timeout &> /dev/null; then
|
||||
# Use --foreground to kill entire process group on timeout, --kill-after to send SIGKILL if SIGTERM fails
|
||||
timeout --foreground --kill-after=10 120 bash -c "curl -fsSL https://claude.ai/install.sh | bash -s -- $CLAUDE_CODE_VERSION" && break
|
||||
else
|
||||
curl -fsSL https://claude.ai/install.sh | bash -s -- "$CLAUDE_CODE_VERSION" && break
|
||||
fi
|
||||
if [ $attempt -eq 3 ]; then
|
||||
echo "Failed to install Claude Code after 3 attempts"
|
||||
exit 1
|
||||
fi
|
||||
echo "Installation failed, retrying..."
|
||||
sleep 5
|
||||
done
|
||||
echo "Claude Code installed successfully"
|
||||
else
|
||||
echo "Using custom Claude Code executable: ${{ inputs.path_to_claude_code_executable }}"
|
||||
echo "Using custom Claude Code executable: $PATH_TO_CLAUDE_CODE_EXECUTABLE"
|
||||
# Add the directory containing the custom executable to PATH
|
||||
CLAUDE_DIR=$(dirname "${{ inputs.path_to_claude_code_executable }}")
|
||||
CLAUDE_DIR=$(dirname "$PATH_TO_CLAUDE_CODE_EXECUTABLE")
|
||||
echo "$CLAUDE_DIR" >> "$GITHUB_PATH"
|
||||
fi
|
||||
|
||||
@@ -126,20 +168,26 @@ runs:
|
||||
INPUT_CLAUDE_ARGS: ${{ inputs.claude_args }}
|
||||
INPUT_PATH_TO_CLAUDE_CODE_EXECUTABLE: ${{ inputs.path_to_claude_code_executable }}
|
||||
INPUT_PATH_TO_BUN_EXECUTABLE: ${{ inputs.path_to_bun_executable }}
|
||||
INPUT_SHOW_FULL_OUTPUT: ${{ inputs.show_full_output }}
|
||||
INPUT_PLUGINS: ${{ inputs.plugins }}
|
||||
INPUT_PLUGIN_MARKETPLACES: ${{ inputs.plugin_marketplaces }}
|
||||
|
||||
# Provider configuration
|
||||
ANTHROPIC_API_KEY: ${{ inputs.anthropic_api_key }}
|
||||
CLAUDE_CODE_OAUTH_TOKEN: ${{ inputs.claude_code_oauth_token }}
|
||||
ANTHROPIC_BASE_URL: ${{ env.ANTHROPIC_BASE_URL }}
|
||||
ANTHROPIC_CUSTOM_HEADERS: ${{ env.ANTHROPIC_CUSTOM_HEADERS }}
|
||||
# Only set provider flags if explicitly true, since any value (including "false") is truthy
|
||||
CLAUDE_CODE_USE_BEDROCK: ${{ inputs.use_bedrock == 'true' && '1' || '' }}
|
||||
CLAUDE_CODE_USE_VERTEX: ${{ inputs.use_vertex == 'true' && '1' || '' }}
|
||||
CLAUDE_CODE_USE_FOUNDRY: ${{ inputs.use_foundry == 'true' && '1' || '' }}
|
||||
|
||||
# AWS configuration
|
||||
AWS_REGION: ${{ env.AWS_REGION }}
|
||||
AWS_ACCESS_KEY_ID: ${{ env.AWS_ACCESS_KEY_ID }}
|
||||
AWS_SECRET_ACCESS_KEY: ${{ env.AWS_SECRET_ACCESS_KEY }}
|
||||
AWS_SESSION_TOKEN: ${{ env.AWS_SESSION_TOKEN }}
|
||||
AWS_BEARER_TOKEN_BEDROCK: ${{ env.AWS_BEARER_TOKEN_BEDROCK }}
|
||||
ANTHROPIC_BEDROCK_BASE_URL: ${{ env.ANTHROPIC_BEDROCK_BASE_URL || (env.AWS_REGION && format('https://bedrock-runtime.{0}.amazonaws.com', env.AWS_REGION)) }}
|
||||
|
||||
# GCP configuration
|
||||
@@ -147,3 +195,10 @@ runs:
|
||||
CLOUD_ML_REGION: ${{ env.CLOUD_ML_REGION }}
|
||||
GOOGLE_APPLICATION_CREDENTIALS: ${{ env.GOOGLE_APPLICATION_CREDENTIALS }}
|
||||
ANTHROPIC_VERTEX_BASE_URL: ${{ env.ANTHROPIC_VERTEX_BASE_URL }}
|
||||
|
||||
# Microsoft Foundry configuration
|
||||
ANTHROPIC_FOUNDRY_RESOURCE: ${{ env.ANTHROPIC_FOUNDRY_RESOURCE }}
|
||||
ANTHROPIC_FOUNDRY_BASE_URL: ${{ env.ANTHROPIC_FOUNDRY_BASE_URL }}
|
||||
ANTHROPIC_DEFAULT_SONNET_MODEL: ${{ env.ANTHROPIC_DEFAULT_SONNET_MODEL }}
|
||||
ANTHROPIC_DEFAULT_HAIKU_MODEL: ${{ env.ANTHROPIC_DEFAULT_HAIKU_MODEL }}
|
||||
ANTHROPIC_DEFAULT_OPUS_MODEL: ${{ env.ANTHROPIC_DEFAULT_OPUS_MODEL }}
|
||||
|
||||
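A sketch of wiring up the new plugin inputs above; the plugin and marketplace names follow the examples given in the input descriptions and are not verified against any real marketplace:

      - name: Run Claude Code with plugins
        uses: ./base-action
        with:
          prompt: "Review the changes in this branch"
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          plugin_marketplaces: |
            https://github.com/user/marketplace1.git
          plugins: |
            code-review@claude-code-plugins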
@@ -1,10 +1,12 @@ — bun.lock (generated): adds "configVersion": 0 and the new "@anthropic-ai/claude-agent-sdk": "^0.2.16" dependency alongside "@actions/core" and "shell-quote", pulling in the @img/sharp optional platform packages and a zod entry; the remaining lockfile hash entries are machine-generated.

@@ -32,7 +32,7 @@ jobs:
              "--rm",
              "-e",
              "GITHUB_PERSONAL_ACCESS_TOKEN",
              "ghcr.io/github/github-mcp-server:sha-7aced2b"
              "ghcr.io/github/github-mcp-server:sha-23fa0dd"
            ],
            "env": {
              "GITHUB_PERSONAL_ACCESS_TOKEN": "${{ secrets.GITHUB_TOKEN }}"

base-action/package-lock.json (generated, new file, 196 lines) — npm lockfile (lockfileVersion 3) for @anthropic-ai/claude-code-base-action, pinning @actions/core 1.11.1 (with @actions/exec 1.1.1, @actions/http-client 2.2.3, @actions/io 1.1.3, @fastify/busboy 2.1.1, tunnel 0.0.6, undici 5.29.0) and shell-quote 1.8.3, plus dev dependencies @types/bun 1.3.1, @types/node 20.19.23, @types/shell-quote 1.7.5, prettier 3.5.3, and typescript 5.9.3.

|
||||
},
|
||||
"dependencies": {
|
||||
"@actions/core": "^1.10.1",
|
||||
"@anthropic-ai/claude-agent-sdk": "^0.2.16",
|
||||
"shell-quote": "^1.8.3"
|
||||
},
|
||||
"devDependencies": {
|
||||
|
||||
@@ -5,6 +5,7 @@ import { preparePrompt } from "./prepare-prompt";
|
||||
import { runClaude } from "./run-claude";
|
||||
import { setupClaudeCodeSettings } from "./setup-claude-code-settings";
|
||||
import { validateEnvironmentVariables } from "./validate-env";
|
||||
import { installPlugins } from "./install-plugins";
|
||||
|
||||
async function run() {
|
||||
try {
|
||||
@@ -15,6 +16,13 @@ async function run() {
|
||||
undefined, // homeDir
|
||||
);
|
||||
|
||||
// Install Claude Code plugins if specified
|
||||
await installPlugins(
|
||||
process.env.INPUT_PLUGIN_MARKETPLACES,
|
||||
process.env.INPUT_PLUGINS,
|
||||
process.env.INPUT_PATH_TO_CLAUDE_CODE_EXECUTABLE,
|
||||
);
|
||||
|
||||
const promptConfig = await preparePrompt({
|
||||
prompt: process.env.INPUT_PROMPT || "",
|
||||
promptFile: process.env.INPUT_PROMPT_FILE || "",
|
||||
@@ -28,11 +36,11 @@ async function run() {
|
||||
mcpConfig: process.env.INPUT_MCP_CONFIG,
|
||||
systemPrompt: process.env.INPUT_SYSTEM_PROMPT,
|
||||
appendSystemPrompt: process.env.INPUT_APPEND_SYSTEM_PROMPT,
|
||||
claudeEnv: process.env.INPUT_CLAUDE_ENV,
|
||||
fallbackModel: process.env.INPUT_FALLBACK_MODEL,
|
||||
model: process.env.ANTHROPIC_MODEL,
|
||||
pathToClaudeCodeExecutable:
|
||||
process.env.INPUT_PATH_TO_CLAUDE_CODE_EXECUTABLE,
|
||||
showFullOutput: process.env.INPUT_SHOW_FULL_OUTPUT,
|
||||
});
|
||||
} catch (error) {
|
||||
core.setFailed(`Action failed with error: ${error}`);
|
||||
|
||||
243
base-action/src/install-plugins.ts
Normal file
243
base-action/src/install-plugins.ts
Normal file
@@ -0,0 +1,243 @@
|
||||
import { spawn, ChildProcess } from "child_process";
|
||||
|
||||
const PLUGIN_NAME_REGEX = /^[@a-zA-Z0-9_\-\/\.]+$/;
|
||||
const MAX_PLUGIN_NAME_LENGTH = 512;
|
||||
const PATH_TRAVERSAL_REGEX =
|
||||
/\.\.\/|\/\.\.|\.\/|\/\.|(?:^|\/)\.\.$|(?:^|\/)\.$|\.\.(?![0-9])/;
|
||||
const MARKETPLACE_URL_REGEX =
|
||||
/^https:\/\/[a-zA-Z0-9\-._~:/?#[\]@!$&'()*+,;=%]+\.git$/;
|
||||
|
||||
/**
|
||||
* Checks if a marketplace input is a local path (not a URL)
|
||||
* @param input - The marketplace input to check
|
||||
* @returns true if the input is a local path, false if it's a URL
|
||||
*/
|
||||
function isLocalPath(input: string): boolean {
|
||||
// Local paths start with ./, ../, /, or a drive letter (Windows)
|
||||
return (
|
||||
input.startsWith("./") ||
|
||||
input.startsWith("../") ||
|
||||
input.startsWith("/") ||
|
||||
/^[a-zA-Z]:[\\\/]/.test(input)
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates a marketplace URL or local path
|
||||
* @param input - The marketplace URL or local path to validate
|
||||
* @throws {Error} If the input is invalid
|
||||
*/
|
||||
function validateMarketplaceInput(input: string): void {
|
||||
const normalized = input.trim();
|
||||
|
||||
if (!normalized) {
|
||||
throw new Error("Marketplace URL or path cannot be empty");
|
||||
}
|
||||
|
||||
// Local paths are passed directly to Claude Code which handles them
|
||||
if (isLocalPath(normalized)) {
|
||||
return;
|
||||
}
|
||||
|
||||
// Validate as URL
|
||||
if (!MARKETPLACE_URL_REGEX.test(normalized)) {
|
||||
throw new Error(`Invalid marketplace URL format: ${input}`);
|
||||
}
|
||||
|
||||
// Additional check for valid URL structure
|
||||
try {
|
||||
new URL(normalized);
|
||||
} catch {
|
||||
throw new Error(`Invalid marketplace URL: ${input}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates a plugin name for security issues
|
||||
* @param pluginName - The plugin name to validate
|
||||
* @throws {Error} If the plugin name is invalid
|
||||
*/
|
||||
function validatePluginName(pluginName: string): void {
|
||||
// Normalize Unicode to prevent homoglyph attacks (e.g., fullwidth dots, Unicode slashes)
|
||||
const normalized = pluginName.normalize("NFC");
|
||||
|
||||
if (normalized.length > MAX_PLUGIN_NAME_LENGTH) {
|
||||
throw new Error(`Plugin name too long: ${normalized.substring(0, 50)}...`);
|
||||
}
|
||||
|
||||
if (!PLUGIN_NAME_REGEX.test(normalized)) {
|
||||
throw new Error(`Invalid plugin name format: ${pluginName}`);
|
||||
}
|
||||
|
||||
// Prevent path traversal attacks with single efficient regex check
|
||||
if (PATH_TRAVERSAL_REGEX.test(normalized)) {
|
||||
throw new Error(`Invalid plugin name format: ${pluginName}`);
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Parse a newline-separated list of marketplace URLs or local paths and return an array of validated entries
|
||||
* @param marketplaces - Newline-separated list of marketplace Git URLs or local paths
|
||||
* @returns Array of validated marketplace URLs or paths (empty array if none provided)
|
||||
*/
|
||||
function parseMarketplaces(marketplaces?: string): string[] {
|
||||
const trimmed = marketplaces?.trim();
|
||||
|
||||
if (!trimmed) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Split by newline and process each entry
|
||||
return trimmed
|
||||
.split("\n")
|
||||
.map((entry) => entry.trim())
|
||||
.filter((entry) => {
|
||||
if (entry.length === 0) return false;
|
||||
|
||||
validateMarketplaceInput(entry);
|
||||
return true;
|
||||
});
|
||||
}
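// Illustrative examples (editor's sketch, not part of the original file):
//   parseMarketplaces("https://github.com/org/market.git\n./local-market")
//     -> ["https://github.com/org/market.git", "./local-market"]
//   parseMarketplaces("http://github.com/org/market.git") throws, because only
//   https URLs ending in .git (or local paths) are accepted.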
|
||||
|
||||
/**
|
||||
* Parse a newline-separated list of plugin names and return an array of trimmed, non-empty plugin names
|
||||
* Validates plugin names to prevent command injection and path traversal attacks
|
||||
* Allows: letters, numbers, @, -, _, /, . (common npm/scoped package characters)
|
||||
* Disallows: path traversal (../, ./), shell metacharacters, and consecutive dots
|
||||
* @param plugins - Newline-separated list of plugin names, or undefined/empty to return empty array
|
||||
* @returns Array of validated plugin names (empty array if none provided)
|
||||
* @throws {Error} If any plugin name fails validation
|
||||
*/
|
||||
function parsePlugins(plugins?: string): string[] {
|
||||
const trimmedPlugins = plugins?.trim();
|
||||
|
||||
if (!trimmedPlugins) {
|
||||
return [];
|
||||
}
|
||||
|
||||
// Split by newline and process each plugin
|
||||
return trimmedPlugins
|
||||
.split("\n")
|
||||
.map((p) => p.trim())
|
||||
.filter((p) => {
|
||||
if (p.length === 0) return false;
|
||||
|
||||
validatePluginName(p);
|
||||
return true;
|
||||
});
|
||||
}
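// Illustrative example (editor's sketch, not part of the original file):
//   parsePlugins("  plugin-a \n\n@scope/plugin-b")
//     -> ["plugin-a", "@scope/plugin-b"]   (trimmed, blank lines skipped)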
|
||||
|
||||
/**
|
||||
* Executes a Claude Code CLI command with proper error handling
|
||||
* @param claudeExecutable - Path to the Claude executable
|
||||
* @param args - Command arguments to pass to the executable
|
||||
* @param errorContext - Context string for error messages (e.g., "Failed to install plugin 'foo'")
|
||||
* @returns Promise that resolves when the command completes successfully
|
||||
* @throws {Error} If the command fails to execute
|
||||
*/
|
||||
async function executeClaudeCommand(
|
||||
claudeExecutable: string,
|
||||
args: string[],
|
||||
errorContext: string,
|
||||
): Promise<void> {
|
||||
return new Promise((resolve, reject) => {
|
||||
const childProcess: ChildProcess = spawn(claudeExecutable, args, {
|
||||
stdio: "inherit",
|
||||
});
|
||||
|
||||
childProcess.on("close", (code: number | null) => {
|
||||
if (code === 0) {
|
||||
resolve();
|
||||
} else if (code === null) {
|
||||
reject(new Error(`${errorContext}: process terminated by signal`));
|
||||
} else {
|
||||
reject(new Error(`${errorContext} (exit code: ${code})`));
|
||||
}
|
||||
});
|
||||
|
||||
childProcess.on("error", (err: Error) => {
|
||||
reject(new Error(`${errorContext}: ${err.message}`));
|
||||
});
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Installs a single Claude Code plugin
|
||||
* @param pluginName - The name of the plugin to install
|
||||
* @param claudeExecutable - Path to the Claude executable
|
||||
* @returns Promise that resolves when the plugin is installed successfully
|
||||
* @throws {Error} If the plugin installation fails
|
||||
*/
|
||||
async function installPlugin(
|
||||
pluginName: string,
|
||||
claudeExecutable: string,
|
||||
): Promise<void> {
|
||||
console.log(`Installing plugin: ${pluginName}`);
|
||||
|
||||
return executeClaudeCommand(
|
||||
claudeExecutable,
|
||||
["plugin", "install", pluginName],
|
||||
`Failed to install plugin '${pluginName}'`,
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Adds a Claude Code plugin marketplace
|
||||
* @param claudeExecutable - Path to the Claude executable
|
||||
* @param marketplace - The marketplace Git URL or local path to add
|
||||
* @returns Promise that resolves when the marketplace add command completes
|
||||
* @throws {Error} If the command fails to execute
|
||||
*/
|
||||
async function addMarketplace(
|
||||
claudeExecutable: string,
|
||||
marketplace: string,
|
||||
): Promise<void> {
|
||||
console.log(`Adding marketplace: ${marketplace}`);
|
||||
|
||||
return executeClaudeCommand(
|
||||
claudeExecutable,
|
||||
["plugin", "marketplace", "add", marketplace],
|
||||
`Failed to add marketplace '${marketplace}'`,
|
||||
);
|
||||
}
|
||||
|
||||
/**
|
||||
* Installs Claude Code plugins from a newline-separated list
|
||||
* @param marketplacesInput - Newline-separated list of marketplace Git URLs or local paths
|
||||
* @param pluginsInput - Newline-separated list of plugin names
|
||||
* @param claudeExecutable - Path to the Claude executable (defaults to "claude")
|
||||
* @returns Promise that resolves when all plugins are installed
|
||||
* @throws {Error} If any plugin fails validation or installation (stops on first error)
|
||||
*/
|
||||
export async function installPlugins(
|
||||
marketplacesInput?: string,
|
||||
pluginsInput?: string,
|
||||
claudeExecutable?: string,
|
||||
): Promise<void> {
|
||||
// Resolve executable path with explicit fallback
|
||||
const resolvedExecutable = claudeExecutable || "claude";
|
||||
|
||||
// Parse and add all marketplaces before installing plugins
|
||||
const marketplaces = parseMarketplaces(marketplacesInput);
|
||||
|
||||
if (marketplaces.length > 0) {
|
||||
console.log(`Adding ${marketplaces.length} marketplace(s)...`);
|
||||
for (const marketplace of marketplaces) {
|
||||
await addMarketplace(resolvedExecutable, marketplace);
|
||||
console.log(`✓ Successfully added marketplace: ${marketplace}`);
|
||||
}
|
||||
} else {
|
||||
console.log("No marketplaces specified, skipping marketplace setup");
|
||||
}
|
||||
|
||||
const plugins = parsePlugins(pluginsInput);
|
||||
if (plugins.length > 0) {
|
||||
console.log(`Installing ${plugins.length} plugin(s)...`);
|
||||
for (const plugin of plugins) {
|
||||
await installPlugin(plugin, resolvedExecutable);
|
||||
console.log(`✓ Successfully installed: ${plugin}`);
|
||||
}
|
||||
} else {
|
||||
console.log("No plugins specified, skipping plugins installation");
|
||||
}
|
||||
}
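// Illustrative usage (editor's sketch; the marketplace and plugin names below
// are hypothetical, not taken from the original source):
//
//   await installPlugins(
//     "https://github.com/org/market.git", // marketplaces input
//     "formatter-plugin\n@scope/linter",   // plugins input
//   );
//   // runs, in order:
//   //   claude plugin marketplace add https://github.com/org/market.git
//   //   claude plugin install formatter-plugin
//   //   claude plugin install @scope/linter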
|
||||
271
base-action/src/parse-sdk-options.ts
Normal file
@@ -0,0 +1,271 @@
|
||||
import { parse as parseShellArgs } from "shell-quote";
|
||||
import type { ClaudeOptions } from "./run-claude";
|
||||
import type { Options as SdkOptions } from "@anthropic-ai/claude-agent-sdk";
|
||||
|
||||
/**
|
||||
* Result of parsing ClaudeOptions for SDK usage
|
||||
*/
|
||||
export type ParsedSdkOptions = {
|
||||
sdkOptions: SdkOptions;
|
||||
showFullOutput: boolean;
|
||||
hasJsonSchema: boolean;
|
||||
};
|
||||
|
||||
// Flags that should accumulate multiple values instead of overwriting
|
||||
// Include both camelCase and hyphenated variants for CLI compatibility
|
||||
const ACCUMULATING_FLAGS = new Set([
|
||||
"allowedTools",
|
||||
"allowed-tools",
|
||||
"disallowedTools",
|
||||
"disallowed-tools",
|
||||
"mcp-config",
|
||||
]);
|
||||
|
||||
// Delimiter used to join accumulated flag values
|
||||
const ACCUMULATE_DELIMITER = "\x00";
|
||||
|
||||
type McpConfig = {
|
||||
mcpServers?: Record<string, unknown>;
|
||||
};
|
||||
|
||||
/**
|
||||
* Merge multiple MCP config values into a single config.
|
||||
* Each config can be a JSON string or a file path.
|
||||
* For JSON strings, mcpServers objects are merged.
|
||||
* For file paths, they are kept as-is (user's file takes precedence and is used last).
|
||||
*/
|
||||
function mergeMcpConfigs(configValues: string[]): string {
|
||||
const merged: McpConfig = { mcpServers: {} };
|
||||
let lastFilePath: string | null = null;
|
||||
|
||||
for (const config of configValues) {
|
||||
const trimmed = config.trim();
|
||||
if (!trimmed) continue;
|
||||
|
||||
// Check if it's a JSON string (starts with {) or a file path
|
||||
if (trimmed.startsWith("{")) {
|
||||
try {
|
||||
const parsed = JSON.parse(trimmed) as McpConfig;
|
||||
if (parsed.mcpServers) {
|
||||
Object.assign(merged.mcpServers!, parsed.mcpServers);
|
||||
}
|
||||
} catch {
|
||||
// If JSON parsing fails, treat as file path
|
||||
lastFilePath = trimmed;
|
||||
}
|
||||
} else {
|
||||
// It's a file path - store it to handle separately
|
||||
lastFilePath = trimmed;
|
||||
}
|
||||
}
|
||||
|
||||
  // Only one value can be returned, so we merge what we can: inline JSON configs
  // are combined here, while a file path would have to be read at runtime to merge.
  // For now the merged inline config is stringified; the action prepends its own
  // config as inline JSON, so those entries can always be merged safely.
|
||||
|
||||
// If no inline configs were found (all file paths), return the last file path
|
||||
if (Object.keys(merged.mcpServers!).length === 0 && lastFilePath) {
|
||||
return lastFilePath;
|
||||
}
|
||||
|
||||
// Note: If user passes a file path, we cannot merge it at parse time since
|
||||
// we don't have access to the file system here. The action's built-in MCP
|
||||
// servers are always passed as inline JSON, so they will be merged.
|
||||
// If user also passes inline JSON, it will be merged.
|
||||
// If user passes a file path, they should ensure it includes all needed servers.
|
||||
|
||||
return JSON.stringify(merged);
|
||||
}
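// Illustrative example (editor's sketch, not part of the original file):
//   mergeMcpConfigs(['{"mcpServers":{"github_comment":{}}}',
//                    '{"mcpServers":{"my_server":{}}}'])
//     -> '{"mcpServers":{"github_comment":{},"my_server":{}}}'
//   mergeMcpConfigs(["./mcp-config.json"]) -> "./mcp-config.json"
//   (a lone file path is returned unchanged, since files cannot be read here)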
|
||||
|
||||
/**
|
||||
* Parse claudeArgs string into extraArgs record for SDK pass-through
|
||||
* The SDK/CLI will handle --mcp-config, --json-schema, etc.
|
||||
* For allowedTools and disallowedTools, multiple occurrences are accumulated (null-char joined).
|
||||
* Accumulating flags also consume all consecutive non-flag values
|
||||
* (e.g., --allowed-tools "Tool1" "Tool2" "Tool3" captures all three).
|
||||
*/
|
||||
function parseClaudeArgsToExtraArgs(
|
||||
claudeArgs?: string,
|
||||
): Record<string, string | null> {
|
||||
if (!claudeArgs?.trim()) return {};
|
||||
|
||||
const result: Record<string, string | null> = {};
|
||||
const args = parseShellArgs(claudeArgs).filter(
|
||||
(arg): arg is string => typeof arg === "string",
|
||||
);
|
||||
|
||||
for (let i = 0; i < args.length; i++) {
|
||||
const arg = args[i];
|
||||
if (arg?.startsWith("--")) {
|
||||
const flag = arg.slice(2);
|
||||
const nextArg = args[i + 1];
|
||||
|
||||
// Check if next arg is a value (not another flag)
|
||||
if (nextArg && !nextArg.startsWith("--")) {
|
||||
// For accumulating flags, consume all consecutive non-flag values
|
||||
// This handles: --allowed-tools "Tool1" "Tool2" "Tool3"
|
||||
if (ACCUMULATING_FLAGS.has(flag)) {
|
||||
const values: string[] = [];
|
||||
while (i + 1 < args.length && !args[i + 1]?.startsWith("--")) {
|
||||
i++;
|
||||
values.push(args[i]!);
|
||||
}
|
||||
const joinedValues = values.join(ACCUMULATE_DELIMITER);
|
||||
if (result[flag]) {
|
||||
result[flag] =
|
||||
`${result[flag]}${ACCUMULATE_DELIMITER}${joinedValues}`;
|
||||
} else {
|
||||
result[flag] = joinedValues;
|
||||
}
|
||||
} else {
|
||||
result[flag] = nextArg;
|
||||
i++; // Skip the value
|
||||
}
|
||||
} else {
|
||||
result[flag] = null; // Boolean flag
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
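// Illustrative example (editor's sketch, not part of the original file):
//   parseClaudeArgsToExtraArgs('--max-turns 5 --verbose --allowed-tools Bash Edit')
//     -> { "max-turns": "5", "verbose": null, "allowed-tools": "Bash\x00Edit" }
//   (accumulating flags join consecutive values with the null-character delimiter)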
|
||||
|
||||
/**
|
||||
* Parse ClaudeOptions into SDK-compatible options
|
||||
* Uses extraArgs for CLI pass-through instead of duplicating option parsing
|
||||
*/
|
||||
export function parseSdkOptions(options: ClaudeOptions): ParsedSdkOptions {
|
||||
// Determine output verbosity
|
||||
const isDebugMode = process.env.ACTIONS_STEP_DEBUG === "true";
|
||||
const showFullOutput = options.showFullOutput === "true" || isDebugMode;
|
||||
|
||||
// Parse claudeArgs into extraArgs for CLI pass-through
|
||||
const extraArgs = parseClaudeArgsToExtraArgs(options.claudeArgs);
|
||||
|
||||
// Detect if --json-schema is present (for hasJsonSchema flag)
|
||||
const hasJsonSchema = "json-schema" in extraArgs;
|
||||
|
||||
// Extract and merge allowedTools from all sources:
|
||||
// 1. From extraArgs (parsed from claudeArgs - contains tag mode's tools)
|
||||
// - Check both camelCase (--allowedTools) and hyphenated (--allowed-tools) variants
|
||||
// 2. From options.allowedTools (direct input - may be undefined)
|
||||
// This prevents duplicate flags being overwritten when claudeArgs contains --allowedTools
|
||||
const allowedToolsValues = [
|
||||
extraArgs["allowedTools"],
|
||||
extraArgs["allowed-tools"],
|
||||
]
|
||||
.filter(Boolean)
|
||||
.join(ACCUMULATE_DELIMITER);
|
||||
const extraArgsAllowedTools = allowedToolsValues
|
||||
? allowedToolsValues
|
||||
.split(ACCUMULATE_DELIMITER)
|
||||
.flatMap((v) => v.split(","))
|
||||
.map((t) => t.trim())
|
||||
.filter(Boolean)
|
||||
: [];
|
||||
const directAllowedTools = options.allowedTools
|
||||
? options.allowedTools.split(",").map((t) => t.trim())
|
||||
: [];
|
||||
const mergedAllowedTools = [
|
||||
...new Set([...extraArgsAllowedTools, ...directAllowedTools]),
|
||||
];
|
||||
delete extraArgs["allowedTools"];
|
||||
delete extraArgs["allowed-tools"];
|
||||
|
||||
// Same for disallowedTools - check both camelCase and hyphenated variants
|
||||
const disallowedToolsValues = [
|
||||
extraArgs["disallowedTools"],
|
||||
extraArgs["disallowed-tools"],
|
||||
]
|
||||
.filter(Boolean)
|
||||
.join(ACCUMULATE_DELIMITER);
|
||||
const extraArgsDisallowedTools = disallowedToolsValues
|
||||
? disallowedToolsValues
|
||||
.split(ACCUMULATE_DELIMITER)
|
||||
.flatMap((v) => v.split(","))
|
||||
.map((t) => t.trim())
|
||||
.filter(Boolean)
|
||||
: [];
|
||||
const directDisallowedTools = options.disallowedTools
|
||||
? options.disallowedTools.split(",").map((t) => t.trim())
|
||||
: [];
|
||||
const mergedDisallowedTools = [
|
||||
...new Set([...extraArgsDisallowedTools, ...directDisallowedTools]),
|
||||
];
|
||||
delete extraArgs["disallowedTools"];
|
||||
delete extraArgs["disallowed-tools"];
|
||||
|
||||
// Merge multiple --mcp-config values by combining their mcpServers objects
|
||||
// The action prepends its config (github_comment, github_ci, etc.) as inline JSON,
|
||||
// and users may provide their own config as inline JSON or file path
|
||||
if (extraArgs["mcp-config"]) {
|
||||
const mcpConfigValues = extraArgs["mcp-config"].split(ACCUMULATE_DELIMITER);
|
||||
if (mcpConfigValues.length > 1) {
|
||||
extraArgs["mcp-config"] = mergeMcpConfigs(mcpConfigValues);
|
||||
}
|
||||
}
|
||||
|
||||
// Build custom environment
|
||||
const env: Record<string, string | undefined> = { ...process.env };
|
||||
if (process.env.INPUT_ACTION_INPUTS_PRESENT) {
|
||||
env.GITHUB_ACTION_INPUTS = process.env.INPUT_ACTION_INPUTS_PRESENT;
|
||||
}
|
||||
// Set the entrypoint for Claude Code to identify this as the GitHub Action
|
||||
env.CLAUDE_CODE_ENTRYPOINT = "claude-code-github-action";
|
||||
|
||||
// Build system prompt option - default to claude_code preset
|
||||
let systemPrompt: SdkOptions["systemPrompt"];
|
||||
if (options.systemPrompt) {
|
||||
systemPrompt = options.systemPrompt;
|
||||
} else if (options.appendSystemPrompt) {
|
||||
systemPrompt = {
|
||||
type: "preset",
|
||||
preset: "claude_code",
|
||||
append: options.appendSystemPrompt,
|
||||
};
|
||||
} else {
|
||||
// Default to claude_code preset when no custom prompt is specified
|
||||
systemPrompt = {
|
||||
type: "preset",
|
||||
preset: "claude_code",
|
||||
};
|
||||
}
|
||||
|
||||
// Build SDK options - use merged tools from both direct options and claudeArgs
|
||||
const sdkOptions: SdkOptions = {
|
||||
// Direct options from ClaudeOptions inputs
|
||||
model: options.model,
|
||||
maxTurns: options.maxTurns ? parseInt(options.maxTurns, 10) : undefined,
|
||||
allowedTools:
|
||||
mergedAllowedTools.length > 0 ? mergedAllowedTools : undefined,
|
||||
disallowedTools:
|
||||
mergedDisallowedTools.length > 0 ? mergedDisallowedTools : undefined,
|
||||
systemPrompt,
|
||||
fallbackModel: options.fallbackModel,
|
||||
pathToClaudeCodeExecutable: options.pathToClaudeCodeExecutable,
|
||||
|
||||
// Pass through claudeArgs as extraArgs - CLI handles --mcp-config, --json-schema, etc.
|
||||
// Note: allowedTools and disallowedTools have been removed from extraArgs to prevent duplicates
|
||||
extraArgs,
|
||||
env,
|
||||
|
||||
// Load settings from sources - prefer user's --setting-sources if provided, otherwise use all sources
|
||||
// This ensures users can override the default behavior (e.g., --setting-sources user to avoid in-repo configs)
|
||||
settingSources: extraArgs["setting-sources"]
|
||||
? (extraArgs["setting-sources"].split(
|
||||
",",
|
||||
) as SdkOptions["settingSources"])
|
||||
: ["user", "project", "local"],
|
||||
};
|
||||
|
||||
// Remove setting-sources from extraArgs to avoid passing it twice
|
||||
delete extraArgs["setting-sources"];
|
||||
|
||||
return {
|
||||
sdkOptions,
|
||||
showFullOutput,
|
||||
hasJsonSchema,
|
||||
};
|
||||
}
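// Illustrative example (editor's sketch; the option values are hypothetical):
//   parseSdkOptions({
//     claudeArgs: '--allowedTools "Bash(git:*)" --json-schema schema.json',
//     allowedTools: "Edit,Read",
//   })
//   yields sdkOptions.allowedTools = ["Bash(git:*)", "Edit", "Read"],
//   hasJsonSchema = true, and the remaining flags passed through via extraArgs.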
|
||||
228
base-action/src/run-claude-sdk.ts
Normal file
@@ -0,0 +1,228 @@
|
||||
import * as core from "@actions/core";
|
||||
import { readFile, writeFile, access } from "fs/promises";
|
||||
import { dirname, join } from "path";
|
||||
import { query } from "@anthropic-ai/claude-agent-sdk";
|
||||
import type {
|
||||
SDKMessage,
|
||||
SDKResultMessage,
|
||||
SDKUserMessage,
|
||||
} from "@anthropic-ai/claude-agent-sdk";
|
||||
import type { ParsedSdkOptions } from "./parse-sdk-options";
|
||||
|
||||
const EXECUTION_FILE = `${process.env.RUNNER_TEMP}/claude-execution-output.json`;
|
||||
|
||||
/** Filename for the user request file, written by prompt generation */
|
||||
const USER_REQUEST_FILENAME = "claude-user-request.txt";
|
||||
|
||||
/**
|
||||
* Check if a file exists
|
||||
*/
|
||||
async function fileExists(path: string): Promise<boolean> {
|
||||
try {
|
||||
await access(path);
|
||||
return true;
|
||||
} catch {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Creates a prompt configuration for the SDK.
|
||||
* If a user request file exists alongside the prompt file, returns a multi-block
|
||||
* SDKUserMessage that enables slash command processing in the CLI.
|
||||
* Otherwise, returns the prompt as a simple string.
|
||||
*/
|
||||
async function createPromptConfig(
|
||||
promptPath: string,
|
||||
showFullOutput: boolean,
|
||||
): Promise<string | AsyncIterable<SDKUserMessage>> {
|
||||
const promptContent = await readFile(promptPath, "utf-8");
|
||||
|
||||
// Check for user request file in the same directory
|
||||
const userRequestPath = join(dirname(promptPath), USER_REQUEST_FILENAME);
|
||||
const hasUserRequest = await fileExists(userRequestPath);
|
||||
|
||||
if (!hasUserRequest) {
|
||||
// No user request file - use simple string prompt
|
||||
return promptContent;
|
||||
}
|
||||
|
||||
// User request file exists - create multi-block message
|
||||
const userRequest = await readFile(userRequestPath, "utf-8");
|
||||
if (showFullOutput) {
|
||||
console.log("Using multi-block message with user request:", userRequest);
|
||||
} else {
|
||||
console.log("Using multi-block message with user request (content hidden)");
|
||||
}
|
||||
|
||||
// Create an async generator that yields a single multi-block message
|
||||
// The context/instructions go first, then the user's actual request last
|
||||
// This allows the CLI to detect and process slash commands in the user request
|
||||
async function* createMultiBlockMessage(): AsyncGenerator<SDKUserMessage> {
|
||||
yield {
|
||||
type: "user",
|
||||
session_id: "",
|
||||
message: {
|
||||
role: "user",
|
||||
content: [
|
||||
{ type: "text", text: promptContent }, // Instructions + GitHub context
|
||||
{ type: "text", text: userRequest }, // User's request (may be a slash command)
|
||||
],
|
||||
},
|
||||
parent_tool_use_id: null,
|
||||
};
|
||||
}
|
||||
|
||||
return createMultiBlockMessage();
|
||||
}
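// Illustrative behaviour (editor's sketch, not part of the original file):
//   with only the prompt file present, the prompt is returned as a plain string;
//   when claude-user-request.txt sits beside it, a single two-block user message
//   is yielded instead (context first, user request last) so the CLI can still
//   detect a leading slash command in the request.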
|
||||
|
||||
/**
|
||||
* Sanitizes SDK output to match CLI sanitization behavior
|
||||
*/
|
||||
function sanitizeSdkOutput(
|
||||
message: SDKMessage,
|
||||
showFullOutput: boolean,
|
||||
): string | null {
|
||||
if (showFullOutput) {
|
||||
return JSON.stringify(message, null, 2);
|
||||
}
|
||||
|
||||
// System initialization - safe to show
|
||||
if (message.type === "system" && message.subtype === "init") {
|
||||
return JSON.stringify(
|
||||
{
|
||||
type: "system",
|
||||
subtype: "init",
|
||||
message: "Claude Code initialized",
|
||||
model: "model" in message ? message.model : "unknown",
|
||||
},
|
||||
null,
|
||||
2,
|
||||
);
|
||||
}
|
||||
|
||||
// Result messages - show sanitized summary
|
||||
if (message.type === "result") {
|
||||
const resultMsg = message as SDKResultMessage;
|
||||
return JSON.stringify(
|
||||
{
|
||||
type: "result",
|
||||
subtype: resultMsg.subtype,
|
||||
is_error: resultMsg.is_error,
|
||||
duration_ms: resultMsg.duration_ms,
|
||||
num_turns: resultMsg.num_turns,
|
||||
total_cost_usd: resultMsg.total_cost_usd,
|
||||
permission_denials: resultMsg.permission_denials,
|
||||
},
|
||||
null,
|
||||
2,
|
||||
);
|
||||
}
|
||||
|
||||
// Suppress other message types in non-full-output mode
|
||||
return null;
|
||||
}
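// Illustrative behaviour (editor's sketch, not part of the original file):
//   with showFullOutput=false, a system/init message logs only the model name,
//   a result message logs a cost/duration/turns summary, and all other message
//   types return null (suppressed); with showFullOutput=true every message is
//   pretty-printed in full.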
|
||||
|
||||
/**
|
||||
* Run Claude using the Agent SDK
|
||||
*/
|
||||
export async function runClaudeWithSdk(
|
||||
promptPath: string,
|
||||
{ sdkOptions, showFullOutput, hasJsonSchema }: ParsedSdkOptions,
|
||||
): Promise<void> {
|
||||
// Create prompt configuration - may be a string or multi-block message
|
||||
const prompt = await createPromptConfig(promptPath, showFullOutput);
|
||||
|
||||
if (!showFullOutput) {
|
||||
console.log(
|
||||
"Running Claude Code via SDK (full output hidden for security)...",
|
||||
);
|
||||
console.log(
|
||||
"Rerun in debug mode or enable `show_full_output: true` in your workflow file for full output.",
|
||||
);
|
||||
}
|
||||
|
||||
console.log(`Running Claude with prompt from file: ${promptPath}`);
|
||||
// Log SDK options without env (which could contain sensitive data)
|
||||
const { env, ...optionsToLog } = sdkOptions;
|
||||
console.log("SDK options:", JSON.stringify(optionsToLog, null, 2));
|
||||
|
||||
const messages: SDKMessage[] = [];
|
||||
let resultMessage: SDKResultMessage | undefined;
|
||||
|
||||
try {
|
||||
for await (const message of query({ prompt, options: sdkOptions })) {
|
||||
messages.push(message);
|
||||
|
||||
const sanitized = sanitizeSdkOutput(message, showFullOutput);
|
||||
if (sanitized) {
|
||||
console.log(sanitized);
|
||||
}
|
||||
|
||||
if (message.type === "result") {
|
||||
resultMessage = message as SDKResultMessage;
|
||||
}
|
||||
}
|
||||
} catch (error) {
|
||||
console.error("SDK execution error:", error);
|
||||
core.setOutput("conclusion", "failure");
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
// Write execution file
|
||||
try {
|
||||
await writeFile(EXECUTION_FILE, JSON.stringify(messages, null, 2));
|
||||
console.log(`Log saved to ${EXECUTION_FILE}`);
|
||||
core.setOutput("execution_file", EXECUTION_FILE);
|
||||
} catch (error) {
|
||||
core.warning(`Failed to write execution file: ${error}`);
|
||||
}
|
||||
|
||||
// Extract and set session_id from system.init message
|
||||
const initMessage = messages.find(
|
||||
(m) => m.type === "system" && "subtype" in m && m.subtype === "init",
|
||||
);
|
||||
if (initMessage && "session_id" in initMessage && initMessage.session_id) {
|
||||
core.setOutput("session_id", initMessage.session_id);
|
||||
core.info(`Set session_id: ${initMessage.session_id}`);
|
||||
}
|
||||
|
||||
if (!resultMessage) {
|
||||
core.setOutput("conclusion", "failure");
|
||||
core.error("No result message received from Claude");
|
||||
process.exit(1);
|
||||
}
|
||||
|
||||
const isSuccess = resultMessage.subtype === "success";
|
||||
core.setOutput("conclusion", isSuccess ? "success" : "failure");
|
||||
|
||||
// Handle structured output
|
||||
if (hasJsonSchema) {
|
||||
if (
|
||||
isSuccess &&
|
||||
"structured_output" in resultMessage &&
|
||||
resultMessage.structured_output
|
||||
) {
|
||||
const structuredOutputJson = JSON.stringify(
|
||||
resultMessage.structured_output,
|
||||
);
|
||||
core.setOutput("structured_output", structuredOutputJson);
|
||||
core.info(
|
||||
`Set structured_output with ${Object.keys(resultMessage.structured_output as object).length} field(s)`,
|
||||
);
|
||||
} else {
|
||||
core.setFailed(
|
||||
`--json-schema was provided but Claude did not return structured_output. Result subtype: ${resultMessage.subtype}`,
|
||||
);
|
||||
core.setOutput("conclusion", "failure");
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
|
||||
if (!isSuccess) {
|
||||
if ("errors" in resultMessage && resultMessage.errors) {
|
||||
core.error(`Execution failed: ${resultMessage.errors.join(", ")}`);
|
||||
}
|
||||
process.exit(1);
|
||||
}
|
||||
}
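// Editor's note (not part of the original file): the function above sets the
// GitHub Action outputs `conclusion` ("success"/"failure"), `execution_file`
// (path to the JSON message log), `session_id` (from the system init message),
// and, when a --json-schema flag was supplied, `structured_output`.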
|
||||
@@ -1,16 +1,5 @@
|
||||
import * as core from "@actions/core";
|
||||
import { exec } from "child_process";
|
||||
import { promisify } from "util";
|
||||
import { unlink, writeFile, stat } from "fs/promises";
|
||||
import { createWriteStream } from "fs";
|
||||
import { spawn } from "child_process";
|
||||
import { parse as parseShellArgs } from "shell-quote";
|
||||
|
||||
const execAsync = promisify(exec);
|
||||
|
||||
const PIPE_PATH = `${process.env.RUNNER_TEMP}/claude_prompt_pipe`;
|
||||
const EXECUTION_FILE = `${process.env.RUNNER_TEMP}/claude-execution-output.json`;
|
||||
const BASE_ARGS = ["--verbose", "--output-format", "stream-json"];
|
||||
import { runClaudeWithSdk } from "./run-claude-sdk";
|
||||
import { parseSdkOptions } from "./parse-sdk-options";
|
||||
|
||||
export type ClaudeOptions = {
|
||||
claudeArgs?: string;
|
||||
@@ -22,236 +11,11 @@ export type ClaudeOptions = {
|
||||
mcpConfig?: string;
|
||||
systemPrompt?: string;
|
||||
appendSystemPrompt?: string;
|
||||
claudeEnv?: string;
|
||||
fallbackModel?: string;
|
||||
showFullOutput?: string;
|
||||
};
|
||||
|
||||
type PreparedConfig = {
|
||||
claudeArgs: string[];
|
||||
promptPath: string;
|
||||
env: Record<string, string>;
|
||||
};
|
||||
|
||||
export function prepareRunConfig(
|
||||
promptPath: string,
|
||||
options: ClaudeOptions,
|
||||
): PreparedConfig {
|
||||
// Build Claude CLI arguments:
|
||||
// 1. Prompt flag (always first)
|
||||
// 2. User's claudeArgs (full control)
|
||||
// 3. BASE_ARGS (always last, cannot be overridden)
|
||||
|
||||
const claudeArgs = ["-p"];
|
||||
|
||||
// Parse and add user's custom Claude arguments
|
||||
if (options.claudeArgs?.trim()) {
|
||||
const parsed = parseShellArgs(options.claudeArgs);
|
||||
const customArgs = parsed.filter(
|
||||
(arg): arg is string => typeof arg === "string",
|
||||
);
|
||||
claudeArgs.push(...customArgs);
|
||||
}
|
||||
|
||||
// BASE_ARGS are always appended last (cannot be overridden)
|
||||
claudeArgs.push(...BASE_ARGS);
|
||||
|
||||
const customEnv: Record<string, string> = {};
|
||||
|
||||
if (process.env.INPUT_ACTION_INPUTS_PRESENT) {
|
||||
customEnv.GITHUB_ACTION_INPUTS = process.env.INPUT_ACTION_INPUTS_PRESENT;
|
||||
}
|
||||
|
||||
return {
|
||||
claudeArgs,
|
||||
promptPath,
|
||||
env: customEnv,
|
||||
};
|
||||
}
|
||||
|
||||
export async function runClaude(promptPath: string, options: ClaudeOptions) {
|
||||
const config = prepareRunConfig(promptPath, options);
|
||||
|
||||
// Create a named pipe
|
||||
try {
|
||||
await unlink(PIPE_PATH);
|
||||
} catch (e) {
|
||||
// Ignore if file doesn't exist
|
||||
}
|
||||
|
||||
// Create the named pipe
|
||||
await execAsync(`mkfifo "${PIPE_PATH}"`);
|
||||
|
||||
// Log prompt file size
|
||||
let promptSize = "unknown";
|
||||
try {
|
||||
const stats = await stat(config.promptPath);
|
||||
promptSize = stats.size.toString();
|
||||
} catch (e) {
|
||||
// Ignore error
|
||||
}
|
||||
|
||||
console.log(`Prompt file size: ${promptSize} bytes`);
|
||||
|
||||
// Log custom environment variables if any
|
||||
const customEnvKeys = Object.keys(config.env).filter(
|
||||
(key) => key !== "CLAUDE_ACTION_INPUTS_PRESENT",
|
||||
);
|
||||
if (customEnvKeys.length > 0) {
|
||||
console.log(`Custom environment variables: ${customEnvKeys.join(", ")}`);
|
||||
}
|
||||
|
||||
// Log custom arguments if any
|
||||
if (options.claudeArgs && options.claudeArgs.trim() !== "") {
|
||||
console.log(`Custom Claude arguments: ${options.claudeArgs}`);
|
||||
}
|
||||
|
||||
// Output to console
|
||||
console.log(`Running Claude with prompt from file: ${config.promptPath}`);
|
||||
console.log(`Full command: claude ${config.claudeArgs.join(" ")}`);
|
||||
|
||||
// Start sending prompt to pipe in background
|
||||
const catProcess = spawn("cat", [config.promptPath], {
|
||||
stdio: ["ignore", "pipe", "inherit"],
|
||||
});
|
||||
const pipeStream = createWriteStream(PIPE_PATH);
|
||||
catProcess.stdout.pipe(pipeStream);
|
||||
|
||||
catProcess.on("error", (error) => {
|
||||
console.error("Error reading prompt file:", error);
|
||||
pipeStream.destroy();
|
||||
});
|
||||
|
||||
// Use custom executable path if provided, otherwise default to "claude"
|
||||
const claudeExecutable = options.pathToClaudeCodeExecutable || "claude";
|
||||
|
||||
const claudeProcess = spawn(claudeExecutable, config.claudeArgs, {
|
||||
stdio: ["pipe", "pipe", "inherit"],
|
||||
env: {
|
||||
...process.env,
|
||||
...config.env,
|
||||
},
|
||||
});
|
||||
|
||||
// Handle Claude process errors
|
||||
claudeProcess.on("error", (error) => {
|
||||
console.error("Error spawning Claude process:", error);
|
||||
pipeStream.destroy();
|
||||
});
|
||||
|
||||
// Capture output for parsing execution metrics
|
||||
let output = "";
|
||||
claudeProcess.stdout.on("data", (data) => {
|
||||
const text = data.toString();
|
||||
|
||||
// Try to parse as JSON and pretty print if it's on a single line
|
||||
const lines = text.split("\n");
|
||||
lines.forEach((line: string, index: number) => {
|
||||
if (line.trim() === "") return;
|
||||
|
||||
try {
|
||||
// Check if this line is a JSON object
|
||||
const parsed = JSON.parse(line);
|
||||
const prettyJson = JSON.stringify(parsed, null, 2);
|
||||
process.stdout.write(prettyJson);
|
||||
if (index < lines.length - 1 || text.endsWith("\n")) {
|
||||
process.stdout.write("\n");
|
||||
}
|
||||
} catch (e) {
|
||||
// Not a JSON object, print as is
|
||||
process.stdout.write(line);
|
||||
if (index < lines.length - 1 || text.endsWith("\n")) {
|
||||
process.stdout.write("\n");
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
output += text;
|
||||
});
|
||||
|
||||
// Handle stdout errors
|
||||
claudeProcess.stdout.on("error", (error) => {
|
||||
console.error("Error reading Claude stdout:", error);
|
||||
});
|
||||
|
||||
// Pipe from named pipe to Claude
|
||||
const pipeProcess = spawn("cat", [PIPE_PATH]);
|
||||
pipeProcess.stdout.pipe(claudeProcess.stdin);
|
||||
|
||||
// Handle pipe process errors
|
||||
pipeProcess.on("error", (error) => {
|
||||
console.error("Error reading from named pipe:", error);
|
||||
claudeProcess.kill("SIGTERM");
|
||||
});
|
||||
|
||||
// Wait for Claude to finish
|
||||
const exitCode = await new Promise<number>((resolve) => {
|
||||
claudeProcess.on("close", (code) => {
|
||||
resolve(code || 0);
|
||||
});
|
||||
|
||||
claudeProcess.on("error", (error) => {
|
||||
console.error("Claude process error:", error);
|
||||
resolve(1);
|
||||
});
|
||||
});
|
||||
|
||||
// Clean up processes
|
||||
try {
|
||||
catProcess.kill("SIGTERM");
|
||||
} catch (e) {
|
||||
// Process may already be dead
|
||||
}
|
||||
try {
|
||||
pipeProcess.kill("SIGTERM");
|
||||
} catch (e) {
|
||||
// Process may already be dead
|
||||
}
|
||||
|
||||
// Clean up pipe file
|
||||
try {
|
||||
await unlink(PIPE_PATH);
|
||||
} catch (e) {
|
||||
// Ignore errors during cleanup
|
||||
}
|
||||
|
||||
// Set conclusion based on exit code
|
||||
if (exitCode === 0) {
|
||||
// Try to process the output and save execution metrics
|
||||
try {
|
||||
await writeFile("output.txt", output);
|
||||
|
||||
// Process output.txt into JSON and save to execution file
|
||||
// Increase maxBuffer from Node.js default of 1MB to 10MB to handle large Claude outputs
|
||||
const { stdout: jsonOutput } = await execAsync("jq -s '.' output.txt", {
|
||||
maxBuffer: 10 * 1024 * 1024,
|
||||
});
|
||||
await writeFile(EXECUTION_FILE, jsonOutput);
|
||||
|
||||
console.log(`Log saved to ${EXECUTION_FILE}`);
|
||||
} catch (e) {
|
||||
core.warning(`Failed to process output for execution metrics: ${e}`);
|
||||
}
|
||||
|
||||
core.setOutput("conclusion", "success");
|
||||
core.setOutput("execution_file", EXECUTION_FILE);
|
||||
} else {
|
||||
core.setOutput("conclusion", "failure");
|
||||
|
||||
// Still try to save execution file if we have output
|
||||
if (output) {
|
||||
try {
|
||||
await writeFile("output.txt", output);
|
||||
// Increase maxBuffer from Node.js default of 1MB to 10MB to handle large Claude outputs
|
||||
const { stdout: jsonOutput } = await execAsync("jq -s '.' output.txt", {
|
||||
maxBuffer: 10 * 1024 * 1024,
|
||||
});
|
||||
await writeFile(EXECUTION_FILE, jsonOutput);
|
||||
core.setOutput("execution_file", EXECUTION_FILE);
|
||||
} catch (e) {
|
||||
// Ignore errors when processing output during failure
|
||||
}
|
||||
}
|
||||
|
||||
process.exit(exitCode);
|
||||
}
|
||||
const parsedOptions = parseSdkOptions(options);
|
||||
return runClaudeWithSdk(promptPath, parsedOptions);
|
||||
}
|
||||
|
||||
@@ -1,39 +1,50 @@
|
||||
/**
|
||||
* Validates the environment variables required for running Claude Code
|
||||
* based on the selected provider (Anthropic API, AWS Bedrock, or Google Vertex AI)
|
||||
* based on the selected provider (Anthropic API, AWS Bedrock, Google Vertex AI, or Microsoft Foundry)
|
||||
*/
|
||||
export function validateEnvironmentVariables() {
|
||||
const useBedrock = process.env.CLAUDE_CODE_USE_BEDROCK === "1";
|
||||
const useVertex = process.env.CLAUDE_CODE_USE_VERTEX === "1";
|
||||
const useFoundry = process.env.CLAUDE_CODE_USE_FOUNDRY === "1";
|
||||
const anthropicApiKey = process.env.ANTHROPIC_API_KEY;
|
||||
const claudeCodeOAuthToken = process.env.CLAUDE_CODE_OAUTH_TOKEN;
|
||||
|
||||
const errors: string[] = [];
|
||||
|
||||
if (useBedrock && useVertex) {
|
||||
// Check for mutual exclusivity between providers
|
||||
const activeProviders = [useBedrock, useVertex, useFoundry].filter(Boolean);
|
||||
if (activeProviders.length > 1) {
|
||||
errors.push(
|
||||
"Cannot use both Bedrock and Vertex AI simultaneously. Please set only one provider.",
|
||||
"Cannot use multiple providers simultaneously. Please set only one of: CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY.",
|
||||
);
|
||||
}
|
||||
|
||||
if (!useBedrock && !useVertex) {
|
||||
if (!useBedrock && !useVertex && !useFoundry) {
|
||||
if (!anthropicApiKey && !claudeCodeOAuthToken) {
|
||||
errors.push(
|
||||
"Either ANTHROPIC_API_KEY or CLAUDE_CODE_OAUTH_TOKEN is required when using direct Anthropic API.",
|
||||
);
|
||||
}
|
||||
} else if (useBedrock) {
|
||||
const requiredBedrockVars = {
|
||||
AWS_REGION: process.env.AWS_REGION,
|
||||
AWS_ACCESS_KEY_ID: process.env.AWS_ACCESS_KEY_ID,
|
||||
AWS_SECRET_ACCESS_KEY: process.env.AWS_SECRET_ACCESS_KEY,
|
||||
};
|
||||
const awsRegion = process.env.AWS_REGION;
|
||||
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID;
|
||||
const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
|
||||
const awsBearerToken = process.env.AWS_BEARER_TOKEN_BEDROCK;
|
||||
|
||||
Object.entries(requiredBedrockVars).forEach(([key, value]) => {
|
||||
if (!value) {
|
||||
errors.push(`${key} is required when using AWS Bedrock.`);
|
||||
}
|
||||
});
|
||||
// AWS_REGION is always required for Bedrock
|
||||
if (!awsRegion) {
|
||||
errors.push("AWS_REGION is required when using AWS Bedrock.");
|
||||
}
|
||||
|
||||
// Either bearer token OR access key credentials must be provided
|
||||
const hasAccessKeyCredentials = awsAccessKeyId && awsSecretAccessKey;
|
||||
const hasBearerToken = awsBearerToken;
|
||||
|
||||
if (!hasAccessKeyCredentials && !hasBearerToken) {
|
||||
errors.push(
|
||||
"Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock.",
|
||||
);
|
||||
}
|
||||
} else if (useVertex) {
|
||||
const requiredVertexVars = {
|
||||
ANTHROPIC_VERTEX_PROJECT_ID: process.env.ANTHROPIC_VERTEX_PROJECT_ID,
|
||||
@@ -45,6 +56,16 @@ export function validateEnvironmentVariables() {
|
||||
errors.push(`${key} is required when using Google Vertex AI.`);
|
||||
}
|
||||
});
|
||||
} else if (useFoundry) {
|
||||
const foundryResource = process.env.ANTHROPIC_FOUNDRY_RESOURCE;
|
||||
const foundryBaseUrl = process.env.ANTHROPIC_FOUNDRY_BASE_URL;
|
||||
|
||||
// Either resource name or base URL is required
|
||||
if (!foundryResource && !foundryBaseUrl) {
|
||||
errors.push(
|
||||
"Either ANTHROPIC_FOUNDRY_RESOURCE or ANTHROPIC_FOUNDRY_BASE_URL is required when using Microsoft Foundry.",
|
||||
);
|
||||
}
|
||||
}
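  // Illustrative env combinations that satisfy the checks above (editor's sketch):
  //   Anthropic API: ANTHROPIC_API_KEY or CLAUDE_CODE_OAUTH_TOKEN
  //   Bedrock:       AWS_REGION plus either AWS_BEARER_TOKEN_BEDROCK, or both
  //                  AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  //   Foundry:       ANTHROPIC_FOUNDRY_RESOURCE or ANTHROPIC_FOUNDRY_BASE_URL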
|
||||
|
||||
if (errors.length > 0) {
|
||||
|
||||
@@ -9,4 +9,4 @@ fi
|
||||
# Run the test workflow locally
|
||||
# You'll need to provide your ANTHROPIC_API_KEY
|
||||
echo "Running action locally with act..."
|
||||
act push --secret ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" -W .github/workflows/test-action.yml --container-architecture linux/amd64
|
||||
act push --secret ANTHROPIC_API_KEY="$ANTHROPIC_API_KEY" -W .github/workflows/test-base-action.yml --container-architecture linux/amd64
|
||||
706
base-action/test/install-plugins.test.ts
Normal file
@@ -0,0 +1,706 @@
|
||||
#!/usr/bin/env bun
|
||||
|
||||
import { describe, test, expect, mock, spyOn, afterEach } from "bun:test";
|
||||
import { installPlugins } from "../src/install-plugins";
|
||||
import * as childProcess from "child_process";
|
||||
|
||||
describe("installPlugins", () => {
|
||||
let spawnSpy: ReturnType<typeof spyOn> | undefined;
|
||||
|
||||
afterEach(() => {
|
||||
// Restore original spawn after each test
|
||||
if (spawnSpy) {
|
||||
spawnSpy.mockRestore();
|
||||
}
|
||||
});
|
||||
|
||||
function createMockSpawn(
|
||||
exitCode: number | null = 0,
|
||||
shouldError: boolean = false,
|
||||
) {
|
||||
const mockProcess = {
|
||||
on: mock((event: string, handler: Function) => {
|
||||
if (event === "close" && !shouldError) {
|
||||
// Simulate successful close
|
||||
setTimeout(() => handler(exitCode), 0);
|
||||
} else if (event === "error" && shouldError) {
|
||||
// Simulate error
|
||||
setTimeout(() => handler(new Error("spawn error")), 0);
|
||||
}
|
||||
return mockProcess;
|
||||
}),
|
||||
};
|
||||
|
||||
spawnSpy = spyOn(childProcess, "spawn").mockImplementation(
|
||||
() => mockProcess as any,
|
||||
);
|
||||
return spawnSpy;
|
||||
}
|
||||
|
||||
test("should not call spawn when no plugins are specified", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "");
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should not call spawn when plugins is undefined", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, undefined);
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should not call spawn when plugins is only whitespace", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, " ");
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should install a single plugin with default executable", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
// Only call: install plugin (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should install multiple plugins sequentially", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "plugin1\nplugin2\nplugin3");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(3);
|
||||
// Install plugins (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "plugin1"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "plugin2"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
3,
|
||||
"claude",
|
||||
["plugin", "install", "plugin3"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should use custom claude executable path when provided", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "test-plugin", "/custom/path/to/claude");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
// Only call: install plugin (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"/custom/path/to/claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should trim whitespace from plugin names before installation", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, " plugin1 \n plugin2 ");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
// Install plugins (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "plugin1"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "plugin2"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should skip empty entries in plugin list", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "plugin1\n\nplugin2");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
// Install plugins (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "plugin1"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "plugin2"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should handle plugin installation error and throw", async () => {
|
||||
createMockSpawn(1, false); // Exit code 1
|
||||
|
||||
await expect(installPlugins(undefined, "failing-plugin")).rejects.toThrow(
|
||||
"Failed to install plugin 'failing-plugin' (exit code: 1)",
|
||||
);
|
||||
});
|
||||
|
||||
test("should handle null exit code (process terminated by signal)", async () => {
|
||||
createMockSpawn(null, false); // Exit code null (terminated by signal)
|
||||
|
||||
await expect(
|
||||
installPlugins(undefined, "terminated-plugin"),
|
||||
).rejects.toThrow(
|
||||
"Failed to install plugin 'terminated-plugin': process terminated by signal",
|
||||
);
|
||||
});
|
||||
|
||||
test("should stop installation on first error", async () => {
|
||||
const spy = createMockSpawn(1, false); // Exit code 1
|
||||
|
||||
await expect(
|
||||
installPlugins(undefined, "plugin1\nplugin2\nplugin3"),
|
||||
).rejects.toThrow("Failed to install plugin 'plugin1' (exit code: 1)");
|
||||
|
||||
// Should only try to install first plugin before failing
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
test("should handle plugins with special characters in names", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "org/plugin-name\n@scope/plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
// Install plugins (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "org/plugin-name"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "@scope/plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should handle spawn errors", async () => {
|
||||
createMockSpawn(0, true); // Trigger error event
|
||||
|
||||
await expect(installPlugins(undefined, "test-plugin")).rejects.toThrow(
|
||||
"Failed to install plugin 'test-plugin': spawn error",
|
||||
);
|
||||
});
|
||||
|
||||
test("should install plugins with custom executable and multiple plugins", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
undefined,
|
||||
"plugin-a\nplugin-b",
|
||||
"/usr/local/bin/claude-custom",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
// Install plugins (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"/usr/local/bin/claude-custom",
|
||||
["plugin", "install", "plugin-a"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"/usr/local/bin/claude-custom",
|
||||
["plugin", "install", "plugin-b"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should reject plugin names with command injection attempts", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
// Should throw due to invalid characters (semicolon and spaces)
|
||||
await expect(
|
||||
installPlugins(undefined, "plugin-name; rm -rf /"),
|
||||
).rejects.toThrow("Invalid plugin name format");
|
||||
|
||||
// Mock should never be called because validation fails first
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject plugin names with path traversal using ../", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(
|
||||
installPlugins(undefined, "../../../malicious-plugin"),
|
||||
).rejects.toThrow("Invalid plugin name format");
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject plugin names with path traversal using ./", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(
|
||||
installPlugins(undefined, "./../../@scope/package"),
|
||||
).rejects.toThrow("Invalid plugin name format");
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject plugin names with consecutive dots", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(installPlugins(undefined, ".../.../package")).rejects.toThrow(
|
||||
"Invalid plugin name format",
|
||||
);
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject plugin names with hidden path traversal", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(installPlugins(undefined, "package/../other")).rejects.toThrow(
|
||||
"Invalid plugin name format",
|
||||
);
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should accept plugin names with single dots in version numbers", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "plugin-v1.0.2");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
// Only call: install plugin (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "plugin-v1.0.2"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept plugin names with multiple dots in semantic versions", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(undefined, "@scope/plugin-v1.0.0-beta.1");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
// Only call: install plugin (no marketplace without explicit marketplace input)
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "@scope/plugin-v1.0.0-beta.1"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should reject Unicode homoglyph path traversal attempts", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
// Using fullwidth dots (U+FF0E) and fullwidth solidus (U+FF0F)
|
||||
    await expect(installPlugins(undefined, "．．／malicious")).rejects.toThrow(
|
||||
"Invalid plugin name format",
|
||||
);
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject path traversal at end of path", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(installPlugins(undefined, "package/..")).rejects.toThrow(
|
||||
"Invalid plugin name format",
|
||||
);
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject single dot directory reference", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(installPlugins(undefined, "package/.")).rejects.toThrow(
|
||||
"Invalid plugin name format",
|
||||
);
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject path traversal in middle of path", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(installPlugins(undefined, "package/../other")).rejects.toThrow(
|
||||
"Invalid plugin name format",
|
||||
);
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
// Marketplace functionality tests
|
||||
test("should add a single marketplace before installing plugins", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
"https://github.com/user/marketplace.git",
|
||||
"test-plugin",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
// First call: add marketplace
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
[
|
||||
"plugin",
|
||||
"marketplace",
|
||||
"add",
|
||||
"https://github.com/user/marketplace.git",
|
||||
],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
// Second call: install plugin
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should add multiple marketplaces with newline separation", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
"https://github.com/user/m1.git\nhttps://github.com/user/m2.git",
|
||||
"test-plugin",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(3); // 2 marketplaces + 1 plugin
|
||||
// First two calls: add marketplaces
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "https://github.com/user/m1.git"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "https://github.com/user/m2.git"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
// Third call: install plugin
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
3,
|
||||
"claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should add marketplaces before installing multiple plugins", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
"https://github.com/user/marketplace.git",
|
||||
"plugin1\nplugin2",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(3); // 1 marketplace + 2 plugins
|
||||
// First call: add marketplace
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
[
|
||||
"plugin",
|
||||
"marketplace",
|
||||
"add",
|
||||
"https://github.com/user/marketplace.git",
|
||||
],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
// Next calls: install plugins
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "plugin1"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
3,
|
||||
"claude",
|
||||
["plugin", "install", "plugin2"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should handle only marketplaces without plugins", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("https://github.com/user/marketplace.git", undefined);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
[
|
||||
"plugin",
|
||||
"marketplace",
|
||||
"add",
|
||||
"https://github.com/user/marketplace.git",
|
||||
],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should skip empty marketplace entries", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
"https://github.com/user/m1.git\n\nhttps://github.com/user/m2.git",
|
||||
"test-plugin",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(3); // 2 marketplaces (skip empty) + 1 plugin
|
||||
});
|
||||
|
||||
test("should trim whitespace from marketplace URLs", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
" https://github.com/user/marketplace.git \n https://github.com/user/m2.git ",
|
||||
"test-plugin",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(3);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
[
|
||||
"plugin",
|
||||
"marketplace",
|
||||
"add",
|
||||
"https://github.com/user/marketplace.git",
|
||||
],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "https://github.com/user/m2.git"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should reject invalid marketplace URL format", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(
|
||||
installPlugins("not-a-valid-url", "test-plugin"),
|
||||
).rejects.toThrow("Invalid marketplace URL format");
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject marketplace URL without .git extension", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(
|
||||
installPlugins("https://github.com/user/marketplace", "test-plugin"),
|
||||
).rejects.toThrow("Invalid marketplace URL format");
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should reject marketplace URL with non-https protocol", async () => {
|
||||
const spy = createMockSpawn();
|
||||
|
||||
await expect(
|
||||
installPlugins("http://github.com/user/marketplace.git", "test-plugin"),
|
||||
).rejects.toThrow("Invalid marketplace URL format");
|
||||
|
||||
expect(spy).not.toHaveBeenCalled();
|
||||
});
|
||||
|
||||
test("should skip whitespace-only marketplace input", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(" ", "test-plugin");
|
||||
|
||||
// Should skip marketplaces and only install plugin
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should handle marketplace addition error", async () => {
|
||||
createMockSpawn(1, false); // Exit code 1
|
||||
|
||||
await expect(
|
||||
installPlugins("https://github.com/user/marketplace.git", "test-plugin"),
|
||||
).rejects.toThrow(
|
||||
"Failed to add marketplace 'https://github.com/user/marketplace.git' (exit code: 1)",
|
||||
);
|
||||
});
|
||||
|
||||
test("should stop if marketplace addition fails before installing plugins", async () => {
|
||||
const spy = createMockSpawn(1, false); // Exit code 1
|
||||
|
||||
await expect(
|
||||
installPlugins(
|
||||
"https://github.com/user/marketplace.git",
|
||||
"plugin1\nplugin2",
|
||||
),
|
||||
).rejects.toThrow("Failed to add marketplace");
|
||||
|
||||
// Should only try to add marketplace, not install any plugins
|
||||
expect(spy).toHaveBeenCalledTimes(1);
|
||||
});
|
||||
|
||||
test("should use custom executable for marketplace operations", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
"https://github.com/user/marketplace.git",
|
||||
"test-plugin",
|
||||
"/custom/path/to/claude",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"/custom/path/to/claude",
|
||||
[
|
||||
"plugin",
|
||||
"marketplace",
|
||||
"add",
|
||||
"https://github.com/user/marketplace.git",
|
||||
],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"/custom/path/to/claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
// Local marketplace path tests
|
||||
test("should accept local marketplace path with ./", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("./my-local-marketplace", "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "./my-local-marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "install", "test-plugin"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept local marketplace path with absolute Unix path", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("/home/user/my-marketplace", "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "/home/user/my-marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept local marketplace path with Windows absolute path", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("C:\\Users\\user\\marketplace", "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "C:\\Users\\user\\marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept mixed local and remote marketplaces", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins(
|
||||
"./local-marketplace\nhttps://github.com/user/remote.git",
|
||||
"test-plugin",
|
||||
);
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(3);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "./local-marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
2,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "https://github.com/user/remote.git"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept local path with ../ (parent directory)", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("../shared-plugins/marketplace", "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "../shared-plugins/marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept local path with nested directories", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("./plugins/my-org/my-marketplace", "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "./plugins/my-org/my-marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
|
||||
test("should accept local path with dots in directory name", async () => {
|
||||
const spy = createMockSpawn();
|
||||
await installPlugins("./my.plugin.marketplace", "test-plugin");
|
||||
|
||||
expect(spy).toHaveBeenCalledTimes(2);
|
||||
expect(spy).toHaveBeenNthCalledWith(
|
||||
1,
|
||||
"claude",
|
||||
["plugin", "marketplace", "add", "./my.plugin.marketplace"],
|
||||
{ stdio: "inherit" },
|
||||
);
|
||||
});
|
||||
});
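
Taken together, the cases above pin down the marketplace-source rules: blank entries are skipped, local paths (`./`, `../`, absolute Unix paths, or Windows drive paths) are accepted as-is, and remote sources must be `https` URLs ending in `.git`. A minimal sketch of that classification follows — the names and shape are illustrative assumptions, not the action's real internals:

```typescript
// Illustrative sketch only — names and shape are assumptions, not the action's code.
type MarketplaceSource = "skip" | "valid" | "invalid";

function classifyMarketplaceSource(source: string): MarketplaceSource {
  const trimmed = source.trim();
  if (trimmed.length === 0) return "skip"; // whitespace-only entries are silently skipped

  // Local paths: ./relative, ../parent, /absolute Unix, or C:\ Windows drive paths
  const isLocalPath =
    trimmed.startsWith("./") ||
    trimmed.startsWith("../") ||
    trimmed.startsWith("/") ||
    /^[A-Za-z]:\\/.test(trimmed);
  if (isLocalPath) return "valid";

  // Remote marketplaces must be https URLs ending in .git
  return /^https:\/\/\S+\.git$/.test(trimmed) ? "valid" : "invalid";
}
```

Each valid source is then added with `claude plugin marketplace add <source>` before any `claude plugin install` call, and a non-zero exit code from marketplace addition aborts the run before plugins are installed — which is what the failure-ordering tests assert.
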
|
||||
@@ -2,6 +2,6 @@
|
||||
"name": "mcp-test",
|
||||
"version": "1.0.0",
|
||||
"dependencies": {
|
||||
"@modelcontextprotocol/sdk": "^1.11.0"
|
||||
"@modelcontextprotocol/sdk": "^1.24.0"
|
||||
}
|
||||
}
|
||||
|
||||
315 base-action/test/parse-sdk-options.test.ts Normal file
@@ -0,0 +1,315 @@
|
||||
#!/usr/bin/env bun
|
||||
|
||||
import { describe, test, expect } from "bun:test";
|
||||
import { parseSdkOptions } from "../src/parse-sdk-options";
|
||||
import type { ClaudeOptions } from "../src/run-claude";
|
||||
|
||||
describe("parseSdkOptions", () => {
|
||||
describe("allowedTools merging", () => {
|
||||
test("should extract allowedTools from claudeArgs", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--allowedTools "Edit,Read,Write"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual(["Edit", "Read", "Write"]);
|
||||
expect(result.sdkOptions.extraArgs?.["allowedTools"]).toBeUndefined();
|
||||
});
|
||||
|
||||
test("should extract allowedTools from claudeArgs with MCP tools", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs:
|
||||
'--allowedTools "Edit,Read,mcp__github_comment__update_claude_comment"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual([
|
||||
"Edit",
|
||||
"Read",
|
||||
"mcp__github_comment__update_claude_comment",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should accumulate multiple --allowedTools flags from claudeArgs", () => {
|
||||
// This simulates tag mode adding its tools, then user adding their own
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs:
|
||||
'--allowedTools "Edit,Read,mcp__github_comment__update_claude_comment" --model "claude-3" --allowedTools "Bash(npm install),mcp__github__get_issue"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual([
|
||||
"Edit",
|
||||
"Read",
|
||||
"mcp__github_comment__update_claude_comment",
|
||||
"Bash(npm install)",
|
||||
"mcp__github__get_issue",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should merge allowedTools from both claudeArgs and direct options", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--allowedTools "Edit,Read"',
|
||||
allowedTools: "Write,Glob",
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual([
|
||||
"Edit",
|
||||
"Read",
|
||||
"Write",
|
||||
"Glob",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should deduplicate allowedTools when merging", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--allowedTools "Edit,Read"',
|
||||
allowedTools: "Edit,Write",
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual(["Edit", "Read", "Write"]);
|
||||
});
|
||||
|
||||
test("should use only direct options when claudeArgs has no allowedTools", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--model "claude-3-5-sonnet"',
|
||||
allowedTools: "Edit,Read",
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual(["Edit", "Read"]);
|
||||
});
|
||||
|
||||
test("should return undefined allowedTools when neither source has it", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--model "claude-3-5-sonnet"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toBeUndefined();
|
||||
});
|
||||
|
||||
test("should remove allowedTools from extraArgs after extraction", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--allowedTools "Edit,Read" --model "claude-3-5-sonnet"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.extraArgs?.["allowedTools"]).toBeUndefined();
|
||||
expect(result.sdkOptions.extraArgs?.["model"]).toBe("claude-3-5-sonnet");
|
||||
});
|
||||
|
||||
test("should handle hyphenated --allowed-tools flag", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--allowed-tools "Edit,Read,Write"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual(["Edit", "Read", "Write"]);
|
||||
expect(result.sdkOptions.extraArgs?.["allowed-tools"]).toBeUndefined();
|
||||
});
|
||||
|
||||
test("should accumulate multiple --allowed-tools flags (hyphenated)", () => {
|
||||
// This is the exact scenario from issue #746
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs:
|
||||
'--allowed-tools "Bash(git log:*)" "Bash(git diff:*)" "Bash(git fetch:*)" "Bash(gh pr:*)"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.allowedTools).toEqual([
|
||||
"Bash(git log:*)",
|
||||
"Bash(git diff:*)",
|
||||
"Bash(git fetch:*)",
|
||||
"Bash(gh pr:*)",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should handle mixed camelCase and hyphenated allowedTools flags", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--allowedTools "Edit,Read" --allowed-tools "Write,Glob"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
// Both should be merged - note: order depends on which key is found first
|
||||
expect(result.sdkOptions.allowedTools).toContain("Edit");
|
||||
expect(result.sdkOptions.allowedTools).toContain("Read");
|
||||
expect(result.sdkOptions.allowedTools).toContain("Write");
|
||||
expect(result.sdkOptions.allowedTools).toContain("Glob");
|
||||
});
|
||||
});
|
||||
|
||||
describe("disallowedTools merging", () => {
|
||||
test("should extract disallowedTools from claudeArgs", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--disallowedTools "Bash,Write"',
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.disallowedTools).toEqual(["Bash", "Write"]);
|
||||
expect(result.sdkOptions.extraArgs?.["disallowedTools"]).toBeUndefined();
|
||||
});
|
||||
|
||||
test("should merge disallowedTools from both sources", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--disallowedTools "Bash"',
|
||||
disallowedTools: "Write",
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.disallowedTools).toEqual(["Bash", "Write"]);
|
||||
});
|
||||
});
|
||||
|
||||
describe("mcp-config merging", () => {
|
||||
test("should pass through single mcp-config in extraArgs", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config '{"mcpServers":{"server1":{"command":"cmd1"}}}'`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.extraArgs?.["mcp-config"]).toBe(
|
||||
'{"mcpServers":{"server1":{"command":"cmd1"}}}',
|
||||
);
|
||||
});
|
||||
|
||||
test("should merge multiple mcp-config flags with inline JSON", () => {
|
||||
// Simulates action prepending its config, then user providing their own
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config '{"mcpServers":{"github_comment":{"command":"node","args":["server.js"]}}}' --mcp-config '{"mcpServers":{"user_server":{"command":"custom","args":["run"]}}}'`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
const mcpConfig = JSON.parse(
|
||||
result.sdkOptions.extraArgs?.["mcp-config"] as string,
|
||||
);
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("github_comment");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("user_server");
|
||||
expect(mcpConfig.mcpServers.github_comment.command).toBe("node");
|
||||
expect(mcpConfig.mcpServers.user_server.command).toBe("custom");
|
||||
});
|
||||
|
||||
test("should merge three mcp-config flags", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config '{"mcpServers":{"server1":{"command":"cmd1"}}}' --mcp-config '{"mcpServers":{"server2":{"command":"cmd2"}}}' --mcp-config '{"mcpServers":{"server3":{"command":"cmd3"}}}'`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
const mcpConfig = JSON.parse(
|
||||
result.sdkOptions.extraArgs?.["mcp-config"] as string,
|
||||
);
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("server1");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("server2");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("server3");
|
||||
});
|
||||
|
||||
test("should handle mcp-config file path when no inline JSON exists", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config /tmp/user-mcp-config.json`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.extraArgs?.["mcp-config"]).toBe(
|
||||
"/tmp/user-mcp-config.json",
|
||||
);
|
||||
});
|
||||
|
||||
test("should merge inline JSON configs when file path is also present", () => {
|
||||
// When action provides inline JSON and user provides a file path,
|
||||
// the inline JSON configs should be merged (file paths cannot be merged at parse time)
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config '{"mcpServers":{"github_comment":{"command":"node"}}}' --mcp-config '{"mcpServers":{"github_ci":{"command":"node"}}}' --mcp-config /tmp/user-config.json`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
// The inline JSON configs should be merged
|
||||
const mcpConfig = JSON.parse(
|
||||
result.sdkOptions.extraArgs?.["mcp-config"] as string,
|
||||
);
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("github_comment");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("github_ci");
|
||||
});
|
||||
|
||||
test("should handle mcp-config with other flags", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config '{"mcpServers":{"server1":{}}}' --model claude-3-5-sonnet --mcp-config '{"mcpServers":{"server2":{}}}'`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
const mcpConfig = JSON.parse(
|
||||
result.sdkOptions.extraArgs?.["mcp-config"] as string,
|
||||
);
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("server1");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("server2");
|
||||
expect(result.sdkOptions.extraArgs?.["model"]).toBe("claude-3-5-sonnet");
|
||||
});
|
||||
|
||||
test("should handle real-world scenario: action config + user config", () => {
|
||||
// This is the exact scenario from the bug report
|
||||
const actionConfig = JSON.stringify({
|
||||
mcpServers: {
|
||||
github_comment: {
|
||||
command: "node",
|
||||
args: ["github-comment-server.js"],
|
||||
},
|
||||
github_ci: { command: "node", args: ["github-ci-server.js"] },
|
||||
},
|
||||
});
|
||||
const userConfig = JSON.stringify({
|
||||
mcpServers: {
|
||||
my_custom_server: { command: "python", args: ["server.py"] },
|
||||
},
|
||||
});
|
||||
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--mcp-config '${actionConfig}' --mcp-config '${userConfig}'`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
const mcpConfig = JSON.parse(
|
||||
result.sdkOptions.extraArgs?.["mcp-config"] as string,
|
||||
);
|
||||
// All servers should be present
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("github_comment");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("github_ci");
|
||||
expect(mcpConfig.mcpServers).toHaveProperty("my_custom_server");
|
||||
});
|
||||
});
|
||||
|
||||
describe("other extraArgs passthrough", () => {
|
||||
test("should pass through json-schema in extraArgs", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: `--json-schema '{"type":"object"}'`,
|
||||
};
|
||||
|
||||
const result = parseSdkOptions(options);
|
||||
|
||||
expect(result.sdkOptions.extraArgs?.["json-schema"]).toBe(
|
||||
'{"type":"object"}',
|
||||
);
|
||||
expect(result.hasJsonSchema).toBe(true);
|
||||
});
|
||||
});
|
||||
});
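
The merging behaviour these tests describe comes down to two small operations: tool lists accumulated from repeated flags are combined with the direct option and deduplicated, and multiple inline `--mcp-config` JSON values are folded into a single `mcpServers` object. The sketch below illustrates both under the assumptions stated in the comments — the function names are hypothetical, not exports of `base-action/src/parse-sdk-options`:

```typescript
// Sketches only; the real logic lives in base-action/src/parse-sdk-options.

// Merge tool lists gathered from repeated --allowedTools/--allowed-tools flags with the
// direct allowedTools option, preserving order and dropping duplicates.
function mergeToolLists(fromClaudeArgs: string[], direct?: string): string[] | undefined {
  const fromDirect = direct
    ? direct.split(",").map((t) => t.trim()).filter(Boolean)
    : [];
  const merged = [...new Set([...fromClaudeArgs, ...fromDirect])];
  return merged.length > 0 ? merged : undefined;
}

// Merge several inline --mcp-config JSON values into one object keyed by server name.
// Assumes each value parses to { mcpServers: Record<string, unknown> }; file-path values
// are left untouched, as the tests note.
function mergeInlineMcpConfigs(inlineValues: string[]): string {
  const mcpServers: Record<string, unknown> = {};
  for (const value of inlineValues) {
    const parsed = JSON.parse(value) as { mcpServers?: Record<string, unknown> };
    Object.assign(mcpServers, parsed.mcpServers ?? {});
  }
  return JSON.stringify({ mcpServers });
}
```
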
|
||||
@@ -1,82 +0,0 @@
|
||||
#!/usr/bin/env bun
|
||||
|
||||
import { describe, test, expect } from "bun:test";
|
||||
import { prepareRunConfig, type ClaudeOptions } from "../src/run-claude";
|
||||
|
||||
describe("prepareRunConfig", () => {
|
||||
test("should prepare config with basic arguments", () => {
|
||||
const options: ClaudeOptions = {};
|
||||
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
|
||||
|
||||
expect(prepared.claudeArgs).toEqual([
|
||||
"-p",
|
||||
"--verbose",
|
||||
"--output-format",
|
||||
"stream-json",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should include promptPath", () => {
|
||||
const options: ClaudeOptions = {};
|
||||
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
|
||||
|
||||
expect(prepared.promptPath).toBe("/tmp/test-prompt.txt");
|
||||
});
|
||||
|
||||
test("should use provided prompt path", () => {
|
||||
const options: ClaudeOptions = {};
|
||||
const prepared = prepareRunConfig("/custom/prompt/path.txt", options);
|
||||
|
||||
expect(prepared.promptPath).toBe("/custom/prompt/path.txt");
|
||||
});
|
||||
|
||||
describe("claudeArgs handling", () => {
|
||||
test("should parse and include custom claude arguments", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: "--max-turns 10 --model claude-3-opus-20240229",
|
||||
};
|
||||
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
|
||||
|
||||
expect(prepared.claudeArgs).toEqual([
|
||||
"-p",
|
||||
"--max-turns",
|
||||
"10",
|
||||
"--model",
|
||||
"claude-3-opus-20240229",
|
||||
"--verbose",
|
||||
"--output-format",
|
||||
"stream-json",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should handle empty claudeArgs", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: "",
|
||||
};
|
||||
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
|
||||
|
||||
expect(prepared.claudeArgs).toEqual([
|
||||
"-p",
|
||||
"--verbose",
|
||||
"--output-format",
|
||||
"stream-json",
|
||||
]);
|
||||
});
|
||||
|
||||
test("should handle claudeArgs with quoted strings", () => {
|
||||
const options: ClaudeOptions = {
|
||||
claudeArgs: '--system-prompt "You are a helpful assistant"',
|
||||
};
|
||||
const prepared = prepareRunConfig("/tmp/test-prompt.txt", options);
|
||||
|
||||
expect(prepared.claudeArgs).toEqual([
|
||||
"-p",
|
||||
"--system-prompt",
|
||||
"You are a helpful assistant",
|
||||
"--verbose",
|
||||
"--output-format",
|
||||
"stream-json",
|
||||
]);
|
||||
});
|
||||
});
|
||||
});
|
||||
@@ -13,15 +13,19 @@ describe("validateEnvironmentVariables", () => {
|
||||
delete process.env.ANTHROPIC_API_KEY;
|
||||
delete process.env.CLAUDE_CODE_USE_BEDROCK;
|
||||
delete process.env.CLAUDE_CODE_USE_VERTEX;
|
||||
delete process.env.CLAUDE_CODE_USE_FOUNDRY;
|
||||
delete process.env.AWS_REGION;
|
||||
delete process.env.AWS_ACCESS_KEY_ID;
|
||||
delete process.env.AWS_SECRET_ACCESS_KEY;
|
||||
delete process.env.AWS_SESSION_TOKEN;
|
||||
delete process.env.AWS_BEARER_TOKEN_BEDROCK;
|
||||
delete process.env.ANTHROPIC_BEDROCK_BASE_URL;
|
||||
delete process.env.ANTHROPIC_VERTEX_PROJECT_ID;
|
||||
delete process.env.CLOUD_ML_REGION;
|
||||
delete process.env.GOOGLE_APPLICATION_CREDENTIALS;
|
||||
delete process.env.ANTHROPIC_VERTEX_BASE_URL;
|
||||
delete process.env.ANTHROPIC_FOUNDRY_RESOURCE;
|
||||
delete process.env.ANTHROPIC_FOUNDRY_BASE_URL;
|
||||
});
|
||||
|
||||
afterEach(() => {
|
||||
@@ -92,31 +96,58 @@ describe("validateEnvironmentVariables", () => {
|
||||
);
|
||||
});
|
||||
|
||||
test("should fail when AWS_ACCESS_KEY_ID is missing", () => {
|
||||
test("should fail when only AWS_SECRET_ACCESS_KEY is provided without bearer token", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
process.env.AWS_SECRET_ACCESS_KEY = "test-secret-key";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"AWS_ACCESS_KEY_ID is required when using AWS Bedrock.",
|
||||
"Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock.",
|
||||
);
|
||||
});
|
||||
|
||||
test("should fail when AWS_SECRET_ACCESS_KEY is missing", () => {
|
||||
test("should fail when only AWS_ACCESS_KEY_ID is provided without bearer token", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
process.env.AWS_ACCESS_KEY_ID = "test-access-key";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"AWS_SECRET_ACCESS_KEY is required when using AWS Bedrock.",
|
||||
"Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock.",
|
||||
);
|
||||
});
|
||||
|
||||
test("should report all missing Bedrock variables", () => {
|
||||
test("should pass when AWS_BEARER_TOKEN_BEDROCK is provided instead of access keys", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
process.env.AWS_BEARER_TOKEN_BEDROCK = "test-bearer-token";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).not.toThrow();
|
||||
});
|
||||
|
||||
test("should pass when both bearer token and access keys are provided", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
process.env.AWS_BEARER_TOKEN_BEDROCK = "test-bearer-token";
|
||||
process.env.AWS_ACCESS_KEY_ID = "test-access-key";
|
||||
process.env.AWS_SECRET_ACCESS_KEY = "test-secret-key";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).not.toThrow();
|
||||
});
|
||||
|
||||
test("should fail when no authentication method is provided", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock.",
|
||||
);
|
||||
});
|
||||
|
||||
test("should report missing region and authentication", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
/AWS_REGION is required when using AWS Bedrock.*AWS_ACCESS_KEY_ID is required when using AWS Bedrock.*AWS_SECRET_ACCESS_KEY is required when using AWS Bedrock/s,
|
||||
/AWS_REGION is required when using AWS Bedrock.*Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock/s,
|
||||
);
|
||||
});
|
||||
});
|
||||
@@ -167,6 +198,56 @@ describe("validateEnvironmentVariables", () => {
|
||||
});
|
||||
});
|
||||
|
||||
describe("Microsoft Foundry", () => {
|
||||
test("should pass when ANTHROPIC_FOUNDRY_RESOURCE is provided", () => {
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
process.env.ANTHROPIC_FOUNDRY_RESOURCE = "test-resource";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).not.toThrow();
|
||||
});
|
||||
|
||||
test("should pass when ANTHROPIC_FOUNDRY_BASE_URL is provided", () => {
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
process.env.ANTHROPIC_FOUNDRY_BASE_URL =
|
||||
"https://test-resource.services.ai.azure.com";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).not.toThrow();
|
||||
});
|
||||
|
||||
test("should pass when both resource and base URL are provided", () => {
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
process.env.ANTHROPIC_FOUNDRY_RESOURCE = "test-resource";
|
||||
process.env.ANTHROPIC_FOUNDRY_BASE_URL =
|
||||
"https://custom.services.ai.azure.com";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).not.toThrow();
|
||||
});
|
||||
|
||||
test("should construct Foundry base URL from resource name when ANTHROPIC_FOUNDRY_BASE_URL is not provided", () => {
|
||||
// This test verifies our action.yml change, which constructs:
|
||||
// ANTHROPIC_FOUNDRY_BASE_URL: ${{ env.ANTHROPIC_FOUNDRY_BASE_URL || (env.ANTHROPIC_FOUNDRY_RESOURCE && format('https://{0}.services.ai.azure.com', env.ANTHROPIC_FOUNDRY_RESOURCE)) }}
|
||||
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
process.env.ANTHROPIC_FOUNDRY_RESOURCE = "my-foundry-resource";
|
||||
// ANTHROPIC_FOUNDRY_BASE_URL is intentionally not set
|
||||
|
||||
// The actual URL construction happens in the composite action in action.yml
|
||||
// This test is a placeholder to document the behavior
|
||||
expect(() => validateEnvironmentVariables()).not.toThrow();
|
||||
|
||||
// In the actual action, ANTHROPIC_FOUNDRY_BASE_URL would be:
|
||||
// https://my-foundry-resource.services.ai.azure.com
|
||||
});
|
||||
|
||||
test("should fail when neither ANTHROPIC_FOUNDRY_RESOURCE nor ANTHROPIC_FOUNDRY_BASE_URL is provided", () => {
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"Either ANTHROPIC_FOUNDRY_RESOURCE or ANTHROPIC_FOUNDRY_BASE_URL is required when using Microsoft Foundry.",
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
describe("Multiple providers", () => {
|
||||
test("should fail when both Bedrock and Vertex are enabled", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
@@ -179,7 +260,51 @@ describe("validateEnvironmentVariables", () => {
|
||||
process.env.CLOUD_ML_REGION = "us-central1";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"Cannot use both Bedrock and Vertex AI simultaneously. Please set only one provider.",
|
||||
"Cannot use multiple providers simultaneously. Please set only one of: CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY.",
|
||||
);
|
||||
});
|
||||
|
||||
test("should fail when both Bedrock and Foundry are enabled", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
// Provide all required vars to isolate the mutual exclusion error
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
process.env.AWS_ACCESS_KEY_ID = "test-access-key";
|
||||
process.env.AWS_SECRET_ACCESS_KEY = "test-secret-key";
|
||||
process.env.ANTHROPIC_FOUNDRY_RESOURCE = "test-resource";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"Cannot use multiple providers simultaneously. Please set only one of: CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY.",
|
||||
);
|
||||
});
|
||||
|
||||
test("should fail when both Vertex and Foundry are enabled", () => {
|
||||
process.env.CLAUDE_CODE_USE_VERTEX = "1";
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
// Provide all required vars to isolate the mutual exclusion error
|
||||
process.env.ANTHROPIC_VERTEX_PROJECT_ID = "test-project";
|
||||
process.env.CLOUD_ML_REGION = "us-central1";
|
||||
process.env.ANTHROPIC_FOUNDRY_RESOURCE = "test-resource";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"Cannot use multiple providers simultaneously. Please set only one of: CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY.",
|
||||
);
|
||||
});
|
||||
|
||||
test("should fail when all three providers are enabled", () => {
|
||||
process.env.CLAUDE_CODE_USE_BEDROCK = "1";
|
||||
process.env.CLAUDE_CODE_USE_VERTEX = "1";
|
||||
process.env.CLAUDE_CODE_USE_FOUNDRY = "1";
|
||||
// Provide all required vars to isolate the mutual exclusion error
|
||||
process.env.AWS_REGION = "us-east-1";
|
||||
process.env.AWS_ACCESS_KEY_ID = "test-access-key";
|
||||
process.env.AWS_SECRET_ACCESS_KEY = "test-secret-key";
|
||||
process.env.ANTHROPIC_VERTEX_PROJECT_ID = "test-project";
|
||||
process.env.CLOUD_ML_REGION = "us-central1";
|
||||
process.env.ANTHROPIC_FOUNDRY_RESOURCE = "test-resource";
|
||||
|
||||
expect(() => validateEnvironmentVariables()).toThrow(
|
||||
"Cannot use multiple providers simultaneously. Please set only one of: CLAUDE_CODE_USE_BEDROCK, CLAUDE_CODE_USE_VERTEX, or CLAUDE_CODE_USE_FOUNDRY.",
|
||||
);
|
||||
});
|
||||
});
|
||||
@@ -204,10 +329,7 @@ describe("validateEnvironmentVariables", () => {
|
||||
" - AWS_REGION is required when using AWS Bedrock.",
|
||||
);
|
||||
expect(error!.message).toContain(
|
||||
" - AWS_ACCESS_KEY_ID is required when using AWS Bedrock.",
|
||||
);
|
||||
expect(error!.message).toContain(
|
||||
" - AWS_SECRET_ACCESS_KEY is required when using AWS Bedrock.",
|
||||
" - Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock.",
|
||||
);
|
||||
});
|
||||
});
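
The Bedrock cases above reduce to a single rule: a region is always required, and credentials may come either from a bearer token or from an access-key pair. A hedged sketch of that check, using illustrative names (the actual logic sits inside `validateEnvironmentVariables`):

```typescript
// Sketch of the Bedrock portion of validateEnvironmentVariables; illustrative only.
function collectBedrockErrors(env: NodeJS.ProcessEnv): string[] {
  const errors: string[] = [];
  if (!env.AWS_REGION) {
    errors.push("AWS_REGION is required when using AWS Bedrock.");
  }
  const hasBearerToken = Boolean(env.AWS_BEARER_TOKEN_BEDROCK);
  const hasKeyPair = Boolean(env.AWS_ACCESS_KEY_ID && env.AWS_SECRET_ACCESS_KEY);
  if (!hasBearerToken && !hasKeyPair) {
    errors.push(
      "Either AWS_BEARER_TOKEN_BEDROCK or both AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are required when using AWS Bedrock.",
    );
  }
  return errors;
}
```
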
|
||||
|
||||
34 bun.lock
@@ -1,11 +1,13 @@
|
||||
{
|
||||
"lockfileVersion": 1,
|
||||
"configVersion": 0,
|
||||
"workspaces": {
|
||||
"": {
|
||||
"name": "@anthropic-ai/claude-code-action",
|
||||
"dependencies": {
|
||||
"@actions/core": "^1.10.1",
|
||||
"@actions/github": "^6.0.1",
|
||||
"@anthropic-ai/claude-agent-sdk": "^0.2.16",
|
||||
"@modelcontextprotocol/sdk": "^1.11.0",
|
||||
"@octokit/graphql": "^8.2.2",
|
||||
"@octokit/rest": "^21.1.1",
|
||||
@@ -35,8 +37,40 @@
|
||||
|
||||
"@actions/io": ["@actions/io@1.1.3", "", {}, "sha512-wi9JjgKLYS7U/z8PPbco+PvTb/nRWjeoFlJ1Qer83k/3C5PHQi28hiVdeE2kHXmIL99mQFawx8qt/JPjZilJ8Q=="],
|
||||
|
||||
"@anthropic-ai/claude-agent-sdk": ["@anthropic-ai/claude-agent-sdk@0.2.16", "", { "optionalDependencies": { "@img/sharp-darwin-arm64": "^0.33.5", "@img/sharp-darwin-x64": "^0.33.5", "@img/sharp-linux-arm": "^0.33.5", "@img/sharp-linux-arm64": "^0.33.5", "@img/sharp-linux-x64": "^0.33.5", "@img/sharp-linuxmusl-arm64": "^0.33.5", "@img/sharp-linuxmusl-x64": "^0.33.5", "@img/sharp-win32-x64": "^0.33.5" }, "peerDependencies": { "zod": "^4.0.0" } }, "sha512-8sG7rvJZ7rc+oj0ZvWMTAtnYYTsh5gP5pCXiG21wYbwHqgEPod/oOIu5DCC/PWhwzN0sAmDbVURgCTDmimYlXw=="],
|
||||
|
||||
"@fastify/busboy": ["@fastify/busboy@2.1.1", "", {}, "sha512-vBZP4NlzfOlerQTnba4aqZoMhE/a9HY7HRqoOPaETQcSQuWEIyZMHGfVu6w9wGtGK5fED5qRs2DteVCjOH60sA=="],
|
||||
|
||||
"@img/sharp-darwin-arm64": ["@img/sharp-darwin-arm64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-darwin-arm64": "1.0.4" }, "os": "darwin", "cpu": "arm64" }, "sha512-UT4p+iz/2H4twwAoLCqfA9UH5pI6DggwKEGuaPy7nCVQ8ZsiY5PIcrRvD1DzuY3qYL07NtIQcWnBSY/heikIFQ=="],
|
||||
|
||||
"@img/sharp-darwin-x64": ["@img/sharp-darwin-x64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-darwin-x64": "1.0.4" }, "os": "darwin", "cpu": "x64" }, "sha512-fyHac4jIc1ANYGRDxtiqelIbdWkIuQaI84Mv45KvGRRxSAa7o7d1ZKAOBaYbnepLC1WqxfpimdeWfvqqSGwR2Q=="],
|
||||
|
||||
"@img/sharp-libvips-darwin-arm64": ["@img/sharp-libvips-darwin-arm64@1.0.4", "", { "os": "darwin", "cpu": "arm64" }, "sha512-XblONe153h0O2zuFfTAbQYAX2JhYmDHeWikp1LM9Hul9gVPjFY427k6dFEcOL72O01QxQsWi761svJ/ev9xEDg=="],
|
||||
|
||||
"@img/sharp-libvips-darwin-x64": ["@img/sharp-libvips-darwin-x64@1.0.4", "", { "os": "darwin", "cpu": "x64" }, "sha512-xnGR8YuZYfJGmWPvmlunFaWJsb9T/AO2ykoP3Fz/0X5XV2aoYBPkX6xqCQvUTKKiLddarLaxpzNe+b1hjeWHAQ=="],
|
||||
|
||||
"@img/sharp-libvips-linux-arm": ["@img/sharp-libvips-linux-arm@1.0.5", "", { "os": "linux", "cpu": "arm" }, "sha512-gvcC4ACAOPRNATg/ov8/MnbxFDJqf/pDePbBnuBDcjsI8PssmjoKMAz4LtLaVi+OnSb5FK/yIOamqDwGmXW32g=="],
|
||||
|
||||
"@img/sharp-libvips-linux-arm64": ["@img/sharp-libvips-linux-arm64@1.0.4", "", { "os": "linux", "cpu": "arm64" }, "sha512-9B+taZ8DlyyqzZQnoeIvDVR/2F4EbMepXMc/NdVbkzsJbzkUjhXv/70GQJ7tdLA4YJgNP25zukcxpX2/SueNrA=="],
|
||||
|
||||
"@img/sharp-libvips-linux-x64": ["@img/sharp-libvips-linux-x64@1.0.4", "", { "os": "linux", "cpu": "x64" }, "sha512-MmWmQ3iPFZr0Iev+BAgVMb3ZyC4KeFc3jFxnNbEPas60e1cIfevbtuyf9nDGIzOaW9PdnDciJm+wFFaTlj5xYw=="],
|
||||
|
||||
"@img/sharp-libvips-linuxmusl-arm64": ["@img/sharp-libvips-linuxmusl-arm64@1.0.4", "", { "os": "linux", "cpu": "arm64" }, "sha512-9Ti+BbTYDcsbp4wfYib8Ctm1ilkugkA/uscUn6UXK1ldpC1JjiXbLfFZtRlBhjPZ5o1NCLiDbg8fhUPKStHoTA=="],
|
||||
|
||||
"@img/sharp-libvips-linuxmusl-x64": ["@img/sharp-libvips-linuxmusl-x64@1.0.4", "", { "os": "linux", "cpu": "x64" }, "sha512-viYN1KX9m+/hGkJtvYYp+CCLgnJXwiQB39damAO7WMdKWlIhmYTfHjwSbQeUK/20vY154mwezd9HflVFM1wVSw=="],
|
||||
|
||||
"@img/sharp-linux-arm": ["@img/sharp-linux-arm@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-linux-arm": "1.0.5" }, "os": "linux", "cpu": "arm" }, "sha512-JTS1eldqZbJxjvKaAkxhZmBqPRGmxgu+qFKSInv8moZ2AmT5Yib3EQ1c6gp493HvrvV8QgdOXdyaIBrhvFhBMQ=="],
|
||||
|
||||
"@img/sharp-linux-arm64": ["@img/sharp-linux-arm64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-linux-arm64": "1.0.4" }, "os": "linux", "cpu": "arm64" }, "sha512-JMVv+AMRyGOHtO1RFBiJy/MBsgz0x4AWrT6QoEVVTyh1E39TrCUpTRI7mx9VksGX4awWASxqCYLCV4wBZHAYxA=="],
|
||||
|
||||
"@img/sharp-linux-x64": ["@img/sharp-linux-x64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-linux-x64": "1.0.4" }, "os": "linux", "cpu": "x64" }, "sha512-opC+Ok5pRNAzuvq1AG0ar+1owsu842/Ab+4qvU879ippJBHvyY5n2mxF1izXqkPYlGuP/M556uh53jRLJmzTWA=="],
|
||||
|
||||
"@img/sharp-linuxmusl-arm64": ["@img/sharp-linuxmusl-arm64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-linuxmusl-arm64": "1.0.4" }, "os": "linux", "cpu": "arm64" }, "sha512-XrHMZwGQGvJg2V/oRSUfSAfjfPxO+4DkiRh6p2AFjLQztWUuY/o8Mq0eMQVIY7HJ1CDQUJlxGGZRw1a5bqmd1g=="],
|
||||
|
||||
"@img/sharp-linuxmusl-x64": ["@img/sharp-linuxmusl-x64@0.33.5", "", { "optionalDependencies": { "@img/sharp-libvips-linuxmusl-x64": "1.0.4" }, "os": "linux", "cpu": "x64" }, "sha512-WT+d/cgqKkkKySYmqoZ8y3pxx7lx9vVejxW/W4DOFMYVSkErR+w7mf2u8m/y4+xHe7yY9DAXQMWQhpnMuFfScw=="],
|
||||
|
||||
"@img/sharp-win32-x64": ["@img/sharp-win32-x64@0.33.5", "", { "os": "win32", "cpu": "x64" }, "sha512-MpY/o8/8kj+EcnxwvrP4aTJSWw/aZ7JIGR4aBeZkZw5B7/Jn+tY9/VNwtcoGmdT7GfggGIU4kygOMSbYnOrAbg=="],
|
||||
|
||||
"@modelcontextprotocol/sdk": ["@modelcontextprotocol/sdk@1.16.0", "", { "dependencies": { "ajv": "^6.12.6", "content-type": "^1.0.5", "cors": "^2.8.5", "cross-spawn": "^7.0.5", "eventsource": "^3.0.2", "eventsource-parser": "^3.0.0", "express": "^5.0.1", "express-rate-limit": "^7.5.0", "pkce-challenge": "^5.0.0", "raw-body": "^3.0.0", "zod": "^3.23.8", "zod-to-json-schema": "^3.24.1" } }, "sha512-8ofX7gkZcLj9H9rSd50mCgm3SSF8C7XoclxJuLoV0Cz3rEQ1tv9MZRYYvJtm9n1BiEQQMzSmE/w2AEkNacLYfg=="],
|
||||
|
||||
"@octokit/auth-token": ["@octokit/auth-token@4.0.0", "", {}, "sha512-tY/msAuJo6ARbK6SPIxZrPBms3xPbfwBrulZe0Wtr/DIY9lje2HeV1uoebShn6mx7SjCHif6EjMvoREj+gZ+SA=="],
|
||||
|
||||
@@ -1,16 +1,17 @@
|
||||
# Cloud Providers
|
||||
|
||||
You can authenticate with Claude using any of these three methods:
|
||||
You can authenticate with Claude using any of these four methods:
|
||||
|
||||
1. Direct Anthropic API (default)
|
||||
2. Amazon Bedrock with OIDC authentication
|
||||
3. Google Vertex AI with OIDC authentication
|
||||
4. Microsoft Foundry with OIDC authentication
|
||||
|
||||
For detailed setup instructions for AWS Bedrock and Google Vertex AI, see the [official documentation](https://docs.anthropic.com/en/docs/claude-code/github-actions#using-with-aws-bedrock-%26-google-vertex-ai).
|
||||
For detailed setup instructions for AWS Bedrock and Google Vertex AI, see the [official documentation](https://code.claude.com/docs/en/github-actions#for-aws-bedrock:).
|
||||
|
||||
**Note**:
|
||||
|
||||
- Bedrock and Vertex use OIDC authentication exclusively
|
||||
- Bedrock, Vertex, and Microsoft Foundry use OIDC authentication exclusively
|
||||
- AWS Bedrock automatically uses cross-region inference profiles for certain models
|
||||
- For cross-region inference profile models, you need to request and be granted access to the Claude models in all regions that the inference profile uses
|
||||
|
||||
@@ -40,11 +41,19 @@ Use provider-specific model names based on your chosen provider:
|
||||
claude_args: |
|
||||
--model claude-4-0-sonnet@20250805
|
||||
# ... other inputs
|
||||
|
||||
# For Microsoft Foundry with OIDC
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
use_foundry: "true"
|
||||
claude_args: |
|
||||
--model claude-sonnet-4-5
|
||||
# ... other inputs
|
||||
```
|
||||
|
||||
## OIDC Authentication for Bedrock and Vertex
|
||||
## OIDC Authentication for Cloud Providers
|
||||
|
||||
Both AWS Bedrock and GCP Vertex AI require OIDC authentication.
|
||||
AWS Bedrock, GCP Vertex AI, and Microsoft Foundry all support OIDC authentication.
|
||||
|
||||
```yaml
|
||||
# For AWS Bedrock with OIDC
|
||||
@@ -97,3 +106,36 @@ Both AWS Bedrock and GCP Vertex AI require OIDC authentication.
|
||||
permissions:
|
||||
id-token: write # Required for OIDC
|
||||
```
|
||||
|
||||
```yaml
|
||||
# For Microsoft Foundry with OIDC
|
||||
- name: Authenticate to Azure
|
||||
uses: azure/login@v2
|
||||
with:
|
||||
client-id: ${{ secrets.AZURE_CLIENT_ID }}
|
||||
tenant-id: ${{ secrets.AZURE_TENANT_ID }}
|
||||
subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
|
||||
|
||||
- name: Generate GitHub App token
|
||||
id: app-token
|
||||
uses: actions/create-github-app-token@v2
|
||||
with:
|
||||
app-id: ${{ secrets.APP_ID }}
|
||||
private-key: ${{ secrets.APP_PRIVATE_KEY }}
|
||||
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
use_foundry: "true"
|
||||
claude_args: |
|
||||
--model claude-sonnet-4-5
|
||||
# ... other inputs
|
||||
env:
|
||||
ANTHROPIC_FOUNDRY_BASE_URL: https://my-resource.services.ai.azure.com
|
||||
|
||||
permissions:
|
||||
id-token: write # Required for OIDC
|
||||
```
|
||||
|
||||
## Microsoft Foundry Setup
|
||||
|
||||
For detailed setup instructions for Microsoft Foundry, see the [official documentation](https://docs.anthropic.com/en/docs/claude-code/microsoft-foundry).
|
||||
|
||||
@@ -130,7 +130,7 @@ To allow Claude to view workflow run results, job logs, and CI status:
|
||||
2. **Configure the action with additional permissions**:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@beta
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
additional_permissions: |
|
||||
@@ -162,7 +162,7 @@ jobs:
|
||||
claude-ci-helper:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: anthropics/claude-code-action@beta
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
additional_permissions: |
|
||||
@@ -343,3 +343,31 @@ Many individual input parameters have been consolidated into `claude_args` or `s
|
||||
| `mcp_config` | Use `claude_args: "--mcp-config '{...}'"` |
|
||||
| `direct_prompt` | Use `prompt` input instead |
|
||||
| `override_prompt` | Use `prompt` with GitHub context variables |
|
||||
|
||||
## Custom Executables for Specialized Environments
|
||||
|
||||
For specialized environments like Nix, custom container setups, or other package management systems where the default installation doesn't work, you can provide your own executables:
|
||||
|
||||
### Custom Claude Code Executable
|
||||
|
||||
Use `path_to_claude_code_executable` to provide your own Claude Code binary instead of using the automatically installed version:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
path_to_claude_code_executable: "/path/to/custom/claude"
|
||||
# ... other inputs
|
||||
```
|
||||
|
||||
### Custom Bun Executable
|
||||
|
||||
Use `path_to_bun_executable` to provide your own Bun runtime instead of the default installation:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
path_to_bun_executable: "/path/to/custom/bun"
|
||||
# ... other inputs
|
||||
```
|
||||
|
||||
**Important**: Using incompatible versions may cause the action to fail. Ensure your custom executables are compatible with the action's requirements.
|
||||
|
||||
744 docs/create-app.html Normal file
@@ -0,0 +1,744 @@
|
||||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8" />
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
|
||||
<title>Create Claude Code GitHub App</title>
|
||||
<style>
|
||||
* {
|
||||
box-sizing: border-box;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
:root {
|
||||
/* Claude Brand Colors */
|
||||
--primary-dark: #0e0e0e;
|
||||
--primary-light: #d4a27f;
|
||||
--background-light: rgb(253, 253, 247);
|
||||
--background-dark: rgb(9, 9, 11);
|
||||
--text-primary: #1a1a1a;
|
||||
--text-secondary: #525252;
|
||||
--text-tertiary: #737373;
|
||||
--border-color: rgba(0, 0, 0, 0.08);
|
||||
--hover-bg: rgba(0, 0, 0, 0.02);
|
||||
--success: #2ea44f;
|
||||
--warning: #e3b341;
|
||||
--card-shadow:
|
||||
0 1px 3px rgba(0, 0, 0, 0.06), 0 1px 2px rgba(0, 0, 0, 0.04);
|
||||
--card-shadow-hover:
|
||||
0 4px 6px rgba(0, 0, 0, 0.07), 0 2px 4px rgba(0, 0, 0, 0.05);
|
||||
}
|
||||
|
||||
body {
|
||||
font-family:
|
||||
-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto,
|
||||
"Helvetica Neue", Arial, sans-serif;
|
||||
background: var(--background-light);
|
||||
color: var(--text-primary);
|
||||
line-height: 1.6;
|
||||
-webkit-font-smoothing: antialiased;
|
||||
-moz-osx-font-smoothing: grayscale;
|
||||
}
|
||||
|
||||
.container {
|
||||
max-width: 960px;
|
||||
margin: 0 auto;
|
||||
padding: 40px 24px;
|
||||
}
|
||||
|
||||
/* Header */
|
||||
header {
|
||||
text-align: center;
|
||||
margin-bottom: 48px;
|
||||
}
|
||||
|
||||
h1 {
|
||||
font-size: 36px;
|
||||
font-weight: 600;
|
||||
color: var(--text-primary);
|
||||
margin-bottom: 12px;
|
||||
letter-spacing: -0.02em;
|
||||
}
|
||||
|
||||
.subtitle {
|
||||
font-size: 18px;
|
||||
color: var(--text-secondary);
|
||||
max-width: 640px;
|
||||
margin: 0 auto;
|
||||
line-height: 1.5;
|
||||
}
|
||||
|
||||
/* Cards */
|
||||
.card {
|
||||
background: white;
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 12px;
|
||||
padding: 32px;
|
||||
margin-bottom: 24px;
|
||||
box-shadow: var(--card-shadow);
|
||||
transition: all 0.2s ease;
|
||||
}
|
||||
|
||||
.card:hover {
|
||||
box-shadow: var(--card-shadow-hover);
|
||||
}
|
||||
|
||||
.card-header {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 12px;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.card-icon {
|
||||
font-size: 24px;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
h2 {
|
||||
font-size: 20px;
|
||||
font-weight: 600;
|
||||
color: var(--text-primary);
|
||||
margin: 0;
|
||||
letter-spacing: -0.01em;
|
||||
}
|
||||
|
||||
.card-description {
|
||||
color: var(--text-secondary);
|
||||
margin-bottom: 24px;
|
||||
font-size: 15px;
|
||||
line-height: 1.6;
|
||||
}
|
||||
|
||||
/* Buttons */
|
||||
.button-group {
|
||||
display: flex;
|
||||
flex-direction: column;
|
||||
gap: 16px;
|
||||
}
|
||||
|
||||
.btn {
|
||||
display: inline-flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
gap: 8px;
|
||||
padding: 12px 24px;
|
||||
font-size: 15px;
|
||||
font-weight: 500;
|
||||
border-radius: 8px;
|
||||
border: none;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
text-decoration: none;
|
||||
font-family: inherit;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
.btn-primary {
|
||||
background: var(--primary-dark);
|
||||
color: white;
|
||||
}
|
||||
|
||||
.btn-primary:hover {
|
||||
background: #1a1a1a;
|
||||
transform: translateY(-1px);
|
||||
box-shadow: 0 4px 12px rgba(0, 0, 0, 0.15);
|
||||
}
|
||||
|
||||
.btn-secondary {
|
||||
background: var(--primary-light);
|
||||
color: var(--primary-dark);
|
||||
}
|
||||
|
||||
.btn-secondary:hover {
|
||||
background: #c99a70;
|
||||
transform: translateY(-1px);
|
||||
box-shadow: 0 4px 12px rgba(212, 162, 127, 0.3);
|
||||
}
|
||||
|
||||
.btn-outline {
|
||||
background: white;
|
||||
color: var(--text-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.btn-outline:hover {
|
||||
background: var(--hover-bg);
|
||||
border-color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.btn:active {
|
||||
transform: translateY(0);
|
||||
}
|
||||
|
||||
.btn.copied {
|
||||
background: var(--success);
|
||||
color: white;
|
||||
}
|
||||
|
||||
/* Form */
|
||||
.form-row {
|
||||
display: flex;
|
||||
gap: 12px;
|
||||
align-items: flex-end;
|
||||
}
|
||||
|
||||
.form-group {
|
||||
flex: 1;
|
||||
}
|
||||
|
||||
label {
|
||||
display: block;
|
||||
font-size: 14px;
|
||||
font-weight: 500;
|
||||
color: var(--text-primary);
|
||||
margin-bottom: 6px;
|
||||
}
|
||||
|
||||
input[type="text"] {
|
||||
width: 100%;
|
||||
padding: 10px 14px;
|
||||
font-size: 15px;
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 8px;
|
||||
font-family: inherit;
|
||||
transition: all 0.2s ease;
|
||||
background: white;
|
||||
}
|
||||
|
||||
input[type="text"]:focus {
|
||||
outline: none;
|
||||
border-color: var(--primary-dark);
|
||||
box-shadow: 0 0 0 3px rgba(14, 14, 14, 0.1);
|
||||
}
|
||||
|
||||
/* Code Block */
|
||||
.code-container {
|
||||
position: relative;
|
||||
background: #fafafa;
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 8px;
|
||||
margin: 20px 0;
|
||||
}
|
||||
|
||||
.code-header {
|
||||
display: flex;
|
||||
justify-content: space-between;
|
||||
align-items: center;
|
||||
padding: 12px 16px;
|
||||
border-bottom: 1px solid var(--border-color);
|
||||
}
|
||||
|
||||
.code-label {
|
||||
font-size: 13px;
|
||||
font-weight: 500;
|
||||
color: var(--text-secondary);
|
||||
}
|
||||
|
||||
.copy-btn {
|
||||
padding: 6px 12px;
|
||||
font-size: 13px;
|
||||
font-weight: 500;
|
||||
background: white;
|
||||
color: var(--text-primary);
|
||||
border: 1px solid var(--border-color);
|
||||
border-radius: 6px;
|
||||
cursor: pointer;
|
||||
transition: all 0.2s ease;
|
||||
}
|
||||
|
||||
.copy-btn:hover {
|
||||
background: var(--hover-bg);
|
||||
}
|
||||
|
||||
.copy-btn.copied {
|
||||
background: var(--success);
|
||||
color: white;
|
||||
border-color: var(--success);
|
||||
}
|
||||
|
||||
.code-block {
|
||||
padding: 16px;
|
||||
overflow-x: auto;
|
||||
font-family:
|
||||
"SF Mono", Monaco, "Cascadia Code", "Roboto Mono", Consolas,
|
||||
"Courier New", monospace;
|
||||
font-size: 13px;
|
||||
line-height: 1.6;
|
||||
color: var(--text-primary);
|
||||
white-space: pre;
|
||||
}
|
||||
|
||||
/* Permissions List */
|
||||
.permissions-grid {
|
||||
display: grid;
|
||||
gap: 12px;
|
||||
margin-top: 16px;
|
||||
}
|
||||
|
||||
.permission-item {
|
||||
display: flex;
|
||||
align-items: center;
|
||||
gap: 10px;
|
||||
padding: 10px 14px;
|
||||
background: #fafafa;
|
||||
border-radius: 8px;
|
||||
font-size: 14px;
|
||||
}
|
||||
|
||||
.permission-icon {
|
||||
color: var(--success);
|
||||
font-size: 16px;
|
||||
line-height: 1;
|
||||
}
|
||||
|
||||
.permission-name {
|
||||
font-weight: 500;
|
||||
color: var(--text-primary);
|
||||
}
|
||||
|
||||
.permission-value {
|
||||
margin-left: auto;
|
||||
color: var(--text-secondary);
|
||||
font-size: 13px;
|
||||
}
|
||||
|
||||
/* Steps */
|
||||
.steps {
|
||||
margin: 24px 0;
|
||||
}
|
||||
|
||||
.step {
|
||||
display: flex;
|
||||
gap: 16px;
|
||||
margin-bottom: 20px;
|
||||
}
|
||||
|
||||
.step-number {
|
||||
flex-shrink: 0;
|
||||
width: 28px;
|
||||
height: 28px;
|
||||
background: var(--primary-dark);
|
||||
color: white;
|
||||
border-radius: 50%;
|
||||
display: flex;
|
||||
align-items: center;
|
||||
justify-content: center;
|
||||
font-size: 14px;
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
.step-content {
|
||||
flex: 1;
|
||||
padding-top: 2px;
|
||||
}
|
||||
|
||||
.step-content p {
|
||||
color: var(--text-secondary);
|
||||
font-size: 15px;
|
||||
line-height: 1.6;
|
||||
}
|
||||
|
||||
.step-content strong {
|
||||
color: var(--text-primary);
|
||||
font-weight: 500;
|
||||
}
|
||||
|
||||
/* Alert Box */
|
||||
.alert {
|
||||
display: flex;
|
||||
gap: 12px;
|
||||
padding: 16px;
|
||||
background: #fffbf0;
|
||||
border: 1px solid #f5e7c3;
|
||||
border-radius: 8px;
|
||||
margin-top: 32px;
|
||||
}
|
||||
|
||||
.alert-icon {
|
||||
font-size: 18px;
|
||||
line-height: 1;
|
||||
flex-shrink: 0;
|
||||
}
|
||||
|
||||
.alert-content {
|
||||
flex: 1;
|
||||
font-size: 14px;
|
||||
line-height: 1.6;
|
||||
}
|
||||
|
||||
.alert-content strong {
|
||||
color: var(--text-primary);
|
||||
font-weight: 600;
|
||||
}
|
||||
|
||||
/* Responsive */
|
||||
@media (min-width: 640px) {
|
||||
.button-group {
|
||||
flex-direction: row;
|
||||
}
|
||||
|
||||
.btn {
|
||||
width: auto;
|
||||
}
|
||||
|
||||
.permissions-grid {
|
||||
grid-template-columns: repeat(2, 1fr);
|
||||
}
|
||||
}
|
||||
|
||||
@media (max-width: 640px) {
|
||||
h1 {
|
||||
font-size: 28px;
|
||||
}
|
||||
|
||||
.subtitle {
|
||||
font-size: 16px;
|
||||
}
|
||||
|
||||
.card {
|
||||
padding: 24px 20px;
|
||||
}
|
||||
|
||||
.container {
|
||||
padding: 24px 16px;
|
||||
}
|
||||
}
|
||||
|
||||
/* Hidden form elements */
|
||||
.hidden-form {
|
||||
display: none;
|
||||
}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<div class="container">
|
||||
<header>
|
||||
<h1>Create Your Custom GitHub App</h1>
|
||||
<p class="subtitle">
|
||||
Set up a custom GitHub App for Claude Code Action with all required
|
||||
permissions automatically configured.
|
||||
</p>
|
||||
</header>
|
||||
|
||||
<!-- Quick Setup Card -->
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<span class="card-icon">🚀</span>
|
||||
<h2>Quick Setup</h2>
|
||||
</div>
|
||||
<p class="card-description">
|
||||
Create your GitHub App with one click. All permissions will be
|
||||
automatically configured for Claude Code Action.
|
||||
</p>
|
||||
|
||||
<div class="button-group">
|
||||
<!-- Personal Account Button -->
|
||||
<form
|
||||
action="https://github.com/settings/apps/new"
|
||||
method="post"
|
||||
class="hidden-form"
|
||||
id="personal-form"
|
||||
>
|
||||
<input type="hidden" name="manifest" id="personal-manifest" />
|
||||
</form>
|
||||
<button
|
||||
type="button"
|
||||
class="btn btn-primary"
|
||||
onclick="submitPersonalForm()"
|
||||
>
|
||||
<span>👤</span>
|
||||
<span>Create for Personal Account</span>
|
||||
</button>
|
||||
|
||||
<!-- Organization Form -->
|
||||
<form id="org-form" method="post" class="hidden-form">
|
||||
<input type="hidden" name="manifest" id="org-manifest" />
|
||||
</form>
|
||||
</div>
|
||||
|
||||
<!-- Organization Input -->
|
||||
<div
|
||||
style="
|
||||
margin-top: 24px;
|
||||
padding-top: 24px;
|
||||
border-top: 1px solid var(--border-color);
|
||||
"
|
||||
>
|
||||
<label for="org-name" style="margin-bottom: 8px"
|
||||
>Or create for an organization:</label
|
||||
>
|
||||
<div class="form-row">
|
||||
<div class="form-group">
|
||||
<input
|
||||
type="text"
|
||||
id="org-name"
|
||||
placeholder="Enter organization name (e.g., my-org)"
|
||||
/>
|
||||
</div>
|
||||
<button
|
||||
type="button"
|
||||
class="btn btn-secondary"
|
||||
onclick="submitOrgForm()"
|
||||
style="flex-shrink: 0"
|
||||
>
|
||||
<span>🏢</span>
|
||||
<span>Create for Org</span>
|
||||
</button>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Permissions Card -->
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<span class="card-icon">✅</span>
|
||||
<h2>Configured Permissions</h2>
|
||||
</div>
|
||||
<p class="card-description">
|
||||
Your GitHub App will be created with these permissions:
|
||||
</p>
|
||||
|
||||
<div class="permissions-grid">
|
||||
<div class="permission-item">
|
||||
<span class="permission-icon">✓</span>
|
||||
<span class="permission-name">Contents</span>
|
||||
<span class="permission-value">Read & Write</span>
|
||||
</div>
|
||||
<div class="permission-item">
|
||||
<span class="permission-icon">✓</span>
|
||||
<span class="permission-name">Issues</span>
|
||||
<span class="permission-value">Read & Write</span>
|
||||
</div>
|
||||
<div class="permission-item">
|
||||
<span class="permission-icon">✓</span>
|
||||
<span class="permission-name">Pull Requests</span>
|
||||
<span class="permission-value">Read & Write</span>
|
||||
</div>
|
||||
<div class="permission-item">
|
||||
<span class="permission-icon">✓</span>
|
||||
<span class="permission-name">Actions</span>
|
||||
<span class="permission-value">Read</span>
|
||||
</div>
|
||||
<div class="permission-item">
|
||||
<span class="permission-icon">✓</span>
|
||||
<span class="permission-name">Metadata</span>
|
||||
<span class="permission-value">Read</span>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Next Steps Card -->
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<span class="card-icon">📋</span>
|
||||
<h2>Next Steps</h2>
|
||||
</div>
|
||||
<p class="card-description">
|
||||
After creating your app, complete these steps:
|
||||
</p>
|
||||
|
||||
<div class="steps">
|
||||
<div class="step">
|
||||
<div class="step-number">1</div>
|
||||
<div class="step-content">
|
||||
<p>
|
||||
<strong>Generate a private key:</strong> In your app settings,
|
||||
scroll to "Private keys" and click "Generate a private key"
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="step">
|
||||
<div class="step-number">2</div>
|
||||
<div class="step-content">
|
||||
<p>
|
||||
<strong>Install the app:</strong> Click "Install App" and select
|
||||
the repositories where you want to use Claude
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="step">
|
||||
<div class="step-number">3</div>
|
||||
<div class="step-content">
|
||||
<p>
|
||||
<strong>Configure your workflow:</strong> Add your app's ID and
|
||||
private key to your repository secrets
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Manual Setup Card -->
|
||||
<div class="card">
|
||||
<div class="card-header">
|
||||
<span class="card-icon">⚙️</span>
|
||||
<h2>Manual Setup</h2>
|
||||
</div>
|
||||
<p class="card-description">
|
||||
If the buttons above don't work, you can manually create the app by
|
||||
copying the manifest JSON below:
|
||||
</p>
|
||||
|
||||
<div class="code-container">
|
||||
<div class="code-header">
|
||||
<span class="code-label">github-app-manifest.json</span>
|
||||
<button class="copy-btn" onclick="copyManifest()">Copy</button>
|
||||
</div>
|
||||
<div class="code-block" id="manifest-json"></div>
|
||||
</div>
|
||||
|
||||
<div class="steps">
|
||||
<div class="step">
|
||||
<div class="step-number">1</div>
|
||||
<div class="step-content">
|
||||
<p>Copy the manifest JSON above</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="step">
|
||||
<div class="step-number">2</div>
|
||||
<div class="step-content">
|
||||
<p>
|
||||
Go to
|
||||
<a
|
||||
href="https://github.com/settings/apps/new"
|
||||
target="_blank"
|
||||
style="color: var(--primary-dark); text-decoration: underline"
|
||||
>GitHub App Settings</a
|
||||
>
|
||||
</p>
|
||||
</div>
|
||||
</div>
|
||||
<div class="step">
|
||||
<div class="step-number">3</div>
|
||||
<div class="step-content">
|
||||
<p>Look for "Create from manifest" option and paste the JSON</p>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Warning Alert -->
|
||||
<div class="alert">
|
||||
<span class="alert-icon">⚠️</span>
|
||||
<div class="alert-content">
|
||||
<strong>Important:</strong> Keep your private key secure! Never commit
|
||||
it to your repository. Always use GitHub secrets to store sensitive
|
||||
credentials.
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<script>
|
||||
// Manifest configuration
|
||||
const manifest = {
|
||||
name: "Claude Code Custom App",
|
||||
description:
|
||||
"Custom GitHub App for Claude Code Action - AI-powered coding assistant for GitHub workflows",
|
||||
url: "https://github.com/anthropics/claude-code-action",
|
||||
hook_attributes: {
|
||||
url: "https://example.com/github/webhook",
|
||||
active: false,
|
||||
},
|
||||
redirect_url: "https://github.com/settings/apps/new",
|
||||
callback_urls: [],
|
||||
setup_url:
|
||||
"https://github.com/anthropics/claude-code-action/blob/main/docs/setup.md",
|
||||
public: false,
|
||||
default_permissions: {
|
||||
contents: "write",
|
||||
issues: "write",
|
||||
pull_requests: "write",
|
||||
actions: "read",
|
||||
metadata: "read",
|
||||
},
|
||||
default_events: [
|
||||
"issue_comment",
|
||||
"issues",
|
||||
"pull_request",
|
||||
"pull_request_review",
|
||||
"pull_request_review_comment",
|
||||
],
|
||||
};
|
||||
|
||||
// Populate manifest fields
|
||||
const manifestJson = JSON.stringify(manifest);
|
||||
const manifestJsonPretty = JSON.stringify(manifest, null, 2);
|
||||
|
||||
document.getElementById("personal-manifest").value = manifestJson;
|
||||
document.getElementById("org-manifest").value = manifestJson;
|
||||
|
||||
// Display formatted JSON
|
||||
const manifestDisplay = document.getElementById("manifest-json");
|
||||
manifestDisplay.textContent = manifestJsonPretty;
|
||||
|
||||
// Submit personal form
|
||||
function submitPersonalForm() {
|
||||
document.getElementById("personal-form").submit();
|
||||
}
|
||||
|
||||
// Submit organization form
|
||||
function submitOrgForm() {
|
||||
const orgName = document.getElementById("org-name").value.trim();
|
||||
if (!orgName) {
|
||||
alert("Please enter an organization name");
|
||||
document.getElementById("org-name").focus();
|
||||
return;
|
||||
}
|
||||
const form = document.getElementById("org-form");
|
||||
form.action = `https://github.com/organizations/${orgName}/settings/apps/new`;
|
||||
form.submit();
|
||||
}
|
||||
|
||||
// Allow Enter key to submit org form
|
||||
document
|
||||
.getElementById("org-name")
|
||||
.addEventListener("keypress", function (e) {
|
||||
if (e.key === "Enter") {
|
||||
e.preventDefault();
|
||||
submitOrgForm();
|
||||
}
|
||||
});
|
||||
|
||||
// Copy manifest to clipboard
|
||||
function copyManifest() {
|
||||
navigator.clipboard
|
||||
.writeText(manifestJsonPretty)
|
||||
.then(() => {
|
||||
const button = document.querySelector(".copy-btn");
|
||||
const originalText = button.textContent;
|
||||
button.textContent = "Copied!";
|
||||
button.classList.add("copied");
|
||||
setTimeout(() => {
|
||||
button.textContent = originalText;
|
||||
button.classList.remove("copied");
|
||||
}, 2000);
|
||||
})
|
||||
.catch(() => {
|
||||
// Fallback for older browsers
|
||||
const textArea = document.createElement("textarea");
|
||||
textArea.value = manifestJsonPretty;
|
||||
textArea.style.position = "fixed";
|
||||
textArea.style.opacity = "0";
|
||||
document.body.appendChild(textArea);
|
||||
textArea.select();
|
||||
try {
|
||||
document.execCommand("copy");
|
||||
const button = document.querySelector(".copy-btn");
|
||||
const originalText = button.textContent;
|
||||
button.textContent = "Copied!";
|
||||
button.classList.add("copied");
|
||||
setTimeout(() => {
|
||||
button.textContent = originalText;
|
||||
button.classList.remove("copied");
|
||||
}, 2000);
|
||||
} catch (err) {
|
||||
alert("Failed to copy. Please copy manually.");
|
||||
}
|
||||
document.body.removeChild(textArea);
|
||||
});
|
||||
}
|
||||
</script>
|
||||
</body>
|
||||
</html>
|
||||
@@ -15,13 +15,13 @@ The action automatically detects which mode to use based on your configuration:
|
||||
|
||||
This action supports the following GitHub events ([learn more GitHub event triggers](https://docs.github.com/en/actions/writing-workflows/choosing-when-your-workflow-runs/events-that-trigger-workflows)):
|
||||
|
||||
- `pull_request` - When PRs are opened or synchronized
|
||||
- `pull_request` or `pull_request_target` - When PRs are opened or synchronized
|
||||
- `issue_comment` - When comments are created on issues or PRs
|
||||
- `pull_request_comment` - When comments are made on PR diffs
|
||||
- `issues` - When issues are opened or assigned
|
||||
- `pull_request_review` - When PR reviews are submitted
|
||||
- `pull_request_review_comment` - When comments are made on PR reviews
|
||||
- `repository_dispatch` - Custom events triggered via API (coming soon)
|
||||
- `repository_dispatch` - Custom events triggered via API
|
||||
- `workflow_dispatch` - Manual workflow triggers (coming soon)
|
||||
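For example, a workflow that reacts to several of these triggers might declare the following (a minimal sketch — keep only the events you actually use):

```yaml
on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  pull_request_review:
    types: [submitted]
  issues:
    types: [opened, assigned]
```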
|
||||
## Automated Documentation Updates
|
||||
|
||||
@@ -61,68 +61,3 @@ For specialized use cases, you can fine-tune behavior using `claude_args`:
|
||||
--system-prompt "You are a code review specialist"
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
```
|
||||
|
||||
## Network Restrictions
|
||||
|
||||
For enhanced security, you can restrict Claude's network access to specific domains only. This feature is particularly useful for:
|
||||
|
||||
- Enterprise environments with strict security policies
|
||||
- Preventing access to external services
|
||||
- Limiting Claude to only your internal APIs and services
|
||||
|
||||
When `experimental_allowed_domains` is set, Claude can only access the domains you explicitly list. You'll need to include the appropriate provider domains based on your authentication method.
|
||||
|
||||
### Provider-Specific Examples
|
||||
|
||||
#### If using Anthropic API or subscription
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
# Or: claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
|
||||
experimental_allowed_domains: |
|
||||
.anthropic.com
|
||||
```
|
||||
|
||||
#### If using AWS Bedrock
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
use_bedrock: "true"
|
||||
experimental_allowed_domains: |
|
||||
bedrock.*.amazonaws.com
|
||||
bedrock-runtime.*.amazonaws.com
|
||||
```
|
||||
|
||||
#### If using Google Vertex AI
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
use_vertex: "true"
|
||||
experimental_allowed_domains: |
|
||||
*.googleapis.com
|
||||
vertexai.googleapis.com
|
||||
```
|
||||
|
||||
### Common GitHub Domains
|
||||
|
||||
In addition to your provider domains, you may need to include GitHub-related domains. For GitHub.com users, common domains include:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
experimental_allowed_domains: |
|
||||
.anthropic.com # For Anthropic API
|
||||
.github.com
|
||||
.githubusercontent.com
|
||||
ghcr.io
|
||||
.blob.core.windows.net
|
||||
```
|
||||
|
||||
For GitHub Enterprise users, replace the GitHub.com domains above with your enterprise domains (e.g., `.github.company.com`, `packages.company.com`, etc.).
|
||||
|
||||
To determine which domains your workflow needs, you can temporarily run without restrictions and monitor the network requests, or check your GitHub Enterprise configuration for the specific services you use.
|
||||
|
||||
67
docs/faq.md
@@ -28,6 +28,33 @@ permissions:
|
||||
|
||||
The OIDC token is required in order for the Claude GitHub app to function. If you wish to not use the GitHub app, you can instead provide a `github_token` input to the action for Claude to operate with. See the [Claude Code permissions documentation][perms] for more.
|
||||
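For example, a minimal sketch of supplying your own token instead of relying on the Claude GitHub app (the `MY_APP_TOKEN` secret name is hypothetical — use whichever token your setup provides):

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    github_token: ${{ secrets.MY_APP_TOKEN }} # hypothetical secret holding your own GitHub token
```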
|
||||
### Why am I getting '403 Resource not accessible by integration' errors?
|
||||
|
||||
This error occurs when the action tries to fetch the authenticated user information using a GitHub App installation token. GitHub App tokens have limited access and cannot access the `/user` endpoint, which causes this 403 error.
|
||||
|
||||
**Solution**: The action now includes `bot_id` and `bot_name` inputs that default to Claude's bot credentials. This avoids the need to fetch user information from the API.
|
||||
|
||||
For the default claude[bot]:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
# bot_id and bot_name have sensible defaults, no need to specify
|
||||
```
|
||||
|
||||
For custom bots, specify both:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
bot_id: "12345678" # Your bot's GitHub user ID
|
||||
bot_name: "my-bot" # Your bot's username
|
||||
```
|
||||
|
||||
This issue typically only affects agent/automation mode workflows. Interactive workflows (with @claude mentions) don't encounter this issue as they use the comment author's information.
|
||||
|
||||
## Claude's Capabilities and Limitations
|
||||
|
||||
### Why won't Claude update workflow files when I ask it to?
|
||||
@@ -100,7 +127,7 @@ For performance, Claude uses shallow clones:
|
||||
If you need the full history, you can configure the `actions/checkout` step in your workflow accordingly before calling Claude:
|
||||
|
||||
```yaml
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
  with:
    fetch-depth: 0 # will fetch full repo history
|
||||
```
|
||||
|
||||
@@ -186,6 +213,44 @@ Check the GitHub Action log for Claude's run for the full execution trace.
|
||||
|
||||
The trigger uses word boundaries, so `@claude` must be a complete word. Variations like `@claude-bot`, `@claude!`, or `claude@mention` won't work unless you customize the `trigger_phrase`.
|
||||
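If you do want a different trigger, a minimal sketch of overriding it:

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    trigger_phrase: "/claude" # replaces the default @claude mention
```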
|
||||
### How can I use custom executables in specialized environments?
|
||||
|
||||
For specialized environments like Nix, NixOS, or custom container setups where you need to provide your own executables:
|
||||
|
||||
**Using a custom Claude Code executable:**
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
path_to_claude_code_executable: "/path/to/custom/claude"
|
||||
# ... other inputs
|
||||
```
|
||||
|
||||
**Using a custom Bun executable:**
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
path_to_bun_executable: "/path/to/custom/bun"
|
||||
# ... other inputs
|
||||
```
|
||||
|
||||
**Common use cases:**
|
||||
|
||||
- Nix/NixOS environments where packages are managed differently
|
||||
- Docker containers with pre-installed executables
|
||||
- Custom build environments with specific version requirements
|
||||
- Debugging specific issues with particular versions
|
||||
|
||||
**Important notes:**
|
||||
|
||||
- Using an older Claude Code version may cause problems if the action uses newer features
|
||||
- Using an incompatible Bun version may cause runtime errors
|
||||
- The action will skip automatic installation when custom paths are provided
|
||||
- Ensure the custom executables are available in your GitHub Actions environment
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Always specify permissions explicitly** in your workflow file
|
||||
|
||||
@@ -14,18 +14,19 @@ This guide helps you migrate from Claude Code Action v0.x to v1.0. The new versi
|
||||
|
||||
The following inputs have been deprecated and replaced:
|
||||
|
||||
| Deprecated Input | Replacement | Notes |
|
||||
| --------------------- | -------------------------------- | --------------------------------------------- |
|
||||
| `mode` | Auto-detected | Action automatically chooses based on context |
|
||||
| `direct_prompt` | `prompt` | Direct drop-in replacement |
|
||||
| `override_prompt` | `prompt` | Use GitHub context variables instead |
|
||||
| `custom_instructions` | `claude_args: --system-prompt` | Move to CLI arguments |
|
||||
| `max_turns` | `claude_args: --max-turns` | Use CLI format |
|
||||
| `model` | `claude_args: --model` | Specify via CLI |
|
||||
| `allowed_tools` | `claude_args: --allowedTools` | Use CLI format |
|
||||
| `disallowed_tools` | `claude_args: --disallowedTools` | Use CLI format |
|
||||
| `claude_env` | `settings` with env object | Use settings JSON |
|
||||
| `mcp_config` | `claude_args: --mcp-config` | Pass MCP config via CLI arguments |
|
||||
| Deprecated Input | Replacement | Notes |
|
||||
| --------------------- | ------------------------------------ | --------------------------------------------- |
|
||||
| `mode` | Auto-detected | Action automatically chooses based on context |
|
||||
| `direct_prompt` | `prompt` | Direct drop-in replacement |
|
||||
| `override_prompt` | `prompt` | Use GitHub context variables instead |
|
||||
| `custom_instructions` | `claude_args: --system-prompt` | Move to CLI arguments |
|
||||
| `max_turns` | `claude_args: --max-turns` | Use CLI format |
|
||||
| `model` | `claude_args: --model` | Specify via CLI |
|
||||
| `allowed_tools` | `claude_args: --allowedTools` | Use CLI format |
|
||||
| `disallowed_tools` | `claude_args: --disallowedTools` | Use CLI format |
|
||||
| `claude_env` | `settings` with env object | Use settings JSON |
|
||||
| `mcp_config` | `claude_args: --mcp-config` | Pass MCP config via CLI arguments |
|
||||
| `timeout_minutes` | Use GitHub Actions `timeout-minutes` | Configure at job level instead of input level |
|
||||
|
||||
## Migration Examples
|
||||
|
||||
@@ -198,6 +199,30 @@ The `track_progress` input only works with these GitHub events:
|
||||
}
|
||||
```
|
||||
|
||||
### Timeout Configuration
|
||||
|
||||
**Before (v0.x):**
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@beta
|
||||
with:
|
||||
timeout_minutes: 30
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
```
|
||||
|
||||
**After (v1.0):**
|
||||
|
||||
```yaml
|
||||
jobs:
|
||||
claude-task:
|
||||
runs-on: ubuntu-latest
|
||||
timeout-minutes: 30 # Moved to job level
|
||||
steps:
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
```
|
||||
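### Environment Variables

The `claude_env` replacement follows the same pattern. A minimal sketch, assuming the v0.x `claude_env` YAML key/value format and a `settings` JSON with an `env` object as described in the table above:

**Before (v0.x):**

```yaml
- uses: anthropics/claude-code-action@beta
  with:
    claude_env: |
      NODE_ENV: test
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```

**After (v1.0):**

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    settings: |
      {
        "env": {
          "NODE_ENV": "test"
        }
      }
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
```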
|
||||
## How Mode Detection Works
|
||||
|
||||
The action now automatically detects the appropriate mode:
|
||||
@@ -312,6 +337,7 @@ You can also pass MCP configuration from a file:
|
||||
- [ ] Convert `disallowed_tools` to `claude_args` with `--disallowedTools`
|
||||
- [ ] Move `claude_env` to `settings` JSON format
|
||||
- [ ] Move `mcp_config` to `claude_args` with `--mcp-config`
|
||||
- [ ] Replace `timeout_minutes` with GitHub Actions `timeout-minutes` at job level
|
||||
- [ ] **Optional**: Add `track_progress: true` if you need tracking comments in automation mode
|
||||
- [ ] Test workflow in a non-production environment
|
||||
|
||||
|
||||
125
docs/security.md
@@ -4,21 +4,108 @@
|
||||
|
||||
- **Repository Access**: The action can only be triggered by users with write access to the repository
|
||||
- **Bot User Control**: By default, GitHub Apps and bots cannot trigger this action for security reasons. Use the `allowed_bots` parameter to enable specific bots or all bots
|
||||
- **⚠️ Non-Write User Access (RISKY)**: The `allowed_non_write_users` parameter allows bypassing the write permission requirement. **This is a significant security risk and should only be used for workflows with extremely limited permissions** (e.g., issue labeling workflows that only have `issues: write` permission). This feature:
|
||||
- Only works when `github_token` is provided as input (not with GitHub App authentication)
|
||||
- Accepts either a comma-separated list of specific usernames or `*` to allow all users
|
||||
- **Should be used with extreme caution** as it bypasses the primary security mechanism of this action
|
||||
- Is designed for automation workflows where user permissions are already restricted by the workflow's permission scope (see the workflow sketch after this list)
|
||||
- **Token Permissions**: The GitHub app receives only a short-lived token scoped specifically to the repository it's operating in
|
||||
- **No Cross-Repository Access**: Each action invocation is limited to the repository where it was triggered
|
||||
- **Limited Scope**: The token cannot access other repositories or perform actions beyond the configured permissions
|
||||
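As an illustration of the risky bypass described above, a minimal sketch of a tightly scoped triage job (the prompt text is only an example):

```yaml
on:
  issues:
    types: [opened]

jobs:
  triage:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      issues: write # the only write scope this job carries
    steps:
      - uses: anthropics/claude-code-action@v1
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
          github_token: ${{ secrets.GITHUB_TOKEN }} # the bypass only works with a github_token input
          allowed_non_write_users: "*" # bypasses the write-permission check for this narrow workflow
          prompt: "Apply appropriate labels to issue #${{ github.event.issue.number }}"
```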
|
||||
## Pull Request Creation
|
||||
|
||||
In its default configuration, **Claude does not create pull requests automatically** when responding to `@claude` mentions. Instead:
|
||||
|
||||
- Claude commits code changes to a new branch
|
||||
- Claude provides a **link to the GitHub PR creation page** in its response
|
||||
- **The user must click the link and create the PR themselves**, ensuring human oversight before any code is proposed for merging
|
||||
|
||||
This design ensures that users retain full control over what pull requests are created and can review the changes before initiating the PR workflow.
|
||||
|
||||
## ⚠️ Prompt Injection Risks
|
||||
|
||||
**Beware of potential hidden markdown when tagging Claude on untrusted content.** External contributors may include hidden instructions through HTML comments, invisible characters, hidden attributes, or other techniques. The action sanitizes content by stripping HTML comments, invisible characters, markdown image alt text, hidden HTML attributes, and HTML entities, but new bypass techniques may emerge. We recommend reviewing the raw content of all input coming from external contributors before allowing Claude to process it.
|
||||
|
||||
## GitHub App Permissions
|
||||
|
||||
The [Claude Code GitHub app](https://github.com/apps/claude) requires these permissions:
|
||||
The [Claude Code GitHub app](https://github.com/apps/claude) requests the following permissions:
|
||||
|
||||
- **Pull Requests**: Read and write to create PRs and push changes
|
||||
- **Issues**: Read and write to respond to issues
|
||||
- **Contents**: Read and write to modify repository files
|
||||
### Currently Used Permissions
|
||||
|
||||
- **Contents** (Read & Write): For reading repository files and creating branches
|
||||
- **Pull Requests** (Read & Write): For reading PR data and creating/updating pull requests
|
||||
- **Issues** (Read & Write): For reading issue data and updating issue comments
|
||||
|
||||
### Permissions for Future Features
|
||||
|
||||
The following permissions are requested but not yet actively used. These will enable planned features in future releases:
|
||||
|
||||
- **Discussions** (Read & Write): For interaction with GitHub Discussions
|
||||
- **Actions** (Read): For accessing workflow run data and logs
|
||||
- **Checks** (Read): For reading check run results
|
||||
- **Workflows** (Read & Write): For triggering and managing GitHub Actions workflows
|
||||
|
||||
## Commit Signing
|
||||
|
||||
All commits made by Claude through this action are automatically signed with commit signatures. This ensures the authenticity and integrity of commits, providing a verifiable trail of changes made by the action.
|
||||
By default, commits made by Claude are unsigned. You can enable commit signing using one of two methods:
|
||||
|
||||
### Option 1: GitHub API Commit Signing (use_commit_signing)
|
||||
|
||||
This uses GitHub's API to create commits, which automatically signs them as verified from the GitHub App:
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@main
|
||||
with:
|
||||
use_commit_signing: true
|
||||
```
|
||||
|
||||
This is the simplest option and requires no additional setup. However, because it uses the GitHub API instead of git CLI, it cannot perform complex git operations like rebasing, cherry-picking, or interactive history manipulation.
|
||||
|
||||
### Option 2: SSH Signing Key (ssh_signing_key)
|
||||
|
||||
This uses an SSH key to sign commits via git CLI. Use this option when you need both signed commits AND standard git operations (rebasing, cherry-picking, etc.):
|
||||
|
||||
```yaml
|
||||
- uses: anthropics/claude-code-action@main
|
||||
with:
|
||||
ssh_signing_key: ${{ secrets.SSH_SIGNING_KEY }}
|
||||
bot_id: "YOUR_GITHUB_USER_ID"
|
||||
bot_name: "YOUR_GITHUB_USERNAME"
|
||||
```
|
||||
|
||||
Commits will show as verified and attributed to the GitHub account that owns the signing key.
|
||||
|
||||
**Setup steps:**
|
||||
|
||||
1. Generate an SSH key pair for signing:
|
||||
|
||||
```bash
|
||||
ssh-keygen -t ed25519 -f ~/.ssh/signing_key -N "" -C "commit signing key"
|
||||
```
|
||||
|
||||
2. Add the **public key** to your GitHub account:
|
||||
|
||||
- Go to GitHub → Settings → SSH and GPG keys
|
||||
- Click "New SSH key"
|
||||
- Select **Key type: Signing Key** (important)
|
||||
- Paste the contents of `~/.ssh/signing_key.pub`
|
||||
|
||||
3. Add the **private key** to your repository secrets:
|
||||
|
||||
- Go to your repo → Settings → Secrets and variables → Actions
|
||||
- Create a new secret named `SSH_SIGNING_KEY`
|
||||
- Paste the contents of `~/.ssh/signing_key`
|
||||
|
||||
4. Get your GitHub user ID:
|
||||
|
||||
```bash
|
||||
gh api users/YOUR_USERNAME --jq '.id'
|
||||
```
|
||||
|
||||
5. Update your workflow with `bot_id` and `bot_name` matching the account where you added the signing key.
|
||||
|
||||
**Note:** If both `ssh_signing_key` and `use_commit_signing` are provided, `ssh_signing_key` takes precedence.
|
||||
|
||||
## ⚠️ Authentication Protection
|
||||
|
||||
@@ -36,3 +123,31 @@ claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
|
||||
anthropic_api_key: "sk-ant-api03-..." # Exposed and vulnerable!
|
||||
claude_code_oauth_token: "oauth_token_..." # Exposed and vulnerable!
|
||||
```
|
||||
|
||||
## ⚠️ Full Output Security Warning
|
||||
|
||||
The `show_full_output` option is **disabled by default** for security reasons. When enabled, it outputs ALL Claude Code messages including:
|
||||
|
||||
- Full outputs from tool executions (e.g., `ps`, `env`, file reads)
|
||||
- API responses that may contain tokens or credentials
|
||||
- File contents that may include secrets
|
||||
- Command outputs that may expose sensitive system information
|
||||
|
||||
**These logs are publicly visible in GitHub Actions for public repositories!**
|
||||
|
||||
### Automatic Enabling in Debug Mode
|
||||
|
||||
Full output is **automatically enabled** when GitHub Actions debug mode is active (when `ACTIONS_STEP_DEBUG` secret is set to `true`). This helps with debugging but carries the same security risks.
|
||||
|
||||
### When to Enable Full Output
|
||||
|
||||
Only enable `show_full_output: true` or GitHub Actions debug mode when:
|
||||
|
||||
- Working in a private repository with controlled access
|
||||
- Debugging issues in a non-production environment
|
||||
- You have verified no secrets will be exposed in the output
|
||||
- You understand the security implications
|
||||
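When those conditions are met, it can be enabled explicitly (a minimal sketch):

```yaml
- uses: anthropics/claude-code-action@v1
  with:
    anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
    show_full_output: true # only in private repos or other controlled environments
```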
|
||||
### Recommended Practice
|
||||
|
||||
For debugging, prefer using `show_full_output: false` (the default) and rely on Claude Code's sanitized output, which shows only essential information like errors and completion status without exposing sensitive data.
|
||||
|
||||
@@ -20,7 +20,48 @@ If you prefer not to install the official Claude app, you can create your own Gi
|
||||
- Organization policies prevent installing third-party apps
|
||||
- You're using AWS Bedrock or Google Vertex AI
|
||||
|
||||
**Steps to create and use a custom GitHub App:**
|
||||
### Option 1: Quick Setup with App Manifest (Recommended)
|
||||
|
||||
The fastest way to create a custom GitHub App is using our pre-configured manifest. This ensures all permissions are correctly set up with a single click.
|
||||
|
||||
**Steps:**
|
||||
|
||||
1. **Create the app:**
|
||||
|
||||
**🚀 [Download the Quick Setup Tool](./create-app.html)** (Right-click → "Save Link As" or "Download Linked File")
|
||||
|
||||
After downloading, open `create-app.html` in your web browser:
|
||||
|
||||
- **For Personal Accounts:** Click the "Create App for Personal Account" button
|
||||
- **For Organizations:** Enter your organization name and click "Create App for Organization"
|
||||
|
||||
The tool will automatically configure all required permissions and submit the manifest.
|
||||
|
||||
Alternatively, you can use the manifest file directly:
|
||||
|
||||
- Use the [`github-app-manifest.json`](../github-app-manifest.json) file from this repository
|
||||
- Visit https://github.com/settings/apps/new (for personal) or your organization's app settings
|
||||
- Look for the "Create from manifest" option and paste the JSON content
|
||||
|
||||
2. **Complete the creation flow:**
|
||||
|
||||
- GitHub will show you a preview of the app configuration
|
||||
- Confirm the app name (you can customize it)
|
||||
- Click "Create GitHub App"
|
||||
- The app will be created with all required permissions automatically configured
|
||||
|
||||
3. **Generate and download a private key:**
|
||||
|
||||
- After creating the app, you'll be redirected to the app settings
|
||||
- Scroll down to "Private keys"
|
||||
- Click "Generate a private key"
|
||||
- Download the `.pem` file (keep this secure!)
|
||||
|
||||
4. **Continue with installation** - Skip to step 3 in the manual setup below to install the app and configure your workflow.
|
||||
|
||||
### Option 2: Manual Setup
|
||||
|
||||
If you prefer to configure the app manually or need custom permissions:
|
||||
|
||||
1. **Create a new GitHub App:**
|
||||
|
||||
@@ -76,7 +117,7 @@ If you prefer not to install the official Claude app, you can create your own Gi
|
||||
private-key: ${{ secrets.APP_PRIVATE_KEY }}
|
||||
|
||||
# Use Claude with your custom app's token
|
||||
- uses: anthropics/claude-code-action@beta
|
||||
- uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
github_token: ${{ steps.app-token.outputs.token }}
|
||||
|
||||
@@ -35,7 +35,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
@@ -89,7 +89,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
@@ -153,7 +153,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
@@ -211,7 +211,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
@@ -268,7 +268,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
@@ -344,7 +344,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
@@ -456,7 +456,7 @@ jobs:
|
||||
pull-requests: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
ref: ${{ github.event.pull_request.head.ref }}
|
||||
fetch-depth: 0
|
||||
@@ -513,7 +513,7 @@ jobs:
|
||||
security-events: write
|
||||
id-token: write
|
||||
steps:
|
||||
- uses: actions/checkout@v4
|
||||
- uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
|
||||
125
docs/usage.md
@@ -32,6 +32,11 @@ jobs:
|
||||
# --max-turns 10
|
||||
# --model claude-4-0-sonnet-20250805
|
||||
|
||||
# Optional: add custom plugin marketplaces
|
||||
# plugin_marketplaces: "https://github.com/user/marketplace1.git\nhttps://github.com/user/marketplace2.git"
|
||||
# Optional: install Claude Code plugins
|
||||
# plugins: "code-review@claude-code-plugins\nfeature-dev@claude-code-plugins"
|
||||
|
||||
# Optional: add custom trigger phrase (default: @claude)
|
||||
# trigger_phrase: "/claude"
|
||||
# Optional: add assignee trigger for issues
|
||||
@@ -47,28 +52,35 @@ jobs:
|
||||
|
||||
## Inputs
|
||||
|
||||
| Input | Description | Required | Default |
|
||||
| ------------------------------ | -------------------------------------------------------------------------------------------------------------------- | -------- | --------- |
|
||||
| `anthropic_api_key` | Anthropic API key (required for direct API, not needed for Bedrock/Vertex) | No\* | - |
|
||||
| `claude_code_oauth_token` | Claude Code OAuth token (alternative to anthropic_api_key) | No\* | - |
|
||||
| `prompt` | Instructions for Claude. Can be a direct prompt or custom template for automation workflows | No | - |
|
||||
| `track_progress` | Force tag mode with tracking comments. Only works with specific PR/issue events. Preserves GitHub context | No | `false` |
|
||||
| `claude_args` | Additional arguments to pass directly to Claude CLI (e.g., `--max-turns 10 --model claude-4-0-sonnet-20250805`) | No | "" |
|
||||
| `base_branch` | The base branch to use for creating new branches (e.g., 'main', 'develop') | No | - |
|
||||
| `use_sticky_comment` | Use just one comment to deliver PR comments (only applies for pull_request event workflows) | No | `false` |
|
||||
| `github_token` | GitHub token for Claude to operate with. **Only include this if you're connecting a custom GitHub app of your own!** | No | - |
|
||||
| `use_bedrock` | Use Amazon Bedrock with OIDC authentication instead of direct Anthropic API | No | `false` |
|
||||
| `use_vertex` | Use Google Vertex AI with OIDC authentication instead of direct Anthropic API | No | `false` |
|
||||
| `mcp_config` | Additional MCP configuration (JSON string) that merges with the built-in GitHub MCP servers | No | "" |
|
||||
| `assignee_trigger` | The assignee username that triggers the action (e.g. @claude). Only used for issue assignment | No | - |
|
||||
| `label_trigger` | The label name that triggers the action when applied to an issue (e.g. "claude") | No | - |
|
||||
| `trigger_phrase` | The trigger phrase to look for in comments, issue/PR bodies, and issue titles | No | `@claude` |
|
||||
| `branch_prefix` | The prefix to use for Claude branches (defaults to 'claude/', use 'claude-' for dash format) | No | `claude/` |
|
||||
| `settings` | Claude Code settings as JSON string or path to settings JSON file | No | "" |
|
||||
| `additional_permissions` | Additional permissions to enable. Currently supports 'actions: read' for viewing workflow results | No | "" |
|
||||
| `experimental_allowed_domains` | Restrict network access to these domains only (newline-separated). | No | "" |
|
||||
| `use_commit_signing` | Enable commit signing using GitHub's commit signature verification. When false, Claude uses standard git commands | No | `false` |
|
||||
| `allowed_bots` | Comma-separated list of allowed bot usernames, or '\*' to allow all bots. Empty string (default) allows no bots | No | "" |
|
||||
| Input | Description | Required | Default |
|
||||
| -------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------- | ------------- |
|
||||
| `anthropic_api_key` | Anthropic API key (required for direct API, not needed for Bedrock/Vertex) | No\* | - |
|
||||
| `claude_code_oauth_token` | Claude Code OAuth token (alternative to anthropic_api_key) | No\* | - |
|
||||
| `prompt` | Instructions for Claude. Can be a direct prompt or custom template for automation workflows | No | - |
|
||||
| `track_progress` | Force tag mode with tracking comments. Only works with specific PR/issue events. Preserves GitHub context | No | `false` |
|
||||
| `include_fix_links` | Include 'Fix this' links in PR code review feedback that open Claude Code with context to fix the identified issue | No | `true` |
|
||||
| `claude_args` | Additional [arguments to pass directly to Claude CLI](https://docs.claude.com/en/docs/claude-code/cli-reference#cli-flags) (e.g., `--max-turns 10 --model claude-4-0-sonnet-20250805`) | No | "" |
|
||||
| `base_branch` | The base branch to use for creating new branches (e.g., 'main', 'develop') | No | - |
|
||||
| `use_sticky_comment` | Use just one comment to deliver PR comments (only applies for pull_request event workflows) | No | `false` |
|
||||
| `github_token` | GitHub token for Claude to operate with. **Only include this if you're connecting a custom GitHub app of your own!** | No | - |
|
||||
| `use_bedrock` | Use Amazon Bedrock with OIDC authentication instead of direct Anthropic API | No | `false` |
|
||||
| `use_vertex` | Use Google Vertex AI with OIDC authentication instead of direct Anthropic API | No | `false` |
|
||||
| `assignee_trigger` | The assignee username that triggers the action (e.g. @claude). Only used for issue assignment | No | - |
|
||||
| `label_trigger` | The label name that triggers the action when applied to an issue (e.g. "claude") | No | - |
|
||||
| `trigger_phrase` | The trigger phrase to look for in comments, issue/PR bodies, and issue titles | No | `@claude` |
|
||||
| `branch_prefix` | The prefix to use for Claude branches (defaults to 'claude/', use 'claude-' for dash format) | No | `claude/` |
|
||||
| `settings` | Claude Code settings as JSON string or path to settings JSON file | No | "" |
|
||||
| `additional_permissions` | Additional permissions to enable. Currently supports 'actions: read' for viewing workflow results | No | "" |
|
||||
| `use_commit_signing` | Enable commit signing using GitHub's API. Simple but cannot perform complex git operations like rebasing. See [Security](./security.md#commit-signing) | No | `false` |
|
||||
| `ssh_signing_key` | SSH private key for signing commits. Enables signed commits with full git CLI support (rebasing, etc.). See [Security](./security.md#commit-signing) | No | "" |
|
||||
| `bot_id` | GitHub user ID to use for git operations (defaults to Claude's bot ID). Required with `ssh_signing_key` for verified commits | No | `41898282` |
|
||||
| `bot_name` | GitHub username to use for git operations (defaults to Claude's bot name). Required with `ssh_signing_key` for verified commits | No | `claude[bot]` |
|
||||
| `allowed_bots` | Comma-separated list of allowed bot usernames, or '\*' to allow all bots. Empty string (default) allows no bots | No | "" |
|
||||
| `allowed_non_write_users` | **⚠️ RISKY**: Comma-separated list of usernames to allow without write permissions, or '\*' for all users. Only works with `github_token` input. See [Security](./security.md) | No | "" |
|
||||
| `path_to_claude_code_executable` | Optional path to a custom Claude Code executable. Skips automatic installation. Useful for Nix, custom containers, or specialized environments | No | "" |
|
||||
| `path_to_bun_executable` | Optional path to a custom Bun executable. Skips automatic Bun installation. Useful for Nix, custom containers, or specialized environments | No | "" |
|
||||
| `plugin_marketplaces` | Newline-separated list of Claude Code plugin marketplace Git URLs to install from (e.g., see example in workflow above). Marketplaces are added before plugin installation | No | "" |
|
||||
| `plugins` | Newline-separated list of Claude Code plugin names to install (e.g., see example in workflow above). Plugins are installed before Claude Code execution | No | "" |
|
||||
|
||||
### Deprecated Inputs
|
||||
|
||||
@@ -85,6 +97,7 @@ These inputs are deprecated and will be removed in a future version:
|
||||
| `fallback_model` | **DEPRECATED**: Use `claude_args` with fallback configuration | Configure fallback in `claude_args` or `settings` |
|
||||
| `allowed_tools` | **DEPRECATED**: Use `claude_args` with `--allowedTools` instead | Use `claude_args: "--allowedTools Edit,Read,Write"` |
|
||||
| `disallowed_tools` | **DEPRECATED**: Use `claude_args` with `--disallowedTools` instead | Use `claude_args: "--disallowedTools WebSearch"` |
|
||||
| `mcp_config` | **DEPRECATED**: Use `claude_args` with `--mcp-config` instead | Use `claude_args: "--mcp-config '{...}'"` |
|
||||
| `claude_env` | **DEPRECATED**: Use `settings` with env configuration | Configure environment in `settings` JSON |
|
||||
|
||||
\*Required when using direct Anthropic API (default and when not using Bedrock or Vertex)
|
||||
@@ -173,6 +186,74 @@ For a comprehensive guide on migrating from v0.x to v1.0, including step-by-step
|
||||
Focus on the changed files in this PR.
|
||||
```
|
||||
|
||||
## Structured Outputs
|
||||
|
||||
Get validated JSON results from Claude that automatically become GitHub Action outputs. This enables building complex automation workflows where Claude analyzes data and subsequent steps use the results.
|
||||
|
||||
### Basic Example
|
||||
|
||||
```yaml
|
||||
- name: Detect flaky tests
|
||||
id: analyze
|
||||
uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
prompt: |
|
||||
Check the CI logs and determine if this is a flaky test.
|
||||
Return: is_flaky (boolean), confidence (0-1), summary (string)
|
||||
claude_args: |
|
||||
--json-schema '{"type":"object","properties":{"is_flaky":{"type":"boolean"},"confidence":{"type":"number"},"summary":{"type":"string"}},"required":["is_flaky"]}'
|
||||
|
||||
- name: Retry if flaky
|
||||
if: fromJSON(steps.analyze.outputs.structured_output).is_flaky == true
|
||||
run: gh workflow run CI
|
||||
```
|
||||
|
||||
### How It Works
|
||||
|
||||
1. **Define Schema**: Provide a JSON schema via `--json-schema` flag in `claude_args`
|
||||
2. **Claude Executes**: Claude uses tools to complete your task
|
||||
3. **Validated Output**: Result is validated against your schema
|
||||
4. **JSON Output**: All fields are returned in a single `structured_output` JSON string
|
||||
|
||||
### Accessing Structured Outputs
|
||||
|
||||
All structured output fields are available in the `structured_output` output as a JSON string:
|
||||
|
||||
**In GitHub Actions expressions:**
|
||||
|
||||
```yaml
|
||||
if: fromJSON(steps.analyze.outputs.structured_output).is_flaky == true
|
||||
run: |
|
||||
CONFIDENCE=${{ fromJSON(steps.analyze.outputs.structured_output).confidence }}
|
||||
```
|
||||
|
||||
**In bash with jq:**
|
||||
|
||||
```yaml
|
||||
- name: Process results
|
||||
run: |
|
||||
OUTPUT='${{ steps.analyze.outputs.structured_output }}'
|
||||
IS_FLAKY=$(echo "$OUTPUT" | jq -r '.is_flaky')
|
||||
SUMMARY=$(echo "$OUTPUT" | jq -r '.summary')
|
||||
```
|
||||
|
||||
**Note**: Due to GitHub Actions limitations, composite actions cannot expose dynamic outputs. All fields are bundled in the single `structured_output` JSON string.
|
||||
|
||||
### Complete Example
|
||||
|
||||
See `examples/test-failure-analysis.yml` for a working example that:
|
||||
|
||||
- Detects flaky test failures
|
||||
- Uses confidence thresholds in conditionals
|
||||
- Auto-retries workflows
|
||||
- Comments on PRs
|
||||
|
||||
### Documentation
|
||||
|
||||
For complete details on JSON Schema syntax and Agent SDK structured outputs:
|
||||
https://docs.claude.com/en/docs/agent-sdk/structured-outputs
|
||||
|
||||
## Ways to Tag @claude
|
||||
|
||||
These examples show how to interact with Claude using comments in PRs and issues. By default, Claude will be triggered anytime you mention `@claude`, but you can customize the exact trigger phrase using the `trigger_phrase` input in the workflow.
|
||||
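A typical triggering comment might look like this (the file path is only an illustration):

```
@claude Can you fix the TypeError in src/utils/date-parser.ts and add a regression test?
```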
|
||||
@@ -22,7 +22,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- name: Checkout code
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
ref: ${{ github.event.workflow_run.head_branch }}
|
||||
fetch-depth: 0
|
||||
|
||||
@@ -26,7 +26,7 @@ jobs:
|
||||
actions: read # Required for Claude to read CI results on PRs
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
|
||||
@@ -15,7 +15,7 @@ jobs:
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
|
||||
@@ -1,4 +1,5 @@
|
||||
name: Issue Triage
|
||||
name: Claude Issue Triage
|
||||
description: Run Claude Code for issue triage in GitHub Actions
|
||||
on:
|
||||
issues:
|
||||
types: [opened]
|
||||
@@ -10,67 +11,19 @@ jobs:
|
||||
permissions:
|
||||
contents: read
|
||||
issues: write
|
||||
id-token: write
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 0
|
||||
|
||||
- name: Triage issue with Claude
|
||||
- name: Run Claude Code for Issue Triage
|
||||
uses: anthropics/claude-code-action@v1
|
||||
with:
|
||||
prompt: |
|
||||
You're an issue triage assistant for GitHub issues. Your task is to analyze the issue and select appropriate labels from the provided list.
|
||||
|
||||
IMPORTANT: Don't post any comments or messages to the issue. Your only action should be to apply labels.
|
||||
|
||||
Issue Information:
|
||||
- REPO: ${{ github.repository }}
|
||||
- ISSUE_NUMBER: ${{ github.event.issue.number }}
|
||||
|
||||
TASK OVERVIEW:
|
||||
|
||||
1. First, fetch the list of labels available in this repository by running: `gh label list`. Run exactly this command with nothing else.
|
||||
|
||||
2. Next, use the GitHub tools to get context about the issue:
|
||||
- You have access to these tools:
|
||||
- mcp__github__get_issue: Use this to retrieve the current issue's details including title, description, and existing labels
|
||||
- mcp__github__get_issue_comments: Use this to read any discussion or additional context provided in the comments
|
||||
- mcp__github__update_issue: Use this to apply labels to the issue (do not use this for commenting)
|
||||
- mcp__github__search_issues: Use this to find similar issues that might provide context for proper categorization and to identify potential duplicate issues
|
||||
- mcp__github__list_issues: Use this to understand patterns in how other issues are labeled
|
||||
- Start by using mcp__github__get_issue to get the issue details
|
||||
|
||||
3. Analyze the issue content, considering:
|
||||
- The issue title and description
|
||||
- The type of issue (bug report, feature request, question, etc.)
|
||||
- Technical areas mentioned
|
||||
- Severity or priority indicators
|
||||
- User impact
|
||||
- Components affected
|
||||
|
||||
4. Select appropriate labels from the available labels list provided above:
|
||||
- Choose labels that accurately reflect the issue's nature
|
||||
- Be specific but comprehensive
|
||||
- Select priority labels if you can determine urgency (high-priority, med-priority, or low-priority)
|
||||
- Consider platform labels (android, ios) if applicable
|
||||
- If you find similar issues using mcp__github__search_issues, consider using a "duplicate" label if appropriate. Only do so if the issue is a duplicate of another OPEN issue.
|
||||
|
||||
5. Apply the selected labels:
|
||||
- Use mcp__github__update_issue to apply your selected labels
|
||||
- DO NOT post any comments explaining your decision
|
||||
- DO NOT communicate directly with users
|
||||
- If no labels are clearly applicable, do not apply any labels
|
||||
|
||||
IMPORTANT GUIDELINES:
|
||||
- Be thorough in your analysis
|
||||
- Only select labels from the provided list above
|
||||
- DO NOT post any comments to the issue
|
||||
- Your ONLY action should be to apply labels using mcp__github__update_issue
|
||||
- It's okay to not add any labels if none are clearly applicable
|
||||
# NOTE: /label-issue here requires a .claude/commands/label-issue.md file in your repo (see this repo's .claude directory for an example)
|
||||
prompt: "/label-issue REPO: ${{ github.repository }} ISSUE_NUMBER${{ github.event.issue.number }}"
|
||||
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
claude_args: |
|
||||
--allowedTools "Bash(gh label list),mcp__github__get_issue,mcp__github__get_issue_comments,mcp__github__update_issue,mcp__github__search_issues,mcp__github__list_issues"
|
||||
allowed_non_write_users: "*" # Required for issue triage workflow, if users without repo write access create issues
|
||||
github_token: ${{ secrets.GITHUB_TOKEN }}
|
||||
|
||||
@@ -23,7 +23,7 @@ jobs:
|
||||
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 2 # Need at least 2 commits to analyze the latest
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@ jobs:
|
||||
id-token: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
|
||||
@@ -18,7 +18,7 @@ jobs:
|
||||
id-token: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
|
||||
@@ -19,7 +19,7 @@ jobs:
|
||||
id-token: write
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
uses: actions/checkout@v5
|
||||
with:
|
||||
fetch-depth: 1
|
||||
|
||||
|
||||
114
examples/test-failure-analysis.yml
Normal file
@@ -0,0 +1,114 @@
|
||||
name: Auto-Retry Flaky Tests
|
||||
|
||||
# This example demonstrates using structured outputs to detect flaky test failures
|
||||
# and automatically retry them, reducing noise from intermittent failures.
|
||||
#
|
||||
# Use case: When CI fails, automatically determine if it's likely flaky and retry if so.
|
||||
|
||||
on:
|
||||
workflow_run:
|
||||
workflows: ["CI"]
|
||||
types: [completed]
|
||||
|
||||
permissions:
|
||||
contents: read
|
||||
actions: write
|
||||
|
||||
jobs:
|
||||
detect-flaky:
|
||||
runs-on: ubuntu-latest
|
||||
if: ${{ github.event.workflow_run.conclusion == 'failure' }}
|
||||
steps:
|
||||
- name: Checkout repository
|
||||
uses: actions/checkout@v4
|
||||
|
||||
- name: Detect flaky test failures
|
||||
id: detect
|
||||
uses: anthropics/claude-code-action@main
|
||||
with:
|
||||
anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}
|
||||
prompt: |
|
||||
The CI workflow failed: ${{ github.event.workflow_run.html_url }}
|
||||
|
||||
Check the logs: gh run view ${{ github.event.workflow_run.id }} --log-failed
|
||||
|
||||
Determine if this looks like a flaky test failure by checking for:
|
||||
- Timeout errors
|
||||
- Race conditions
|
||||
- Network errors
|
||||
- "Expected X but got Y" intermittent failures
|
||||
- Tests that passed in previous commits
|
||||
|
||||
Return:
|
||||
- is_flaky: true if likely flaky, false if real bug
|
||||
- confidence: number 0-1 indicating confidence level
|
||||
- summary: brief one-sentence explanation
|
||||
claude_args: |
|
||||
--json-schema '{"type":"object","properties":{"is_flaky":{"type":"boolean","description":"Whether this appears to be a flaky test failure"},"confidence":{"type":"number","minimum":0,"maximum":1,"description":"Confidence level in the determination"},"summary":{"type":"string","description":"One-sentence explanation of the failure"}},"required":["is_flaky","confidence","summary"]}'
|
||||
|
||||
# Auto-retry only if flaky AND high confidence (>= 0.7)
|
||||
- name: Retry flaky tests
|
||||
if: |
|
||||
fromJSON(steps.detect.outputs.structured_output).is_flaky == true &&
|
||||
fromJSON(steps.detect.outputs.structured_output).confidence >= 0.7
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
OUTPUT='${{ steps.detect.outputs.structured_output }}'
|
||||
CONFIDENCE=$(echo "$OUTPUT" | jq -r '.confidence')
|
||||
SUMMARY=$(echo "$OUTPUT" | jq -r '.summary')
|
||||
|
||||
echo "🔄 Flaky test detected (confidence: $CONFIDENCE)"
|
||||
echo "Summary: $SUMMARY"
|
||||
echo ""
|
||||
echo "Triggering automatic retry..."
|
||||
|
||||
gh workflow run "${{ github.event.workflow_run.name }}" \
|
||||
--ref "${{ github.event.workflow_run.head_branch }}"
|
||||
|
||||
# Low confidence flaky detection - skip retry
|
||||
- name: Low confidence detection
|
||||
if: |
|
||||
fromJSON(steps.detect.outputs.structured_output).is_flaky == true &&
|
||||
fromJSON(steps.detect.outputs.structured_output).confidence < 0.7
|
||||
run: |
|
||||
OUTPUT='${{ steps.detect.outputs.structured_output }}'
|
||||
CONFIDENCE=$(echo "$OUTPUT" | jq -r '.confidence')
|
||||
|
||||
echo "⚠️ Possible flaky test but confidence too low ($CONFIDENCE)"
|
||||
echo "Not retrying automatically - manual review recommended"
|
||||
|
||||
# Comment on PR if this was a PR build
|
||||
- name: Comment on PR
|
||||
if: github.event.workflow_run.event == 'pull_request'
|
||||
env:
|
||||
GH_TOKEN: ${{ github.token }}
|
||||
run: |
|
||||
OUTPUT='${{ steps.detect.outputs.structured_output }}'
|
||||
IS_FLAKY=$(echo "$OUTPUT" | jq -r '.is_flaky')
|
||||
CONFIDENCE=$(echo "$OUTPUT" | jq -r '.confidence')
|
||||
SUMMARY=$(echo "$OUTPUT" | jq -r '.summary')
|
||||
|
||||
pr_number=$(gh pr list --head "${{ github.event.workflow_run.head_branch }}" --json number --jq '.[0].number')
|
||||
|
||||
if [ -n "$pr_number" ]; then
|
||||
if [ "$IS_FLAKY" = "true" ]; then
|
||||
TITLE="🔄 Flaky Test Detected"
|
||||
ACTION="✅ Automatically retrying the workflow"
|
||||
else
|
||||
TITLE="❌ Test Failure"
|
||||
ACTION="⚠️ This appears to be a real bug - manual intervention needed"
|
||||
fi
|
||||
|
||||
gh pr comment "$pr_number" --body "$(cat <<EOF
|
||||
## $TITLE
|
||||
|
||||
**Analysis**: $SUMMARY
|
||||
**Confidence**: $CONFIDENCE
|
||||
|
||||
$ACTION
|
||||
|
||||
[View workflow run](${{ github.event.workflow_run.html_url }})
|
||||
EOF
|
||||
)"
|
||||
fi
|
||||
27
github-app-manifest.json
Normal file
@@ -0,0 +1,27 @@
|
||||
{
|
||||
"name": "Claude Code Custom App",
|
||||
"description": "Custom GitHub App for Claude Code Action - AI-powered coding assistant for GitHub workflows",
|
||||
"url": "https://github.com/anthropics/claude-code-action",
|
||||
"hook_attributes": {
|
||||
"url": "https://example.com/github/webhook",
|
||||
"active": false
|
||||
},
|
||||
"redirect_url": "https://github.com/settings/apps/new",
|
||||
"callback_urls": [],
|
||||
"setup_url": "https://github.com/anthropics/claude-code-action/blob/main/docs/setup.md",
|
||||
"public": false,
|
||||
"default_permissions": {
|
||||
"contents": "write",
|
||||
"issues": "write",
|
||||
"pull_requests": "write",
|
||||
"actions": "read",
|
||||
"metadata": "read"
|
||||
},
|
||||
"default_events": [
|
||||
"issue_comment",
|
||||
"issues",
|
||||
"pull_request",
|
||||
"pull_request_review",
|
||||
"pull_request_review_comment"
|
||||
]
|
||||
}
|
||||
@@ -12,6 +12,7 @@
|
||||
"dependencies": {
|
||||
"@actions/core": "^1.10.1",
|
||||
"@actions/github": "^6.0.1",
|
||||
"@anthropic-ai/claude-agent-sdk": "^0.2.16",
|
||||
"@modelcontextprotocol/sdk": "^1.11.0",
|
||||
"@octokit/graphql": "^8.2.2",
|
||||
"@octokit/rest": "^21.1.1",
|
||||
|
||||
@@ -1,123 +0,0 @@
|
||||
#!/bin/bash
|
||||
|
||||
# Setup Network Restrictions with Squid Proxy
|
||||
# This script sets up a Squid proxy to restrict network access to whitelisted domains only.
|
||||
|
||||
set -e
|
||||
|
||||
# Check if experimental_allowed_domains is provided
|
||||
if [ -z "$EXPERIMENTAL_ALLOWED_DOMAINS" ]; then
|
||||
echo "ERROR: EXPERIMENTAL_ALLOWED_DOMAINS environment variable is required"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check required environment variables
|
||||
if [ -z "$RUNNER_TEMP" ]; then
|
||||
echo "ERROR: RUNNER_TEMP environment variable is required"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [ -z "$GITHUB_ENV" ]; then
|
||||
echo "ERROR: GITHUB_ENV environment variable is required"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "Setting up network restrictions with Squid proxy..."
|
||||
|
||||
SQUID_START_TIME=$(date +%s.%N)
|
||||
|
||||
# Create whitelist file
|
||||
echo "$EXPERIMENTAL_ALLOWED_DOMAINS" > $RUNNER_TEMP/whitelist.txt
|
||||
|
||||
# Ensure each domain has proper format
|
||||
# If domain doesn't start with a dot and isn't an IP, add the dot for subdomain matching
|
||||
mv $RUNNER_TEMP/whitelist.txt $RUNNER_TEMP/whitelist.txt.orig
|
||||
while IFS= read -r domain; do
|
||||
if [ -n "$domain" ]; then
|
||||
# Trim whitespace
|
||||
domain=$(echo "$domain" | xargs)
|
||||
# If it's not empty and doesn't start with a dot, add one
|
||||
if [[ "$domain" != .* ]] && [[ ! "$domain" =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
|
||||
echo ".$domain" >> $RUNNER_TEMP/whitelist.txt
|
||||
else
|
||||
echo "$domain" >> $RUNNER_TEMP/whitelist.txt
|
||||
fi
|
||||
fi
|
||||
done < $RUNNER_TEMP/whitelist.txt.orig
|
||||
|
||||
# Create Squid config with whitelist
|
||||
echo "http_port 3128" > $RUNNER_TEMP/squid.conf
|
||||
echo "" >> $RUNNER_TEMP/squid.conf
|
||||
echo "# Define ACLs" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl whitelist dstdomain \"/etc/squid/whitelist.txt\"" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl localnet src 127.0.0.1/32" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl localnet src 172.17.0.0/16" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl SSL_ports port 443" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl Safe_ports port 80" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl Safe_ports port 443" >> $RUNNER_TEMP/squid.conf
|
||||
echo "acl CONNECT method CONNECT" >> $RUNNER_TEMP/squid.conf
|
||||
echo "" >> $RUNNER_TEMP/squid.conf
|
||||
echo "# Deny requests to certain unsafe ports" >> $RUNNER_TEMP/squid.conf
|
||||
echo "http_access deny !Safe_ports" >> $RUNNER_TEMP/squid.conf
|
||||
echo "" >> $RUNNER_TEMP/squid.conf
|
||||
echo "# Only allow CONNECT to SSL ports" >> $RUNNER_TEMP/squid.conf
|
||||
echo "http_access deny CONNECT !SSL_ports" >> $RUNNER_TEMP/squid.conf
|
||||
echo "" >> $RUNNER_TEMP/squid.conf
|
||||
echo "# Allow localhost" >> $RUNNER_TEMP/squid.conf
|
||||
echo "http_access allow localhost" >> $RUNNER_TEMP/squid.conf
|
||||
echo "" >> $RUNNER_TEMP/squid.conf
|
||||
echo "# Allow localnet access to whitelisted domains" >> $RUNNER_TEMP/squid.conf
|
||||
echo "http_access allow localnet whitelist" >> $RUNNER_TEMP/squid.conf
|
||||
echo "" >> $RUNNER_TEMP/squid.conf
|
||||
echo "# Deny everything else" >> $RUNNER_TEMP/squid.conf
|
||||
echo "http_access deny all" >> $RUNNER_TEMP/squid.conf
|
||||
|
||||
echo "Starting Squid proxy..."
|
||||
# First, remove any existing container
|
||||
sudo docker rm -f squid-proxy 2>/dev/null || true
|
||||
|
||||
# Ensure whitelist file is not empty (Squid fails with empty files)
|
||||
if [ ! -s "$RUNNER_TEMP/whitelist.txt" ]; then
|
||||
echo "WARNING: Whitelist file is empty, adding a dummy entry"
|
||||
echo ".example.com" >> $RUNNER_TEMP/whitelist.txt
|
||||
fi
|
||||
|
||||
# Use sudo to prevent Claude from stopping the container
|
||||
CONTAINER_ID=$(sudo docker run -d \
|
||||
--name squid-proxy \
|
||||
-p 127.0.0.1:3128:3128 \
|
||||
-v $RUNNER_TEMP/squid.conf:/etc/squid/squid.conf:ro \
|
||||
-v $RUNNER_TEMP/whitelist.txt:/etc/squid/whitelist.txt:ro \
|
||||
ubuntu/squid:latest 2>&1) || {
|
||||
echo "ERROR: Failed to start Squid container"
|
||||
exit 1
|
||||
}
|
||||
|
||||
# Wait for proxy to be ready (usually < 1 second)
|
||||
READY=false
|
||||
for i in {1..30}; do
|
||||
if nc -z 127.0.0.1 3128 2>/dev/null; then
|
||||
TOTAL_TIME=$(echo "scale=3; $(date +%s.%N) - $SQUID_START_TIME" | bc)
|
||||
echo "Squid proxy ready in ${TOTAL_TIME}s"
|
||||
READY=true
|
||||
break
|
||||
fi
|
||||
sleep 0.1
|
||||
done
|
||||
|
||||
if [ "$READY" != "true" ]; then
|
||||
echo "ERROR: Squid proxy failed to start within 3 seconds"
|
||||
echo "Container logs:"
|
||||
sudo docker logs squid-proxy 2>&1 || true
|
||||
echo "Container status:"
|
||||
sudo docker ps -a | grep squid-proxy || true
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Set proxy environment variables
|
||||
echo "http_proxy=http://127.0.0.1:3128" >> $GITHUB_ENV
|
||||
echo "https_proxy=http://127.0.0.1:3128" >> $GITHUB_ENV
|
||||
echo "HTTP_PROXY=http://127.0.0.1:3128" >> $GITHUB_ENV
|
||||
echo "HTTPS_PROXY=http://127.0.0.1:3128" >> $GITHUB_ENV
|
||||
|
||||
echo "Network restrictions setup completed successfully"
|
||||
@@ -21,8 +21,12 @@ import type { ParsedGitHubContext } from "../github/context";
|
||||
import type { CommonFields, PreparedContext, EventData } from "./types";
|
||||
import { GITHUB_SERVER_URL } from "../github/api/config";
|
||||
import type { Mode, ModeContext } from "../modes/types";
|
||||
import { extractUserRequest } from "../utils/extract-user-request";
|
||||
export type { CommonFields, PreparedContext } from "./types";
|
||||
|
||||
/** Filename for the user request file, read by the SDK runner */
|
||||
const USER_REQUEST_FILENAME = "claude-user-request.txt";
|
||||
|
||||
// Tag mode defaults - these tools are needed for tag mode to function
|
||||
const BASE_ALLOWED_TOOLS = [
|
||||
"Edit",
|
||||
@@ -192,11 +196,6 @@ export function prepareContext(
|
||||
if (!isPR) {
|
||||
throw new Error("IS_PR must be true for pull_request_review event");
|
||||
}
|
||||
if (!commentBody) {
|
||||
throw new Error(
|
||||
"COMMENT_BODY is required for pull_request_review event",
|
||||
);
|
||||
}
|
||||
eventData = {
|
||||
eventName: "pull_request_review",
|
||||
isPR: true,
|
||||
@@ -335,6 +334,7 @@ export function prepareContext(
|
||||
return {
|
||||
...commonFields,
|
||||
eventData,
|
||||
githubContext: context,
|
||||
};
|
||||
}
|
||||
|
||||
@@ -383,6 +383,7 @@ export function getEventTypeAndContext(envVars: PreparedContext): {
|
||||
};
|
||||
|
||||
case "pull_request":
|
||||
case "pull_request_target":
|
||||
return {
|
||||
eventType: "PULL_REQUEST",
|
||||
triggerContext: eventData.eventAction
|
||||
@@ -462,6 +463,123 @@ export function generatePrompt(
|
||||
return mode.generatePrompt(context, githubData, useCommitSigning);
|
||||
}
|
||||
|
||||
/**
|
||||
* Generates a simplified prompt for tag mode (opt-in via USE_SIMPLE_PROMPT env var)
|
||||
* @internal
|
||||
*/
|
||||
function generateSimplePrompt(
|
||||
context: PreparedContext,
|
||||
githubData: FetchDataResult,
|
||||
useCommitSigning: boolean = false,
|
||||
): string {
|
||||
const {
|
||||
contextData,
|
||||
comments,
|
||||
changedFilesWithSHA,
|
||||
reviewData,
|
||||
imageUrlMap,
|
||||
} = githubData;
|
||||
const { eventData } = context;
|
||||
|
||||
const { triggerContext } = getEventTypeAndContext(context);
|
||||
|
||||
const formattedContext = formatContext(contextData, eventData.isPR);
|
||||
const formattedComments = formatComments(comments, imageUrlMap);
|
||||
const formattedReviewComments = eventData.isPR
|
||||
? formatReviewComments(reviewData, imageUrlMap)
|
||||
: "";
|
||||
const formattedChangedFiles = eventData.isPR
|
||||
? formatChangedFilesWithSHA(changedFilesWithSHA)
|
||||
: "";
|
||||
|
||||
const hasImages = imageUrlMap && imageUrlMap.size > 0;
|
||||
const imagesInfo = hasImages
|
||||
? `\n\n<images_info>
|
||||
Images from comments have been saved to disk. Paths are in the formatted content above. Use Read tool to view them.
|
||||
</images_info>`
|
||||
: "";
|
||||
|
||||
const formattedBody = contextData?.body
|
||||
? formatBody(contextData.body, imageUrlMap)
|
||||
: "No description provided";
|
||||
|
||||
const entityType = eventData.isPR ? "pull request" : "issue";
|
||||
const jobUrl = `${GITHUB_SERVER_URL}/${context.repository}/actions/runs/${process.env.GITHUB_RUN_ID}`;
|
||||
|
||||
let promptContent = `You were tagged on a GitHub ${entityType} via "${context.triggerPhrase}". Read the request and decide how to help.
|
||||
|
||||
<context>
|
||||
${formattedContext}
|
||||
</context>
|
||||
|
||||
<${eventData.isPR ? "pr" : "issue"}_body>
|
||||
${formattedBody}
|
||||
</${eventData.isPR ? "pr" : "issue"}_body>
|
||||
|
||||
<comments>
|
||||
${formattedComments || "No comments"}
|
||||
</comments>
|
||||
${
|
||||
eventData.isPR
|
||||
? `
|
||||
<review_comments>
|
||||
${formattedReviewComments || "No review comments"}
|
||||
</review_comments>
|
||||
|
||||
<changed_files>
|
||||
${formattedChangedFiles || "No files changed"}
|
||||
</changed_files>`
|
||||
: ""
|
||||
}${imagesInfo}
|
||||
|
||||
<metadata>
|
||||
repository: ${context.repository}
|
||||
${eventData.isPR && eventData.prNumber ? `pr_number: ${eventData.prNumber}` : ""}
|
||||
${!eventData.isPR && eventData.issueNumber ? `issue_number: ${eventData.issueNumber}` : ""}
|
||||
trigger: ${triggerContext}
|
||||
triggered_by: ${context.triggerUsername ?? "Unknown"}
|
||||
claude_comment_id: ${context.claudeCommentId}
|
||||
</metadata>
|
||||
${
|
||||
(eventData.eventName === "issue_comment" ||
|
||||
eventData.eventName === "pull_request_review_comment" ||
|
||||
eventData.eventName === "pull_request_review") &&
|
||||
eventData.commentBody
|
||||
? `
|
||||
<trigger_comment>
|
||||
${sanitizeContent(eventData.commentBody)}
|
||||
</trigger_comment>`
|
||||
: ""
|
||||
}
|
||||
|
||||
Your request is in <trigger_comment> above${eventData.eventName === "issues" ? ` (or the ${entityType} body for assigned/labeled events)` : ""}.
|
||||
|
||||
Decide what's being asked:
|
||||
1. **Question or code review** - Answer directly or provide feedback
|
||||
2. **Code change** - Implement the change, commit, and push
|
||||
|
||||
Communication:
|
||||
- Your ONLY visible output is your GitHub comment - update it with progress and results
|
||||
- Use mcp__github_comment__update_claude_comment to update (only "body" param needed)
|
||||
- Use checklist format for tasks: - [ ] incomplete, - [x] complete
|
||||
- Use ### headers (not #)
|
||||
${getCommitInstructions(eventData, githubData, context, useCommitSigning)}
|
||||
${
|
||||
eventData.claudeBranch
|
||||
? `
|
||||
When done with changes, provide a PR link:
|
||||
[Create a PR](${GITHUB_SERVER_URL}/${context.repository}/compare/${eventData.baseBranch}...${eventData.claudeBranch}?quick_pull=1&title=<url-encoded-title>&body=<url-encoded-body>)
|
||||
Use THREE dots (...) between branches. URL-encode all parameters.`
|
||||
: ""
|
||||
}
|
||||
|
||||
Always include at the bottom:
|
||||
- Job link: [View job run](${jobUrl})
|
||||
- Follow the repo's CLAUDE.md file for project-specific guidelines`;
|
||||
|
||||
return promptContent;
|
||||
}
|
||||
|
||||
/**
|
||||
* Generates the default prompt for tag mode
|
||||
* @internal
|
||||
@@ -471,6 +589,10 @@ export function generateDefaultPrompt(
|
||||
githubData: FetchDataResult,
|
||||
useCommitSigning: boolean = false,
|
||||
): string {
|
||||
// Use simplified prompt if opted in
|
||||
if (process.env.USE_SIMPLE_PROMPT === "true") {
|
||||
return generateSimplePrompt(context, githubData, useCommitSigning);
|
||||
}
|
||||
const {
|
||||
contextData,
|
||||
comments,
|
||||
@@ -616,7 +738,13 @@ ${eventData.eventName === "issue_comment" || eventData.eventName === "pull_reque
|
||||
- Reference specific code sections with file paths and line numbers${eventData.isPR ? `\n - AFTER reading files and analyzing code, you MUST call mcp__github_comment__update_claude_comment to post your review` : ""}
|
||||
- Formulate a concise, technical, and helpful response based on the context.
|
||||
- Reference specific code with inline formatting or code blocks.
|
||||
- Include relevant file paths and line numbers when applicable.
|
||||
- Include relevant file paths and line numbers when applicable.${
|
||||
eventData.isPR && context.githubContext?.inputs.includeFixLinks
|
||||
? `
|
||||
- When identifying issues that could be fixed, include an inline link: [Fix this →](https://claude.ai/code?q=<URI_ENCODED_INSTRUCTIONS>&repo=${context.repository})
|
||||
The query should be URI-encoded and include enough context for Claude Code to understand and fix the issue (file path, line numbers, branch name, what needs to change).`
|
||||
: ""
|
||||
}
|
||||
- ${eventData.isPR ? `IMPORTANT: Submit your review feedback by updating the Claude comment using mcp__github_comment__update_claude_comment. This will be displayed as your PR review.` : `Remember that this feedback must be posted to the GitHub comment using mcp__github_comment__update_claude_comment.`}
|
||||
|
||||
B. For Straightforward Changes:
|
||||
@@ -682,7 +810,7 @@ ${
|
||||
- Display the todo list as a checklist in the GitHub comment and mark things off as you go.
|
||||
- REPOSITORY SETUP INSTRUCTIONS: The repository's CLAUDE.md file(s) contain critical repo-specific setup instructions, development guidelines, and preferences. Always read and follow these files, particularly the root CLAUDE.md, as they provide essential context for working with the codebase effectively.
|
||||
- Use h3 headers (###) for section titles in your comments, not h1 headers (#).
|
||||
- Your comment must always include the job run link (and branch link if there is one) at the bottom.
|
||||
- Your comment must always include the job run link in the format "[View job run](${GITHUB_SERVER_URL}/${context.repository}/actions/runs/${process.env.GITHUB_RUN_ID})" at the bottom of your response (branch link if there is one should also be included there).
|
||||
|
||||
CAPABILITIES AND LIMITATIONS:
|
||||
When users ask you to do something, be aware of what you can and cannot do. This section helps you understand how to respond when users request actions outside your scope.
|
||||
@@ -707,7 +835,7 @@ What You CANNOT Do:
|
||||
- Modify files in the .github/workflows directory (GitHub App permissions do not allow workflow modifications)
|
||||
|
||||
When users ask you to perform actions you cannot do, politely explain the limitation and, when applicable, direct them to the FAQ for more information and workarounds:
|
||||
"I'm unable to [specific action] due to [reason]. You can find more information and potential workarounds in the [FAQ](https://github.com/anthropics/claude-code-action/blob/main/FAQ.md)."
|
||||
"I'm unable to [specific action] due to [reason]. You can find more information and potential workarounds in the [FAQ](https://github.com/anthropics/claude-code-action/blob/main/docs/faq.md)."
|
||||
|
||||
If a user asks for something outside these capabilities (and you have no other tools provided), politely explain that you cannot perform that action and suggest an alternative approach if possible.
|
||||
|
||||
@@ -723,6 +851,55 @@ f. If you are unable to complete certain steps, such as running a linter or test
|
||||
return promptContent;
|
||||
}
|
||||
|
||||
/**
|
||||
* Extracts the user's request from the prepared context and GitHub data.
|
||||
*
|
||||
* This is used to send the user's actual command/request as a separate
|
||||
* content block, enabling slash command processing in the CLI.
|
||||
*
|
||||
* @param context - The prepared context containing event data and trigger phrase
|
||||
* @param githubData - The fetched GitHub data containing issue/PR body content
|
||||
* @returns The extracted user request text (e.g., "/review-pr" or "fix this bug"),
|
||||
* or null for assigned/labeled events without an explicit trigger in the body
|
||||
*
|
||||
* @example
|
||||
* // Comment event: "@claude /review-pr" -> returns "/review-pr"
|
||||
* // Issue body with "@claude fix this" -> returns "fix this"
|
||||
* // Issue assigned without @claude in body -> returns null
|
||||
*/
|
||||
function extractUserRequestFromContext(
|
||||
context: PreparedContext,
|
||||
githubData: FetchDataResult,
|
||||
): string | null {
|
||||
const { eventData, triggerPhrase } = context;
|
||||
|
||||
// For comment events, extract from comment body
|
||||
if (
|
||||
"commentBody" in eventData &&
|
||||
eventData.commentBody &&
|
||||
(eventData.eventName === "issue_comment" ||
|
||||
eventData.eventName === "pull_request_review_comment" ||
|
||||
eventData.eventName === "pull_request_review")
|
||||
) {
|
||||
return extractUserRequest(eventData.commentBody, triggerPhrase);
|
||||
}
|
||||
|
||||
// For issue/PR events triggered by body content, extract from the body
|
||||
if (githubData.contextData?.body) {
|
||||
const request = extractUserRequest(
|
||||
githubData.contextData.body,
|
||||
triggerPhrase,
|
||||
);
|
||||
if (request) {
|
||||
return request;
|
||||
}
|
||||
}
|
||||
|
||||
// For assigned/labeled events without explicit trigger in body,
|
||||
// return null to indicate the full context should be used
|
||||
return null;
|
||||
}
|
||||
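For orientation, a simplified sketch of what the imported extractUserRequest helper is expected to do at these call sites, based only on the examples in the doc comment above -- the real implementation lives in src/utils/extract-user-request and may handle more edge cases:

// Hypothetical re-implementation, for illustration only.
function extractUserRequestSketch(body: string, triggerPhrase: string): string | null {
  const index = body.indexOf(triggerPhrase);
  if (index === -1) return null;
  // Everything after the trigger phrase is treated as the user's request.
  const request = body.slice(index + triggerPhrase.length).trim();
  return request.length > 0 ? request : null;
}

extractUserRequestSketch("@claude /review-pr", "@claude"); // -> "/review-pr"
extractUserRequestSketch("please take a look", "@claude"); // -> null (no trigger phrase in body)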
|
||||
export async function createPrompt(
|
||||
mode: Mode,
|
||||
modeContext: ModeContext,
|
||||
@@ -771,6 +948,22 @@ export async function createPrompt(
|
||||
promptContent,
|
||||
);
|
||||
|
||||
// Extract and write the user request separately for SDK multi-block messaging
|
||||
// This allows the CLI to process slash commands (e.g., "@claude /review-pr")
|
||||
const userRequest = extractUserRequestFromContext(
|
||||
preparedContext,
|
||||
githubData,
|
||||
);
|
||||
if (userRequest) {
|
||||
await writeFile(
|
||||
`${process.env.RUNNER_TEMP || "/tmp"}/claude-prompts/${USER_REQUEST_FILENAME}`,
|
||||
userRequest,
|
||||
);
|
||||
console.log("===== USER REQUEST =====");
|
||||
console.log(userRequest);
|
||||
console.log("========================");
|
||||
}
|
||||
|
||||
// Set allowed tools
|
||||
const hasActionsReadPermission = false;
|
||||
|
||||
|
||||
@@ -23,7 +23,7 @@ type PullRequestReviewEvent = {
|
||||
eventName: "pull_request_review";
|
||||
isPR: true;
|
||||
prNumber: string;
|
||||
commentBody: string;
|
||||
commentBody?: string; // May be absent for approvals without comments
|
||||
claudeBranch?: string;
|
||||
baseBranch?: string;
|
||||
};
|
||||
@@ -78,8 +78,7 @@ type IssueLabeledEvent = {
|
||||
labelTrigger: string;
|
||||
};
|
||||
|
||||
type PullRequestEvent = {
|
||||
eventName: "pull_request";
|
||||
type PullRequestBaseEvent = {
|
||||
eventAction?: string; // opened, synchronize, etc.
|
||||
isPR: true;
|
||||
prNumber: string;
|
||||
@@ -87,6 +86,14 @@ type PullRequestEvent = {
|
||||
baseBranch?: string;
|
||||
};
|
||||
|
||||
type PullRequestEvent = PullRequestBaseEvent & {
|
||||
eventName: "pull_request";
|
||||
};
|
||||
|
||||
type PullRequestTargetEvent = PullRequestBaseEvent & {
|
||||
eventName: "pull_request_target";
|
||||
};
|
||||
|
||||
// Union type for all possible event types
|
||||
export type EventData =
|
||||
| PullRequestReviewCommentEvent
|
||||
@@ -96,7 +103,8 @@ export type EventData =
|
||||
| IssueOpenedEvent
|
||||
| IssueAssignedEvent
|
||||
| IssueLabeledEvent
|
||||
| PullRequestEvent;
|
||||
| PullRequestEvent
|
||||
| PullRequestTargetEvent;
|
||||
|
||||
// Combined type with separate eventData field
|
||||
export type PreparedContext = CommonFields & {
|
||||
|
||||
src/entrypoints/cleanup-ssh-signing.ts (new file, 21 lines)
@@ -0,0 +1,21 @@
#!/usr/bin/env bun

/**
 * Cleanup SSH signing key after action completes
 * This is run as a post step for security purposes
 */

import { cleanupSshSigning } from "../github/operations/git-config";

async function run() {
  try {
    await cleanupSshSigning();
  } catch (error) {
    // Don't fail the action if cleanup fails, just log it
    console.error("Failed to cleanup SSH signing key:", error);
  }
}

if (import.meta.main) {
  run();
}
@@ -26,7 +26,7 @@ export function collectActionInputsPresence(): void {
|
||||
max_turns: "",
|
||||
use_sticky_comment: "false",
|
||||
use_commit_signing: "false",
|
||||
experimental_allowed_domains: "",
|
||||
ssh_signing_key: "",
|
||||
};
|
||||
|
||||
const allInputsJson = process.env.ALL_INPUTS;
|
||||
|
||||
@@ -30,9 +30,13 @@ async function run() {
|
||||
|
||||
// Step 3: Check write permissions (only for entity contexts)
|
||||
if (isEntityContext(context)) {
|
||||
// Check if github_token was provided as input (not from app)
|
||||
const githubTokenProvided = !!process.env.OVERRIDE_GITHUB_TOKEN;
|
||||
const hasWritePermissions = await checkWritePermissions(
|
||||
octokit.rest,
|
||||
context,
|
||||
context.inputs.allowedNonWriteUsers,
|
||||
githubTokenProvided,
|
||||
);
|
||||
if (!hasWritePermissions) {
|
||||
throw new Error(
|
||||
|
||||
@@ -152,7 +152,7 @@ async function run() {
|
||||
|
||||
// Check if action failed and read output file for execution details
|
||||
let executionDetails: {
|
||||
cost_usd?: number;
|
||||
total_cost_usd?: number;
|
||||
duration_ms?: number;
|
||||
duration_api_ms?: number;
|
||||
} | null = null;
|
||||
@@ -179,11 +179,11 @@ async function run() {
|
||||
const lastElement = outputData[outputData.length - 1];
|
||||
if (
|
||||
lastElement.type === "result" &&
|
||||
"cost_usd" in lastElement &&
|
||||
"total_cost_usd" in lastElement &&
|
||||
"duration_ms" in lastElement
|
||||
) {
|
||||
executionDetails = {
|
||||
cost_usd: lastElement.cost_usd,
|
||||
total_cost_usd: lastElement.total_cost_usd,
|
||||
duration_ms: lastElement.duration_ms,
|
||||
duration_api_ms: lastElement.duration_api_ms,
|
||||
};
|
||||
|
||||
@@ -13,9 +13,16 @@ export const PR_QUERY = `
|
||||
headRefName
|
||||
headRefOid
|
||||
createdAt
|
||||
updatedAt
|
||||
lastEditedAt
|
||||
additions
|
||||
deletions
|
||||
state
|
||||
labels(first: 1) {
|
||||
nodes {
|
||||
name
|
||||
}
|
||||
}
|
||||
commits(first: 100) {
|
||||
totalCount
|
||||
nodes {
|
||||
@@ -96,7 +103,14 @@ export const ISSUE_QUERY = `
|
||||
login
|
||||
}
|
||||
createdAt
|
||||
updatedAt
|
||||
lastEditedAt
|
||||
state
|
||||
labels(first: 1) {
|
||||
nodes {
|
||||
name
|
||||
}
|
||||
}
|
||||
comments(first: 100) {
|
||||
nodes {
|
||||
id
|
||||
|
||||
src/github/constants.ts (new file, 13 lines)
@@ -0,0 +1,13 @@
/**
 * GitHub-related constants used throughout the application
 */

/**
 * Claude App bot user ID
 */
export const CLAUDE_APP_BOT_ID = 41898282;

/**
 * Claude bot username
 */
export const CLAUDE_BOT_LOGIN = "claude[bot]";
@@ -8,6 +8,7 @@ import type {
|
||||
PullRequestReviewCommentEvent,
|
||||
WorkflowRunEvent,
|
||||
} from "@octokit/webhooks-types";
|
||||
import { CLAUDE_APP_BOT_ID, CLAUDE_BOT_LOGIN } from "./constants";
|
||||
// Custom types for GitHub Actions events that aren't webhooks
|
||||
export type WorkflowDispatchEvent = {
|
||||
action?: never;
|
||||
@@ -25,6 +26,20 @@ export type WorkflowDispatchEvent = {
|
||||
workflow: string;
|
||||
};
|
||||
|
||||
export type RepositoryDispatchEvent = {
|
||||
action: string;
|
||||
client_payload?: Record<string, any>;
|
||||
repository: {
|
||||
name: string;
|
||||
owner: {
|
||||
login: string;
|
||||
};
|
||||
};
|
||||
sender: {
|
||||
login: string;
|
||||
};
|
||||
};
|
||||
|
||||
export type ScheduleEvent = {
|
||||
action?: never;
|
||||
schedule?: string;
|
||||
@@ -47,6 +62,7 @@ const ENTITY_EVENT_NAMES = [
|
||||
|
||||
const AUTOMATION_EVENT_NAMES = [
|
||||
"workflow_dispatch",
|
||||
"repository_dispatch",
|
||||
"schedule",
|
||||
"workflow_run",
|
||||
] as const;
|
||||
@@ -72,10 +88,16 @@ type BaseContext = {
|
||||
labelTrigger: string;
|
||||
baseBranch?: string;
|
||||
branchPrefix: string;
|
||||
branchNameTemplate?: string;
|
||||
useStickyComment: boolean;
|
||||
useCommitSigning: boolean;
|
||||
sshSigningKey: string;
|
||||
botId: string;
|
||||
botName: string;
|
||||
allowedBots: string;
|
||||
allowedNonWriteUsers: string;
|
||||
trackProgress: boolean;
|
||||
includeFixLinks: boolean;
|
||||
};
|
||||
};
|
||||
|
||||
@@ -92,10 +114,14 @@ export type ParsedGitHubContext = BaseContext & {
|
||||
isPR: boolean;
|
||||
};
|
||||
|
||||
// Context for automation events (workflow_dispatch, schedule, workflow_run)
|
||||
// Context for automation events (workflow_dispatch, repository_dispatch, schedule, workflow_run)
|
||||
export type AutomationContext = BaseContext & {
|
||||
eventName: AutomationEventName;
|
||||
payload: WorkflowDispatchEvent | ScheduleEvent | WorkflowRunEvent;
|
||||
payload:
|
||||
| WorkflowDispatchEvent
|
||||
| RepositoryDispatchEvent
|
||||
| ScheduleEvent
|
||||
| WorkflowRunEvent;
|
||||
};
|
||||
|
||||
// Union type for all contexts
|
||||
@@ -120,10 +146,16 @@ export function parseGitHubContext(): GitHubContext {
|
||||
labelTrigger: process.env.LABEL_TRIGGER ?? "",
|
||||
baseBranch: process.env.BASE_BRANCH,
|
||||
branchPrefix: process.env.BRANCH_PREFIX ?? "claude/",
|
||||
branchNameTemplate: process.env.BRANCH_NAME_TEMPLATE,
|
||||
useStickyComment: process.env.USE_STICKY_COMMENT === "true",
|
||||
useCommitSigning: process.env.USE_COMMIT_SIGNING === "true",
|
||||
sshSigningKey: process.env.SSH_SIGNING_KEY || "",
|
||||
botId: process.env.BOT_ID ?? String(CLAUDE_APP_BOT_ID),
|
||||
botName: process.env.BOT_NAME ?? CLAUDE_BOT_LOGIN,
|
||||
allowedBots: process.env.ALLOWED_BOTS ?? "",
|
||||
allowedNonWriteUsers: process.env.ALLOWED_NON_WRITE_USERS ?? "",
|
||||
trackProgress: process.env.TRACK_PROGRESS === "true",
|
||||
includeFixLinks: process.env.INCLUDE_FIX_LINKS === "true",
|
||||
},
|
||||
};
|
||||
|
||||
@@ -148,7 +180,8 @@ export function parseGitHubContext(): GitHubContext {
|
||||
isPR: Boolean(payload.issue.pull_request),
|
||||
};
|
||||
}
|
||||
case "pull_request": {
|
||||
case "pull_request":
|
||||
case "pull_request_target": {
|
||||
const payload = context.payload as PullRequestEvent;
|
||||
return {
|
||||
...commonFields,
|
||||
@@ -185,6 +218,13 @@ export function parseGitHubContext(): GitHubContext {
|
||||
payload: context.payload as unknown as WorkflowDispatchEvent,
|
||||
};
|
||||
}
|
||||
case "repository_dispatch": {
|
||||
return {
|
||||
...commonFields,
|
||||
eventName: "repository_dispatch",
|
||||
payload: context.payload as unknown as RepositoryDispatchEvent,
|
||||
};
|
||||
}
|
||||
case "schedule": {
|
||||
return {
|
||||
...commonFields,
|
||||
|
||||
@@ -3,6 +3,8 @@ import type { Octokits } from "../api/client";
|
||||
import { ISSUE_QUERY, PR_QUERY, USER_QUERY } from "../api/queries/github";
|
||||
import {
|
||||
isIssueCommentEvent,
|
||||
isIssuesEvent,
|
||||
isPullRequestEvent,
|
||||
isPullRequestReviewEvent,
|
||||
isPullRequestReviewCommentEvent,
|
||||
type ParsedGitHubContext,
|
||||
@@ -40,6 +42,31 @@ export function extractTriggerTimestamp(
|
||||
return undefined;
|
||||
}
|
||||
|
||||
/**
|
||||
* Extracts the original title from the GitHub webhook payload.
|
||||
* This is the title as it existed when the trigger event occurred.
|
||||
*
|
||||
* @param context - Parsed GitHub context from webhook
|
||||
* @returns The original title string or undefined if not available
|
||||
*/
|
||||
export function extractOriginalTitle(
|
||||
context: ParsedGitHubContext,
|
||||
): string | undefined {
|
||||
if (isIssueCommentEvent(context)) {
|
||||
return context.payload.issue?.title;
|
||||
} else if (isPullRequestEvent(context)) {
|
||||
return context.payload.pull_request?.title;
|
||||
} else if (isPullRequestReviewEvent(context)) {
|
||||
return context.payload.pull_request?.title;
|
||||
} else if (isPullRequestReviewCommentEvent(context)) {
|
||||
return context.payload.pull_request?.title;
|
||||
} else if (isIssuesEvent(context)) {
|
||||
return context.payload.issue?.title;
|
||||
}
|
||||
|
||||
return undefined;
|
||||
}
|
||||
|
||||
/**
|
||||
* Filters comments to only include those that existed in their final state before the trigger time.
|
||||
* This prevents malicious actors from editing comments after the trigger to inject harmful content.
|
||||
@@ -107,6 +134,38 @@ export function filterReviewsToTriggerTime<
|
||||
});
|
||||
}
|
||||
|
||||
/**
|
||||
* Checks if the issue/PR body was edited after the trigger time.
|
||||
* This prevents a race condition where an attacker could edit the issue/PR body
|
||||
* between when an authorized user triggered Claude and when Claude processes the request.
|
||||
*
|
||||
* @param contextData - The PR or issue data containing body and edit timestamps
|
||||
* @param triggerTime - ISO timestamp of when the trigger event occurred
|
||||
* @returns true if the body is safe to use, false if it was edited after trigger
|
||||
*/
|
||||
export function isBodySafeToUse(
|
||||
contextData: { createdAt: string; updatedAt?: string; lastEditedAt?: string },
|
||||
triggerTime: string | undefined,
|
||||
): boolean {
|
||||
// If no trigger time is available, we can't validate - allow the body
|
||||
// This maintains backwards compatibility for triggers that don't have timestamps
|
||||
if (!triggerTime) return true;
|
||||
|
||||
const triggerTimestamp = new Date(triggerTime).getTime();
|
||||
|
||||
// Check if the body was edited after the trigger
|
||||
// Use lastEditedAt if available (more accurate for body edits), otherwise fall back to updatedAt
|
||||
const lastEditTime = contextData.lastEditedAt || contextData.updatedAt;
|
||||
if (lastEditTime) {
|
||||
const lastEditTimestamp = new Date(lastEditTime).getTime();
|
||||
if (lastEditTimestamp >= triggerTimestamp) {
|
||||
return false;
|
||||
}
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
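A small usage sketch for the guard above; the timestamps are invented for illustration and isBodySafeToUse is the function defined in this hunk:

const triggerTime = "2024-06-01T12:05:00Z";

// Body edited after the trigger -> not safe to use.
isBodySafeToUse(
  { createdAt: "2024-06-01T12:00:00Z", lastEditedAt: "2024-06-01T12:10:00Z" },
  triggerTime,
); // false

// No recorded edits -> safe.
isBodySafeToUse({ createdAt: "2024-06-01T12:00:00Z" }, triggerTime); // true

// No trigger timestamp available -> allowed, for backwards compatibility.
isBodySafeToUse(
  { createdAt: "2024-06-01T12:00:00Z", updatedAt: "2024-06-01T12:10:00Z" },
  undefined,
); // true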
|
||||
type FetchDataParams = {
|
||||
octokits: Octokits;
|
||||
repository: string;
|
||||
@@ -114,6 +173,7 @@ type FetchDataParams = {
|
||||
isPR: boolean;
|
||||
triggerUsername?: string;
|
||||
triggerTime?: string;
|
||||
originalTitle?: string;
|
||||
};
|
||||
|
||||
export type GitHubFileWithSHA = GitHubFile & {
|
||||
@@ -137,6 +197,7 @@ export async function fetchGitHubData({
|
||||
isPR,
|
||||
triggerUsername,
|
||||
triggerTime,
|
||||
originalTitle,
|
||||
}: FetchDataParams): Promise<FetchDataResult> {
|
||||
const [owner, repo] = repository.split("/");
|
||||
if (!owner || !repo) {
|
||||
@@ -273,9 +334,13 @@ export async function fetchGitHubData({
|
||||
body: c.body,
|
||||
}));
|
||||
|
||||
// Add the main issue/PR body if it has content
|
||||
const mainBody: CommentWithImages[] = contextData.body
|
||||
? [
|
||||
// Add the main issue/PR body if it has content and wasn't edited after trigger
|
||||
// This prevents a TOCTOU race condition where an attacker could edit the body
|
||||
// between when an authorized user triggered Claude and when Claude processes the request
|
||||
let mainBody: CommentWithImages[] = [];
|
||||
if (contextData.body) {
|
||||
if (isBodySafeToUse(contextData, triggerTime)) {
|
||||
mainBody = [
|
||||
{
|
||||
...(isPR
|
||||
? {
|
||||
@@ -289,8 +354,14 @@ export async function fetchGitHubData({
|
||||
body: contextData.body,
|
||||
}),
|
||||
},
|
||||
]
|
||||
: [];
|
||||
];
|
||||
} else {
|
||||
console.warn(
|
||||
`Security: ${isPR ? "PR" : "Issue"} #${prNumber} body was edited after the trigger event. ` +
|
||||
`Excluding body content to prevent potential injection attacks.`,
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
const allComments = [
|
||||
...mainBody,
|
||||
@@ -312,6 +383,11 @@ export async function fetchGitHubData({
|
||||
triggerDisplayName = await fetchUserDisplayName(octokits, triggerUsername);
|
||||
}
|
||||
|
||||
// Use the original title from the webhook payload if provided
|
||||
if (originalTitle !== undefined) {
|
||||
contextData.title = originalTitle;
|
||||
}
|
||||
|
||||
return {
|
||||
contextData,
|
||||
comments,
|
||||
|
||||
@@ -14,7 +14,8 @@ export function formatContext(
|
||||
): string {
|
||||
if (isPR) {
|
||||
const prData = contextData as GitHubPullRequest;
|
||||
return `PR Title: ${prData.title}
|
||||
const sanitizedTitle = sanitizeContent(prData.title);
|
||||
return `PR Title: ${sanitizedTitle}
|
||||
PR Author: ${prData.author.login}
|
||||
PR Branch: ${prData.headRefName} -> ${prData.baseRefName}
|
||||
PR State: ${prData.state}
|
||||
@@ -24,7 +25,8 @@ Total Commits: ${prData.commits.totalCount}
|
||||
Changed Files: ${prData.files.nodes.length} files`;
|
||||
} else {
|
||||
const issueData = contextData as GitHubIssue;
|
||||
return `Issue Title: ${issueData.title}
|
||||
const sanitizedTitle = sanitizeContent(issueData.title);
|
||||
return `Issue Title: ${sanitizedTitle}
|
||||
Issue Author: ${issueData.author.login}
|
||||
Issue State: ${issueData.state}`;
|
||||
}
|
||||
|
||||
@@ -7,11 +7,120 @@
|
||||
*/
|
||||
|
||||
import { $ } from "bun";
|
||||
import { execFileSync } from "child_process";
|
||||
import * as core from "@actions/core";
|
||||
import type { ParsedGitHubContext } from "../context";
|
||||
import type { GitHubPullRequest } from "../types";
|
||||
import type { Octokits } from "../api/client";
|
||||
import type { FetchDataResult } from "../data/fetcher";
|
||||
import { generateBranchName } from "../../utils/branch-template";
|
||||
|
||||
/**
|
||||
* Extracts the first label from GitHub data, or returns undefined if no labels exist
|
||||
*/
|
||||
function extractFirstLabel(githubData: FetchDataResult): string | undefined {
|
||||
const labels = githubData.contextData.labels?.nodes;
|
||||
return labels && labels.length > 0 ? labels[0]?.name : undefined;
|
||||
}
|
||||
|
||||
/**
|
||||
* Validates a git branch name against a strict whitelist pattern.
|
||||
* This prevents command injection by ensuring only safe characters are used.
|
||||
*
|
||||
* Valid branch names:
|
||||
* - Start with alphanumeric character (not dash, to prevent option injection)
|
||||
* - Contain only alphanumeric, forward slash, hyphen, underscore, or period
|
||||
* - Do not start or end with a period
|
||||
* - Do not end with a slash
|
||||
* - Do not contain '..' (path traversal)
|
||||
* - Do not contain '//' (consecutive slashes)
|
||||
* - Do not end with '.lock'
|
||||
* - Do not contain '@{'
|
||||
* - Do not contain control characters or special git characters (~^:?*[\])
|
||||
*/
|
||||
export function validateBranchName(branchName: string): void {
|
||||
// Check for empty or whitespace-only names
|
||||
if (!branchName || branchName.trim().length === 0) {
|
||||
throw new Error("Branch name cannot be empty");
|
||||
}
|
||||
|
||||
// Check for leading dash (prevents option injection like --help, -x)
|
||||
if (branchName.startsWith("-")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot start with a dash.`,
|
||||
);
|
||||
}
|
||||
|
||||
// Check for control characters and special git characters (~^:?*[\])
|
||||
// eslint-disable-next-line no-control-regex
|
||||
if (/[\x00-\x1F\x7F ~^:?*[\]\\]/.test(branchName)) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot contain control characters, spaces, or special git characters (~^:?*[\\]).`,
|
||||
);
|
||||
}
|
||||
|
||||
// Strict whitelist pattern: alphanumeric start, then alphanumeric/slash/hyphen/underscore/period
|
||||
const validPattern = /^[a-zA-Z0-9][a-zA-Z0-9/_.-]*$/;
|
||||
|
||||
if (!validPattern.test(branchName)) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names must start with an alphanumeric character and contain only alphanumeric characters, forward slashes, hyphens, underscores, or periods.`,
|
||||
);
|
||||
}
|
||||
|
||||
// Check for leading/trailing periods
|
||||
if (branchName.startsWith(".") || branchName.endsWith(".")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot start or end with a period.`,
|
||||
);
|
||||
}
|
||||
|
||||
// Check for trailing slash
|
||||
if (branchName.endsWith("/")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot end with a slash.`,
|
||||
);
|
||||
}
|
||||
|
||||
// Check for consecutive slashes
|
||||
if (branchName.includes("//")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot contain consecutive slashes.`,
|
||||
);
|
||||
}
|
||||
|
||||
// Additional git-specific validations
|
||||
if (branchName.includes("..")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot contain '..'`,
|
||||
);
|
||||
}
|
||||
|
||||
if (branchName.endsWith(".lock")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot end with '.lock'`,
|
||||
);
|
||||
}
|
||||
|
||||
if (branchName.includes("@{")) {
|
||||
throw new Error(
|
||||
`Invalid branch name: "${branchName}". Branch names cannot contain '@{'`,
|
||||
);
|
||||
}
|
||||
}
|
||||
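Illustrative calls against the validator above (the branch names are made up):

validateBranchName("claude/issue-42-20240601-1210"); // passes the whitelist

try {
  validateBranchName("--upload-pack=/bin/sh"); // option-injection attempt
} catch (e) {
  console.error((e as Error).message); // rejected: cannot start with a dash
}

try {
  validateBranchName("feature/../main"); // path traversal attempt
} catch (e) {
  console.error((e as Error).message); // rejected: cannot contain '..'
}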
|
||||
/**
|
||||
* Executes a git command safely using execFileSync to avoid shell interpolation.
|
||||
*
|
||||
* Security: execFileSync passes arguments directly to the git binary without
|
||||
* invoking a shell, preventing command injection attacks where malicious input
|
||||
* could be interpreted as shell commands (e.g., branch names containing `;`, `|`, `&&`).
|
||||
*
|
||||
* @param args - Git command arguments (e.g., ["checkout", "branch-name"])
|
||||
*/
|
||||
function execGit(args: string[]): void {
|
||||
execFileSync("git", args, { stdio: "inherit" });
|
||||
}
|
||||
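A short illustration of the property the comment describes: execGit never goes through a shell, so metacharacters in a branch name reach git as one literal argument and are never re-interpreted. The name below is deliberately hostile and, in the real flow, would already have been rejected by validateBranchName because it contains a space:

const hostile = "main; rm -rf /";

// git receives exactly two arguments: "checkout" and the literal string above.
// No shell is involved, so ';' is never a command separator; git simply fails
// to resolve the ref and nothing else runs.
execGit(["checkout", hostile]);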
|
||||
export type BranchInfo = {
|
||||
baseBranch: string;
|
||||
@@ -26,7 +135,7 @@ export async function setupBranch(
|
||||
): Promise<BranchInfo> {
|
||||
const { owner, repo } = context.repository;
|
||||
const entityNumber = context.entityNumber;
|
||||
const { baseBranch, branchPrefix } = context.inputs;
|
||||
const { baseBranch, branchPrefix, branchNameTemplate } = context.inputs;
|
||||
const isPR = context.isPR;
|
||||
|
||||
if (isPR) {
|
||||
@@ -53,14 +162,19 @@ export async function setupBranch(
|
||||
`PR #${entityNumber}: ${commitCount} commits, using fetch depth ${fetchDepth}`,
|
||||
);
|
||||
|
||||
// Validate branch names before use to prevent command injection
|
||||
validateBranchName(branchName);
|
||||
|
||||
// Execute git commands to checkout PR branch (dynamic depth based on PR size)
|
||||
await $`git fetch origin --depth=${fetchDepth} ${branchName}`;
|
||||
await $`git checkout ${branchName} --`;
|
||||
// Using execFileSync instead of shell template literals for security
|
||||
execGit(["fetch", "origin", `--depth=${fetchDepth}`, branchName]);
|
||||
execGit(["checkout", branchName, "--"]);
|
||||
|
||||
console.log(`Successfully checked out PR branch for PR #${entityNumber}`);
|
||||
|
||||
// For open PRs, we need to get the base branch of the PR
|
||||
const baseBranch = prData.baseRefName;
|
||||
validateBranchName(baseBranch);
|
||||
|
||||
return {
|
||||
baseBranch,
|
||||
@@ -87,17 +201,8 @@ export async function setupBranch(
|
||||
// Generate branch name for either an issue or closed/merged PR
|
||||
const entityType = isPR ? "pr" : "issue";
|
||||
|
||||
// Create Kubernetes-compatible timestamp: lowercase, hyphens only, shorter format
|
||||
const now = new Date();
|
||||
const timestamp = `${now.getFullYear()}${String(now.getMonth() + 1).padStart(2, "0")}${String(now.getDate()).padStart(2, "0")}-${String(now.getHours()).padStart(2, "0")}${String(now.getMinutes()).padStart(2, "0")}`;
|
||||
|
||||
// Ensure branch name is Kubernetes-compatible:
|
||||
// - Lowercase only
|
||||
// - Alphanumeric with hyphens
|
||||
// - No underscores
|
||||
// - Max 50 chars (to allow for prefixes)
|
||||
const branchName = `${branchPrefix}${entityType}-${entityNumber}-${timestamp}`;
|
||||
const newBranch = branchName.toLowerCase().substring(0, 50);
|
||||
// Get the SHA of the source branch to use in template
|
||||
let sourceSHA: string | undefined;
|
||||
|
||||
try {
|
||||
// Get the SHA of the source branch to verify it exists
|
||||
@@ -107,8 +212,46 @@ export async function setupBranch(
|
||||
ref: `heads/${sourceBranch}`,
|
||||
});
|
||||
|
||||
const currentSHA = sourceBranchRef.data.object.sha;
|
||||
console.log(`Source branch SHA: ${currentSHA}`);
|
||||
sourceSHA = sourceBranchRef.data.object.sha;
|
||||
console.log(`Source branch SHA: ${sourceSHA}`);
|
||||
|
||||
// Extract first label from GitHub data
|
||||
const firstLabel = extractFirstLabel(githubData);
|
||||
|
||||
// Extract title from GitHub data
|
||||
const title = githubData.contextData.title;
|
||||
|
||||
// Generate branch name using template or default format
|
||||
let newBranch = generateBranchName(
|
||||
branchNameTemplate,
|
||||
branchPrefix,
|
||||
entityType,
|
||||
entityNumber,
|
||||
sourceSHA,
|
||||
firstLabel,
|
||||
title,
|
||||
);
|
||||
|
||||
// Check if generated branch already exists on remote
|
||||
try {
|
||||
await $`git ls-remote --exit-code origin refs/heads/${newBranch}`.quiet();
|
||||
|
||||
// If we get here, branch exists (exit code 0)
|
||||
console.log(
|
||||
`Branch '${newBranch}' already exists, falling back to default format`,
|
||||
);
|
||||
newBranch = generateBranchName(
|
||||
undefined, // Force default template
|
||||
branchPrefix,
|
||||
entityType,
|
||||
entityNumber,
|
||||
sourceSHA,
|
||||
firstLabel,
|
||||
title,
|
||||
);
|
||||
} catch {
|
||||
// Branch doesn't exist (non-zero exit code), continue with generated name
|
||||
}
|
||||
|
||||
// For commit signing, defer branch creation to the file ops server
|
||||
if (context.inputs.useCommitSigning) {
|
||||
@@ -118,8 +261,9 @@ export async function setupBranch(
|
||||
|
||||
// Ensure we're on the source branch
|
||||
console.log(`Fetching and checking out source branch: ${sourceBranch}`);
|
||||
await $`git fetch origin ${sourceBranch} --depth=1`;
|
||||
await $`git checkout ${sourceBranch}`;
|
||||
validateBranchName(sourceBranch);
|
||||
execGit(["fetch", "origin", sourceBranch, "--depth=1"]);
|
||||
execGit(["checkout", sourceBranch, "--"]);
|
||||
|
||||
// Set outputs for GitHub Actions
|
||||
core.setOutput("CLAUDE_BRANCH", newBranch);
|
||||
@@ -138,11 +282,13 @@ export async function setupBranch(
|
||||
|
||||
// Fetch and checkout the source branch first to ensure we branch from the correct base
|
||||
console.log(`Fetching and checking out source branch: ${sourceBranch}`);
|
||||
await $`git fetch origin ${sourceBranch} --depth=1`;
|
||||
await $`git checkout ${sourceBranch}`;
|
||||
validateBranchName(sourceBranch);
|
||||
validateBranchName(newBranch);
|
||||
execGit(["fetch", "origin", sourceBranch, "--depth=1"]);
|
||||
execGit(["checkout", sourceBranch, "--"]);
|
||||
|
||||
// Create and checkout the new branch from the source branch
|
||||
await $`git checkout -b ${newBranch}`;
|
||||
execGit(["checkout", "-b", newBranch]);
|
||||
|
||||
console.log(
|
||||
`Successfully created and checked out local branch: ${newBranch}`,
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
import { GITHUB_SERVER_URL } from "../api/config";
|
||||
|
||||
export type ExecutionDetails = {
|
||||
cost_usd?: number;
|
||||
total_cost_usd?: number;
|
||||
duration_ms?: number;
|
||||
duration_api_ms?: number;
|
||||
};
|
||||
|
||||
@@ -6,9 +6,14 @@
|
||||
*/
|
||||
|
||||
import { $ } from "bun";
|
||||
import { mkdir, writeFile, rm } from "fs/promises";
|
||||
import { join } from "path";
|
||||
import { homedir } from "os";
|
||||
import type { GitHubContext } from "../context";
|
||||
import { GITHUB_SERVER_URL } from "../api/config";
|
||||
|
||||
const SSH_SIGNING_KEY_PATH = join(homedir(), ".ssh", "claude_signing_key");
|
||||
|
||||
type GitUser = {
|
||||
login: string;
|
||||
id: number;
|
||||
@@ -17,7 +22,7 @@ type GitUser = {
|
||||
export async function configureGitAuth(
|
||||
githubToken: string,
|
||||
context: GitHubContext,
|
||||
user: GitUser | null,
|
||||
user: GitUser,
|
||||
) {
|
||||
console.log("Configuring git authentication for non-signing mode");
|
||||
|
||||
@@ -28,20 +33,14 @@ export async function configureGitAuth(
|
||||
? "users.noreply.github.com"
|
||||
: `users.noreply.${serverUrl.hostname}`;
|
||||
|
||||
// Configure git user based on the comment creator
|
||||
// Configure git user
|
||||
console.log("Configuring git user...");
|
||||
if (user) {
|
||||
const botName = user.login;
|
||||
const botId = user.id;
|
||||
console.log(`Setting git user as ${botName}...`);
|
||||
await $`git config user.name "${botName}"`;
|
||||
await $`git config user.email "${botId}+${botName}@${noreplyDomain}"`;
|
||||
console.log(`✓ Set git user as ${botName}`);
|
||||
} else {
|
||||
console.log("No user data in comment, using default bot user");
|
||||
await $`git config user.name "github-actions[bot]"`;
|
||||
await $`git config user.email "41898282+github-actions[bot]@${noreplyDomain}"`;
|
||||
}
|
||||
const botName = user.login;
|
||||
const botId = user.id;
|
||||
console.log(`Setting git user as ${botName}...`);
|
||||
await $`git config user.name "${botName}"`;
|
||||
await $`git config user.email "${botId}+${botName}@${noreplyDomain}"`;
|
||||
console.log(`✓ Set git user as ${botName}`);
|
||||
|
||||
// Remove the authorization header that actions/checkout sets
|
||||
console.log("Removing existing git authentication headers...");
|
||||
@@ -60,3 +59,55 @@ export async function configureGitAuth(
|
||||
|
||||
console.log("Git authentication configured successfully");
|
||||
}
|
||||
|
||||
/**
|
||||
* Configure git to use SSH signing for commits
|
||||
* This is an alternative to GitHub API-based commit signing (use_commit_signing)
|
||||
*/
|
||||
export async function setupSshSigning(sshSigningKey: string): Promise<void> {
|
||||
console.log("Configuring SSH signing for commits...");
|
||||
|
||||
// Validate SSH key format
|
||||
if (!sshSigningKey.trim()) {
|
||||
throw new Error("SSH signing key cannot be empty");
|
||||
}
|
||||
if (
|
||||
!sshSigningKey.includes("BEGIN") ||
|
||||
!sshSigningKey.includes("PRIVATE KEY")
|
||||
) {
|
||||
throw new Error("Invalid SSH private key format");
|
||||
}
|
||||
|
||||
// Create .ssh directory with secure permissions (700)
|
||||
const sshDir = join(homedir(), ".ssh");
|
||||
await mkdir(sshDir, { recursive: true, mode: 0o700 });
|
||||
|
||||
// Ensure key ends with newline (required for ssh-keygen to parse it)
|
||||
const normalizedKey = sshSigningKey.endsWith("\n")
|
||||
? sshSigningKey
|
||||
: sshSigningKey + "\n";
|
||||
|
||||
// Write the signing key atomically with secure permissions (600)
|
||||
await writeFile(SSH_SIGNING_KEY_PATH, normalizedKey, { mode: 0o600 });
|
||||
console.log(`✓ SSH signing key written to ${SSH_SIGNING_KEY_PATH}`);
|
||||
|
||||
// Configure git to use SSH signing
|
||||
await $`git config gpg.format ssh`;
|
||||
await $`git config user.signingkey ${SSH_SIGNING_KEY_PATH}`;
|
||||
await $`git config commit.gpgsign true`;
|
||||
|
||||
console.log("✓ Git configured to use SSH signing for commits");
|
||||
}
|
||||
|
||||
/**
|
||||
* Clean up the SSH signing key file
|
||||
* Should be called in the post step for security
|
||||
*/
|
||||
export async function cleanupSshSigning(): Promise<void> {
|
||||
try {
|
||||
await rm(SSH_SIGNING_KEY_PATH, { force: true });
|
||||
console.log("✓ SSH signing key cleaned up");
|
||||
} catch (error) {
|
||||
console.log("No SSH signing key to clean up");
|
||||
}
|
||||
}
|
||||
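Sketch of the intended lifecycle for these two helpers. The key below is a placeholder, not a real secret; in the action the value comes from the ssh_signing_key input:

const placeholderKey =
  "-----BEGIN OPENSSH PRIVATE KEY-----\n...\n-----END OPENSSH PRIVATE KEY-----";

// Main step: writes ~/.ssh/claude_signing_key (mode 600) and enables commit.gpgsign.
await setupSshSigning(placeholderKey);

// ...commits made from here on are SSH-signed...

// Post step (cleanup-ssh-signing.ts): removes the key file even if the run failed.
await cleanupSshSigning();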
|
||||
@@ -58,9 +58,16 @@ export type GitHubPullRequest = {
|
||||
headRefName: string;
|
||||
headRefOid: string;
|
||||
createdAt: string;
|
||||
updatedAt?: string;
|
||||
lastEditedAt?: string;
|
||||
additions: number;
|
||||
deletions: number;
|
||||
state: string;
|
||||
labels: {
|
||||
nodes: Array<{
|
||||
name: string;
|
||||
}>;
|
||||
};
|
||||
commits: {
|
||||
totalCount: number;
|
||||
nodes: Array<{
|
||||
@@ -83,7 +90,14 @@ export type GitHubIssue = {
|
||||
body: string;
|
||||
author: GitHubAuthor;
|
||||
createdAt: string;
|
||||
updatedAt?: string;
|
||||
lastEditedAt?: string;
|
||||
state: string;
|
||||
labels: {
|
||||
nodes: Array<{
|
||||
name: string;
|
||||
}>;
|
||||
};
|
||||
comments: {
|
||||
nodes: GitHubComment[];
|
||||
};
|
||||
|
||||
@@ -6,11 +6,11 @@
|
||||
*/
|
||||
|
||||
import type { Octokit } from "@octokit/rest";
|
||||
import type { ParsedGitHubContext } from "../context";
|
||||
import type { GitHubContext } from "../context";
|
||||
|
||||
export async function checkHumanActor(
|
||||
octokit: Octokit,
|
||||
githubContext: ParsedGitHubContext,
|
||||
githubContext: GitHubContext,
|
||||
) {
|
||||
// Fetch user information from GitHub API
|
||||
const { data: userData } = await octokit.users.getByUsername({
|
||||
|
||||
@@ -6,17 +6,43 @@ import type { Octokit } from "@octokit/rest";
|
||||
* Check if the actor has write permissions to the repository
|
||||
* @param octokit - The Octokit REST client
|
||||
* @param context - The GitHub context
|
||||
* @param allowedNonWriteUsers - Comma-separated list of users allowed without write permissions, or '*' for all
|
||||
* @param githubTokenProvided - Whether github_token was provided as input (not from app)
|
||||
* @returns true if the actor has write permissions, false otherwise
|
||||
*/
|
||||
export async function checkWritePermissions(
|
||||
octokit: Octokit,
|
||||
context: ParsedGitHubContext,
|
||||
allowedNonWriteUsers?: string,
|
||||
githubTokenProvided?: boolean,
|
||||
): Promise<boolean> {
|
||||
const { repository, actor } = context;
|
||||
|
||||
try {
|
||||
core.info(`Checking permissions for actor: ${actor}`);
|
||||
|
||||
// Check if we should bypass permission checks for this user
|
||||
if (allowedNonWriteUsers && githubTokenProvided) {
|
||||
const allowedUsers = allowedNonWriteUsers.trim();
|
||||
if (allowedUsers === "*") {
|
||||
core.warning(
|
||||
`⚠️ SECURITY WARNING: Bypassing write permission check for ${actor} due to allowed_non_write_users='*'. This should only be used for workflows with very limited permissions.`,
|
||||
);
|
||||
return true;
|
||||
} else if (allowedUsers) {
|
||||
const allowedUserList = allowedUsers
|
||||
.split(",")
|
||||
.map((u) => u.trim())
|
||||
.filter((u) => u.length > 0);
|
||||
if (allowedUserList.includes(actor)) {
|
||||
core.warning(
|
||||
`⚠️ SECURITY WARNING: Bypassing write permission check for ${actor} due to allowed_non_write_users configuration. This should only be used for workflows with very limited permissions.`,
|
||||
);
|
||||
return true;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check if the actor is a GitHub App (bot user)
|
||||
if (actor.endsWith("[bot]")) {
|
||||
core.info(`Actor is a GitHub App: ${actor}`);
|
||||
|
||||
@@ -4,11 +4,12 @@ import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
|
||||
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
|
||||
import { z } from "zod";
|
||||
import { readFile, stat } from "fs/promises";
|
||||
import { join } from "path";
|
||||
import { resolve } from "path";
|
||||
import { constants } from "fs";
|
||||
import fetch from "node-fetch";
|
||||
import { GITHUB_API_URL } from "../github/api/config";
|
||||
import { retryWithBackoff } from "../utils/retry";
|
||||
import { validatePathWithinRepo } from "./path-validation";
|
||||
|
||||
type GitHubRef = {
|
||||
object: {
|
||||
@@ -213,12 +214,18 @@ server.tool(
|
||||
throw new Error("GITHUB_TOKEN environment variable is required");
|
||||
}
|
||||
|
||||
const processedFiles = files.map((filePath) => {
|
||||
if (filePath.startsWith("/")) {
|
||||
return filePath.slice(1);
|
||||
}
|
||||
return filePath;
|
||||
});
|
||||
// Validate all paths are within repository root and get full/relative paths
|
||||
const resolvedRepoDir = resolve(REPO_DIR);
|
||||
const validatedFiles = await Promise.all(
|
||||
files.map(async (filePath) => {
|
||||
const fullPath = await validatePathWithinRepo(filePath, REPO_DIR);
|
||||
// Calculate the relative path for the git tree entry
|
||||
// Use the original filePath (normalized) for the git path, not the symlink-resolved path
|
||||
const normalizedPath = resolve(resolvedRepoDir, filePath);
|
||||
const relativePath = normalizedPath.slice(resolvedRepoDir.length + 1);
|
||||
return { fullPath, relativePath };
|
||||
}),
|
||||
);
|
||||
|
||||
// 1. Get the branch reference (create if doesn't exist)
|
||||
const baseSha = await getOrCreateBranchRef(
|
||||
@@ -247,18 +254,14 @@ server.tool(
|
||||
|
||||
// 3. Create tree entries for all files
|
||||
const treeEntries = await Promise.all(
|
||||
processedFiles.map(async (filePath) => {
|
||||
const fullPath = filePath.startsWith("/")
|
||||
? filePath
|
||||
: join(REPO_DIR, filePath);
|
||||
|
||||
validatedFiles.map(async ({ fullPath, relativePath }) => {
|
||||
// Get the proper file mode based on file permissions
|
||||
const fileMode = await getFileMode(fullPath);
|
||||
|
||||
// Check if file is binary (images, etc.)
|
||||
const isBinaryFile =
|
||||
/\.(png|jpg|jpeg|gif|webp|ico|pdf|zip|tar|gz|exe|bin|woff|woff2|ttf|eot)$/i.test(
|
||||
filePath,
|
||||
relativePath,
|
||||
);
|
||||
|
||||
if (isBinaryFile) {
|
||||
@@ -284,7 +287,7 @@ server.tool(
|
||||
if (!blobResponse.ok) {
|
||||
const errorText = await blobResponse.text();
|
||||
throw new Error(
|
||||
`Failed to create blob for ${filePath}: ${blobResponse.status} - ${errorText}`,
|
||||
`Failed to create blob for ${relativePath}: ${blobResponse.status} - ${errorText}`,
|
||||
);
|
||||
}
|
||||
|
||||
@@ -292,7 +295,7 @@ server.tool(
|
||||
|
||||
// Return tree entry with blob SHA
|
||||
return {
|
||||
path: filePath,
|
||||
path: relativePath,
|
||||
mode: fileMode,
|
||||
type: "blob",
|
||||
sha: blobData.sha,
|
||||
@@ -301,7 +304,7 @@ server.tool(
|
||||
// For text files, include content directly in tree
|
||||
const content = await readFile(fullPath, "utf-8");
|
||||
return {
|
||||
path: filePath,
|
||||
path: relativePath,
|
||||
mode: fileMode,
|
||||
type: "blob",
|
||||
content: content,
|
||||
@@ -421,7 +424,9 @@ server.tool(
|
||||
author: newCommitData.author.name,
|
||||
date: newCommitData.author.date,
|
||||
},
|
||||
files: processedFiles.map((path) => ({ path })),
|
||||
files: validatedFiles.map(({ relativePath }) => ({
|
||||
path: relativePath,
|
||||
})),
|
||||
tree: {
|
||||
sha: treeData.sha,
|
||||
},
|
||||
|
||||
@@ -3,6 +3,7 @@ import { GITHUB_API_URL, GITHUB_SERVER_URL } from "../github/api/config";
|
||||
import type { GitHubContext } from "../github/context";
|
||||
import { isEntityContext } from "../github/context";
|
||||
import { Octokit } from "@octokit/rest";
|
||||
import type { AutoDetectedMode } from "../modes/detector";
|
||||
|
||||
type PrepareConfigParams = {
|
||||
githubToken: string;
|
||||
@@ -12,6 +13,7 @@ type PrepareConfigParams = {
|
||||
baseBranch: string;
|
||||
claudeCommentId?: string;
|
||||
allowedTools: string[];
|
||||
mode: AutoDetectedMode;
|
||||
context: GitHubContext;
|
||||
};
|
||||
|
||||
@@ -59,12 +61,17 @@ export async function prepareMcpConfig(
|
||||
claudeCommentId,
|
||||
allowedTools,
|
||||
context,
|
||||
mode,
|
||||
} = params;
|
||||
try {
|
||||
const allowedToolsList = allowedTools || [];
|
||||
|
||||
// Detect if we're in agent mode (explicit prompt provided)
|
||||
const isAgentMode = !!context.inputs?.prompt;
|
||||
const isAgentMode = mode === "agent";
|
||||
|
||||
const hasGitHubCommentTools = allowedToolsList.some((tool) =>
|
||||
tool.startsWith("mcp__github_comment__"),
|
||||
);
|
||||
|
||||
const hasGitHubMcpTools = allowedToolsList.some((tool) =>
|
||||
tool.startsWith("mcp__github__"),
|
||||
@@ -74,10 +81,6 @@ export async function prepareMcpConfig(
|
||||
tool.startsWith("mcp__github_inline_comment__"),
|
||||
);
|
||||
|
||||
const hasGitHubCommentTools = allowedToolsList.some((tool) =>
|
||||
tool.startsWith("mcp__github_comment__"),
|
||||
);
|
||||
|
||||
const hasGitHubCITools = allowedToolsList.some((tool) =>
|
||||
tool.startsWith("mcp__github_ci__"),
|
||||
);
|
||||
@@ -206,7 +209,7 @@ export async function prepareMcpConfig(
|
||||
"GITHUB_PERSONAL_ACCESS_TOKEN",
|
||||
"-e",
|
||||
"GITHUB_HOST",
|
||||
"ghcr.io/github/github-mcp-server:sha-efef8ae", // https://github.com/github/github-mcp-server/releases/tag/v0.9.0
|
||||
"ghcr.io/github/github-mcp-server:sha-23fa0dd", // https://github.com/github/github-mcp-server/releases/tag/v0.17.1
|
||||
],
|
||||
env: {
|
||||
GITHUB_PERSONAL_ACCESS_TOKEN: githubToken,
|
||||
|
||||
src/mcp/path-validation.ts (new file, 64 lines)
@@ -0,0 +1,64 @@
import { realpath } from "fs/promises";
import { resolve, sep } from "path";

/**
 * Validates that a file path resolves within the repository root.
 * Prevents path traversal attacks via "../" sequences and symlinks.
 * @param filePath - The file path to validate (can be relative or absolute)
 * @param repoRoot - The repository root directory
 * @returns The resolved absolute path (with symlinks resolved) if valid
 * @throws Error if the path resolves outside the repository root
 */
export async function validatePathWithinRepo(
  filePath: string,
  repoRoot: string,
): Promise<string> {
  // First resolve the path string (handles .. and . segments)
  const initialPath = resolve(repoRoot, filePath);

  // Resolve symlinks to get the real path
  // This prevents symlink attacks where a link inside the repo points outside
  let resolvedRoot: string;
  let resolvedPath: string;

  try {
    resolvedRoot = await realpath(repoRoot);
  } catch {
    throw new Error(`Repository root '${repoRoot}' does not exist`);
  }

  try {
    resolvedPath = await realpath(initialPath);
  } catch {
    // File doesn't exist yet - fall back to checking the parent directory
    // This handles the case where we're creating a new file
    const parentDir = resolve(initialPath, "..");
    try {
      const resolvedParent = await realpath(parentDir);
      if (
        resolvedParent !== resolvedRoot &&
        !resolvedParent.startsWith(resolvedRoot + sep)
      ) {
        throw new Error(
          `Path '${filePath}' resolves outside the repository root`,
        );
      }
      // Parent is valid, return the initial path since file doesn't exist yet
      return initialPath;
    } catch {
      throw new Error(
        `Path '${filePath}' resolves outside the repository root`,
      );
    }
  }

  // Path must be within repo root (or be the root itself)
  if (
    resolvedPath !== resolvedRoot &&
    !resolvedPath.startsWith(resolvedRoot + sep)
  ) {
    throw new Error(`Path '${filePath}' resolves outside the repository root`);
  }

  return resolvedPath;
}
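Usage sketch for the validator above; the repository root path is hypothetical:

const repoRoot = "/home/runner/work/repo/repo";

// A normal relative path resolves to an absolute path inside the checkout.
await validatePathWithinRepo("src/index.ts", repoRoot);

// Traversal outside the checkout is rejected before anything is read or committed.
await validatePathWithinRepo("../../../etc/passwd", repoRoot); // throws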
@@ -4,7 +4,11 @@ import type { Mode, ModeOptions, ModeResult } from "../types";
|
||||
import type { PreparedContext } from "../../create-prompt/types";
|
||||
import { prepareMcpConfig } from "../../mcp/install-mcp-server";
|
||||
import { parseAllowedTools } from "./parse-tools";
|
||||
import { configureGitAuth } from "../../github/operations/git-config";
|
||||
import {
|
||||
configureGitAuth,
|
||||
setupSshSigning,
|
||||
} from "../../github/operations/git-config";
|
||||
import { checkHumanActor } from "../../github/validation/actor";
|
||||
import type { GitHubContext } from "../../github/context";
|
||||
import { isEntityContext } from "../../github/context";
|
||||
|
||||
@@ -79,20 +83,41 @@ export const agentMode: Mode = {
|
||||
|
||||
async prepare({
|
||||
context,
|
||||
githubToken,
|
||||
octokit,
|
||||
githubToken,
|
||||
}: ModeOptions): Promise<ModeResult> {
|
||||
// Configure git authentication for agent mode (same as tag mode)
|
||||
if (!context.inputs.useCommitSigning) {
|
||||
try {
|
||||
// Get the authenticated user (will be claude[bot] when using Claude App token)
|
||||
const { data: authenticatedUser } =
|
||||
await octokit.rest.users.getAuthenticated();
|
||||
const user = {
|
||||
login: authenticatedUser.login,
|
||||
id: authenticatedUser.id,
|
||||
};
|
||||
// Check if actor is human (prevents bot-triggered loops)
|
||||
await checkHumanActor(octokit.rest, context);
|
||||
|
||||
// Configure git authentication for agent mode (same as tag mode)
|
||||
// SSH signing takes precedence if provided
|
||||
const useSshSigning = !!context.inputs.sshSigningKey;
|
||||
const useApiCommitSigning =
|
||||
context.inputs.useCommitSigning && !useSshSigning;
|
||||
|
||||
if (useSshSigning) {
|
||||
// Setup SSH signing for commits
|
||||
await setupSshSigning(context.inputs.sshSigningKey);
|
||||
|
||||
// Still configure git auth for push operations (user/email and remote URL)
|
||||
const user = {
|
||||
login: context.inputs.botName,
|
||||
id: parseInt(context.inputs.botId),
|
||||
};
|
||||
try {
|
||||
await configureGitAuth(githubToken, context, user);
|
||||
} catch (error) {
|
||||
console.error("Failed to configure git authentication:", error);
|
||||
// Continue anyway - git operations may still work with default config
|
||||
}
|
||||
} else if (!useApiCommitSigning) {
|
||||
// Use bot_id and bot_name from inputs directly
|
||||
const user = {
|
||||
login: context.inputs.botName,
|
||||
id: parseInt(context.inputs.botId),
|
||||
};
|
||||
|
||||
try {
|
||||
// Use the shared git configuration function
|
||||
await configureGitAuth(githubToken, context, user);
|
||||
} catch (error) {
|
||||
@@ -141,6 +166,7 @@ export const agentMode: Mode = {
|
||||
baseBranch: baseBranch,
|
||||
claudeCommentId: undefined, // No tracking comment in agent mode
|
||||
allowedTools,
|
||||
mode: "agent",
|
||||
context,
|
||||
});
|
||||
|
||||
|
||||
@@ -1,22 +1,33 @@
export function parseAllowedTools(claudeArgs: string): string[] {
// Match --allowedTools followed by the value
// Match --allowedTools or --allowed-tools followed by the value
// Handle both quoted and unquoted values
// Use /g flag to find ALL occurrences, not just the first one
const patterns = [
/--allowedTools\s+"([^"]+)"/, // Double quoted
/--allowedTools\s+'([^']+)'/, // Single quoted
/--allowedTools\s+([^\s]+)/, // Unquoted
/--(?:allowedTools|allowed-tools)\s+"([^"]+)"/g, // Double quoted
/--(?:allowedTools|allowed-tools)\s+'([^']+)'/g, // Single quoted
/--(?:allowedTools|allowed-tools)\s+([^'"\s][^\s]*)/g, // Unquoted (must not start with quote)
];

const tools: string[] = [];
const seen = new Set<string>();

for (const pattern of patterns) {
const match = claudeArgs.match(pattern);
if (match && match[1]) {
// Don't return if the value starts with -- (another flag)
if (match[1].startsWith("--")) {
return [];
for (const match of claudeArgs.matchAll(pattern)) {
if (match[1]) {
// Don't add if the value starts with -- (another flag)
if (match[1].startsWith("--")) {
continue;
}
for (const tool of match[1].split(",")) {
const trimmed = tool.trim();
if (trimmed && !seen.has(trimmed)) {
seen.add(trimmed);
tools.push(trimmed);
}
}
}
return match[1].split(",").map((t) => t.trim());
}
}

return [];
return tools;
}

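A minimal usage sketch of the reworked parser (illustrative input; the import path is assumed from the tag-mode import "../agent/parse-tools"):

import { parseAllowedTools } from "./src/modes/agent/parse-tools";

const args =
  '--model opus --allowedTools "Bash(git add:*),Edit" --allowed-tools Edit,Read';
parseAllowedTools(args);
// => ["Bash(git add:*)", "Edit", "Read"]
// Both spellings of the flag are matched, every occurrence is scanned, and duplicates are dropped.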
@@ -19,7 +19,13 @@ export function detectMode(context: GitHubContext): AutoDetectedMode {

// If track_progress is set for PR/issue events, force tag mode
if (context.inputs.trackProgress && isEntityContext(context)) {
if (isPullRequestEvent(context) || isIssuesEvent(context)) {
if (
isPullRequestEvent(context) ||
isIssuesEvent(context) ||
isIssueCommentEvent(context) ||
isPullRequestReviewCommentEvent(context) ||
isPullRequestReviewEvent(context)
) {
return "tag";
}
}
@@ -44,6 +50,10 @@ export function detectMode(context: GitHubContext): AutoDetectedMode {

// Issue events
if (isEntityContext(context) && isIssuesEvent(context)) {
// If prompt is provided, use agent mode (same as PR events)
if (context.inputs.prompt) {
return "agent";
}
// Check for @claude mentions or labels/assignees
if (checkContainsTrigger(context)) {
return "tag";
@@ -83,10 +93,16 @@ export function getModeDescription(mode: AutoDetectedMode): string {

function validateTrackProgressEvent(context: GitHubContext): void {
// track_progress is only valid for pull_request and issue events
const validEvents = ["pull_request", "issues"];
const validEvents = [
"pull_request",
"issues",
"issue_comment",
"pull_request_review_comment",
"pull_request_review",
];
if (!validEvents.includes(context.eventName)) {
throw new Error(
`track_progress is only supported for pull_request and issue events. ` +
`track_progress is only supported for events: ${validEvents.join(", ")}. ` +
`Current event: ${context.eventName}`,
);
}

@@ -4,16 +4,21 @@ import { checkContainsTrigger } from "../../github/validation/trigger";
import { checkHumanActor } from "../../github/validation/actor";
import { createInitialComment } from "../../github/operations/comments/create-initial";
import { setupBranch } from "../../github/operations/branch";
import { configureGitAuth } from "../../github/operations/git-config";
import {
configureGitAuth,
setupSshSigning,
} from "../../github/operations/git-config";
import { prepareMcpConfig } from "../../mcp/install-mcp-server";
import {
fetchGitHubData,
extractTriggerTimestamp,
extractOriginalTitle,
} from "../../github/data/fetcher";
import { createPrompt, generateDefaultPrompt } from "../../create-prompt";
import { isEntityContext } from "../../github/context";
import type { PreparedContext } from "../../create-prompt/types";
import type { FetchDataResult } from "../../github/data/fetcher";
import { parseAllowedTools } from "../agent/parse-tools";

/**
* Tag mode implementation.
@@ -74,6 +79,7 @@ export const tagMode: Mode = {
const commentId = commentData.id;

const triggerTime = extractTriggerTimestamp(context);
const originalTitle = extractOriginalTitle(context);

const githubData = await fetchGitHubData({
octokits: octokit,
@@ -82,15 +88,42 @@
isPR: context.isPR,
triggerUsername: context.actor,
triggerTime,
originalTitle,
});

// Setup branch
const branchInfo = await setupBranch(octokit, githubData, context);

// Configure git authentication if not using commit signing
if (!context.inputs.useCommitSigning) {
// Configure git authentication
// SSH signing takes precedence if provided
const useSshSigning = !!context.inputs.sshSigningKey;
const useApiCommitSigning =
context.inputs.useCommitSigning && !useSshSigning;

if (useSshSigning) {
// Setup SSH signing for commits
await setupSshSigning(context.inputs.sshSigningKey);

// Still configure git auth for push operations (user/email and remote URL)
const user = {
login: context.inputs.botName,
id: parseInt(context.inputs.botId),
};
try {
await configureGitAuth(githubToken, context, commentData.user);
await configureGitAuth(githubToken, context, user);
} catch (error) {
console.error("Failed to configure git authentication:", error);
throw error;
}
} else if (!useApiCommitSigning) {
// Use bot_id and bot_name from inputs directly
const user = {
login: context.inputs.botName,
id: parseInt(context.inputs.botId),
};

try {
await configureGitAuth(githubToken, context, user);
} catch (error) {
console.error("Failed to configure git authentication:", error);
throw error;
@@ -106,19 +139,10 @@

await createPrompt(tagMode, modeContext, githubData, context);

// Get our GitHub MCP servers configuration
const ourMcpConfig = await prepareMcpConfig({
githubToken,
owner: context.repository.owner,
repo: context.repository.repo,
branch: branchInfo.claudeBranch || branchInfo.currentBranch,
baseBranch: branchInfo.baseBranch,
claudeCommentId: commentId.toString(),
allowedTools: [],
context,
});

// Don't output mcp_config separately anymore - include in claude_args
const userClaudeArgs = process.env.CLAUDE_ARGS || "";
const userAllowedMCPTools = parseAllowedTools(userClaudeArgs).filter(
(tool) => tool.startsWith("mcp__github_"),
);

// Build claude_args for tag mode with required tools
// Tag mode REQUIRES these tools to function properly
@@ -134,10 +158,12 @@
"mcp__github_ci__get_ci_status",
"mcp__github_ci__get_workflow_run_details",
"mcp__github_ci__download_job_log",
...userAllowedMCPTools,
];

// Add git commands when not using commit signing
if (!context.inputs.useCommitSigning) {
// Add git commands when using git CLI (no API commit signing, or SSH signing)
// SSH signing still uses git CLI, just with signing enabled
if (!useApiCommitSigning) {
tagModeTools.push(
"Bash(git add:*)",
"Bash(git commit:*)",
@@ -148,14 +174,25 @@
"Bash(git rm:*)",
);
} else {
// When using commit signing, use MCP file ops tools
// When using API commit signing, use MCP file ops tools
tagModeTools.push(
"mcp__github_file_ops__commit_files",
"mcp__github_file_ops__delete_files",
);
}

const userClaudeArgs = process.env.CLAUDE_ARGS || "";
// Get our GitHub MCP servers configuration
const ourMcpConfig = await prepareMcpConfig({
githubToken,
owner: context.repository.owner,
repo: context.repository.repo,
branch: branchInfo.claudeBranch || branchInfo.currentBranch,
baseBranch: branchInfo.baseBranch,
claudeCommentId: commentId.toString(),
allowedTools: Array.from(new Set(tagModeTools)),
mode: "tag",
context,
});

// Build complete claude_args with multiple --mcp-config flags
let claudeArgs = "";

99 src/utils/branch-template.ts Normal file
@@ -0,0 +1,99 @@
#!/usr/bin/env bun

/**
* Branch name template parsing and variable substitution utilities
*/

const NUM_DESCRIPTION_WORDS = 5;

/**
* Extracts the first 5 words from a title and converts them to kebab-case
*/
function extractDescription(
title: string,
numWords: number = NUM_DESCRIPTION_WORDS,
): string {
if (!title || title.trim() === "") {
return "";
}

return title
.trim()
.split(/\s+/)
.slice(0, numWords) // Only first `numWords` words
.join("-")
.toLowerCase()
.replace(/[^a-z0-9-]/g, "") // Remove non-alphanumeric except hyphens
.replace(/-+/g, "-") // Replace multiple hyphens with single
.replace(/^-|-$/g, ""); // Remove leading/trailing hyphens
}

export interface BranchTemplateVariables {
prefix: string;
entityType: string;
entityNumber: number;
timestamp: string;
sha?: string;
label?: string;
description?: string;
}

/**
* Replaces template variables in a branch name template
* Template format: {{variableName}}
*/
export function applyBranchTemplate(
template: string,
variables: BranchTemplateVariables,
): string {
let result = template;

// Replace each variable
Object.entries(variables).forEach(([key, value]) => {
const placeholder = `{{${key}}}`;
const replacement = value ? String(value) : "";
result = result.replaceAll(placeholder, replacement);
});

return result;
}

/**
* Generates a branch name from the provided `template` and set of `variables`. Uses a default format if the template is empty or produces an empty result.
*/
export function generateBranchName(
template: string | undefined,
branchPrefix: string,
entityType: string,
entityNumber: number,
sha?: string,
label?: string,
title?: string,
): string {
const now = new Date();

const variables: BranchTemplateVariables = {
prefix: branchPrefix,
entityType,
entityNumber,
timestamp: `${now.getFullYear()}${String(now.getMonth() + 1).padStart(2, "0")}${String(now.getDate()).padStart(2, "0")}-${String(now.getHours()).padStart(2, "0")}${String(now.getMinutes()).padStart(2, "0")}`,
sha: sha?.substring(0, 8), // First 8 characters of SHA
label: label || entityType, // Fall back to entityType if no label
description: title ? extractDescription(title) : undefined,
};

if (template?.trim()) {
const branchName = applyBranchTemplate(template, variables);

// Some templates could produce empty results - validate
if (branchName.trim().length > 0) return branchName;

console.log(
`Branch template '${template}' generated empty result, falling back to default format`,
);
}

const branchName = `${branchPrefix}${entityType}-${entityNumber}-${variables.timestamp}`;
// Kubernetes compatible: lowercase, max 50 chars, alphanumeric and hyphens only
return branchName.toLowerCase().substring(0, 50);
}

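A minimal usage sketch of the new helper (behaviour taken from the tests further below; the import path mirrors the test file):

import { generateBranchName } from "../src/utils/branch-template";

// Default format when no template is configured: lowercased, truncated to 50 chars.
generateBranchName(undefined, "claude/", "issue", 42);
// => e.g. "claude/issue-42-20240301-1430"

// Custom template: {{description}} is the first five title words in kebab-case.
generateBranchName(
  "{{prefix}}{{label}}/{{description}}-{{entityNumber}}",
  "feature/",
  "issue",
  42,
  undefined, // sha
  "bug", // label
  "Fix login bug with OAuth",
);
// => "feature/bug/fix-login-bug-with-oauth-42"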
32 src/utils/extract-user-request.ts Normal file
@@ -0,0 +1,32 @@
/**
* Extracts the user's request from a trigger comment.
*
* Given a comment like "@claude /review-pr please check the auth module",
* this extracts "/review-pr please check the auth module".
*
* @param commentBody - The full comment body containing the trigger phrase
* @param triggerPhrase - The trigger phrase (e.g., "@claude")
* @returns The user's request (text after the trigger phrase), or null if not found
*/
export function extractUserRequest(
commentBody: string | undefined,
triggerPhrase: string,
): string | null {
if (!commentBody) {
return null;
}

// Use string operations instead of regex for better performance and security
// (avoids potential ReDoS with large comment bodies)
const triggerIndex = commentBody
.toLowerCase()
.indexOf(triggerPhrase.toLowerCase());
if (triggerIndex === -1) {
return null;
}

const afterTrigger = commentBody
.substring(triggerIndex + triggerPhrase.length)
.trim();
return afterTrigger || null;
}

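A minimal usage sketch of the new helper (the import path mirrors the test file below):

import { extractUserRequest } from "../src/utils/extract-user-request";

extractUserRequest("@claude /review-pr please check the auth module", "@claude");
// => "/review-pr please check the auth module"

extractUserRequest("Please review this PR", "@claude");
// => null (trigger phrase not present)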
247 test/branch-template.test.ts Normal file
@@ -0,0 +1,247 @@
#!/usr/bin/env bun

import { describe, it, expect } from "bun:test";
import {
applyBranchTemplate,
generateBranchName,
} from "../src/utils/branch-template";

describe("branch template utilities", () => {
describe("applyBranchTemplate", () => {
it("should replace all template variables", () => {
const template =
"{{prefix}}{{entityType}}-{{entityNumber}}-{{timestamp}}";
const variables = {
prefix: "feat/",
entityType: "issue",
entityNumber: 123,
timestamp: "20240301-1430",
sha: "abcd1234",
};

const result = applyBranchTemplate(template, variables);
expect(result).toBe("feat/issue-123-20240301-1430");
});

it("should handle custom templates with multiple variables", () => {
const template =
"{{prefix}}fix/{{entityType}}_{{entityNumber}}_{{timestamp}}_{{sha}}";
const variables = {
prefix: "claude-",
entityType: "pr",
entityNumber: 456,
timestamp: "20240301-1430",
sha: "abcd1234",
};

const result = applyBranchTemplate(template, variables);
expect(result).toBe("claude-fix/pr_456_20240301-1430_abcd1234");
});

it("should handle templates with missing variables gracefully", () => {
const template = "{{prefix}}{{entityType}}-{{missing}}-{{entityNumber}}";
const variables = {
prefix: "feat/",
entityType: "issue",
entityNumber: 123,
timestamp: "20240301-1430",
};

const result = applyBranchTemplate(template, variables);
expect(result).toBe("feat/issue-{{missing}}-123");
});
});

describe("generateBranchName", () => {
it("should use custom template when provided", () => {
const template = "{{prefix}}custom-{{entityType}}_{{entityNumber}}";
const result = generateBranchName(template, "feature/", "issue", 123);

expect(result).toBe("feature/custom-issue_123");
});

it("should use default format when template is empty", () => {
const result = generateBranchName("", "claude/", "issue", 123);

expect(result).toMatch(/^claude\/issue-123-\d{8}-\d{4}$/);
});

it("should use default format when template is undefined", () => {
const result = generateBranchName(undefined, "claude/", "pr", 456);

expect(result).toMatch(/^claude\/pr-456-\d{8}-\d{4}$/);
});

it("should preserve custom template formatting (no automatic lowercase/truncation)", () => {
const template = "{{prefix}}UPPERCASE_Branch-Name_{{entityNumber}}";
const result = generateBranchName(template, "Feature/", "issue", 123);

expect(result).toBe("Feature/UPPERCASE_Branch-Name_123");
});

it("should not truncate custom template results", () => {
const template =
"{{prefix}}very-long-branch-name-that-exceeds-the-maximum-allowed-length-{{entityNumber}}";
const result = generateBranchName(template, "feature/", "issue", 123);

expect(result).toBe(
"feature/very-long-branch-name-that-exceeds-the-maximum-allowed-length-123",
);
});

it("should apply Kubernetes-compatible transformations to default template only", () => {
const result = generateBranchName(undefined, "Feature/", "issue", 123);

expect(result).toMatch(/^feature\/issue-123-\d{8}-\d{4}$/);
expect(result.length).toBeLessThanOrEqual(50);
});

it("should handle SHA in template", () => {
const template = "{{prefix}}{{entityType}}-{{entityNumber}}-{{sha}}";
const result = generateBranchName(
template,
"fix/",
"pr",
789,
"abcdef123456",
);

expect(result).toBe("fix/pr-789-abcdef12");
});

it("should use label in template when provided", () => {
const template = "{{prefix}}{{label}}/{{entityNumber}}";
const result = generateBranchName(
template,
"feature/",
"issue",
123,
undefined,
"bug",
);

expect(result).toBe("feature/bug/123");
});

it("should fallback to entityType when label template is used but no label provided", () => {
const template = "{{prefix}}{{label}}-{{entityNumber}}";
const result = generateBranchName(template, "fix/", "pr", 456);

expect(result).toBe("fix/pr-456");
});

it("should handle template with both label and entityType", () => {
const template = "{{prefix}}{{label}}-{{entityType}}_{{entityNumber}}";
const result = generateBranchName(
template,
"dev/",
"issue",
789,
undefined,
"enhancement",
);

expect(result).toBe("dev/enhancement-issue_789");
});

it("should use description in template when provided", () => {
const template = "{{prefix}}{{description}}/{{entityNumber}}";
const result = generateBranchName(
template,
"feature/",
"issue",
123,
undefined,
undefined,
"Fix login bug with OAuth",
);

expect(result).toBe("feature/fix-login-bug-with-oauth/123");
});

it("should handle template with multiple variables including description", () => {
const template =
"{{prefix}}{{label}}/{{description}}-{{entityType}}_{{entityNumber}}";
const result = generateBranchName(
template,
"dev/",
"issue",
456,
undefined,
"bug",
"User authentication fails completely",
);

expect(result).toBe(
"dev/bug/user-authentication-fails-completely-issue_456",
);
});

it("should handle description with special characters in template", () => {
const template = "{{prefix}}{{description}}-{{entityNumber}}";
const result = generateBranchName(
template,
"fix/",
"pr",
789,
undefined,
undefined,
"Add: User Registration & Email Validation",
);

expect(result).toBe("fix/add-user-registration-email-789");
});

it("should truncate descriptions to exactly 5 words", () => {
const result = generateBranchName(
"{{prefix}}{{description}}/{{entityNumber}}",
"feature/",
"issue",
999,
undefined,
undefined,
"This is a very long title with many more than five words in it",
);
expect(result).toBe("feature/this-is-a-very-long/999");
});

it("should handle empty description in template", () => {
const template = "{{prefix}}{{description}}-{{entityNumber}}";
const result = generateBranchName(
template,
"test/",
"issue",
101,
undefined,
undefined,
"",
);

expect(result).toBe("test/-101");
});

it("should fallback to default format when template produces empty result", () => {
const template = "{{description}}"; // Will be empty if no title provided
const result = generateBranchName(template, "claude/", "issue", 123);

expect(result).toMatch(/^claude\/issue-123-\d{8}-\d{4}$/);
expect(result.length).toBeLessThanOrEqual(50);
});

it("should fallback to default format when template produces only whitespace", () => {
const template = " {{description}} "; // Will be " " if description is empty
const result = generateBranchName(
template,
"fix/",
"pr",
456,
undefined,
undefined,
"",
);

expect(result).toMatch(/^fix\/pr-456-\d{8}-\d{4}$/);
expect(result.length).toBeLessThanOrEqual(50);
});
});
});

@@ -258,7 +258,7 @@ describe("updateCommentBody", () => {
const input = {
...baseInput,
executionDetails: {
cost_usd: 0.13382595,
total_cost_usd: 0.13382595,
duration_ms: 31033,
duration_api_ms: 31034,
},
@@ -301,7 +301,7 @@ describe("updateCommentBody", () => {
const input = {
...baseInput,
executionDetails: {
cost_usd: 0.25,
total_cost_usd: 0.25,
},
triggerUsername: "testuser",
};
@@ -322,7 +322,7 @@ describe("updateCommentBody", () => {
branchName: "claude-branch-123",
prLink: "\n[Create a PR](https://github.com/owner/repo/pr-url)",
executionDetails: {
cost_usd: 0.01,
total_cost_usd: 0.01,
duration_ms: 65000, // 1 minute 5 seconds
},
triggerUsername: "trigger-user",

@@ -61,6 +61,7 @@ describe("generatePrompt", () => {
body: "This is a test PR",
author: { login: "testuser" },
state: "OPEN",
labels: { nodes: [] },
createdAt: "2023-01-01T00:00:00Z",
additions: 15,
deletions: 5,
@@ -475,6 +476,7 @@ describe("generatePrompt", () => {
body: "The login form is not working",
author: { login: "testuser" },
state: "OPEN",
labels: { nodes: [] },
createdAt: "2023-01-01T00:00:00Z",
comments: {
nodes: [],

@@ -1,13 +1,16 @@
import { describe, expect, it, jest } from "bun:test";
import {
extractTriggerTimestamp,
extractOriginalTitle,
fetchGitHubData,
filterCommentsToTriggerTime,
filterReviewsToTriggerTime,
isBodySafeToUse,
} from "../src/github/data/fetcher";
import {
createMockContext,
mockIssueCommentContext,
mockPullRequestCommentContext,
mockPullRequestReviewContext,
mockPullRequestReviewCommentContext,
mockPullRequestOpenedContext,
@@ -62,6 +65,47 @@ describe("extractTriggerTimestamp", () => {
});
});

describe("extractOriginalTitle", () => {
it("should extract title from IssueCommentEvent on PR", () => {
const title = extractOriginalTitle(mockPullRequestCommentContext);
expect(title).toBe("Fix: Memory leak in user service");
});

it("should extract title from PullRequestReviewEvent", () => {
const title = extractOriginalTitle(mockPullRequestReviewContext);
expect(title).toBe("Refactor: Improve error handling in API layer");
});

it("should extract title from PullRequestReviewCommentEvent", () => {
const title = extractOriginalTitle(mockPullRequestReviewCommentContext);
expect(title).toBe("Performance: Optimize search algorithm");
});

it("should extract title from pull_request event", () => {
const title = extractOriginalTitle(mockPullRequestOpenedContext);
expect(title).toBe("Feature: Add user authentication");
});

it("should extract title from issues event", () => {
const title = extractOriginalTitle(mockIssueOpenedContext);
expect(title).toBe("Bug: Application crashes on startup");
});

it("should return undefined for event without title", () => {
const context = createMockContext({
eventName: "issue_comment",
payload: {
comment: {
id: 123,
body: "test",
},
} as any,
});
const title = extractOriginalTitle(context);
expect(title).toBeUndefined();
});
});

describe("filterCommentsToTriggerTime", () => {
const createMockComment = (
createdAt: string,
@@ -371,6 +415,139 @@ describe("filterReviewsToTriggerTime", () => {
});
});

describe("isBodySafeToUse", () => {
const triggerTime = "2024-01-15T12:00:00Z";

const createMockContextData = (
createdAt: string,
updatedAt?: string,
lastEditedAt?: string,
) => ({
createdAt,
updatedAt,
lastEditedAt,
});

describe("body edit time validation", () => {
it("should return true when body was never edited", () => {
const contextData = createMockContextData("2024-01-15T10:00:00Z");
expect(isBodySafeToUse(contextData, triggerTime)).toBe(true);
});

it("should return true when body was edited before trigger time", () => {
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T11:00:00Z",
"2024-01-15T11:30:00Z",
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(true);
});

it("should return false when body was edited after trigger time (using updatedAt)", () => {
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T13:00:00Z",
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(false);
});

it("should return false when body was edited after trigger time (using lastEditedAt)", () => {
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
undefined,
"2024-01-15T13:00:00Z",
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(false);
});

it("should return false when body was edited exactly at trigger time", () => {
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T12:00:00Z",
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(false);
});

it("should prioritize lastEditedAt over updatedAt", () => {
// updatedAt is after trigger, but lastEditedAt is before - should be safe
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T13:00:00Z", // updatedAt after trigger
"2024-01-15T11:00:00Z", // lastEditedAt before trigger
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(true);
});
});

describe("edge cases", () => {
it("should return true when no trigger time is provided (backward compatibility)", () => {
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T13:00:00Z", // Would normally fail
"2024-01-15T14:00:00Z", // Would normally fail
);
expect(isBodySafeToUse(contextData, undefined)).toBe(true);
});

it("should handle millisecond precision correctly", () => {
// Edit 1ms after trigger - should be unsafe
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T12:00:00.001Z",
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(false);
});

it("should handle edit 1ms before trigger - should be safe", () => {
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T11:59:59.999Z",
);
expect(isBodySafeToUse(contextData, triggerTime)).toBe(true);
});

it("should handle various ISO timestamp formats", () => {
const contextData1 = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T11:00:00Z",
);
const contextData2 = createMockContextData(
"2024-01-15T10:00:00+00:00",
"2024-01-15T11:00:00+00:00",
);
const contextData3 = createMockContextData(
"2024-01-15T10:00:00.000Z",
"2024-01-15T11:00:00.000Z",
);

expect(isBodySafeToUse(contextData1, triggerTime)).toBe(true);
expect(isBodySafeToUse(contextData2, triggerTime)).toBe(true);
expect(isBodySafeToUse(contextData3, triggerTime)).toBe(true);
});
});

describe("security scenarios", () => {
it("should detect race condition attack - body edited between trigger and processing", () => {
// Simulates: Owner triggers @claude at 12:00, attacker edits body at 12:00:30
const contextData = createMockContextData(
"2024-01-15T10:00:00Z", // Issue created
"2024-01-15T12:00:30Z", // Body edited after trigger
);
expect(isBodySafeToUse(contextData, "2024-01-15T12:00:00Z")).toBe(false);
});

it("should allow body that was stable at trigger time", () => {
// Body was last edited well before the trigger
const contextData = createMockContextData(
"2024-01-15T10:00:00Z",
"2024-01-15T10:30:00Z",
"2024-01-15T10:30:00Z",
);
expect(isBodySafeToUse(contextData, "2024-01-15T12:00:00Z")).toBe(true);
});
});
});

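The implementation of isBodySafeToUse is not shown in this diff; a minimal sketch consistent with the tests above (names and exact structure are assumptions):

type BodyTimestamps = {
  createdAt: string;
  updatedAt?: string;
  lastEditedAt?: string;
};

function isBodySafeToUseSketch(
  data: BodyTimestamps,
  triggerTime?: string,
): boolean {
  if (!triggerTime) return true; // backward compatibility: no trigger time, no check
  const editedAt = data.lastEditedAt ?? data.updatedAt; // lastEditedAt takes priority
  if (!editedAt) return true; // never edited
  // Unsafe if the body was edited at or after the trigger time (the TOCTOU window)
  return new Date(editedAt).getTime() < new Date(triggerTime).getTime();
}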
describe("fetchGitHubData integration with time filtering", () => {
|
||||
it("should filter comments based on trigger time when provided", async () => {
|
||||
const mockOctokits = {
|
||||
@@ -696,4 +873,230 @@ describe("fetchGitHubData integration with time filtering", () => {
|
||||
// All three comments should be included as they're all before trigger time
|
||||
expect(result.comments.length).toBe(3);
|
||||
});
|
||||
|
||||
it("should exclude issue body when edited after trigger time (TOCTOU protection)", async () => {
|
||||
const mockOctokits = {
|
||||
graphql: jest.fn().mockResolvedValue({
|
||||
repository: {
|
||||
issue: {
|
||||
number: 555,
|
||||
title: "Test Issue",
|
||||
body: "Malicious body edited after trigger",
|
||||
author: { login: "attacker" },
|
||||
createdAt: "2024-01-15T10:00:00Z",
|
||||
updatedAt: "2024-01-15T12:30:00Z", // Edited after trigger
|
||||
lastEditedAt: "2024-01-15T12:30:00Z", // Edited after trigger
|
||||
comments: { nodes: [] },
|
||||
},
|
||||
},
|
||||
user: { login: "trigger-user" },
|
||||
}),
|
||||
rest: jest.fn() as any,
|
||||
};
|
||||
|
||||
const result = await fetchGitHubData({
|
||||
octokits: mockOctokits as any,
|
||||
repository: "test-owner/test-repo",
|
||||
prNumber: "555",
|
||||
isPR: false,
|
||||
triggerUsername: "trigger-user",
|
||||
triggerTime: "2024-01-15T12:00:00Z",
|
||||
});
|
||||
|
||||
// The body should be excluded from image processing due to TOCTOU protection
|
||||
// We can verify this by checking that issue_body is NOT in the imageUrlMap keys
|
||||
const hasIssueBodyInMap = Array.from(result.imageUrlMap.keys()).some(
|
||||
(key) => key.includes("issue_body"),
|
||||
);
|
||||
expect(hasIssueBodyInMap).toBe(false);
|
||||
});
|
||||
|
||||
it("should include issue body when not edited after trigger time", async () => {
|
||||
const mockOctokits = {
|
||||
graphql: jest.fn().mockResolvedValue({
|
||||
repository: {
|
||||
issue: {
|
||||
number: 666,
|
||||
title: "Test Issue",
|
||||
body: "Safe body not edited after trigger",
|
||||
author: { login: "author" },
|
||||
createdAt: "2024-01-15T10:00:00Z",
|
||||
updatedAt: "2024-01-15T11:00:00Z", // Edited before trigger
|
||||
lastEditedAt: "2024-01-15T11:00:00Z", // Edited before trigger
|
||||
comments: { nodes: [] },
|
||||
},
|
||||
},
|
||||
user: { login: "trigger-user" },
|
||||
}),
|
||||
rest: jest.fn() as any,
|
||||
};
|
||||
|
||||
const result = await fetchGitHubData({
|
||||
octokits: mockOctokits as any,
|
||||
repository: "test-owner/test-repo",
|
||||
prNumber: "666",
|
||||
isPR: false,
|
||||
triggerUsername: "trigger-user",
|
||||
triggerTime: "2024-01-15T12:00:00Z",
|
||||
});
|
||||
|
||||
// The contextData should still contain the body
|
||||
expect(result.contextData.body).toBe("Safe body not edited after trigger");
|
||||
});
|
||||
|
||||
it("should exclude PR body when edited after trigger time (TOCTOU protection)", async () => {
|
||||
const mockOctokits = {
|
||||
graphql: jest.fn().mockResolvedValue({
|
||||
repository: {
|
||||
pullRequest: {
|
||||
number: 777,
|
||||
title: "Test PR",
|
||||
body: "Malicious PR body edited after trigger",
|
||||
author: { login: "attacker" },
|
||||
baseRefName: "main",
|
||||
headRefName: "feature",
|
||||
headRefOid: "abc123",
|
||||
createdAt: "2024-01-15T10:00:00Z",
|
||||
updatedAt: "2024-01-15T12:30:00Z", // Edited after trigger
|
||||
lastEditedAt: "2024-01-15T12:30:00Z", // Edited after trigger
|
||||
additions: 10,
|
||||
deletions: 5,
|
||||
state: "OPEN",
|
||||
commits: { totalCount: 1, nodes: [] },
|
||||
files: { nodes: [] },
|
||||
comments: { nodes: [] },
|
||||
reviews: { nodes: [] },
|
||||
},
|
||||
},
|
||||
user: { login: "trigger-user" },
|
||||
}),
|
||||
rest: jest.fn() as any,
|
||||
};
|
||||
|
||||
const result = await fetchGitHubData({
|
||||
octokits: mockOctokits as any,
|
||||
repository: "test-owner/test-repo",
|
||||
prNumber: "777",
|
||||
isPR: true,
|
||||
triggerUsername: "trigger-user",
|
||||
triggerTime: "2024-01-15T12:00:00Z",
|
||||
});
|
||||
|
||||
// The body should be excluded from image processing due to TOCTOU protection
|
||||
const hasPrBodyInMap = Array.from(result.imageUrlMap.keys()).some((key) =>
|
||||
key.includes("pr_body"),
|
||||
);
|
||||
expect(hasPrBodyInMap).toBe(false);
|
||||
});
|
||||
|
||||
it("should use originalTitle when provided instead of fetched title", async () => {
|
||||
const mockOctokits = {
|
||||
graphql: jest.fn().mockResolvedValue({
|
||||
repository: {
|
||||
pullRequest: {
|
||||
number: 123,
|
||||
title: "Fetched Title From GraphQL",
|
||||
body: "PR body",
|
||||
author: { login: "author" },
|
||||
createdAt: "2024-01-15T10:00:00Z",
|
||||
additions: 10,
|
||||
deletions: 5,
|
||||
state: "OPEN",
|
||||
commits: { totalCount: 1, nodes: [] },
|
||||
files: { nodes: [] },
|
||||
comments: { nodes: [] },
|
||||
reviews: { nodes: [] },
|
||||
},
|
||||
},
|
||||
user: { login: "trigger-user" },
|
||||
}),
|
||||
rest: jest.fn() as any,
|
||||
};
|
||||
|
||||
const result = await fetchGitHubData({
|
||||
octokits: mockOctokits as any,
|
||||
repository: "test-owner/test-repo",
|
||||
prNumber: "123",
|
||||
isPR: true,
|
||||
triggerUsername: "trigger-user",
|
||||
originalTitle: "Original Title From Webhook",
|
||||
});
|
||||
|
||||
expect(result.contextData.title).toBe("Original Title From Webhook");
|
||||
});
|
||||
|
||||
it("should use fetched title when originalTitle is not provided", async () => {
|
||||
const mockOctokits = {
|
||||
graphql: jest.fn().mockResolvedValue({
|
||||
repository: {
|
||||
pullRequest: {
|
||||
number: 123,
|
||||
title: "Fetched Title From GraphQL",
|
||||
body: "PR body",
|
||||
author: { login: "author" },
|
||||
createdAt: "2024-01-15T10:00:00Z",
|
||||
additions: 10,
|
||||
deletions: 5,
|
||||
state: "OPEN",
|
||||
commits: { totalCount: 1, nodes: [] },
|
||||
files: { nodes: [] },
|
||||
comments: { nodes: [] },
|
||||
reviews: { nodes: [] },
|
||||
},
|
||||
},
|
||||
user: { login: "trigger-user" },
|
||||
}),
|
||||
rest: jest.fn() as any,
|
||||
};
|
||||
|
||||
const result = await fetchGitHubData({
|
||||
octokits: mockOctokits as any,
|
||||
repository: "test-owner/test-repo",
|
||||
prNumber: "123",
|
||||
isPR: true,
|
||||
triggerUsername: "trigger-user",
|
||||
});
|
||||
|
||||
expect(result.contextData.title).toBe("Fetched Title From GraphQL");
|
||||
});
|
||||
|
||||
it("should use original title from webhook even if title was edited after trigger", async () => {
|
||||
const mockOctokits = {
|
||||
graphql: jest.fn().mockResolvedValue({
|
||||
repository: {
|
||||
pullRequest: {
|
||||
number: 123,
|
||||
title: "Edited Title (from GraphQL)",
|
||||
body: "PR body",
|
||||
author: { login: "author" },
|
||||
createdAt: "2024-01-15T10:00:00Z",
|
||||
lastEditedAt: "2024-01-15T12:30:00Z", // Edited after trigger
|
||||
additions: 10,
|
||||
deletions: 5,
|
||||
state: "OPEN",
|
||||
commits: { totalCount: 1, nodes: [] },
|
||||
files: { nodes: [] },
|
||||
comments: { nodes: [] },
|
||||
reviews: { nodes: [] },
|
||||
},
|
||||
},
|
||||
user: { login: "trigger-user" },
|
||||
}),
|
||||
rest: jest.fn() as any,
|
||||
};
|
||||
|
||||
const result = await fetchGitHubData({
|
||||
octokits: mockOctokits as any,
|
||||
repository: "test-owner/test-repo",
|
||||
prNumber: "123",
|
||||
isPR: true,
|
||||
triggerUsername: "trigger-user",
|
||||
triggerTime: "2024-01-15T12:00:00Z",
|
||||
originalTitle: "Original Title (from webhook at trigger time)",
|
||||
});
|
||||
|
||||
expect(result.contextData.title).toBe(
|
||||
"Original Title (from webhook at trigger time)",
|
||||
);
|
||||
});
|
||||
});
|
||||
|
||||
@@ -28,6 +28,9 @@ describe("formatContext", () => {
|
||||
additions: 50,
|
||||
deletions: 30,
|
||||
state: "OPEN",
|
||||
labels: {
|
||||
nodes: [],
|
||||
},
|
||||
commits: {
|
||||
totalCount: 3,
|
||||
nodes: [],
|
||||
@@ -63,6 +66,9 @@ Changed Files: 2 files`,
|
||||
author: { login: "test-user" },
|
||||
createdAt: "2023-01-01T00:00:00Z",
|
||||
state: "OPEN",
|
||||
labels: {
|
||||
nodes: [],
|
||||
},
|
||||
comments: {
|
||||
nodes: [],
|
||||
},
|
||||
|
||||
77 test/extract-user-request.test.ts Normal file
@@ -0,0 +1,77 @@
import { describe, test, expect } from "bun:test";
import { extractUserRequest } from "../src/utils/extract-user-request";

describe("extractUserRequest", () => {
test("extracts text after @claude trigger", () => {
expect(extractUserRequest("@claude /review-pr", "@claude")).toBe(
"/review-pr",
);
});

test("extracts slash command with arguments", () => {
expect(
extractUserRequest(
"@claude /review-pr please check the auth module",
"@claude",
),
).toBe("/review-pr please check the auth module");
});

test("handles trigger phrase with extra whitespace", () => {
expect(extractUserRequest("@claude /review-pr", "@claude")).toBe(
"/review-pr",
);
});

test("handles trigger phrase at start of multiline comment", () => {
const comment = `@claude /review-pr
Please review this PR carefully.
Focus on security issues.`;
expect(extractUserRequest(comment, "@claude")).toBe(
`/review-pr
Please review this PR carefully.
Focus on security issues.`,
);
});

test("handles trigger phrase in middle of text", () => {
expect(
extractUserRequest("Hey team, @claude can you review this?", "@claude"),
).toBe("can you review this?");
});

test("returns null for empty comment body", () => {
expect(extractUserRequest("", "@claude")).toBeNull();
});

test("returns null for undefined comment body", () => {
expect(extractUserRequest(undefined, "@claude")).toBeNull();
});

test("returns null when trigger phrase not found", () => {
expect(extractUserRequest("Please review this PR", "@claude")).toBeNull();
});

test("returns null when only trigger phrase with no request", () => {
expect(extractUserRequest("@claude", "@claude")).toBeNull();
});

test("handles custom trigger phrase", () => {
expect(extractUserRequest("/claude help me", "/claude")).toBe("help me");
});

test("handles trigger phrase with special regex characters", () => {
expect(
extractUserRequest("@claude[bot] do something", "@claude[bot]"),
).toBe("do something");
});

test("is case insensitive", () => {
expect(extractUserRequest("@CLAUDE /review-pr", "@claude")).toBe(
"/review-pr",
);
expect(extractUserRequest("@Claude /review-pr", "@claude")).toBe(
"/review-pr",
);
});
});

2 test/fixtures/sample-turns.json vendored
@@ -189,7 +189,7 @@
},
{
"type": "result",
"cost_usd": 0.0347,
"total_cost_usd": 0.0347,
"duration_ms": 18750,
"result": "Successfully removed debug print statement from file and added review comment to document the change."
}