Git Commit Analyzer

AI-powered Git plugin that automatically generates meaningful commit messages using local llama.cpp models

Powered by Local AI - No Cloud Required

Generate consistent, meaningful Git commit messages automatically using local llama.cpp inference with GGUF models. All processing happens on your machine - your code never leaves your device.

AI-Powered: Local GGUF models
Git Flow: Standard format
Interactive: Use, edit, or cancel
100% Private: Local processing

Quick Installation

🍺 Homebrew (Recommended)

Fast installation with pre-built binaries for macOS

$ brew tap zh30/tap
$ brew install git-ca
No Rust compilation required
Apple Silicon & Intel support

⚡ One-Click Install

$ bash -c "$(curl -fsSL https://sh.zhanghe.dev/install-git-ca.sh)"

Manual Download

Pre-built binaries are available on the GitHub Releases page.
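
After installing by any of these methods, a quick way to confirm the setup is to run the diagnostics subcommand described under Configuration below:

$ git ca doctor

The GGUF model itself is auto-downloaded, so no separate model setup step is needed (see System Requirements).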

System Requirements

Git (version 2.30+)
GGUF model (auto-downloaded)
macOS: Homebrew (Intel & Apple Silicon)
Windows/Linux: manual installation only
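
To confirm your Git meets the 2.30+ requirement, check the installed version; any output of 2.30 or newer is sufficient:

$ git --version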

Open Source

Fully open source on GitHub. Contributions welcome!

View on GitHub

Key Features

Smart Message Generation

Analyzes staged changes and generates contextual, meaningful commit messages using local GGUF models.

Git Flow Compliant

Follows Git Flow conventions for clean, maintainable commit history.

Interactive Mode

Review the proposed commit message and choose to use, edit, or cancel it before anything is committed.

100% Local & Private

All AI processing happens locally using llama.cpp - no data leaves your device.
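
To illustrate the kind of message the generator aims for, here is a hypothetical type-prefixed, imperative-mood commit message (the prefix style and wording are illustrative assumptions, not captured tool output; the actual format depends on your staged changes and model):

feat: handle expired tokens during session refresh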

How to Use

1. Stage your changes: git add <files>
2. Run the analyzer: git ca
3. Use, edit, or cancel the suggested message
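
A sketch of what a session might look like (the prompt text and generated message are illustrative only; actual output depends on your model and the staged diff):

$ git add src/session.rs
$ git ca
Suggested commit message:
  fix: refresh expired session tokens before retrying
Use, edit, or cancel?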

Configuration

git ca model      Select the model
git ca language   Choose the message language (EN/ZH)
git ca doctor     Run diagnostics
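
A typical first-run setup using the subcommands above (the order is only a suggestion):

$ git ca model      # pick the GGUF model to use
$ git ca language   # choose English or Chinese messages
$ git ca doctor     # confirm everything is configured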

Why Choose Git Commit Analyzer?

AI-Powered Intelligence

Leverages local llama.cpp with GGUF models for contextual understanding.

Consistent History

Ensures clean, maintainable commit history following Git Flow standards.

Developer Privacy

Complete privacy - no external servers or API calls.

Privacy & Security

Local Processing: All AI inference happens locally using llama.cpp with GGUF models.
No Cloud: Your code and commit messages never leave your device. Zero external API calls.
Open Source: The entire codebase is open source and auditable on GitHub.
No Telemetry: We do not collect any usage data or analytics from the CLI tool.

Frequently Asked Questions