Stop Guessing Which AI Model Your Computer Can Run — Use LLMChecker

This free CLI tool scans your hardware in seconds and tells you exactly which local LLMs will actually run well on your machine.

Ricardo Rosero

You want to run AI locally. You open the Ollama model library. There are hundreds of options — llama3, mistral, qwen, gemma, deepseek — each available in multiple sizes and quantizations. You have absolutely no idea which one is right for your machine. So you pick something, wait 20 minutes for it to download, and either watch it crawl at 2 tokens per second, or watch it crash entirely. Sound familiar?

The Short Answer

LLMChecker is a free, open-source CLI tool that solves this problem in two commands. It scans your actual hardware — GPU, VRAM, RAM, CPU — and returns a ranked list of the models you can run, with exact install commands and a score explaining why. No more guessing. No more wasted downloads.

How It Works

Install it globally with one command:

npm install -g llm-checker

Then run the hardware check:

llm-checker check

Within seconds, LLMChecker has detected your system specs, assigned you a hardware tier (HIGH, MEDIUM-HIGH, MEDIUM, or LOW), and scored over 200 models from the Ollama library across four dimensions: Quality, Speed, Fit, and Context. The output is a ranked list — best model first — with a direct ollama pull command ready to copy.
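To make the four-dimension scoring concrete, here is a minimal TypeScript sketch of how a weighted score across Quality, Speed, Fit, and Context might be blended into a single rank. The weights and the scoring function are illustrative assumptions, not LLMChecker's actual algorithm.

```typescript
// Hypothetical four-dimension score, each dimension on a 0-100 scale.
// The weights below are illustrative; LLMChecker's real formula may differ.
type Scores = { quality: number; speed: number; fit: number; context: number };

function overallScore(
  s: Scores,
  weights = { quality: 0.35, speed: 0.25, fit: 0.25, context: 0.15 }
): number {
  return (
    s.quality * weights.quality +
    s.speed * weights.speed +
    s.fit * weights.fit +
    s.context * weights.context
  );
}

// A model that fits comfortably in VRAM scores high on "fit";
// one forced to spill into system RAM scores low and runs slowly.
const example: Scores = { quality: 80, speed: 70, fit: 90, context: 60 };
console.log(overallScore(example));
```

The key design idea is that no single dimension dominates: a huge model with great quality but a terrible fit for your VRAM should rank below a smaller model that actually runs fast on your hardware.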

You can also go deeper. The recommend command gives category-specific suggestions:

llm-checker recommend --category coding

This tells you the best coding model for your specific machine — not a generic internet recommendation written for someone with a different GPU from yours.
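Conceptually, a category-aware recommendation boils down to filtering the catalog by task and by what fits in your VRAM, then taking the top-scoring match. The sketch below illustrates that idea; the model names, sizes, and scores are placeholder data, not LLMChecker's actual catalog.

```typescript
// Hypothetical category-aware recommendation: filter by category tag and
// available VRAM, then return the highest-scoring model that fits.
type Model = { name: string; category: string; vramGB: number; score: number };

// Placeholder catalog entries for illustration only.
const catalog: Model[] = [
  { name: "qwen2.5-coder:7b", category: "coding", vramGB: 5, score: 82 },
  { name: "deepseek-coder-v2:16b", category: "coding", vramGB: 10, score: 88 },
  { name: "llama3:8b", category: "general", vramGB: 6, score: 80 },
];

function recommend(category: string, availableVramGB: number): Model | undefined {
  return catalog
    .filter((m) => m.category === category && m.vramGB <= availableVramGB)
    .sort((a, b) => b.score - a.score)[0];
}

// On an 8 GB GPU only the 7B coder fits; with 12 GB, the 16B model wins.
console.log(recommend("coding", 8)?.name);
console.log(recommend("coding", 12)?.name);
```

This is why the same `recommend --category coding` query can return different answers on different machines: the hardware constraint prunes the candidate list before the ranking ever happens.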

Why This Matters

The local AI space has exploded. Running models on your own hardware is now genuinely feasible for a huge range of machines — not just high-end rigs. But the tooling around model selection hasn’t kept up. Most people still rely on Reddit posts, YouTube comments, or trial and error to figure out what runs on their hardware. LLMChecker brings a principled, data-driven answer to a question that has been frustratingly hand-wavy for too long.

It also integrates directly with Claude Code via MCP, meaning you can connect it to your AI-powered development workflow — letting Claude itself analyze your hardware and suggest the right model for whatever codebase you’re working on.

Closing

Local AI is finally accessible enough that the bottleneck is no longer technical capability — it’s just knowing where to start. LLMChecker removes that friction entirely. Two commands, a few seconds, and you know exactly what your machine can do. The question now isn’t whether you can run a local model. It’s which one you want to run first.

Find the tool here: github.com/Pavelevich/llm-checker
