Local LLM
Ollama
```
pacman -S ollama
```
Or install the Ollama variant specific to your hardware: https://wiki.archlinux.org/title/Ollama
In my case, on a Lenovo ThinkPad P16s Gen 4 AMD:
```
yay -S ollama-rocm
```
The GPU should be detected out of the box:
```
time=2026-01-25T14:33:37.605+01:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=ROCm compute=gfx1150 name=ROCm0 description="AMD Radeon 890M Graphics" libdirs=ollama driver=70152.80 pci_id=0000:c4:00.0 type=iGPU total="51.0 GiB" available="48.3 GiB"
```
Alpaca
```
yay -S alpaca-ai
```
python-pptx
For the python-pptx package I had to use this fix.
```
makepkg .
```
python-primp
For the python-primp package I had to use this fix.
I had to install the clang compiler first.
```
sudo pacman -S clang
makepkg .
```
```
sudo pacman -U python-primp-0.15.0-1-x86_64.pkg.tar.zst
```
Install Claude Code
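The two fixes above are variations on the standard manual AUR build flow. A generic sketch, using python-primp from above as the example package (requires the base-devel group and network access, so run it on a real Arch system):

```shell
# Manual AUR build flow (sketch): clone the AUR repo, build, install.
git clone https://aur.archlinux.org/python-primp.git
cd python-primp
makepkg -si   # -s pulls repo dependencies via pacman, -i installs the built package
```

`makepkg -si` combines the build and the `pacman -U` install step shown above into one command.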
```
yay -S claude-code
```
Run the local LLM with Claude Code
How to use open-source language models with Claude Code, an agentic coding tool by Anthropic, via Ollama's Anthropic-compatible API.
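As a quick smoke test of that API, you can send an Anthropic-style request yourself. A minimal sketch, assuming the compatible endpoint mirrors Anthropic's `/v1/messages` path on Ollama's default port 11434; `qwen3` is a placeholder for a model you have already pulled:

```shell
# Anthropic-style Messages request body; "qwen3" is a placeholder model name.
body='{
  "model": "qwen3",
  "max_tokens": 256,
  "messages": [{"role": "user", "content": "Write a hello world in C"}]
}'
echo "$body"

# Send it to the local server (uncomment once `ollama serve` is running):
# curl -s http://localhost:11434/v1/messages \
#   -H 'content-type: application/json' -d "$body"
```

A plain JSON response with the model's reply indicates the server side is working before involving Claude Code at all.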
In one terminal:
```
OLLAMA_CONTEXT_LENGTH=64000 ollama serve
```
In a second terminal:
```
ollama launch claude
```
This launches Claude Code directly with the local LLM of your choice.
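If your Ollama build does not have the `launch` subcommand, Claude Code can typically be pointed at a local endpoint through environment variables instead. This is a sketch, assuming Claude Code honors `ANTHROPIC_BASE_URL`, `ANTHROPIC_AUTH_TOKEN`, and `ANTHROPIC_MODEL`, and with `qwen3` again as a placeholder model name:

```shell
# Hypothetical alternative to `ollama launch claude`: point Claude Code at
# the local Anthropic-compatible endpoint via environment variables.
export ANTHROPIC_BASE_URL=http://localhost:11434
export ANTHROPIC_AUTH_TOKEN=ollama   # placeholder; a local Ollama needs no real key
export ANTHROPIC_MODEL=qwen3         # placeholder model name
claude
```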