Ollama

pacman -S ollama

Or install the Ollama package matching your hardware: https://wiki.archlinux.org/title/Ollama

In my case, on a Lenovo ThinkPad P16s Gen 4 AMD:

yay -S ollama-rocm

The GPU should be detected out of the box:

time=2026-01-25T14:33:37.605+01:00 level=INFO source=types.go:42 msg="inference compute" id=0 filter_id=0 library=ROCm compute=gfx1150 name=ROCm0 description="AMD Radeon 890M Graphics" libdirs=ollama driver=70152.80 pci_id=0000:c4:00.0 type=iGPU total="51.0 GiB" available="48.3 GiB"
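To confirm that inference actually runs on the iGPU, you can load a model and check where it ends up (a quick sanity check; the model name below is just an example, use any model you have pulled):

```shell
# Run any small model once, then check where it is loaded.
ollama run llama3.2 "Say hello"
# The PROCESSOR column should report "100% GPU" when ROCm offload works.
ollama ps
```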

Alpaca

yay -S alpaca-ai

python-pptx

For the python-pptx package I had to use this fix.

makepkg

python-primp

For the python-primp package I had to use this fix.

I had to install the clang compiler first.

sudo pacman -S clang
makepkg
sudo pacman -U python-primp-0.15.0-1-x86_64.pkg.tar.zst

Install Claude Code

yay -S claude-code

Run the local LLM with Claude Code

You can use open-source language models with Claude Code, Anthropic's agentic coding tool, via Ollama's Anthropic-compatible API.
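You can also exercise the local endpoint directly with curl. This is a hedged sketch: the request body and header follow Anthropic's public Messages API, and it is an assumption here that Ollama's compatible endpoint mirrors the /v1/messages path; "qwen3" is a placeholder for any locally pulled model.

```shell
# Assumption: Ollama exposes an Anthropic-style /v1/messages endpoint on its default port.
curl http://localhost:11434/v1/messages \
  -H "content-type: application/json" \
  -d '{
    "model": "qwen3",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Say hello"}]
  }'
```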

In one terminal:

OLLAMA_CONTEXT_LENGTH=64000 ollama serve
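If you run Ollama as a systemd service instead of a foreground `ollama serve`, the context length can be set persistently with a drop-in (a sketch, assuming the Arch service is named `ollama`):

```shell
sudo systemctl edit ollama
# In the drop-in file, add:
#   [Service]
#   Environment="OLLAMA_CONTEXT_LENGTH=64000"
sudo systemctl restart ollama
```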

In a second terminal:

ollama launch claude

This launches Claude Code directly with a local LLM of your choice.
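Alternatively, Claude Code can be pointed at the local server by hand via environment variables. A sketch: ANTHROPIC_BASE_URL and ANTHROPIC_MODEL are documented Claude Code settings, but whether the local endpoint accepts them this way is an assumption, and "qwen3" is a placeholder for any locally pulled model.

```shell
export ANTHROPIC_BASE_URL=http://localhost:11434   # local Ollama instead of Anthropic's API
export ANTHROPIC_MODEL=qwen3                       # placeholder; any locally pulled model
claude
```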

Visual Studio Code with Claude Code