Starship Prompt in Claude Code's Status Line

Claude Code has a status line at the bottom of the terminal that shows you the current model and context usage. It’s useful, but it doesn’t show you where you are in the filesystem or what git branch you’re on. I already have Starship configured with all that information for my shell prompt, so I figured: why not reuse it?

Claude Code lets you set a custom status line command in ~/.claude/settings.json. The command receives a JSON blob on stdin with the model name, working directory, and context window usage, and whatever you print to stdout becomes the status line.
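As a sketch of what that input looks like (the field names are the ones the script below extracts; the values here are made up, and the real blob carries more keys than shown):

```shell
# Simulate the JSON Claude Code pipes to the status line command
# (hypothetical values; field names match what the script extracts).
input='{"model":{"display_name":"Opus"},"workspace":{"current_dir":"/tmp"},"context_window":{"used_percentage":18.5}}'

echo "$input" | jq -r '.model.display_name'                    # model name
echo "$input" | jq -r '.workspace.current_dir'                 # working directory
echo "$input" | jq -r '.context_window.used_percentage // 0' \
  | cut -d. -f1                                                # integer percent
```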

The trick is getting starship to output something Claude Code can actually render. Starship detects which shell it’s running in and wraps ANSI color codes with shell-specific escape sequences - %{...%} for zsh, \[...\] for bash. Claude Code isn’t a shell, so those sequences show up as literal garbage:

[Opus 4.6] | ctx:18% | %{%}~/p/dot-files%{%} on %{%} %{%}main%{%} %{%}❯%{%}

The fix: set STARSHIP_SHELL=plain. There’s no actual “plain” shell type in starship’s code (I checked - it has bash, fish, zsh, etc.), but unrecognized values fall through to an “unknown” handler that outputs raw ANSI codes without any shell wrapping.

I also didn’t want the prompt character in a status line, so I point starship at a separate config that disables the character and line_break modules:

# claude-starship.toml
[character]
disabled = true

[line_break]
disabled = true

Here’s the script:

#!/bin/bash
SCRIPT_DIR=$(cd "$(dirname "$0")" && pwd)

if [ "$1" = "install" ]; then
    SETTINGS="$HOME/.claude/settings.json"
    COMMAND="$SCRIPT_DIR/claude-statusline.sh"
    mkdir -p "$HOME/.claude"
    if [ -f "$SETTINGS" ]; then
        jq --arg cmd "$COMMAND" \
          '.statusLine = {"type": "command", "command": $cmd}' \
          "$SETTINGS" > "$SETTINGS.tmp" \
            && mv "$SETTINGS.tmp" "$SETTINGS"
    else
        jq -n --arg cmd "$COMMAND" \
          '{"statusLine": {"type": "command", "command": $cmd}}' \
          > "$SETTINGS"
    fi
    echo "Installed. Restart Claude Code to apply."
    exit 0
fi

input=$(cat)
MODEL=$(echo "$input" | jq -r '.model.display_name')
CURRENT_DIR=$(echo "$input" | jq -r '.workspace.current_dir')
CONTEXT_PERCENT=$(echo "$input" | jq -r \
  '.context_window.used_percentage // 0' | cut -d. -f1)

YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'

STARSHIP_PROMPT=""
if command -v starship &>/dev/null; then
    STARSHIP_PROMPT=$(cd "$CURRENT_DIR" 2>/dev/null \
      && STARSHIP_SHELL=plain \
         STARSHIP_CONFIG="$SCRIPT_DIR/claude-starship.toml" \
         starship prompt --terminal-width=80 2>/dev/null \
      | tr -d '\n')
fi

printf "${BLUE}[%s]${NC} | ${YELLOW}ctx:%s%%${NC} | %s\n" \
  "$MODEL" "$CONTEXT_PERCENT" "$STARSHIP_PROMPT"

Drop the script and claude-starship.toml in your dotfiles, then run claude-statusline.sh install. It uses jq to merge the statusLine config into your existing ~/.claude/settings.json without clobbering anything else.
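For reference, after install your settings.json ends up with an entry shaped like this (the path is illustrative; the shape comes straight from the jq expressions in the script):

```json
{
  "statusLine": {
    "type": "command",
    "command": "/Users/you/dot-files/claude-statusline.sh"
  }
}
```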

Now my status line shows the model, context usage, current directory, and git branch - all rendered by my existing starship config. And because it’s in my private dotfiles repo, install works on any machine.

#claude-code #llm

Chrome DevTools CLI for Claude Code

I’ve been using Claude Code to build many wondrous (but often half-working) things. Part of that is getting it to test on its own in a web browser. The official Chrome DevTools MCP kept freezing up while I was using it with a Meta Quest 3, and when it did work, asking for console logs would return massive amounts of data that filled my context (are there more dreaded words than “Compacting Conversation…”?).

Chrome DevTools Protocol (CDP) is Chrome’s way of letting external programs control the browser - taking screenshots, evaluating JavaScript, monitoring network traffic. Most CDP tools are libraries meant to be imported into code, but Claude Code needs a CLI.

I built @myerscarpenter/cdp-cli. It outputs NDJSON (newline-delimited JSON) - one complete JSON object per line, making it grep-compatible and easy to parse.
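Because every line is a complete JSON object, ordinary line-oriented tools compose with it. A quick sketch with made-up log records (the real cdp-cli output has its own field names):

```shell
# Two fake NDJSON records, one JSON object per line (illustrative fields).
ndjson='{"level":"error","text":"boom"}
{"level":"log","text":"page loaded"}'

# Filter with grep -- each matching line is still valid JSON...
echo "$ndjson" | grep '"error"'

# ...or parse structurally with jq, which handles a stream of objects.
echo "$ndjson" | jq -r 'select(.level == "error") | .text'    # boom
```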

The key feature is the console command. By default it outputs bare JSON strings and shows only the last 10 messages:

cdp-cli console "GitHub"
"Page loaded"
"API call successful"

This saves tokens. When you need more detail, flags like --verbose, --tail 50, or --all give you control over how much data comes back. When truncated, it warns on stderr so Claude Code knows there’s more available.

I’m also growing to prefer CLIs over MCPs, since you can see exactly what they’re doing.

Chrome needs to be running with remote debugging enabled:

/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
  --remote-debugging-port=9222

Or you could ask Claude to port forward 9222 from your Quest with adb.

Then install and use:

npm install -g @myerscarpenter/cdp-cli
cdp-cli tabs
cdp-cli console "GitHub" --tail 5
cdp-cli screenshot "GitHub" --output screenshot.jpg

Commands cover page management (tabs, new, go, close), debugging (console, snapshot, eval, screenshot), network inspection, and input automation (click, fill, key).

See it on GitHub and npm.

#llm #chrome #meta-quest #claude-code

WebXR + WebGPU on Quest 3 via Virtual Desktop

I saw this post with a demo link in it, and wondered: can I use my Quest 3 with Virtual Desktop to try this WebGPU + WebXR demo out?

WebGPU is the modern way to tap into your computer’s graphics card (GPU) directly from a web browser. Before WebGPU, browsers used an older technology called WebGL, which worked but was kind of showing its age. WebGPU is faster, more efficient, and gives developers way more control over how they use the GPU.

WebXR lets you experience virtual reality (VR) and augmented reality (AR) directly through your web browser. The “XR” stands for “extended reality,” which is just a catch-all term for both VR (where you’re fully immersed in a virtual world) and AR (where digital stuff gets overlaid on the real world). So if you’ve got a VR headset like a Meta Quest, WebXR lets websites tap into that hardware and create immersive experiences.

The Chrome team is working towards letting WebGPU power WebXR, but it’s not in the native Quest Browser.

I couldn’t get a good answer from Claude Research about whether it would work, and if not, why not. The answer is: it does work.

  1. Buy Virtual Desktop.
  2. In Chrome, go to chrome://flags and enable any flags related to WebGPU.
  3. Put on your Quest 3 and connect to your desktop via Virtual Desktop.
  4. Go to https://toji.github.io/webgpu-metaballs/
  5. In the panel on the right, open the WebXR section.
  6. Enjoy the lava.

I tried a bunch of the three.js examples and they don’t all work correctly.

#webxr #webgpu

Practical Deep Learning: Lesson 2: Is it a Hotdog? meets the Internet

Practical Deep Learning for Coders Lesson 2: Let’s share this with the world.

Takeaways from the video:

  • make a model, then clean the data. The initial model you create will help you find the data that doesn’t seem to fit its generalization hypothesis.

I have shipped even more ML code: Hotdog or Not?

my first model deployed on Hugging Face 🤗

Hugging Face 🤗 is darn slick. Your project is built into a Docker image and then launched as needed. GitHub could learn a thing or two about showing status. There were some errors because the fastai API changed to no longer require wrapping an image you want to predict in a PILImage, plus another problem with adding example images, which I solved by just removing them.

#machine-learning

Practical Deep Learning: Lesson 1: Is it a Hotdog?

Practical Deep Learning for Coders Lesson 1

The biggest change since I last took a course on Machine Learning is one of the key points of this course: the use of foundational models that you fine tune to get great results. In this lesson’s video we fine tune an image classifier to see if a picture has a bird in it.

While building my own model I attempted to fine-tune the classifier to look at comic book covers and tell me which publisher each was from. I thought that with the publisher’s mark on 100 issues from Marvel, DC, Dark Horse, and Image, the classifier would be able to tell. The best I was able to do was about a 30% error rate, a far cry from the 0% in the example models. I tried a few different ideas for improving it:

  • train with larger images. The notebook used in the video makes the training go faster by reducing the size of the image. As I write this I wonder if it is even possible to use larger images in a model that might have been trained on a fixed size.
  • create a smaller image by getting the 4 corners of the cover into one image.
  • clean the data so that all the covers in the dataset had a publisher mark on them.
[Image: sample cover from X-Men Red #2 with only the corners]

Nothing moved the needle. I’m hoping something I learn later in the course will give me the insight I need to do better.

I took a second attempt with a simpler project. Of course I remembered that Silicon Valley episode with the hot dog detector, and made a hot dog vs. hamburger classifier. It works great. The next lesson covers getting a model like that into production, so hang tight.

#machine-learning