source: simon willison: using llm in the shebang line of a script
level: technical
simon willison investigated ways to use his llm command-line tool in a shebang line, inspired by a hacker news comment. the simplest approach uses llm fragments: a shebang of #!/usr/bin/env -S llm -f (env's -S flag splits the string into separate arguments) followed by a prompt in the body of the file, such as 'generate an svg of a pelican riding a bicycle'. because the kernel appends the script's own path as a final argument, llm reads the whole file as a fragment, turning a plain text file into an executable script that prints the model's response.
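the mechanism above can be sketched as follows. this is a hedged sketch: it assumes gnu coreutils env (whose -S flag splits the shebang argument string) and the llm cli installed with a configured model; the filename pelican.txt is illustrative, not from the post.

```shell
# write an executable prompt file. env's -S splits "llm -f" into
# separate arguments, and the kernel appends the script's own path,
# so executing the file effectively runs: llm -f ./pelican.txt
cat > pelican.txt <<'EOF'
#!/usr/bin/env -S llm -f
Generate an SVG of a pelican riding a bicycle
EOF
chmod +x pelican.txt

# ./pelican.txt now sends the whole file to the default model as a
# fragment and prints the response (requires an api key configured)
```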
the technique supports tool calls via the -T option. for example, a shebang of #!/usr/bin/env -S llm -T llm_time -f above the prompt 'write a haiku that mentions the exact current time' produces a script in which the model calls the llm_time tool to fetch the current time and works it into the haiku. this lets scripts invoke external tools directly from the shebang.
more advanced usage relies on yaml templates that define custom tools as python functions. a shebang like #!/usr/bin/env -S llm -t can point to a template specifying a model, a system prompt, and functions such as add and multiply. running the script with a query like 'what is 2344 * 5252 + 134' triggers the tool calls, and the --td (tools debug) flag shows each call and its result. willison also links to a longer example that uses the datasette sql api to answer questions about his blog's content.
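the template setup might look like the sketch below. the model name, file names, and function bodies are illustrative assumptions, as is passing a relative path to -t; it assumes llm's support for yaml templates with a functions block of python source.

```shell
# a yaml template defining a model, system prompt, and python tool
# functions (model name and function bodies are illustrative)
cat > calc.yaml <<'EOF'
model: gpt-4.1-mini
system: answer arithmetic questions using the provided tools
functions: |
  def add(a: float, b: float) -> float:
      return a + b

  def multiply(a: float, b: float) -> float:
      return a * b
EOF

# a script whose shebang points llm at that template; the query is
# passed as a command-line argument when the script is run
cat > calc.txt <<'EOF'
#!/usr/bin/env -S llm -t ./calc.yaml -f
EOF
chmod +x calc.txt

# usage (requires the llm cli and an api key):
#   ./calc.txt 'What is 2344 * 5252 + 134'
#   ./calc.txt 'What is 2344 * 5252 + 134' --td   # show tool-call debug
```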
why it matters: this pattern lets developers create self-contained, executable scripts that use llms for content generation or data tasks, simplifying automation and integration of ai into command-line workflows.