run.sh
Keeping track of CLI commands in software projects
Code repositories usually offer a certain set of operations for developers to carry out on the command line: for example running the tests, fetching dependencies, building the project, checking code style, or seeding a local database with mock data.
From a developer’s perspective, such CLI commands are the main entrypoints for interacting with a project. (That is, besides browsing the code.) In that sense, they can be perceived as some sort of development API of the repository.
The available tasks are usually well-defined, and they likely even need to be configured in a certain, project-specific way for them to work properly. Therefore, it’s common to somehow keep track of the commands in the repository, to ensure that all developers are able to use them in the same fashion.
In this post, I want to discuss a few approaches that I have tried myself over time. Eventually, I came up with a pattern that I have been using for a few years now, and which has worked quite well for me, especially in small to mid-sized projects. I share it in the second part of this article.
Makefiles, npm scripts & Co.
The most straightforward approach is to document and describe operations in a README file. While this is the most flexible way, it can be inconvenient to have to copy and paste chunks of commands from a text file to the terminal. Configuring and supplying runtime parameters needs to be done manually, which can be repetitive and error-prone.
To avoid the copy-and-paste hassle, the commands can be stored in individual shell scripts.1 These can either be located in the project root, or they can be stashed away in a dedicated folder such as scripts/. While the former can become quite messy, the latter is less discoverable. Having one file per task is generally less concise, and it introduces overhead when it comes to sharing code between the files.
With JavaScript projects, it’s common to define npm scripts in package.json. The popularity of this technique clearly shows that there is a need for storing tasks in a structured way. However, cramming CLI commands into a single-line JSON string isn’t particularly nice to work with. Outside of the JavaScript world, you also need to find a different solution.
A more universal approach is to use a Makefile, which allows you to define procedures in a tidy and structured fashion. However, GNU Make is originally a build tool, not a task runner, so there are some quirks that you may run into. It might also not occur to other people to inspect a Makefile when looking for tasks.
The creators of just were inspired by the Makefile concept and adopted it for task management. While justfiles look sophisticated and powerful, the idea of inventing a whole new language only for the purpose of task definitions feels a bit too heavy for my taste. Using it also requires the installation of a separate tool.
Back to the shell
The longer I’ve been thinking about solutions to this problem, the more I realised that there are mainly three boxes I need to tick:
- Pure shell script: when it comes to storing shell commands, a shell script seems like the most obvious solution. Besides being universal, shell script has everything needed to get the job done: variables, loops, conditionals, I/O, file paths, subprocesses. It might not be the nicest language on earth – however, if you work on the CLI, then you are already knee-deep in the shell script realm anyway. It’s also not that terrible, although you certainly have to learn it just like every other language.
- Single file at project root: to promote discoverability and to encourage simplicity, I think the best option is having one file at project root. If a single file becomes too convoluted, you can still split it up and extract subroutines to other files. (Or, if the setup becomes too complex, it might be justified to pull in more specialised tools. There is no one-size-fits-all solution after all.)
- No tool enforcement: providing project commands is important enough that it should be accessible to developers without requiring dedicated tooling. That way, the tasks can also be re-used in other contexts more easily, such as on the CI platform, or in the production environment.
The way to make all this happen neither requires a new tool nor a dedicated file format. All it needs is a shell script that follows a certain convention. What I came up with is a file called run.sh that lives at the root of the repository.2 It contains the available tasks, which are defined as shell functions.
For a NodeJS project, a run.sh could look like so:
# Start web server
run::server() {
    node src/index.js \
        --port=8001 \
        --log=DEBUG
}

# Execute unit tests
run::test() {
    ./node_modules/.bin/mocha \
        --recursive \
        --parallel \
        src/**/*.spec.js
}
There are two rules for run.sh files:
- Tasks are defined in shell functions whose names start with run::. That way, they can be recognised by both humans and tools.
- The (optional) commentary above a task is supposed to be the help text of the respective task.
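Because of the naming rule, tasks can be discovered with standard text tools alone. Here is a small, self-contained sketch of that idea (the temporary file path is only for demonstration):

```shell
# Write a miniature run.sh to a temporary path, purely for demonstration.
cat > /tmp/demo-run.sh <<'EOF'
# Start web server
run::server() {
  node src/index.js --port=8001
}

# Execute unit tests
run::test() {
  ./node_modules/.bin/mocha --recursive src/**/*.spec.js
}
EOF

# Extract the task names: find function definitions whose
# names start with "run::", then strip the prefix.
grep -oE '^run::[a-zA-Z0-9_-]+' /tmp/demo-run.sh | sed 's/^run:://'
```

This prints `server` and `test`, one per line – exactly the kind of structured access that a plain README or a scattered scripts/ folder cannot offer.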
Apart from that, the run.sh file is just an ordinary shell script. All regular shell features like variables, conditionals, or subroutines can be used without restriction. Additional input arguments are passed through to the respective task.
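The pass-through mechanism can be illustrated with a small dispatcher sketch. This is a simplification of my own, not the actual implementation of any tool, and it assumes a shell such as Bash that accepts :: in function names:

```shell
#!/usr/bin/env bash

# A sample task that accepts an argument (made up for illustration).
# Print a greeting
run::greet() {
  echo "hello, $1"
}

# Minimal dispatcher: the first argument selects the task,
# all remaining arguments are forwarded to the shell function.
dispatch() {
  local task="$1"
  shift
  "run::${task}" "$@"
}

dispatch greet "world"   # prints: hello, world
```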
Utility tool
Such a file convention is neither novel nor ground-breaking in itself. It’s really more of a pattern than anything else. However, sticking to a common format makes it easier to work with, and it also allows for utility tools that improve the developer experience. I created a small tool (GitHub / Docs) that helps to explore run.sh tasks, and that makes it easier to invoke them.
For the above run.sh file, you could use it like so:3
$ run --list
server    Start web server
test      Execute unit tests

$ run test
In order to see real-life examples of run.sh files, you can either find one in my time-tracking tool “klog” (a mid-sized Go CLI application), or in the run.sh repository itself.
A tool is not mandatory, though. Since the format is self-documenting and meaningful on its own, the tasks can still be found and copied manually. Alternatively, the run.sh file can be sourced, and the tasks can then be invoked via their canonical names. I find that important, because it allows re-using the configurations of test or bootstrap procedures in the CI or production environment.
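As a sketch of the sourcing approach (the file path and the run::seed task are made up for illustration; a shell like Bash is assumed):

```shell
#!/usr/bin/env bash

# Create an example run.sh in a temporary location.
cat > /tmp/example-run.sh <<'EOF'
# Seed the local database
run::seed() {
  echo "seeding database on port ${1:-5432}"
}
EOF

# Source the file, then invoke a task by its canonical name --
# no extra tooling involved, which is why this also works in CI.
. /tmp/example-run.sh
run::seed 5433   # prints: seeding database on port 5433
```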
-
I use the term “shell script” in a generic sense here, and do not refer only to the Bourne Shell, for example. ↩︎
-
Two years ago, I already dabbled with the idea of such run scripts. ↩︎
-
This assumes that the run.sh is in the current working directory. You could otherwise reference one via the --file flag. ↩︎