Recently,
I've been thinking about the idea of making my own Make-like tool.
If you're reading this blog,
you're almost certainly already familiar with make on Unix and Windows systems.
If I end up writing such a tool,
it will be purely for educational purposes,
and/or it will find use within the Kestrel-2/EX environment
(at least until more conventional tools find their way over).
If I do this, I think Redo would be a good model to follow.
It is simple enough to be implemented in regular
Unix shell script.
The Kestrel-2/EX lacks a Unix command-line interface;
however, whatever CLI we end up with,
redo should be simple enough to implement there,
providing make-like functionality without the overall complexity of Make.
All I need is the ability to run programs with parameters
and to return result codes, something even MS-DOS could do.
Redo does have a definite shortcoming, however:
how does one handle targets that yield multiple outputs?
For example,
let's suppose we have an outline font description
that is used to generate multiple bitmapped fonts.
Apparently, one solution to this is to use virtual targets instead of file targets.
However, we lose the ability to place a dependency on such a virtual target:
since the target file never actually exists,
by redo's own logic, redo will always attempt to rebuild it.
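As a sketch of that workaround, a virtual target is just a .do script whose name doesn't correspond to any file it produces. The script and tool names here (fonts.do, outline.font, fontgen) are hypothetical, invented for this example:

```shell
# fonts.do -- a virtual target: no file named "fonts" is ever produced.
# "outline.font" and "fontgen" are hypothetical names for illustration.
redo-ifchange outline.font
fontgen outline.font   # emits several bitmapped fonts as side effects
```

Running `redo fonts` always executes this script, since redo never finds a file named "fonts" to consider up to date.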
Even so, gaining 80% of Make's functionality for 20% of its implementation complexity
makes this a worthwhile tradeoff.
The crux of redo is the .do script files.
Each target has a corresponding .do file.
So, for example, if I have a project with two C source files and one output binary,
I'd have an example.do file that looked something like:
    redo-ifchange component1.o component2.o
    gcc -o $3 component1.o component2.o
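The object files themselves would each need a .do script as well; apenwarr's redo lets a single default.o.do pattern rule cover all of them. A sketch, assuming the usual argument convention ($1 is the target name, $2 is the target minus its extension, $3 is a temporary output file that redo renames into place on success):

```shell
# default.o.do -- pattern rule used when no component1.o.do etc. exists.
# Depend on the matching .c file, then compile into redo's temp output.
redo-ifchange "$2.c"
gcc -c -o "$3" "$2.c"
```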
Basically, redo-ifchange attempts to make sure that
component1.o and component2.o
are currently up to date.
If they are, great; otherwise, it will attempt to build them.
Only after the completion of the redo-ifchange statement
will gcc be invoked to link everything together.
Of course, this is just a shell script;
you can have as many redo-ifchange
(or even redo-ifcreate, but I'll ignore that for now)
statements as required.
For example,
you can have a dependency on a list of dependencies, like this:
    redo-ifchange my-deps
    redo-ifchange `cat my-deps`
    # ...etc...
It is almost as if there is an implied if-statement:
    redo-ifchange my-deps
    IF changes were made THEN
        redo-ifchange `cat my-deps`
        IF changes were made THEN
            # ...etc...
        END
    END
So, basically, if the dependencies don't need to be rebuilt and the target already exists, then we can safely assume it too is already up to date.
Most redo implementations use a hidden database to keep track of this meta-information
(the
apenwarr implementation
tucks all this into a hidden .redo subdirectory),
with the proviso that missing metadata is treated the same as if changes were made.
This allows redo to rebuild the affected targets, re-recording their metadata,
if the database ever goes missing or gets corrupted.
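As a minimal sketch of that bookkeeping, here is an mtime-based up-to-date check backed by a flat .redo/ directory of stamp files. The layout and names are illustrative only, not the actual apenwarr on-disk format:

```shell
#!/bin/sh
# Illustrative sketch: one recorded mtime per dependency, stored in .redo/.

# Is $1 up to date according to our recorded metadata?
uptodate() {
    stamp=".redo/$1.stamp"
    # Missing metadata is treated the same as "changes were made":
    [ -f "$stamp" ] || return 1
    [ "$(cat "$stamp")" = "$(stat -c %Y "$1")" ]  # GNU stat; BSD uses stat -f %m
}

# Record $1's current mtime after a successful build.
record() {
    mkdir -p .redo
    stat -c %Y "$1" > ".redo/$1.stamp"
}
```

With this scheme, deleting .redo simply makes every target look changed on the next run, which is exactly the rebuild-on-missing-metadata behavior described above.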
NOTE: Most implementations of redo use file hashing to detect changes to a source file.
However, I'm still a fan of using last-modified time metadata.
It's not that hashes are problematic; rather,
there is a huge convenience
in being able to touch some_source_file and re-run redo
to force a partial rebuild of a project, instead of rebuilding the whole thing.
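A small standalone demo of why that works: touch updates a file's mtime without changing its contents, so an mtime-based redo sees a change while a hash-based one does not. The file name here is illustrative:

```shell
#!/bin/sh
# Create a source file and backdate it, as if it were built long ago.
echo 'int main(void){return 0;}' > demo.c
touch -t 202001010000 demo.c
before=$(stat -c %Y demo.c)        # GNU stat; BSD uses stat -f %m
hash_before=$(cksum < demo.c)

touch demo.c                       # contents unchanged, mtime bumped to now

after=$(stat -c %Y demo.c)
hash_after=$(cksum < demo.c)
# An mtime-based redo would rebuild demo.c's dependents; a hash-based one would not:
[ "$before" != "$after" ] && echo "mtime changed: rebuild"
[ "$hash_before" = "$hash_after" ] && echo "content hash unchanged: skip"
rm -f demo.c
```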