Friday, April 21, 2017

Quick Kotlin idiom: naming things

Kotlin is very much a better Java. A great example is this simple idiom for naming things:

open class Named<in T>(val name: String, check: (T) -> Boolean)
    : (T) -> Boolean by check {
    override fun toString() = name
}

So I can write something like this:

fun maybe_three(check: (Int) -> Boolean) {
    if (check(3)) do_the_three_thing()
    else println("\"$check\" says, Do not pass go.")
}

maybe_three(Named("Is it three?") { i -> 3 == i })
maybe_three(Named("Is it four?") { i -> 4 == i })

The first call of maybe_three prints: Breakfast, lunch, dinner. The second call prints: "Is it four?" says, Do not pass go.

Many variations on this are possible, and not just for functions. What makes the example work nicely is delegation, the magical by keyword: the general trick of naming things comes from overriding toString(), and for the function delegated to, Kotlin's elegant trailing-lambda syntax keeps call sites clean. You can delegate anything, not just functions, so you could make named maps, named business objects, and so on, by delegating to existing types without needing to change them.

Thursday, April 13, 2017

WSL, first experiences

I first tried Windows Subsystem for Linux in December 2016, but could not get it installed successfully, so I held off.

After getting the Windows 10 Creators Edition update, I saw how much work and love went into improving WSL, and decided to try again. I was rewarded.

On the whole, the installation was smooth, brief, you might even say trivial. There were Windows reboots to enable Developer Mode, and again after installing WSL, but much solid effort has gone into making Windows reboots quick and painless, and with a regular Linux distro I'd have rebooted after upgrading anyhow, so no disgruntlement.

And what did I get for my efforts? WSL bash is bash. Just bash. Really, it is just plain old bash, with all the command line tools I've grown accustomed to over 30 years. The best praise I can give a tool: It just works. And WSL just works. (But see Almost there, below.)

Out of the box WSL runs Ubuntu 16.04 (Xenial), the official LTS distribution (long-term support). This is a sane choice for Microsoft. It's stable, reliable, secure, tested, trusted. For anyone wanting a working Linux command line, this is a go-to choice. Still, I updated it.

Things I changed

Even with all the goodness, there were some things I had to change:

The terminal
I immediately installed Mintty for WSL. I've grown to love Mintty on Cygwin, trusting it as a reliable and featureful terminal emulator without going overboard. It's a tasteful balance, well executed. And CMD.EXE, though much improved, still is not there (but may head there; we'll see if PowerShell wins out).
DBus
Not to get into flamewars, I just accept that Ubuntu uses DBus. By default it doesn't run on WSL, but this was easy to fix, and it made upgrading Ubuntu smoother. Using sudo, edit /etc/dbus-1/session.conf as others have suggested (I did it by hand, not with sed). You may have to repeat the fix after upgrading Ubuntu.
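For reference, the edit others suggested is often given as a one-line sed. A sketch, hedged: the exact listen address is my recollection of contemporaneous WSL guides, so verify against your own session.conf; the author edited the file by hand.

```shell
# Hypothetical helper: swap the unix socket listen address for local
# TCP, which early WSL supported. Point it at a copy of
# /etc/dbus-1/session.conf first; use sudo for the real file.
function fix_dbus_listen {
    local conf=$1
    # -i.bak keeps a backup beside the edited file
    sed -i.bak \
        's$<listen>unix:tmpdir=/tmp</listen>$<listen>tcp:host=localhost,port=0</listen>$' \
        "$conf"
}
```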
The Ubuntu version
It seems trivial, but I was unhappy that diff --color didn't work. Am I shallow? Maybe, but some of the scripts I write for open source provide colorized diff output, and I'd like to work on them in WSL without disabling this feature. Microsoft made much hay over 24-bit color support in CMD.EXE. So I updated to Ubuntu 17.04, which includes diffutils 3.5 (the version in which --color was added). Microsoft does not officially support upgrading Ubuntu, but I ran into no real problems.

Upgrading WSL Ubuntu

Caveat coder: there is a reason this is unsupported by Microsoft at present; I just never ran into those reasons myself. For example, I fixed DBus to make upgrading happier, and I am not running any Linux desktop (graphical) programs, which may be why I dodged trouble.

Researching several helpful Internet sources, I:

  1. Edited /etc/update-manager/release-upgrades to use "normal" releases, not just LTS
  2. Fixed /etc/dbus-1/session.conf
  3. Ran sudo do-release-upgrade to move to 16.10 from 16.04
  4. Re-fixed /etc/dbus-1/session.conf
  5. Ran sudo do-release-upgrade -d to move to 17.04 from 16.10

(Pay attention: there are many "yN" prompts where the default is to abort; you must enter "y" at these!)

When prompted to reboot, I quit the upgrade, close all WSL terminals, and start a fresh one. There is no actual kernel to reboot: it remains 4.4.0-42-Microsoft throughout. The kernel is emulated by Windows, not an actual file to boot, so upgrades change only the packages bundled with the distribution, not the kernel itself. The underlying abstraction is quite elegant.

Almost there

Can I drop Cygwin and make WSL my daily development environment? Not quite yet. For shell script work, WSL is excellent. But for my Kotlin, Java, Ruby, and other projects, I rely on IntelliJ IDEA as my editor (though Emacs might return to my life again). Filesystem interop between Windows programs (such as java.exe) and WSL is good but not perfect.

Other options

Cygwin on Windows
This is and has been my solution for bash on Windows for many years. I will move to WSL when I'm ready, but I'm not ready yet. I need my regular development cycle to work first. (See Almost there.) There are downsides to Cygwin, for example, coping with line endings, but it's been reliable for me.
Homebrew on Mac
This is work. My company issues me a Mac laptop, and I use it. For the most part it is fine for work with colleagues and clients, though at times the Mac is a little strange, and much of the user experience feels counterintuitive. Still, the software mostly works, and the hardware is incredibly good.

But why not just use Linux? Well, my daily machine at home is a Windows box: it's my gaming rig, the games I play don't run well on Linux, and getting a Mac desktop is not currently a pretty story.

UPDATE: More on how syscalls work.

UPDATE: Slightly dated (Microsoft is moving very fast on WSL—kudos!), this is a good video presentation on what happens under the hood.

Wednesday, April 12, 2017

Quick diff tip, make, et al

I'm using make for a simple shell project, to run tests before committing. The check was trivial:

SHELL = bash

test:
	@./run-tests t | grep 'Summary: 16 PASSED, 0 FAILED, 0 ERRORED' >/dev/null

This has the nice quality of Silence is Golden: say nothing when all is good. However, it loses the quality of Complain on Failure: it simply fails without saying why.

A better solution, preserving both qualities:

SHELL = bash

test:
	@diff --color=auto \
	    <(./run-tests t | grep 'Summary: .* PASSED, .* FAILED, .* ERRORED') \
	    <(echo 'Summary: 16 PASSED, 0 FAILED, 0 ERRORED')

It still says nothing when all is good, but now shows on failure how many tests went awry. Bonus: color for programmers who like that sort of thing.

Why set SHELL to bash? I'm taking advantage of Process Substitution. Essentially the command outputs inside the subshells are turned into special kinds of files, and diff likes to compare files. Ksh and Zsh also support process substitution, so I'm going with the most widely available option.
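A tiny, standalone illustration of the mechanism (bash):

```shell
# Each <(command) becomes a pseudo-file (such as /dev/fd/63) whose
# contents are the command's output; diff opens it like any other file.
diff <(printf 'a\nb\n') <(printf 'a\nb\n') && echo identical
# prints: identical
```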


Why are my arguments to diff ordered like that? In usual testing language, I'm comparing "actual" vs "expected", and more commonly you'll see programmers list "expected" first.

diff by default colors the left-hand input in RED, and the right-hand input in GREEN. On failure, it makes more sense to color "actual" in red and "expected" in green. Example output on failure:

$ make
< Summary: 17 PASSED, 1 FAILED, 0 ERRORED
> Summary: 19 PASSED, 0 FAILED, 0 ERRORED
make: *** [Makefile:4: test] Error 1

Tuesday, April 04, 2017

Maven logging and the command line

I usually set up my Maven-build projects to be as quiet as possible. My preference is "Silence is Golden": if the command says nothing on the command line, it worked; if it failed, it prints to STDERR.

However, sometimes I want to see some output while I'm tracking down a problem. How best to reconcile these?

Maven 3.3.1 (maybe earlier) introduced the .mvn directory for your project root (note leading DOT). In here you can keep a jvm.config file which has the flags to the java command used when running mvn. Here's my usual jvm.config:
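(The file contents were lost from this post. A plausible one-line reconstruction follows: org.slf4j.simpleLogger.defaultLogLevel is the real SLF4J Simple Logger property, but WARN as the quiet level is my assumption, chosen to be consistent with the INFO override shown further down.)

```
-Dorg.slf4j.simpleLogger.defaultLogLevel=WARN
```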


This quiets Maven down quite a bit, using properties to control Maven's more recent logger, SLF4J. I normally commit this file into my project's code repository.

And for those times I'd like more output? I could edit the file, but I don't trust myself enough not to accidentally commit those temporary changes. So I use the command line:

$ MAVEN_OPTS='-Dorg.slf4j.simpleLogger.defaultLogLevel=INFO' mvn

Ultimately mvn reads .mvn/jvm.config, folds its contents into the MAVEN_OPTS variable, and uses MAVEN_OPTS when invoking java; setting the variable yourself on the command line lets you override the file.

Sunday, April 02, 2017

DDD, A Handmaid's Tale

(No, this is not a post about the venerable and excellent GNU DDD.)

Documentation Driven Development—DDD—is a term I just made up (not really; read on). I was working on some code TDD-style ("first, write a failing test"), and also thinking about my user documentation. My usual practice is to get my tests and code into good shape, push-worthy, and then update the documentation with my improvements (one hopes). Then the thought struck me: I'm doing this wrong!

We write tests first as miniature specifications for the code. But my documentation is conveying to the public my specifications. In the world of closed-source software, this makes sense. You prepare the documentation to ship to customers (internal or external); generally holding off until the code is stable so your documentation is mostly accurate. After all, with closed source, users can't see your tests or the code: the documentation is their only view into how to use your code.

With open-source software, this picture is radically changed. Your users can see your tests and code, in fact, you generally encourage them to look, or fork! So now your tests are little visible public specifications. Why documentation then?

Personally I still like solid documentation on open source projects. True, I could just browse the tests. But that isn't the best way to start with code that is new to me. I'd like to see examples, some explanation, perhaps some architecture or high-level pictures. Hence, documentation.

So, back to DDD. If I'm pushing out my tests and code to a public repository as soon as they pass (or near enough), how is my documentation ever to keep up? How do I encourage others to clone or fork my code, and contribute? I still want new users to have good documentation for getting started; I still want my tests to ultimately define my specifications. The answer is easy: First write failing documentation.

This is not at all a new idea! See Steve Richert, Zach Supalla, and many others. An early form of this idea is Knuth's Literate Programming.

Failing documentation

What is "failing documentation"?

Firstly, just as with "failing tests", you start with documentation of how your code should behave, but which isn't actually the case. The ways to do this are the usual suspects:

  • Write examples which don't work, or possibly don't even compile
  • Write explanations which don't fit your code
  • Write step-by-step walkthroughs which can't be followed
  • Write architecture diagrams which are wrong
  • Etc, etc, etc, anything you'd put in documentation which is invalid for your current code

Then you fix it:

  1. Write failing documentation
  2. Write failing tests which correspond to the documentation
  3. Fix the code to make the tests pass, and the documentation correct

Afterwards you have:

  • Current, accurate documentation
  • Current, passing tests
  • Current, working code

Supporting ecosystems

As straight-forward as DDD is to explain, some software ecosystems make it easier to actually do than others. A standout example is Python and doctest. In doctest you write your tests directly in the API documentation as examples. This is a perfect marriage of documentation and tests.

Swagger is an interesting case. It's generally a documentation-first approach tailored for REST API specifications. But the documentation is "live documentation"—i.e., an executable web form for exploratory testing—rather than text and code examples to read. Using DDD, you would write your REST API specification first in Swagger, then write failing tests around that before fixing the code to implement. Clever people have leveraged this.

About the post title

The Handmaid's Tale is a sly reference to Chaucer's The Wife of Bath's Tale (featuring a strong protagonist balancing among bickering companions), and The Merchant's Tale in the same sequence. Documentation has often been treated as subservient to code, an afterthought, when really it is the first thing most new users see about a system. Give it its due.

Saturday, April 01, 2017

Kotlinc on Cygwin

There may be a better way, but I found that running kotlinc to bring up the Kotlin REPL, while in a Cygwin BASH shell using Mintty, did not respond to keyboard input. A little research indicated the issue is with JLine, which has some understandable difficulties reconciling running under Cygwin with running under CMD.

The workaround I used:

$ JAVA_OPTS='-Djline.terminal=unix' kotlinc
Welcome to Kotlin version 1.1.1 (JRE 1.8.0_121-b13)
Type :help for help, :quit for quit
>>> println("FOOBAR")

Requesting JLine to use UNIX-y primitives for terminal access solved the problem. I would like to hear about other solutions.

UPDATE: Edited for clarity. And some additional reading:

Saturday, March 18, 2017

Followup on Bash long options

A followup on Bash long options.

The top-level option parsing while-loop I discussed works fine for regular options. Sometimes you need special parsing for subcommand options. A hypothetical example might be:

$ my-script --toplevel-thing my-subcommand --something-wonderful option-arg

Here the --toplevel-thing option is for my-script, and the --something-wonderful option and its option-arg are for my-subcommand. Regular getopts parsing will try to handle all options at the top level, failing to treat subcommand options separately. Further, getopts in a function does not behave quite as expected.

One solution is simple and hearkens back to the pre-getopts days. For the top level:

while (( 0 < $# ))
do
    case $1 in
        --toplevel-thing ) _toplevel_thing=true ; shift ;;
        -* ) usage >&2 ; exit 2 ;;
        * ) break ;;
    esac
done

Using a while-loop with an explicit break avoids looking too far along the command line and wrongly consuming options meant for subcommands. Rechecking $# on each pass ends the loop gracefully. Similarly, for subcommands expressed as a function:

function my-subcommand {
    while (( 0 < $# ))
    do
        case $1 in
            --something-wonderful ) local option_arg="$2" ; shift 2 ;;
            * ) usage >&2 ; exit 2 ;;
        esac
    done
    # Rest of my-subcommand, using `option_arg` if provided
}

This uses the same pattern as the top level, so you avoid having to remember one way for the top level and another for subcommands.
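A runnable sketch of the whole pattern, under stated assumptions: my-script is modeled as a function, and the hypothetical subcommand simply echoes what it parsed.

```shell
# Hypothetical end-to-end demo of two-level option parsing.
function my-subcommand {
    local option_arg=
    while (( 0 < $# ))
    do
        case $1 in
            --something-wonderful ) option_arg=$2 ; shift 2 ;;
            * ) break ;;
        esac
    done
    echo "my-subcommand got: ${option_arg:-nothing}"
}

function my-script {
    local toplevel_thing=false
    while (( 0 < $# ))
    do
        case $1 in
            --toplevel-thing ) toplevel_thing=true ; shift ;;
            -* ) echo "unknown option: $1" >&2 ; return 2 ;;
            * ) break ;;
        esac
    done
    "$@"  # remaining words: the subcommand name and its own arguments
}
```

Calling `my-script --toplevel-thing my-subcommand --something-wonderful option-arg` prints `my-subcommand got: option-arg`.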

An example script using this pattern.

Monday, March 13, 2017

Frequent commits

Pair posting with guest Sarah Krueger!

A source control pattern for TDD

At work we recently revisited our commit practices. One issue spotted: we didn't commit often enough. To address this, we adopted the source control pattern in this post. There are lots of benefits; the one that mattered most to me: no more throwing the baby out with the bathwater, that is, no more two-hour coding sessions only to start again and lose the good with the bad.

So we worked out this command-line pattern using single unit-of-work commits (without git rebase -i!):

# TDD cycle: edit code, run tests, rather-rinse-repeat until green
$ git pull --rebase --autostash && run tests && git commit -a --amend --no-edit
# Simple unit-of-work commit, push, begin TDD cycle again
$ git commit --amend && git push && git commit --allow-empty -m WIP

What is this?

  1. Start with a pull. This ensures you are always current, and find conflicts as soon as possible.
  2. Run your full tests. This depends on your project, for example, mvn verify or rake. If some tests are slow, split them out, and add a full test run before pushing.
  3. Amend your work to the current commit. This gives you a safe fallback known to pass tests. Worst case you might lose some recent work, but not hours worth. (Hint: run tests often.)
  4. When ready to push, update the commit message to the final message for public push.
  5. Push. Share. Make the world better.
  6. Restart the TDD cycle with an empty commit using a message that makes sense to you, for example "WIP" (work in progress); the message should be obvious not to push. Key: the TDD cycle command line only amends commits, so you need a first, empty commit to amend against.


The key feature of this source control pattern is: always commit after reaching green on tests; never commit without testing. When tests fail, the commit fails (the && is a short-circuit logical and).

In the math sense, this pattern makes testing and committing one-to-one and onto. Since TDD requires frequent running of tests, this means frequent commits when those tests pass. To avoid a long series of tiny commits when pushing, amend to collect a unit of work.


The TDD cycle depends on an initial, empty commit. The first time using this source control pattern:

# Do this after the most recent commit, before any edits
$ git commit --allow-empty -m WIP

Adding files

This pattern, though very useful, does not address new files. You still need to run git add for new files to include them in the commit. Automatically adding new files can be dangerous if .gitignore isn't set up right.
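A hypothetical guard (my own sketch, not part of the pattern above) that refuses to continue while untracked files exist, so new files are never silently left out of the amended commit:

```shell
# Hypothetical pre-commit check: list untracked files ("??" in
# porcelain output) and fail if any exist.
function check_untracked {
    local untracked
    untracked=$(git status --porcelain | awk '$1 == "??" { print $2 }')
    if [ -n "$untracked" ]
    then
        echo 'untracked files; git add (or ignore) them first:' >&2
        echo "$untracked" >&2
        return 1
    fi
}
```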

It depends on your style

The exact command line depends on your style. You could include a script to run before tests, or before commit (though the latter might be better done with a git pre-commit hook). You might prefer merge pulls instead of rebase pulls. If your editor runs from the command line you might toss $EDITOR at the front of the TDD cycle.

The command lines assume git, but this source control pattern works with any system that supports similar functionality.

Fine-grained commits

An example of style choice. If you prefer fine-grained commits to unit-of-work single commits (depending on your taste or project; they're both good practice):

# TDD cycle: edit code, run tests, rather-rinse-repeat until green
$ git pull --rebase --autostash && run tests && git commit -a
# Fine-grained commits, push, begin TDD cycle again
$ git push

Improving your life

No matter your exact command line, it can be made friendlier for you. Yes, shell history can store your long chain of commands. But what if the chain varies slightly between programmers sharing a project, or what if there is a common standard approach? Extend git. Let's call our example subcommand "tdd". Save this in a file named git-tdd in your $PATH:

#!/bin/bash
set -e
case $1 in
    test ) git pull --rebase --autostash && run tests && git commit -a --amend --no-edit ;;
    accept ) git commit --amend && git push && git commit --allow-empty -m WIP ;;
esac

Now your command line becomes:

$ git tdd test  # Repeat until unit of work is ready
$ git tdd accept

The source is in GitHub.


An editing error left out the Why? section when initially posted.

Remember to autostash.

Saturday, March 11, 2017

Two BDD styles in Kotlin

Experimenting with BDD syntax in Kotlin, I tried these two styles:

fun main(args: Array<String>) {
    println(So
            GIVEN "an apple"
            WHEN "it falls"
            THEN "Newton thinks")
}

data class BDD constructor(
        val GIVEN: String, val WHEN: String, val THEN: String) {
    companion object {
        val So = So()
    }

    class So {
        infix fun GIVEN(GIVEN: String) = Given(GIVEN)
        data class Given(private val GIVEN: String) {
            infix fun WHEN(WHEN: String) = When(GIVEN, WHEN)
            data class When(private val GIVEN: String, private val WHEN: String) {
                infix fun THEN(THEN: String) = BDD(GIVEN, WHEN, THEN)
            }
        }
    }
}

fun main(args: Array<String>) {
    println(GIVEN `an apple`
            WHEN `it falls`
            THEN `Newton thinks`
            QED)
}

infix fun Given.`an apple`(WHEN: When) = When()
infix fun When.`it falls`(THEN: Then) = Then(GIVEN)
infix fun Then.`Newton thinks`(QED: Qed) = BDD(GIVEN, WHEN)

inline fun whoami() = Throwable().stackTrace[1].methodName

data class BDD(val GIVEN: String, val WHEN: String, val THEN: String = whoami()) {
    companion object {
        val GIVEN = Given()
        val WHEN = When()
        val THEN = Then("")
        val QED = Qed()
    }

    class Given
    class When(val GIVEN: String = whoami())
    class Then(val GIVEN: String, val WHEN: String = whoami())
    class Qed
}

Comparing the main() methods, which is easier to read or use? I haven't tried implementing them; I've only looked at the style of the testing code. Note that I'm using Kotlin's infix feature to keep my BDD "GIVEN/WHEN/THEN" as punctuation-free as I'm able.

In the strings case, an implementation would be more similar to Spec or Cucumber, which usually use pattern matching to associate text with implementation. In the functions case, an implementation goes directly into the function definition. In either case, Kotlin only supports binary infix functions, not unary (of course, you say, that's what "infix" means!), so I need either an initial starting token (So in the strings case) or an ending one (QED in the functions case).

I'm curious how implementation sorts out.

(Code here.)


I have working code now that runs these BDD sentences, but remain unsure which of the two styles (strings vs functions) would be easier to work with:

fun main(args: Array<String>) {
    var apple: Apple? = null
    upon("an apple") {
        apple = Apple(Newton(thinking = false))
    }
    upon("it falls") {
    }
    upon("Newton thinks") {
        assert(apple?.physicist?.thinking ?: false) {
            "Newton is sleeping"
        }
    }

    println(So
            GIVEN "an apple"
            WHEN "it falls"
            THEN "Newton thinks")
}


fun main(args: Array<String>) {
    println(GIVEN `an apple`
            WHEN `it falls`
            THEN `Newton thinks`
            QED)
}

var apple: Apple? = null

infix fun Given.`an apple`(WHEN: When) = upon(this) {
    apple = Apple(Newton(thinking = false))
}

infix fun When.`it falls`(THEN: Then) = upon(this) {
}

infix fun Then.`Newton thinks`(QED: Qed) = upon(this) {
    assert(apple?.physicist?.thinking ?: false) {
        "Newton is sleeping"
    }
}

The strings style is certainly more familiar. However, mistakes in registering matches of "GIVEN/WHEN/THEN" clauses appear at runtime and do not provide much help.

The functions style is more obtuse. However, mistakes cause compile-time errors that are easier to understand, and your code editor can navigate between declaration and usage.

Friday, September 16, 2016

Google Test, generated source, and GNU Make

I had trouble with this arrangement:

  • Using Pro*C for a client to generate "C" files from .pc sources
  • Google Test for unit testing
  • GNU Make

What was the problem?

Ideally I could write a pattern rule like this:

%-test.o: %-test.cpp %.c

This says that to compile my C++ test source, make must first run the Pro*C preprocessor to generate the "C" source used by the test. Why? Google tests follow this template:

#include "source-to-test.c"
#include <gtest/gtest.h>
// Tests follow

Google Test includes your source file (not the header) so the test code has access to static variables and functions (think "private" if you're from Java or C#).

So my problem is that make is very clever with chained pattern rules: it knows that if you want "bob.qux", which needs "bob.foo", and there is no "bob.foo" but there is a file named "bob.bar", make follows the recipe for turning a "bar" into a "foo", and this satisfies the rule for "bob.qux".

However the simple rule I guessed at:

%-test.o: %-test.cpp %.c

Doesn't work! GNU Make has a corner case when a pattern rule has multiple prerequisites (dependencies), and won't make missing intermediate files even when there's another rule saying how to do so.

There is another way to state what I want:

%-test.o: %-test.cpp %.c

This is called a static rule. It looks promising, but again doesn't work. GNU make does not support patterns (the "%") in static rules. I would need to write each case out explicitly, e.g.:

a-test.o: a-test.cpp a.c

While this does work, it's also a problem.

What's wrong with being explicit?

Nothing is wrong with "explicit" per se. Usually it's a good thing. In this case, it clashes with the rule of "say it once". For each test module a programmer writes, he would need to edit the Makefile with a new, duplicative rule. So when new tests break, instead of thinking "my code is wrong", he needs to ask "is it my code, or my build?" Extra cognitive burden.

What does work?

There is a way to get the benefit of a static rule without the duplication, but it's hackery—good hackery, to be sure, but violating the "rule of least surprise". Use make's powers to rewrite the Makefile at run time:

define BUILD_test
$(1:%=%.o): $(1:%-test=%.c)
	$$(COMPILE.cpp) $$(OUTPUT_OPTION) $(1:%=%.cpp)
$(1): $(1:%=%.o)
endef

$(foreach t,$(wildcard *-test.cpp),$(eval $(call BUILD_test,$(t:%.cpp=%))))

What a mouthful! If I have a "foo-test.cpp" file, make inserts these rules into the build:

foo-test.o: foo.c
	$(COMPILE.cpp) $(OUTPUT_OPTION) foo-test.cpp
foo-test: foo-test.o

I'd like something simpler, less inscrutable. Suggestions welcome!

Monday, May 30, 2016

Gee wilickers

I have a common command-line pattern I grew tired of typing. An example:

$ mvn verify | tee verify.out

I use this pattern so often as I want to both watch the build on screen, and have a save file to grep when something goes wrong. Sometimes I also find myself telling the computer:

$ mvn verify | tee verify.out
$ mv verify.out verify-old.out
$ $EDITOR pom.xml
$ mvn verify | tee verify.out
$ diff verify-old.out verify.out

I want to see what changed in my build. But ... too much typing! So I automated it with gee, a mashup of git and tee. You can think of it as source control for <STDOUT>.

Now I can type:

$ gee -o verify.out mvn verify
$ gee -g git diff HEAD^  # Did build output change?

How does it work? gee keeps a git repository in a hidden directory (.gee), committing program output there using tee. It follows simple conventions for file name and commit message (changeable with flags), and accepts <STDIN>:

$ mvn verify | gee -o verify.out -m 'After foobar edit'
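A minimal sketch of that mechanism (hypothetical; the real gee adds flags, naming conventions, and richer <STDIN> handling):

```shell
# Hypothetical sketch of the gee idea: tee output into a file inside a
# hidden .gee git repository, then commit it there.
function gee_sketch {
    local out=$1 msg=$2 ; shift 2
    mkdir -p .gee
    [ -d .gee/.git ] || git -C .gee init -q
    "$@" | tee ".gee/$out"
    git -C .gee add "$out"
    # Inline identity so the sketch runs even on an unconfigured machine
    git -C .gee -c user.name=gee -c user.email=gee@example.com \
        commit -q -m "$msg"
}
```

For example, `gee_sketch verify.out 'After foobar edit' mvn verify` shows the build on screen and commits verify.out into .gee.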

Sunday, May 22, 2016

Automating github

I got tired of downloading my own scripts from Github when working among multiple projects. So I automated it, of course. The bitsh project reuses a test script from the shell project, and now the Makefile for bitsh is simply:


test:
	@[ -t 1 ] && flags=-c ; \
	./run-tests -i -- $$flags t

When run-tests is updated in Github, bitsh automatically picks up the changes. And I learned the value of ETag.

By the way, why "bitsh"? I hunted around for project names combining "git" and "bash" and found most of them already taken. Beggars can't be choosers.

UPDATE: I found the <TAB> character got munged by Blogger into a single space. This is unfortunate, as you cannot copy out a valid Makefile. One solution is to put in an HTML tab entity.

Saturday, May 21, 2016

Metaprogramming with Bash

Most programmers do not take full advantage of the languages they work in, though some languages make this a real challenge. Take metaprogramming, or programs that have some self-knowledge. LISP-family languages make this easy and natural; those with macros even more so. Bytecode languages (think Java), and even more so object code languages (think "C"), fall back on extra-linguistic magic such as AOP rewriting.

Text-based languages lie in a middle ground. Best known is Bash. Rarely do programmers take full advantage of Bash features, and few would think of metaprogramming there. Though not as clean as LISP macros, it is still straightforward.

As an example of function rewriting, note the _register function in the listing below: it redefines an existing function, incorporating the original's body into itself. This is very similar to aspect-oriented programming with "around" advice. Much care is taken to preserve the original function context.

This is a fully working script—give it a try!


#!/bin/bash

export PS4='+${BASH_SOURCE}:${LINENO}: ${FUNCNAME[0]:+${FUNCNAME[0]}(): } '

here=$PWD

pgreen=$(printf "\e[32m")
pred=$(printf "\e[31m")
pboldred=$(printf "\e[31;1m")
preset=$(printf "\e[0m")
pcheckmark=$(printf "\xE2\x9C\x93")
pballotx=$(printf "\xE2\x9C\x97")
pinterrobang=$(printf "\xE2\x80\xBD")

function _register {
    case $# in
    1 ) local -r name=$1 arity=0 ;;
    2 ) local -r name=$1 arity=$2 ;;
    esac
    read -d '' -r wrapper <<EOF
function $name {
    # Original function
    $(declare -f $name | sed '1,2d;$d')
    local -r __e=\$?

    shift $arity
    if (( 0 < __e || 0 == \$# ))
    then
        __tally \$__e
    else
        "\$@"
        __tally \$__e
    fi
}
EOF
    eval "$wrapper"
}

let __passed=0 __failed=0 __errored=0
function __tally {
    local -r __e=$1
    $__tallied && return $__e
    __tallied=true
    case $__e in
    0 ) let ++__passed ;;
    1 ) let ++__failed ;;
    * ) let ++__errored ;;
    esac
    _print_result $__e
    return $__e
}

function _print_result {
    local -r __e=$1
    case $__e in
    0 ) echo -e $pgreen$pcheckmark$preset $_scenario_name ;;
    1 ) echo -e $pred$pballotx$preset $_scenario_name ;;
    * ) echo -e $pboldred$pinterrobang$preset $_scenario_name ;;
    esac
}

function check_exit {
    (( $? == $1 ))
}

function make_exit {
    local -r e=$1
    (exit $e)
}

function check_d {
    [[ $PWD == $1 ]]
}

function change_d {
    cd $1
}

function variadic {
    :
}

function early_return {
    return $1
}

function eq {
    [[ "$bob" == nancy ]]
}

function normal_return {
    (exit $1)
}

function f {
    local -r bob=nancy
}

function AND {
    "$@"
}

function SCENARIO {
    local -r _scenario_name="$1"
    shift
    local __tallied=false
    local __e=0
    pushd $PWD >/dev/null
    "$@"
    __tally $?
    popd >/dev/null
}

_register f
_register normal_return 1
_register variadic 1
_register eq 
_register early_return 1
_register change_d 1
_register check_exit 1

echo "Expect 10 passes, 4 failures and 4 errors:"
SCENARIO "Normal return pass direct" normal_return 0
SCENARIO "Normal return fail direct" normal_return 1
SCENARIO "Normal return error direct" normal_return 2
SCENARIO "Normal return pass indirect" f AND normal_return 0
SCENARIO "Normal return fail indirect" f AND normal_return 1
SCENARIO "Normal return error indirect" f AND normal_return 2
SCENARIO "Early return pass indirect" f AND early_return 0
SCENARIO "Early return fail indirect" f AND early_return 1
SCENARIO "Early return error indirect" f AND early_return 2
SCENARIO "Early return pass direct" early_return 0
SCENARIO "Early return fail direct" early_return 1
SCENARIO "Early return error direct" early_return 2
SCENARIO "Variadic with none" f AND variadic
SCENARIO "Variadic with one" f AND variadic apple
SCENARIO "Local vars" f AND eq
SCENARIO "Change directory" change_d /tmp
SCENARIO "Check directory" check_d $here
SCENARIO "Check exit" make_exit 1 AND check_exit 1

(( 0 == __passed )) || ppassed=$pgreen
(( 0 == __failed )) || pfailed=$pred
(( 0 == __errored )) || perrored=$pboldred
cat <<EOS
$ppassed$__passed PASSED$preset, $pfailed$__failed FAILED$preset, $perrored$__errored ERRORED$preset
EOS

Practical use of this script as a test framework would pull SCENARIO and AND out to a separate source script, included with "." or source; put the registered functions and their helpers in another source script; and provide command-line parsing to pick out which tests to execute. An example is in progress.

Thursday, May 19, 2016

Color your world

My coworkers use many ad hoc or single-purpose scripts, things like: checking system status, wrappers for build systems, launching services locally, etc. My UNIX background tells me, "keep it simple, avoid output; Silence is Golden."

Somehow my younger colleagues aren't impressed.

So to avoid acting my age I started sprinkling color into my scripts, and it worked. Feedback was uniformly positive. And true to my UNIX roots, I provided command line flags to disable color.

Some lessons for budding BASHers:

  1. Yes, experiment and learn, but be sure to do your research. The Internet has outstanding help for BASH.
  2. Learn standard idioms (see below).
  3. Don't overdo it. Color for summary lines and warnings has more impact when the rest of the text is plain.
  4. Keep functions small, just as you would in other languages. BASH is a programming language with a command line, so keep your good habits when writing shell.
  5. Collaborate, pair! This comes naturally to my fellows. Coding is more enjoyable, goes faster and has fewer bugs.


Most of these idioms appear in testing with bash, a simple BASH BDD test framework I wrote for demonstration.

Process command-line flags

while getopts :htT-: opt
do
    [[ - == $opt ]] && opt="${OPTARG%%=*}" OPTARG="${OPTARG#*=}"
    case $opt in
    h | help ) print_help ; exit 0 ;;
    t | this ) this=true ;;
    T | no-that ) that=false ;;
    * ) print_usage >&2 ; exit 2 ;;
    esac
done
shift $((OPTIND - 1))

Keep boolean toggles simple


if $run_faster
then
    use_faster_algorithm "$@"
else
    use_more_correct_algorithm "$@"
fi

Simple coloring


if $success
then
    echo -e "${pgreen}PASS${preset} $test_name"
else
    echo -e "${pred}FAIL${preset} $test_name - $failure_reason"
fi

Consistent exit codes

function check_it {
    local -r failed=$1
    local -r syntax_error=$2

    if $syntax_error
    then
        return 2
    elif $failed
    then
        return 1
    else
        return 0
    fi
}


There are many more idioms to learn; hopefully this taste catches your interest. I was careful to mix in a few unexplained bits (what does local -r do?) to whet the appetite for research. Go try the BASH debugger sometime.
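As a nudge toward that research, here is a minimal sketch of what local -r gives you — a readonly function local (the function and variable names are just for illustration):

```shell
#!/usr/bin/env bash
function try_reassign {
    local -r answer=42       # readonly for the duration of the call
    (answer=13) 2>/dev/null  # reassignment fails; the subshell keeps us alive
    echo "$answer"
}
try_reassign
```

Running this prints 42: the attempted reassignment fails because answer is readonly, and doing it in a subshell keeps the error from aborting the caller.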

UPDATE: Fixed thinko. ANSI escape codes need to be handled by echo -e or printf, not sent directly via cat!

Monday, May 02, 2016

Do not test Java getters and setters

An excellent project from Osman Shoukry automates testing of Java getters and setters; that is, when you have getters and setters to test. There's the rub: do not write getters or setters.

For starters they violate encapsulation, exposing your object's innards to others. OK, but there are frameworks which require them, even in 2016. What to do?

Generate them:

@Getter
@RequiredArgsConstructor
public final class SampleBean {
    private final String left;
    @Setter
    private String right;
}
With what result?

public final class SampleBean {
    private final String left;
    private String right;

    public String getLeft() {
        return this.left;
    }

    public String getRight() {
        return this.right;
    }

    public void setRight(final String right) {
        this.right = right;
    }

    public SampleBean(final String left) {
        this.left = left;
    }
}
This code need never be tested. Test the generator, not the generated code. If the generator is correct, so is the generated code. In this case, Lombok is heavily tested.

As an alternative to the constructor taking field values, Jesse Wilson has interesting advice.

Thursday, April 07, 2016

BDD-style fluent testing in BASH

I wanted to impress on my colleagues that BASH was still hip, still relevant. And that I wasn't an old hacker. So I wrote a small BDD test framework for BASH.


Fluent coding relies on several BASH features:

  • Variable expansion happens before executing commands
  • A shell function is indistinguishable from a program: they are called the same way
  • Local function variables are dynamically scoped but only within a function, so are visible to other functions called within that scope, directly or indirectly through further function calls

Together with the Builder pattern, it's easy to write given/when/then tests. (The Builder pattern here solves not the problem of telescoping constructors, but of massive, arbitrary argument lists.)

So when you run:

function c {
    echo "$message"
}

function b {
    "$@"
}

function a {
    local message="$1"
    shift

    "$@"
}

a "Bob's your uncle" b c

You see the output:

Bob's your uncle

How does this work?

First BASH expands variables. In function a this means that after the first argument is remembered and removed from the argument list, "$@" expands to b c. Then b c is executed.

Then BASH calls the function b with argument "c". Similarly b expands "$@" to c and calls it.

Finally, as $message is visible in functions called by a, c prints the first argument passed to a (as it was remembered in the variable $message), or "Bob's your uncle" in this example.

Running the snippet with xtrace makes this clear (assuming the snippet is saved in a file named example):

bash -x example
+ a 'Bob'\''s your uncle' b c
+ local 'message=Bob'\''s your uncle'
+ shift
+ b c
+ c
+ echo 'Bob'\''s your uncle'
Bob's your uncle

So the test functions for given_jar, when_run and then_expect (along with other, similar functions) work the same way. Keep this in mind.


So how does this buy me fluent BDD?

Given these functions:

function then_expect {
    local expectation="$1"
    shift

    if some_test "$pre_condition" "$condition" "$expectation"
    then
        echo "PASS: $scenario"
        return 0
    else
        echo "FAIL: $scenario"
        return 1
    fi
}

function when {
    local condition="$1"
    shift

    "$@"
}

function given {
    local pre_condition="$1"
    shift

    "$@"
}

function scenario {
    local scenario="$1"
    shift

    "$@"
}

When you write:

scenario "Some test case" \
    given "Something always true" \
    when "Something you want to test" \
    then_expect "Some outcome"

Then it executes:

some_test "Something always true" "Something you want to test" "Some outcome"

And prints one of:

PASS: Some test case
FAIL: Some test case

A real example at

Thursday, March 24, 2016

Bash long options

UPDATED: Long options with arguments in the "name=value" style. The original post neglected this important case.

For years I've never known quite the right way to handle long options in Bash without significant, ugly coding. The usual sources (Advanced Bash-Scripting Guide, The Bash Hackers Wiki, others) are not much help. An occasional glimpse appears on StackOverflow, but not well explained or upvoted.


Working with a colleague yesterday, we found this:

while getopts :hn:-: opt
do
    [[ - == $opt ]] && opt=${OPTARG%%=*} OPTARG=${OPTARG#*=}
    case $opt in
    h | help ) print_help ; exit 0 ;;
    n | name ) name=$OPTARG ;;
    * ) print_usage >&2 ; exit 2 ;;
    esac
done
shift $((OPTIND - 1))
echo "$0: $name"
$ ./try-me -h
Usage: ./try-me [-h|--help][-n|--name <name>]
$ ./try-me --help
Usage: ./try-me [-h|--help][-n|--name <name>]
$ ./try-me -n Fred
./try-me: Fred
$ ./try-me --name=Fred
./try-me: Fred


I checked with bash 3.2 and 4.3. At least for these, the '-' option argument has a bit of magic when it takes an argument. When the argument to '-' starts with a dash, as in --help (here "-help" is the argument to the '-' option), getopts drops the argument's leading '-', and OPTARG is just the text ("help" in this example). Only '-' has this magic.

Add a quick check for '-' at the top of the while-loop, and the case-block is simple and clear.
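The two parameter expansions in that quick check can be seen in isolation — a minimal sketch, with name=Fred standing in for what getopts hands over:

```shell
#!/usr/bin/env bash
OPTARG='name=Fred'
opt=${OPTARG%%=*}    # strip the longest suffix starting at '=': name
OPTARG=${OPTARG#*=}  # strip the shortest prefix through '=': Fred
echo "$opt $OPTARG"
```

Note that when there is no '=' at all, as with a bare --help, neither pattern matches and both expansions leave the text unchanged, which is why the same line handles both long-option styles.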

Bob's your uncle.

UPDATE: Followup on Bash long options.

Tuesday, March 01, 2016

Hand-rolling builders in Java

I showed off a hand-rolled example Java builder pattern today. It has some benefits over existing builder solutions, but is more work than I like:

  1. All parts of the builder are immutable; you can pass a partially built object for another object to complete (I'm looking at you, Lombok)
  2. It's syntactically complete; that is, code won't compile without providing all arguments for the thing to build
  3. It's easy to generalize; in fact, I'm thinking about an annotation processor to generate it for you (but not there quite yet)
public final class CartesianPoint {
    public final int x;
    public final int y;

    public static Builder builder() {
        return new Builder();
    }

    private CartesianPoint(final int x, final int y) {
        this.x = x;
        this.y = y;
    }

    public static final class Builder {
        public WithX x(final int x) {
            return new WithX(x);
        }

        public static final class WithX {
            private final int x;

            private WithX(final int x) {
                this.x = x;
            }

            public WithY y(final int y) {
                return new WithY(y);
            }

            public final class WithY {
                private final int y;

                private WithY(final int y) {
                    this.y = y;
                }

                public CartesianPoint build() {
                    return new CartesianPoint(x, y);
                }
            }
        }
    }
}
That was a lot to say! Which is why most times you don't hand-roll builders. Usage is obvious:

public void shouldBuild() {
    final CartesianPoint point = CartesianPoint.builder().
            x(1).
            y(2).
            build();
}

Adding caching for equivalent values is not hard (note it relies on CartesianPoint implementing equals and hashCode):

public static final class Builder {
    private static final ConcurrentMap<CartesianPoint, CartesianPoint>
            cache = new ConcurrentHashMap<>();

    public WithX x(final int x) {
        return new WithX(x);
    }

    public static final class WithX {
        private final int x;

        private WithX(final int x) {
            this.x = x;
        }

        public WithY y(final int y) {
            return new WithY(y);
        }

        public final class WithY {
            private final int y;

            private WithY(final int y) {
                this.y = y;
            }

            public CartesianPoint build() {
                final CartesianPoint point = new CartesianPoint(x, y);
                final CartesianPoint cached = cache.
                        putIfAbsent(point, point);
                return null == cached ? point : cached;
            }
        }
    }
}
Monday, February 29, 2016

Maven testing module

My usual practice is to put test dependencies in my Maven parent POM when working on a multi-module project. And I usually have a "testing" module as well for shared test resources such as a logback-test.xml to quiet down test output.

The test dependencies look like clutter in my parent POM, and they are, as I recently realized.

As all my non-test modules use the "testing" module as a test dependency, I clean this up by moving my test dependencies out of the parent POM and into the "testing" module alongside the common resources. So my layout looks like:

Parent POM
Common properties such as dependency versions; its dependency management marks the "testing" module as "test" scope.
Testing POM
Test dependencies not marked as "test" scope; consumers of this module will mark it as "test", and its transitive dependencies will automatically be "test" as well.
Non-test POMs
Use the "testing" module as a dependency (specified in the parent POM dependency management); no test dependencies or resources inherited from the parent POM.


Thursday, February 25, 2016

Java 8 shim method references

So I'm working on Spring Boot autoconfiguration for Axon Framework. I run into a nice interface in Axon framework that is unfortunately too specific. So I generalize. The original, pared down:

public interface AuditDataProvider {
    Map<String, Object> provideAuditDataFor(CommandMessage<?> command);
}

Aha! A SAM interface, interesting. So I craft my look-alike:

public interface AuditDataProvider {
    Map<String, Object> provideAuditDataFor(Message<?> command);
}

Not much difference. Note the method parameter is Message rather than CommandMessage. This works fine as the implementation I have in mind uses getMetaData(), defined in Message and inherited by CommandMessage—so the original Axon interface is overspecified, using a more specific parameter type than needed.

(Keep this in mind: most times use the most general type you can.)

Ah, but other parts of the Axon framework ask for an AuditDataProvider (the original code, above) and I'm defining a new, more general interface. I cannot extend the original with mine; Java correctly complains that I am widening the type: all CommandMessages are Messages, but not all Messages are CommandMessages.

Java 8 method references to the rescue!

public interface MessageAuditDataProvider {
    Map<String, Object> provideAuditDataFor(final Message<?> message);

    default AuditDataProvider asAuditDataProvider() {
        return this::provideAuditDataFor;
    }
}
Because I accept a supertype in my new interface relative to the original, my method reference works simply and cleanly.
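The same trick can be shown without Axon at all. In this self-contained sketch (all names hypothetical), String stands in for CommandMessage and CharSequence for Message:

```java
public final class ShimDemo {
    // The over-specified original: accepts only String
    interface NarrowProvider {
        int provideFor(String command);
    }

    // The generalization: accepts any CharSequence
    interface GeneralProvider {
        int provideFor(CharSequence message);

        // The method reference compiles because every String is a CharSequence
        default NarrowProvider asNarrowProvider() {
            return this::provideFor;
        }
    }

    public static void main(final String... args) {
        final GeneralProvider general = CharSequence::length;
        final NarrowProvider narrow = general.asNarrowProvider();
        System.out.println(narrow.provideFor("hello")); // prints 5
    }
}
```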

Sunday, February 21, 2016

Followup: Feature Toggles for Spring

The original technique in Spring Techniques: Feature toggles for controller request handler methods works well in the small but failed for our large project. We have too many snowflakes with customized replumbing of Spring and Boot, and the destructive interference forced another path. So we went with AOP, the magical fallback in such situations. A pity.

But help is on the way!

The Togglz project is close to an official solution for the 2.3.0 release (no timeline announced). I am pleased with the solution taken and contributed a small bit. Here's the documentation commit. Please try 2.3.0-RC1 when it goes to Maven Central.

Modern maven

I've pushed my first release of Modern-J, a maven archetype (project starter), to github. Mostly this is for myself, to have a decent maven archetype for starting spikes and projects.

One thing I learned about maven is dealing with version mismatch in dependencies. The technique is not to modify <dependency> blocks with exclusions but to add a <dependencyManagement> block:
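For the JUnit example, such a block might look like this (assuming the standard junit:junit coordinates):

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>${junit.version}</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```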


(My POM sets "junit.version" to 4.12.)

This resolves the dependency mismatch between current JUnit (4.12) and the JUnit for System-Rules (4.11), a wonderful JUnit @Rule I hope to see eventually bundled with JUnit itself.

UPDATE: Hat tip to Qulice who beat me there first, though I'm not as strict.

Sunday, February 14, 2016

Java generic exception specifiers

I'm not sure it's widely appreciated that throws clauses can take generic parameters, just as return types or arguments can. You can leverage this to improve your error handling. Note the helpful type inference provided by the compiler:

// Assumes static imports of System.out, Result.success and Result.failure
public final class ErrorHandlingMain {
    public static void main(final String... args) {
        final Result<String, RuntimeException> fooResult
                = success("foo");
        final Result<String, Exception> barResult
                = failure(new IOException("bar")); // Note #1

        out.println(fooResult.get());  // Note #2
        out.println(get(fooResult));   // Note #3
        try {
            out.println(barResult.get());  // Note #4
        } catch (final Exception e) {
            out.println(e);
        }
        try {
            out.println(get(barResult));
        } catch (final Exception e) {
            out.println(e);
        }
    }

    public static <T, E extends Exception>
    T get(final Result<T, E> result)
            throws E {
        return result.get();
    }

    public interface Result<T, E extends Exception> {
        T get()
                throws E;

        static <T> Result<T, RuntimeException>
        success(final T value) {
            return () -> value;
        }

        static <T, E extends Exception> Result<T, E>
        failure(final E exception) {
            return () -> {
                throw exception;
            };
        }
    }
}
(Unusual formatting to help with screen width.)

  1. Note type widening from IOException to Exception. Reversing those types won't compile.
  2. Compiler sees RuntimeException, does not require try/catch.
  3. Likewise for static methods.
  4. Compiler sees Exception, requires try/catch.

Sunday, January 10, 2016

Spring Techniques: Feature toggles for controller request handler methods

Maria Gomez, a favorite colleague, asked a wonderful question, "How can I have feature toggles on Spring MVC controller request handler methods?" Existing Java feature toggle libraries focus on toggling individual beans, or using if/else logic inside methods, and don't work at the method level.

Given a trivial example toggle:

@Retention(RUNTIME) // runtime retention so Spring can find it reflectively
@Target(METHOD)
public @interface Enabled {
    boolean value();
}

I'd like my controller to work like this:

@RestController
@RequestMapping(HelloWorldController.PATH)
public class HelloWorldController {
    public static final String PATH = "/hello-world";

    private static final String texanTemplate = "Howdy, %s!";
    private static final String russianTemplate = "Привет, %s!";
    private final AtomicLong counter = new AtomicLong();

    @Enabled(true)
    @RequestMapping(value = "/{name}", method = GET)
    public Greeting sayHowdy(@PathVariable("name") final String name) {
        return new Greeting(counter.incrementAndGet(),
                format(texanTemplate, name));
    }

    @Enabled(false)
    @RequestMapping(value = "/{name}", method = GET)
    public Greeting sayPrivet(@PathVariable("name") final String name) {
        return new Greeting(counter.incrementAndGet(),
                format(russianTemplate, name));
    }
}
(Greeting is a simple struct turned into JSON by Spring.)

To make the example a little more sophisticated, I'd like to use a "3rd-party library" to decide on which features to activate (think "Togglz" or "FF4J", say):

@Component
public class EnabledChecker {
    public boolean isMapped(final Enabled enabled) {
        return null == enabled || enabled.value();
    }
}

Originally I investigated Spring's RequestCondition classes, thinking I could do the same as @RequestMapping(... match conditions ...). However, this is tricky! Spring uses these conditions to decide which method to invoke for each HTTP request, not when deciding which methods should be treated as the handler for a given HTTP path. Taking this route, Spring complains at wiring time of duplicate handlers for the same request path.

The right way is to control the initial wiring of request handler methods, not decide later. First extend RequestMappingHandlerMapping (what a mouthful!):

public class EnabledRequestMappingHandlerMapping
        extends RequestMappingHandlerMapping {
    @Autowired
    private EnabledChecker checker;

    @Override
    protected RequestMappingInfo getMappingForMethod(final Method method,
            final Class<?> handlerType) {
        final Enabled enabled = findAnnotation(method, Enabled.class);
        final boolean mapped = checker.isMapped(enabled);
        return mapped ? super.getMappingForMethod(method, handlerType) : null;
    }
}
Note this is not directly a bean (no @Component). We need one more bit, to override the factory method that creates these handler mappings:

@Configuration
public class EnabledWebMvcConfigurationSupport
        extends WebMvcConfigurationSupport {
    @Override
    protected RequestMappingHandlerMapping createRequestMappingHandlerMapping() {
        return new EnabledRequestMappingHandlerMapping();
    }
}
And Bob's your uncle. EnabledWebMvcConfigurationSupport ensures the returned RequestMappingHandlerMapping is injected, and so the "3rd-party library" is available to consult.

Full code in Github.