Running containers from go tests


Sometimes, we need external services for our tests in order to test the integration. Traditionally, these are run somewhere on the Internet. What if we were to run them dynamically in a container?

In an ideal world we would test all our code in isolation with unit tests. Each component is tested thoroughly for its main functionality, as well as several edge cases. These tests run quickly; a single component takes a mere few seconds.

Once we are satisfied that all our components work individually, we tie them together and test them with end-to-end tests. These end-to-end tests run against the real systems, with real databases and real API backends. Naturally, these tests take a lot longer and run on systems that would not be easy to replicate on your laptop.

End-to-end tests are a wonderful tool that test if the system has been plugged together correctly. Needless to say, we can’t test every edge case, but that’s not a problem. That kind of coverage is provided by unit tests.

However, there is a problem in this plan. A dark spot on the map, if you will. How do we test the database integration in isolation? How do we test talking to an external HTTP service without also testing the entirety of our application?

Easy: we spin up a database service, or the API, on our development environment. We then write unit tests against this external database.

Problem solved? Well, not quite. Normally, setting up these services isn’t as simple as pushing a button; they typically require a fair bit of configuration. This is why, over the years, I’ve seen three recurring solutions:

  1. The company system administrators set up a single, shared copy of the service internally, on a server.
  2. The company wiki hosts a whole host of command line snippets that you need to run to set up the services on your laptop.
  3. Everyone is expected to run the tests through a shell script or a makefile, with all the required command line tools installed on their computer.

As a minor inconvenience, the first solution doesn’t pass the let-me-run-tests-on-an-airplane test. But that’s not the biggest problem. All of these solutions share a nasty side effect: state. The external service isn’t set up with a clean database for every test case. Leftovers from the previous run can wreak havoc on our test runs, and we can spend wonderful hours debugging problems they cause.

Dilbert comic: Boss: “Did you finish writing the software?” Dilbert: “No. I spent the last three days setting up my programming environment.” Boss: “So you’ve done nothing?” Dilbert: “Nothing you’d understand.” (Comic by Scott Adams)

Containers to the rescue

As you might have guessed, containers are a very handy tool to work around this problem. We can just run a container as an external service. So, we put our docker run commands in the wiki and everyone can just go and run those in case of problems. Right?

Sure, this works. But it’s also a manual process and can get quite tedious fairly quickly. Think about it: you have to maintain a wiki page with the copy-paste shell script snippets to each service you want to use. Whenever a crashed test leaves behind some junk in your database, you have a chance for a lengthy and rather unpleasant debugging session to find out why your tests suddenly broke.

What’s worse, you may be left alone with this problem: your sysadmins have scripted the CI system so that the database is cleaned up between runs, but your developer laptop may not receive the same care.

What if we could make our test cases so that they launch services on demand? What if we had every test case start what it needs?

Well, I’m glad you asked, because that’s just what I’m about to show you!

Creating containers in Docker

Your first instinct may be to shell out with exec.Command("docker", "run", ...), and this would probably even work. However, I wouldn’t recommend it, as it introduces an external dependency on the Docker CLI. Instead, being a Go developer, you can pull in Docker as a library.

Now, if you happen to be a Podman user, don’t worry. Podman is compatible with the Docker API and the client library is licensed under the Apache 2.0 license.
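For Podman, the usual approach is to expose Podman’s Docker-compatible API socket and point the client library at it via the DOCKER_HOST environment variable. A minimal setup for rootless Podman on a systemd-based Linux distribution might look like this (the socket path varies by distribution and setup):

```shell
# Start Podman's Docker-compatible API socket for the current user.
systemctl --user enable --now podman.socket

# Point the Docker client library at the Podman socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

# The tests in this article now talk to Podman instead of Docker.
go test ./...
```

Note that the client library only reads DOCKER_HOST if you create it with the FromEnv option, i.e. client.NewClientWithOpts(client.FromEnv).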

So, without further ado, let’s install the client library:

go get github.com/docker/docker@latest
go mod tidy

Once this is done you can set up the Docker connection from your test case. First, we create an instance of the Docker client. We can pass extra options here if we want to connect to remote Docker engines, but the default should work just fine if Docker is installed locally.

Next, we negotiate the API version. This is required, otherwise we’ll use an ancient API version. Finally, we’ll print the server version for verification.

package main_test

import (
    "context"
    "fmt"
    "testing"

    "github.com/docker/docker/client"
)

func TestMyExternalService(t *testing.T) {
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to create Docker client (%v)", err)
    }

    cli.NegotiateAPIVersion(context.Background())

    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    fmt.Printf("%v\n", serverVersion)
}

If we now run go test we should see something along these lines:

$ go test
{{Docker Engine - Community} [{Engine 20.10.13 map[ApiVersion:1.41 Arch:amd64 BuildTime:2022-03-10T14:05:44.000000000+00:00 Experimental:false GitCommit:906f57f GoVersion:go1.16.15 KernelVersion:5.11.0-46-generic MinAPIVersion:1.12 Os:linux]} {containerd 1.5.10 map[GitCommit:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc]} {runc 1.0.3 map[GitCommit:v1.0.3-0-gf46b6ba]} {docker-init 0.19.0 map[GitCommit:de40ad0]}] 20.10.13 1.41 1.12 906f57f go1.16.15 linux amd64 5.11.0-46-generic false 2022-03-10T14:05:44.000000000+00:00}
PASS
ok      test    0.008s

In other words, it works. But, hold on, we will probably need this in several places. Let’s separate that out into a helper function. Let’s also add a call to t.Helper() to exclude the helper function from the log output, and some helpful log messages.

package main_test

import (
    "context"
    "testing"

    "github.com/docker/docker/client"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    //...
}

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

Launching a container

Cool, that makes things much easier for the future. Now, how do we launch a container?

Here’s where things get a bit tricky. You see, the command line Docker client does a lot for us by default. It pulls the container image and then runs it in one step. We will need to implement this in two steps.

First, let’s pull the container image we want to use. To do this we will call cli.ImagePull() and then copy the output of the pull process to a logger. We will need to create this logger because io.Copy() cannot directly write to t.Log().

package main_test

import (
    "context"
    "io"
    "testing"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    reader, err := cli.ImagePull(
        context.Background(),
        "docker.io/minio/minio",
        types.ImagePullOptions{},
    )
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", "docker.io/minio/minio", err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
}

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

If we did everything correctly, we should see something along these lines:

=== RUN   TestMyExternalService
    main_test.go:33: Creating Docker client...
    main_test.go:33: Docker client ready, server version is 20.10.13.
    main_test.go:39: Pulling image docker.io/minio/minio...
    io.go:425: {"status":"Pulling from minio/minio","id":"latest"}
        
    io.go:425: {"status":"Digest: sha256:21defd60adc7c80269234b470ee2340c4cd16f0a2d82372a968425f9a4f7a1fe"}
        {"status":"Status: Image is up to date for minio/minio:latest"}
        
    main_test.go:47: Image pull complete.
--- PASS: TestMyExternalService (1.32s)
PASS

Not too pretty, but it works and we can see the image pull progress output. As before, let’s make a helper function to make our life just a little bit easier:

package main_test

import (
    "context"
    "io"
    "testing"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/client"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    pullImage(t, cli, "docker.io/minio/minio")
}

func pullImage(t *testing.T, cli *client.Client, image string) {
    t.Logf("Pulling image %s...", image)
    reader, err := cli.ImagePull(context.Background(), image, types.ImagePullOptions{})
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", image, err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
    t.Logf("Image pull complete.")
}

//region func testLogWriter from the previous step

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//endregion

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

As a next step, we need to create the container. We’ll implement this in a function called createContainer(). This function will first initialize the parameters required for container creation.

The container config contains the container-specific options that are independent of the host it is running on. In our example we provide the container image here, but we can also pass more parameters such as the command to execute, environment variables, and more.

The host config contains the parameters related to the host machine itself. These can be volume mounts, port mappings, and so forth.

The network config contains the network-specific configuration, such as IP addresses used by the container, links to other containers, and aliases you can use to call this container.

Finally, the platform config contains information like the host architecture, operating system, and so on. This is not terribly interesting for our test use case, but we’ll mention it nonetheless.

Once we have all parameters we can call ContainerCreate(). Finally, we’ll set up a cleanup function to make sure the container is stopped and removed when the test case finishes.

package main_test

import (
    "context"
    "io"
    "testing"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/network"
    "github.com/docker/docker/client"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    pullImage(t, cli, "docker.io/minio/minio")
    containerID := createContainer(t, cli, "docker.io/minio/minio")
    t.Logf("Created container with ID %s.", containerID)
}

func createContainer(t *testing.T, cli *client.Client, image string) string {
    t.Helper()
    t.Logf("Creating container from %s...", image)
    var containerConfig = &container.Config{
        Image: image,
    }
    var hostConfig *container.HostConfig
    var networkConfig *network.NetworkingConfig
    var platformConfig *specs.Platform
    var containerName string
    resp, err := cli.ContainerCreate(
        context.Background(),
        containerConfig,
        hostConfig,
        networkConfig,
        platformConfig,
        containerName,
    )
    if err != nil {
        t.Fatalf("failed to create %s container (%v)", image, err)
    }
    t.Logf("Container has ID %s...", resp.ID)

    t.Cleanup(
        func() {
            err := cli.ContainerRemove(
                context.Background(),
                resp.ID,
                types.ContainerRemoveOptions{
                    Force: true,
                },
            )
            if err != nil && !client.IsErrNotFound(err) {
                t.Fatalf("failed to remove container %s (%v)", resp.ID, err)
            }
        },
    )

    return resp.ID
}

//region func pullImage from the previous step

func pullImage(t *testing.T, cli *client.Client, image string) {
    t.Logf("Pulling image %s...", image)
    reader, err := cli.ImagePull(context.Background(), image, types.ImagePullOptions{})
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", image, err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
    t.Logf("Image pull complete.")
}

//endregion

//region func testLogWriter from the previous step

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//endregion

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

Now that we have created a container, we can start it. This is a rather simple operation: we just have to call ContainerStart(). However, just because the container has started doesn’t mean the service is ready, so we will add a loop based on the container health check. This doesn’t work on all container images, but it may be helpful in some cases.

package main_test

import (
    "context"
    "io"
    "testing"
    "time"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/network"
    "github.com/docker/docker/client"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    pullImage(t, cli, "docker.io/minio/minio")
    containerID := createContainer(t, cli, "docker.io/minio/minio")
    startContainer(t, cli, containerID)
}

func startContainer(t *testing.T, cli *client.Client, id string) {
    t.Logf("Starting container %s...", id)
    err := cli.ContainerStart(context.Background(), id, types.ContainerStartOptions{})
    if err != nil {
        t.Fatalf("failed to start container %s (%v)", id, err)
    }
    t.Logf("Container %s is now running, waiting for health check to succeed...", id)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()
    for {
        t.Logf("Inspecting container %s...", id)
        inspectResult, err := cli.ContainerInspect(ctx, id)
        if err != nil {
            t.Fatalf("Failed to inspect container %s (%v)", id, err)
        }
        if inspectResult.State.Health == nil ||
            inspectResult.State.Health.Status == types.Healthy ||
            inspectResult.State.Health.Status == types.NoHealthcheck {
            if inspectResult.State.Running {
                t.Logf("Container %s is now healthy.", id)
                return
            } else {
                t.Logf("Container %s is not running yet, it is %s...", id, inspectResult.State.Status)
            }
        } else {
            t.Logf("Container %s is not healthy yet, health is %s...", id, inspectResult.State.Health.Status)
        }
        select {
        case <-ctx.Done():
            t.Fatalf("Failed to wait for container %s (timeout)", id)
        case <-time.After(time.Second):
        }
    }
}

//region func createContainer from the previous step

func createContainer(t *testing.T, cli *client.Client, image string) string {
    t.Helper()
    t.Logf("Creating container from %s...", image)
    var containerConfig = &container.Config{
        Image: image,
    }
    var hostConfig *container.HostConfig
    var networkConfig *network.NetworkingConfig
    var platformConfig *specs.Platform
    var containerName string
    resp, err := cli.ContainerCreate(
        context.Background(),
        containerConfig,
        hostConfig,
        networkConfig,
        platformConfig,
        containerName,
    )
    if err != nil {
        t.Fatalf("failed to create %s container (%v)", image, err)
    }
    t.Logf("Container has ID %s...", resp.ID)

    t.Cleanup(
        func() {
            err := cli.ContainerRemove(
                context.Background(),
                resp.ID,
                types.ContainerRemoveOptions{
                    Force: true,
                },
            )
            if err != nil && !client.IsErrNotFound(err) {
                t.Fatalf("failed to remove container %s (%v)", resp.ID, err)
            }
        },
    )

    return resp.ID
}

//endregion

//region func pullImage from the previous step

func pullImage(t *testing.T, cli *client.Client, image string) {
    t.Logf("Pulling image %s...", image)
    reader, err := cli.ImagePull(context.Background(), image, types.ImagePullOptions{})
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", image, err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
    t.Logf("Image pull complete.")
}

//endregion

//region func testLogWriter from the previous step

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//endregion

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

Great! So now we have a container running that doesn’t do anything. Isn’t that all wonderful? However, we want our container to be accessible from the host, and we want our test code to be able to use it. In order to do that we will want to create a port mapping to the host that lets us access the service from our test code.

To do that we will add an extra parameter to the createContainer() function that holds the ports that should be mapped. We will specify these ports in this format: 80/tcp. We will extend the host config with port mappings.

We create a port mapping for each specified port. We deliberately leave the host port unspecified, which makes Docker automatically select a free port on the host. This has the benefit that multiple tests can run in parallel without port conflicts.

We will, of course, add the port mapping to our test code, too.

package main_test

import (
    "context"
    "io"
    "testing"
    "time"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/network"
    "github.com/docker/docker/client"
    "github.com/docker/go-connections/nat"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    pullImage(t, cli, "docker.io/minio/minio")
    containerID := createContainer(t, cli, "docker.io/minio/minio", []string{"9000/tcp"})
    startContainer(t, cli, containerID)
}

//region func startContainer from the previous step

func startContainer(t *testing.T, cli *client.Client, id string) {
    t.Logf("Starting container %s...", id)
    err := cli.ContainerStart(context.Background(), id, types.ContainerStartOptions{})
    if err != nil {
        t.Fatalf("failed to start container %s (%v)", id, err)
    }
    t.Logf("Container %s is now running, waiting for health check to succeed...", id)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()
    for {
        t.Logf("Inspecting container %s...", id)
        inspectResult, err := cli.ContainerInspect(ctx, id)
        if err != nil {
            t.Fatalf("Failed to inspect container %s (%v)", id, err)
        }
        if inspectResult.State.Health == nil ||
            inspectResult.State.Health.Status == types.Healthy ||
            inspectResult.State.Health.Status == types.NoHealthcheck {
            if inspectResult.State.Running {
                t.Logf("Container %s is now healthy.", id)
                return
            } else {
                t.Logf("Container %s is not running yet, it is %s...", id, inspectResult.State.Status)
            }
        } else {
            t.Logf("Container %s is not healthy yet, health is %s...", id, inspectResult.State.Health.Status)
        }
        select {
        case <-ctx.Done():
            t.Fatalf("Failed to wait for container %s (timeout)", id)
        case <-time.After(time.Second):
        }
    }
}

//endregion

func createContainer(t *testing.T, cli *client.Client, image string, ports []string) string {
    t.Helper()
    t.Logf("Creating container from %s...", image)
    var containerConfig = &container.Config{
        Image: image,
    }
    var hostConfig = &container.HostConfig{
        PortBindings: map[nat.Port][]nat.PortBinding{},
    }
    for _, port := range ports {
        portString := nat.Port(port)
        hostConfig.PortBindings[portString] = []nat.PortBinding{
            {
                HostIP: "127.0.0.1",
            },
        }
    }
    var networkConfig *network.NetworkingConfig
    var platformConfig *specs.Platform
    var containerName string
    resp, err := cli.ContainerCreate(
        context.Background(),
        containerConfig,
        hostConfig,
        networkConfig,
        platformConfig,
        containerName,
    )
    if err != nil {
        t.Fatalf("failed to create %s container (%v)", image, err)
    }
    t.Logf("Container has ID %s...", resp.ID)

    t.Cleanup(
        func() {
            err := cli.ContainerRemove(
                context.Background(),
                resp.ID,
                types.ContainerRemoveOptions{
                    Force: true,
                },
            )
            if err != nil && !client.IsErrNotFound(err) {
                t.Fatalf("failed to remove container %s (%v)", resp.ID, err)
            }
        },
    )

    return resp.ID
}

//region func pullImage from the previous step

func pullImage(t *testing.T, cli *client.Client, image string) {
    t.Logf("Pulling image %s...", image)
    reader, err := cli.ImagePull(context.Background(), image, types.ImagePullOptions{})
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", image, err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
    t.Logf("Image pull complete.")
}

//endregion

//region func testLogWriter from the previous step

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//endregion

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

Now there will be a port mapping, but our code doesn’t know which port it was mapped to. We can extract that information in the inspect call. We take all the ports and create a map with the host ports that are mapped to container ports.

Finally, we add the port to our test code.

package main_test

import (
    "context"
    "io"
    "strconv"
    "testing"
    "time"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/network"
    "github.com/docker/docker/client"
    "github.com/docker/go-connections/nat"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    pullImage(t, cli, "docker.io/minio/minio")
    containerID := createContainer(t, cli, "docker.io/minio/minio", []string{"9000/tcp"})
    portMappings := startContainer(t, cli, containerID)
    t.Logf("Minio is now available on port %d", portMappings["9000/tcp"])
}

func startContainer(t *testing.T, cli *client.Client, id string) map[string]int {
    t.Logf("Starting container %s...", id)
    err := cli.ContainerStart(context.Background(), id, types.ContainerStartOptions{})
    if err != nil {
        t.Fatalf("failed to start container %s (%v)", id, err)
    }
    t.Logf("Container %s is now running, waiting for health check to succeed...", id)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()
    for {
        t.Logf("Inspecting container %s...", id)
        inspectResult, err := cli.ContainerInspect(ctx, id)
        if err != nil {
            t.Fatalf("Failed to inspect container %s (%v)", id, err)
        }
        if inspectResult.State.Health == nil || inspectResult.State.Health.Status == types.Healthy ||
            inspectResult.State.Health.Status == types.NoHealthcheck {
            if inspectResult.State.Running {
                t.Logf("Container %s is now healthy.", id)

                portMappings := map[string]int{}
                for containerPort, port := range inspectResult.NetworkSettings.Ports {
                    if port != nil {
                        hostPort, err := strconv.ParseInt(port[0].HostPort, 10, 64)
                        if err != nil {
                            t.Fatalf("failed to parse host port: %s", port[0].HostPort)
                        }
                        portMappings[string(containerPort)] = int(hostPort)
                    }
                }
                return portMappings
            } else {
                t.Logf("Container %s is not running yet, it is %s...", id, inspectResult.State.Status)
            }
        } else {
            t.Logf("Container %s is not healthy yet, health is %s...", id, inspectResult.State.Health.Status)
        }
        select {
        case <-ctx.Done():
            t.Fatalf("Failed to wait for container %s (timeout)", id)
        case <-time.After(time.Second):
        }
    }
}

//region func createContainer from the previous step

func createContainer(t *testing.T, cli *client.Client, image string, ports []string) string {
    t.Helper()
    t.Logf("Creating container from %s...", image)
    var containerConfig = &container.Config{
        Image: image,
    }
    var hostConfig = &container.HostConfig{
        PortBindings: map[nat.Port][]nat.PortBinding{},
    }
    for _, port := range ports {
        portString := nat.Port(port)
        hostConfig.PortBindings[portString] = []nat.PortBinding{
            {
                HostIP: "127.0.0.1",
            },
        }
    }
    var networkConfig *network.NetworkingConfig
    var platformConfig *specs.Platform
    var containerName string
    resp, err := cli.ContainerCreate(
        context.Background(),
        containerConfig,
        hostConfig,
        networkConfig,
        platformConfig,
        containerName,
    )
    if err != nil {
        t.Fatalf("failed to create %s container (%v)", image, err)
    }
    t.Logf("Container has ID %s...", resp.ID)

    t.Cleanup(
        func() {
            err := cli.ContainerRemove(
                context.Background(),
                resp.ID,
                types.ContainerRemoveOptions{
                    Force: true,
                },
            )
            if err != nil && !client.IsErrNotFound(err) {
                t.Fatalf("failed to remove container %s (%v)", resp.ID, err)
            }
        },
    )

    return resp.ID
}

//endregion

//region func pullImage from the previous step

func pullImage(t *testing.T, cli *client.Client, image string) {
    t.Logf("Pulling image %s...", image)
    reader, err := cli.ImagePull(context.Background(), image, types.ImagePullOptions{})
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", image, err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
    t.Logf("Image pull complete.")
}

//endregion

//region func testLogWriter from the previous step

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//endregion

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

So far so good. However, there is one important issue we haven’t addressed yet: not all container images define a health check. For those images the inspect loop only tells us that the container process is running, not that the backing service is truly ready, so we may need an additional manual check.

Creating a container from a Dockerfile

Another case you may run into is wanting to build a container image for testing purposes from a local Dockerfile, without pulling it from a registry. To build an image we need to package the build context into a .tar.gz archive and send it to the Docker Engine.

To do that, we create a buildImage() function that takes the image name and a map of file names to their contents. We assemble the tar archive in a background goroutine and feed it through a pipe directly into the ImageBuild() call.

Finally, we modify the test case to build the image.

package main_test

import (
    "archive/tar"
    "compress/gzip"
    "context"
    "io"
    "strconv"
    "testing"
    "time"

    "github.com/docker/docker/api/types"
    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/network"
    "github.com/docker/docker/client"
    "github.com/docker/go-connections/nat"
    specs "github.com/opencontainers/image-spec/specs-go/v1"
)

func TestMyExternalService(t *testing.T) {
    cli := dockerClient(t)

    buildImage(t, cli, "minio", map[string][]byte{
        "Dockerfile": []byte("FROM minio/minio"),
    })
    //pullImage(t, cli, "docker.io/minio/minio")
    containerID := createContainer(t, cli, "minio", []string{"9000/tcp"})
    portMappings := startContainer(t, cli, containerID)
    t.Logf("Minio is now available on port %d", portMappings["9000/tcp"])
}

func buildImage(t *testing.T, cli *client.Client, name string, files map[string][]byte) {
    t.Helper()
    t.Logf("Building local image...")

    buildContext, buildContextWriter := io.Pipe()

    go func() {
        t.Logf("Compiling build context...")
        defer func() {
            t.Logf("Build context compiled.")
            _ = buildContextWriter.Close()
        }()

        gzipWriter := gzip.NewWriter(buildContextWriter)
        defer func() {
            _ = gzipWriter.Close()
        }()

        tarWriter := tar.NewWriter(gzipWriter)
        defer func() {
            _ = tarWriter.Close()
        }()

        for filePath, fileContent := range files {
            header := &tar.Header{
                Name:    filePath,
                Size:    int64(len(fileContent)),
                Mode:    0755,
                ModTime: time.Now(),
            }

            if err := tarWriter.WriteHeader(header); err != nil {
                t.Logf("Failed to write build context file %s header (%v)", filePath, err)
                return
            }

            if _, err := tarWriter.Write(fileContent); err != nil {
                t.Logf("Failed to write build context file %s (%v)", filePath, err)
                return
            }
        }
    }()

    imageBuildResponse, err := cli.ImageBuild(
        context.Background(),
        buildContext,
        types.ImageBuildOptions{
            Tags:       []string{name},
            Dockerfile: "Dockerfile",
        },
    )
    if err != nil {
        t.Fatalf("Failed to build local image (%v)", err)
    }

    t.Logf("Reading build log...")
    if _, err := io.Copy(&testLogWriter{t: t}, imageBuildResponse.Body); err != nil {
        t.Fatalf("Failed to read image build log (%v)", err)
    }
    if err := imageBuildResponse.Body.Close(); err != nil {
        t.Fatalf("Failed to close image build log (%v)", err)
    }
}

//region func startContainer from the previous step

func startContainer(t *testing.T, cli *client.Client, id string) map[string]int {
    t.Logf("Starting container %s...", id)
    err := cli.ContainerStart(context.Background(), id, types.ContainerStartOptions{})
    if err != nil {
        t.Fatalf("failed to start container %s (%v)", id, err)
    }
    t.Logf("Container %s is now running, waiting for health check to succeed...", id)
    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
    defer cancel()
    for {
        t.Logf("Inspecting container %s...", id)
        inspectResult, err := cli.ContainerInspect(ctx, id)
        if err != nil {
            t.Fatalf("Failed to inspect container %s (%v)", id, err)
        }
        if inspectResult.State.Health == nil || inspectResult.State.Health.Status == types.Healthy ||
            inspectResult.State.Health.Status == types.NoHealthcheck {
            if inspectResult.State.Running {
                t.Logf("Container %s is now healthy.", id)

                portMappings := map[string]int{}
                for containerPort, port := range inspectResult.NetworkSettings.Ports {
                    if port != nil {
                        hostPort, err := strconv.ParseInt(port[0].HostPort, 10, 64)
                        if err != nil {
                            t.Fatalf("failed to parse host port: %s", port[0].HostPort)
                        }
                        portMappings[string(containerPort)] = int(hostPort)
                    }
                }
                return portMappings
            } else {
                t.Logf("Container %s is not running yet, it is %s...", id, inspectResult.State.Status)
            }
        } else {
            t.Logf("Container %s is not healthy yet, health is %s...", id, inspectResult.State.Health.Status)
        }
        select {
        case <-ctx.Done():
            t.Fatalf("Failed to wait for container %s (timeout)", id)
        case <-time.After(time.Second):
        }
    }
}

//endregion

//region func createContainer from the previous step

func createContainer(t *testing.T, cli *client.Client, image string, ports []string) string {
    t.Helper()
    t.Logf("Creating container from %s...", image)
    var containerConfig = &container.Config{
        Image: image,
    }
    var hostConfig = &container.HostConfig{
        PortBindings: map[nat.Port][]nat.PortBinding{},
    }
    for _, port := range ports {
        portString := nat.Port(port)
        hostConfig.PortBindings[portString] = []nat.PortBinding{
            {
                HostIP: "127.0.0.1",
            },
        }
    }
    var networkConfig *network.NetworkingConfig
    var platformConfig *specs.Platform
    var containerName string
    resp, err := cli.ContainerCreate(
        context.Background(),
        containerConfig,
        hostConfig,
        networkConfig,
        platformConfig,
        containerName,
    )
    if err != nil {
        t.Fatalf("failed to create %s container (%v)", image, err)
    }
    t.Logf("Container has ID %s...", resp.ID)

    t.Cleanup(
        func() {
            err := cli.ContainerRemove(
                context.Background(),
                resp.ID,
                types.ContainerRemoveOptions{
                    Force: true,
                },
            )
            if err != nil && !client.IsErrNotFound(err) {
                t.Fatalf("failed to remove container %s (%v)", resp.ID, err)
            }
        },
    )

    return resp.ID
}

//endregion

//region func pullImage from the previous step

func pullImage(t *testing.T, cli *client.Client, image string) {
    t.Logf("Pulling image %s...", image)
    reader, err := cli.ImagePull(context.Background(), image, types.ImagePullOptions{})
    if err != nil {
        t.Fatalf("failed to pull container image %s (%v)", image, err)
    }
    if _, err := io.Copy(&testLogWriter{t: t}, reader); err != nil {
        t.Fatalf("failed to stream logs from image pull (%v)", err)
    }
    t.Logf("Image pull complete.")
}

//endregion

//region func testLogWriter from the previous step

type testLogWriter struct {
    t *testing.T
}

func (logger testLogWriter) Write(p []byte) (n int, err error) {
    logger.t.Helper()
    logger.t.Log(string(p))

    return len(p), nil
}

//endregion

//region func dockerClient() from the previous step

func dockerClient(t *testing.T) *client.Client {
    t.Helper()
    t.Logf("Creating Docker client...")
    cli, err := client.NewClientWithOpts()
    if err != nil {
        t.Fatalf("failed to obtain Docker client (%v)", err)
    }
    cli.NegotiateAPIVersion(context.Background())
    serverVersion, err := cli.ServerVersion(context.Background())
    if err != nil {
        t.Fatalf("failed to get server version (%v)", err)
    }
    t.Logf("Docker client ready, server version is %s.", serverVersion.Version)
    return cli
}

//endregion

That’s it! You now have everything you need to build and run containers directly from your test cases. Alternatively, you could take what you’ve learned and integrate with other parts of the Docker API directly instead of shelling out.