Use ADB to Connect to Your Android Device From a Docker Container

You may have a use-case where you want to write software to manipulate an Android device using a system or set of tools that are not natively available from your current system. However, you might be able to expose these as a Docker image. For example, your device is (or will be) connected to a Windows machine, but you really want or need to use Linux tools.

No problem. ADB implicitly uses a client-server model: the ADB tool (on your system) connects to the ADB server (running in the background on your system), which interacts with the ADB daemon (running on your device). This means that we can forward requests from ADB on the command-line in the guest container in Docker to the ADB server on the host system.

The ADB client and server have to be the same version, or the client will indiscriminately kill/restart your ADB server. So, as I am currently running Ubuntu 14.04 on my host system, I will use the same release in Docker so that the packaged ADB versions match.
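
Since we are choosing the same OS release in Docker, the versions should already agree, but you can confirm by running “adb version” in both environments (the version string shown here is just an example):

$ adb version
Android Debug Bridge version 1.0.31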

First, I will make sure the ADB server is running on my host system. Most of the subcommands will automatically start the local server, but I will start it directly:

$ adb start-server
* daemon not running. starting it now on port 5037 *
* daemon started successfully *

Now, I will start a container in Docker with Ubuntu 14.04 and automatically install ADB before dropping to a prompt. Note that we are passing “--network=host” in order to share the host’s network identity:

$ docker run -i -t --network=host ubuntu:14.04 /bin/bash -c "apt-get update && apt-get install -y android-tools-adb && /bin/bash"

Eventually, you will end up at the prompt. Just do something simple like enumerating the devices:

root@mlll2664:/# adb devices
List of devices attached 
05157df572841820 device

The “mlll2664” hostname represented in the prompt of the Docker container is actually the same hostname as my host system.
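
You can confirm this from the host itself:

$ hostname
mlll2664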

So, there you go. Not too painful.


Verifying Gerrit CRs to Your Jenkins Pipeline’s Shared Libraries

Jenkins pipelines represent a totally different direction from traditional, script-based jobs. Instead of specifying your SCM configuration and other build semantics in your job, you mostly script them out via a pipeline file (“Jenkinsfile”), which is a heterogeneous script/declarative mess. Although you can be purely declarative, this is sometimes too strict to be useful, e.g. not being able to use traditional variable assignments to pass information between steps. Even though there are drawbacks, your whole workflow is largely version-controlled.

One of the drawbacks is the complexity of managing shared-library dependencies that you might have in order to make some of your Java/Groovy logic reusable. You can define these in your project (or, in the case of multibranch pipelines, the folder) or at the admin level. You can also define these on the fly in the code.

Gerrit change-requests are applied essentially by fetching from a pseudo-refspec location (refs/changes/) and then cherry-picking the result in. Therefore, in order to use one, you need to 1) clone, 2) fetch, and 3) either cherry-pick or checkout (or use a couple of other methods). Although you can do this with a little effort in your actual Jenkinsfile (which is configured in the job; you can take the refspec from the environment during a verification and then use “FETCH_HEAD” as your branch), these steps are not intuitively available for the shared-libraries that you might be importing into your pipeline.
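
For reference, this is roughly what those steps look like at the command-line. The repository URL matches the library example below, but the change number (1234) and patchset number (2) are hypothetical:

$ git clone https://repo.host/pipeline/library
$ cd library
$ git fetch origin refs/changes/34/1234/2
$ git cherry-pick FETCH_HEAD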

It turns out that you can massage the on-the-fly library loader to do this for you.

if (env.GERRIT_PATCHSET_REVISION) {
  echo("Using shared-library for verification.")

  library([
    identifier: 'myLibrary@' + env.GERRIT_PATCHSET_REVISION,
    retriever: modernSCM([
      $class: 'GitSCMSource',
      remote: 'https://repo.host/pipeline/library',
      traits: [
        [$class: 'jenkins.plugins.git.traits.BranchDiscoveryTrait'],
        [
          $class: 'RefSpecsSCMSourceTrait',
          templates: [
            [value: '+refs/heads/*:refs/remotes/@{remote}/*'],
            [value: '+refs/changes/*:refs/remotes/@{remote}/*']
          ]
        ]
      ]
    ])
  ])
} else {
  echo("Using shared-library from branch (not a verification).")

  library("myLibrary@" + env.BRANCH_NAME)
}

The principal things to notice are:

  1. We are telling it to bring all of the change-requests into scope (“+refs/changes/*:refs/remotes/@{remote}/*”).
  2. We are telling Jenkins to import exactly the library version tied to the change (“‘myLibrary@’ + env.GERRIT_PATCHSET_REVISION”). This wouldn’t be accessible without (1).

It works great.

I generated the original version of the code by using the Snippet Generator with the “library” step and then modifying it according to the above.

Note that this pipeline can be used both in a multibranch pipeline job context as well as in the normal [single-branch] pipeline job used for verification (because we only want to kick off verification jobs for the branch of the change). env.BRANCH_NAME will automatically be defined in the multibranch context.

Git: Annotate Recent Changes in Blame

Pretty awesome. Pass a duration of time and the blame output will mark the lines from commits older than that with a “^” prefix.

$ git blame --since=3.weeks -- work_deserving_a_promotion.py

Output:

^4412d8c5 (Dustin Oprea 2018-05-17 18:56:11 -0400 1285)                     remote_fil
^4412d8c5 (Dustin Oprea 2018-05-17 18:56:11 -0400 1286)                     attributes
3386b3595 (Dustin Oprea 2018-05-25 19:27:55 -0400 1287) 
^4412d8c5 (Dustin Oprea 2018-05-17 18:56:11 -0400 1288)             elif fnmatch.fnmat
aac11271e (Dustin Oprea 2018-05-27 02:52:29 -0400 1289)                 # If we're bui
aac11271e (Dustin Oprea 2018-05-27 02:52:29 -0400 1290)                 # and test-key

Thanks to this SO answer.

Git: Putting All Submodules on Their Branches

By default, submodules are initialized in a detached-head state and are not made to track specific branches, even when you specify a branch when initially adding the submodule. This means that any commits you produce will not be on a particular branch, and no branch head will be updated to point to your new commits (you would not be able to push them, at least not in the way you expect). This is fine where there is no active development but, otherwise, you would likely need to intervene and individually check out each submodule to its branch.

Assuming you specified a branch when you added the submodule, you can use the “git submodule foreach” subcommand to automate this:

git submodule foreach --recursive 'git checkout $(git config -f .gitmodules --get submodule.$name.branch)'

You can run this from your superproject or qualify the “.gitmodules” filename with its path.

If you need something more complicated, you can obviously write a script and call it from this context.
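
For example, here is a minimal sketch of such a script (the filename and the “master” fallback are my own choices): it checks out the configured branch for each submodule and falls back to “master” when none was recorded. The foreach-provided variables are passed in as arguments:

#!/bin/bash -e
# checkout_branch.sh: call from the superproject as:
#
#   git submodule foreach --recursive '/path/to/checkout_branch.sh "$name" "$toplevel"'

NAME=$1
TOPLEVEL=$2

# Read the branch recorded for this submodule (empty if not set).
BRANCH=$(git config -f "${TOPLEVEL}/.gitmodules" --get "submodule.${NAME}.branch" || true)

# Fall back to "master" when no branch was configured.
if [[ -z "${BRANCH}" ]]; then
    BRANCH=master
fi

git checkout "${BRANCH}"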

Git: Automatically Squashing at the Prompt

I do a huge amount of squashing, every day of the week. Ever the kind of engineer who wishes to optimize every single redundant operation, I wrote a simple script and then aliased it in my shell. When I do a commit that I know I will be squashing into the previous commit, I simply do a “git commit -m SQUASH -a” and then run “SQUASH_LAST” (my alias, which is autocompleted) to run the squash. The script verifies that the last commit message starts with “SQUASH” (for verification/sanity), executes the squash, and then prints the current commit, previous commit, and final commit revisions.

It is extremely convenient and saves a ton of time and annoyingly-repetitive steps.
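
Day-to-day, the whole workflow looks like this (hashes elided):

$ git commit -m SQUASH -a
$ SQUASH_LAST
Initial head: <hash>
Head after reset: <hash>
Head after commit: <hash>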

The script (which I put in my home):

#!/bin/bash -e

HEAD_COMMIT_MESSAGE=$(git log --format=%B -1 HEAD)

# For safety. Our use-case is usually to always just squash into a commit
# that's associated with an active change. We really don't want to lose our
# head and accidentally squash something that wasn't intended to be squashed.
if [[ "${HEAD_COMMIT_MESSAGE}" != SQUASH* ]]; then
    echo "SQUASH: Commit to be squashed should have 'SQUASH' as its commit-message."
    exit 1
fi

_FILEPATH=$(mktemp)
git log --format=%B -1 HEAD~1 >"${_FILEPATH}"

echo "Initial head: $(git rev-parse HEAD)"

git reset --soft HEAD~2 >/dev/null

echo "Head after reset: $(git rev-parse HEAD)"

git commit -F "${_FILEPATH}" >/dev/null
rm "${_FILEPATH}"

echo "Head after commit: $(git rev-parse HEAD)"

echo

The alias (for completeness):

alias SQUASH_LAST='<filepath>'

It really is about the little things.

I have also put the script into a gist.

Go: Testing Against Application Binaries

Unit-testing in Go is simple and a pleasure. The minimum structure required to do unit-tests is scarcely more than that required to write any other kind of code. In fact, most of the time it is so easy that any time you spend debugging before you have written unit-tests is arguably wasted.

However, it may take a little more thought to test your executables. Even though you can still have a unit-testing source-file (“*_test.go”) and you can call your main() to do something, it’s non-trivial to capture your output and/or pass arguments:

  • You might end up using os.Pipe() to hook stdout/stderr and launching a goroutine to read from the other end, but you might have issues.
  • Your test might call back into the executable in os.Args[0] (the tests run from a test-specific binary generated by the testing process), but this won’t accept the arbitrary command-line arguments required by your application.
  • You might wrap a call to “go test” and try to pass “-args” (“-args” is like “--” for tests, where all following arguments are passed verbatim), but I have had issues with this.

Naturally, you want to avoid having to kick-off a build of your application at the top of the tests in order to have something to test against.

You can use “go run” with exec.Command (in os/exec) to easily accomplish all of this while still avoiding a manual build. You can even provide it alternative io.Writer instances in order to capture stdout/stderr output.

Example:

package main

import (
    "bytes"
    "fmt"
    "os"
    "os/exec"
    "path"
    "testing"
)

var (
    assetsPath = ""
    appFilepath = ""
)

func TestMain(t *testing.T) {
    imageFilepath := path.Join(assetsPath, "NDM_8901.jpg")

    cmd := exec.Command(
            "go", "run", appFilepath,
            "-filepath", imageFilepath)

    b := new(bytes.Buffer)
    cmd.Stdout = b
    cmd.Stderr = b

    err := cmd.Run()
    actual := b.String()

    if err != nil {
        fmt.Print(actual)
        panic(err)
    }

    expected := `IFD=[IfdIdentity] ID=(0x010f) NAME=[Make] COUNT=(6) TYPE=[ASCII] VALUE=[Canon]
IFD=[IfdIdentity] ID=(0x0110) NAME=[Model] COUNT=(22) TYPE=[ASCII] VALUE=[Canon EOS 5D Mark III]
IFD=[IfdIdentity] ID=(0x0112) NAME=[Orientation] COUNT=(1) TYPE=[SHORT] VALUE=[1]
IFD=[IfdIdentity] ID=(0x011a) NAME=[XResolution] COUNT=(1) TYPE=[RATIONAL] VALUE=[72/1]
IFD=[IfdIdentity] ID=(0x011b) NAME=[YResolution] COUNT=(1) TYPE=[RATIONAL] VALUE=[72/1]
IFD=[IfdIdentity] ID=(0x0128) NAME=[ResolutionUnit] COUNT=(1) TYPE=[SHORT] VALUE=[2]
...
IFD=[IfdIdentity] ID=(0x0128) NAME=[ResolutionUnit] COUNT=(1) TYPE=[SHORT] VALUE=[2]
IFD=[IfdIdentity] ID=(0x0201) NAME=[JPEGInterchangeFormat] COUNT=(1) TYPE=[LONG] VALUE=[11444]
IFD=[IfdIdentity] ID=(0x0202) NAME=[JPEGInterchangeFormatLength] COUNT=(1) TYPE=[LONG] VALUE=[21491]
`

    if actual != expected {
        t.Fatalf("Output not as expected:\n%s", actual)
    }
}

func init() {
    goPath := os.Getenv("GOPATH")

    assetsPath = path.Join(goPath, "src", "github.com", "dsoprea", "go-exif", "assets")
    appFilepath = path.Join(goPath, "src", "github.com", "dsoprea", "go-exif", "exif-read-tool", "main.go")
}