AWS: Adding a new MFA device says “This entity already exists” or “MFA device already exists”

A team member was trying to register a new MFA device in AWS and kept being told that one was already registered, even though their account showed no MFA devices at all.

It looks like AWS can show an empty list when it shouldn’t: if the user started the enrollment process but was interrupted before completing it, an orphaned virtual MFA device is left behind. Use the AWS CLI “list-virtual-mfa-devices” subcommand to enumerate the current virtual MFA devices:

$ aws iam list-virtual-mfa-devices
{
    "VirtualMFADevices": [
        {
            "SerialNumber": "arn:aws:iam::326764833890:mfa/karan"
        },
        {
            "SerialNumber": "arn:aws:iam::326764833890:mfa/rachel"
        },
        {
            "SerialNumber": "arn:aws:iam::326764833890:mfa/sarah.benhart"

Now, remove the problematic one using the corresponding SerialNumber value:

$ aws iam delete-virtual-mfa-device --serial-number <SerialNumber value>

You will now be able to restart the enrollment process with them. Make sure to have them remove any stale entries from their authenticator app so they don’t get confused.
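
If you would rather script the cleanup than use the CLI, a rough boto3 equivalent might look like the following. This is just a sketch (boto3 isn’t used above), and it assumes your credentials are already configured:

import boto3

iam = boto3.client('iam')

# Enumerate the virtual MFA devices, including any orphaned, half-enrolled ones.
for device in iam.list_virtual_mfa_devices()['VirtualMFADevices']:
    print(device['SerialNumber'])

# Then delete the problematic one by its serial-number ARN:
# iam.delete_virtual_mfa_device(SerialNumber='<SerialNumber value>')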

go-exfat

I’m a Linux guy, but the lure of a new filesystem spec cannot be denied.

This is a read-only exFAT implementation in pure Go. It provides a direct API, and several command-line tools are also included for inspecting the filesystem’s structure. There’s 90% test coverage and an A+ on the score-card.

https://github.com/dsoprea/go-exfat

Git: Producing a Revert Commit for a Previous Change

Create an inverse commit to flip a previous change. Child’s play:

$ git revert <REFSPEC>

The new commit looks like:

commit 09cc98e3fa121774750728f5fa337befeb02d914
Author: Dustin Oprea <dustin@randomingenuity.com>
Date:   Tue Mar 27 16:02:25 2018 -0400

    Revert "What's the worst that could happen?"

    This reverts commit cf4fc9a50a20a633b82ee28ef9efa46df86db18d.

A lot nicer and a lot more professional than copying and pasting into a new commit or dropping an old commit with a rebase.

It is essentially the same feature that many version-control review systems already provide.

Add Timestamps to Your Bash History

Put this in your profile script (.bashrc):

export HISTTIMEFORMAT="%d/%m/%y %T "

I also find it helpful to increase the size of my command-line history:

export HISTFILESIZE=10000
export HISTSIZE=10000

The first controls how many entries are stored in the history file; the second controls how many are kept in memory for the current session. In newer versions of Bash (4.3+), you can also set them to -1 to keep an effectively unlimited history.

Colored Logging in Python Under Jenkins

Setting up the coloredlogs package is easy. Just install the coloredlogs package with pip, import coloredlogs, and initialize it like the following:

coloredlogs.install(isatty=True)

Note the isatty parameter. By default, coloredlogs detects whether or not you’re in a terminal and enables ANSI output accordingly. This means that you won’t see ANSI output in Jenkins unless you explicitly pass True.
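
Here is a minimal, self-contained sketch; the level and the logger usage are my own choices, not requirements of the package:

import logging

import coloredlogs

_LOGGER = logging.getLogger(__name__)

# Force ANSI output even though Jenkins' console is not a TTY.
coloredlogs.install(level='DEBUG', isatty=True)

_LOGGER.debug("Debug message.")
_LOGGER.info("Info message.")
_LOGGER.warning("Warning message.")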

Make sure to check “Color ANSI Console Output” in your job config. Using “xterm” for “ANSI color map” worked fine for me.

Since coloredlogs.install() adds another handler to the logger, you may see duplicate messages if you already have other handlers attached.

For more information on configuration, see the documentation.

The Numb-Nuts Tutorial to the Celery Distributed Task Queue (using Python)

Celery is a distributed task queue that is very easy to pick up. I’ll do two quick examples: one that sends a job and returns, and another that sends a job and then retrieves a result. I’m going to use SQLite for this example (interfaced via SQLAlchemy). Since Celery seems to have some issues importing SQLite under Python 3, we’ll use Python 2.7. Make sure that you install the “celery” and “sqlalchemy” Python packages.

Without Results

Define the tasks module and save it as sqlite_queue_without_results.py:

import celery

# The broker URI: the SQLAlchemy transport backed by a local SQLite database.
_BACKEND_URI = 'sqla+sqlite:///tasks.sqlite'
_APP = celery.Celery('sqlite_queue_without_results', broker=_BACKEND_URI)

@_APP.task
def some_task(incoming_message):
    return "ECHO: {0}".format(incoming_message)

Start the server:

$ celery -A sqlite_queue_without_results worker --loglevel=info

Execute the following Python to submit a job:

import sqlite_queue_without_results

_ARG1 = "Test message"
sqlite_queue_without_results.some_task.delay(_ARG1)

That’s it. You’ll see something similar to the following in the server window:

[Screenshot: Celery (Without Result)]

With Results

This time, when we define the tasks module, we’ll also provide Celery with a result backend. Call this module sqlite_queue_with_results.py:

import celery

# The result backend stores task return values; the broker carries the task messages.
_RESULT_URI = 'db+sqlite:///results.sqlite'
_BACKEND_URI = 'sqla+sqlite:///tasks.sqlite'

_APP = celery.Celery('sqlite_queue_with_results', broker=_BACKEND_URI, backend=_RESULT_URI)

@_APP.task
def some_task(incoming_message):
    return "ECHO: {0}".format(incoming_message)

Start the server:

$ celery -A sqlite_queue_with_results worker --loglevel=info

Execute the following Python to submit a job:

import sqlite_queue_with_results

_ARG1 = "Test message"
r = sqlite_queue_with_results.some_task.delay(_ARG1)
value = r.get(timeout=2)

print("Result: [{0}]".format(value))

Since we’re using a traditional DBMS (albeit a fast, local one) to store the results, the client internally polls for a state change and then fetches the result. This makes it a more costly operation, so I’ve used a two-second timeout to accommodate it.
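
If you would rather not block in get(), you can also poll the result handle yourself. A minimal sketch, reusing the module above (the polling interval is arbitrary):

import time

import sqlite_queue_with_results

r = sqlite_queue_with_results.some_task.delay("Test message")

# Poll the AsyncResult instead of blocking in get().
while not r.ready():
    time.sleep(0.1)

if r.successful():
    print("Result: [{0}]".format(r.result))
else:
    print("Task failed with state: {0}".format(r.state))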

The server output will be similar to the following:

[Screenshot: Celery (With Result)]

The client output will look like:

Result: [ECHO: Test message]

Celery has many more features not explored by this tutorial, including:

  • exception propagation
  • custom task states (including providing metadata that can be read by the client; see the sketch after this list)
  • task ignore/reject/retry responses
  • HTTP-based tasks (for calling your tasks in another place or language)
  • task routing
  • periodic/scheduled tasks
  • workflows
  • drawing visible graphs in order to inspect behavior
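
As a taste of custom task states, here is a minimal, untested sketch that reports progress from inside a task via update_state() and makes it readable from the client. The module name, state name, and metadata fields are made up for illustration:

import celery

_RESULT_URI = 'db+sqlite:///results.sqlite'
_BACKEND_URI = 'sqla+sqlite:///tasks.sqlite'

_APP = celery.Celery('progress_example', broker=_BACKEND_URI, backend=_RESULT_URI)

@_APP.task(bind=True)
def long_task(self, item_count):
    for i in range(item_count):
        # Publish a custom state with arbitrary metadata for the client to read.
        self.update_state(state='PROGRESS', meta={'done': i, 'total': item_count})

    return "Processed {0} items".format(item_count)

While the task is running, the client can check r.state (which will read "PROGRESS") and r.info (which will hold the metadata dictionary); once it finishes, r.get() returns the final value as usual.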

For more information, see the user guide.