Go: Write RPC-Connected Plugins

Go's built-in plugin package loads plugins as shared libraries, but that is only one approach to the general problem. With go-plugin, a plugin is an ordinary executable that the host process launches and talks to over RPC, so you can write a plugin once in Go and load it, on the fly, from any number of other programs. Courtesy of HashiCorp.

https://github.com/hashicorp/go-plugin
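
To give a sense of how the pieces fit together, here is a minimal host-side sketch that closely follows the pattern of the project's basic net/rpc example. The Greeter interface, the handshake values, and the "./greeter-plugin" binary name are illustrative placeholders, so treat this as an outline rather than the repository's canonical code.

package main

// The interface, the net/rpc shims, and the plugin.Plugin implementation are
// normally placed in a package shared by the host and the plugin binary; they
// are inlined here to keep the sketch self-contained.

import (
    "fmt"
    "net/rpc"
    "os/exec"

    "github.com/hashicorp/go-plugin"
)

// Greeter is the functionality the plugin provides (placeholder interface).
type Greeter interface {
    Greet() string
}

// GreeterRPC is the client-side shim the host uses to call into the plugin.
type GreeterRPC struct{ client *rpc.Client }

func (g *GreeterRPC) Greet() string {
    var resp string
    if err := g.client.Call("Plugin.Greet", new(interface{}), &resp); err != nil {
        panic(err)
    }

    return resp
}

// GreeterRPCServer is the server-side shim the plugin process exposes.
type GreeterRPCServer struct{ Impl Greeter }

func (s *GreeterRPCServer) Greet(args interface{}, resp *string) error {
    *resp = s.Impl.Greet()
    return nil
}

// GreeterPlugin ties the two shims together for the net/rpc transport.
type GreeterPlugin struct{ Impl Greeter }

func (p *GreeterPlugin) Server(*plugin.MuxBroker) (interface{}, error) {
    return &GreeterRPCServer{Impl: p.Impl}, nil
}

func (p *GreeterPlugin) Client(b *plugin.MuxBroker, c *rpc.Client) (interface{}, error) {
    return &GreeterRPC{client: c}, nil
}

// The handshake lets the host and plugin verify they speak the same protocol.
var handshakeConfig = plugin.HandshakeConfig{
    ProtocolVersion:  1,
    MagicCookieKey:   "BASIC_PLUGIN",
    MagicCookieValue: "hello",
}

func main() {
    // Launch the plugin executable as a subprocess and connect to it.
    client := plugin.NewClient(&plugin.ClientConfig{
        HandshakeConfig: handshakeConfig,
        Plugins: map[string]plugin.Plugin{
            "greeter": &GreeterPlugin{},
        },
        Cmd: exec.Command("./greeter-plugin"),
    })
    defer client.Kill()

    rpcClient, err := client.Client()
    if err != nil {
        panic(err)
    }

    // Dispense returns the client-side shim, which satisfies Greeter.
    raw, err := rpcClient.Dispense("greeter")
    if err != nil {
        panic(err)
    }

    greeter := raw.(Greeter)
    fmt.Println(greeter.Greet())
}

The plugin binary itself is just another Go program that calls plugin.Serve() with the same handshake configuration and a GreeterPlugin wrapping its concrete implementation; go-plugin launches it as a subprocess, negotiates the handshake, and bridges the calls over net/rpc (gRPC is also supported).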


Python: Command-Line Completion for argparse

argcomplete adds BASH tab-completion to argparse-based scripts, and you get it essentially for free with just a couple of steps.

Implementation

Put some markup below the shebang of your frontend script in the form of a comment:

#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK

argcomplete’s BASH-completion hook identifies and scans any script with a Python shebang when completion is attempted, and this entails actually running the script. To minimize the time spent loading scripts that do not actually use argcomplete, the hook ignores anything that does not have this comment directly following the shebang.

Next, add an import for the argcomplete package and call argcomplete.autocomplete(parser) after you have configured your command-line parameters but before your call to parser.parse_args() (where parser is an instance of argparse.ArgumentParser). When the completion hook invokes your script, this call emits the possible completions and then terminates; during a normal run it does nothing.

That is it. Note that it is not practical to assume that everyone who uses your script will have argcomplete installed: they may not be using BASH (the only well-supported shell at this time), they may not be on a supported OS, and commercial environments that adopt your tools may be server environments that have no use for command-line completion and refuse to support it. Therefore, you should wrap the import in a try-except for ImportError and only call argcomplete.autocomplete() if the package could be imported, as in the sketch below.
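
Putting the pieces together, a frontend script might look like the following minimal sketch (the --name option is just a placeholder argument):

#!/usr/bin/env python
# PYTHON_ARGCOMPLETE_OK

import argparse

try:
    import argcomplete
except ImportError:
    argcomplete = None

parser = argparse.ArgumentParser(description="argcomplete example frontend")
parser.add_argument('--name', help="A placeholder option")

# If the completion hook is driving this invocation, this call emits the
# completions and exits; otherwise it does nothing (and is skipped entirely
# when argcomplete is not installed).
if argcomplete is not None:
    argcomplete.autocomplete(parser)

args = parser.parse_args()
print(args.name)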

Installation

To install argcomplete, the simplest route is to run “sudo pip install argcomplete” and then call “activate-global-python-argcomplete” (a physical script, likely installed to /usr/local/bin). This only has to be done once and installs a non-project-specific completion hook that works for any script equipped to use argcomplete. For other configuration options and semantics, see the project page.

Go: Parsing Time Expressions

go-time-parse will parse time expressions into time.Duration quantities. From the example:

actualDuration, phraseType, err := ParseDuration("24 days from now")
log.PanicIf(err)

fmt.Printf("%d [%s]\n", actualDuration/time.Hour/24, phraseType)

actualDuration, phraseType, err = ParseDuration("now")
log.PanicIf(err)

fmt.Printf("%d [%s]\n", actualDuration, phraseType)

actualDuration, phraseType, err = ParseDuration("12m")
log.PanicIf(err)

fmt.Printf("%d [%s]\n", actualDuration/time.Minute, phraseType)

actualDuration, phraseType, err = ParseDuration("every 6 hours")
log.PanicIf(err)

fmt.Printf("%d [%s]\n", actualDuration/time.Hour, phraseType)

Output:

24 [time]
0 [time]
12 [interval]
6 [interval]

Go: Exif Reader/Writer Library

The go-exif project is now available. It allows you to parse and enumerate/visit/search/dump the existing IFDs/tags in an Exif blob, instantiate a builder to create and construct a new Exif blob, and create a builder from existing IFDs/tags (so you can add/remove starting from what you have). There are also utility functions to make the GPS data manageable.

There are currently 140 unit-tests in the CI process and tested examples covering enumeration, building, thumbnails, GPS, etc…

I have also published go-jpeg-image-structure and go-png-image-structure to actually implement reading/writing Exif in those corresponding formats. PNG adopted Exif support in 2017 and this project was primarily meant to provide PNG with fully-featured Exif-writer support both via library and via command-line tool.

go-exif also includes a command-line utility that will generally find and parse Exif data in any blob of data. This works for TIFF right off the bat (TIFF is the underlying format of Exif), even though I did not write a format-specific wrapper implementation for it.
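
To give a feel for the read side, here is a minimal sketch that dumps the flattened tags from a file. The helper names (SearchFileAndExtractExif, GetFlatExifData) are quoted from memory and may differ between versions, so treat this as an outline and consult the project's tested examples for the current API.

package main

import (
    "fmt"
    "os"

    "github.com/dsoprea/go-exif"
)

func main() {
    // Locate and extract the raw Exif blob from whatever file was given.
    rawExif, err := exif.SearchFileAndExtractExif(os.Args[1])
    if err != nil {
        panic(err)
    }

    // Flatten the IFD tree into a simple list of tags and dump them.
    tags, err := exif.GetFlatExifData(rawExif)
    if err != nil {
        panic(err)
    }

    for _, tag := range tags {
        fmt.Printf("%+v\n", tag)
    }
}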


Go: Implementing Subcommands With go-flags

github.com/jessevdk/go-flags is the go-to tool for argument processing. It supports subcommands, but figuring out how to wire them up takes a bit of reverse-engineering. So, here is an example.

Code:

package main

import (
    "os"

    "github.com/jessevdk/go-flags"
)

type readParameters struct {
}

type writeParameters struct {
}

type parameters struct {
    Verbose bool            `short:"v" long:"verbose" description:"Display logging"`
    Read    readParameters  `command:"read" alias:"r" description:"Read functions"`
    Write   writeParameters `command:"write" alias:"w" description:"Write functions"`
}

var (
    arguments = new(parameters)
)

func main() {
    p := flags.NewParser(arguments, flags.Default)

    _, err := p.Parse()
    if err != nil {
        os.Exit(-1)
    }

    switch p.Active.Name {
    case "read":
        //...
    case "write":
        //...
    }
}

If you were to save it as "args.go", this is what the help and the usage would look like:

$ go run args.go -h
Usage:
  args [OPTIONS] 

Application Options:
  -v, --verbose  Display logging

Help Options:
  -h, --help     Show this help message

Available commands:
  read   Read functions (aliases: r)
  write  Write functions (aliases: w)

exit status 255

$ go run args.go read 

Implementing Sessions Under AppEngine With Go

The go-appengine-sessioncascade project provides a simple and intuitive package named cascadestore that can use Memcache, Datastore, the request context, or any combination of them as session backends under AppEngine.

Example:

package handlers

import (
    "net/http"

    "google.golang.org/appengine"
    "google.golang.org/appengine/log"

    "github.com/dsoprea/goappenginesessioncascade"
)

const (
    sessionName = "MainSession"
)

var (
    sessionSecret = []byte("SessionSecret")
    sessionStore  = cascadestore.NewCascadeStore(cascadestore.DistributedBackends, sessionSecret)
)

func HandleRequest(w http.ResponseWriter, r *http.Request) {
    ctx := appengine.NewContext(r)

    if session, err := sessionStore.Get(r, sessionName); err != nil {
        panic(err)
    } else {
        if vRaw, found := session.Values["ExistingKey"]; !found {
            log.Debugf(ctx, "Existing value not found.")
        } else {
            v := vRaw.(string)
            log.Debugf(ctx, "Existing value: [%s]", v)
        }

        session.Values["NewKey"] = "NewValue"
        if err := session.Save(r, w); err != nil {
            panic(err)
        }
    }
}

Efficiently Processing GPX Files in Go

Use gpxreader to process a GPX file of any size without reading the whole thing into memory. It also works around an issue with Go’s XML decoder: you can decode one node at a time, but doing so implicitly ignores all child nodes (the decoder automatically seeks to the matching close tag for validation, with no way to disable this behavior).

An excerpt of the test-script from the project:

//...

func (gv *gpxVisitor) GpxOpen(gpx *gpxreader.Gpx) error {
    fmt.Printf("GPX: %s\n", gpx)

    return nil
}

func (gv *gpxVisitor) GpxClose(gpx *gpxreader.Gpx) error {
    return nil
}

func (gv *gpxVisitor) TrackOpen(track *gpxreader.Track) error {
    fmt.Printf("Track: %s\n", track)

    return nil
}

func (gv *gpxVisitor) TrackClose(track *gpxreader.Track) error {
    return nil
}

func (gv *gpxVisitor) TrackSegmentOpen(trackSegment *gpxreader.TrackSegment) error {
    fmt.Printf("Track segment: %s\n", trackSegment)

    return nil
}

func (gv *gpxVisitor) TrackSegmentClose(trackSegment *gpxreader.TrackSegment) error {
    return nil
}

func (gv *gpxVisitor) TrackPointOpen(trackPoint *gpxreader.TrackPoint) error {
    return nil
}

func (gv *gpxVisitor) TrackPointClose(trackPoint *gpxreader.TrackPoint) error {
    fmt.Printf("Point: %s\n", trackPoint)

    return nil
}

//...

func main() {
    var gpxFilepath string

    o := readOptions()

    gpxFilepath = o.GpxFilepath

    f, err := os.Open(gpxFilepath)
    if err != nil {
        panic(err)
    }

    defer f.Close()

    gv := newGpxVisitor()
    gp := gpxreader.NewGpxParser(f, gv)

    err = gp.Parse()
    if err != nil {
        print("Error: %s\n", err.Error())
        os.Exit(1)
    }
}

Output:

$ gpxreadertest -f 20140909.gpx
GPX: GPX
Track: Track
Track segment: TrackSegment
Point: TrackPoint
Point: TrackPoint
Point: TrackPoint

Processing Text for Sentiment and Other Good Stuff

textblob integrates nltk and pattern. It allows you to easily extract and derive information from a passage of text.

To install:

$ sudo pip install textblob

Based on the project's example:

import textblob
import pprint

text = '''
The titular threat of The Blob has always struck me as the ultimate movie monster: an insatiably hungry, amoeba-like mass able to penetrate virtually any safeguard, capable of--as a doomed doctor chillingly describes it--"assimilating flesh on contact. Snide comparisons to gelatin be darned, it's a concept with the most devastating of potential consequences, not unlike the grey goo scenario proposed by technological theorists fearful of artificial intelligence run rampant.
'''

blob = textblob.TextBlob(text)

# Get parts of speech.
blob.tags

# Get list of individual noun-phrases.
blob.noun_phrases

# Print sentence and sentiment polarity:
for sentence in blob.sentences:
    print(sentence)
    print('')
    print(sentence.sentiment.polarity)
    print('')
    print('--')
    print('')

# Convert to Spanish.
blob.translate(to="es")

Output:

>>> import textblob
>>> import pprint
>>> 
>>> text = '''
... The titular threat of The Blob has always struck me as the ultimate movie monster: an insatiably hungry, amoeba-like mass able to penetrate virtually any safeguard, capable of--as a doomed doctor chillingly describes it--"assimilating flesh on contact. Snide comparisons to gelatin be darned, it's a concept with the most devastating of potential consequences, not unlike the grey goo scenario proposed by technological theorists fearful of artificial intelligence run rampant.
... '''
>>> 
>>> blob = textblob.TextBlob(text)
>>>
>>> # Get parts of speech.
>>> blob.tags
[(u'The', u'DT'), (u'titular', u'JJ'), (u'threat', u'NN'), (u'of', u'IN'), (u'The', u'DT'), (u'Blob', u'NNP'), (u'has', u'VBZ'), (u'always', u'RB'), (u'struck', u'VBD'), (u'me', u'PRP'), (u'as', u'IN'), (u'the', u'DT'), (u'ultimate', u'JJ'), (u'movie', u'NN'), (u'monster', u'NN'), (u'an', u'DT'), (u'insatiably', u'RB'), (u'hungry', u'JJ'), (u'amoeba-like', u'JJ'), (u'mass', u'NN'), (u'able', u'JJ'), (u'to', u'TO'), (u'penetrate', u'VB'), (u'virtually', u'RB'), (u'any', u'DT'), (u'safeguard', u'VB'), (u'capable', u'JJ'), (u'of--as', u'JJ'), (u'a', u'DT'), (u'doomed', u'VBN'), (u'doctor', u'NN'), (u'chillingly', u'RB'), (u'describes', u'VBZ'), (u'it', u'PRP'), (u'assimilating', u'VBG'), (u'flesh', u'NN'), (u'on', u'IN'), (u'contact', u'NN'), (u'Snide', u'NNP'), (u'comparisons', u'NNS'), (u'to', u'TO'), (u'gelatin', u'NN'), (u'be', u'VB'), (u'darned', u'JJ'), (u'it', u'PRP'), (u"'", u'POS'), (u's', u'PRP'), (u'a', u'DT'), (u'concept', u'NN'), (u'with', u'IN'), (u'the', u'DT'), (u'most', u'RBS'), (u'devastating', u'JJ'), (u'of', u'IN'), (u'potential', u'JJ'), (u'consequences', u'NNS'), (u'not', u'RB'), (u'unlike', u'IN'), (u'the', u'DT'), (u'grey', u'JJ'), (u'goo', u'NN'), (u'scenario', u'NN'), (u'proposed', u'VBN'), (u'by', u'IN'), (u'technological', u'JJ'), (u'theorists', u'NNS'), (u'fearful', u'JJ'), (u'of', u'IN'), (u'artificial', u'JJ'), (u'intelligence', u'NN'), (u'run', u'VB'), (u'rampant', u'JJ')]
>>>
>>> # Get list of individual noun-phrases.
>>> blob.noun_phrases
WordList([u'titular threat', 'blob', u'ultimate movie monster', u'amoeba-like mass', 'snide', u'potential consequences', u'grey goo scenario', u'technological theorists fearful', u'artificial intelligence run rampant'])
>>>
>>> # Print sentence and sentiment polarity:
>>> for sentence in blob.sentences:
...     print(sentence)
...     print('')
...     print(sentence.sentiment.polarity)
...     print('')
...     print('--')
...     print('')
... 

The titular threat of The Blob has always struck me as the ultimate movie monster: an insatiably hungry, amoeba-like mass able to penetrate virtually any safeguard, capable of--as a doomed doctor chillingly describes it--"assimilating flesh on contact.

0.06

--

Snide comparisons to gelatin be darned, it's a concept with the most devastating of potential consequences, not unlike the grey goo scenario proposed by technological theorists fearful of artificial intelligence run rampant.


-0.341666666667

--

>>>
>>> # Convert to Spanish.
>>> blob.translate(to="es")
TextBlob("La amenaza titular de The Blob siempre me ha parecido como el último monstruo de la película: una, la masa insaciablemente hambriento ameba capaz de penetrar prácticamente cualquier salvaguardia, capaz de - como médico condenado escalofriantemente lo describe - "asimilar carne en contacto. comparaciones Snide a la gelatina ser condenados, es un concepto con el más devastador de las posibles consecuencias, no muy diferente del escenario plaga gris propuesta por los teóricos tecnológicos temerosos de la inteligencia artificial ejecutar rampante.")

Awesome, right?

Subversion from Python

Generally, it’s preferable to bind to libraries rather than executables when given the option. In my case, I needed SVN access from Python and couldn’t, at that time, find a confidence-inspiring library to work with. So, I wrote svn.
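
For a quick taste of the svn package, a sketch along these lines should work; the repository URL is a placeholder and the method names are from memory, so check the project README for the current API.

import svn.remote

# Point a client at a repository URL (placeholder).
r = svn.remote.RemoteClient('https://svn.example.org/repo/trunk')

# Print repository metadata (revision, UUID, etc.) as a dictionary.
print(r.info())

# Check out a working copy.
r.checkout('/tmp/working-copy')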

It turns out that there is also a Subversion-sponsored Python project, pysvn, which looks to be SWIG-based. Under Ubuntu, it comes from the python-svn Apt package.

The Programmer’s Guide has the following examples, among others:

cat:

import pysvn
client = pysvn.Client()
file_content = client.cat('file.txt')

ls:

import pysvn
client = pysvn.Client()
entry_list = client.ls('.')

info:

import pysvn
client = pysvn.Client()
entry = client.info('.')