Split a Media File by a List of Time Offsets

We’ll split a single audio file containing the whole Quake soundtrack using SplitMedia.

The list file:

0:00:00 Quake Theme
0:05:08 Aftermath
0:07:34 The Hall of Souls
...
1:08:21 Scourge of Armagon 4
1:11:34 Scourge of Armagon 5
...
1:39:57 Dissolution of Eternity 6 
1:43:01 Dissolution of Eternity 7
1:46:07 Dissolution of Eternity 8

The command:

$ splitmedia Quake\ Soundtrack.m4a list_file.quake quake_output
OFF 000:00:00.000 DUR 000308.000 01_QuakeTheme.m4a
OFF 000:05:08.000 DUR 000146.000 02_Aftermath.m4a
OFF 000:07:34.000 DUR 000500.000 03_TheHallofSouls.m4a
...
OFF 001:08:21.000 DUR 000193.000 14_ScourgeofArmagon4.m4a
OFF 001:11:34.000 DUR 000193.000 15_ScourgeofArmagon5.m4a
...
OFF 001:39:57.000 DUR 000184.000 24_DissolutionofEternity6.m4a
OFF 001:43:01.000 DUR 000186.000 25_DissolutionofEternity7.m4a
OFF 001:46:07.000 DUR 000000.000 26_DissolutionofEternity8.m4a
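
SplitMedia does the bookkeeping for you, but, if you just need a quick approximation, you can parse the list-file yourself and delegate the cutting to ffmpeg. A minimal sketch (the offset parsing and filename mangling here are my own, not SplitMedia's):

import subprocess
import sys

def _parse_offset(text):
    # "1:08:21" -> 4101 (seconds).
    seconds = 0
    for part in text.split(':'):
        seconds = seconds * 60 + int(part)

    return seconds

def _split(source_filepath, list_filepath, output_path):
    entries = []
    with open(list_filepath) as f:
        for line in f:
            offset_text, title = line.strip().split(' ', 1)
            entries.append((_parse_offset(offset_text), title))

    for i, (offset_s, title) in enumerate(entries):
        filename = '{0:02d}_{1}.m4a'.format(i + 1, title.replace(' ', ''))

        cmd = ['ffmpeg', '-i', source_filepath, '-ss', str(offset_s)]

        # Every track but the last ends where the next one begins.
        if i + 1 < len(entries):
            cmd += ['-to', str(entries[i + 1][0])]

        cmd += ['-c', 'copy', output_path + '/' + filename]
        subprocess.check_call(cmd)

if __name__ == '__main__':
    _split(*sys.argv[1:4])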

Create a Video From Your Processing Sketch (Using the IDE)

A quick example to show how to create a video from Processing 2.0. Here, I’m using Python mode. We write each frame out to a file and then use the built-in Movie Maker tool to create a QuickTime video (you can also attach audio if you desire).

Example code:

def setup():
    size(500, 500)
    fill(0)

def draw():
    background(255, 255, 255)

    if mousePressed:
        ellipse(mouseX, mouseY, 80, 80)

    saveFrame("frames/frame-#####.png")

For those who aren’t familiar, this just configures the canvas and then clears it on every redraw. If you’re pressing the mouse button, it draws a circle wherever the cursor is. At the end of each redraw, it captures one PNG image. It’ll implicitly create the “frames” folder if it doesn’t already exist.

Now, we open Movie Maker:

Open Movie Maker

Click the top “Choose…” button to select your “frames” directory (or whatever you called it):

Dialog

Click “Create Movie…”, select your video-file name/path, and watch it go:

Make Movie

The final result (in my case):

Final Result

For a library-based approach, look into GSVideo.

Drawing to a Video Using OpenCV and Python

I ran into a considerable amount of difficulty writing a video-file using OpenCV (under Python). Almost every video-writing example on the Internet is only concerned with capturing from a webcam, and, even for the relevant examples, I kept getting an empty/insubstantial file.

In order to write a video file, you need to declare the FOURCC code that you require. I prefer H.264, so I [unsuccessfully] gave it “H264”. I had also heard somewhere that, since H.264 is just the standard, I needed to use “X264” to refer to the codec. That didn’t work either. I also tried “XVID” and “DIVX”. I eventually resorted to passing (-1), which allegedly prompts you to make a choice (thereby showing you what options are available). Naturally, no prompt was given, and yet it still seemed to execute to the end. There doesn’t appear to be any way to list the available codecs. I was out of options.

It turns out that you still have one or more raw-format codecs available; for example, “8BPS” and “IYUV” both work. MJPEG (“MJPG”) also ended up working. This is the best of these options, since it at least gives us some compression.

It’s important to note that the nicer codecs might not have been available simply due to missing dependencies. At one point, I reinstalled OpenCV (using Brew) with the “--with-ffmpeg” option. This seemed to pull down XVID and other codecs. However, I still had the same problems. Note that, since FFmpeg was installed by the time I tested “MJPG”, the latter may actually require the former.

Code, using MJPEG:

import cv2
import cv
import numpy as np

_CANVAS_WIDTH = 500
_CANVAS_HEIGHT = 500
_COLOR_DEPTH = 3
_CIRCLE_RADIUS = 40
_STROKE_THICKNESS = -1
_VIDEO_FPS = 1

def _make_image(x, y, b, g, r):
    # OpenCV expects BGR channel order, and NumPy images are indexed as
    # (rows, columns); the order doesn't matter here since the canvas is
    # square.
    img = np.zeros((_CANVAS_HEIGHT, _CANVAS_WIDTH, _COLOR_DEPTH), np.uint8)

    position = (x, y)
    color = (b, g, r)

    # A negative thickness draws a filled circle.
    cv2.circle(img, position, _CIRCLE_RADIUS, color, _STROKE_THICKNESS)

    return img

def _make_video(filepath):
    # Works without FFMPEG.
    #fourcc = cv.CV_FOURCC('8', 'B', 'P', 'S')

    # Works, but we don't have a viewer for it.
    #fourcc = cv.CV_FOURCC('i','Y','U', 'V')

    # Works (but might require FFMPEG).
    fourcc = cv.CV_FOURCC('M', 'J', 'P', 'G')

    # Prompt. This never works, though (the prompt never shows).
    #fourcc = -1

    w = cv2.VideoWriter(
            filepath,
            fourcc,
            _VIDEO_FPS,
            (_CANVAS_WIDTH, _CANVAS_HEIGHT))

    img = _make_image(100, 100, 0, 0, 255)
    w.write(img)

    img = _make_image(200, 200, 0, 255, 0)
    w.write(img)

    img = _make_image(300, 300, 255, 0, 0)
    w.write(img)

    w.release()

if __name__ == '__main__':
    _make_video('video.avi')
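
For what it’s worth, the standalone cv module disappeared in OpenCV 3.x, and the FOURCC helper now lives on cv2 itself (cv2.VideoWriter_fourcc). A minimal sketch of the same writer setup on a newer build (same MJPEG choice as above):

import cv2
import numpy as np

# OpenCV 3+ exposes the FOURCC helper directly on cv2.
fourcc = cv2.VideoWriter_fourcc(*'MJPG')

w = cv2.VideoWriter('video.avi', fourcc, 1, (500, 500))

# Write a few solid-gray frames, just to prove that the writer produces output.
for _ in range(3):
    frame = np.full((500, 500, 3), 128, np.uint8)
    w.write(frame)

w.release()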

Open Health APIs with SMART on FHIR

SMART on FHIR is an initiative to create open-standard health APIs (SMART) on open-standard health data-formats (FHIR).

Here is a tutorial with general SMART/FHIR notes and a sample project to query the SMART API on the public sandbox server and plot the data using Seaborn:

SMARTOnFHIRExample

The tutorial also has information on how to boot a sandbox server with Vagrant.

Screenshots

Community diastolic blood-pressure:

Community diastolic blood-pressure

Community systolic blood-pressure:

Community systolic blood-pressure

Using NetworkX to Plot Graphs

I’ve previously mentioned graphviz for plotting graphs. In truth, those plots resemble flowcharts. To create something that looks like a more traditional vertex-and-edge representation, you might consider NetworkX.

Whereas graphviz is a fairly general-purpose utility that is not specific to Python and is built around the well-defined DOT format, NetworkX is Python-specific but creates very nice graphics. It’s also significantly easier to get something acceptable while minimizing the amount of time you have to spend fiddling with it. That said, there are multiple layout algorithms that you can invoke to calculate the positions of the elements in the output image, and the only apparent way to get a consistent, well-organized/balanced representation seems to be to arrange the nodes using the circular layout.
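
For reference, a few of the built-in layout functions (g here is the graph built in the example below; each call returns a dictionary of node positions that you pass to the draw calls):

pos = nx.spring_layout(g)    # Force-directed.
pos = nx.shell_layout(g)     # Concentric circles.
pos = nx.spectral_layout(g)  # Based on the graph Laplacian.
pos = nx.circular_layout(g)  # A single circle (what the example uses).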

Digraph example:

import networkx as nx
import matplotlib.pyplot as plt

def _main():
    g = nx.DiGraph()

    g.add_edge(2, 3, weight=1)
    g.add_edge(3, 4, weight=5)
    g.add_edge(5, 1, weight=10)
    g.add_edge(1, 3, weight=15)

    g.add_edge(2, 7, weight=1)
    g.add_edge(13, 6, weight=5)
    g.add_edge(12, 5, weight=10)
    g.add_edge(11, 4, weight=15)

    g.add_edge(9, 2, weight=1)
    g.add_edge(10, 13, weight=5)
    g.add_edge(7, 5, weight=10)
    g.add_edge(9, 4, weight=15)

    g.add_edge(10, 3, weight=1)
    g.add_edge(11, 2, weight=5)
    g.add_edge(9, 6, weight=10)
    g.add_edge(10, 5, weight=15)

    pos = nx.circular_layout(g)

    edge_labels = {(u, v): d['weight'] for u, v, d in g.edges(data=True)}

    nx.draw_networkx_nodes(g, pos, node_size=700)
    nx.draw_networkx_edges(g, pos)
    nx.draw_networkx_labels(g, pos)
    nx.draw_networkx_edge_labels(g, pos, edge_labels=edge_labels)

    plt.title("Graph Title")
    plt.axis('off')

    plt.savefig('output.png')
    plt.show()

if __name__ == '__main__':
    _main()

Notice that NetworkX depends on matplotlib to do the actual drawing. The arrowheads (the thickened stubs at the ends of the edges) indicate the direction of each edge.

Output:

NetworkX

As I said before, it’s easier to get a nicer representation, but it appears that this comes at the cost of flexibility. Notice that, in the image, there’s a tendency to overlap. In fact, all of the edge labels are dead-center: since the nodes are arranged in a circle, all edges that cross from one side to the other will have labels that overlap in the middle. Technically, you can adjust whether a label sits nearer one end of its edge or the other, but it’s limited to that (rather than being calculated on the fly).
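
One knob that you do have is draw_networkx_edge_labels’ label_pos argument (0.0 puts a label at the head of its edge, 1.0 at the tail, and 0.5, the default, dead-center). For example:

nx.draw_networkx_edge_labels(g, pos, edge_labels=edge_labels, label_pos=0.3)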

A Complete Huffman Encoder Implementation

I’ve written a Huffman implementation for the purpose of completely showing how to build the frequency table, the Huffman tree, and the encoding table, as well as how to serialize the tree, store the tree and data to a file, restore both structures from a file, decode the data using the tree, and, generally, how to make all of this more fun using Python.
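
None of the following is the project’s own code but, just to ground the terminology, here is a minimal sketch of the first two structures (the frequency table and the tree) built with collections.Counter and heapq. The tuple-based node representation is mine:

import collections
import heapq
import itertools

def build_tree(data):
    # Frequency table: byte-value -> count.
    weights = collections.Counter(data)

    # Leaves are (value,) 1-tuples; internal nodes are (left, right) pairs.
    # The running counter breaks ties so that the nodes themselves never
    # have to be compared.
    tiebreaker = itertools.count()
    heap = [(weight, next(tiebreaker), (value,))
            for value, weight
            in weights.items()]

    heapq.heapify(heap)

    # Repeatedly merge the two lightest nodes until only the root remains.
    while len(heap) > 1:
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, next(tiebreaker), (left, right)))

    return heap[0][2]

def build_table(node, prefix='', table=None):
    # Walk the tree: '0' for a left branch, '1' for a right branch.
    if table is None:
        table = {}

    if len(node) == 1:
        table[node[0]] = prefix
    else:
        build_table(node[0], prefix + '0', table)
        build_table(node[1], prefix + '1', table)

    return table

tree = build_tree(b"This is a test. Thank you for listening.\n")
table = build_table(tree)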

This is the test code (test_steps):

clear_bytes = test_get_data()
_dump_hex("Original data:", clear_bytes)

tu = TreeUtility()

# Build encoding table and tree.

he = Encoding()
encoding = he.get_encoding(clear_bytes)

print("Weights:n{0}".format(pprint.pformat(encoding.weights)))
print('')

print("Tree:")
print('')

tu.print_tree(encoding.tree)
print('')

flat_encoding_table = { 
    (hex(c)[2:] + ' ' + chr(c).strip()): b
    for (c, b) 
    in encoding.table.items() }

print("Encoding:n{0}".format(pprint.pformat(flat_encoding_table)))
print('')

# Encode the data.

print("Encoded characters:nn{0}n".
      format(encode_to_debug_string(encoding.table, clear_bytes)))

encoded_bytes = encode(encoding.table, clear_bytes)
_dump_hex("Encoded:", encoded_bytes)

# Decode the data.

decoded_bytes_list = decode(encoding.tree, encoded_bytes)
decoded_bytes = bytes(decoded_bytes_list)

assert \
    clear_bytes == decoded_bytes, \
    "Decoded does not equal the original."

_dump_hex("Decoded:", decoded_bytes)

print("Decoded text:")
print('')
print(decoded_bytes)
print('')

# Serialize and unserialize tree.

serialized_tree = tu.serialize(encoding.tree)
unserialized_tree = tu.unserialize(serialized_tree)

decoded_bytes_list2 = decode(unserialized_tree, encoded_bytes)
decoded_bytes2 = bytes(decoded_bytes_list2)

assert \
    clear_bytes == decoded_bytes2, \
    "Decoded does not equal the original after serializing/" \
    "unserializing the tree."

This is its output:

(Dump) Original data:

54 68 69 73 20 69 73 20 61 20 74 65 73 74 2e 20
54 68 61 6e 6b 20 79 6f 75 20 66 6f 72 20 6c 69
73 74 65 6e 69 6e 67 2e 0a

Weights:
{10: 1,
 32: 7,
 46: 2,
 84: 2,
 97: 2,
 101: 2,
 102: 1,
 103: 1,
 104: 2,
 105: 4,
 107: 1,
 108: 1,
 110: 3,
 111: 2,
 114: 1,
 115: 4,
 116: 3,
 117: 1,
 121: 1}

Tree:

LEFT>
. LEFT>
. . LEFT>
. . . VALUE=(69) [i]
. . RIGHT>
. . . VALUE=(73) [s]
. RIGHT>
. . LEFT>
. . . LEFT>
. . . . VALUE=(54) [T]
. . . RIGHT>
. . . . VALUE=(65) [e]
. . RIGHT>
. . . LEFT>
. . . . LEFT>
. . . . . VALUE=(66) [f]
. . . . RIGHT>
. . . . . VALUE=(72) [r]
. . . RIGHT>
. . . . LEFT>
. . . . . VALUE=(6c) [l]
. . . . RIGHT>
. . . . . VALUE=(a) []
RIGHT>
. LEFT>
. . LEFT>
. . . LEFT>
. . . . VALUE=(6f) [o]
. . . RIGHT>
. . . . VALUE=(61) [a]
. . RIGHT>
. . . LEFT>
. . . . VALUE=(74) [t]
. . . RIGHT>
. . . . VALUE=(6e) [n]
. RIGHT>
. . LEFT>
. . . VALUE=(20) []
. . RIGHT>
. . . LEFT>
. . . . LEFT>
. . . . . LEFT>
. . . . . . VALUE=(6b) [k]
. . . . . RIGHT>
. . . . . . VALUE=(79) [y]
. . . . RIGHT>
. . . . . VALUE=(68) [h]
. . . RIGHT>
. . . . LEFT>
. . . . . VALUE=(2e) [.]
. . . . RIGHT>
. . . . . LEFT>
. . . . . . VALUE=(75) [u]
. . . . . RIGHT>
. . . . . . VALUE=(67) [g]

Encoding:
{'20 ': bitarray('110'),
 '2e .': bitarray('11110'),
 '54 T': bitarray('0100'),
 '61 a': bitarray('1001'),
 '65 e': bitarray('0101'),
 '66 f': bitarray('01100'),
 '67 g': bitarray('111111'),
 '68 h': bitarray('11101'),
 '69 i': bitarray('000'),
 '6b k': bitarray('111000'),
 '6c l': bitarray('01110'),
 '6e n': bitarray('1011'),
 '6f o': bitarray('1000'),
 '72 r': bitarray('01101'),
 '73 s': bitarray('001'),
 '74 t': bitarray('1010'),
 '75 u': bitarray('111110'),
 '79 y': bitarray('111001'),
 'a ': bitarray('01111')}

Encoded characters:

0100 11101 000 001 110 000 001 110 1001 110 1010 0101 001 1010 11110 110 0100 11101 1001 1011 111000 110 111001 1000 111110 110 01100 1000 01101 110 01110 000 001 1010 0101 1011 000 1011 111111 11110 01111

(Dump) Encoded:

4e 83 81 d3 a9 4d 7b 27 66 f8 dc c7 d9 90 dc e0
69 6c 5f fe 7c

(Dump) Decoded:

54 68 69 73 20 69 73 20 61 20 74 65 73 74 2e 20
54 68 61 6e 6b 20 79 6f 75 20 66 6f 72 20 6c 69
73 74 65 6e 69 6e 67 2e 0a

Decoded text:

b'This is a test. Thank you for listening.\n'

PriorityQueue versus heapq

Python’s queue.PriorityQueue is actually based on the heapq module, but provides a traditional Python queue interface. The difference appears to be largely one of interface: an object-oriented (and thread-safe) queue versus passing a plain list (which heapq acts on directly).
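
For example, the same push and pop look like this in each:

import heapq
import queue

# heapq operates directly on a plain list.
h = []
heapq.heappush(h, 5)
smallest = heapq.heappop(h)

# PriorityQueue wraps the same machinery in a queue-style object.
q = queue.PriorityQueue()
q.put(5)
smallest = q.get()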

The documentation for PriorityQueue is a little misleading, at least if you haven’t taken a moment to think about how the sorting actually works. This is what it says:

A typical pattern for entries is a tuple in the form: (priority_number, data)

I ran into an issue where I was getting an error because the second element (the actual item) couldn’t be used to sort. Whereas the documentation implies that there’s merely a convention that expects the priority to be in the first position, the sort actually evaluates the entire tuple. This means that, when I tried to insert with a priority that was already in the queue, the second items of both tuples were compared (this is how tuples are sorted). Apparently, most of my previous use-cases involved priorities (such as timestamps) that were either sparse enough to avoid ties or paired with data that happened to be sortable. Crap.
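
For example, in Python 3 (where dictionaries have no ordering), two entries with the same priority will blow up on insert with something like this:

>>> from queue import PriorityQueue
>>> q = PriorityQueue()
>>> q.put((5, {'task': 'write code'}))
>>> q.put((5, {'task': 'release product'}))
Traceback (most recent call last):
  ...
TypeError: '<' not supported between instances of 'dict' and 'dict'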

Now, looking back at the documentation for heapq, I’ve noticed one of the examples:

>>> h = []
>>> heappush(h, (5, 'write code'))
>>> heappush(h, (7, 'release product'))
>>> heappush(h, (1, 'write spec'))
>>> heappush(h, (3, 'create tests'))
>>> heappop(h)
(1, 'write spec')

So, it turns out that heapq also [hastily] recommends using tuples, but we now know that this comes with a lazy assumption: it only works if you’re willing to let it sort by the item itself whenever two or more items share a priority.

So, in conclusion, the nicest strategy is to use an object that has the “rich-comparison methods” defined on it (e.g. __lt__ and __eq__) rather than a tuple. This lets you constrain exactly what takes part in the comparison.
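
As a minimal sketch (the class name is my own), a wrapper that only compares on priority works with both heapq and PriorityQueue, regardless of whether the payload itself is sortable:

import heapq

class PrioritizedItem(object):
    def __init__(self, priority, item):
        self.priority = priority
        self.item = item

    # heapq (and, therefore, PriorityQueue) only ever needs "less than".
    def __lt__(self, other):
        return self.priority < other.priority

    def __eq__(self, other):
        return self.priority == other.priority

h = []
heapq.heappush(h, PrioritizedItem(5, {'task': 'write code'}))

# Same priority, unorderable payload: no TypeError this time.
heapq.heappush(h, PrioritizedItem(5, {'task': 'release product'}))
heapq.heappush(h, PrioritizedItem(1, {'task': 'write spec'}))

print(heapq.heappop(h).item)
# {'task': 'write spec'}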