Tools/Services
safe-rm, an “rm” Wrapper
Use the safe-rm utility to replace your system rm binary with a version that checks the path against a blacklist before continuing.
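For example (a sketch; /etc/safe-rm.conf is the usual system-wide blacklist, though the exact warning text may vary by version):

# Add an important path to the blacklist (one absolute path per line).
$ echo '/usr/lib' | sudo tee -a /etc/safe-rm.conf

# With the wrapper standing in for rm, the protected path is skipped.
$ rm -rf /usr/lib
safe-rm: skipping /usr/lib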
Tool to Quickly Create Upstart Jobs
Upstart is a monumental improvement over the classical SysV mechanism for Unix/Linux process/daemon management. Still, it’s a somewhat manual process to create jobs. I’ve previously written about the Upstart library that provides the ability to start and stop jobs (using D-Bus), as well as build jobs.
However, the Upstart library also provides two command-line tools:
- upstart-create: Create Upstart jobs using reasonable defaults.
- upstart-reload: Send a signal to Upstart to reload jobs.
Of particular note is the first tool. It’ll take a couple of options and write a new job file (in /etc/init). The example from the project website (which displays the job to the screen rather than writing a job file):
$ upstart-create test-job /bin/sh -j -d "some description" -a "some author "
description "some description"
author "some author "
exec /bin/sh

start on runlevel [2345]
stop on runlevel [016]

respawn
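Once a job file has actually been written to /etc/init, the stock Upstart tooling (not part of this library) can load and start it:

# Ask Upstart to rescan /etc/init for new or changed job files.
$ sudo initctl reload-configuration

# Start the new job and check on it.
$ sudo initctl start test-job
$ initctl status test-job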
Inspecting JSON at the Command-Line
This is a simple tool to pull specific values out of JSON, or to pull JSON from JSON, at the command-line. It’s useful for reading configuration values from within a Bash script.
Example data:
{"a": [9, 6, {"b": [99, 88, 77, "text", 55]}]}
Example commands:
$ cat example.json | jp a.2.b.3
"text"

$ cat example.json | jp a.2 | jp b.3
"text"

$ cat example.json | jp a.2 | jp -p b.3
text
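Here’s a sketch of how this might look from a script (the config.json layout and its keys are hypothetical; I’m using the same dotted-path and -p semantics shown above):

#!/bin/bash
# Hypothetical config.json: {"host": "db1.local", "port": 5432}
# -p prints the raw (unquoted) value, which is what we want in a script.
host=$(jp -p host < config.json)
port=$(jp -p port < config.json)

echo "Connecting to ${host}:${port}"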
ZFS for Volume Management and RAID
ZFS is an awesome filesystem, developed by Sun and ported to Linux. Although it’s not a distributed filesystem, it emphasizes durability and simplicity. It’s essentially an alternative to the common combination of md and LVM.
I’m not going to go into an actual RAID configuration here, but the following should be intuitive enough to send you on your way. I’m using Ubuntu 13.10.
$ sudo apt-get install zfs-fuse
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  nfs-kernel-server kpartx
The following NEW packages will be installed:
  zfs-fuse
0 upgraded, 1 newly installed, 0 to remove and 34 not upgraded.
Need to get 1,258 kB of archives.
After this operation, 3,302 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu/ saucy/universe zfs-fuse amd64 0.7.0-10.1 [1,258 kB]
Fetched 1,258 kB in 1s (750 kB/s)
Selecting previously unselected package zfs-fuse.
(Reading database ... 248708 files and directories currently installed.)
Unpacking zfs-fuse (from .../zfs-fuse_0.7.0-10.1_amd64.deb) ...
Processing triggers for ureadahead ...
Processing triggers for man-db ...
Setting up zfs-fuse (0.7.0-10.1) ...
 * Starting zfs-fuse zfs-fuse                                    [ OK ]
 * Immunizing zfs-fuse against OOM kills and sendsigs signals... [ OK ]
 * Mounting ZFS filesystems...                                   [ OK ]
Processing triggers for ureadahead ...

$ sudo zpool list
no pools available

$ dd if=/dev/zero of=/home/dustin/zfs1.part bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 0.0588473 s, 1.1 GB/s

$ sudo zpool create zfs_test /home/dustin/zfs1.part

$ sudo zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs_test  59.5M    94K  59.4M     0%  1.00x  ONLINE  -

$ sudo dd if=/dev/zero of=/zfs_test/dummy_file bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.3918 s, 7.5 MB/s

$ ls -l /zfs_test/
total 9988
-rw-r--r-- 1 root root 10485760 Mar  7 21:51 dummy_file

$ sudo zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs_test  59.5M  10.2M  49.3M    17%  1.00x  ONLINE  -

$ sudo zpool status zfs_test
  pool: zfs_test
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        zfs_test                  ONLINE       0     0     0
          /home/dustin/zfs1.part  ONLINE       0     0     0

errors: No known data errors
So, now we have one pool with one disk. However, ZFS also allows hot reconfiguration. Add (stripe) another disk to the pool:
$ dd if=/dev/zero of=/home/dustin/zfs2.part bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 0.0571095 s, 1.2 GB/s

$ sudo zpool add zfs_test /home/dustin/zfs2.part

$ sudo zpool status zfs_test
  pool: zfs_test
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        zfs_test                  ONLINE       0     0     0
          /home/dustin/zfs1.part  ONLINE       0     0     0
          /home/dustin/zfs2.part  ONLINE       0     0     0

errors: No known data errors

$ sudo dd if=/dev/zero of=/zfs_test/dummy_file2 bs=1M count=70
70+0 records in
70+0 records out
73400320 bytes (73 MB) copied, 12.4728 s, 5.9 MB/s

$ sudo zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs_test   119M  80.3M  38.7M    67%  1.00x  ONLINE  -
I should mention that there is some disk-space overhead or, at least, some need to explicitly tune the disks (if possible). Though I assigned two 64M “disks” to the pool, I received “out of space” errors when I first wrote a 10M file and then attempted to write an 80M file. Writing a 70M file instead succeeded.
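To see how much space is actually available to the filesystem (after ZFS’s own overhead), ask ZFS directly rather than relying on the raw capacity reported by zpool list:

# The AVAIL column reflects writable space after metadata overhead,
# which will be noticeably less than the raw pool size.
$ sudo zfs list zfs_test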
You can also view IO stats:
$ sudo zpool iostat -v zfs_test
                            capacity     operations    bandwidth
pool                      alloc   free   read  write   read  write
------------------------  -----  -----  -----  -----  -----  -----
zfs_test                  80.5M  38.5M      0     11    127   110K
  /home/dustin/zfs1.part  40.4M  19.1M      0      6    100  56.3K
  /home/dustin/zfs2.part  40.1M  19.4M      0      5     32  63.0K
------------------------  -----  -----  -----  -----  -----  -----
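When you’re done experimenting, tearing the pool down is just as simple (this unmounts /zfs_test and releases the backing files):

$ sudo zpool destroy zfs_test
$ sudo zpool list
no pools available

# The loopback "disks" can now be deleted.
$ rm /home/dustin/zfs1.part /home/dustin/zfs2.part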
Making an APNS Push Certificate
It turns out that producing a certificate-request that Apple will accept, in order to authorize you to send notifications to a client’s phone on their behalf, is nightmarish, due to the sheer lack of information on the subject (Apple provides no documentation).
Behold, csr_to_apns_csr. As long as you have your “MDM vendor certificate” (a P12 certificate that Apple gives you) and the CSR for your client, you’re in business.
$ csr_to_apns_csr -h
usage: csr_to_apns_csr [-h] [-x] csr vendor_p12 vendor_p12_pass

Produce an Apple-formatted APNS 'push' CSR.

positional arguments:
  csr              client CSR (PEM)
  vendor_p12       MDM vendor P12 certificate (DER)
  vendor_p12_pass  passphrase for MDM vendor P12 certificate

optional arguments:
  -h, --help       show this help message and exit
  -x, --xml        show raw XML
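A typical invocation might look like the following (the file names are hypothetical, and I’m assuming here that the Apple-formatted CSR is written to stdout, hence the redirect):

$ csr_to_apns_csr client.csr vendor.p12 'vendor-passphrase' > apns_push.csr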
Using Docker to Package and Distribute Applications as Containers
I’ve already posted a couple of articles referencing Linux’s LXC containers (see here), for lightweight process isolation and resource limiting.
Docker builds on LXC to produce portable containers that can be:
- versioned alongside your sourcecode
- automatically built alongside binaries (using Dockerfiles; see the sketch below)
- published to a repository (both public and private can be used)
In this way, Docker produces application containers that behave consistently across a variety of environments.
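As a taste of the Dockerfile-based builds mentioned above, here’s a sketch that roughly reproduces the container we’ll assemble by hand below (untested here; the tag name is just an example):

$ cat <<EOF > Dockerfile
# Start from the same "ubuntu" alias used in the walkthrough below.
FROM ubuntu

# Install the application's dependencies.
RUN apt-get update && apt-get install -y python-pip
RUN pip install web.py

# Add the application code (app.py is shown later in this article).
ADD app.py /app/app.py
EXPOSE 8080

CMD ["python", "/app/app.py"]
EOF

$ sudo ./docker build -t dsoprea/test_python_app .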
The purpose of this article is to provide a very quick introduction to Docker, and a tutorial that hastily explains how to create a Python web project within an Ubuntu container, and connect to it.
Docker Daemon
Just like with LXC, a daemon runs in the background to manage the running containers. Though there exists a Docker Ubuntu repository, we use the manual process so that we can provide a more universal tutorial.
$ wget https://get.docker.io/builds/Linux/x86_64/docker-latest -O docker
--2014-02-12 01:08:01--  https://get.docker.io/builds/Linux/x86_64/docker-latest
Resolving get.docker.io (get.docker.io)... 198.41.249.135, 162.159.251.135, 2400:cb00:2048:1::a29f:fb87, ...
Connecting to get.docker.io (get.docker.io)|198.41.249.135|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 15119491 (14M) [binary/octet-stream]
Saving to: ‘docker’

100%[==========================================================>] 15,119,491  3.17MB/s   in 5.0s

2014-02-12 01:08:07 (2.91 MB/s) - ‘docker’ saved [15119491/15119491]

$ chmod +x docker
$ sudo ./docker -d
[/var/lib/docker|9024eeb6] +job initserver()
[/var/lib/docker|9024eeb6.initserver()] Creating server
[/var/lib/docker|9024eeb6] +job init_networkdriver()
[/var/lib/docker|9024eeb6.init_networkdriver()] creating new bridge for docker0
[/var/lib/docker|9024eeb6.init_networkdriver()] getting iface addr
[/var/lib/docker|9024eeb6] -job init_networkdriver() = OK (0)
2014/02/12 01:08:27 WARNING: Your kernel does not support cgroup swap limit.
Loading containers: : done.
[/var/lib/docker|9024eeb6.initserver()] Creating pidfile
[/var/lib/docker|9024eeb6.initserver()] Setting up signal traps
[/var/lib/docker|9024eeb6] -job initserver() = OK (0)
[/var/lib/docker|9024eeb6] +job serveapi(unix:///var/run/docker.sock)
2014/02/12 01:08:27 Listening for HTTP on unix (/var/run/docker.sock)
Filling the Container
In another terminal (the first is being used by the daemon), start the Ubuntu container. We do this by passing an image-name. If the image can’t be found locally, the tool will search the public repositories and download (“pull”) it automatically.
Whereas most image names look like “<username>/<image-name>”, some images have special aliases that take the place of both parts. Ubuntu has one such alias (which dotCloud, the Docker people, use for development): “ubuntu”.
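If you’d rather download an image ahead of time than let “run” fetch it on demand, you can pull it explicitly:

# Pull the image by its alias, before any container uses it.
$ sudo ./docker pull ubuntu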
First, we’re going to create/start the container.
$ sudo ./docker run -i -t ubuntu /bin/bash
[sudo] password for dustin:
Pulling repository ubuntu
9cd978db300e: Download complete
eb601b8965b8: Download complete
9cc9ea5ea540: Download complete
5ac751e8d623: Download complete
9f676bd305a4: Download complete
511136ea3c5a: Download complete
f323cf34fd77: Download complete
1c7f181e78b9: Download complete
6170bb7b0ad1: Download complete
7a4f87241845: Download complete
321f7f4200f4: Download complete
WARNING: WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]

root@618cd8514fec:/# ls -l
total 72
drwxr-xr-x   2 root root 4096 Jan 29 18:10 bin
drwxr-xr-x   2 root root 4096 Apr 19  2012 boot
drwxr-xr-x  11 root root 4096 Feb 12 06:34 dev
drwxr-xr-x  56 root root 4096 Feb 12 06:34 etc
drwxr-xr-x   2 root root 4096 Apr 19  2012 home
drwxr-xr-x  12 root root 4096 Jan 29 18:10 lib
drwxr-xr-x   2 root root 4096 Jan 29 18:10 lib64
drwxr-xr-x   2 root root 4096 Jan 29 18:10 media
drwxr-xr-x   2 root root 4096 Apr 19  2012 mnt
drwxr-xr-x   2 root root 4096 Jan 29 18:10 opt
dr-xr-xr-x 249 root root    0 Feb 12 06:34 proc
drwx------   2 root root 4096 Jan 29 18:10 root
drwxr-xr-x   5 root root 4096 Jan 29 18:10 run
drwxr-xr-x   2 root root 4096 Jan 29 18:11 sbin
drwxr-xr-x   2 root root 4096 Mar  5  2012 selinux
drwxr-xr-x   2 root root 4096 Jan 29 18:10 srv
dr-xr-xr-x  13 root root    0 Feb 12 06:34 sys
drwxrwxrwt   2 root root 4096 Jan 29 18:10 tmp
drwxr-xr-x  10 root root 4096 Jan 29 18:10 usr
drwxr-xr-x  11 root root 4096 Jan 29 18:10 var
Now, add the dependencies for the sample Python application, and add the code.
root@618cd8514fec:/# mkdir app
root@618cd8514fec:/# cd app
root@618cd8514fec:/app# apt-get install python-pip
Reading package lists... Done
Building dependency tree... Done
The following extra packages will be installed:
python-pkg-resources python-setuptools
Suggested packages:
python-distribute python-distribute-doc
The following NEW packages will be installed:
python-pip python-pkg-resources python-setuptools
0 upgraded, 3 newly installed, 0 to remove and 63 not upgraded.
Need to get 599 kB of archives.
After this operation, 1647 kB of additional disk space will be used.
Do you want to continue [Y/n]?
Get:1 http://archive.ubuntu.com/ubuntu/ precise/main python-pkg-resources all 0.6.24-1ubuntu1 [63.1 kB]
Get:2 http://archive.ubuntu.com/ubuntu/ precise/main python-setuptools all 0.6.24-1ubuntu1 [441 kB]
Get:3 http://archive.ubuntu.com/ubuntu/ precise/universe python-pip all 1.0-1build1 [95.1 kB]
Fetched 599 kB in 26s (22.8 kB/s)
Selecting previously unselected package python-pkg-resources.
(Reading database ... 9737 files and directories currently installed.)
Unpacking python-pkg-resources (from .../python-pkg-resources_0.6.24-1ubuntu1_all.deb) ...
Selecting previously unselected package python-setuptools.
Unpacking python-setuptools (from .../python-setuptools_0.6.24-1ubuntu1_all.deb) ...
Selecting previously unselected package python-pip.
Unpacking python-pip (from .../python-pip_1.0-1build1_all.deb) ...
Setting up python-pkg-resources (0.6.24-1ubuntu1) ...
Setting up python-setuptools (0.6.24-1ubuntu1) ...
Setting up python-pip (1.0-1build1) ...
root@618cd8514fec:/app# pip install web.py
Downloading/unpacking web.py
Downloading web.py-0.37.tar.gz (90Kb): 90Kb downloaded
Running setup.py egg_info for package web.py
Installing collected packages: web.py
Running setup.py install for web.py
Successfully installed web.py
Cleaning up...
root@618cd8514fec:/app# cat <<CODE > app.py
> #!/usr/bin/env python2.7
>
> import web
>
> urls = (
>     '/', 'index'
> )
>
> class index:
>     def GET(self):
>         return 'Hello, world!'
>
> if __name__ == '__main__':
>     app = web.application(urls, globals())
>     app.run()
> CODE
root@618cd8514fec:/app# chmod u+x app.py
This is the code that we used (for easy copy-and-pasting):
#!/usr/bin/env python2.7

import web

urls = (
    '/', 'index'
)

class index:
    def GET(self):
        return 'Hello, world!'

if __name__ == '__main__':
    app = web.application(urls, globals())
    app.run()
Make sure that the application starts correctly:
root@618cd8514fec:/app# ./app.py
http://0.0.0.0:8080/
Feel free to do a cURL request to test. Afterwards, stop the application with CTRL+C and leave the shell by running “exit”. We might’ve exited, but the container still exists (now stopped):
$ sudo ./docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
618cd8514fec        ubuntu:12.04        /bin/bash           7 minutes ago       Exit 0                                  berserk_engelbart
Commit the changes to the container (in the version-control sense). This won’t affect anything outside of your local system, yet.
$ sudo ./docker commit 618cd8514fec dsoprea/test_python_app
32278919fbe5b080a204fabc8ff430c6bdceaeb93faf5ad247a917de9e6b1f7a
Stop the container.
$ sudo ./docker stop 618cd8514fec
$ sudo ./docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
Now, start it again, with port-forwarding from port 1234 on the host to port 8080 in the container.
$ sudo ./docker run -d -p 1234:8080 dsoprea/test_python_app /app/app.py
WARNING: WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
118ba88c5fa4e4102209a5a1dd226ae6588598812bf3ffab0692e5b0766d71d3
Test it using cURL.
$ curl http://localhost:1234 && echo
Hello, world!
Publishing Your Docker Image
Now, we’ll push it up to the Docker Index, for public access (this part is up to you). You’ll need to set up a free account, first. Here, I use my own account (“dsoprea”).
$ sudo ./docker push dsoprea/test_python_app
The push refers to a repository [dsoprea/test_python_app] (len: 1)
Sending image list
Please login prior to push:
Login against server at https://index.docker.io/v1/
Username: dsoprea
Password:
Email: myselfasunder@gmail.com
Login Succeeded
The push refers to a repository [dsoprea/test_python_app] (len: 1)
Sending image list
Pushing repository dsoprea/test_python_app (1 tags)
511136ea3c5a: Image already pushed, skipping
Image 6170bb7b0ad1 already pushed, skipping
Image 9cd978db300e already pushed, skipping
32278919fbe5: Image successfully pushed
3805625219a1: Image successfully pushed
Pushing tag for rev [3805625219a1] on {https://registry-1.docker.io/v1/repositories/dsoprea/test_python_app/tags/latest}
The image is now available at (for my account): https://index.docker.io/u/dsoprea/test_python_app/
If you actually want to do a fresh download/execution of your software, delete the local image (using the image ID).
$ sudo ./docker images
REPOSITORY                TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
dsoprea/test_python_app   latest              3805625219a1        13 minutes ago      208 MB

$ sudo ./docker rmi 3805625219a1

$ sudo ./docker run -d -p 1234:8080 dsoprea/test_python_app /app/app.py
Unable to find image 'dsoprea/test_python_app' (tag: latest) locally
Pulling repository dsoprea/test_python_app
3805625219a1: Download complete
511136ea3c5a: Download complete
6170bb7b0ad1: Download complete
9cd978db300e: Download complete
32278919fbe5: Download complete
WARNING: WARNING: Docker detected local DNS server on resolv.conf. Using default external servers: [8.8.8.8 8.8.4.4]
531e18306f437277fcf19827afde2901ee6b78cd954213b693aa8ae73f651ea0

$ curl http://localhost:1234 && echo
Hello, world!
Notice that at times we refer to the “image”, and at others we refer to the “container”. This might be clear to some and confusing to others. The image describes the template from which the container is constructed (instantiated).
Docker was built to coexist in a continuous-integration and/or GitHub-type environment. There’s way more to the tool. It’d be well worth your time to investigate it on your own, as you might find yourself integrating it into your development or deployment processes. Imagine automatically creating drop-in solutions alongside all of your projects, which you can publish with the ease of a version-control push.
Using AUFS to Combine Directories (With AWESOME Benefits)
A stackable or unification filesystem (referred to as a “union” filesystem) is one that combines the contents of many directories into a single directory. Junjiro Okajima’s AUFS (“aufs-tools”, under Ubuntu) is such an FS. However, there are some neat attributes that tools, such as Docker, take advantage of, and this is where it really gets interesting. I’ll discuss only one such feature, here.
AUFS has the concept of “branches”, where each branch is one directory to be combined. In addition, each branch has permissions imposed upon it: essentially “read-only” or “read-write”. By default, the first branch is read-write, and all others are read-only. With this as the foundation, AUFS presents a single, stacked filesystem, imposing special handling on what can be modified internally while providing traditional filesystem behavior externally.
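Those branch permissions can also be set explicitly at mount time with the standard =rw/=ro modifiers (the directory names here are hypothetical):

# First branch writable, second branch forced read-only.
$ sudo mount -t aufs -o br=/tmp/rw_branch=rw:/tmp/ro_branch=ro none /mnt/aufs_example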
When a delete is performed against a read-only branch, AUFS performs a “whiteout”: the read-only directories are untouched, but hidden files are created in the writable directory to record the change. Similar tracking, along with any actual new files, occurs when any change is applied to read-only directories. This also incorporates “copy-on-write” functionality, where copies of files are made only on demand and by necessity.
$ mkdir /tmp/dir_a
$ mkdir /tmp/dir_b
$ mkdir /tmp/dir_c
$ touch /tmp/dir_a/file_a
$ touch /tmp/dir_b/file_b
$ touch /tmp/dir_c/file_c
$ sudo mount -t aufs -o br=/tmp/dir_a:/tmp/dir_b:/tmp/dir_c none /mnt/aufs_test/

$ ls -l /mnt/aufs_test/
total 0
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_a
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_b
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_c

$ ls -l /tmp/dir_c
total 0
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_c

$ touch /mnt/aufs_test/new_file_in_unwritable

$ ls -l /tmp/dir_c
total 0
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_c

$ ls -l /tmp/dir_a
total 0
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_a
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:33 new_file_in_unwritable

$ rm /mnt/aufs_test/file_c

$ ls -l /tmp/dir_c
total 0
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_c

$ ls -l /tmp/dir_a
total 0
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:31 file_a
-rw-r--r-- 1 dustin dustin 0 Feb 11 23:33 new_file_in_unwritable

$ ls -la /tmp/dir_a
total 16
drwxr-xr-x  4 dustin dustin 4096 Feb 11 23:35 .
drwxrwxrwt 17 root   root   4096 Feb 11 23:35 ..
-rw-r--r--  1 dustin dustin    0 Feb 11 23:31 file_a
-rw-r--r--  1 dustin dustin    0 Feb 11 23:33 new_file_in_unwritable
-r--r--r--  2 root   root      0 Feb 11 23:31 .wh.file_c
-r--r--r--  2 root   root      0 Feb 11 23:31 .wh..wh.aufs
drwx------  2 root   root   4096 Feb 11 23:31 .wh..wh.orph
drwx------  2 root   root   4096 Feb 11 23:31 .wh..wh.plnk
Notice that we use mount rather than mount.aufs. It should also be mentioned that some filesystems hosting the subordinate directories can be problematic; cramfs, for example, is specifically called out in the manpage as having cases in which its behavior is undefined.
AUFS seems to especially lend itself to process-containers.
For more information, visit the homepage and Okajima’s original announcement (2008).
Kernel Namespaces
If you’ve heard of LXC (I’ve written a post on them before), then you’re already at least partially familiar with *kernel namespaces*. They are what allow different processes to have compartmentalized views of system resources. Here’s a great article, courtesy of dotCloud and the Docker project: PaaS under the hood.
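For a quick, concrete taste (assuming a reasonably recent util-linux, whose unshare(1) supports these flags), you can drop a shell into its own PID namespace:

# Launch bash as PID 1 of a new PID namespace; --mount-proc remounts
# /proc so that tools like ps only see this namespace's processes.
$ sudo unshare --fork --pid --mount-proc /bin/bash

# Inside, "ps aux" shows little more than this shell and ps itself.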
Using ssl.wrap_socket for Secure Sockets in Python
Ordinarily, the prospect of having to deal with SSL-encrypted sockets would be enough to make the best of us take a long weekend. However, Python provides some prepackaged functionality to accommodate this: “wrap_socket”. The only reason I ever knew about this was from reverse-engineering, as I’ve never come upon it in a blog/article.
Here’s an example. Note that I steal the CA bundle from requests for the purpose of this example. Use whichever bundle you happen to have available (they should all be relatively similar, but will generally be located in different places on your system, depending on your OS/distribution).
import ssl
import socket

s_ = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s = ssl.wrap_socket(s_,
                    ca_certs='/usr/local/lib/python2.7/dist-packages/requests/cacert.pem',
                    cert_reqs=ssl.CERT_REQUIRED)

s.connect(('www.google.com', 443))

# s.cipher() - Returns a tuple: ('RC4-SHA', 'TLSv1/SSLv3', 128)
# s.getpeercert() - Returns a dictionary:
#
# {'notAfter': 'May 15 00:00:00 2014 GMT',
#  'subject': ((('countryName', u'US'),),
#              (('stateOrProvinceName', u'California'),),
#              (('localityName', u'Mountain View'),),
#              (('organizationName', u'Google Inc'),),
#              (('commonName', u'www.google.com'),)),
#  'subjectAltName': (('DNS', 'www.google.com'),)}

s.write('GET / HTTP/1.1\r\nHost: www.google.com\r\n\r\n')

# Read the first part (might require multiple reads depending on size and
# encoding).
d = s.read()

s.close()
Obviously, your data sits in d, after this code runs.