Perl File::Glob

How the heck have I been using perl pretty much daily for almost 15 years and NOT seen File::Glob?!

Over the years I have written several scripts where I just wanted to grab certain files by name. I have done database calls to generate the names, I have used File::Find, and I have written scripts that simply take the files as arguments.

And it could have been soooo much simpler!

#!/usr/bin/perl

use strict;
use warnings;
use File::Glob;

# Grab every .txt file in the current directory.
my @txt_files = <*.txt>;

for my $file ( @txt_files ) {
    do_something( $file );
}

sub do_something {
    my ( $file ) = @_;
    ...;    # placeholder for whatever needs doing with $file
}

markdoc added to the toolbox

In my continuing quest to use (n)vi(m) as the text editor I get everything done with, and to avoid server-side scripting for websites when I don't need it, I installed markdoc on my home server this weekend.

Markdoc is a "simple" framework for generating static websites from markdown. I am using it to generate a static wiki at wiki.technomage.net. There used to be a DokuWiki installed there, which didn't get used as much as it could have, mainly because I really don't like most of the web forms out there for adding and editing content.

With the current setup, I have the wiki on my home computer and just run an rsync up to the server to update it. This is much like how I have nanoblogger set up, which means that if something happens to the public server, I can still get to my stuff on my home machine, which is also backed up using spideroak.

The only snags I ran into while setting up markdoc were a few not-quite-obvious settings.

I ran

    markdoc init wiki

And then moved some things around:

cd wiki
mv static .static
mv markdoc.yaml .markdoc.yaml
rmdir wiki

Then edited .markdoc.yaml to contain:

wiki-name: "wiki . technomage . net"
static-dir: ".static"
wiki-dir: "."

use-default-static: false

markdown:
    extensions: [codehilite]

I initially left out the 'use-default-static: false' line. I copied .html/media/css/style.css to .static/media/css/style.css, editing style.css to remove the 650px width settings.

This way I can just cd ~/wiki and start editing. When I am done, a simple

markdoc build && rsync -a -e ssh .html/ matt@yoda.grephead.com:~/wiki.technomage.net

builds everything and pushes it up to the server.
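
Since I do that often enough, I could wrap it up in a tiny script (just a convenience sketch of my own setup; the paths and hostname are specific to me):

#!/bin/sh
# publish.sh -- rebuild the markdoc wiki and push the rendered HTML to the web server.
set -e

cd ~/wiki
markdoc build
rsync -a -e ssh .html/ matt@yoda.grephead.com:~/wiki.technomage.net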

greppy, sage

[IPv6 Certification Badge for greppy]

I have to say, I was not expecting to be finished with this so soon.

If anyone else is interested in doing this, I highly recommend prgmr.com as a VPS provider and hover.com as a registrar.

I still find it hard to wrap my head around the idea that I have more IP space assigned to my home network than exists in the entire ipv4 space.

The entire "current" internet is using a 32-bit address space. I have a 64-bit block assigned to my home network.

mharlow@vaganto:~$ echo '2^32' | bc -l
4294967296
mharlow@vaganto:~$ echo '2^64' | bc -l
18446744073709551616

ipv6 on the home network

As part of my goals for this year, I wanted to learn more about ipv6. I have the impression that my knowledge, like that of many of my contemporaries, was quite a bit more theoretical than practical.

I was partially inspired by the posts of Tom Perrine on his own adventures in ipv6 exploration, but my requirements differed just a bit from his. He wanted an off-the-shelf solution at a reasonable cost using consumer-grade equipment. I just wanted to get it working without spending any money right now.

Last year sometime it was pointed out to me that Hurricane Electric offers ipv6 tunnels for free. So a few days ago I finally got around to signing up for one. The tunnelbroker.net signup was quick and painless.

What was not so painless was confirming that my current WiFi AP/router, a Netgear WGR614v9, was not going to work for this. That caused me to step back and look at my network design again. Thankfully, being something of a linux junkie, I already had a home server running Ubuntu 10.04 LTS on the network. After moving some things around and reconfiguring what was plugged in where, I ended up with the linux server plugged directly into the cable modem on eth0, with eth1 going to the switch, which had the WiFi AP plugged in but no longer acting as the DHCP server.

Next up was getting my ipv6 tunnel talking. After using the example configurations from Hurricane Electric, which worked, I ended up with the following in my /etc/network/interfaces

auto 6in4
iface 6in4 inet6 v4tunnel
    address 2001:470:1f10:3bd::2
    netmask 64
    endpoint 209.51.181.2
    gateway 2001:470:1f10:3bd::1
    ttl 255
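
With that stanza in place, something along these lines brings the tunnel up and gives it a quick test (the ping target is just an example):

# bring the 6in4 tunnel interface up
sudo ifup 6in4

# confirm ipv6 connectivity from the server
ping6 -c 3 ipv6.google.com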

Now I could reach ipv6 sites from my server, but couldn't get there from the devices on my LAN.

So I started looking for how to hand out ipv6 addresses to the LAN; enter radvd. At first I could get an ipv6 address on the LAN, but that was it: I couldn't even ping the server. A fellow member of LOPSA, StevenR, pointed out in the #lopsa IRC channel that Tunnel Broker hands out two prefixes: the ipv6 tunnel endpoint, which is a /64 address, and a routed /64 subnet.

ipv6 endpoint:      2001:470:1f10:3bd::2/64
ipv6 routed subnet: 2001:470:1f11:3bd::1/64

Notice that the routed subnet is one digit different. That was the first part of the problem. I ended up with the following /etc/radvd.conf

interface eth1
{
    AdvSendAdvert on;
    prefix 2001:470:1f11:3bd::2/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
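
After editing radvd.conf, restarting radvd and then checking a LAN machine should show an autoconfigured address in the advertised prefix (the client interface name here is just an example from my setup):

# on the server
sudo /etc/init.d/radvd restart

# on a LAN client
ip -6 addr show dev eth0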

Devices on the network could now get an ipv6 address, but still could not route. For that to work, one more change needed to be made to /etc/network/interfaces

auto eth1
iface eth1 inet static
    address 172.27.1.2
    netmask 255.255.255.0
    up ip route add 2001:470:1f11:3bd::/64 dev eth1

And now, I can reach ipv6 addresses from devices on the LAN:

mharlow@wanderer ~ $ ping6 -n -c 5 ipv6.google.com
PING ipv6.google.com(2001:4860:400a:800::1012) 56 data bytes
64 bytes from 2001:4860:400a:800::1012: icmp_seq=1 ttl=58 time=42.7 ms
64 bytes from 2001:4860:400a:800::1012: icmp_seq=2 ttl=58 time=22.0 ms
64 bytes from 2001:4860:400a:800::1012: icmp_seq=3 ttl=58 time=23.1 ms
64 bytes from 2001:4860:400a:800::1012: icmp_seq=4 ttl=58 time=21.5 ms
64 bytes from 2001:4860:400a:800::1012: icmp_seq=5 ttl=58 time=22.4 ms

--- ipv6.google.com ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 21.589/26.381/42.745/8.197 ms

And now, I am progressing through the Hurricane Electric ipv6 certification.

[IPv6 Certification Badge for greppy]

So to sum up:

Get an ipv6 tunnel from Tunnel Broker. In the configs below, $Client_IPv6_Address, $Server_IPv4_Address, and $Server_IPv6_Address come from the tunnel details page, and $Routed_IPv6_Prefix is the routed /64 subnet, which is not the same as the tunnel endpoint.

/etc/network/interfaces

# public/wan interface
auto eth0
iface eth0 inet dhcp

# private/lan interface
auto eth1
iface eth1 inet static
    address 172.27.1.2
    netmask 255.255.255.0
    # gateway 172.27.1.1
    up ip route add $Routed_IPv6_Prefix/64 dev eth1

auto 6in4
iface 6in4 inet6 v4tunnel
    address $Client_IPv6_Address
    netmask 64
    endpoint $Server_IPv4_Address
    gateway $Server_IPv6_Address
    ttl 255

/etc/radvd.conf

interface eth1
{
        AdvSendAdvert on;
        prefix $Routed_IPv6_Prefix/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
};
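
One thing not spelled out in the configs above, and which I am noting as an assumption about my own setup rather than something taken from them: the server also needs ipv6 forwarding enabled before it will route between the tunnel and the LAN.

# /etc/sysctl.conf -- make it persistent across reboots
net.ipv6.conf.all.forwarding=1

# or enable it on the fly
sudo sysctl -w net.ipv6.conf.all.forwarding=1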

Bug classification

I saw this on IRC this morning in ##hamradio on freenode:

04:21:21 SteveCooling | I've come up with a new classification of software bug                               
04:21:48 SteveCooling | In relation to types such as Heisenbug or Schrödinbug                                
04:22:13 SteveCooling | I now propose to establish "Hindenbug"                                               
04:23:44 SteveCooling | The kind of bug that causes a catastrophic failure rendering the system unusable, and lowering the confidence in the system to a point where it needs to be decommissioned

Todo.txt, vimtodo, vimoutliner, vimwiki, taskpaper.vim, oh my.

Warning, I am going to ramble a bit. This is slightly edited stream of consciousness.

As part of my ongoing struggle to organize my life so that I can stop worrying about missing deadlines and goals, I decided to take a hard look at the issue/task/project tracking (or lack thereof) that I have for my job.

For the most part I have been using Todo.txt, both in a shell on a linux box and on my Android phone, to keep track of and update my todo list. This has worked pretty well. And while I have been using it for work stuff as well, it isn't ideal. I have to ssh out to my home server to update my todo list from work, or make the update on my personal phone, which means my work todo list doesn't exist in my work environment. I also like to keep personal and work separate; it's part of how I strive to achieve some kind of balance between the two.

Keeping individual todos that have a binary state, either "done" or "not done", is easy with the todo.sh script or the Android app. That is really all I need for most of my personal stuff. For work, though, I desire something that handles multiple projects with multiple parts in a sane way. I can't install software on my workstation, so most of the random tools that people have written to run on windows are out of the question. I do have a couple of linux boxes that I can run stuff on, so that is going to be where I aim to put stuff. It's also faster to open up a file and edit it over an ssh connection than to open a file on my workstation to edit. Don't ask; I'm not allowed to fix that problem.
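
For anyone who hasn't seen it, the todo.txt format itself is just plain text, one task per line; a file might look something like this (the entries are made up for illustration):

(A) Renew the SSL cert for the wiki +website @home
(B) Write up the ipv6 notes for the blog +blog @home
x 2012-01-15 Sign up for a tunnelbroker.net tunnel +ipv6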

The contenders:

Most of these are vim plugins or editor agnostic. I know about emacs OrgMode, and while I have used emacs in the past, it doesn't appeal to me.

vimoutliner is available from the vim-addons debian/ubuntu package. I've tried to use it in the past, normally for about a week and then it gets dropped. It often feels too heavy for what I am wanting to do, too structured.

todo.sh is what I have been using, but I have already given some of my issues with that.

vimtodo started life as a way to work with the todo.txt file from todo.sh and went on to have a life of its own. This one has great potential.

vimwiki could be awesome for documentation, but I don't think it will be able to make the cut for general task/project management. In theory the exported HTML files could be dropped onto our sharepoint, although not into what Microsoft has optimistically called a wiki on sharepoint, just a folder.

taskpaper.vim like vimtodo has great potential. It offers some basic syntax highlighting and the ability to track projects and also sub-projects. It does seem to be a little buggy in a couple of spots, like with indenting to make a subtask every time I hit enter.

Of course, I could always just roll my own formatted text file, and that has its own appeal as well. But before I do that I want to at least look at and poke at the other possibilities out there. I need something simple enough that I can just open it up and start editing, even if I don't have vim or the custom syntax/plugin files handy. I'd also like it to be something that I can just shove into an email to send to my boss or someone on my team so that they have an idea of where I am on a particular project.

No matter what script/plugin/format I end up using, I think I am going to have to paraphrase Voltaire and put "Perfect is the enemy of good enough" at the beginning of it as a reminder to myself that this is a constant battle; it will never be totally won. Even if I could design the perfect format and corresponding scripts and syntax files for what I need, or think I need, today, that WILL change in the future.

& AND ; are line terminators

I kept running into an error when I would go to background a bunch of processes in a shell script.

$ for i in 1 2 3 4;do echo ${i} & ;done
-bash: syntax error near unexpected token `;'

I finally remembered, after some help from google, that both & and ; terminate a command, so I was doubling up on terminators. The correct syntax would be:

$ for i in 1 2 3 4;do echo ${i} & done
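
For what it's worth, here is a slightly fuller sketch of the same thing as a script; the wait at the end is my addition, not part of the original one-liner:

#!/bin/bash
# Background each iteration; '&' already terminates the command, so no ';' before 'done'.
for i in 1 2 3 4; do
    echo "${i}" &
done

# Collect all of the backgrounded jobs before exiting.
wait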

Hopefully by putting this on the blog, I will remember it. :)

Too many series, not enough books.

I just finished "Mission of Honor" by David Weber, the most recent book published in the "Honor Harrington" series. I started this series because the other series I was reading didn't have any new books out.

So now, I have yet another series that I am waiting for an author to publish the next book in. Yup, I did it to myself, again.

Books I am currently waiting for:

"Cold Days" of "The Dresden Files". Like any good author that isn't done with a series, Jim Butcher has left us hanging with what in the heck is going to happen next. "The Dresden Files" is currently my favorite series of books.

"A Memory of Light" of "The Wheel of Time" series. This should be the last book in said series. Brandon Sanderson has done an awesome job taking this series over after the death of Robert Jordan. Based on his treatment of these books, I am tempted to pick up his "Mistborn" series. I am pretty sure that if I do that I will end up adding his other books to my library as well.

I initially started these series to be fillers while I waited for Jim Butcher and Brandon Sanderson to publish the next books in their series. Now I am torn as to which one I wish would come out next.

"Tricked" of "The Iron Druid Chronicles". Kevin Hearne and Jim Butcher I think both had the same teachers when it comes to endings to books.

"Raven Calls" of "The Walker Papers" by C. E. Murphy. Is anyone else noticing a trend of "Urban Fantasy" here? "The Iron Druid Chronicles", "The Walker Papers" and "The Dresden Files" are all about magic users set in our current history and society. They each approach it from a slightly different perspective, a crossover with all three would probably break the world, in more ways than one.

"A Rising Thunder" the next in the Honorverse of David Weber. This has been a very good series for me. I'm not sure if it is because I have been listening to the audiobooks of it or just because the writing is that good, or more likely, a combination of both, but I find myself having some very emotional reactions to things that happen in the book to various characters.

"A Song of Ice and Fire" or "The Game of Thrones" books by George R. R. Martin. I like these books, but due to the frequency, or lack there of, at which he releases new titles, I am not exactly looking forward to this with bated breath. For reasons that others who have read the books will understand, I personally beleive that the end of the series will end in a massive cracking of the world, killing everyone.

The problem now is that none of these authors are due to release anything in these series for the next several months. So what is an addict to do? I am most likely going to take a page from what I used to do every year: re-read "The Dresden Files" books, normally timed so that I would finish the last book just before the new one was published. Thankfully, I think, I have plenty of books to get through if I want to get all of them done again.

  • "The Dresden Files", 13 books.
  • "The Wheel of Time", 13 books.
  • "The Honorverse", 12 books.
  • "The Walker Papers", 6 books.
  • "The Iron Druid Chronicles", 3 books.

That should keep me busy for a while, right? If not, I can go back and read the 6 books in the "Codex Alera" series, also by Jim Butcher, again. And if even that is not enough to keep me occupied until one of these authors publishes another book, I do have plenty of other books to read. I could also spend some time getting my entire library dumped into goodreads.com.

Awesome Fail: sudo mount in .bashrc

I saw this in an open source operating system's support channel on a popular IRC network today.

user1 | how do I automount a HDD on boot? putting "sudo mount /dev/sdb1 /mnt/backup" in .bashrc probably wont work, since its a sudo command?
user2 | user1: man fstab has good instructions.
user1 | can I just add /dev/sdb1 /mnt/backup to the end of it? without all those ext3 and other specs in the end
user2 | no. you need some of them options, or it won't be valid.
user2 | also, you might well want to find out the UUID of the filesystem.
user1 | ouch
user1 | considering making one big LVM of all my disks
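
For the record, the sort of entry user2 was pointing toward looks something like this; the UUID below is made up for illustration, and 'sudo blkid /dev/sdb1' will print the real one:

# /etc/fstab -- mount the backup disk at boot (hypothetical UUID)
UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789  /mnt/backup  ext3  defaults  0  2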

I know that I should have jumped into this conversation, but... I was at a loss for how to approach such a lack of understanding of things that I consider basic. I want people to realize that there are alternatives to closed systems. I want people to experiment, to learn. But I shudder when things like this come up.

That someone knows enough to be able to use 'LVM' in a sentence, but doesn't know how to use /etc/fstab, scares the hell out of me.