Monday, December 28, 2009

Compiling ZynAddSubFX on Ubuntu 9.04

ZynAddSubFX does not build from source out of the box, and there is no configure script. Install the following packages and the build will succeed.


apt-get install fluid libmxml-dev libfftw3-3 mffm-fftw-dev libasound2-dev


I've never been happy with the appearance of ZynAddSubFX on Linux. It seemed the fonts were all too big, crowding the screen and buttons. I was going to look into fixing this, which was the reason I downloaded the source in the first place. But to my surprise, the version I compiled myself looked great. I'm guessing that fluid makes geometric decisions based on the video display it is run on, and that the package maintainer uses a lower screen resolution than I do, hence the crappy display.

Thursday, December 24, 2009

M-Audio / Freebob and Ubuntu 9.10

In setting up a new Ubuntu 9.10 install for audio work, I had to do a few things manually. There is an UbuntuStudio distribution, but I typically opt for stock Ubuntu and install what I need later, which is generally just the awesome audio stuff. If you go this route and want to use jack, be sure to also install the real-time kernel.


sudo apt-get install ubuntustudio-audio
sudo apt-get install linux-rt


I use an M-Audio FireWire Solo audio card and have used the freebob driver with success in the past. I was happy to see that it is now installed by default with qjackctl. But when I first started jack I got errors suggesting the driver was not loaded. I had to load the driver and change permissions before jack would start up.


sudo modprobe raw1394
sudo chmod a+rw /dev/raw1394


The chmod was transient, however: after rebooting, /dev/raw1394 no longer had the correct permissions. I added a file called 40-permissions.rules and gave the audio group permission to access the device. I failed to mention earlier that I also had to add my personal user to the "audio" group before jack would start up.


sudo vim /etc/udev/rules.d/40-permissions.rules
# IEEE 1394 (firewire) devices
KERNEL=="raw1394", GROUP="audio"
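
For reference, the same rule can also set the mode explicitly. This variant is just a sketch using standard udev match syntax; the 0660 mode is my assumption, not something from the install:

```
# IEEE 1394 (firewire) devices: owned by the audio group, group read/write
KERNEL=="raw1394", GROUP="audio", MODE="0660"
```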


So I was done right? No. After reboot I still didn't have permission to access the firewire device. ls told me why.


ls -altr /dev/raw1394
crw-rw---- 1 root video 171, 0 2009-12-24 19:55 /dev/raw1394


I guess by default /dev/raw1394 belongs to the video group. Well my user doesn't, and I'm not using firewire for video, so I switched it to belong to audio.


dextron@dextron:~$ sudo chgrp audio /dev/raw1394
ls -altr /dev/raw1394
crw-rw---- 1 root audio 171, 0 2009-12-24 19:55 /dev/raw1394


After another reboot, all is well. I'm wondering if I should have just left the group as "video", added my user to the video group, and changed my udev rule to match (video instead of audio). I suspect there is no single right answer, but if you have any insights, please leave a comment. Thanks!

Monday, December 07, 2009

Creating digital synth sounds using Python

I used to really be into electronic music. I had a bunch of keyboards, synths and digital samplers, and I would use them, along with a computer, to compose techno compositions. In fact, I took up C++ programming in the '90s with the intention of building a drum sequencer for Windows. Well, that project didn't get finished and I got sidetracked writing code for corporations; after all, you have to pay the bills first and find time to play around later.

Now seems like a good time to resume play.

Since Python is my language of choice these days, I thought I'd see what's out there by way of using the language to generate digital sounds. I found a wave generation library and a few sample snippets on the web, including code which can be used to produce a simple sine wave. I've been sampling since the 80's using Roland, Ensoniq and EMU samplers, and I have a pretty good understanding of the basics. But it turns out I had more to learn at the lowest level. Consider the following snippet of code, which can be used to generate a 90 Hz tone.


import wave, numpy

noise_output = wave.open('tone.wav', 'w')
noise_output.setparams((1, 2, 44100, 0, 'NONE', 'not compressed')) # mono, 16-bit

# num of seconds
duration = 4

# Hz per second
samplerate = 44100

# total number of samples
samples = duration * samplerate

# pulse per second
frequency = 90 # Hz

"""
The time of one sample is the inverse of the sample rate, and the period is the inverse of the frequency, so the number of samples is also the sample rate divided by the frequency.
"""
period = samplerate / float(frequency) # in sample points

"""
This is the phase increment.
"""
omega = numpy.pi * .2 / period

"""
Creates the x-axis set with 'period' number of items. numpy.arange(int(period), dtype = numpy.float) produces {0..489}, but since each value is multiplied by omega, the transformed set is in the range {0..0.627}.
"""
xaxis = numpy.arange(int(period), dtype = numpy.float) * omega

"""
This snippet calculates the sine of each value in xaxis and multiplies it by 16384. Here we're creating the y-axis data.
"""
ydata = 16384 * numpy.sin(xaxis)


"""
If we were to graph the sets now, we would see a rising sine segment. Resize creates an extended array that repeats the ydata chunk; the result looks a bit more like a saw wave than a sine wave due to the omega calculation.
"""
signal = numpy.resize(ydata, (samples,))

for i in signal:
    packed_value = wave.struct.pack('h', int(i))
    noise_output.writeframes(packed_value)

noise_output.close()



http://en.wikipedia.org/wiki/Direct_digital_synthesis
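
As the comments note, the omega above only sweeps a fifth of pi per period, which is why the repeated chunk comes out saw-like. For comparison, here's a minimal sketch (updated to current Python; the output file name and location are arbitrary) that uses the standard phase increment, 2*pi*f/samplerate, to produce a true 90 Hz sine:

```python
import math
import os
import struct
import tempfile
import wave

samplerate = 44100        # samples per second
frequency = 90.0          # Hz
duration = 2              # seconds
amplitude = 16384         # peak value, well inside 16-bit range

# The phase advances by 2*pi*f/samplerate radians per sample, so the
# waveform completes exactly `frequency` cycles every second.
step = 2 * math.pi * frequency / samplerate

frames = bytearray()
for n in range(duration * samplerate):
    sample = int(amplitude * math.sin(step * n))
    frames += struct.pack('<h', sample)   # 16-bit signed, little-endian

path = os.path.join(tempfile.gettempdir(), 'sine90.wav')
with wave.open(path, 'wb') as out:
    out.setparams((1, 2, samplerate, 0, 'NONE', 'not compressed'))  # mono, 16-bit
    out.writeframes(bytes(frames))
```

One full period is samplerate/frequency = 490 samples here, so the waveform lines up exactly at each repeat instead of producing the saw-like artifact.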

Friday, December 04, 2009

Find all non-binary files in a directory

You've got to love that there are so many different ways to accomplish this. Here's one:


find . -type f | xargs file | grep -v ELF


1. Recursively finds all files in current directory
2. Pipes to file, which displays information about each file type
3. Pipes to grep, which checks to see that ELF is NOT in result text

file is a handy command. On Linux, binary files will show something like the following.


dvenable@dvenable:~/src$ file bin.exe
bin.exe: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.15, not stripped
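
Another way (a sketch; the demo directory and file names are made up): grep's -I flag treats binary files as non-matching, so listing the files where the empty pattern matches yields only text files.

```shell
mkdir -p /tmp/textfind_demo && cd /tmp/textfind_demo
printf 'some text\n' > readme.txt     # a plain text file
head -c 3 /dev/zero > blob.bin        # NUL bytes make grep treat it as binary
grep -rIl '' .                        # recursively list non-binary files
```

This also catches binaries that aren't ELF (images, archives, and so on), which the grep -v ELF version would let through.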

Thursday, December 03, 2009

Save with CTRL-S using VIM and copy/paste with CTRL-C, CTRL-V

Oh now this has got to be the best VIM find in years. I love VIM but get sick of copying from a web browser and having to visual paste in strange ways. It's my one holdover from Windows, the desire to use CTRL-C to copy and CTRL-V to paste. It turns out that this is easily remedied by modifying your .vimrc file and adding VIM mappings.


nmap <C-V> "+gP
imap <C-V> <ESC><C-V>i
vmap <C-C> "+y


That's it. Now enjoy the most standard shortcuts in your favorite VIM editor. The mappings looked so easy that I thought I'd try one of my own. Another thing that causes my fingers to grow weary is saving. The typical sequence is: press escape, then :w enter. I save all the time because I'm paranoid and I miss CTRL-S which is so much easier on the hands.


map <C-S> <ESC>:w<CR>
imap <C-S> <ESC>:w<CR>


Oh yes it is a sweet discovery!

Monday, November 30, 2009

Multi-file search and replace one-liner


find . -name "*.ksh" -exec sed -i 's/oldtext/newtext/g' {} \;
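
A cautious variant (a sketch; the demo directory and file are made up) keeps a .bak copy of every file sed touches, which makes an errant pattern easy to undo:

```shell
mkdir -p /tmp/sedswap_demo && cd /tmp/sedswap_demo
printf 'oldtext here\n' > script.ksh
# -i.bak edits in place but saves the original as script.ksh.bak first
find . -name "*.ksh" -exec sed -i.bak 's/oldtext/newtext/g' {} \;
```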

Tuesday, November 17, 2009

Centos 5, SELinux, and Bugzilla

I really don't like SELinux. There. I said it.

After installing bugzilla from the yum repository I found that it could not send email notifications. Why? SELinux. The solution? Here.


[dvenable@somecorporateserver ~]$ sudo chcon -R --reference=/var/www/html /usr/share/bugzilla

Friday, November 13, 2009

KVM on Centos 5

I've been a Xen man for some time now, but yesterday my employer asked me to set up KVM on one of our Centos 5 servers. It was a snap.

Installation

yum install kvm qemu virt-manager libvirt


The next step is to modprobe the kvm module for your architecture. Use the module that is right for you:


modprobe kvm-intel
...or...
modprobe kvm-amd


If all goes well, you should have the kvm module loaded on your system by now. You can check this by running:


/sbin/lsmod | grep kvm


Setup the Bridge

If you want to access the virtual machine from the LAN, you'll need to set up a bridge on one or more of your network interfaces. Once this step is complete, virt-manager will take care of the remaining details when setting up a virtual machine using the GUI.

Check that a default bridge is configured. Run brctl show and you should see something like this.


[root@server ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes


What you should see is a line that includes vnet0. Don't screw the pooch like I did and use virbr0 if vnet0 is not listed: virbr0 is used by NAT and you will hose the system if you go this route. If vnet0 is not listed, restart networking. If it is still not present, use Google to resolve the issue before continuing. Your goal is to see this:


bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
vnet0           8000.000000000000       no


Create a file in network-scripts called ifcfg-vnet0...


vim /etc/sysconfig/network-scripts/ifcfg-vnet0

...and add the following if you want to configure the bridge for eth0:


DEVICE=vnet0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes

Now add the bridge:


brctl addif vnet0 eth0

Now you'll see:


[root@server ~]# brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
vnet0           8000.003048625732       no              eth0


To make this sticky (available after reboot), add BRIDGE=vnet0 to /etc/sysconfig/network-scripts/ifcfg-eth0.
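
For reference, after adding the BRIDGE line, ifcfg-eth0 might look something like this (a sketch; the HWADDR value is made up):

```
DEVICE=eth0
HWADDR=00:30:48:62:57:32
ONBOOT=yes
BRIDGE=vnet0
```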

Create the Virtual Machine

It's possible to create a virtual machine from the command line like this:


qemu-kvm -hda windisk.img -cdrom winimg.iso -m 1024 -boot d

However, it's probably easier to manage things using virt-manager. Launch from the UI or from the command line.


virt-manager

Click on the first row to highlight it, then right-click and select New to open the wizard. Follow the steps, including the choice of ISO and where to write the image. When you get to the network screen, choose Shared Physical Device and you will see your bridged eth0 interface. If you skipped bridging, you will not have this option, so take a different one.

Once complete, start up the VM. Done.

Friday, November 06, 2009

Migrating schema from Oracle 11g Enterprise to 10g Express Edition

Here's the situation: you are working in an environment that uses Oracle Enterprise 11g, you want to do some development on your personal computer using your own database, and for whatever reason you cannot simply install a full-blown non-free version of Oracle on your laptop.

Your best free option is to install Oracle 10g XE, especially if you are running Ubuntu because installation is a snap.

http://www.oracle.com/technology/tech/linux/install/xe-on-kubuntu.html

Of course you should know that XE is stripped down, so read up and make sure that you won't be losing features that you'll absolutely need to run your production or test database.

The first step is to export your database. If you're like me and you attempt to simply export using the exp tool that ships with 11g, you will discover that there are incompatibilities between versions that will prevent you from importing with 10g XE.

The easy solution? Just use the 10g version of the tools that ship with XE to dump the 11g database. In my environment, everything is Linux, so my quick-and-dirty approach was to create a temp directory on the 11g server and scp the 10g version of exp over to it.


cd $ORACLE_HOME/bin
scp exp me@11gserver:~/temp


Of course I don't have the needed 10g libs on the 11g server, so I just copy all the Oracle libs over to the same directory.


scp /usr/lib/oracle/xe/app/oracle/product/10.2.0/server/lib/* me@11gserver:~/temp


Now I ssh over to the 11g server and set up my environment to add the current directory (temp) to the LD_LIBRARY_PATH.


export LD_LIBRARY_PATH=.:/usr/lib32:/usr/local/lib32


Now to dump...


exp SYSTEM/stuff@11gserver:1521/yeah.stuff.com FILE=export.dmp OWNER=me_the_owner


Now copy the resulting export.dmp file back to your target computer, the one running 10g XE. (You know how to do it!)

Okay, now cross your fingers and attempt the import, hoping to succeed though you just know that critical 11g features won't be supported on XE and will probably cause all your effort to be for naught. But do it anyway.

In my case I now yell "Doh!" because I get the same damn error I had before. Did I mention the error? It's the one I got on my initial import attempt, with the export file created using 11g's exp. This is what I see on my development laptop.


imp SYSTEM/xx FILE=/home/me/export.dmp FULL=y

Import: Release 10.2.0.1.0 - Production on Fri Nov 6 08:32:09 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.


Connected to: Oracle Database 10g Express Edition Release 10.2.0.1.0 - Production

IMP-00010: not a valid export file, header failed verification
IMP-00000: Import terminated unsuccessfully


I check the head...aha I see my error. I was still using the darn 11g exp because I failed to put the 10g exp on the path on the 11g server before I ran exp.


head export.dmp
EXPORT:V11.01.00
DSYSTEM
RUSERS


Oh, and in case you are thinking you might just break out a HEX editor and change the EXPORT version in the header to the correct version, it won't work. Yes I thought of that too.

Okay, back to the 11g server to re-run. This time notice the ./ before exp to pick up the correct version of the executable.


./exp SYSTEM/stuff@11gserver:1521/yeah.stuff.com FILE=export.dmp OWNER=me_the_owner


Once you "get it right", move the export back to your development box. In my case, I had to create matching tablespaces and users before the import would succeed. Once that was done, I ran the following...


imp SYSTEM/xx FILE=/home/me/export.dmp FULL=Y
...
lots of output here, such as
row rejected due to ORACLE error 12899
...
Import terminated successfully with warnings.


I believe my warnings are due to character set incompatibilities, and since I'm set to blow up this copy of the database in a development environment anyway, I'm not going to research them now.

I fired up SqlDeveloper, connected to my 10g XE instance, and selected some data. No problem! Who says you can't move an Oracle Enterprise 11g database to Oracle 10g XE?

Friday, October 30, 2009

Netbeans and Make

NetBeans for C++ uses the standard make tools. The top-level Makefile is the same for every project and is the correct place to add custom build steps. The files under the nbproject subdirectory are managed by the NetBeans editor, so it is best to avoid making changes there.

There are hooks in the top-level make where various custom steps can be inserted.


#
# There exist several targets which are by default empty and which can be
# used for execution of your targets. These targets are usually executed
# before and after some main targets. They are:
#
# .build-pre: called before 'build' target
# .build-post: called after 'build' target
# .clean-pre: called before 'clean' target
# .clean-post: called after 'clean' target
# .clobber-pre: called before 'clobber' target
# .clobber-post: called after 'clobber' target
# .all-pre: called before 'all' target
# .all-post: called after 'all' target
# .help-pre: called before 'help' target
# .help-post: called after 'help' target
#
# Targets beginning with '.' are not intended to be called on their own


You can also add your own targets, just as if you were building a makefile from scratch.
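
For example, a .build-post hook that copies the finished binary somewhere after every build might look like this; the dist path and the myapp name are my assumptions, not something NetBeans generates:

```make
# .build-post runs after the 'build' target; .build-impl is the
# generated dependency NetBeans expects it to keep.
.build-post: .build-impl
	cp -f dist/Release/GNU-Linux-x86/myapp /tmp/myapp
```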

Tuesday, October 27, 2009

Generating HTML color syntax highlighting from PRE tag

I'm working on a Django template tag that transforms code embedded within a PRE tag into nicely formatted HTML with color syntax highlighting. To test, I inserted this block of code here, the same code I wrote to do the conversion.


#!/usr/bin/python

from django import template
from django.template.defaultfilters import stringfilter
from django.utils.safestring import mark_safe
from BeautifulSoup import BeautifulSoup
from pygments import highlight
from pygments.lexers import guess_lexer, get_lexer_by_name, TextLexer, BashSessionLexer
from pygments.formatters import HtmlFormatter

register = template.Library()

@stringfilter
def tocode(value):
    try:
        commentSoup = BeautifulSoup(value)
        for pre in commentSoup.findAll('pre'):
            # Turn <br> tags back into newlines before highlighting.
            for br in pre.findAll('br'):
                br.replaceWith('\n')
            joined = ''.join(pre.findAll(text=True))

            # An explicit class attribute names the lexer; otherwise guess.
            if pre.has_key('class'):
                lex = get_lexer_by_name(pre['class'], stripall=True)
            else:
                try:
                    lex = guess_lexer(joined)
                except Exception:
                    lex = BashSessionLexer()
            formatter = HtmlFormatter(linenos=True, cssclass="source")
            pre.replaceWith(highlight(joined, lex, formatter))

        return mark_safe(commentSoup)

    except Exception:
        return value

register.filter('tocode', tocode)


This is how it works: You pull a feed from, for example, blogger, and look for PRE tags, assuming that there is something interesting (like a code snippet) inside. After discovering that Pygments' guess_lexer has a hard time identifying most of the snippets I feed it, I decided to make it possible to explicitly specify the PRE content type. I do this by tagging the PRE element with a class name. In this case, I use the class name verbatim to call the get_lexer_by_name method. So this...

<pre class="python" >

...will look up the python lexer and

<pre class="php" >...

will look up the php lexer.

The PHP lexer is very disappointing actually.

Originally, I set the code to use the TextLexer in the event that the PRE class attribute was not present, but this was boring. I found that the python lexer produces more appealing results for almost all snippets, so now I'm using it as the default when the PRE class is not specified. Of course, not all content will be python-lexable, so in the case of an exception I fail over to the TextLexer. The exception handling chain is a bit ugly but it gets the job done.

Thursday, October 08, 2009

Common git operations

Clone repository via ssh

git clone dvenable@dvenable:~/git/python


Show files that have changed:

git status


See a file's changes in the last commit:

git diff HEAD^ HEAD filename


See changes for a particular commit:

git show HEAD


Show commit log for a particular file:

git log filename.py


Show how your working copy of a file differs from branch master:

git diff master index.html

Friday, September 18, 2009

Ignite Tulsa

Last night's Ignite Tulsa event was a blast. The beer was cold, the first one was free, and the talks were often amusing.

The best presentation had to be "If someone gives you roses you should be pissed off" by Matthew Galloway. I follow Galloway on twitter, but, as one not twitter obsessed, haven't really paid too much attention to him before. I simply knew him as the guy who had T-Shirts made up that said, "I'm following @mattgalloway #supergenius".

Well a super genius he is, or if that's overstating it, he is at least a super dynamic speaker. His observations, from roses to Harley Davidson motorcycles, were poignant and so damn true. The point of his five minute twenty slide rant: Have an original thought already.

Hopefully he'll post his slide deck or a video will be made available.

I also want to give props to my former colleague and fellow entrepreneur Geoffrey Simpson. His presentation "Challenging Yourself" was a real inspiration. Geoff is always trying new things, and presenting to a crowd that size might have been one of those first steps he described. His conviction, energy, and positive message definitely won over the crowd.

There were other notable presentations, too many to mention. Here's a last one: @OKLibrarian was hilarious.

Friday, August 21, 2009

Verified By Visa

I've been studying Verified By Visa from the Issuer perspective for an upcoming project. You can get an official Visa overview by reading this document.

Verified By Visa is more-or-less the brand name for the 3-D Secure service. 3-D stands for "Three Domain", referring to the three parties (Visa calls them domains) that provide the software that comprise the service.

Issuer Domain
  • Implementor: Card holder account; Card issuer or processor
  • Servers: Access Control Server, Authentication Enrollment Server (or pages)

Interoperability Domain
  • Implementor: Visa
  • Servers: Visa Directory Server, Authentication Server

Acquirer Domain
  • Implementor: Merchant
  • Servers: Web Server fitted with Merchant Server Plug-in


The issuer implementation is a relatively straightforward secure HTTPS web service. It is even possible (and permitted) to use a single web server instance to fulfill the roles of both Access Control Server and Authentication Enrollment Server. The Issuer server accepts requests from merchant web pages via web requests sent AJAX style and makes requests of Visa's Interoperability Domain servers. All communication is done using a straightforward XML protocol.

It's quite the network dance that goes on between all of the players. Visa apparently distributes a JavaScript library to each merchant for use on their web sites that abstracts the details of the interaction for them, hiding the complexities of the communication to and from the Issuer and Visa servers.

I created a nice diagram that details the communication flow between the 3-D parties, but I probably shouldn't publish it. Ask me if you've got questions.

Wednesday, July 22, 2009

cx_Oracle on Ubuntu 9.04 64 bit

I had a heck of a time getting cx_Oracle for 64 bit. Using the 32 bit version didn't work either, possibly due to the same underlying problem that I eventually fixed locally.

Compounding the problem: Sourceforge was buggy as hell today and I couldn't download anything due to excessive redirects. What's up with Sourceforge?

Finally found the package here:

http://rpm.pbone.net/index.php3/stat/6/idpl/12200190

After installing, still no dice. Importing cx_Oracle failed with a message saying the module could not be found. A search of my directories showed that the RPM dropped the site-packages files in the wrong location, at least wrong for Ubuntu 9.04. This fixed it:

sudo cp /usr/lib/python2.6/site-packages/cx_Oracle-5.0.1-py2.6.egg-info /var/lib/python-support/python2.6
sudo cp /usr/lib/python2.6/site-packages/cx_Oracle.so /var/lib/python-support/python2.6

/var/lib instead of /usr/lib... wtf

Monday, May 04, 2009

Startup Lessons #3 - What will it take to feed the founders?

If you can afford to develop your business plan and prototype after hours and on the weekend--- while keeping your day job---do it.

If you have a little bit of startup capital, assume that you will never receive another dime.

Be realistic about what it is going to take to feed your founders. Can they live on the amount in-hand for one year? If not, don't quit the day job.

Our business was born of four founders, all of whom were fairly mature in their careers. It felt like we were making big sacrifices by paying ourselves approximately 60% of our former incomes. Still---the combined amount required was considerable. With what we raised we could have paid two of the founders full-time salaries for a year. But we all went in full-time, leaving us with only 6 months running room. It wasn't enough.

Six more months would have afforded us the opportunity to complete a pilot we began at a financial institution---a pilot with great promise.

So why did we decide the four founders should quit their day jobs up front? A number of reasons, including a strong belief that, to make the most of our combined energies, we all had to be 100% committed to the effort, and that creativity would best flow if we could all work together in the same room. There may have even been a "fun" factor: we all wanted in on the excitement of rapidly building something from the ground up.

What is the takeaway? Don't let irrational exuberance into the equation. Plan for the worst. If you can't support your founders for a year with what you have in-pocket, think hard. Stick with the day job or find other creative ways to support yourself while you build the company.

Tuesday, April 28, 2009

Correcting VIM indention, Python

First off, get your VIM editor up to speed.

http://henry.precheur.org/2008/4/18/Indenting_Python_with_VIM.html

Check that your filetype and settings are correct...


:filetype

filetype detection:ON plugin:ON indent:ON


Now issue the magic VIM command to re-tab the entire file.


gg=G


Or here's an alternative that converts all tabs to spaces


:set tabstop=4
:set shiftwidth=4
:set expandtab
:retab

Monday, April 27, 2009

Revisiting Pyshards

So I had this idea last year after reading several white papers on database scaling techniques. I also had about a week's time between paid projects to spend however I liked. So I turned my ideas into pyshards, a quick and dirty horizontal database partitioning library. At the end of that week, I published my effort on Google Code for a number of reasons:


  1. I really couldn't find a Python-based toolkit like the one I was imagining at the time, and I needed it.

  2. I was looking for Python gigs and wanted to be able to easily refer hiring technologists to something I had written in Python. (Most of my previous work had been written in Java or C++, or could not be made public.)

  3. After years of using great free and open source software, I was ready to give something back.

  4. I was curious if others would volunteer to help me build the library.



I received a number of messages from other coders saying they were looking for a tool like the one I was building and would be interested in joining the project. But the messages were about as far as it went. Actual participation from the outside was nil.

I went on to use pyshards in my next project, but not quite in the capacity I had originally envisioned. I did use the tool to configure my shards and I used its distribution mechanism to evenly spread data across the many databases. I didn't end up using it for querying, as I needed something a little different. In the following months I went on to create a new page (in the Django sub-project) that visually communicated the shard organization and remaining capacity, but that was the only new work done.

Though the library was imperfect and incomplete, it certainly worked for my purposes. I gave it little thought over the next several months. My hands were full building a new system for the company I had started with my partners.

Jump ahead to PyCon 2009 in Chicago. I had a few hours to kill on the last day before catching my plane and decided to attend an OpenSpaces session called "Is my code Pythonic?" I had intended to simply listen in, but when no one offered up their code for review, I volunteered. A lot of my Python code is proprietary, so I decided to offer up a file from pyshards for review, since it was public. There was a LOT of feedback.

At this point you may be wondering, what do they mean by Pythonic? I generally understood it to mean that you should follow the Zen of Python coding principles and stick to the "pythonic" coding style.

In case you missed it, the Zen of Python is always near and dear if you are working in a Python interpreter.


me@mrroboto:~$ python
Python 2.5.2 (r252:60911, Jul 31 2008, 17:28:52)
[GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import this
The Zen of Python, by Tim Peters

Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than *right* now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!



So that takes care of the Zen, but what is the Pythonic coding style? There are lots of opinions, but in general it is whatever the core developers and expert users say it is, and that evolves over time even as the fabric of the community evolves.

Jump to the present. This morning I'm finally sitting down to take a good look at the patch that Jack Diederich submitted as well as the notes I took while discussing the code with Moshe Zadka. And as I bring the project back up for testing, I remember that the setup and configuration steps were very incomplete. Okay Devin, quit blogging and get to work on it.

Friday, April 24, 2009

Startup Lessons #2 - It takes a customer

There are startups and then there are startups. We were never going to be a "web-based" company. We had decided to use web-centric technology to address specific enterprise problems, so ultimately we would be an enterprise software company. The problem we set out to solve was summed up by one angel like this, "Everybody knows and understands that data behind the firewall is a mess." We hoped to fix that.

My partners had worked on a number of large telco projects that ultimately failed, for a variety of reasons. We understood what the corporation is left with after the legion of offshore programmers and consultants leave. We'd watched systems go live only to be crushed under the weight of massive data. We got that no matter how good the user interfaces, reports and controls seemed to be, what gets delivered somehow doesn't quite provide the value imagined months or years before getting into the project. There is often disappointment.

We came up with a concept that would make data easy to access and integrate and that would be massively scalable. We decided to jump. We were going to launch a startup.

But this isn't your typical internet startup. There is a whole different set of problems when launching a software company that intends to sell to Fortune 1000 companies. With a web-based company, you build it, and when you are ready to go, you make the site public. You are deployed. Typically the web startup's next problem is that no one shows up for the party. An enterprise startup faces a different set of problems entirely.

But for enterprise software, you have to build it, test it and then deploy it. That takes time---time to not only build the software, but to decide on a target market, to build customer relationships, to persuade someone at the company to let you in the door. It takes time to see the initial deployments through, to make it work in the real world, to seal a deal or two.

All of this comes out as you work through the business plan; you realize that it is going to take big money, bigger than you originally imagined, to make it through to those first customers. It's going to take VC money. It seemed to us, at the time, to be the only way forward.

But there is a chicken and egg problem: The VCs won't give you the time of day until you have those first customers. Don't kid yourself into believing anything else. Seriously. This is not the late nineties and no one is lining up to throw money at every clever software idea. I swear.

Our team has a combined 50 years of experience in the telco space. We have a salesman with a proven track record, a versatile product manager, a data integration specialist with a knack for people, and a software architect. We knew we could build it and had good long-standing relationships in our space. The combination of our ideas, our relationships, and our technology gave us confidence that we would be able to get our product deployed and eventually sold. The VCs and angels liked hearing this, but they asked, what about the customer?

We know about big problems at Verizon and AT&T, we told them. Are you in talks with them? Well, not them specifically, but we have talked to a number of other interested parties. You see, that's why we need your money. We need it to see us through until the software is mature enough and while we work up the relationship hierarchy looking for the best champion within these companies.

That's when they tell us to come back once we're deployed at a customer site.

Using our combined efforts we were able to raise some initial money. It wasn't much compared to what it would take to keep our team working full-time for a year (I'll get to this in another post), but it was a start and it gave us hope. We were able to take advantage of a government program to secure matching seed funds. But additional funding did not come as quickly as hoped and our rope grew shorter and shorter.

I built the software while my partners worked every possible existing relationship. The feedback was always positive. Yes, they really liked our technology. Yes, they wanted to work with us. Yes. Yes. Yes. By February we had developed a small sales pipeline, but we were out of money. With no customer, there could be no investment. With no investment, where does this leave the company?

I have omitted the fact, thus far, that the economy tanked during our first 10 months. I suspect that had it not, we would have secured funding from one or more of the angels we were courting. There is of course no way to really know that.

At this point my partners are looking for services work. They will eventually find themselves engaged on location with one of our company's target customers. At that time, they will look for problems that our software, or new software, can solve. It may be that they'll learn of new problems that require a completely different solution, but once they find it, they'll be in a great place to recommend a software solution. They will have a technology foundation---that they own---that can be adapted to solve a number of data classification, access, performance and integration problems. They will present their ideas and have a much easier time of getting them deployed than they would if they were coming in from the outside.

This solves a number of problems. They will be able to articulate clearly the specific problem to the next investors, which was a bit of a problem during the first go-round. (Clearly they won't be hired by a client unless there is a real problem to solve in the first place!) They will be getting paid for their services work, eliminating the need to use investment to pay their salaries. And most importantly, they will be engaged with a real-world CUSTOMER.

Hindsight really is 20/20. We probably should have gone this route on month one instead of month ten.

Wednesday, April 15, 2009

Startup Lessons #1 - The Valley of Death

If nothing else I've gained considerable insight into the startup process during these last 10 months of chasing the dream. The lessons are abundant.

You've heard it said that it's all about the journey, not the destination. Well, not when you are a startup. The destination is everything.

But there is much to be said about the journey as well. If you are an entrepreneur and thinking of going off on your own, you won't find a lot of war stories. I imagine that those who succeed guard their formula for success and those who don't prefer to pretend that nothing happened. The way I see it we've neither failed nor succeeded at this point, so why not reflect on a few of the lessons learned?

Personally, the journey has been both rewarding and costly. There were a lot of great experiences. I met fellow entrepreneurs. I learned about what it takes to look someone in the eye and ask them for money. I was able to practice the art of giving presentations to Fortune 1000 CTOs and VPs, and of course, I was able to build a substantial---though incomplete---enterprise software system using some of my favorite technologies. And there is so much more---consensus building when working with partners, learning to recognize when you are self-deluding, finding the energy to take action on the days when it seems there is no hope.

I probably wouldn't be blogging about this today if everything with the business had gone as planned. Friends and former co-workers keep asking me, "Is the company dead?" Well, no. But it is true that we're not where we had hoped to be at this point, that's for sure.

We recently held a second meeting with an Oklahoma-based VC and he said it best: "You're in the valley of death," a chasm between the initial seed funding and a substantial VC round. It's where companies that have burned through their initial funding but have yet to reach revenue reside. Word to the wise: a substantial revenue-generating customer relationship is paramount; this must happen before any serious investment consideration will be given to you by a venture fund.

If you go back and read our business plan, at this moment in time we should have a development staff, a beta release, and we should be engaged with at least two paid pilots. But that didn't happen, for a lot of reasons. So what went wrong, and is there any way to turn things around now? I'll reflect and blog on it over the next several weeks.

A List

So many things are going on right now. Where to begin? Here is a list of things that I either need or want to do in no particular order.

  • Blog about the startup experience
  • Find gainful employment
  • Find the time to contribute to Catherine Devlin's sqlpython project
  • Find the time to test the diff that Jack Diederich contributed to the pyshards project, and then refactor everything else
  • Take my tax filing to the post office
  • Evaluate plone, drupal, wordpress and...
  • Revamp my home page (a simple resume currently)
  • Figure out what I want to be when I grow up
  • Find the time to finish recording a new electronica song I started writing a few weeks ago
  • Keep working on my contract job---the only item in here that pays---but without getting so buried in it that I can do nothing else
  • Watch out for typos or misspellings on my blog!

Sunday, March 29, 2009

Brave?

I did what I suspect was a fairly brave thing at PyCon. I attended an Open Space session entitled "Is This Pythonic?" No one else was willing to put their code in front of the Python core developers (Jack Diederich and Moshe Zadka), so I offered up one of the files from my PyShards project. Now, keep in mind that I had only decided that Python would be my language of primacy about 6 months before writing the file that they reviewed. I tend to use the common subset of programming language features so that I can hop back and forth between languages without making my brain explode. Nonetheless, I wanted to see what it would take to make my code acceptable to the most pedantic of Python programmers, so I volunteered.

Check out the result:

http://jackdied.blogspot.com/2009/03/is-this-pythonic.html

Jack could not pythonicify the code to his satisfaction without relying on newer and trunk Python features. At the time of his writing, he was still not satisfied with his result.

It was an interesting, if somewhat nerve-wracking, exercise.

Sunday, March 22, 2009

Django templating

One of my contacts, John, wrote me yesterday, asking for some conceptual help with the Django templating system. I broke down template inheritance for him, using the following very simple example.

I told John...


I typically use Django's template inheritance in this way. A base page handles most of the boilerplate stuff and then I specialize in the inherited pages. URLs and the views (Python functions) also get involved.

Here's a super simple page. (Imagine that each code snippet lives in its own file.) This one is base.html.


<html>
<body>
Part One
Part Two
</body>
</html>


I need two more things to display a page using the Django system, the URI routing expression in urls.py and a view. (I'm sure you get this, but to be clear I just want to call out all of the parts.)

The URI expression:


(r'^base/$', base_function),


base_function is the view function, typically defined in another file. Something like this will display the page without really doing anything fancy with data.



from django.shortcuts import render_to_response

def base_function(request):
    return render_to_response("base.html")



From a browser, you go to /base/, and you see...
Part One Part Two

Now you want to either extend or replace a section of the template. First, let's extend. We modify the original template and add markers for the areas we will be extending or replacing.

<html>
<body>
{% block one %} Part One{% endblock %}
{% block two %} Part Two{% endblock %}
</body>
</html>

Suppose you just want to change the second block. So you create a file called mod2.html.

{% extends "base.html" %}
{% block two %} A different thing to render in Part Two{% endblock %}

You add another URL expression in urls.py. So now it looks like...

(r'^base/$', base_function),
(r'^mod2/$', mod_alt_two_function),

and you add a second view function...



def mod_alt_two_function(request):
    return render_to_response("mod2.html")



From a browser, you go to /mod2/, and you see...
Part One A different thing to render in Part Two

A third view might replace both the first and second sections...

(r'^base/$', base_function),
(r'^mod2/$', mod_alt_two_function),
(r'^all/$', all_mod_function),




def all_mod_function(request):
    return render_to_response("all.html")



and the template, all.html, looks like this...

{% extends "base.html" %}
{% block one %} Much different than
{% endblock %}
{% block two %} the original{% endblock %}

Producing this when you hit /all/ in the browser:

Much different than
the original

So that's the basics with template inheritance. You can also just include chunks of other templates by including them right in the middle of the page. And you can wrap the includes in logic, so that based upon the data you are passing to the template from the view function, you can make decisions about what to include. Here is a small snippet from one of my pages that includes one of four different HTML template 'chunks' based upon the values of the variables "result.format" and "result.render_hint".




{% ifequal result.format 'pipe' %}
{% ifequal result.render_hint 'email' %}
{% include 'email_result.html' %}
{% endifequal %}
{% ifnotequal result.render_hint 'email' %}
{% include 'table_properties.html' %}
{% endifnotequal %}
{% endifequal %}
{% ifequal result.format 'text' %}
{% include 'text_result.html' %}
{% endifequal %}
{% ifequal result.format 'tag' %}
{% include 'tag.html' %}
{% endifequal %}
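For illustration, here is the same selection logic written as plain Python. The Result class below is hypothetical---a stand-in for whatever object the view actually passes to the template as "result"---but the branching mirrors the ifequal/ifnotequal tags above.

```python
# Hypothetical stand-in for the object the view passes to the template.
class Result(object):
    def __init__(self, format, render_hint=None):
        self.format = format
        self.render_hint = render_hint

def choose_include(result):
    # Mirrors the template's ifequal/ifnotequal logic, for illustration only.
    if result.format == 'pipe':
        if result.render_hint == 'email':
            return 'email_result.html'
        return 'table_properties.html'
    if result.format == 'text':
        return 'text_result.html'
    if result.format == 'tag':
        return 'tag.html'
    return None

print(choose_include(Result('pipe', 'email')))  # email_result.html
```

Seeing it flattened out like this also makes it obvious why pushing the decision into the template keeps the view functions dumb---the view just hands over the result object and the template picks the chunk.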

Saturday, January 17, 2009

Android Pt. 3: Photo uploading

I found enough bits and pieces of code to cobble together a photo uploader. This blog post was helpful:


http://itp.nyu.edu/~dbo3/blog/?p=122


Speaking of helpful, once you're running an Android app in the emulator, you might wonder where your Log statements are going. They don't show up on standard out in the Eclipse editor, but you can tail them with this command:


adb logcat *:W
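The *:W filter keeps everything at warning severity and above. As a rough sketch of what that filter does, here's the equivalent in Python, run over a few made-up logcat lines in the classic brief format (the tags and messages are hypothetical):

```python
# Mimic "adb logcat *:W": keep lines at warning severity or above.
# Assumes logcat's brief format, e.g. "W/MyTag(  123): message",
# where the leading letter is the priority.
PRIORITIES = "VDIWEF"  # verbose, debug, info, warn, error, fatal

def passes_filter(line, threshold="W"):
    pri = line[:1]
    return pri in PRIORITIES and PRIORITIES.index(pri) >= PRIORITIES.index(threshold)

log = [
    "D/Camera(  42): opening device",
    "W/Uploader(  42): network is slow",
    "E/Uploader(  42): upload failed",
]
kept = [line for line in log if passes_filter(line)]
print(kept)  # the D/ line is filtered out
```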


Where do I want my uploaded photos to go? I have been thinking about writing a small appengine app which could be used for coordinating, managing, and searching for the photographs. Maybe I'll use flickr for photo storage, at least initially.

Next up I need to find the code to upload the photos to flickr. Knowing that I will need an API key, I requested and received one from flickr. I had heard that flickr uses a hybrid REST API, and sure enough, after searching around I found no pure REST interface for authenticating and uploading.

I've decided to test the basic uploading steps in Python first. Once I get it working, I'll go back and write the more verbose version in Java.

I found that the first step in uploading is going through an authentication handshaking sequence.
First up, grab a frob from flickr.


import md5
import xmlrpclib

uri = "http://api.flickr.com/services/xmlrpc/"
key = YOUR_FLICKR_API_KEY
secret = YOUR_FLICKR_SECRET
proxy = xmlrpclib.ServerProxy(uri)
sigstr = secret + "api_key" + key
sig = md5.new(sigstr).hexdigest()
# The equivalent REST-style URI (not used below):
resturi = "http://api.flickr.com/services/rest/?method=flickr.auth.getFrob&api_key=" + key + "&api_sig=" + sig
result = proxy.flickr.auth.getFrob({'api_key': key, 'api_sig': sig})

print result
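As an aside, the md5 module is deprecated in newer Pythons; the same signature can be computed with hashlib. A minimal sketch, with obviously fake placeholder credentials standing in for a real key and secret:

```python
import hashlib

# Placeholder credentials -- substitute your real flickr key and secret.
key = "0123456789abcdef"
secret = "fedcba98"

# flickr signs a call by md5-hashing the secret followed by the
# parameter names and values concatenated in sorted order
# (here there is just one parameter, api_key).
sigstr = secret + "api_key" + key
sig = hashlib.md5(sigstr.encode("utf-8")).hexdigest()
print(sig)  # a 32-character hex digest
```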



At this point, I must take a break and rejoin the human race. More on this next time.