Wednesday, November 24, 2010

Thank God for Tripper McCarthy

A long time ago, in our galaxy, Tripper McCarthy posted this:


Tripper McCarthy Tue Jun 24 2003 19:19:31 GMT-0500 (CDT)

There have been several posts asking how to change the body of a message
(not the header) from within a handler. Here is a way to pull it off. It's a
little strange, but it works.

public void invoke(MessageContext msgContext) throws AxisFault {
    try {
        javax.xml.soap.SOAPMessage soapMessage =
            msgContext.getRequestMessage();
        String oldRequest =
            soapMessage.getSOAPPart().getEnvelope().toString();

        // The element name in the original post was mangled by the HTML;
        // "<someElement" below is a placeholder for the opening tag of the
        // element whose body you want to change.
        int index = oldRequest.indexOf("<someElement");
        index = oldRequest.indexOf(">", index);
        String newRequest = oldRequest.substring(0, index + 1) +
            "5" +
            oldRequest.substring(index + 1, oldRequest.length());

        ByteArrayInputStream istream =
            new ByteArrayInputStream(newRequest.getBytes());
        Message msg = new Message(istream, false);
        msgContext.setRequestMessage(msg);

    }
    catch (Exception e) {
        e.printStackTrace();
        throw new AxisFault(e.getMessage());
    }
}


Thanks Tripper. I've been beating my head for hours trying to modify the message using the deeply-nested object model in Handler. Your solution cut to the chase.

Friday, October 22, 2010

RedHat Enterprise 5 and Python

Update:

I found it difficult to make Python 2.6 the default distribution for RHEL. It jacks up YUM when you do it. So I'm abandoning that approach altogether. The following notes were taken before I decided to take a different approach.



Python 2.4 is the version of Python that ships with RedHat Enterprise Linux 5. This will never do.

I didn't remove 2.4 because several sources advised against it, saying that RedHat needs to keep that version around.

Suppose you want to go to version 2.6. Here's what to do: either install from a repo (yum install python26), which requires that you enable EPEL, or build Python from source. It will install alongside 2.4. When you next run Python you will find that the OS is still defaulting to 2.4. To fix this, run update-alternatives.


update-alternatives --config python


Run Python again and your system wide default should now be the latest version you installed.

Alert: After building my own 2.6 and following the process above, I discovered that I had inadvertently screwed up YUM. Beware this process until I post final comments.

Shrink PDF file on linux (command line)

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -sPAPERSIZE=a4 -dBATCH -sOutputFile=output.pdf input.pdf

Thursday, October 14, 2010

Convert 32 to 16 bit WAV (Linux) recipe

Check file info:



devin@studio:~/Music/samples$ sndinfo funky.wav
virtual_keyboard real time MIDI plugin for Csound
PortMIDI real time MIDI plugin for Csound
PortAudio real-time audio module for Csound
util sndinfo:
funky.wav:
srate 44100, stereo, 32 bit WAV, 9.835 seconds
(433737 sample frames)


Convert to 16 bit:

sox funky.wav -b 16 funky16.wav

Thursday, September 09, 2010

cx_Oracle and out cursor

If you need to call an Oracle stored procedure using cx_Oracle, and one of the arguments is an "out" cursor, this is how it is done.



connection = cx_Oracle.connect(connstr)
cursor = connection.cursor()
out = cursor.var(cx_Oracle.CURSOR)

items = cursor.callproc('EXAMPLE_PKG.get_some_data',
                        ['02-SEP-2010',
                         '03-SEP-2010',
                         332123.0,
                         896798.0,
                         68567.0,
                         'xyz',
                         out
                        ])

print items[6].fetchall()


A list of objects will be returned from callproc containing each of the arguments passed in. If any of the arguments passed are "out" arguments, they will have been modified to hold the results. In my case the seventh argument (items[6] --- zero indexed) returned a cursor. This cursor is then used to retrieve the result set.

I had trouble finding a good example of this usage pattern for cx_Oracle on the net, so here's one for the next lucky guy or girl who is digging for this answer.

Friday, July 30, 2010

OpenGL Intellivision Man

Not long ago I set out to turn an animated gif I found of the Intellivision Running Man into an OpenGL generated video clip. My goal was to shrink the images into tiny bitmaps, a.k.a. 8-bit, and then use the resulting bitmap data as positioning information within a 3D space. Another way of saying it: I wanted to render each pixel from the original images as a 3D block. Jump to the video to get a better idea about what I mean:

http://www.youtube.com/watch?v=E1rXt0l2pxY

Along the way I took a few turns. I never did reduce the image data to its smallest possible representation. I did reduce the image size for each image to 24x24 pixels and I converted the image data to grayscale. Using the handy PIL library, I was able to pull in the bitmap file data as a tuple of pixel values. I used each "pixel" to offset the drawing position in pyglet's on_draw handler.

The code I used to render the animation is a hack of an example found here:

http://code.google.com/p/pyglet-hene/source/browse/trunk/

This is a fairly ugly hack, meaning I didn't work to make the code beautiful. It's just the original code, chopped, altered and enhanced as needed to accomplish what I wanted to see rendered on the screen.

To run this code, an image directory named "data" containing a series of bitmaps needs to be provided (it isn't included here). My directory contains this:


ls -ltr
total 32
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 9.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 8.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 6.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 5.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 4.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 3.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 2.bmp
-rw-r--r-- 1 dvenable dvenable 1654 2010-07-20 14:06 1.bmp


If you are so inclined to try your own little experiment, find a favorite animated gif on the web, extract each frame, reduce size, convert to grayscale and drop your own images in a directory similar to the one I created here. Your animation result should be similar to what I've produced here.
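
If you're curious what that preprocessing looks like in code, here's a rough sketch using PIL; it's not the exact script I used, and the gif filename and output directory are placeholders:

from PIL import Image

# Placeholder input; substitute your own animated gif.
src = Image.open('running_man.gif')

frame = 0
while True:
    try:
        src.seek(frame)            # jump to the next frame in the gif
    except EOFError:
        break                      # no more frames
    # Grayscale, shrink to 24x24, and write a numbered bitmap
    # into the (already existing) data directory.
    small = src.convert('L').resize((24, 24))
    small.save('data/%d.bmp' % (frame + 1))
    frame += 1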

And here's the complete source:


from pyglet.gl import *
import pyglet
from pyglet.window import *
from pyglet import image
import os
from PIL import Image
import glob
from math import sin, cos

window = pyglet.window.Window(width=640, height=480, resizable=True)

y=0.0
x=-10.0
z=10.0
xspeed = 0.5
yspeed = 0.0
lx=ly=0
lz=-2
angle=ratio=0.0

boxcol = [ [1.0, 0.0, 0.0], # bright: red
[1.0, 0.5, 0.0], # orange
[1.0, 1.0, 0.0], # yellow
[0.0, 1.0, 0.0], # green
[0.0, 1.0, 1.0], # blue
]

# Dark: red, orange, yellow, green ,blue
topcol =[ [0.6, 0.0, 0.0],
[0.6, 0.25, 0.0],
[0.6, 0.6, 0.0],
[0.0, 0.6 ,0.0],
[0.0, 0.6, 0.6]]



box = None # display list storage
top = None #display list storage

yloop = None # loop for y axis
xloop = None # loop for x axis

bmpdata = None
nextimg = 0
files = None

def load_image_data():
    global bmpdata, bmpdatalen, files

    files = glob.glob('data/*.bmp')
    files.sort()
    bmpdata = map(lambda x: Image.open(x).getdata(), files.__iter__())
    bmpdatalen = len(bmpdata)


def build_lists():
    global box, top
    box = glGenLists(2)

    glNewList(box, GL_COMPILE) # new compiled box display list

    # draw the box without the top (it will be stored in the display list
    # and will not appear on the screen)
    glBegin(GL_QUADS)

    # front face
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, 1.0)
    glTexCoord2f(1.0, 0.0); glVertex3f(1.0, -1.0, 1.0)
    glTexCoord2f(1.0, 1.0); glVertex3f(1.0, 1.0, 1.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, 1.0, 1.0)
    # back face
    glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, -1.0)
    glTexCoord2f(1.0, 1.0); glVertex3f(-1.0, 1.0, -1.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(1.0, 1.0, -1.0)
    glTexCoord2f(0.0, 0.0); glVertex3f(1.0, -1.0, -1.0)
    # right face
    glTexCoord2f(1.0, 0.0); glVertex3f(1.0, -1.0, -1.0)
    glTexCoord2f(1.0, 1.0); glVertex3f(1.0, 1.0, -1.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(1.0, 1.0, 1.0)
    glTexCoord2f(0.0, 0.0); glVertex3f(1.0, -1.0, 1.0)
    # left face
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, -1.0, -1.0)
    glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, 1.0)
    glTexCoord2f(1.0, 1.0); glVertex3f(-1.0, 1.0, 1.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, 1.0, -1.0)
    glEnd()

    glEndList() # Done building the list

    top = box + 1

    glNewList(top, GL_COMPILE) # new compiled top display list
    # Top face
    glBegin(GL_QUADS)
    glTexCoord2f(0.0, 1.0); glVertex3f(-1.0, 1.0, -1.0)
    glTexCoord2f(0.0, 0.0); glVertex3f(-1.0, 1.0, 1.0)
    glTexCoord2f(1.0, 0.0); glVertex3f(1.0, 1.0, 1.0)
    glTexCoord2f(1.0, 1.0); glVertex3f(1.0, 1.0, -1.0)

    glTexCoord2f(1.0, 1.0); glVertex3f(-1.0, -1.0, -1.0)
    glTexCoord2f(0.0, 1.0); glVertex3f(1.0, -1.0, -1.0)
    glTexCoord2f(0.0, 0.0); glVertex3f(1.0, -1.0, 1.0)
    glTexCoord2f(1.0, 0.0); glVertex3f(-1.0, -1.0, 1.0)
    glEnd()
    glEndList()



def load_gl_textures():
    # load bitmaps and convert to textures
    global texture, texture_file, texture_surf
    #texture_file = os.path.join('data', 'cube.bmp')
    texture_file = files[nextimg]
    texture_surf = image.load(texture_file)
    texture = texture_surf.get_texture()
    glBindTexture(GL_TEXTURE_2D, texture.id)

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)



def init():
    """
    Pyglet oftentimes calls this setup()
    """
    glEnable(GL_TEXTURE_2D)

    load_image_data()
    load_gl_textures()
    build_lists()

    glShadeModel(GL_SMOOTH) # Enables smooth shading
    glClearColor(0.0, 0.0, 0.0, 0.0) # Black background

    glClearDepth(1.0) # Depth buffer setup
    glEnable(GL_DEPTH_TEST) # Enables depth testing
    glDepthFunc(GL_LEQUAL) # The type of depth test to do

    glEnable(GL_LIGHT0) # quick and dirty lighting

    #glEnable(GL_LIGHTING) # enable lighting
    glEnable(GL_COLOR_MATERIAL) # enable coloring

    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST) # Really nice perspective calculations



@window.event
def on_draw():
    global nextimg, bmpdata, x, y, z, lx, ly, lz

    # Here we do all the drawing
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)

    # Select the texture
    load_gl_textures()
    #glBindTexture(GL_TEXTURE_2D, texture.id)

    xloop = 1
    yloop = 1

    mandata = bmpdata[nextimg]
    for idx in range(0, len(mandata)):
        if (idx+1) % 24 == 0:
            yloop += 1
            xloop = 1
        else:
            xloop += 1
        if mandata[idx] < 100:

            glLoadIdentity() # reset our view
            gluLookAt(x, y, z, x+lx, y+ly, z+lz, 0.0, 1.0, 0.0)

            glTranslatef(xloop*1.8 - 30,
                         28 - yloop*2.4,
                         -60.0)
            glColor3f(*boxcol[xloop % 4]) # select a box color
            glCallList(box) # draw the box

            glColor3f(*topcol[1])
            glCallList(top) # draw the top

    return pyglet.event.EVENT_HANDLED

def moveMeFlat(direction):
    global x, z, y, lx, lz, ly
    x = x - direction*(lx)*0.75
    y = y + direction*(ly)*0.5
    z = z + direction*(lz)*0.5

def orientMe(ang):
    global lx, lz
    lx = sin(ang)
    lz = -cos(ang)


def update(dt):
    global z, angle
    angle += 0.005
    orientMe(angle)
    moveMeFlat(0.5)

def update2(dt):
    global nextimg
    if nextimg < bmpdatalen-1:
        nextimg += 1
    else:
        nextimg = 0


pyglet.clock.schedule_interval(update2, .1)
pyglet.clock.schedule(update)

@window.event
def on_resize(width, height):
    if height == 0:
        height = 1
    glViewport(0, 0, width, height)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()

    # Calculate the aspect ratio of the window
    gluPerspective(45.0, 1.0*width/height, 0.1, 100.0)

    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()
    return pyglet.event.EVENT_HANDLED

init()

pyglet.app.run()

Tuesday, July 20, 2010

Cross-compile ActiveMQ-cpp on Centos 5

To date an RPM for activemq-cpp is not available for Centos 5. I found RPMs in Fedora 13 and 14 repositories, but due to dependencies on fresher libs, they won't install.

Building for 32 bit architecture on 64 bit Centos 5 was a bit tricky. Here's the recipe I came up with that worked for me:

Install dependencies:


yum install expat-devel zlib-devel uuid-c++-devel openssl-devel


Download sources from here or other mirrors:


wget http://www.carfab.com/apachesoftware/activemq/activemq-cpp/source/activemq-cpp-library-3.2.1-src.tar.gz
wget http://mirror.candidhosting.com/pub/apache/apr/apr-1.4.2.tar.gz


Extract the archives and build APR first. Note PKG_CONFIG_PATH, as this is one of the keys to ensuring that lib64 libraries are not found during the link step.


[root@myserver] cd apr-1.4.2
[root@myserver] ./configure --prefix=/usr --libdir=/usr/lib CXXFLAGS="-m32" LDFLAGS="-m32" CFLAGS="-m32" --build=i686-redhat-linux-gnu PKG_CONFIG_PATH=/usr/lib/pkgconfig
[root@myserver] make
[root@myserver] make install


Next build and install ActiveMQ-cpp:


[root@myserver] cd ../activemq-cpp-library-3.2.1
[root@myserver] ./configure --prefix=/usr --libdir=/usr/lib CXXFLAGS="-m32" LDFLAGS="-m32" CFLAGS="-m32" --build=i686-redhat-linux-gnu PKG_CONFIG_PATH=/usr/lib/pkgconfig --with-apr=../apr-1.4.2/apr-1-config
[root@myserver] make
[root@myserver] make install

Tuesday, July 13, 2010

qemu-img

qemu-img is a handy tool for converting one virtual machine image format into another.


qemu-img convert windowsxp.img -O vdi windowsxp.vdi


I had need for its use after running into errors creating a virtual machine with virt-manager. KVM managed by virt-manager has been my recent virtualization solution of choice, that is, until last Friday when I hit a snag creating a new virtual machine for WindowsXP. The installation process got stuck and I wasn't able to recover. Following advice from somewhere in the net-realm, I opted to create the image using this easy command-line two-liner.


dd if=/dev/zero of=vm8.img bs=2048k count=12000
kvm -m 512 -cdrom winxp.iso -boot d vm8.img


The installation was successful and running the image from the command line using kvm was a snap, but what I really wanted was to place this image under the control of virt-manager.

I couldn't find a nice option to import an existing image, so if it exists it is not very intuitive. Apparently to place it under virt-manager's control I would need to create an xml file under /etc/libvirt/qemu/ and in addition I would probably need to convert the image from raw format to qcow2. But after messing around with this for too long, a colleague recommended I use virtualbox.

I used virtualbox in the past, so I figured a different approach might be worthwhile and should get me beyond the headache in the short term. But what about the work I did creating the original image? Installing WindowsXP is a hassle, not to mention the effort I'd put into installing subsequent software on the image.

And that's where qemu-img came in. VirtualBox reads VDI-formatted images, so I converted my raw image to VDI.

Updated: FAIL

Importing the new VDI was easy enough, but once I created the new virtual machine and started it...I got this error from VirtualBox OSE:

"Failed to start the virtual machine windowsxp. VirtualBox can't operate in VMX root mode. Please disable the KVM kernel extension, recompile your kernel and reboot."

Okay...but I will silently resent you.

Updated: Original image hung after removing KVM. At this point I'm going to throw up my hands and create a new image and fresh install. I'm at the point of diminishing returns...

Wednesday, July 07, 2010

Going really 8-bit

Back when I was a kid, my nerd friends and I would create graphics for our Commodore computers by filling in squares on graph paper and calculating the bitmaps. A single character on the Commodore 64 and Vic 20 was an 8x8 grid.

I recently found an 8-bit image fondly remembered from my youth, the Intellivision Running Man, in animated GIF format. I converted the GIF to a short video, which I embedded in a video project.

Some of my video conversion notes were covered in another post and some are on my wiki.

I want to extract the Intellivision Running Man bitmaps for use in an opengl project that I'm thinking about doing. Though it would probably be simpler for me to just sit down with pen and paper to graph and calculate the bitmaps, I'm hoping to find tools that will allow me to extract the data directly from raw video.

I first needed to reduce the screen resolution as much as possible by shrinking the video and dropping frames. Here I shrank the video output to 24x24 and reduced the frame rate to 4 frames per second.

ffmpeg -i intel.avi -r 4 -s 24x24 intel8bit.avi

Next I extracted individual bitmap images:

ffmpeg -i intel8bit.avi -f image2 foo_%5d.bmp

I initially took one image into Gimp and used Desaturate to convert it to grayscale, and then used Levels to reduce the image to pure black or white. But I didn't want to repeat this again and again, so I wrote a small Python script that achieved the same goal, and along the way I got to play with the PIL library, which I hadn't used before.


from PIL import Image
import sys, os

def bw(pt):
    if pt > 126:
        return 255
    else:
        return 0

for infile in sys.argv[1:]:
    f, e = os.path.splitext(infile)
    outfile = f + "_mod" + e

    im = Image.open(infile)

    im = im.convert('L')
    out = im.point(bw)

    out.save(outfile)
    print outfile


This little script loads a bitmap then uses convert() with the 'L' option to make it grayscale. (I couldn't find a comprehensive list of options for convert---the documentation could use some work.)

Finally, I used the point method, which passes each pixel in the bitmap data to a custom function. Your function can do anything; mine just looks at the value, turning lighter grays absolute white and darker grays absolute black. point() returns a copy of the image (headers updated and intact) with the transformed result, which is then saved.

At this point I have true black and white bitmaps.

I wanted to quickly visualize my bitmap, to get a feel for how much more I want to reduce the size of my bitmaps. This little script renders my images to screen as a series of X's.


from PIL import Image
import sys, os


for infile in sys.argv[1:]:

    im = Image.open(infile)
    data = im.getdata()
    for idx in range(0, len(data)):
        i = data[idx]
        if i == 255:
            print 'X',
        else:
            print ' ',

        if (idx+1) % 24 == 0:
            print ''



The output looks like this:


devin@studio:~/src/py$ python showdata.py /home/devin/Video/bitmaps/foo_00041_mod.bmp

X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X
X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X X
X X X X X X X X X X X X X X X X X X X X X X X X

Friday, June 25, 2010

Animated gif to video

This little recipe worked for the basic job of converting the gif to an avi.


mplayer animated.gif -vo yuv4mpeg
ffmpeg -i stream.yuv -an -r 24 -b 640 -s vga -aspect 4:3 test.avi


However I wanted to loop the gif, but ffmpeg's -loop_input and -loop_output had no effect. I decided to make a copy and cat the files together a few times...but this didn't work as expected. (Save yourself time and don't copy this section!)


mencoder -oac copy -ovc copy -forceidx test.avi test2.avi -o test.avi
mencoder -oac copy -ovc copy -forceidx test.avi test2.avi -o test.avi
mencoder -oac copy -ovc copy -forceidx test.avi test2.avi -o test.avi


Tried this with no luck:


mplayer -loop 10 test.avi -vo yuv4mpeg


Finally found a good way:


avimerge -i test.avi test2.avi test3.avi -o mergedfile.avi


Sweet!

Wednesday, June 23, 2010

Logitech webcam issues with Ubuntu 10.04 LTS

Update:

Fixed (well, a workaround): setting the environment variable LD_PRELOAD prior to running the video program solves the issues on my box.

$ LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so cheese


---Original Text follows---

cheese and other video programs have been working rather poorly on my 64-bit AMD Ubuntu Studio install.

Errors like the following show up:


libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffd9
libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffec
libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffff
libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000ffd9
libv4lconvert: Error decompressing JPEG: unknown huffman code: 0000fffd


Rebuilding the latest drivers from linuxtv.org didn't help. I'm still looking for a solution.

Note: Same camera produces same error on a second computer running same version of Ubuntu.

lsusb

Bus 005 Device 004: ID 046d:089d Logitech, Inc. QuickCam E2500 series

gstreamer-properties is a great test tool for this kind of thing.


I'm going to log a bug on this one.

Sunday, June 20, 2010

New RSS feed

If you are feeding this blog into Google Reader, update your settings and pull from here instead:

http://www.devinvenable.com/rss/feed.xml

I pull my activity from several sources (including blogger.com) to display a nicely formatted aggregate of my online activity on my home page. But it hasn't been possible to subscribe to the aggregate until now. I use Universal Feed Parser to combine my various feeds and had hoped that there would be a nice method to easily turn the parsed values back into serialized ATOM. Poking around, I didn't find any such method, so I used PyRSS2Gen to pack the parsed dictionary back into RSS. It probably would have been just as easy to load an XML library and just write out XML, but I didn't want to have to read specifications or, really, do much work on this. Still took me at least an hour of my life, but that's not so bad. If it wasn't kind of fun I wouldn't be monkeying with it on my day off!
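
For the curious, the approach boils down to something like the sketch below. This is not my actual script; the feed URLs, titles, and output path are placeholders:

import datetime

import feedparser
import PyRSS2Gen

# Placeholder sources; my real list includes blogger.com and other services.
FEEDS = ['http://example.com/feed1.xml', 'http://example.com/feed2.xml']

items = []
for url in FEEDS:
    parsed = feedparser.parse(url)
    for e in parsed.entries:
        when = e.get('published_parsed') or e.get('updated_parsed')
        items.append(PyRSS2Gen.RSSItem(
            title=e.get('title', ''),
            link=e.get('link', ''),
            description=e.get('summary', ''),
            pubDate=datetime.datetime(*when[:6]) if when else None))

rss = PyRSS2Gen.RSS2(
    title='Aggregated activity',
    link='http://www.devinvenable.com/',
    description='Combined feed',
    lastBuildDate=datetime.datetime.now(),
    items=items)

rss.write_xml(open('feed.xml', 'w'))

Sorting the combined items by pubDate before handing them to RSS2 would keep the aggregate in chronological order.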

Friday, June 04, 2010

Allow Sendmail on Centos to accept connections on port 25

Need to use sendmail as an MTA? It's running but your box is not accepting incoming connections on port 25?

Perhaps you need to do this:


The default sendmail.cf file does not allow Sendmail to accept network connections from any host other than the local computer. To configure Sendmail as a server for other clients, edit the /etc/mail/sendmail.mc file, and either change the address specified in the Addr= option of the DAEMON_OPTIONS directive from 127.0.0.1 to the IP address of an active network device or comment out the DAEMON_OPTIONS directive all together by placing dnl at the beginning of the line. When finished, regenerate /etc/mail/sendmail.cf by executing the following command:



m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf


Then restart the service.

Wednesday, June 02, 2010

A little something I need to remember...

To turn off expandtab for editing makefiles, put the following in your vimrc:

autocmd FileType make setlocal noexpandtab

Sunday, May 30, 2010

Shrink PDF linux (command line)

gs -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 -dPDFSETTINGS=/screen -dNOPAUSE -dQUIET -sPAPERSIZE=a4 -dBATCH -sOutputFile=output.pdf input.pdf

Thursday, May 20, 2010

Just for fun


.file "tiny.cpp"
.local _ZStL8__ioinit
.comm _ZStL8__ioinit,1,1
.section .rodata
.LC0:
.string "Hello World"
.text
.globl main
.type main, @function
main:
.LFB957:
.cfi_startproc
.cfi_personality 0x0,__gxx_personality_v0
pushl %ebp
.cfi_def_cfa_offset 8
movl %esp, %ebp
.cfi_offset 5, -8
.cfi_def_cfa_register 5
andl $-16, %esp
subl $16, %esp
movl $.LC0, 4(%esp)
movl $_ZSt4cout, (%esp)
call _ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc
movl $_ZSt4endlIcSt11char_traitsIcEERSt13basic_ostreamIT_T0_ES6_, 4(%esp)
movl %eax, (%esp)
call _ZNSolsEPFRSoS_E
movl $0, %eax
leave
ret
.cfi_endproc
.LFE957:
.size main, .-main
.type _Z41__static_initialization_and_destruction_0ii, @function
_Z41__static_initialization_and_destruction_0ii:
.LFB966:
.cfi_startproc
.cfi_personality 0x0,__gxx_personality_v0
pushl %ebp
.cfi_def_cfa_offset 8
movl %esp, %ebp
.cfi_offset 5, -8
.cfi_def_cfa_register 5
subl $24, %esp
cmpl $1, 8(%ebp)
jne .L5
cmpl $65535, 12(%ebp)
jne .L5
movl $_ZStL8__ioinit, (%esp)
call _ZNSt8ios_base4InitC1Ev
movl $_ZNSt8ios_base4InitD1Ev, %eax
movl $__dso_handle, 8(%esp)
movl $_ZStL8__ioinit, 4(%esp)
movl %eax, (%esp)
call __cxa_atexit
.L5:
leave
ret
.cfi_endproc
.LFE966:
.size _Z41__static_initialization_and_destruction_0ii, .-_Z41__static_initialization_and_destruction_0ii
.type _GLOBAL__I_main, @function
_GLOBAL__I_main:
.LFB967:
.cfi_startproc
.cfi_personality 0x0,__gxx_personality_v0
pushl %ebp
.cfi_def_cfa_offset 8
movl %esp, %ebp
.cfi_offset 5, -8
.cfi_def_cfa_register 5
subl $24, %esp
movl $65535, 4(%esp)
movl $1, (%esp)
call _Z41__static_initialization_and_destruction_0ii
leave
ret
.cfi_endproc
.LFE967:
.size _GLOBAL__I_main, .-_GLOBAL__I_main
.section .ctors,"aw",@progbits
.align 4
.long _GLOBAL__I_main
.weakref _ZL20__gthrw_pthread_oncePiPFvvE,pthread_once
.weakref _ZL27__gthrw_pthread_getspecificj,pthread_getspecific
.weakref _ZL27__gthrw_pthread_setspecificjPKv,pthread_setspecific
.weakref _ZL22__gthrw_pthread_createPmPK14pthread_attr_tPFPvS3_ES3_,pthread_create
.weakref _ZL20__gthrw_pthread_joinmPPv,pthread_join
.weakref _ZL21__gthrw_pthread_equalmm,pthread_equal
.weakref _ZL20__gthrw_pthread_selfv,pthread_self
.weakref _ZL22__gthrw_pthread_detachm,pthread_detach
.weakref _ZL22__gthrw_pthread_cancelm,pthread_cancel
.weakref _ZL19__gthrw_sched_yieldv,sched_yield
.weakref _ZL26__gthrw_pthread_mutex_lockP15pthread_mutex_t,pthread_mutex_lock
.weakref _ZL29__gthrw_pthread_mutex_trylockP15pthread_mutex_t,pthread_mutex_trylock
.weakref _ZL31__gthrw_pthread_mutex_timedlockP15pthread_mutex_tPK8timespec,pthread_mutex_timedlock
.weakref _ZL28__gthrw_pthread_mutex_unlockP15pthread_mutex_t,pthread_mutex_unlock
.weakref _ZL26__gthrw_pthread_mutex_initP15pthread_mutex_tPK19pthread_mutexattr_t,pthread_mutex_init
.weakref _ZL29__gthrw_pthread_mutex_destroyP15pthread_mutex_t,pthread_mutex_destroy
.weakref _ZL30__gthrw_pthread_cond_broadcastP14pthread_cond_t,pthread_cond_broadcast
.weakref _ZL27__gthrw_pthread_cond_signalP14pthread_cond_t,pthread_cond_signal
.weakref _ZL25__gthrw_pthread_cond_waitP14pthread_cond_tP15pthread_mutex_t,pthread_cond_wait
.weakref _ZL30__gthrw_pthread_cond_timedwaitP14pthread_cond_tP15pthread_mutex_tPK8timespec,pthread_cond_timedwait
.weakref _ZL28__gthrw_pthread_cond_destroyP14pthread_cond_t,pthread_cond_destroy
.weakref _ZL26__gthrw_pthread_key_createPjPFvPvE,pthread_key_create
.weakref _ZL26__gthrw_pthread_key_deletej,pthread_key_delete
.weakref _ZL30__gthrw_pthread_mutexattr_initP19pthread_mutexattr_t,pthread_mutexattr_init
.weakref _ZL33__gthrw_pthread_mutexattr_settypeP19pthread_mutexattr_ti,pthread_mutexattr_settype
.weakref _ZL33__gthrw_pthread_mutexattr_destroyP19pthread_mutexattr_t,pthread_mutexattr_destroy
.ident "GCC: (Ubuntu 4.4.3-4ubuntu5) 4.4.3"
.section .note.GNU-stack,"",@progbits

Tuesday, May 04, 2010

Inversion of Control pattern

I'm not a huge fan of the so-called Inversion of Control pattern. One blogger at theburningmonk.com writes, "You have Macaroni code when your application is chopped up into many little pieces and it's difficult to see the big picture which may exist only in your (or some one else's!) head."

Another blogger says, "In software design, you can often end up with Macaroni code when you overuse/misuse/abuse abstractions, and it's one of the main dangers of using Inversion of Control."

This echoes my own thoughts on the matter. It's all well and good to work toward code reuse, and encapsulation is a worthy goal. In my mind the DRY principle is the most important thing. Many ills of poor software design can be cured simply by keeping the code compact, elegant and devoid of redundancy, regardless of whether the code is procedural, object-oriented, or tiny atomic classes bound together by an XML configuration file.

Procedural code gets a bad rap, as if code is inferior simply because it has a well-defined entry and exit point. It's a beautiful thing when a debugger can be used to examine a code path from end to end. It's beautiful when you can see the whole picture with your own eyes on a single page. No diagrams needed to comprehend it...no scrolling through hundreds of lines of XML to find the relationships. It's all right there in the source code where it, very often, belongs.

Wednesday, April 28, 2010

Modes matter for password-less login

I typically set up keys to allow myself password-less access to the remote development servers I use all the time. Today the typical ssh-keygen/deploy-public-key routine didn't work as expected. After deploying my public key to the remote's authorized_keys, I was still getting prompted for a login.

Found this in /var/log/secure.


Apr 28 12:51:35 theserver sshd[16285]: Authentication refused: bad ownership or modes for directory /home/theuser


It turns out that the failure was due to the user's group permissions on the remote machine for two important folders. Both the home folder and the .ssh folder had the following permissions:


drwxrwx--- 36 theuser thegroup 4096 Apr 28 11:56 ..


chmod 700 for both /home/theuser and /home/theuser/.ssh fixed the problem.

Monday, April 26, 2010

Tulsa Developers tied to vendor technology

From time to time I take a look around to see what kind of programming talent is available in the Tulsa area. Most of what I find is tied to Oracle/Sun (Java) or Microsoft.

It's no secret that Tulsa has been heavy into Microsoft technology for years now. The community colleges and trade schools teach it, the recruiters can get their heads around it, and many conservative businesses would rather go with a name they know.

I remember a Williams employee steering me away from my Borland c++ compiler many moons ago, assuring me that Visual c++ was the future.

Well it was and it wasn't. I embraced Visual c++ and talked my first (serious) employer into letting me use it to write an application for a Phillips petroleum project. That work experience propelled me to my next development job, one I kept for almost a decade.

I wrote a heck of a lot of code using Visual c++ and the MFC framework, until it became my job to port the code to Unix flavors. For days, weeks, months I chased the not-quite-regular constants, the libraries that were similar but rarely identical to the standards. It was then that I started going cold on Microsoft.

I moved away from Microsoft development tools for a lot of reasons including a strong preference for open-source. There's so much free and truly open support on the web. When combined with Linux, open-source makes it possible to examine every nook and cranny of source code down to the kernel level.

Why would new developers gravitate to vendor tools when


  • they can't examine what is going on under the covers,

  • they can't control how long the technology will be supported before the vendor deprecates (or abandons) it in favor of new vendor tools,

  • they can't really have any substantial influence over the evolution of the technology?



I was thinking about these questions as I reviewed search results for "tulsa developers". It turns up these sites and not much more.

www.tulsadnug.org

tulsajava.com

Tulsa seems to have a flourishing Microsoft and Java community. Who is representing everything else that's happening in computer science?

According to Tiobe.com, the most popular programming language is Java. Microsoft-based technology doesn't rank until the fifth position, and then it falls off except for position eight. In Tulsa, I'm sure Visual Basic and C# would contend for two of the first three slots.

1. Java - 19.1%
2. C - 15.2%
3. C++ - 10.1%
4. PHP - 8.7%
5. Visual Basic - 8.4%
6. Perl - 6.2%
7. Python - 3.8%
8. C# - 3.7%
9. JavaScript - 3.1%
10. Ruby - 2.6%
11. Delphi - 2.1%

We all know that Tulsa, Oklahoma is a conservative city and not well known for risk taking. And perhaps there is more to the story than a simple web query immediately reveals. Ping.fm is Tulsa-based. Python-friendly Vidoop was founded here, though they relocated and then folded. Perhaps they should have stuck around.

Wednesday, April 21, 2010

Switch Primary Monitor in Ubuntu 10.04

The default Monitor Preferences dialog, while sweet, does not allow you to make your secondary monitor your primary. This is a problem if you want the top and bottom Ubuntu Panels displayed on the secondary monitor. Thankfully I found this sweet little script.


#!/bin/sh
#
# Change Primary Monitor for Gnome
# ver 1.0
#
# Copyright (c) 2010 michal@post.pl
#
# This file is free software. You can redistribute it
# and/or modify it under the terms of the GNU
# General Public License (GPL) as published by
# the Free Software Foundation, in version 3.
# It works for me. I hope it works for you as well.
# NO WARRANTY of any kind.
#


# get list of top-level gnome panels
getTopPanels() {
gconftool-2 --all-dirs /apps/panel/toplevels
}

# get monitor number for this panel
getMonitor() {
local PANEL=$1
gconftool-2 --get $PANEL/monitor
}

# set monitor to display on for given top-level panel
setMonitor() {
local PANEL=$1
local NEW=$2
gconftool-2 --set --type int $PANEL/monitor $NEW
}

# return number of connected monitors
getConnectedMonitors() {
xrandr --query | grep -c '^.* connected'
}

# compute next monitor
nextMonitor() {
# number of monitors
local CURRENT=$1
local MONITORS=$2
awk 'BEGIN{ print ('$CURRENT' + 1) % '$MONITORS'; }'
}

# logging function
log() {
echo $@ 1>&2
}

# main logic below #############

MONITORS=`getConnectedMonitors`
log "Detected $MONITORS connected monitors"

getTopPanels | while read PANEL
do
MONITOR=`getMonitor $PANEL`
NEW=`nextMonitor $MONITOR $MONITORS`
log "Panel $PANEL is displayed on $MONITOR. Switching to monitor $NEW."
setMonitor $PANEL $NEW
done

Thursday, April 15, 2010

mod_security setup on Centos 5.4

Enable the EPEL repository.


rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-3.noarch.rpm


Install via yum.


yum install mod_security


This will load your basic mod_security configuration including the core rules.

Next I had to set SecDataDir in the config. This was not initially set and errors in the following form appeared in the log file.


ModSecurity: Unable to retrieve collection (name "", key ""). Use SecDataDir to define data directory first.


I fixed this by setting SecDataDir in the config and creating a directory for that purpose, making sure to give apache permission to use it.


vim /etc/httpd/modsecurity.d/modsecurity_crs_10_config.conf
( Added SecDataDir /usr/local/apache/modsec_data )
mkdir /usr/local/apache
mkdir /usr/local/apache/modsec_data
chown apache:apache /usr/local/apache/modsec_data
chown apache:apache /usr/local/apache


After a restart mod_security successfully began applying rules, but rather than blocking problem requests (my intention) it merely logged warnings. I changed the SecDefaultAction in /etc/httpd/modsecurity.d/modsecurity_crs_10_config.conf from


SecDefaultAction "phase:2,pass"


to


SecDefaultAction "phase:2,deny,log,status:403"






Tuesday, March 30, 2010

Trouble using sockets in Django view

I ran into an issue while writing socket communication in code that was called by a Django view handler. After a Twisted-based and socket-based solution failed mysteriously, I discovered a telnet-based solution that worked.
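
The working approach looked roughly like this minimal sketch (Python 2 telnetlib; the host, port, and payload are placeholders, not the values from my setup):

import telnetlib

def send_command(host, port, payload):
    # Open a telnet-style connection instead of using the socket module
    # directly, which is what kept failing inside the Django view.
    tn = telnetlib.Telnet(host, port)
    try:
        tn.write(payload + "\r\n")
        return tn.read_until("\n", 5)   # read one line of response, 5s timeout
    finally:
        tn.close()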

You can read the details here:

http://wiki.devinvenable.com/mediawiki/index.php/Interesting_socket_behavior_exhibited_while_processing_Django_view

Comments or insights as to the cause of the problem behavior would be helpful.

Thursday, March 25, 2010

git merge

I used to work with a guy named Jay who, when encountering a particularly complex or difficult programming challenge, would state aloud, "Now that's powerful."

git is a powerful tool. It really is. But, my God, it sure has a way with obscure error messages.

At some point you won't be able to pull changes from the master (I want to call it HEAD, thinking in CVS/SVN terms) because you are in the middle of a conflicted merge. You'll know this because you'll see this message:


You are in the middle of a conflicted merge.


Unlike svn update, which pulls the latest changes, automatically merges what it can, and inserts diff markers where it can't, git makes you merge the changes manually. You can open the diff tool with this command:


git mergetool


mergetool doesn't actually work out the diffs---you do. It opens a default diff editor for each file in need of merging, which may be vimdiff if nothing else is installed on your system. You compare the differences, edit to make any changes, and save the file. mergetool then asks you if the diff is complete, and if so, opens the next file. This continues until you have resolved all conflicts.

By the way, mergetool works with a number of different diff tools. See the man page for a complete list. I went with meld, the gnome diff editor. Once installed via apt-get, mergetool opened meld without any additional configuration. Not quite sure how it knew which one I wanted, but it guessed and got it right.

So now you can pull the latest changes, right? Not so fast. You might see an error like this:


fatal: You have not concluded your merge. (MERGE_HEAD exists)


Ah, what does it all mean? I find this to be a common thought when using git. Clemens Buchacher had the answer:


The following is an easy mistake to make for users coming from version
control systems with an "update and commit"-style workflow.

1. git merge
2. resolve conflicts
3. git pull, instead of commit


Yes, that would be me: a fellow used to the "update and commit" style workflow.

For my case, I just had to commit once and then I could, finally, pull changes I had committed on another server.

Monday, March 08, 2010

Automounting removable eSata drive on Centos 5.4

So my boss says, "I don't get it. When I have a gnome desktop open on the Centos box, my new eSata II removable mounts. But if I don't have a desktop open it doesn't get mounted when I reboot the box."

I ask The Google about it, and it tells me:

http://wiki.centos.org/TipsAndTricks/HAL

But try as I might, gnome-volume-manager fails to detect and mount the eSata drive. The idea of running gnome daemons when in fact I'm working with a server in which gnome is rarely utilized (except for when my boss runs a remote desktop) gives me pause. Hmmm, I think, wouldn't this be a good job for udev rules?

Running dmesg tells me that the device is present and that it shows up as /dev/sdb1. (Or you can tail /var/log/messages.) Running the following gives me a list of attributes that are visible to udev for my device.


# udevinfo -a -p $(udevinfo -q path -n /dev/sdb)


With this I see several attributes which can be used for matching when the udev rules are applied. I take the following two:


SYSFS{rev}=="ST6O"
SYSFS{model}=="Hitachi HDT72101"


I want to mount the block device as /media/removable each time the device is plugged in. I also want to unmount it when the device is removed. The following worked for me.


ACTION=="add",SYSFS{rev}=="ST6O",SYSFS{model}=="Hitachi HDT72101",KERNEL=="sd?1",NAME="REMOVABLE", RUN+="/bin/mount /dev/REMOVABLE /media/removable"
ACTION=="remove",SYSFS{rev}=="ST6O",SYSFS{model}=="Hitachi HDT72101",RUN+="/bin/umount /media/removable"


For more information, refer to this helpful page.

http://reactivated.net/writing_udev_rules.html

Monday, February 22, 2010

strptime

Can't find strptime in datetime because you're working in a Python 2.4 environment?


Python 2.4.3 (#1, Jul 27 2009, 17:56:30)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-44)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from datetime import datetime
>>> d = datetime.strptime("21-JAN-08", "%d-%b-%y")
Traceback (most recent call last):
File "", line 1, in ?
AttributeError: type object 'datetime.datetime' has no attribute 'strptime'


strptime was added to datetime in 2.5. Prior to 2.5 you pull it from the typically lower-level library 'time'.


>>> import time
>>> time.strptime("21-JAN-08", "%d-%b-%y")
(2008, 1, 21, 0, 0, 0, 0, 21, -1)


Ah, there it is. Of course, you wanted a datetime, didn't you?


>>> t = time.strptime("21-JAN-08", "%d-%b-%y")
>>> datetime(*t[0:6])
datetime.datetime(2008, 1, 21, 0, 0)

Tuesday, February 09, 2010

mod_mono, Centos 5 64 bit, and SELinux Part 2

The trick to creating a SELinux policy is setting the mode to be permissive, which prevents nothing but logs all of the infractions to audit.log, and then using the log to generate the policy. After running my mod_mono based application for a bit in permissive mode, I used this command to generate a local policy.



egrep 'http|mono' /var/log/audit/audit.log | audit2allow -M myhttp


Here is the result:



module myhttp 1.0;

require {
type httpd_tmp_t;
type device_t;
type initrc_t;
type httpd_t;
type httpd_sys_script_t;
type http_port_t;
type port_t;
type inotifyfs_t;
class process { execstack execmem getsched ptrace };
class unix_stream_socket connectto;
class chr_file { read write ioctl };
class tcp_socket name_connect;
class file execute;
class sem { unix_read write unix_write associate read destroy };
class shm { unix_read read write unix_write associate };
class dir read;
}

#============= httpd_sys_script_t ==============
allow httpd_sys_script_t http_port_t:tcp_socket name_connect;
allow httpd_sys_script_t httpd_tmp_t:file execute;
allow httpd_sys_script_t inotifyfs_t:dir read;
allow httpd_sys_script_t self:process { execmem getsched ptrace };
allow httpd_sys_script_t self:sem { unix_read write unix_write associate read destroy };

#============= httpd_t ==============
allow httpd_t device_t:chr_file { read write ioctl };
allow httpd_t httpd_sys_script_t:unix_stream_socket connectto;
allow httpd_t initrc_t:shm { unix_read read write unix_write associate };
allow httpd_t port_t:tcp_socket name_connect;
allow httpd_t self:process { execstack execmem };

Monday, February 08, 2010

mod_mono, Centos 5 64 bit, and SELinux

Getting mod_mono up and running on Ubuntu 9.10 is relatively simple. Install the packages, drop in a test asmx file, browse to the URL and you are done.

 
apt-get install libapache2-mod-mono mono-apache-server2


My experience getting the same demo working on Centos 5 with SELinux was a bit more involved. First off, here's the complete simple web service. You should be able to drop it into your document root and browse to the appropriate URL, once mod_mono is properly installed.


<%@ WebService Language="c#" Codebehind="TestService.asmx.cs" Class="WebServiceTests.TestService" %>

using System;
using System.Web.Services;
using System.Web.Services.Protocols;

namespace WebServiceTests
{
    public class TestService : System.Web.Services.WebService
    {
        [WebMethod]
        public string Echo (string a)
        {
            return a;
        }

        [WebMethod]
        public int Add (int a, int b)
        {
            return a + b;
        }
    }
}


On Centos 5, install these packages:


yum install mod_mono xsp mono-web


To enable mod_mono for Apache and run the xsp demo programs, add something like the following to the tail end of your httpd.conf file. Be sure to check that the paths used here are the same on your machine. (Note that I'm using a 64 bit Centos installation.)



AddType application/x-asp-net .aspx
AddType application/x-asp-net .asmx
AddType application/x-asp-net .ashx
AddType application/x-asp-net .asax
AddType application/x-asp-net .ascx
AddType application/x-asp-net .soap
AddType application/x-asp-net .rem
AddType application/x-asp-net .axd
AddType application/x-asp-net .cs
AddType application/x-asp-net .config
AddType application/x-asp-net .Config
AddType application/x-asp-net .dll
AddType application/x-asp-net .asp
DirectoryIndex index.aspx
DirectoryIndex Default.aspx
DirectoryIndex default.aspx


Alias /demo /usr/lib64/xsp/test
MonoApplications "/demo:/usr/lib64/xsp/test"
MonoServerPath /usr/bin/mod-mono-server


You are likely to run into myriad problems if you are using SELinux. Start by giving httpd permission to run mono.


chcon -t httpd_sys_content_t '/usr/bin/mono'


Each time you hit your URL you will likely encounter another SELinux error. You can repeat this process again and again until you come up with a final policy that will allow apache access to mono, its directories, and dependencies. My final policy looked like this.


module mymono 1.0;

require {
type lib_t;
type tmp_t;
type mono_exec_t;
type httpd_t;
type httpd_sys_script_t;
class process ptrace;
class sock_file { write create };
class sem create;
class file { read execute_no_trans };
}

#============= httpd_sys_script_t ==============
allow httpd_sys_script_t self:sem create;

#============= httpd_t ==============
allow httpd_t lib_t:file execute_no_trans;
allow httpd_t mono_exec_t:file { read execute_no_trans };
allow httpd_t self:process ptrace;
allow httpd_t tmp_t:sock_file { write create };



Mono makes extensive use of a temp directory known as the wapi directory. You can specify your own temp directory in your httpd.conf file; otherwise the default, /tmp/.wapi, is used.

It took a while to discover that /tmp/.wapi needed different permissions. The best clue I could get from the messages was:


Feb 8 08:43:32 carbon setroubleshoot: SELinux is preventing the mono from using potentially mislabeled files (mod_mono_server_global). For complete SELinux messages. run sealert -l a00a5946-cec1-4291-a410-e74c5f96edfd


This was corrected by running...


restorecon -R -v /tmp/.wapi


...as suggested by sealert.

Just as I thought I was finished, with the mono test application finally working, I found additional errors in /var/log/audit/audit.log. This policy was the fix:


module mynotify 1.0;

require {
type httpd_t;
type inotifyfs_t;
class dir read;
}

#============= httpd_t ==============
allow httpd_t inotifyfs_t:dir read;



Are we done yet? I sure hope so. I read elsewhere on the web that there is a plan to get the proper SELinux configuration into the mod_mono RPMs. Until that happens, I hope that this info will help you to get your mod_mono setup working.

Note: After rebooting, I had to relabel the temp and bin directory with these two commands:


restorecon -R -v /tmp/.wapi
chcon -t httpd_sys_content_t '/usr/bin/mono'


I'm currently looking for a better, permanent solution.

Monday, January 25, 2010

ISO 9797 algorithm 3

Last Friday I set out to implement ISO 9797 algorithm 3 using the OpenSSL library. I did not have the specification handy, so I decided to do the best I could with what I could find by way of examples on the net.


I came across a description of the algorithm in this 2005 thread (http://www.derkeiler.com/Newsgroups/sci.crypt/2005-02/0374.html). This was posted in a query by someone named Christian. He also posted his keys, data and the expected answer.


H0 = 0
stages 1 to n: Hj = Enc(K, Dj XOR H{j-1})
MAC = Enc(K, Dec(K', Hn))


Francois Grieu replied with, "This is very likely ISO/IEC 9797-1, using DES as the block cipher,
padding method 2, MAC algorithm 3." He provided an answer by sharing sample code in "some near-extinct dialect".


set m0 72C29C2371CC9BDB #message
set m1 65B779B8E8D37B29
set m2 ECC154AA56A8799F
set m3 AE2F498F76ED92F2

set pd 8000000000000000 #padding

set iv 0000000000000000 #initialisation vector

set k0 7962D9ECE03D1ACD #key
set k1 4C76089DCE131543

set xx {iv} # setup
for mj in {m0} {m1} {m2} {m3} {pd} # for each block including padding
set xx `xor {xx} {mj}` # chain
set xx `des -k {k0} -c {xx}` #encrypt
end
set xx `des -k {k1} -d {xx}` #decrypt
set xx `des -k {k0} -c {xx}` #encrypt
echo {xx} #show result

5F1448EEA8AD90A7


I've implemented the same in C for the purpose of research.


#include <openssl/des.h>
#include <stdio.h>
#include <memory.h>
#include <string.h>

//message + padding
const unsigned char msg[40] = { 0x72, 0xC2, 0x9C, 0x23, 0x71, 0xCC, 0x9B, 0xDB,
0x65, 0xB7, 0x79, 0xB8, 0xE8, 0xD3, 0x7B, 0x29,
0xEC, 0xC1, 0x54, 0xAA, 0x56, 0xA8, 0x79, 0x9F,
0xAE, 0x2F, 0x49, 0x8F, 0x76, 0xED, 0x92, 0xF2,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };

//initialization vector
unsigned char iv[8] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };

unsigned char k0[8] = { 0x79, 0x62, 0xD9, 0xEC, 0xE0, 0x3D, 0x1A, 0xCD };
unsigned char k1[8] = { 0x4C, 0x76, 0x08, 0x9D, 0xCE, 0x13, 0x15, 0x43 };


void print_hex(const unsigned char *bs, int n) {

for (int i = 0; i < n; i++)
printf("%02x", bs[i]);
printf("\n");
}

void des_ecb_crypt(unsigned char* input, unsigned char* output, int encrypt, unsigned char* key) {

des_key_schedule sched;
des_set_key((des_cblock *) key, sched);

DES_ecb_encrypt((const_DES_cblock *)input,
(const_DES_cblock *)output,
&sched,
encrypt);
}

void xor_block(unsigned char* src, unsigned char* dest) {

for (int x = 0; x < 8; x++) {
src[x] = src[x] ^ dest[x];
}
}

int main(int argc, char* argv[]) {

unsigned char output[8];
unsigned char xx[8];
unsigned char block[8];
int offset = 0;

memcpy(xx, iv, 8);

// Chain and encrypt the five 8-byte blocks (message + padding)
for (int x = 0; x < 5; x++) {

memcpy(block, &msg[offset] , 8);
offset+=8;

//set xx `xor {xx} {mj}` # chain
xor_block(xx, block);

//set xx `des -k {k0} -c {xx}` #encrypt
des_ecb_crypt(xx, output, DES_ENCRYPT, k0);
memcpy(xx, output, 8);
}


des_ecb_crypt(xx, output, DES_DECRYPT, k1);
memcpy(xx, output, 8);

des_ecb_crypt(xx, output, DES_ENCRYPT, k0);
memcpy(xx, output, 8);

print_hex(xx, 8);
return 1;
}
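
As a cross-check, the same chaining is easy to express in Python. The sketch below assumes the pycryptodome package (not something the original exercise used) and should reproduce the 5F1448EEA8AD90A7 value from the thread:

from Crypto.Cipher import DES  # pycryptodome

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

k0 = DES.new(bytes.fromhex('7962D9ECE03D1ACD'), DES.MODE_ECB)
k1 = DES.new(bytes.fromhex('4C76089DCE131543'), DES.MODE_ECB)

blocks = [bytes.fromhex(h) for h in (
    '72C29C2371CC9BDB', '65B779B8E8D37B29',
    'ECC154AA56A8799F', 'AE2F498F76ED92F2',
    '8000000000000000')]           # message blocks plus padding block

h = bytes(8)                       # H0 = initialisation vector of zeros
for d in blocks:
    h = k0.encrypt(xor(h, d))      # Hj = Enc(K, Dj XOR H{j-1})

mac = k0.encrypt(k1.decrypt(h))    # MAC = Enc(K, Dec(K', Hn))
print(mac.hex().upper())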

Friday, January 15, 2010

Finding hidden characters in file using Vim

Show newlines and tab location:

:set list

Back to normal:

:set nolist

View open file as hex:

:%!xxd

Tuesday, January 05, 2010

Apache, mod_wsgi, Django, SELinux

I've come to rely on the fact that if something isn't working after an installation on Centos 5, it is probably due to bad SELinux permissions. I've learned to live with SELinux and generally can handle any complication it throws at me. I've learned to keep an eye on /var/log/messages and /var/log/audit/audit.log. I've learned to use sealert and how to make my own local policies. Yet still it finds a way to throw me under the bus.

This time around I'm installing Django under Apache. I finally moved from mod_python to mod_wsgi. With mod_wsgi, less configuration is needed in httpd.conf, but more is needed in an external config file. For example, my original mod_python configuration looked like this:


<Directory "/var/www/html/python">
AddHandler mod_python .py
PythonHandler mptest
PythonDebug On
</Directory>

<Location "/">
SetHandler python-program
PythonHandler django.core.handlers.modpython
SetEnv DJANGO_SETTINGS_MODULE hsm.settings
PythonDebug On
PythonPath "['/home/stuff/src/python', '/home/stuff/src/python/hsm'] + sys.path"
</Location>


The second section is the Django deployment. (The first I included to remind myself how simple it is to deploy a very basic test program using mod_python. )

With mod_wsgi, the apache config looks like this. A single line identifies the location of the wsgi configuration, and the Directory element is used to give Apache permission to access the script.


WSGIScriptAlias / /usr/local/www/wsgi-scripts/hsm.wsgi

<Directory /usr/local/www/wsgi-scripts>
Order allow,deny
Allow from all
</Directory>



My custom wsgi script includes these lines. The last line keeps the wsgi handler from puking on print lines that might be included in your file.


import os
import sys

os.environ['DJANGO_SETTINGS_MODULE'] = 'hsm.settings'

import django.core.handlers.wsgi
application = django.core.handlers.wsgi.WSGIHandler()

sys.stdout = sys.stderr


For this particular Django application, I decided to use sqlite3 instead of mysql. But attempting to launch proved problematic.



[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] return query.execute_sql(return_id)
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] File "/usr/lib/python2.4/site-packages/django/db/models/sql/subqueries.py", line 320, in execute_sql
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] cursor = super(InsertQuery, self).execute_sql(None)
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] File "/usr/lib/python2.4/site-packages/django/db/models/sql/query.py", line 2369, in execute_sql
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] cursor.execute(sql, params)
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] File "/usr/lib/python2.4/site-packages/django/db/backends/util.py", line 19, in execute
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] return self.cursor.execute(sql, params)
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] File "/usr/lib/python2.4/site-packages/django/db/backends/sqlite3/base.py", line 193, in execute
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] return Database.Cursor.execute(self, query, params)
[Tue Jan 05 11:02:08 2010] [error] [client 10.8.8.62] OperationalError: attempt to write a readonly database


To read and write to the sqlite database, I had to:

1. Ensure the database file and its parent directory were owned by the apache user. (The location of the database file is specified in the Django settings.py file for your project.)

2. Add a Directory entry to the httpd.conf file for the location of my database file. By default, apache does not have access to directories not under DocumentRoot.

3. Apply SELinux level permissions to the database file and its parent directory.

It took me quite a while to discover that I was hitting SELinux-level permissions because I did not receive notifications in the /var/log/messages file. Normally all security exceptions arrive there and it is easy enough to tail the file and look for events. For whatever reason, the alerts were not appearing in the log, at least not every time. For example, the setroubleshoot alert browser (sealert -b) showed the exact problem that occurred at 11:02 AM (below), but the /var/log/messages file had no corresponding entry. The log did have a similar message from 10:59. Either the log does not receive duplicate messages for SELinux events, or some kind of bug is to blame.


Summary:

SELinux is preventing the httpd from using potentially mislabeled files
./sqlite3.db (usr_t).

Detailed Description:

SELinux has denied the httpd access to potentially mislabeled files
./sqlite3.db. This means that SELinux will not allow httpd to use these files.
Many third party apps install html files in directories that SELinux policy
cannot predict. These directories have to be labeled with a file context which
httpd can access.

Allowing Access:

If you want to change the file context of ./sqlite3.db so that the httpd daemon
can access it, you need to execute it using chcon -t httpd_sys_content_t
'./sqlite3.db'. You can look at the httpd_selinux man page for additional
information.



The solution was to modify the SELinux permissions on the folder that contained the sqlite database:


sudo chcon -R system_u:object_r:httpd_sys_content_t database_folder


Hope this information will help the next unlucky person to deploy in a similar environment.