Handling Images from the Command Line

Recently, I needed to do some work from the Windows command line, and I had to deal with a few images. Along the way, I found some great tools. All of these tools are free, and they can come in handy when automating.

Manipulation

ImageMagick – This is a collection of command line tools. You can do image conversion, view properties, transform, add transparency, join, overlay, apply special effects, and tons more. It also has APIs for C, C++, Java, .NET, Perl, PHP, Python, Ruby, and others. Highly recommended.
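
Since the point here is automating, here’s a minimal sketch of driving ImageMagick from Python. It assumes ImageMagick’s convert and identify binaries are on your PATH (on Windows, beware that the system ships its own convert.exe), and the filenames are just placeholders:

import subprocess

# Resize a PNG to half size and save it as a JPEG (placeholder filenames).
subprocess.call( [ "convert", "input.png", "-resize", "50%", "output.jpg" ] )

# Print basic properties (format, dimensions, etc.) of the result.
subprocess.call( [ "identify", "output.jpg" ] )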

Screen Capture

CmdCapture – Takes a screenshot of your desktop from the command line.

IECapt – Capture Internet Explorer’s rendering of a web page into a BMP, JPEG or PNG image file.

CutyCapt – Capture WebKit’s rendering of a web page into a variety of vector and bitmap formats, including SVG, PDF, PS, PNG, JPEG, TIFF, GIF, and BMP.

wkhtmltopdf – Convert HTML to PDF using the WebKit rendering engine and Qt.
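
These capture tools lend themselves to scripting as well. For example, here’s a minimal sketch of rendering a page to PDF with wkhtmltopdf from Python (the URL and output filename are placeholders, and wkhtmltopdf is assumed to be on your PATH):

import subprocess

# Render a web page to PDF (placeholder URL and filename).
subprocess.call( [ "wkhtmltopdf", "http://example.com/", "page.pdf" ] )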

yUML and ColdFusion

I just tried writing a quick script in Python that scans CFCs and generates a yUML URL to diagram them. I pointed the script at my root CFC path and got a URL about 13K characters long. I pasted it into the address bar to see what would happen, and got the following:

Request-URI Too Large
 
The requested URL's length exceeds the capacity limit for this server.
Apache/2.2.3 (Debian) Phusion_Passenger/2.0.2 Server at Ess000235.gtcust.grouptelecom.net Port 80

I wonder what the exact limit is; Apache’s default LimitRequestLine is 8190 bytes, which would explain a 13K-character URL failing. I suppose I’ll have to do a CFC per diagram and then bind them together somehow. I chose Python so this script can be part of my build script.

Here’s the code so far, which, of course, could be optimized:

import re
import os
 
# UML Syntax
# http://yuml.me/diagram/class/[User|Property1;Property2|Method1();Method2()]
# http://yuml.me/diagram/class/
# [
#   User
#   |
#     Property1;
#     Property2
#   |
#     Method1();
#     Method2()
#  ]
 
 
# Master Path
ROOT_PATH = 'C:\\temp\\cf-yuml'
 
def SearchForFile( rootpath, searchfor, includepath = 0 ):
 
  # Search for a file recursively from a root directory.
  #  rootpath  = root directory to start searching from.
  #  searchfor = regexp to search for, e.g.:
  #                 search for *.jpg : \.jpg$
  #  includepath = appends the full path to the file
  #                this attribute is optional
  # Returns a list of filenames that can be used to loop
  # through.
  #
  # TODO: Use the glob module instead. Could be faster.  
  names = []
  append = ""
  for root, dirs, files in os.walk( rootpath ): 
    for name in files:
      if re.search( searchfor, name ):
        if includepath == 0:
          root = ""          
        else:          
          append = "\\"
        names.append( root + append + name )        
  return names  
 
 
def getCFCInfo ( FILE, path ):
  FILE.seek( 0, 0 )  
  CFCLines = FILE.readlines()
 
  CFCFunctions  = []
  CFCProperties = []
  CFC           = {}
 
  for i in CFCLines:
    # Get names of methods  
    if re.search( "^<cffunction", i , re.IGNORECASE | re.MULTILINE ):    
      CFCFunctions.append( re.search( r'name\s*=\s*"([\w$-]+)"', i, re.DOTALL | re.IGNORECASE).group(1) )
 
    # Get names of properties
    if re.search( "^<cfproperty", i , re.IGNORECASE | re.MULTILINE ):    
      CFCProperties.append( re.search( r'name\s*=\s*"([\w$-]+)"', i, re.DOTALL | re.IGNORECASE).group(1) )     
 
  CFC = { "properties":CFCProperties, "methods":CFCFunctions }  
 
  # Generate URL
  strFunctions  = ""
  strProperties = ""
 
  for i in CFCFunctions:
    strFunctions  += i + "();"
 
  for i in CFCProperties:
    strProperties += i + ";"  
 
  CFCFileName = re.search(r"\\([\w-]+)\.cfc$", path, re.DOTALL | re.IGNORECASE).group(1)  
  return "[" + CFCFileName + "|" + ( strProperties.strip()[:-1] + "|" if strProperties.strip()[:-1] else "" ) + strFunctions.strip()[:-1] + "]"  
 
URL = ""
 
for i in SearchForFile( ROOT_PATH, "\.cfc$", 1 ):
  CFCFile = open( i, "r" )
  URL += getCFCInfo( CFCFile, i ) + ","
  CFCFile.close()
 
URL = URL[:-1]
print "http://yuml.me/diagram/class/" + URL

I’ll keep working on this as time goes on. So far it just goes through all the CFCs under the path you point it to, crawling through all subdirectories. There’s no relationship between classes yet, however.
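
When I do get to relationships, the plan is roughly this: scan each cfcomponent tag for an extends attribute and emit yUML’s inheritance arrow ([Parent]^-[Child]). Here’s a rough, untested sketch; it assumes the extends attribute sits inside the cfcomponent tag:

import re

def getParent( FILE ):
  # Look for <cfcomponent ... extends="..."> and return the parent
  # component name, or None if the CFC doesn't extend anything.
  FILE.seek( 0, 0 )
  match = re.search( r'<cfcomponent[^>]*extends\s*=\s*"([\w.]+)"', FILE.read(), re.IGNORECASE )
  return match.group(1) if match else None

# Then, per CFC, something like:
#   URL += "[" + parent + "]^-[" + CFCFileName + "],"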

Python and SQL Server

Setting up Python to connect to SQL Server was relatively easy. First, you select a DB API driver; I chose pyodbc after seeing it used in a Python article on Simple-Talk. There are two simple steps:

  1. Install Pywin32. Get the latest. It’s a dependency for pyodbc.
  2. Install pyodbc. Get it for the version of Python you’re using.

Once you’ve done this, you can query your SQL Server database like so:

import pyodbc
 
connection = pyodbc.connect('DRIVER={SQL Server};SERVER=192.168.0.5;DATABASE=MyAwesomeDB;UID=sa;PWD=password')
cursor = connection.cursor()
 
cursor.execute("select * from states")
 
for row in cursor:
  print row.StateID, row.Abbreviation, row.Name

For more snippets and a tutorial, check out the documentation.
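
One nicety from the docs worth calling out: execute accepts query parameters, so you don’t have to build SQL strings by hand. A quick sketch, reusing the cursor from the snippet above (the ‘CA’ value is just for illustration):

cursor.execute( "select * from states where Abbreviation = ?", 'CA' )
row = cursor.fetchone()
if row:
  print row.StateID, row.Abbreviation, row.Name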

Now let’s try something more interesting: let’s do some inserts and see how long they take.

import win32api
import uuid
import pyodbc 
 
connection = pyodbc.connect('DRIVER={SQL Server};SERVER=192.168.0.5;DATABASE=MrSkittles;UID=sa;PWD=password')
cursor = connection.cursor()
 
_start = win32api.GetTickCount()
 
for i in range( 0, 10000 ):  
  # Let's insert two pieces of data, both random UUIDs. 
  sql = "INSERT INTO Manager VALUES( '" + str( uuid.uuid4() ) + "', '" + str( uuid.uuid4() ) + "' )"  
  cursor.execute( sql )
  connection.commit()
 
_end = win32api.GetTickCount()
_total = _end - _start
 
print "\n\nProcess took", _total * .001, "seconds"

After some tests, 10,000 records took roughly 20-30 seconds, and 1,000,000 records took 30 to 40 minutes. A bit slow, but this wasn’t a server machine. My desktop is a Core Duo, 1.8GHz x 2, with ~4GB of RAM and PAE on Windows XP, but I ran this in a VMware VM with 1GB of RAM running SQL Server 2005 on Windows Server 2003. The table was a two-column table, both columns varchar(50). On a server machine, it should be a helluva lot faster.
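
Part of the slowness is self-inflicted, though: the loop above commits after every single insert. Here’s a sketch of a variant (same made-up Manager table and connection string) that parameterizes the insert, batches it with executemany, and commits once at the end; it should cut the per-row overhead considerably:

import uuid
import pyodbc

connection = pyodbc.connect('DRIVER={SQL Server};SERVER=192.168.0.5;DATABASE=MrSkittles;UID=sa;PWD=password')
cursor = connection.cursor()

# Build the rows up front, send them as one batch, and commit a single transaction.
rows = [ ( str( uuid.uuid4() ), str( uuid.uuid4() ) ) for i in range( 10000 ) ]
cursor.executemany( "INSERT INTO Manager VALUES( ?, ? )", rows )
connection.commit()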

Twitter: Migrate People You’re Following to Another Account

I just had to migrate the people I was following from one account to another. I was able to do this using TweepML. You’ll need to have people organized into lists first, though. The only drawback was that it didn’t create the lists in the new account, so I had to create them manually in Twitter. Annoying, but it will have to do until Twitter builds this in.

Converting M4P to MP3

The M4P format is Apple’s DRM-protected audio format used on iPhones/iPods. I needed to modify some songs that I got from the iTunes store. First, I used Sharepod to copy the M4P files off my iPhone. Then I used Sound Forge to record the songs from Winamp playback. Winamp does not play M4P files natively, so you’ll have to install the M4P plugin. Also, you may have to open the Windows master volume control to properly unmute the recording source.

If you want to burn M4P files from iTunes, create a local playlist and drag-and-drop the M4P files you got from Sharepod into it.

IIS Logs Scripts

While working with some IIS logs, I decided to get some Python practice in and put together a few handy functions for working with IIS log files. On a 2.5GHz machine with 3GB of RAM running Windows XP, these functions take about 3 seconds to process a 180MB text file. The code could be optimized further if you’re dealing with larger files.

#!/usr/bin/env python
 
# An IIS log file can have various log properties. Every time you change the columns
# being logged in IIS, it writes a new header row of column names.
import re
import os
 
MainLogDelimiter = "#Software: Microsoft Internet Information Services 6.0"
TestFile         = "C:\\Dan\\IIS-Log-Import\\Logs\\not-the-same.txt"
BigTestFile      = "C:\\Dan\\IIS-Log-Import\\Logs\\ex090914\\ex090914.log"
LogsDir          = "C:\\Dan\\IIS-Log-Import\\Logs"
 
def SearchForFile( rootpath, searchfor, includepath = 0 ):
 
  # Search for a file recursively from a root directory.
  #  rootpath  = root directory to start searching from.
  #  searchfor = regexp to search for, e.g.:
  #                 search for *.jpg : \.jpg$
  #  includepath = appends the full path to the file
  #                this attribute is optional
  # Returns a list of filenames that can be used to loop
  # through.
  #
  # TODO: Use the glob module instead. Could be faster.  
  names = []
  append = ""
  for root, dirs, files in os.walk( rootpath ): 
    for name in files:
      if re.search( searchfor, name ):
        if includepath == 0:
          root = ""          
        else:          
          append = "\\"
        names.append( root + append + name )        
  return names  
 
 
def isSameLogProperties( FILE ):
  # Tests whether a log file has the same columns throughout.
  # This is in case column properties were added/subtracted over
  # the course of the log file.
  FILE.seek( 0, 0 )
  SubLogs = FILE.read().split( MainLogDelimiter )
 
  # SubLogs[0] Stores the number of different log variations in the log file  
  SubLogs[0] = len( SubLogs ) - 1    
 
  # Grab the column names from the log file, separated by space
  columns = re.search( "^#Fields:\s([\w\-()\s]+)$", SubLogs[1], re.IGNORECASE | re.MULTILINE ).group(1)   
  LogSameProperties = True
 
  for i in range( 2, SubLogs[0] + 1 ):
    # If there are columns
    if ( len( columns ) > 0 ):    
      if ( columns != re.search( "^#Fields:\s([\w\-()\s]+)$", SubLogs[i], re.IGNORECASE | re.MULTILINE ).group(1) ):        
        LogSameProperties = False
        break  
 
  return LogSameProperties
 
 
def getFirstColumn( FILE ):
  # This gets the column header row from a log file. It returns only the first header
  # row, and ignores any later header rows that may exist if columns were added/subtracted in IIS.
  # input: FILE
  # output: 1 single element List
  FILE.seek( 0, 0 )
  names = []
  # Grab the column names from the log file, separated by space
  names.append( re.search( "^#Fields:\s([\w\-()\s]+)$", FILE.read().split( MainLogDelimiter )[1], re.IGNORECASE | re.MULTILINE ).group(1).strip() )
  return names
 
 
def getAllColumns( FILE ):
  # This gets all the columns from a log file. 
  # input: FILE
  # output: List
  FILE.seek( 0, 0 )  
  names = []
  SubLogs = FILE.read().split( MainLogDelimiter )    
  # SubLogs[0] Stores the number of different log variations in the log file  
  SubLogs[0] = len( SubLogs ) - 1        
  for i in range( 1, SubLogs[0] + 1 ):        
    names.append( re.search( "^#Fields:\s([\w\-()\s]+)$", SubLogs[i], re.IGNORECASE | re.MULTILINE ).group(1).strip() )  
  return names  
 
 
# EXAMPLE:
# Loop through all the IIS log files in the directory
for file in SearchForFile( LogsDir, "\.txt$", 1 ):
  LogFile = open( file, "r" )
  if ( isSameLogProperties( LogFile ) ):
    print file, "the same"
  else:
    print file, "not the same"
  LogFile.close()

Sharepod Manages iPhone/iPod Songs

Just tried out Sharepod. It’s a great free program for extracting songs from your device. No installation needed, since it’s a standalone executable. No ads or sign-ups. I did come across some quirks, though.

I was able to:

  • Delete MP3s from my iPhone
  • Copy MP3s from my computer onto my iPhone
  • Copy MP3s from my iPhone onto my computer

I was not able to:

  • Update a playlist when I copied an MP3 from my computer to the iPhone – I still needed iTunes for that
  • Do anything with the Photos options in Sharepod – when I clicked “Photos”, the application locked up.

Remove Comments from IIS Logs

If you think that Log Parser is a bit on the slow side (i.e. if you’re dealing with big IIS logs) and you want to bulk import your logs into SQL Server, then you’ll have to remove the # comments from the log files first. Microsoft has the PrepWebLog utility for this, but it seems to choke on files larger than 100MB. Also, you have to wrap it in a batch file to run it against a whole directory of files.

I wrote a Perl script that’s relatively fast (faster than PrepWebLog) and it can crawl folders/subfolders recursively. Here it is:

# parse.pl
# example:
#   parse c:\temp\logs\logs*\*.log
#
# Requirement: no spaces in the directory names and file names.
# This gets called via run.bat.

use strict;
use warnings;
 
 
sub getFileList
{
    # Returns the list of files (with full paths) matching a
    # glob pattern, e.g.: getFileList( "*.log" );
    my ($pattern) = @_;
    return glob( $pattern );
}
 
 
sub remove_comments
{
  # Remove # pound-sign comments from a file.
  # $_[0] = filename; output goes to "<filename>.txt".
  my ($filename) = @_;

  open (my $in, "<", $filename )
      or die "in: $filename";

  open (my $out, ">", "$filename.txt")
      or die "out: $filename";

  while( my $line = <$in> )
  {
      print $out $line
          unless $line =~ /^#/;
  }

  close $in;
  close $out;
}
 
 
########## MAIN #############
my $arg = $ARGV[0];
 
# Location of root directory of logs files
#$arg = 'c:\temp\logs\logs*\*.log';
 
# Double the backslashes so the glob pattern treats them literally
$arg =~ s/\\/\\\\/g;
 
# Loop through all the log files. 
for my $file (getFileList ($arg))
{  
  print ( "Processing file $file ... \n" );    
  remove_comments( $file );  
}

The Perl script gets called via run.bat:

REM No spaces in directory and file names.
perl Parse.pl D:\statesites\W3SVC*\*.log
pause