Gradle deployment script for Grails webapp

I haven’t had much success with finding useful deployment strategies and/or scripts for Grails anywhere. The extent of the documentation I’ve been able to locate for deployment simply tells you to create a WAR and upload it to the servlet container.

Not terribly helpful if you want to run a formal process.

So, for my Grails webapps, I came up with this. I create a file in the “gradle” directory named “deploy.gradle” containing the following:
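Something along these lines – the task names, the property names (deployHost and friends), and the scp-based copy are all illustrative, not a definitive script:

```groovy
// gradle/deploy.gradle -- a sketch; task, property and host names are illustrative
// pick an environment file from gradle/deploy, e.g. -Penv=staging
apply from: "deploy/${project.hasProperty('env') ? project.env : 'staging'}.gradle"

// check out the branch given with -Pbranch=... before packaging
task checkoutBranch {
    doLast {
        def branch = project.hasProperty('branch') ? project.branch : 'master'
        exec { commandLine 'git', 'checkout', branch }
    }
}

// build the WAR from that branch, then copy it to the target container
task deploy(dependsOn: ['checkoutBranch', 'war']) {
    doLast {
        exec {
            commandLine 'scp', war.archivePath.path,
                        "${deployUser}@${deployHost}:${deployDir}"
        }
    }
}
```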

Also in the “gradle” directory is a subdirectory named “deploy” where I have the files specific to the environments to which I can deploy, such as “staging.gradle”:
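Each environment file just sets the environment-specific values – the ones below are placeholders, obviously:

```groovy
// gradle/deploy/staging.gradle -- environment-specific settings (placeholders)
ext {
    deployHost = 'staging.example.com'
    deployUser = 'deploy'
    deployDir  = '/var/lib/tomcat7/webapps'
}
```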

Using the script above, I can deploy a particular branch from within my git repository to a specific environment thus:
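The invocation looks something like this (the env and branch property names are whatever you chose in your own script):

```shell
gradle deploy -Penv=staging -Pbranch=release-1.2
```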

It’s probably not perfect, but since I’m new to Gradle and Grails, I think it’s a pretty good start!

CM3 v5.10 on Ubuntu

This was a fairly difficult task to figure out (no documentation out there), so I thought I’d share how I ended up getting the latest version of the Critical Mass Modula-3 compiler installed.

Please note that I was working on an emulated 32-bit operating system, so if you’re working on a 64-bit installation, make changes where applicable.

  1. You must first download an actual binary copy of CM3. This isn’t actively mentioned anywhere in the source documentation, but you will notice that any of the scripts you try to run will fail before anything else happens. If you pay close attention, you will see each script is trying to run “cm3” … which is annoying, because it’s CM3 you’re trying to install :P Go to the URL https://modula3.elegosoft.com/cm3/ and scroll down to the section “Target Platform LINUXLIBC6”. I just downloaded the first file, so it’s the whole shebang.
  2. Extract the file and run “cminstall” – this will install the cm3 compiler in a place you wouldn’t necessarily expect: /usr/local/cm3/bin – this means you won’t be able to invoke cm3 right away, so you must …
  3. Add the cm3 binary directory to the path thus:
  4. Next, clone the latest source repository:
  5. From the directory cm3/scripts, run the upgrade.sh script
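Condensed into commands, steps 3 to 5 look like this. The repository URL is an assumption (check the project page for the current one), so the clone and upgrade lines are shown as comments:

```shell
# Step 3: make cm3 invokable (cminstall's default prefix assumed)
export PATH=/usr/local/cm3/bin:$PATH

# Steps 4 and 5 (repository URL is an assumption -- verify before cloning):
#   git clone https://github.com/modula3/cm3.git
#   cd cm3/scripts && ./upgrade.sh
```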

And presto – your CM3 should now be upgraded all the way.

Update:

When trying to build all the additional packages (using “do-cm3-all.sh buildship”), even if you have ODBC drivers installed, the process will fail when building the “db” packages. To fix this:

This holds true for the UI package as well, specifically libXaw and libXmu:
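In both cases the fix amounts to installing the development packages so the linker can find the libraries. On Ubuntu, the package names below are my best guess – adjust for your release:

```shell
sudo apt-get install unixodbc-dev            # for the "db" packages
sudo apt-get install libxaw7-dev libxmu-dev  # for the UI package
```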


Windows Server + IIS + PHP 5.3 + ImageMagick + PDFs

After 3 days and hours of frustration, here are the steps to follow for getting php_imagick working on a Windows server running IIS:

  • make sure everything you install is 32-bit – this is because PHP is 32-bit, and introducing any 64-bit component will break the whole thing
  • download php_imagick here: http://windows.php.net/downloads/pecl/releases/imagick/3.1.2/php_imagick-3.1.2-5.3-nts-vc9-x86.zip
  • place only the php_imagick.dll file in the “ext” subdirectory of PHP
  • install a legacy version of ImageMagick – this is because as the forum thread http://stackoverflow.com/questions/8457744/installing-imagemagick-onto-xampp-windows-7 points out, PHP 5.3 is compiled with a different version of MSVC than the latest version of ImageMagick, and the two don’t talk to each other. Unfortunately, ImageMagick doesn’t keep archives of old versions, but thankfully other sites do: http://ftp.sunet.se/pub/multimedia/graphics/ImageMagick/binaries/ImageMagick-6.6.5-10-Q16-windows-dll.exe
    For simplicity’s sake, install the file in “C:\ImageMagick”
  • install a legacy version of Ghostscript – again, this is to make sure the versions are compatible with each other. Ghostscript themselves don’t seem to keep legacy files either, but you can get them from SourceForge: http://sourceforge.net/projects/ghostscript/files/GPL%20Ghostscript/8.62/gs862w32.exe/download
    For simplicity’s sake, install the file in “C:\Ghostscript”
  • After all this, ImageMagick still won’t see Ghostscript as a delegate for handling PDFs – you must edit the “config\delegates.xml” file and replace all instances of “@PSDelegate@” with the full path to the Ghostscript binary (note the forward slashes), as described at http://stackoverflow.com/questions/13304832/ghostscripts-file-path-in-imagemagick: C:/Ghostscript/8.62/bin/gswin32c.exe
  • At this stage, any command-line testing of the ImageMagick to Ghostscript communication will probably work – but it won’t work under IIS. This is because by default, Ghostscript uses “C:\Windows\Temp”, but IIS doesn’t have permission to that directory. You must grant read/write access to that directory to “IIS_IUSRS” (or php_imagick will keep reporting that there is no delegate) as explained here: http://www.wizards-toolkit.org/discourse-server/viewtopic.php?f=1&t=24757&p=110439&sid=35e443f4faf1b92d68632a72c4000d3e#p110439
  • Finally – REBOOT! None of this will work without rebooting at least once so the entire OS has references to the newly installed software and libraries.
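The delegates.xml change looks roughly like this – the attributes are abbreviated here (your file will have more of them, and the exact Ghostscript arguments vary by ImageMagick version):

```xml
<!-- before -->
<delegate decode="ps:alpha" command="&quot;@PSDelegate@&quot; -q ..."/>

<!-- after -->
<delegate decode="ps:alpha" command="&quot;C:/Ghostscript/8.62/bin/gswin32c.exe&quot; -q ..."/>
```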

With any luck, you should now have a working php_imagick install – you can confirm (at least part of) this using phpinfo(). Although it may report php_imagick as installed, that’s no guarantee that PDF support is working – you won’t know that part until you actually try to read in a PDF.

REAL Subdomains under Linode

While trying to add a sub-domain to my Linode account, I did the typical Google search to see if someone else had done so already, and I could just copy their instructions. Alas, there were plenty of posts titled “Subdomain on Linode” (or something to that effect), but they all only showed how to register a host under the DNS, and set up the <VirtualHost> records under Apache.

A host is not the same as a sub-domain.

Let’s assume we have a domain “mydomain.com”. Adding “lab.mydomain.com” as an “A” record will simply create a host. You cannot then create “work.lab.mydomain.com” using this method – something which should be perfectly acceptable under an actual sub-domain.

Herewith, I present the correct procedure for adding in a sub-domain under the Linode DNS (using the above example of “mydomain.com”).

  1. Under the Domain zone for “mydomain.com”, add in the following “NS” records:
    1. ns1.linode.com -> lab.mydomain.com
    2. ns2.linode.com -> lab.mydomain.com
    3. ns3.linode.com -> lab.mydomain.com
    4. ns4.linode.com -> lab.mydomain.com
    5. ns5.linode.com -> lab.mydomain.com
  2. Create a new domain zone for “lab.mydomain.com”
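In zone-file terms, step 1 amounts to delegating the subdomain back to Linode’s own nameservers:

```
lab.mydomain.com.    IN    NS    ns1.linode.com.
lab.mydomain.com.    IN    NS    ns2.linode.com.
lab.mydomain.com.    IN    NS    ns3.linode.com.
lab.mydomain.com.    IN    NS    ns4.linode.com.
lab.mydomain.com.    IN    NS    ns5.linode.com.
```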

Presto. You can now add in host records such as “www.lab.mydomain.com” or whatever you want.

WordPress Shortcodes – My Way

As anyone who’s worked in WordPress and tried to create their own shortcodes knows, it can be a nuisance. Trying to come up with unique names for the shortcodes so as not to cause conflicts, supporting nested shortcodes, etc., etc. It can be a challenge.

Instead of using plain functions, however, I’ve started using enclosures and classes. Such a class itself registers the shortcodes which it can have embedded. And to overcome conflicts on the shortcode tag itself – I’ve found you can “namespace” those, too. Here’s an actual example:
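A sketch of the pattern (the class name, tile attributes and template path are illustrative; only the “sunsport:tiles:*” tags come from the original post):

```php
<?php
// Sketch of an enclosing shortcode backed by a class. The parent registers
// the child shortcode on entry and deregisters it on exit.
class Sunsport_Tiles {
    private $tiles = array();

    public static function start( $atts, $content = null ) {
        $instance = new self();
        // the child shortcode only exists while the parent is being processed
        add_shortcode( 'sunsport:tiles:create', array( $instance, 'create' ) );
        do_shortcode( $content );                  // runs the nested shortcodes
        remove_shortcode( 'sunsport:tiles:create' );
        return $instance->render();
    }

    public function create( $atts ) {
        // collect each tile's attributes for rendering later
        $this->tiles[] = shortcode_atts( array( 'title' => '', 'url' => '' ), $atts );
        return '';
    }

    private function render() {
        ob_start();
        include 'fragments/tiles/start.php';       // template reads $this->tiles
        return ob_get_clean();
    }
}
add_shortcode( 'sunsport:tiles:start', array( 'Sunsport_Tiles', 'start' ) );
```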

So, what we have here is a shortcode “sunsport:tiles:start” which creates an instance of our class. That instantiation registers a new shortcode, “sunsport:tiles:create”, which would be unavailable otherwise – thus we avoid having to check that it’s properly enclosed in a parent “start” shortcode – and we gracefully deregister it at the end of the run.

It’s probably worth including the “fragments/tiles/start.php” file just for reference:
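A minimal version might look like this, assuming the enclosing class collects the tiles into $this->tiles before including the template (the markup is illustrative):

```php
<?php /* fragments/tiles/start.php -- sketch; markup is illustrative */ ?>
<div class="tiles">
<?php foreach ( $this->tiles as $tile ) : ?>
    <a class="tile" href="<?php echo esc_url( $tile['url'] ); ?>">
        <?php echo esc_html( $tile['title'] ); ?>
    </a>
<?php endforeach; ?>
</div>
```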

And here’s the actual usage:
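In post content it reads like this (the title and url attributes are placeholders):

```
[sunsport:tiles:start]
  [sunsport:tiles:create title="First tile" url="/first"]
  [sunsport:tiles:create title="Second tile" url="/second"]
[/sunsport:tiles:start]
```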

There is one word of warning – do not use a naming convention like this:

  • parent shortcode – sunsport:tiles
    • child shortcode – sunsport:tiles:create

The child shortcode will never fire. For some reason, it seems WordPress doesn’t actually read the full shortcode in this scenario – instead of “sunsport:tiles:create” firing, WordPress will simply re-run “sunsport:tiles”.

That caveat aside, I find this feels a lot cleaner and less collision-prone than other examples I’ve seen.

Another “WTF?!” IE9 Bug

With Internet Explorer’s complete lack of support for so many of the neat and useful CSS styles, one always has to resort to Microsoft’s disgusting “filter” hack. The filters don’t take very many useful parameters (such as color stops in gradients) and disable text anti-aliasing.

But here’s something you probably really didn’t see coming – under IE9 only (this doesn’t affect IE8), filters completely cripple events. If you define any mouseover or click events, they will not fire.

This created a situation where I could no longer use a horizontal sliding accordion, because IE doesn’t support text rotation and uses a … you guessed it … filter.

I hate Microsoft so much … so very very much …

XMLSerializer for Internet Explorer

While trying to convert a jQuery element object into a string, I noticed that all the major browsers support “XMLSerializer”, which does precisely that task. Of course, Internet Explorer is the exception. However, IE does offer the “outerHTML” property on DOM elements, which seems to do the same thing.

I herewith present an extremely short JavaScript snippet which allows global use of XMLSerializer:
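The snippet below is a reconstruction of the idea described above – define XMLSerializer where it’s missing and delegate to outerHTML:

```javascript
// Fall back to outerHTML where XMLSerializer is missing (old IE).
var root = typeof window !== 'undefined' ? window : globalThis;

if (typeof root.XMLSerializer === 'undefined') {
    root.XMLSerializer = function XMLSerializer() {};
    root.XMLSerializer.prototype.serializeToString = function (node) {
        // IE exposes outerHTML on DOM elements, which serves the same purpose
        return node.outerHTML;
    };
}
```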

Lithium Problem on Rackspace

Today I came across a situation where I was deploying a PHP-based webapp written in Lithium and running on a Rackspace cloud site. In my scenario, I noticed 2 symptoms (appearing differently, but having the same cause).

  1. if the Lithium app is a subdirectory of another webapp (in my example, the main site is WordPress), you will always get a WordPress “Oops! The page you are looking for does not exist.” error.
  2. if the Lithium app is in the root, you will get an “Internal Server Error” page.

As it turns out, the problem is the .htaccess file included with Lithium.

I don’t think there’s anything wrong with the .htaccess per se, but under Rackspace you seem to have to include the “RewriteBase” directive.

So, as a result, you must edit all 3 .htaccess files in your Lithium project thus:

  • /.htaccess – RewriteBase /
  • /app/.htaccess – RewriteBase /app/
  • /app/webroot/.htaccess – RewriteBase /app/webroot/

If your webapp is in a subdirectory, the subdirectory name will need to be prepended to the RewriteBase path:

  • /.htaccess – RewriteBase /subdir/
  • /app/.htaccess – RewriteBase /subdir/app/
  • /app/webroot/.htaccess – RewriteBase /subdir/app/webroot/
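For reference, the webroot file ends up looking something like this – the rewrite rules here are the typical Lithium ones from memory, so treat them as a sketch:

```apache
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /subdir/app/webroot/
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [QSA,L]
</IfModule>
```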

And presto, it now magically works!

XML-RPC under Ruby on Rails

On a current project, I needed to develop a series of web services for a custom single sign-on (unified login) for a bunch of different websites to share. The project needed to be in Ruby on Rails, since that is what was available on the servers, and needed to use a protocol which PHP, Java and Ruby could all understand.

At first I tried to investigate using ActiveResource, but I found this to be excessively Rails-centric, and it only seemed to provide basic CRUD functionality. I needed these webservices to do a lot more work, with a lot more parameters. Since the Rails community (and a lot of web developers in general) seem to rave about RESTful services, my next direction was to write a custom series of REST-based webservices. As I proceeded, it became glaringly obvious that REST services have one major shortcoming – complex data.

Since the basic idea of REST is that all parameters become part of the URI itself (something, I might add, ActiveResource violates right away), you immediately have a problem when it comes to things like street addresses. Everyone’s solution is either to use basic HTTP parameters or to URL-encode the data, but to me these solutions respectively taint the point of REST and make for some truly hideous-looking URLs.

I also didn’t like the lack of type control. This same lack also negated using JSON-RPC or even some custom YAML-based solution. All the experiences I had with SOAP left a bad taste in my mouth for that one, so what was I to do?

Thankfully, I discovered that Ruby had XML-RPC functionality built right in. But the problem arose that all examples I could find of using it (since Ruby’s documentation is complete and utter dog-sh*t) only showed the XML-RPC server running in stand-alone mode. I certainly couldn’t do this. So after much tinkering, I herewith present a controller class which you can extend to offer very simple XML-RPC calls from within a Rails environment:

#
# This class provides a framework for XML-RPC services on Rails
#
require 'xmlrpc/server'
class WebServiceController < ApplicationController

  # XML-RPC calls are not session-aware, so always turn this off
  session :off

  def initialize
    @server = XMLRPC::BasicServer.new
    # loop through all the methods, adding them as handlers
    self.class.instance_methods(false).each do |method|
      # instance_methods returns strings under Ruby 1.8 and symbols under 1.9+,
      # so normalize to a string before comparing and registering
      unless method.to_s == 'index'
        @server.add_handler(method.to_s) do |*args|
          self.send(method.to_sym, *args)
        end
      end
    end
  end

  def index
    result = @server.process(request.body)
    puts "\n\n----- BEGIN RESULT -----\n#{result}\n----- END RESULT -----\n"
    render :text => result, :content_type => 'text/xml'
  end

end

Here is a working example of using the above code:

class StringController < WebServiceController

  def upper_case(s)
    s.upcase
  end

  def down_case(s)
    s.downcase
  end

end

Invoking the remote methods couldn’t be any simpler:

require 'xmlrpc/client'
require 'pp'


server = XMLRPC::Client.new2("http://localhost:3010/string")

result = server.call("upper_case", "This is my string")
pp result

result = server.call("down_case", "This is my string")
pp result

I certainly hope this simple bit of Ruby will help anyone else who may have suffered trying to figure this out as I have.

Enjoy!

When “Agile”, “Dynamic” and “Typeless” Become a Hindrance.

In recent years, I’ve seen the apparent rising popularity of “Agile” programming, powered by “dynamic” languages such as “Ruby”. While these things seem warm and fluffy at first, in the long term with large projects, they really can become a difficult beast to control.

Let’s take “Ruby on Rails” as an example. I was recently tasked with upgrading our version of Rails from 1.2.3 all the way up to 2.3.2, no easy task to be sure. We decided that a gradual, version-by-version upgrade would work the best, since it would allow us to catch deprecations and handle the framework changes a bite at a time.

What I discovered in the end was that Rails itself uses so much “magic”, and directly modifies so many core Ruby APIs, that it was impossible to predict what would break between versions. As an example, a minor version change from 1.2.3 to 1.2.4 caused unexpected changes in how dates were handled; the change-logs made no reference to such modifications. The change from 2.1 to 2.2 resulted in a host of modules no longer being identified. Oh sure, if you printed it out, Ruby “thought” the module was available, but the minute you tried to actually use anything within it, you would receive “uninitialized constant” errors on the module name itself.

I know everyone has their own opinions on the matter, but from where I stand using Ruby on Rails on a system even vaguely large and complex simply raises far too many question marks on predictability and a clean upgrade path.

As a comparison, let’s refer to Java 2 Enterprise. Every piece of code I have ever written in J2EE version 1.0 runs without any issue under a J2EE version 3 container. Not even a re-compile was required. I do realise that the comparison is somewhat unfair – but the point still remains. I have very rarely needed to make major changes to any Java-based application to cater for a new framework – especially between minor version numbers.

I hate to make a blanket case against all agile/dynamic frameworks, so I’ll admit that was probably unfair (Grails and Django are on my to-do list). However, my case holds even on a more superficial level – my own benchmarks on a variety of languages clearly show that although time and effort may be saved on initial development, dynamic/typeless languages will always suffer in raw speed, and agile frameworks will always suffer a degree of unpredictability when upgrading (this becomes immensely exaggerated when “auto-magic” comes into play, as there is no clear path when anything does break).

To be clear, I won’t abandon agile frameworks – if I just want a simple website up, and expect maybe a dozen hits a month, I’m certainly not going to do it in a full-blown enterprise container – but by the same token, I would certainly not use any such framework in a banking environment!