2010: a review

Well, 2010 has certainly been a different year, one of unprecedented personal change, but ultimately it has to go down as a successful one. In September I became a dad, which is by far the biggest challenge I have ever faced, but also the most rewarding. As if this wasn’t enough change, I have also left the company I joined after graduating in 1999, 11 years ago. One thing’s for certain: not heading back to work in January is going to be weird! So what next? I plan on taking some of the skills I have built up over the past decade and bringing them to market myself. It’s time to brush off some old development skills, learn some new ones and take up the challenge of going it alone.

In terms of this blog, how have I done? Well, I set myself the challenge of writing one post a week, and as expected I’ve fallen short of this. In reality I’ve struggled to output one post every two weeks, but overall I am pleased with this achievement. Given a full-time job and, over the past few months, a baby, my output was always going to suffer. I’m going to set myself a similar target for this next year. Working for myself may leave me with some extra time to work on my own stuff. However, if things take off, who knows, it may even suffer more!

So goodbye 2010 and hello 2011, an exciting year I hope. And hey, if you’re reading this and need some development work, please get in touch!

When Internet Mobs Attack

A few years ago, when I read Clay Shirky’s Here Comes Everybody, I remember thinking wow, that’s pretty neat. I read stories of how ordinary citizens could leverage the collective power of the internet to right wrongs; the power of one could explode exponentially through friends of friends. The little man suddenly had a voice. But as we head into a social decade, a few things I have read and seen have given me pause for thought. What if the little man is wrong?

Last month a story broke out of the twitterverse about a publication, Cook’s Source, which had abused copyright and used an author’s recipe without consent. The individuals involved fought it out in private for a little while, but one party offended the other, who, with no route to resolution, took the argument public. The internet was incredulous, and within hours the incident had exploded and the collective mob was involved. I watched from afar with a little interest. Things obviously got out of hand quickly, and before long Cook’s Source’s name was mud, advertisers were being harassed and allegedly the individuals involved were being personally harassed too. Long story short, the original offender apologised and that should have been the end of it. By this time, though, the damage had been done, and the business does not look to have survived. Last week the site carried an explanation and apology with the broken tone of someone who deeply regretted what had happened, explaining that the magazine was unlikely to survive; today the domain redirects to www.intuit.com, a site offering website building services.

This incident was in some ways understandable – the individual’s response to the copyright infringement was terrible and ill informed – but was it deserving of bringing about the demise of a business? The facts of this case seem fairly clear cut, but at the end of the day all the internet mob had to go on was the word of one person; what if she had been lying? It could be argued that the internet mob, with its virtually limitless resources, could do the fact checking and “fake mob inciters” could be outed early on, but can you count on this? I find it a little disturbing that a virtual mob can wield such power.

Large corporations can probably manage PR disasters such as this, and many employ specialist PR companies for just such occasions. Smaller businesses may not be so lucky.

Need more convincing about the potential of mobs? Watch this Ignite video:

A bit dramatic, probably a step further than what we’re talking about here, but scarily not beyond the realms of possibility.

SEO for this site

I’ve wanted to write a blog post about SEO for a while now. SEO can be a bit of a dark art, and in reality can you really expect SEO to work if you think about the numbers? As of December 2009 there were 234 million websites; no doubt as we approach the end of 2010 there are millions more. Given these numbers it is a fair bet that for any given subject there are likely to be many websites, and here is the issue: if every one of these sites applies SEO principles, then getting above another site in the rankings is a tall order. Bearing this in mind, I have always been a little sceptical about whether doing SEO for a site such as mine is even worth it. My audience is small; sure, I’d like to increase my reach, otherwise why write a blog, but is SEO really going to help?

As a long-time developer I’ve picked up more than the basics of SEO in terms of adding good metadata to a site. I was going to write a post about doing this for the site, but there are probably a thousand other sites out there that have written about this, so what new material could I add? Alongside this, I am, like most good programmers, lazy by nature; writing about the techniques used to SEO up this site would probably mean doing more development, not something a lazy guy wants to do. WordPress runs this site, and despite me not being a great lover of plug-ins (I have some good reasons for this, another post perhaps), I decided in this instance it was probably worth using one. And of course with WordPress there is really only one worthy option, Michael Torbert’s All in One SEO Pack.

This is a works-out-of-the-box plug-in: just install it and you’re done. Of course you can do more than simply install it; the plug-in comes with a multitude of options, which work at a global level and can be overridden on a post-by-post basis. I have chosen to tag all of my posts and use these tags as my meta keywords. The meta title is generated from the post title and the meta description is also auto-generated. This does mean I have to think a little bit more about my posts, making sure the tags are relevant and that the title is related to the subject matter at hand. All small points, and really things every post should have anyway.

So what about the results? Well, since installing the plug-in a couple of months ago, search engines now account for about a third of the traffic to my site. OK, these are still modest numbers, but they are absolutely an improvement. What’s more, for some terms I have some excellent rankings: one particular post about <canvas> ranks on the first page of Google for the term beginPath(). This is exactly why I wrote the article, to give a practical example for people wanting to get to grips with canvas, so helping people find it is excellent.

Is there more I could do? Absolutely. As yet I haven’t generated a Google Sitemap for my webmaster tools. And I could definitely start getting links back to my blog from other sources, which should help my ranking. I’m in no hurry to do this, but going forward hopefully my audience will grow. I’m still sceptical about targeted SEO, but for my blog this use case is perfect.

2 minute silence

It’s not often I see a piece of marketing and think genius, but that is exactly what I thought when I found out about The Royal British Legion’s 2 minute silence single.


Anyone who resides in this country should be grateful to those who have kept us a free country over countless years. Remembrance Sunday is a chance to reflect on the people who have lost their lives in pursuit of this goal. Every year we should pay our respects, and the 2 minute silence at 11 am on the nearest Sunday to the 11th of November affords us a chance to do this. Buying a poppy is of course a must and something I will do every year without fail; however, the 2 minute silence can pass by all too easily.

And this is the genius of the campaign: if enough people support it and the song reaches number one, then perhaps, just perhaps, radio stations will be forced into silence on this day. A small thing, but anything which raises awareness is a good thing. At the same time you will be raising more money for this worthy cause instead of wasting it on the latest N-Dubz track or whatever is in the charts at the moment.

I’ve just purchased it off of iTunes which meant I also got the poignant video accompaniment featuring the faces of many a famous person. The silence was very moving.

I’d urge everyone to stump up the cash and buy this.

http://www.facebook.com/poppysingle2010

Theme Functionally Complete

The more eagle-eyed amongst you (probably only me) will notice that I’ve made a few changes to the site skin. I haven’t had the time in the past few months to do much to the skin, but I’ve finally done some work on the last large element of the site which needed some love – the sidebar. There is absolutely nothing of note here; I’ve used nothing for the sidebar that isn’t already part of another aspect of the skin.

I’ve also made a small tweak to the homepage posts. One thing I’ve seen around the net is little date cards for posts, which I quite like, so I decided to create one using pure CSS. I made a small change to the template to bring the date into its own bit of markup, then used border-radius on the top and bottom corners of each element. This for me highlights one of the main issues with using CSS3. Because it is so new and essentially not yet finalised, different browsers implement these new features in different ways. Firefox declares individual corners of a radius using -moz-border-radius-topleft, while Safari and Chrome declare it using -webkit-border-top-left-radius, clearly both very different. The W3C currently defines the property as border-top-left-radius, so WebKit is closest, but as the spec is in working draft status it could easily change. I’ve used both declarations in my CSS to achieve a single card from two <div>s.

.post-date .post-month {
  -moz-border-radius-topleft: 5px;
  -moz-border-radius-topright: 5px;
  -webkit-border-top-left-radius: 5px;
  -webkit-border-top-right-radius: 5px;
}

.post-date .post-day{
  -moz-border-radius-bottomleft: 5px;
  -moz-border-radius-bottomright: 5px;
  -webkit-border-bottom-left-radius: 5px;
  -webkit-border-bottom-right-radius: 5px;
}
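
It might also be worth adding the unprefixed property names from the W3C draft alongside the vendor-prefixed ones – browsers that don’t recognise a property simply ignore it, so the card keeps its corners if the prefixes are ever dropped. A sketch of what that could look like, assuming the draft names stay as they are today:

.post-date .post-month {
  /* unprefixed W3C working draft names; ignored by browsers that don't support them */
  border-top-left-radius: 5px;
  border-top-right-radius: 5px;
}

.post-date .post-day {
  border-bottom-left-radius: 5px;
  border-bottom-right-radius: 5px;
}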

And that’s mostly it for interesting stuff. I’m posting mostly to declare the skin complete; from here on in any tweaks to the skin should be just that, tweaks. Overall I’m satisfied with the layout but highly dissatisfied with the overall colour scheme – it doesn’t sit on the page well for me. It should be pretty easy to tweak the colours in the future thanks to them being mostly defined through CSS; I might see if I can get a bit more help on it from someone with a bit more creative talent than me!

I’ve taken screenshots of the skin at each point and that could be a post for the future, as the next task is to add a bit of behaviour to the site. This of course means a trip into JavaScript land, stay tuned!

Deleting files owned by apache in bash using sudo

When I started this blog, one request I had from @neilcrookes was to share a few tidbits from the world of system administration. Let me caveat this straight away: I’m definitely no sysadmin, but over the years I’ve picked up a few tips and tricks which help us developers do common tasks on a web server. Hopefully what I present here is a secure solution to an issue, but if it isn’t secure please correct my knowledge in the comments!

Problem: How to delete files owned by the web server user from the command line?

Files created by scripts on a web server are often created as the user running the web server, in most cases apache (or maybe www-data). To allow this, I often transfer ownership of the directory where the files need to be created to the apache user:

chown -R apache:apache cache

This works well but presents an issue if you need to delete these files manually. If you have root access, great, you can log in as root and delete the files; however, the website files should not be owned by root anyway – they should be owned by a different user, and you should probably be doing any work on these files as the user they are owned by, not root!

So let’s say you are running a deployment script and want to delete cache files as part of that deployment. The user you are running the deployment script as isn’t root, so it doesn’t have permission to delete the files owned by apache :-( You could run the deployment, switch to the root user and then delete the files, but this is long-winded and probably requires the root password. What I really want to be able to do is delete those files owned by apache as the user running the script.

So in summary what I want is:

Delete files only owned by apache when running as another user (not root).

Solution: sudoers

Note: This is a guide from a CentOS server, YMMV on other Linux distros. I am also assuming some level of bash commandline fu from anyone reading this guide.

On most systems there is a command, ‘sudo’, which lets you run commands at an elevated permission level. You can control what a given user is allowed to do by editing a config file called sudoers (/etc/sudoers). To do this you need to be root and use the program ‘visudo’ (/usr/sbin/visudo); it is based on ‘vi’, so you need to know how to use vi to edit it. There is a lot you can do in the sudoers file which I won’t go into, but in order to allow the user alastairb to delete files owned by apache, add the following two lines to the sudoers file.

Runas_Alias WWW = apache
alastairb ALL = (WWW) NOPASSWD: /bin/rm

Replace ‘apache’ with your web server user (e.g. www-data) and ‘alastairb’ with your shell user. Save the file (if you have any syntax errors they will be reported). Now you can delete files owned by apache by issuing the following command:

sudo -u apache rm [file]

The important bits are the Runas_Alias and the (WWW); this specifies that the command can only be run as the apache user. Without this you could elevate permissions to delete any file owned by any user, which is definitely not a good thing!
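
As a rough example, a deployment script running as alastairb could then clear the apache-owned cache with something like the following – a minimal sketch, where the /var/www/mysite/cache path is made up for illustration:

#!/bin/bash
# Clear cache files created by the web server, without becoming root.
# Relies on the sudoers rule above (NOPASSWD means no password prompt).
sudo -u apache /bin/rm -f /var/www/mysite/cache/*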

Adam Thomas Binns

Exactly one month ago today my life was changed forever. On 11/09/2010 my wife and I were blessed with the birth of a baby boy. After a relatively easy (well for me anyway) labour a screeching bundle of joy was delivered into my trembling arms, a moment I will never forget. He was a healthy 7lb 10oz, all the right bits in all the right places and I can say without a hint of bias the most beautiful baby I had ever seen. We’ve named him Adam Thomas Binns.

Since that day what have I learnt?

  • Babies are forever, they take a lot of time and devotion
  • There’s no manual
  • You can’t reason with a screaming baby
  • Frustration and bewilderment can instantly be replaced by joy with a single smile
  • Babies poo a lot
  • What works one day, fails the next
  • Nothing can prepare you for the change a baby brings – you can try, but I guarantee it will be more than you can ever imagine.
  • It has taken me one month to write this post!

I’d like to say a huge Thank You to every one of our friends and family for all the kind messages and presents we’ve received. Abi and I have been overwhelmed by everyone’s generosity.

One month down, and a lot more to come, wish us luck!

Apple’s social network for music – Ping

Last night at Apple’s Fall event they had the usual slew of product updates. An iPod is now apparently an iPhone without the ability to make calls, unless of course you count FaceTime as the ability to make a call? All very nice updates to their always sparkling, if a little expensive, product range. All these announcements were, I expect, met with the usual gasps, wows and applause (I don’t know, as I don’t own a Mac, iPhone, iPad or iPod touch so couldn’t watch the event live); personally I greeted them with a hint of jealous meh (jealous as I can’t justify spending so much on what are no doubt good products). But then Apple did announce something of interest to me – Ping.

Were they getting into the world of golf, an iDriver or iSwing perhaps? No, of course not: they were pushing into the social media market. They have effectively just created their version of last.fm, only with a potential 160 million strong membership. With Ping you create an account in iTunes where you can like songs and albums, follow artists and also follow other people. Charts act as an instant recommendation engine, further augmented by what your friends and the artists you follow are listening to. So, roughly speaking, everything that last.fm did. The key difference here is the size of the potential user base: last.fm claimed to have 40 million active users, while iTunes has 160 million users, which is quite a potential footprint. The shareholders at last.fm, Spotify and MySpace must be a bit worried.

The ability to share my music tastes, however, isn’t what excites me. Hell, my music tastes are so appalling I can’t even get embarrassed by them. So letting people know what I like and getting recommendations, nearest concerts etc. isn’t high on my list of needs. What does excite me about this is the natural extension of the social graph. Data like this is just so excellent, and if it’s made available to all then it’s a big win in my book. And this is where I start to worry: iTunes is such a closed system, and there’s no word on an API to access this data. Who will benefit from people using Ping? At the moment it’s Apple and the heavy users of iTunes. Hopefully over time this will change and APIs will open up this data, and if the data is based on the Open Graph Protocol then all the better.

So, in the interests of adding my footprint to the social graph, I signed up to Ping today. The experience was a bit disappointing: the sign-up process was straightforward, but after that point I was on my own. The people page for getting followers suggested connecting through Facebook, which would have been great – I could then have leveraged the power of my social graph to find friends who had signed up and bands I have already liked on Facebook. Unfortunately I couldn’t find any way of connecting to Facebook. Sure, I know these bands and some key people, but I had to search for these entities explicitly, not something I have the time or inclination to do. Hopefully in a couple of days this will be ironed out and the full power of the social graph can be unleashed.

Until then I’ve added a link to the top of my blog to follow me on Ping, please do so and you’ll see the full spectrum of my musical tastes.

Twitter’s Tweet Button

Hot on the heels of Facebook’s Like buttons, Twitter today announced a button of their own to place on your site. Tweet buttons are nothing new and Tweetmeme has had one available for a long time. It looks as though Twitter has been working closely with Tweetmeme on their implementation, although I am not currently aware of any news that Tweetmeme has been acquired by Twitter.

As simple to use as the Facebook Social Plug-ins, you simply visit the tweet button page, choose your options and you are presented with code to cut and paste into your site.

There are a number of options available; the most interesting is the ability to specify up to two accounts which the user can follow after using the button. This is particularly useful for sites run by multiple contributors, presenting options to follow the person who wrote an article as well as the main site Twitter feed. There is also a developer API where you can roll your own code for the button and tweak it to your specific needs.

As usual with these easy share options I’ve given it a go on my site. I’ve copied and pasted the code into the post page and placed it with my Facebook Like buttons. In doing this I’ve also removed my @anywhere tweet box which didn’t seem as useful.

Apps drive us to tiers

Some issues we face in development are hard. The internet has a myriad of component parts that we have the unenviable task of cobbling together into what the end user sees. Given that there are so many layers between a user firing up a browser and the end result – interface, front end, back end, database, server, network, etc. – troubleshooting can be hard. Sometimes bugs are introduced that are obvious and take moments to identify and, hopefully, fix; other times tracking down an issue can take hours or days.

I’d like to share one such issue my development team recently had the displeasure of tracking down. We recently completed a project to launch an iPhone application. In theory this should be easy to test: there’s a limited number of iPhone devices and a relatively small number of variables. If it works on the iPhone 4, 3GS, 3G etc. it’s likely to be good. This particular app, however, introduced another tier. Data for the application was to be updated on a periodic basis, so the good design decision was taken that the data would come from a web service. Now the app, which was previously self-contained, had a number of extra variables introduced. We could now potentially have problems in the network, server, database or back-end tiers. More testing to be done, but nothing insurmountable. A few development months later we had a polished app pulling data from a JSON web service, all running nicely. The app was submitted into the wild and the fun began.

Although the app had been tested numerous times on numerous iPhones on the various networks with no problems, we started getting reports of the app not pulling in data. The app was again installed on multiple devices and our testing couldn’t easily reproduce the error. Was it an issue with the app itself? Or was it an issue in one of the other tiers? Given that the app worked well most of the time, we decided to concentrate our efforts on determining the conditions under which it was failing. Let’s take a quick look at the variables we had in play:

  • iPhone model – 4, 3GS, 3, 2, 1
  • Connectivity – WIFI, 3G
  • Networks – O2, Orange, T-Mobile, Three, Vodafone

Lots of combinations here, but a finite set. Working through the combinations methodically, we discovered that the app was failing to load data only when it was being run over 3G, intermittently, on some networks. The key here was the intermittent nature over 3G. Something was sometimes wrong with the data over 3G on some networks – that’s a lot of ‘some’s (never a good thing). Here’s what we did know: the intermittent issue returned the following error over 3G:

kCFErrorDomainCFNetwork error 303

Googling this did not return much helpful information; it certainly didn’t relate to the HTTP 303 status code. We were further on but not really close to a solution. The error pointed to something to do with the network, and therefore the transport of the data. Was there something wrong with the data?

As I said before, the data was being sent as JSON; what could possibly be wrong with it over 3G? Reading up a little on 3G suggested that the packet size for data transport is smaller over 3G – could this be what was causing the issue? We embarked on a number of avenues relating to the data. Was it:

  • Data encoding – could some of the UTF-8 encoded characters be going wrong over 3G?
  • Data Length – was the length of the data (46K) too long?
  • Data Transport – was the fact the output was being transmitted gzipped a problem on 3G?
  • URL structure – was 3G getting confused by the URLs themselves?

All of the above were tried, with apparent success at first. Like I said, this was intermittent – sometimes it worked, other times it didn’t – and just as we thought we’d found the solution, continued testing revealed we hadn’t. Two fruitless days later we were getting frustrated, and of course our client wasn’t too happy either. Finally we had the idea to look at how the data was being returned. JSON should be returned with an “application/json” Content-Type header, and this was what was occurring; could it be that some networks, or even some network transmitters, were having issues with this Content-Type? Could changing it from “application/json” to the more common “text/html” make a difference? Indeed it could. We had success, and after much testing we finally had a fix!
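
If you run into something similar, a quick way to see exactly which Content-Type your web service is sending back (and to compare it across different connections) is to inspect the response headers with curl – a minimal sketch, with a made-up URL standing in for the real endpoint:

# Dump only the response headers and pick out the Content-Type
curl -s -D - -o /dev/null http://example.com/api/feed.json | grep -i '^Content-Type'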

I’m posting this so that hopefully others who come across this error don’t have to go through what we did, and also to highlight just how difficult a job developers have sometimes. A special shout out to @MozMorris who went through two days of pain to find this solution!