Blog

Welcome to my blog. Recent posts:

Using ng-include in a Jade view

Posted by on 2:46 pm in JavaScript | 0 comments

Referencing Angular partials from a Jade view

I recently ran into a weird problem whereby I couldn’t get a Jade template to reference an Angular partial using the ng-include directive. I assumed it was as simple as passing in the relative path to the partial view, as per:

    div(ng-include="/partials/featured-courses/")

or:

    div(ng-include='partials/featured-courses')

but this was just completely ignored by Angular – I could see from Firebug that no HTTP request was even being made for this file, so I knew there was some syntax issue here. So I googled it with Bing, as you do, and found a StackOverflow question entitled ‘JADE templating engine does not render ng-include’. As the answers specify, there’s an odd syntax to getting this working. I followed the guidelines and it worked. The syntax is as follows:

    div(ng-include='\'partials/featured-courses\'')

I have no idea why this particular syntax is required – if anyone knows, please feel free to drop me an email and I’ll update the...
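For what it’s worth, a likely explanation (my note, not from the original post) is that ng-include evaluates its attribute as an Angular expression rather than a plain URL: the escaped quotes make the path a string literal, whereas a bare path is read as a scope variable that evaluates to undefined, so Angular requests nothing. A sketch of the alternative approach, with a hypothetical module and controller name, is to bind the path to the scope and reference that instead:

    // Hypothetical controller: expose the partial path on the scope so the
    // Jade attribute can reference it as a plain Angular expression.
    angular.module('app', []).controller('CoursesCtrl', function ($scope) {
      $scope.featuredTpl = 'partials/featured-courses';
    });

With that in place the Jade view becomes div(ng-include='featuredTpl'), with no escaped quotes required.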

read more

Pushing from Git to Heroku

Posted by on 11:09 am in Cloud, Other | 0 comments

Pushing from Git to Heroku

I recently attempted to publish a small node.js application, under source control with Git on my local machine, to Heroku (http://www.heroku.com). While this should have been as simple as entering:

    git push heroku master

I was met with the error:

    Permission denied (publickey).
    fatal: Could not read from remote repository.

Public key issues are never good – even less so when the error message is prefixed with ‘fatal’ 😉 The error message seemed quite clear and explicit – it seems that my local public key didn’t match the one held by Heroku and thus I could not authenticate. To resolve this I deleted the existing public key in my .ssh folder (located at c:\users\lee\.ssh\) and generated a new one. From a command prompt:

    cd c:\program files (x86)\git\bin
    ssh-keygen.exe

When prompted for the path I put in the path to the .ssh folder as follows:

    Enter the file in which to save the key: c:\users\lee\.ssh\id_rsa

I left the passphrase blank as this is a private machine and I’m too lazy to re-enter the pass-phrase every session. This generated a new key for me.

Adding to the local key store

From there I needed to add the key into the local Git keystore. This is needed to ensure that both Git and, eventually, Heroku are using the exact same key. To accomplish this, open up a bash shell (after you’ve installed Git for Windows you can right click any folder and select ‘Git Bash’) and enter:

    $ eval `ssh-agent -s`
    $ ssh-add

Note the back-ticks (above the tab key!), not apostrophes, and also notice the eval statement – ssh-agent alone will not work. Assuming you didn’t rename your key when generating it (id_rsa), ssh-add will look for the default key name in the default key path. If you did change the file name you’ll need to pass this in as a parameter, as shown in the sketch after this excerpt. If it worked, you should be prompted with:

    Identity added: /c/Users/lee/.ssh/id_rsa

Sync the keys with Heroku

Now you have a brand new key added into your local Git, you can start migrating this to Heroku. If you have no other keys in Heroku, run the following commands from a command prompt:

    heroku restart
    heroku keys:clear
    heroku keys:add

Assuming all went well and the output of the last step was:

    Uploading SSH public key ...done

then you should be good to go. Executing:

    heroku keys

should show your newly created and uploaded key.

Ready, set, go!

Finally,

    git push heroku master

should now succeed and you’re good to...
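A quick aside (my addition, not from the original post): if you did rename the key, ssh-add accepts the path explicitly. The file name here is hypothetical:

    $ ssh-add ~/.ssh/my_custom_key    # hypothetical key file name

Otherwise, ssh-add with no arguments picks up ~/.ssh/id_rsa as described above.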

read more

Adding public holidays to Outlook 2013

Posted by on 3:56 am in Other | 0 comments

Maybe it was just me that didn’t know about this nice little hidden option, but in Outlook 2013 you can automatically add public holidays to your calendar. Normally I’d have gone online and found a public holidays calendar for this calendar year, downloaded it, imported it and fiddled about with it (inevitably importing it into a new calendar by mistake or something). However, this is built into Outlook 2013. To activate it, simply:

1. Click File -> Options -> Calendar
2. Click ‘Calendar Options’
3. Click the ‘Add Holidays’ button
4. From the list of available countries, select any countries you are interested in. For me this was just United Kingdom
5. Click OK

Outlook will then seamlessly import all of these directly into your primary calendar and you’re done. Nice little trick – if only I’d have...

read more

A pain in the node!

Posted by on 11:47 pm in JavaScript | 0 comments

I was recently playing with a node.js and Express application and ran into a small issue that wouldn’t let me render any templated content to the client. The issue presented itself in a number of ways. The first error I was getting was:

    app.configure(function() {
        ^
    TypeError: Object function (req, res, next) { app.handle(req, res, next); } has no method 'configure'

If you look online for Express documentation, most of it will be using the app.configure syntax. However, as of version 4.0.0 of Express, the app.configure method has been removed – see the release notes for more detail. Looking in my package.json file I had the dependency for express set to latest:

    "dependencies": {
        "express": "latest",
        "jade": "~1.3.1"
    }

This is inherently dangerous for this very reason. As node and Express are evolving so quickly, the likelihood of breaking changes between versions is great. The quick fix for me was to enforce an older (compatible) version that I knew worked with the rest of the app:

    "dependencies": {
        "express": "3.4.0",
        "jade": "~1.3.1"
    }

Then, once I had rolled back the version of Express (this is only a test app – the forward-thinking thing to do would be to fix the now obsolete code, as sketched below!), I started getting the following error:

    Error: No default engine was specified and no extension was provided

This is strange, as I had clearly specified a view engine:

    app.configure(function () {
        app.set('view engine', 'jade');
        app.set('views', __dirname + 'srv/views/');
    });

After searching online and reading lots of solutions (most of which related back to daft errors, as this one transpired to be), I started to doubt the validity of the error message. Turns out that it was in fact the path to the views folder that was failing and it was nothing to do with the view engine (jade) at all. A quick fix to the path and everything started working:

    app.configure(function () {
        app.set('view engine', 'jade');
        app.set('views', __dirname + '/srv/views');
    });

The simplest mistakes often cause the biggest headaches. Despite its rapid change/release cycle and constantly battling with new versions and breaking changes, I do love the ‘bare metal’ feel of programming in...
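For completeness – this is my addition rather than the original post’s code – the forward-looking fix on Express 4 is to drop app.configure() entirely and make the calls at the top level; using path.join also sidesteps the missing-slash problem that caused the second error:

    // Express 4.x sketch: app.configure() was removed, so configuration
    // calls now sit directly on the app. path.join builds the views path
    // safely, avoiding the missing leading slash bug above.
    var express = require('express');
    var path = require('path');
    var app = express();

    app.set('view engine', 'jade');
    app.set('views', path.join(__dirname, 'srv', 'views'));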

read more

NPM giving SSL error: SELF_SIGNED_CERT_IN_CHAIN

Posted by on 12:33 pm in JavaScript | 0 comments

If you dabble a bit with node.js in your spare time you may have noticed that as of Feb 27th 2014, NPM no longer works. If you look at the output from npm, you will probably see the following error:

    npm ERR! fetch failed https://registry.npmjs.org/connect/-/connect-2.13.0.tgz
    npm ERR! Error: SSL Error: SELF_SIGNED_CERT_IN_CHAIN
    npm ERR!     at ClientRequest.<anonymous> (C:\Program Files (x86)\nodejs\node_modules\npm\node_modules\request\main.js:525:26)
    npm ERR!     at ClientRequest.g (events.js:192:14)
    npm ERR!     at ClientRequest.EventEmitter.emit (events.js:96:17)
    npm ERR!     at HTTPParser.parserOnIncomingClient [as onIncoming] (http.js:1462:7)
    npm ERR!     at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:111:23)
    npm ERR!     at CleartextStream.socketOnData [as ondata] (http.js:1367:20)
    npm ERR!     at CleartextStream.CryptoStream._push (tls.js:526:27)
    npm ERR!     at SecurePair.cycle (tls.js:880:20)
    npm ERR!     at EncryptedStream.CryptoStream.write (tls.js:267:13)
    npm ERR!     at Socket.ondata (stream.js:38:26)
    npm ERR! If you need help, you may report this log at:
    npm ERR!     <http://github.com/isaacs/npm/issues>
    npm ERR! or email it to:
    npm ERR!     npm-@googlegroups.com

The key is in the second line here – an SSL exception is being thrown due to the use of self-signed certificates:

    npm ERR! Error: SSL Error: SELF_SIGNED_CERT_IN_CHAIN

After a bit of digging about I found a post on the official npm blog confirming that as of 27/Feb/14 self-signed certificates are no longer supported and that npm is effectively broken. Quite humorously, this same blog post recommends installing an updated version of npm using npm itself, which would be fine if the entire problem wasn’t an inability to use npm due to the cert errors. Whoops! Some people are suggesting disabling SSL on npm by changing the config, but this is risky for a number of reasons, not least because it opens you up to man-in-the-middle attacks and no longer validates that you are indeed talking to the authentic npm repository. Personally I combined the two approaches and ran the following:

    npm config set strict-ssl false
    npm install npm -g
    npm config set strict-ssl true

Here we are disabling SSL just long enough to grab the latest version of npm (which doesn’t suffer from the self-signed cert problem), then immediately re-enabling it. This to me is a quick solution to this problem.
The other option is to uninstall node.js, download the latest version and install that. The above option should be the quickest and most hassle-free, I’d...
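As a quick sanity check – my addition, not from the original post – you can confirm the upgrade took hold and that strict SSL is back on:

    npm --version
    npm config get strict-ssl

Both are standard npm commands; the second should print true once you have re-enabled it.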

read more

Enabling PowerShell remoting to an Azure VM

Posted by on 11:42 pm in Azure, Other | 0 comments

I recently had the need to run some PowerShell scripts against a new VM created in Windows Azure using PowerShell Remoting. I thought this would be a simple enough job (and in truth, it is), but you need to know a couple of things. By default, PowerShell uses Active Directory to identify and authenticate users, but of course standalone Azure VMs aren’t part of a domain. Therefore you’ll need to add the public IP address of the VM to the trusted hosts on your client.

From the Azure portal, open port 5985 for PowerShell (the portal should open 5986 by default). To do this, go to Virtual Machines > YOUR VM > Endpoints > ADD and complete the resultant dialog.

From the client machine, start PowerShell and type:

    Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value '11.22.33.44'

obviously substituting the IP address of the Azure VM you obtain from the Azure portal. If you already have trusted hosts, use -Concatenate to avoid overwriting the others, as sketched below. To be sure,

    Get-Item -Path WSMan:\localhost\Client\TrustedHosts

should show you the entry you just created. Now, to connect to the Azure VM and start the PowerShell session:

    Enter-PSSession -ComputerName 11.22.33.44 -Credential 11.22.33.44\USERNAME

Substitute in the username you created in the Azure portal when creating the VM (or any user you’ve since set up on the box with the relevant permissions) and you should be presented with a login box to confirm the password. Once that is done, your PowerShell session should be active. Happy...
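For reference – my example rather than the original post’s, and the second IP is a placeholder – appending to an existing TrustedHosts list rather than overwriting it looks like this:

    # -Concatenate (a WSMan provider parameter) appends rather than replaces
    Set-Item -Path WSMan:\localhost\Client\TrustedHosts -Value '55.66.77.88' -Concatenate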

read more

Unable to create a new storage account from PowerShell

Posted by on 11:00 pm in Azure | 0 comments

I recently ran into a problem creating a storage account using PowerShell. The rather obscure error I was getting was:

    New-AzureStorageAccount : Specified argument was out of the range of valid values.

After lots of digging about and coming up blank at the likes of Bing, Google and Stack Overflow, I worked out the root cause – the name I was using for the storage account exceeded the 24 character length limit imposed by Azure! A simple error that I should have spotted, and attempting to create that environment in the portal surfaced the underlying cause immediately, but error messages like this really do not help. Nothing like a helpful error message is...
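For the record (my addition): storage account names must be 3–24 characters, using lowercase letters and numbers only. A compliant call, with a placeholder name and location, would look something like:

    New-AzureStorageAccount -StorageAccountName "myshortname123" -Location "North Europe"

New-AzureStorageAccount is the old Azure Service Management cmdlet the post refers to; the point is simply that the name stays within the 24-character cap.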

read more

Auto starting and stopping an EC2 instance at a given time

Posted by on 11:33 pm in AWS, Cloud | 0 comments

This blog post covers the process of automatically starting and stopping an EC2 instance at a given point in time. In my case I needed to spin up an instance, do some work and then shut it down afterwards. This is perfectly suited to AWS and cloud computing, and typifies the ethos of elastic scaling and capacity-on-demand. In order to start/stop an instance you will need to make use of the AWS Auto Scaling capabilities, which are both straightforward and very, very powerful. The first thing that you’ll need to do is set up the Auto Scaling tools – Amazon have an excellent step-by-step guide. If done correctly you should be able to open a new command prompt (or PowerShell terminal) and type:

    as-cmd

You should get a listing of all Auto Scaling commands:

    Command Name                                  Description
    ------------                                  -----------
    as-attach-instances                           Attaches Instances to Auto Scaling group
    as-create-auto-scaling-group                  Create a new Auto Scaling group.
    as-create-launch-config                       Creates a new launch configuration.
    as-create-or-update-tags                      Create or update tags.
    as-delete-auto-scaling-group                  Deletes the specified Auto Scaling group.
    as-delete-launch-config                       Deletes the specified launch configuration.
    as-delete-notification-configuration          Deletes the specified notification configuration.
    as-delete-policy                              Deletes the specified policy.
    as-delete-scheduled-action                    Deletes the specified scheduled action.
    as-delete-tags                                Delete the specified tags
    as-describe-account-limits                    Describes limits for the account.
    as-describe-adjustment-types                  Describes all policy adjustment types.
    as-describe-auto-scaling-groups               Describes the specified Auto Scaling groups.
    as-describe-auto-scaling-instances            Describes the specified Auto Scaling instances.
    as-describe-auto-scaling-notification-types   Describes all Auto Scaling notification types.
    as-describe-launch-configs                    Describes the specified launch configurations.
    as-describe-metric-collection-types           Describes all metric colle... metric granularity types.
    as-describe-notification-configurations       Describes all notification...given Auto Scaling groups.
    as-describe-policies                          Describes the specified policies.
    as-describe-process-types                     Describes all Auto Scaling process types.
    as-describe-scaling-activities                Describes a set of activit...ties belonging to a group.
    as-describe-scheduled-actions                 Describes the specified scheduled actions.
    as-describe-tags                              Describes tags
    as-describe-termination-policy-types          Describes all Auto Scaling termination policy types.
    as-disable-metrics-collection                 Disables collection of Auto Scaling group metrics.
    as-enable-metrics-collection                  Enables collection of Auto Scaling group metrics.
    as-execute-policy                             Executes the specified policy.
    as-put-notification-configuration             Creates or replaces notifi...or the Auto Scaling group.
    as-put-scaling-policy                         Creates or updates an Auto Scaling policy.
    as-put-scheduled-update-group-action          Creates or updates a scheduled update group action.
    as-resume-processes                           Resumes all suspended Auto... given Auto Scaling group.
    as-set-desired-capacity                       Sets the desired capacity of the Auto Scaling group.
    as-set-instance-health                        Sets the health of the instance.
    as-suspend-processes                          Suspends all Auto Scaling ... given Auto Scaling group.
    as-terminate-instance-in-auto-scaling-group   Terminates a given instance.
    as-update-auto-scaling-group                  Updates the specified Auto Scaling group.
    help
    version                                       Prints the version of the CLI tool and the API.

    For help on a specific command, type ' --help'

Getting started

Now you can start implementing auto scaling.
Auto scaling on a schedule requires a number of components, which form the what, where and when to scale:

1. A launch configuration (the ‘what’)
2. A scaling group (the ‘where’)
3. A schedule policy (the ‘when’)

1. [WHAT] Create the launch configuration

From your command prompt enter the following command:

    as-create-launch-config "screenshotter-launch-config" --image-id "ami-12345678" --instance-type "m1.medium"

Obviously substitute the name of your AMI in here – you can either pick one of the AMIs from the EC2 marketplace, or if you have an existing EC2 instance just right click on it and select ‘Create Image’. Remember the name you enter here (the first parameter) as you’ll need it in the next step. If this worked, your console should output:

    OK-Created Launch Config...
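The excerpt cuts off after step 1, but to round the walkthrough out, a sketch of the remaining ‘where’ and ‘when’ steps follows. This is my addition rather than the original post’s code: the group name, availability zone, sizes and schedule are hypothetical, and the flags are from memory of the old Auto Scaling CLI, so verify each with its --help before relying on it:

    as-create-auto-scaling-group "screenshotter-group" --launch-configuration "screenshotter-launch-config" --availability-zones "eu-west-1a" --min-size 0 --max-size 1
    as-put-scheduled-update-group-action "start-screenshotter" --auto-scaling-group "screenshotter-group" --desired-capacity 1 --recurrence "0 9 * * *"
    as-put-scheduled-update-group-action "stop-screenshotter" --auto-scaling-group "screenshotter-group" --desired-capacity 0 --recurrence "0 18 * * *"

Scheduling the desired capacity to 1 starts an instance at the given time (9am here, in cron syntax); scheduling it back to 0 terminates it again at 6pm.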

read more

IAM: Best practices

Posted by on 5:00 pm in AWS, Cloud | 0 comments

The more I play with (and love) AWS as a platform, the more apparent the significance and power of IAM become. This post outlines IAM and how it should be used effectively within Amazon’s cloud environment.

What is IAM?

IAM, or Identity and Access Management, is the primary means of securing users, groups and permissions. IAM is complementary to services such as security groups and Access Control Lists (which govern instance and subnet security respectively).

Best practices

Amazon strongly recommends (and I completely agree) that the root/master account should not be used for anything other than administering the Amazon account and creating administrative accounts. Thereafter you should be logged in as a named user (if the master account is a company account) with the minimum required privileges to do your job. With great power comes great responsibility, and it’s quite easy with AWS to inadvertently terminate the wrong instance (thus bringing a production server offline) or incorrectly route traffic to the wrong subnet, for example. Therefore you should take the time to scope out what level of access each user requires and what functions they need to be able to perform to do their job, and then match this up with the IAM policy generation tools. Amazon have done a fantastic job of giving granular access to the services and their contained functions – to the point where you can permit a user to reboot a server but not terminate it, or retrieve content from S3 but not upload (or vice-versa); an illustrative policy is sketched below. Thus, as part of your getting started with IAM you should navigate to the IAM section of AWS and set up user(s) pertaining to the role they need to perform.

Another best practice is that if a user does not need API access, do not generate the key(s) necessary to enable it. Sure, they’re inherently obscure, but why risk it? Oddly enough, Amazon defaults the ‘Generate an access key for each user’ checkbox to ticked, so unless you explicitly disable this they will be created. Conversely, if a user ONLY needs API access and doesn’t need to access the console, then do not generate an IAM password that would allow them to log in to the AWS console. By default new accounts do not have a login password, and thus you control who has access to the AWS console. As with all security in IT: the fewer privileges granted, the better!

IAM roles are a relatively new addition to the IAM offering. Roles allow you to assign effective permissions to a particular role as you would a user or group, but assign this to an EC2 instance. Why would you do this? Well, prior to roles, developers would have to embed API keys and secrets in their code or user-data (or use some other mechanism for getting credentials onto an instance) in order to permit it to access another AWS service such as S3. Now, when creating an instance (or templating via CloudFormation), you can set the permitted role and permissions will automatically be effective without the need to set credentials.

A third recommendation is to enable MFA. MFA, or Multi-Factor Authentication (also known as two-factor auth on services like Outlook.com or Gmail, for example), ties a user’s account to a secondary authentication device. Traditionally this is a separate token...
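To make the granularity point concrete – this is my illustrative example, not taken from the original post – a minimal IAM policy allowing a user to view and reboot EC2 instances, but not terminate them, might look like:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["ec2:DescribeInstances", "ec2:RebootInstances"],
          "Resource": "*"
        }
      ]
    }

Because IAM denies everything that is not explicitly allowed, simply omitting ec2:TerminateInstances is enough to prevent this user terminating anything.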

read more

Azure: Sites, roles and services

Posted by on 11:02 pm in Azure, Cloud | 0 comments

As Windows Azure continues to grow from a new PaaS offering into a fully featured IaaS platform, the range of services on offer continues to grow. With this in mind, it is becoming increasingly difficult to differentiate between the different ways of hosting your code in Azure. This post outlines the four different options at the time of writing, two of which comprise the Azure cloud service offering while the other two remain standalone services. In order to understand these, two key concepts need to be defined:

PaaS (Platform as a Service) – provides a layer of abstraction in a cloud environment whereby the consumer need only worry about the code, configuration and deployment. The provider (in this case Azure) manages and maintains the network, servers, security and storage.

IaaS (Infrastructure as a Service) – the rawest and most basic form of cloud computing, IaaS provides access to physical or, more commonly, virtual hardware in the form of virtual machines. Once provisioned, the VM and its underlying operating system remain the responsibility of the customer. All facets of its operation, including storage, security, maintenance and monitoring, are handled by the customer. This provides excellent control, but with it a much higher maintenance burden.

The main Azure code hosting choices are:

Web Role (Cloud Service)

A web role allows you to host your code inside an Azure cloud service, meaning your site can scale to almost any size very, very easily. When web roles are employed alongside worker roles and other services such as Azure Service Bus, they offer the most complete means of building modern cloud architectures on the Microsoft stack.

Worker Role (Cloud Service)

Worker roles are headless servers that perform continuous processing without ever surfacing a front end. Typical scenarios for worker roles include processing data from service calls, processing messages off a queue or performing other blocking executions that you wouldn’t want tying up your front end. In both web and worker roles the Azure platform will still manage the underlying operating system on your behalf, but the code needs to be aware that this is happening and provision for scenarios where such maintenance may take an instance offline. Unlike Azure Websites (see below), this isn’t taken care of automatically.

Web Site(s)

Azure Websites is the newest PaaS offering, providing a ‘fully managed’ VM environment. The Azure platform manages and maintains the underlying operating system, installing updates and performing routine maintenance, meaning you need only be concerned with your code and not the platform. The platform also manages the migration of your site from one host to another, so that if the underlying host fails or is recycled for maintenance, no downtime is witnessed and no data is lost. Web Sites offer a range of hosted languages including the full .NET stack, PHP and Node.

Virtual Machines

The VM offering gives the ultimate control – you spin up a VM and it’s yours thereafter. Much like creating an EC2 instance in AWS, you are free to do with this machine anything you like. You are therefore responsible for OS updates, monitoring uptime and availability, scaling and all of the other goodness that PaaS would otherwise handle for you. With the freedom offered by a VM come options – now you can run any language on the...

read more