
basic gatsby version

theming
Will Webberley committed 3 years ago (commit 0ee6109e8d)
109 changed files with 12795 additions and 1943 deletions
  1. .gitignore (15)
  2. .prettierrc (5)
  3. Gemfile (4)
  4. Gemfile.lock (49)
  5. LICENSE (22)
  6. README.md (1)
  7. _config.yml (12)
  8. _includes/footer.html (15)
  9. _includes/header.html (43)
  10. _layouts/default.html (7)
  11. _layouts/post.html (10)
  12. _posts/2012-09-20-digisocial-hackathon.md (35)
  13. _posts/2012-10-10-seminar-retweeting.md (21)
  14. _posts/2012-10-31-socialshower.md (22)
  15. _posts/2012-11-13-delving-into-android.md (49)
  16. _posts/2013-01-21-research-poster-day.md (14)
  17. _posts/2013-02-18-scriptslide.md (28)
  18. _posts/2013-02-21-playing-with-flask-and-mongodb.md (35)
  19. _posts/2013-03-06-deploying-to-heroku.md (68)
  20. _posts/2013-03-07-gower-tides-app-released.md (18)
  21. _posts/2013-03-30-a-bit-of-light-construction-on-an-easter-weekend.md (13)
  22. _posts/2013-04-05-normal-service-resumed-ajax-+-python-+-amazon-s3.md (33)
  23. _posts/2013-04-08-a-simple-outbound-mail-server.md (34)
  24. _posts/2013-04-11-cardiff-open-sauce-hackathon.md (13)
  25. _posts/2013-04-16-trials-of-eduroam.md (24)
  26. _posts/2013-04-23-flyingsparx.net-on-digital-ocean.md (13)
  27. _posts/2013-04-25-eartub.es.md (13)
  28. _posts/2013-05-03-is-twitters-new-api-really-such-a-nightmare?.md (22)
  29. _posts/2013-05-07-contribution-to-heroku-dev-center.md (15)
  30. _posts/2013-05-26-gower-tides-open-sourced.md (15)
  31. _posts/2013-06-12-wekapy.md (21)
  32. _posts/2013-06-20-accidental-kernel-upgrades-on-digital-ocean.md (12)
  33. _posts/2013-07-03-magic-seaweeds-awesome-new-api.md (12)
  34. _posts/2013-07-31-gower-tides-v1.4.md (20)
  35. _posts/2013-08-31-a-rather-french-week.md (20)
  36. _posts/2013-09-02-zoned-network-sound-streaming-the-problem.md (18)
  37. _posts/2013-09-14-casastream.md (29)
  38. _posts/2013-10-05-workshop-presentation-in-germany.md (12)
  39. _posts/2014-01-07-llavac.md (30)
  40. _posts/2014-01-17-direct-to-s3-uploads-in-node.js.md (15)
  41. _posts/2014-01-28-seminar-at-kings-college-london.md (15)
  42. _posts/2014-03-17-node.js-contribution-to-herokus-dev-center.md (11)
  43. _posts/2014-03-26-talk-on-open-source-contribution.md (11)
  44. _posts/2015-01-20-end-of-an-era.md (24)
  45. _posts/2015-01-27-nhs-hack-day.md (26)
  46. _posts/2015-02-05-developing-useful-apis-for-the-web.md (13)
  47. _posts/2015-02-18-web-and-social-computing.md (19)
  48. _posts/2015-04-28-media-and-volume-keys-in-i3.md (30)
  49. _posts/2015-05-01-using-weka-in-go.md (28)
  50. _posts/2015-05-12-nintendos-hotspot-api.md (31)
  51. _posts/2015-05-27-android-consuming-nintendo-hotspot-data.md (17)
  52. _posts/2017-03-16-two-year-update.md (26)
  53. _posts/2017-06-22-cenode.md (39)
  54. _posts/2017-06-26-cenode-iot.md (68)
  55. _posts/2017-07-19-cenode-alexa.md (38)
  56. _posts/2017-08-18-security-lights.md (14)
  57. _publications/2011-09-retweeting-message-forwarding-twitter.md (9)
  58. _publications/2013-09-inferring-interesting-tweets-network.md (9)
  59. _publications/2015-05-conversational-sensemaking.md (9)
  60. _publications/2015-05-information-from-human-sensors.md (9)
  61. _publications/2016-01-retweeting-beyond-expectation.md (9)
  62. _sass/_syntax-highlighting.scss (70)
  63. _sass/main.scss (109)
  64. blog/index.html (36)
  65. community/index.html (92)
  66. css/main.scss (52)
  67. cv.pdf (BIN)
  68. feed.xml (30)
  69. gatsby-config.js (6)
  70. index.html (59)
  71. media/blog/android-hotspot.png (BIN)
  72. media/blog/casastream1.png (BIN)
  73. media/blog/casastream2.png (BIN)
  74. media/blog/cenode-alexa.png (BIN)
  75. media/blog/decking.png (BIN)
  76. media/blog/digisocial_logo.png (BIN)
  77. media/blog/french-gorge.JPG (BIN)
  78. media/blog/french-house.JPG (BIN)
  79. media/blog/french-pyrenes.JPG (BIN)
  80. media/blog/interrailing.png (BIN)
  81. media/blog/mongo-storage.png (BIN)
  82. media/blog/nhshackday.png (BIN)
  83. media/blog/nhshackday2.jpg (BIN)
  84. media/blog/nintendozone1.png (BIN)
  85. media/blog/nintendozone2.png (BIN)
  86. media/blog/sdi_bett.jpg (BIN)
  87. media/blog/security-lights.png (BIN)
  88. media/blog/socialshower_image.png (BIN)
  89. media/blog/tides-main.png (BIN)
  90. media/blog/tides-settings.png (BIN)
  91. media/blog/twitter-javascript.png (BIN)
  92. media/blog/twitter-ratelimit.png (BIN)
  93. media/output/casastream.png (BIN)
  94. media/output/gower_tides.png (BIN)
  95. media/output/gower_tides_ios.png (BIN)
  96. media/output/health_explorer_wales.png (BIN)
  97. media/output/heroku.png (BIN)
  98. media/output/nzone_finder.png (BIN)
  99. media/output/patients_please.png (BIN)
  100. media/will.jpg (BIN)

15
.gitignore

@@ -1,7 +1,8 @@
*.log
*.sw*
.DS_Store
node_modules/
.sass-cache/
_site/
build.tar.gz
# Project dependencies
# https://www.npmjs.org/doc/misc/npm-faq.html#should-i-check-my-node_modules-folder-into-git
.cache
node_modules
yarn-error.log
# Build directory
/public

5
.prettierrc

@@ -0,0 +1,5 @@
{
"semi": false,
"singleQuote": true,
"trailingComma": "es5"
}

4
Gemfile

@@ -1,4 +0,0 @@
source 'https://rubygems.org'
gem 'jekyll'
gem 'jekyll-paginate'

49
Gemfile.lock

@@ -1,49 +0,0 @@
GEM
remote: https://rubygems.org/
specs:
addressable (2.5.0)
public_suffix (~> 2.0, >= 2.0.2)
colorator (1.1.0)
ffi (1.9.17)
forwardable-extended (2.6.0)
jekyll (3.4.0)
addressable (~> 2.4)
colorator (~> 1.0)
jekyll-sass-converter (~> 1.0)
jekyll-watch (~> 1.1)
kramdown (~> 1.3)
liquid (~> 3.0)
mercenary (~> 0.3.3)
pathutil (~> 0.9)
rouge (~> 1.7)
safe_yaml (~> 1.0)
jekyll-paginate (1.1.0)
jekyll-sass-converter (1.5.0)
sass (~> 3.4)
jekyll-watch (1.5.0)
listen (~> 3.0, < 3.1)
kramdown (1.13.2)
liquid (3.0.6)
listen (3.0.8)
rb-fsevent (~> 0.9, >= 0.9.4)
rb-inotify (~> 0.9, >= 0.9.7)
mercenary (0.3.6)
pathutil (0.14.0)
forwardable-extended (~> 2.6)
public_suffix (2.0.5)
rb-fsevent (0.9.8)
rb-inotify (0.9.8)
ffi (>= 0.5.0)
rouge (1.11.1)
safe_yaml (1.0.4)
sass (3.4.23)
PLATFORMS
ruby
DEPENDENCIES
jekyll
jekyll-paginate
BUNDLED WITH
1.11.2

22
LICENSE

@@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 gatsbyjs
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

1
README.md

@@ -0,0 +1 @@
# Personal website

12
_config.yml

@@ -1,12 +0,0 @@
title: Will Webberley
baseurl: "" # the subpath of your site, e.g. /blog
paginate: 5
paginate_path: "/blog/page/:num/"
permalink: pretty
exclude: [package.json, Gemfile, Gemfile.lock, _config.yml]
collections:
- publications
markdown: kramdown
gems: [jekyll-paginate]

15
_includes/footer.html

@@ -1,15 +0,0 @@
<script src="https://code.jquery.com/jquery-3.2.0.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.98.0/js/materialize.min.js"></script>
<script>
(function (){
$('.button-collapse').sideNav();
})();
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-40575035-1', 'auto');
ga('send', 'pageview');
</script>
</body>
</html>

43
_includes/header.html

@@ -1,43 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>{% if page.title %}{{ page.title | escape }}{% else %}{{ site.title | escape }}{% endif %}</title>
<link rel="stylesheet" href="{{ "/css/main.css" | prepend: site.baseurl }}">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/materialize/0.98.0/css/materialize.min.css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/font-awesome/4.7.0/css/font-awesome.min.css">
<link rel="alternate" type="application/rss+xml" title="{{ site.title }}" href="{{ "/feed.xml" | prepend: site.baseurl | prepend: site.url }}">
</head>
<body>
<nav class="blue-grey darken-2">
<div class="nav-wrapper">
<a href="/" class="brand-logo">
<span>Will Webberley</span>
</a>
<ul id="nav-mobile" class="right hide-on-med-and-down">
<li><a href="/blog">Blog</a></li>
<li><a href="/research">Research</a></li>
<li><a href="/project">Projects</a></li>
<li><a href="/community">Community</a></li>
</ul>
<a href="#" data-activates="mobile-nav" class="button-collapse"><i class="fa fa-bars"></i></a>
<ul class="side-nav" id="mobile-nav">
<li><a class="waves-effect" href="/blog">Blog</a></li>
<li><a class="waves-effect" href="/research">Research</a></li>
<li><a class="waves-effect" href="/project">Project</a></li>
<li><a class="waves-effect" href="/community">Community</a></li>
<li><div class="divider"></div></li>
<li><a class="waves-effect" href="mailto:will@flyingsparx.net"><i class="fa fa-paper-plane"></i> will@flyingsparx.net</a></li>
<li><a class="waves-effect" href="https://twitter.com/flyingSparx"><i class="fa fa-twitter"></i> @flyingSparx</a></li>
<li><a class="waves-effect" href="https://github.com/flyingsparx"><i class="fa fa-github"></i> flyingsparx</a></li>
</ul>
</div>
</nav>

7
_layouts/default.html

@@ -1,7 +0,0 @@
{% include header.html %}
<div class="container">
{{ content }}
</div>
{% include footer.html %}

10
_layouts/post.html

@@ -1,10 +0,0 @@
---
layout: default
---
<a href="/blog" class="blue-grey btn"><i class="fa fa-arrow-left"></i> Back to blog</a>
<article>
<h3 class="post-title" itemprop="name headline">{{ page.title }}</h3>
{{ content }}
</article>

35
_posts/2012-09-20-digisocial-hackathon.md

@@ -1,35 +0,0 @@
---
year: 2012
month: 9
day: 20
title: DigiSocial Hackathon
layout: post
---
<p>
We recently held our DigiSocial Hackathon. This was a collaboration between the Schools of
Computer Science and Social Sciences and was organised by myself and a few others.
</p>
<p>The website for the event is hosted <a href="http://users.cs.cf.ac.uk/W.M.Webberley/digisocial/" target="_blank">here</a>.</p>
<img src="/media/blog/digisocial_logo.png" alt="DigiSocial logo" class="blog-image"/>
<p>The idea of the event was to try and encourage further ties between the different Schools of the University. The
University Graduate College (UGC) provides the funding for these events, which must be applied for, in the hope
that good projects or results come out of them.</p>
<p>We had relatively good responses from the Schools of Maths, Social Sciences, Medicine, and ourselves, and had a turnout of around 10-15
for the event on the 15th and 16th September. Initially, we started to develop ideas for potential projects. Because of the
nature of the event, we wanted to make sure they were as cross-disciplined as possible. A hackday, in itself, is pretty
computer science-y so we needed to apply a social or medical spin on our ideas.</p>
<p>Eventually, we settled into two groups: one working on a social-themed project based on crimes in an area (both in terms of
distribution and intensity) in relation to the food hygiene levels in nearby establishments; another focusing on hospital wait times
and free beds in South Wales. Effectively, then, both projects are visualisations of publicly-available datasets.</p>
<p>I worked on the social project with Matt Williams, Wil Chivers and Martin Chorley, and it is viewable <a href="http://ukcrimemashup.nomovingparts.net/" target="_blank">here</a>. </p>
<p>Overall the event was a reasonable success; two projects were
completed and we have now made links with the other Schools which will hopefully allow us to do similar events together in the
future.</p>

21
_posts/2012-10-10-seminar-retweeting.md

@@ -1,21 +0,0 @@
---
year: 2012
month: 10
day: 10
title: "Seminar: Retweeting"
layout: post
---
<p>
I gave a seminar on my current research phase. </p>
<p>
I summarised my work over the past few months; in particular, the work on the network structure of Twitter, the way in which tweets
propagate through different network types, and the implications of this. I discussed the importance of precision and recall as metrics
for determining a timeline's quality and how this is altered through retweeting in different network types.
</p>
<p>
I concluded by talking about my next area of research; how I may use the model used for the previous experimentation to determine if
a tweet is particularly interesting based on its features. Essentially, this boils down to showing that tweets are significantly
interesting (or uninteresting) by looking at how they compare to their <i>predicted</i> retweet behaviours as produced by the model.</p>
<p>The slides for the talk (not much use independently!) are available
<a href="http://willwebberley.net/downloads/research-fts/presentation.html" target="_blank">here</a>.</p>

22
_posts/2012-10-31-socialshower.md

@@ -1,22 +0,0 @@
---
year: 2012
month: 10
day: 31
title: SocialShower
layout: post
---
<p>
A few weeks ago I wrote some PHP scripts that can retrieve some of your social interactions and display them in a webpage (though the
scripts could easily be modified to return JSON or XML instead). When styled, they can produce effects similar to those on the Contact
page of this website (<a href="#contact">here</a>).</p>
<img src="/media/blog/socialshower_image.png" alt="SocialShower" class="blog-image"/>
<p>Currently they are available for retrieving recent tweets from Twitter, recent listens from Last.fm and recent uploads to Picasa Web
Albums.</p>
<p>The scripts run, in their current state, when the appropriate function is called from the included script. As a result, this could seriously slow down the page-load time if called as part of the page request.
If embedded in a webpage, they should be run through an AJAX call after the rest of the page has loaded.</p>
<p>The repo for the code (and example usage) is available from <a href="https://github.com/flyingsparx/SocialShower" target="_blank">Github</a>.</p>

49
_posts/2012-11-13-delving-into-android.md

@@ -1,49 +0,0 @@
---
year: 2012
month: 11
day: 13
title: Delving into Android
layout: post
---
<img src="/media/blog/tides-main.png" alt="Tides Main Activity" class="blog-image"/>
<p>
I've always been interested in the development of smartphone apps, but have never really had the opportunity
to actually have a go. Whilst I'm generally OK with development on platforms I feel comfortable with, I've always
considered there to be little point in developing applications for wider use unless you first have a good idea
of the direction you want them to take.
</p>
<p>
My Dad is a keen surfer and has a watch which tells the tide changes as well as the time. It shows the next event (i.e. low- or high-tide)
and the time until that event, but he always complains about how inaccurate it is and how it never correctly predicts the tide
schedule for the places he likes to surf.</p>
<p>He uses an Android phone, and so I thought I'd try making an app for him that would be more accurate than his watch, and
maybe provide more interesting features. The only tricky criterion, really, was that he needed it to predict the tides offline, since
the data reception is very poor in his area.</p>
<p>I got to work on setting up a database of tidal data, based around the location he surfs in, and creating a basic UI in which to display it.
When packaging the application with an existing SQLite database, this <a href="https://github.com/jgilfelt/android-sqlite-asset-helper" target="_blank">helper class</a> was particularly useful.</p>
<img src="/media/blog/tides-settings.png" alt="Tides Settings Activity" class="blog-image"/>
<p>
A graphical UI seemed the best approach for displaying the data, so I
tried <a href="http://androidplot.com/" target="_blank">AndroidPlot</a>, a highly-customisable graphing
library, to show the tidal patterns day-by-day. This seemed to work OK (though not entirely accurately - tidal patterns form
more of a cosine wave rather than the zigzags my graph produced, but the general idea is there), so I added more features, such as
a tide table (the more traditional approach) and a sunrise and sunset timer.
</p>
<p>I showed him the app at this stage, and he decided it could be improved by adding weather forecasts. Obviously, predicting the
weather cannot be done offline, so having sourced a decent <a href="http://www.worldweatheronline.com/" target="_blank">weather API</a>,
I added the weather forecast for his area too. Due to the rate-limiting of World Weather Online, a cache is stored in a database
on the host for this website, which, when queried by the app, will make the request on the app's behalf and store the data until
it is stale.</p>
<p>I added a preferences activity for some general customisation, and that's as far as I've currently got. In terms of development,
I guess it's been a good introduction to the ideas behind various methodologies and features, such as the manifest file, networking,
local storage, preferences, and layout design. I'll create a Github repository for it when I get round to it.</p>

14
_posts/2013-01-21-research-poster-day.md

@@ -1,14 +0,0 @@
---
year: 2013
month: 1
day: 21
title: Research Poster Day
layout: post
---
<p>
Each January the School of Computer Science hosts a poster day in order for the research students to demonstrate their current work to
other research students, research staff and undergraduates. The event lets members of the department see what other research is being done outside of their own group and gives researchers an opportunity to defend their research ideas.
</p>
<p>This year, I focused on my current research area, which is to do with inferring how interesting a Tweet is based on a comparison between simulated retweet patterns and the propagation behaviour demonstrated by the Tweet in Twitter itself. The poster highlights recent work in the build-up to this, a general overview of how the research works, and finishes with where I want to take this research in the future.</p>
<p>The poster is available <a href="http://www.willwebberley.net/downloads/poster_day_2013.pdf" target="_blank">here</a>.</p>

28
_posts/2013-02-18-scriptslide.md

@@ -1,28 +0,0 @@
---
year: 2013
month: 2
day: 18
title: ScriptSlide
layout: post
---
<p>
I've taken to writing most of my recent presentations in plain HTML (rather than using third-party software or services). I used
JavaScript to handle the appearance and ordering of slides. An example (to show what I mean) is
<a href="http://www.willwebberley.net/downloads/scriptslide" target="_blank">here</a>.
</p>
<p>I bundled the JS into a single script, <span class="code">js/scriptslide.js</span>, which can be configured
using the <span class="code">js/config.js</span> script. </p>
<p>There is a <a href="https://github.com/flyingSparx/ScriptSlide" target="_blank">Github repo</a> for the code, along with example usage and instructions.</p>
<p>
Most configuration can be done by using the <span class="code">js/config.js</span> script, which supports many features including:</p>
<ul>
<li>Set the slide transition type (appear, fade, slide)</li>
<li>Set the logos, page title, etc.</li>
<li>Configure the colour scheme</li>
</ul>
<p>
Then simply create an HTML document, set some other styles (there is a template in <span class="code">css/styles.css</span>), and
put each slide inside <span class="code">&lt;section&gt;...&lt;/section&gt;</span> tags. The slide menu is then generated automatically
when the page is loaded.
</p>

35
_posts/2013-02-21-playing-with-flask-and-mongodb.md

@@ -1,35 +0,0 @@
---
year: 2013
month: 2
day: 21
title: Playing with Flask and MongoDB
layout: post
---
<p>
I've always been a bit of an Apache/PHP fanboy - I find the way they work together logical and easy to set up and I enjoy how
easy it is to work with databases and page-routing in PHP. </p>
<p>
However, more recently I've found myself looking for other ways to handle web applications and data. I've messed around with Node.js, Django, etc., in the past but, particularly with Django, found that there seems to be a lot of setting-up involved even in creating quite small applications. Despite this, I understand that once set up properly Django can scale very well and managing large applications becomes very easy.</p>
<p>
<a href="http://flask.pocoo.org/" target="_blank">Flask</a> is a Python web framework (like Django, except smaller) which focuses on
its ease and speed of setup, and its configurability. Whilst it doesn't, by default, contain all the functionality that
larger frameworks provide, it is extensible through the use of extra modules and addons.
</p>
<p>I thought I'd use it for a quick play around to introduce myself to it. Most of this post is for my own use to look back on.</p>
<p>As it is Python, it can be installed through pip or easy_install:</p>
<pre class="shell"># easy_install flask</pre>
<p>Note: If Python is not yet installed, then install that (and its distribution tools) for your system first.
For example, in Arch Linux:</p>
<pre class="shell"># pacman -S python2 python2-distribute</pre>
<p>In terms of data storage, I used <a href="http://www.mongodb.org/" target="_blank">MongoDB</a>, a non-SQL, document-oriented
approach to handling data. This can be downloaded and installed from their
<a href="http://www.mongodb.org/downloads" target="_blank">website</a> or your own distro may distribute it.
For example, in Arch:</p>
<pre class="shell"># pacman -S mongodb</pre>
<p>MongoDB can be started as a standard user. Create a directory to hold the database and then start it as follows:</p>
<pre class="shell">$ mkdir -p data/db<br />$ mongod --dbpath data/db</pre>
<p>This starts the server, which listens, by default, on port 27017. The basic setup is now complete, and you can now start working on the application.</p>
<hr />
<p>A complete example (including all necessary code and files) is available in <a href="https://github.com/flyingsparx/MongoFlask" target="_blank">this repository</a>. This also includes a more comprehensive walkthrough to getting started.</p>
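<p>As a flavour of what that repository covers, a minimal Flask + Flask-MongoAlchemy application looks roughly like the sketch below. This is only an illustration: the database name, document fields and route are placeholders, and the extension's import path can differ between versions.</p>
<pre class="python">
from flask import Flask, request, jsonify
from flask_mongoalchemy import MongoAlchemy

app = Flask(__name__)
app.config['MONGOALCHEMY_DATABASE'] = 'testdb'  # served by the local mongod on port 27017
db = MongoAlchemy(app)

# A 'document' takes the place of a table row in the relational world
class Note(db.Document):
    title = db.StringField()
    body = db.StringField()

@app.route('/notes', methods=['GET', 'POST'])
def notes():
    if request.method == 'POST':
        # Save a new document straight into MongoDB
        Note(title=request.form['title'], body=request.form['body']).save()
    # Return everything currently stored
    return jsonify(notes=[{'title': n.title, 'body': n.body} for n in Note.query.all()])

if __name__ == '__main__':
    app.run(debug=True)
</pre>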

68
_posts/2013-03-06-deploying-to-heroku.md

@@ -1,68 +0,0 @@
---
year: 2013
month: 3
day: 6
title: Deploying to Heroku
layout: post
---
<p>In my <a href="http://www.willwebberley.net/#post/2013-02-21" target="_blank">last post</a>, I
talked about developing Python applications using Flask (with MongoDB to handle data). The next stage
was to consider deployment options so that the application can be properly used.</p>
<p>Python is a popular language in the cloud, and so there are many cloud providers around who support
this kind of application
(<a href="http://aws.amazon.com/elasticbeanstalk/" target="_blank">Amazon Elastic Beanstalk</a>,
<a href="https://developers.google.com/appengine/" target="_blank">Google App Engine</a>,
<a href="https://www.pythonanywhere.com/" target="_blank">Python Anywhere</a>, etc.), but
<a href="http://www.heroku.com/" target="_blank">Heroku</a> seems the most attractive option
due to its logical deployment strategy, scalability and its range of addons (including providing
the use of MongoDB).</p>
<p>First, download the Heroku toolbelt from their website. This allows various commands
to be run to prepare, deploy and check the progress, logs and status of applications. Once
installed, log into your account using your Heroku email address and password:</p>
<pre class="shell">$ heroku login</pre>
<p>Install the dependencies of your project (this should usually be done inside a virtual Python environment).
In my case, these are <span class="code">Flask</span> and <span class="code">Flask-MongoAlchemy</span>:</p>
<pre class="shell">$ pip install Flask<br />
$ pip install Flask-MongoAlchemy</pre>
<p>We now declare these dependencies so that they can be installed for your deployed app.
This can be done using pip, which will populate a file of dependencies:</p>
<pre class="shell">$ pip freeze > requirements.txt</pre>
<p>The file requirements.txt should now list all the dependencies for the application. Next is
to declare how the application should be run (Heroku has web and worker dynos). In this case,
this is a web app. Add the following to a file <span class="code">Procfile</span>:</p>
<pre class="shell">web: python app_name.py</pre>
<p>This basically tells Heroku to execute <span class="code">python app_name.py</span> to start
a web dyno.</p>
<p>The application setup can now be tested using <span class="code">foreman</span>
(from the Heroku toolbelt). If successful (and you get the
expected outcome in a web browser), then the app is ready for deployment:</p>
<pre class="shell">$ foreman start</pre>
<p>Lastly, the app needs to be stored in Git and pushed to Heroku. After preparing a suitable
<span class="code">.gitignore</span> for the project, create a new Heroku app, initialize, commit
and push the project:</p>
<pre class="shell">$ heroku create<br />
$ git init<br />
$ git add .<br />
$ git commit -m "initial commit"<br />
$ git push heroku master</pre>
<p>Once done (assuming no errors), check its state with:</p>
<pre class="shell">$ heroku ps</pre>
<p>If it says something like <span class="code">web.1: up for 10s</span> then the
application is running. If it says the application has crashed, then check the logs for errors:</p>
<pre class="shell">$ heroku logs</pre>
<p>Visit the live application with:</p>
<pre class="shell">$ heroku open</pre>
<p>Finally, I needed to add the database functionality. I used MongoHQ, which features useful tools
for managing MongoDB databases. Add this addon to your application using:</p>
<pre class="shell">$ heroku addons:add mongohq:sandbox</pre>
<p>This adds the free version of the addon to the application. Visit the admin interface
from the Apps section of the website to add a username and password. These (along with the
host and port) need to be configured in your application in order to work. e.g.:</p>
<pre class="python">
app.config['MONGOALCHEMY_USER'] = 'will'<br />
app.config['MONGOALCHEMY_PASSWORD'] = 'password'<br />
app.config['MONGOALCHEMY_SERVER'] = 'sub.domain.tld'<br />
etc.</pre>
<p>It may be that this step will need to be completed earlier if the application depends on
the database connection to run.</p>
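<p>Rather than hard-coding those values, it is tidier to read them from the environment at startup; the MongoHQ addon exposes its connection details as a single config variable (named, if I remember correctly, <span class="code">MONGOHQ_URL</span>). A rough sketch of parsing it, continuing from the snippet above:</p>
<pre class="python">
import os
from urlparse import urlparse  # urllib.parse on Python 3

# MONGOHQ_URL looks like mongodb://user:password@host:port/dbname
url = urlparse(os.environ.get('MONGOHQ_URL', 'mongodb://localhost:27017/devdb'))

app.config['MONGOALCHEMY_USER'] = url.username
app.config['MONGOALCHEMY_PASSWORD'] = url.password
app.config['MONGOALCHEMY_SERVER'] = url.hostname
app.config['MONGOALCHEMY_PORT'] = url.port
app.config['MONGOALCHEMY_DATABASE'] = url.path.lstrip('/')
</pre>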

18
_posts/2013-03-07-gower-tides-app-released.md

@@ -1,18 +0,0 @@
---
year: 2013
month: 3
day: 7
title: Gower Tides App Released
layout: post
---
<a href="https://play.google.com/store/apps/details?id=net.willwebberley.gowertides">
<img alt="Get it on Google Play"
src="https://developer.android.com/images/brand/en_generic_rgb_wo_60.png" class="blog-image"/>
</a>
<p>A <a href="http://www.willwebberley.net/#post/2012-11-13" target="_blank">few posts back</a>, I talked
about the development of an Android app for tide predictions for South Wales. This app is now on <a href="https://play.google.com/store/apps/details?id=net.willwebberley.gowertides" target="_blank" style="color:#A4C639;">Google Play</a>.</p>
<p>If you live in South Wales and are vaguely interested in tides/weather, then you should probably download it :)</p>
<p>The main advantage is that the app does not need any data connection to display the tidal data, which is useful in areas
with low signal. In future, I hope to add further features, such as a more accurate tide graph (using a proper 'wave'),
surf reports, and just general UI updates.</p>

13
_posts/2013-03-30-a-bit-of-light-construction-on-an-easter-weekend.md

@@ -1,13 +0,0 @@
---
year: 2013
month: 3
day: 30
title: A bit of light construction on an Easter weekend
layout: post
---
<p>
<img alt="wow, look" src="/media/blog/decking.png" class="large-image blog-image"/>
It's a well-known fact that computer scientists fear all forms of physical labour above everything else (except perhaps awkward social mingling). </p>
<p>Despite this, I managed to turn about two tonnes of material into something vaguely resembling 'decking' in my back garden this weekend. It makes the area look much nicer, but whether it actually stays up is a completely different matter.
</p>

33
_posts/2013-04-05-normal-service-resumed-ajax-+-python-+-amazon-s3.md

@@ -1,33 +0,0 @@
---
year: 2013
month: 4
day: 5
title: "Normal service resumed: AJAX + Python + Amazon S3"
layout: post
---
<p>
I wanted a way in which users can seamlessly upload images for use in the Heroku application discussed in previous posts.</p>
<p>Ideally, the image would be uploaded through AJAX as part of a data-entry form, but without having to refresh the page or anything else that would disrupt the user's experience. As far as I know, barebones JQuery does not support AJAX uploads, but <a href="http://www.malsup.com/jquery/form/#file-upload" target="_blank">this handy plugin</a> does.</p>
<h3>Handling the upload (AJAX)</h3>
<p>I styled the file input nicely (in a similar way to <a href="http://ericbidelman.tumblr.com/post/14636214755/making-file-inputs-a-pleasure-to-look-at" target="_blank">this guy</a>) and added the JS so that the upload is sent properly (and to the appropriate URL) when a change is detected to the input (i.e. the user does not need to click the 'upload' button to start the upload).</p>
<h3>Receiving the upload (Python)</h3>
<p>The backend, as previously mentioned, is written in Python as part of a Flask app. Since Heroku's customer webspace is read-only, uploads would have to be stored elsewhere. <a href="http://boto.s3.amazonaws.com/index.html" target="_blank">Boto</a>'s a cool library for interfacing with various AWS products (including S3) and can easily be installed with <span class="code">pip install boto</span>. From this library, we're going to need the <span class="code">S3Connection</span> and <span class="code">Key</span> classes:</p>
<pre class="python">
from boto.s3.connection import S3Connection<br />
from boto.s3.key import Key
</pre>
<p>Now we can easily handle the transfer using the <span class="code">request</span> object exposed to Flask's routing methods:</p>
<pre class="python">
file = request.files['file_input_name']<br />
con = S3Connection(<'AWS_KEY'>, <'AWS_SECRET'>)<br />
key = Key(con.get_bucket(<'BUCKET_NAME'>))<br />
key.set_contents_from_file(file)
</pre>
<p>Go to the next step for the AWS details and the bucket name. Depending on which AWS region you chose (e.g. US, Europe, etc.), your file will be accessible at something like <span class="code">https://s3-eu-west-1.amazonaws.com/<BUCKET_NAME>/<FILENAME></span>. If you want, you can also set things like the file's mime type and access type:</p>
<pre class="python">
key.set_metadata('Content-Type', 'image/png')<br />
key.set_acl('public-read')</pre>
<h3>Setting up the bucket (Amazon S3)</h3>
<p>Finally you'll need to create the bucket. Create or log into your AWS account, go to the AWS console, choose your region (if you're in Europe, then the Ireland one is probably the best choice) and enter the S3 section. Here, create a bucket (the name needs to be globally unique). Now, go to your account settings page to find your AWS access key and secret and plug these, along with the bucket name, into the appropriate places in your Python file.</p>
<p>And that's it. For large files, this may tie up your Heroku dynos a bit while they carry out the upload, so this technique is best for smaller files (especially if you're only using the one web dyno). My example of a working implementation of this is available <a href="https://github.com/flyingsparx/niteowl-web/blob/master/api.py" target="_blank">in this file</a>.</p>
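<p>For completeness, the pieces above combine into a single Flask route along these lines (the bucket name, credentials and returned URL are placeholders; the file linked above is the real implementation):</p>
<pre class="python">
from flask import Flask, request, jsonify
from boto.s3.connection import S3Connection
from boto.s3.key import Key

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    f = request.files['file_input_name']         # name of the file input in the AJAX form
    con = S3Connection('AWS_KEY', 'AWS_SECRET')  # placeholder credentials
    key = Key(con.get_bucket('BUCKET_NAME'))
    key.key = f.filename                         # store under the original filename
    key.set_metadata('Content-Type', f.mimetype)
    key.set_contents_from_file(f)
    key.set_acl('public-read')
    return jsonify(url='https://s3-eu-west-1.amazonaws.com/BUCKET_NAME/' + f.filename)
</pre>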

34
_posts/2013-04-08-a-simple-outbound-mail-server.md

@@ -1,34 +0,0 @@
---
year: 2013
month: 4
day: 8
title: A simple outbound mail server
layout: post
---
<p>
Being able to send emails is an important part of a server's life, especially if it helps support a website. If you manage your own servers for running a website and need to send outgoing email (e.g. for newsletters, password resets, etc.), then you'll need to run an SMTP server to handle this for you.
</p>
<p>
You will need to have correctly configured your DNS settings for email to work properly. This is because most email providers will run rDNS (reverse-DNS) lookups on incoming email to ensure it isn't someone else pretending to send emails from your domain. An rDNS lookup basically involves matching the resolved IP from your domain name (after the "@" sign in the email address) to the domain name addressed by the IP in DNS. If the rDNS lookup fails, then email providers may automatically mark your emails as spam.
</p>
<p>Your DNS host settings should point your domain name towards the IP of your host as an A record. In addition, it is sometimes necessary to add a TXT record (for the "@" subdomain) as <span class="code">v=spf1 ip4:xxx.xxx.xxx.xxx -all</span>. This indicates to mail providers that the IP (represented by the <span class="code">x</span>'s) is authorised to send mail for this domain. This further reduces the chance that your email will be marked as spam. Since we are not intending to receive mail at this server, either leave the MX records blank, configure them to indicate a different server, set up a mail-forwarder, or something else.
</p>
<p>The following mail server set up is aimed at Arch Linux, but the gist of it should be compatible for many UNIX-based systems. The mail server I am covering is <a href="http://www.postfix.org/" target="_blank">postfix</a>. This can easily be installed (e.g. on Arch):</p>
<pre class="shell">
# pacman -S postfix</pre>
<p>Once installed, edit the configuration file in <span class="code">/etc/postfix/main.cf</span> so that these lines read something like this:</p>
<pre class="shell">
myhostname = mail.domain.tld<br />
mydomain = domain.tld<br />
myorigin = domain.tld</pre>
<p>Next, edit the file <span class="code">/etc/postfix/aliases</span> such that:</p>
<pre class="shell">
root: your_username</pre>
<p>Replace <span class="code">your_username</span> with the user who should receive <span class="code">root</span>'s mail.</p>
<p>Finally, refresh the alias list, enable the service so that postfix starts on boot, and then start postfix:</p>
<pre class="shell">
# cd /etc/postfix && newaliases<br />
# systemctl enable postfix.service<br />
# systemctl start postfix.service</pre>
<p>You should now be able to send mail (e.g. through PHP, Python, Ruby, etc.) through this server. If you run the website on the same machine, simply tell the application to use <span class="code">localhost</span> as the mail server, though this is usually default anyway.</p>
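<p>A quick way to check that postfix is accepting mail is a few lines of Python run on the same machine (the addresses below are placeholders):</p>
<pre class="python">
import smtplib
from email.mime.text import MIMEText

msg = MIMEText('The mail server appears to be working.')
msg['Subject'] = 'Test from postfix'
msg['From'] = 'no-reply@domain.tld'   # placeholder sender on your domain
msg['To'] = 'you@example.com'         # placeholder recipient

server = smtplib.SMTP('localhost')    # postfix listens on port 25 by default
server.sendmail(msg['From'], [msg['To']], msg.as_string())
server.quit()
</pre>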

13
_posts/2013-04-11-cardiff-open-sauce-hackathon.md

@@ -1,13 +0,0 @@
---
year: 2013
month: 4
day: 11
title: Cardiff Open Sauce Hackathon
layout: post
---
<p>
Next week I, along with others in a team, am taking part in <a href="http://www.cs.cf.ac.uk/hackathon/" target="_blank">Cardiff Open Sauce Hackathon</a>.</p>
<p>If you're in the area and feel like joining in for the weekend then sign up at the link above.</p>
<p>The hackathon is a two-day event in which teams work to 'hack together' smallish projects, which will be open-sourced at the end of the weekend. Whilst we have a few ideas already for potential projects, if anyone has any cool ideas for something relatively quick, but useful, to make, then please <a href="/contact">let me know</a>!
</p>

24
_posts/2013-04-16-trials-of-eduroam.md

@@ -1,24 +0,0 @@
---
year: 2013
month: 4
day: 16
title: Trials of Eduroam
layout: post
---
<p>
I've been having trouble connecting to Eduroam, at least reliably and persistently, in some barebones GNU/Linux installs and basic window managers. Eduroam is the wireless networking service used by many Universities in Europe, and whilst it would probably work fine using the tools provided by heavier DEs, I wanted something that could just run quickly and independently.
</p>
<p>Many approaches require the editing of loads of config files (especially true for <span class="code">netcfg</span>), which would need altering again after things like password changes. The approach I used (for Arch Linux) is actually really simple and involves the use of the user-contributed <span class="code">wicd-eduroam</span> package available in the <a href="https://aur.archlinux.org/packages/wicd-eduroam/" target="_blank">Arch User Repository</a>.</p>
<p>Obviously, <span class="code">wicd-eduroam</span> is related to, and depends on, <span class="code">wicd</span>, a handy network connection manager, so install that first:</p>
<pre class="shell">
# pacman -S wicd<br />
$ yaourt -S wicd-eduroam</pre>
<p>(If you don't use <span class="code">yaourt</span>, download the <a href="https://aur.archlinux.org/packages/wi/wicd-eduroam/wicd-eduroam.tar.gz" target="_blank">tarball</a> and build it using the <span class="code">makepkg</span> method.)</p>
<p><span class="code">wicd</span> can conflict with other network managers, so stop and disable them before starting and enabling <span class="code">wicd</span>. This will allow it to startup at boot time. e.g.:</p>
<pre class="shell">
# systemctl stop NetworkManager<br />
# systemctl disable NetworkManager<br />
# systemctl start wicd<br />
# systemctl enable wicd</pre>
<p>Now start <span class="code">wicd-client</span> (or set it to autostart), let it scan for networks, and edit the properties of the network <span class="code">eduroam</span>. Set the encryption type as <span class="code">eduroam</span> in the list, enter the username and password, click OK and then allow it to connect.</p>

13
_posts/2013-04-23-flyingsparx.net-on-digital-ocean.md

@@ -1,13 +0,0 @@
---
year: 2013
month: 4
day: 23
title: flyingsparx.net On Digital Ocean
layout: post
---
<p>My hosting for <a href="http://www.willwebberley.net" target="_blank">willwebberley.net</a> has nearly expired, so I have been looking for renewal options.</p>
<p>These days I tend to need to use servers for more than simple web-hosting, and most do not provide the flexibility that a VPS would. Having (mostly) full control over a properly-maintained virtual cloud server is so much more convenient, and allows you to do tonnes of stuff beyond simple web hosting.</p>
<p>I have some applications deployed on <a href="https://www.heroku.com" target="_blank">Heroku</a>, which is definitely useful and easy for this purpose, but I decided to complement this for my needs by buying a 'droplet' from <a href="https://www.digitalocean.com" target="_blank">Digital Ocean</a>.</p>
<p>Droplets are DO's term for a server instance, and are super quick to set up (55 seconds from first landing at their site to a booted virtual server, they claim) and very reasonably priced. I started an Arch instance, quickly set up nginx, Python and uwsgi, and started this blog and site as a Python app running on the Flask microframework.</p>
<p>So far, I've had no issues, and everything seems to work quickly and smoothly. If all goes to plan, over the next few months I'll migrate some more stuff over, including the backend for the Gower Tides app.</p>

13
_posts/2013-04-25-eartub.es.md

@@ -1,13 +0,0 @@
---
year: 2013
month: 4
day: 25
title: eartub.es
layout: post
---
<p>Last weekend I went to <a href="http://www.cs.cf.ac.uk/hackathon" target="_blank">CFHack Open Sauce Hackathon</a>. I worked in a team with <a href="http://christopher-gwilliams.com" target="_blank">Chris</a>, <a href="https://twitter.com/OnyxNoir" target="_blank">Ross</a> and <a href="http://users.cs.cf.ac.uk/M.P.John/" target="_blank">Matt</a>.</p>
<p>We started work on <a href="http://eartub.es" target="_blank">eartub.es</a>, which is a web application for suggesting movies based on their soundtracks. We had several ideas for requirements we wanted to meet but, due to the nature of hackathons, we didn't do nearly as much as we thought we would!</p>
<p>For now, eartubes allows you to search for a movie (from a 2.5 million movie database) and view other movies with similar soundtracks. This is currently based on cross matching the composer between movies, but more in-depth functionality is still in the works. We have nearly completed Last.fm integration, which would allow the app to suggest movies from your favourite and most listened-to music, and are working towards genre-matching and other, more complex, learning techniques. The registration functionality is disabled while we add this extra stuff.</p>
<p>The backend is written in Python and runs as a Flask application. Contrary to my usual preference, I worked on the front end of the application, but also wrote our internal API for Last.fm integration. It was a really fun experience, in which everyone got on with their own individual parts, and it was good to see the project come together at the end of the weekend.</p>
<p>The project's source is on <a href="https://github.com/encima/eartubes" target="_blank">Github</a>.</p>

22
_posts/2013-05-03-is-twitters-new-api-really-such-a-nightmare?.md

@@ -1,22 +0,0 @@
---
year: 2013
month: 5
day: 3
title: Is Twitter's New API Really Such a Nightmare?
layout: post
---
<p>When the first version of the Twitter API opened, writing applications to interface with the popular microblogging service was a dream. Developers could quickly set up apps and access the many resources provided by the API and third parties were fast in creating easy-to-use wrappers and interfaces (in loads of different languages) for embedding Twitter functionality in all sorts of applications and services.</p>
<p>The API began by using Basic Authentication, at least for making the requests that required authentication (e.g. writing Tweets, following users, etc.). This is, generally, a Very Bad Idea, since it meant that client applications were required to handle the users' usernames and passwords and transmit these, along with any extra required parameters, in every request made to the API. Users had no idea (and no control over) what the organisations behind these applications did with the access credentials once they were provided with them.</p>
<p>Then, in 2010, the API moved on to OAuth. This was a much better authentication protocol, as it meant that users could directly authorise apps from Twitter itself and easily view which functions each individual app would be able to perform with their Twitter account. In addition, it meant that applications didn't need to receive and/or store the user's username and password; instead, an access token would be sent back to the app (after authentication), which would then be used to make the requests to the API. This access token could then be sent, along with the application's own key and secret key, with requests to the API, which would be able to recognise the authenticating user based on the access token and restrict/allow actions based on who the user is. Since apps could safely store the user's access key without too many security implications, it meant that the procedure was much more personalised and streamlined for the end-users.</p>
<p>What was cool was that there were still several methods exposed by Twitter's API that <i>didn't</i> require authentication. Things like retrieving a user's recent Tweets or the public timeline involved a simple JSON request that could easily be made from a client without authenticating first. This was particularly useful when used with JavaScript as clients could still request the information and, due to the distributed nature of clients (i.e. not making requests from a single IP or application signature), they wouldn't generally reach the rate limit for these methods.</p>
<img src="/media/blog/twitter-javascript.png" class="blog-image" />
<p>It meant that you could embed a Twitter feed showing your recent Tweets on your website without having to hop through your own servers first.</p>
<p>Now Twitter have opened v1.1 of their API, with all methods from the previous version deprecated and expected to be removed completely some time in 2013. The main disadvantage with version 1.1 is that now <strong>all</strong> requests to the API will require OAuth authentication. This means that client-side JavaScript Twitter requests will no longer be safely available (as clients would have access to the application's private key, amongst other things), and developers will be forced to use Twitter's own massive and unstylable <a href="https://dev.twitter.com/docs/embedded-timelines" target="_blank">widgets</a>. Twitter themselves also (sensibly, I suppose) discourage users from trying to write their own client-side code for this.</p>
<p>Of course, you could modify your app so that your server makes the requests, authenticated with your own account, and then passes the response to the browser, but if your site is fairly popular and caching requests isn't appropriate for your purposes then you are at risk of running into rate limit issues. This leads me to another (slightly less important) disadvantage. Whilst the API used to grant each authenticated application 350 requests per hour, the rate limit system has now become unnecessarily complicated, with many methods having completely different request allowances per window (which has now been reduced to 15 minutes). On top of this, many resources actually have <strong>two</strong> rate limits - one for that particular user, and one for the app itself. They also have a <a href="https://dev.twitter.com/docs/rate-limiting/1.1/limits" target="_blank">handy table</a> outlining the rate limits of each method. It's starting to become a bit more of a mess for developers, with many more things to think about.</p>
<img src="/media/blog/twitter-ratelimit.png" class="blog-image" />
<br />
<p>Despite all the additional strictness with the API, there are actually several advantages. Requests that are user-focused (i.e. have a separate user-based rate limit) mean that your application, if used correctly, may be able to access more information before reaching the limits. This is also true of some of the application-based resources, such as "GET search/tweets" and "GET statuses/user_timeline", which now allow many more requests to be made in the same time frame than in API v1.</p>
<p>For other methods, though, it's not so great. Most of the user-based rate-limited methods allow 15 requests per window (equating to one request per minute). For me, and others who research Twitter, who require a fair amount of data, this will become a nightmare. There are also many app developers who are being impacted pretty heavily by the new changes, which includes Twitter's (slightly evil) new policy to <a href="http://www.theverge.com/2012/8/16/3248079/twitter-limits-app-developers-control" target="_blank">restrict apps to 100,000 users</a>.</p>
<br />
<p>Generally, there is a different set of advantages and disadvantages every way you look at it, but with the web's turn to the ubiquitous availability and propagation of information, and some other open and awesome APIs (including <a href="https://developer.foursquare.com" target="_blank">Foursquare's</a> and <a href="http://www.last.fm/api" target="_blank" >Last.fm's</a>), then it's hard to know in which direction Twitter is heading at the moment.</p>

15
_posts/2013-05-07-contribution-to-heroku-dev-center.md

@@ -1,15 +0,0 @@
---
year: 2013
month: 5
day: 7
title: Contribution to Heroku Dev Center
layout: post
---
<p>The <a href="https://devcenter.heroku.com" target="_blank">Heroku Dev Center</a> is a repository of guides and articles to provide support for those writing applications to be run on the <a href="https://heroku.com" target="_blank">Heroku</a> platform.</p>
<p>I recently contributed an article for carrying out <a href="https://devcenter.heroku.com/articles/s3-upload-python" target="_blank">Direct to S3 File Uploads in Python</a>, as I have previously used a very similar approach to interface with Amazon's Simple Storage Service in one of my apps running on Heroku.</p>
<p>The approach discussed in the article focuses on avoiding as much server-side processing as possible, with the aim of preventing the app's web dynos from becoming too tied up and unable to respond to further requests. This is done by using client-side JavaScript to asynchronously carry out the upload directly to S3 from the web browser. The only necessary server-side processing involves the generation of a temporarily-signed (using existing AWS credentials) request, which is returned to the browser in order to allow the JavaScript to successfully make the final <span class="code">PUT</span> request.</p>
<p>The guide's <a href="https://github.com/flyingsparx/FlaskDirectUploader" target="_blank">companion git repository</a> hopes to demonstrate a simple use-case for this system. As with all of the Heroku Dev Center articles, if you have any feedback (e.g. what could be improved, what helped you, etc.), then please do provide it!</p>
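<p>The server-side part really is tiny. With boto, generating the temporarily-signed request looks roughly like the sketch below (the bucket, key name and expiry are placeholders; the article and companion repository cover the full flow, including the JavaScript side):</p>
<pre class="python">
from boto.s3.connection import S3Connection

conn = S3Connection('AWS_KEY', 'AWS_SECRET')  # placeholder credentials

# Signed URL valid for five minutes; the browser then PUTs the file directly to S3
signed_url = conn.generate_url(
    expires_in=300,
    method='PUT',
    bucket='BUCKET_NAME',
    key='uploads/photo.png',
    headers={'Content-Type': 'image/png'},  # must match the header sent by the JavaScript
)
</pre>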

15
_posts/2013-05-26-gower-tides-open-sourced.md

@@ -1,15 +0,0 @@
---
year: 2013
month: 5
day: 26
title: Gower Tides Open-Sourced
layout: post
---
<p>This is just a quick post to mention that I have made the source for the <a href="https://play.google.com/store/apps/details?id=net.willwebberley.gowertides" target="_blank">Gower Tides</a> app on Google Play public.</p>
<p>The source repository is available on <a href="https://github.com/flyingsparx/GowerTides" target="_blank">GitHub</a>. From the repository I have excluded:
<ul>
<li><strong>Images & icons</strong> - It is not my place to distribute graphics not owned or created by me. Authors are credited in the repo's README and in the application.</li>
<li><strong>External libraries</strong> - The app requires a graphing package and a class to help with handling locally-packaged SQLite databases. Links to both are also included in the repo's README.</li>
<li><strong>Tidal data</strong> - The tidal data displayed in the app has also been excluded. However, the format for the data stored by the app should be relatively obvious from its access in the <a href="https://github.com/flyingsparx/GowerTides/blob/master/src/net/willwebberley/gowertides/utils/DayDatabase.java" target="_blank">source</a>.</li>
</ul></p>

21
_posts/2013-06-12-wekapy.md

@@ -1,21 +0,0 @@
---
year: 2013
month: 6
day: 12
title: WekaPy
layout: post
---
<p>Over the last few months, I've started to use Weka more and more. <a href="http://www.cs.waikato.ac.nz/ml/weka/" target="_blank">Weka</a> is a toolkit, written in Java, that I use to create models with which to make classifications on data sets.</p>
<p>It features a wide variety of different machine learning algorithms (although I've used the logistic regressions and Bayesian networks most) which can be trained on data in order to make classifications (or 'predictions') for sets of instances.</p>
<p>Weka comes as a GUI application and also as a library of classes for use from the command line or in Java applications. I needed to use it to create some large models and several smaller ones, and using the GUI version makes the process of training the model, testing it with data and parsing the classifications a bit clunky. I needed to automate the process a bit more.</p>
<p>Nearly all of the development work for my PhD has been in Python, and it'd be nice to just plug in some machine learning processes over my existing code. Whilst there are some wrappers for Weka written for Python (<a href="https://github.com/chrisspen/weka" target="_blank">this project</a>, <a href="https://pypi.python.org/pypi/PyWeka" target="_blank">PyWeka</a>, etc.), most of them feel unfinished, are under-documented or are essentially just instructions on how to use <a href="http://www.jython.org/" target="_blank">Jython</a>.</p>
<p>So, I started work on <a href="https://github.com/flyingsparx/WekaPy" target="_blank">WekaPy</a>, a simple wrapper that allows efficient and Python-friendly integration with Weka. It basically just involves subprocesses to execute Weka from the command line, but also includes several areas of functionality aimed to provide more of a seamless and simple experience to the user.</p>
<p>I haven't got round to writing proper documentation yet, but most of the current functionality is explained and demo'd through examples <a href="https://github.com/flyingsparx/WekaPy#example-usage" target="_blank">here</a>. Below is an example demonstrating its ease of use:</p>
<pre class="python">
model = Model(classifier_type = "bayes.BayesNet")<br />
model.train(training_file = "train.arff")<br />
model.test(test_file = "test.arff")
</pre>
<p>All that is needed is to instantiate the model with your desired classifier, train it with some training data and then test it against your test data. The predictions can then be easily extracted from the model as shown <a href="https://github.com/flyingsparx/WekaPy#accessing-the-predictions" target="_blank">in the documentation</a>.</p>
<p>I hope to continue updating the library and improving the documentation when I get a chance! Please let me know if you have any ideas for functionality.</p>

12
_posts/2013-06-20-accidental-kernel-upgrades-on-digital-ocean.md

@@ -1,12 +0,0 @@
---
year: 2013
month: 6
day: 20
title: Accidental Kernel Upgrades on Digital Ocean
layout: post
---
<p>I today issued a full upgrade of the server at flyingsparx.net, which is hosted by <a href="https://www.digitalocean.com" target="_blank">Digital Ocean</a>. By default, on Arch, this will upgrade every currently-installed package (where there is a counterpart in the official repositories), including the Linux kernel and the kernel headers.</p>
<p>Digital Ocean maintain their own kernel versions and do not currently allow kernel switching, which is something I completely forgot. I rebooted the machine and tried re-connecting, but SSH couldn't find the host. Digital Ocean's website provides a console for connecting to the instance (or 'droplet') through VNC; using this, I discovered that none of the network interfaces (except the loopback) were being brought up. I tried everything I could think of to fix this, but without being able to connect the droplet to the Internet, I was unable to download any other packages.</p>
<p>Eventually, I contacted DO's support, who were super quick in replying. They pointed out that the upgrade may have also updated the kernel (which, of course, it had), and that therefore the modules for networking weren't going to load properly. I restored the droplet from one of the automatic backups, swapped the kernel back using DO's web console, rebooted and things were back to where they should be.</p>
<p>The fact that these things can be instantly fixed from their console and their quick customer support make Digital Ocean awesome! If they weren't possible then this would have been a massive issue, since the downtime also took out this website and the backend for a couple of mobile apps. If you use an Arch instance, then there is a <a href="https://www.digitalocean.com/community/articles/pacman-syu-kernel-update-solved-how-to-ignore-arch-kernel-upgrades" target="_blank">community article</a> on their website explaining how to make pacman ignore kernel upgrades and to stop this from happening.</p>
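<p>For reference, the gist of that article (as far as I recall) is a one-line addition to the <span class="code">[options]</span> section of <span class="code">/etc/pacman.conf</span>, so that full upgrades skip the kernel packages:</p>
<pre class="shell">
IgnorePkg = linux linux-headers</pre>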

12
_posts/2013-07-03-magic-seaweeds-awesome-new-api.md

@@ -1,12 +0,0 @@
---
year: 2013
month: 7
day: 3
title: Magic Seaweed's Awesome New API
layout: post
---
<p>Back in March, I emailed <a href="http://magicseaweed.com" target="_blank">Magic Seaweed</a> to ask them if they had a public API for their surf forecast data. They responded that they didn't at the time, but that it was certainly on their to-do list. I am interested in the marine data for my <a href="https://play.google.com/store/apps/details?id=net.willwebberley.gowertides" target="_blank">Gower Tides</a> application.</p>
<p>Yesterday, I visited their website to have a look at the surf reports and some photos, when I noticed the presence of a <a href="http://magicseaweed.com/developer/api" target="_blank">Developer</a> link in the footer of the site. It linked to pages about their new API, with an overview describing exactly what I wanted.</p>
<p>Since the API is currently in beta, I emailed them requesting a key, which they were quick to provide, helpfully including some further example requests. They currently do not have any strict <a href="http://flyingsparx.net/blog/13/5/3/is-twitter's-new-api-really-such-a-nightmare?" target="_blank">rate limits</a> in place, but instead have a few <a href="http://magicseaweed.com/developer/terms-and-conditions" target="_blank">fair practice terms</a> to discourage developers from getting a bit trigger-happy with API requests. They also request that you use a hyperlinked logo to accredit the data back to them. Due to caching, I will not have to make too many requests (since the application will preserve 'stale' data for 30 minutes before refreshing from Magic Seaweed, when requested), so hopefully that will keep the app's footprint down.</p>
<p>I have written the app's new <a href="https://github.com/flyingsparx/GowerTidesBackend" target="_blank">backend support</a> for handling and caching the surf data ready for incorporating into the Android app soon. So far, the experience has been really good, with the API responding with lots of detailed information - almost matching the data behind their own <a href="http://magicseaweed.com/Llangennith-Rhossili-Surf-Report/32/" target="_blank">surf forecasts</a>. Hopefully they won't remove any of the features when they properly release it!</p>

20
_posts/2013-07-31-gower-tides-v1.4.md

@ -1,20 +0,0 @@
---
year: 2013
month: 7
day: 31
title: Gower Tides v1.4
layout: post
---
<img src="https://flyingsparx.net/static/media/v1-4_surf.png" class="blog-image" alt="Surf forecasts" />
<p>Last week I released a new version of the tides Android app I'm currently developing.</p>
<p>The idea of the application was initially simply to display the tidal times and patterns for the Gower Peninsula, and to do so without needing a data connection. Though, as time has gone by, I keep finding more and more things that can be added!</p>
<p>The latest update saw the introduction of 5-day surf forecasts for four Gower locations - Llangennith, Langland, Caswell Bay, and Hunts Bay. All the surf data comes from <a href="http://magicseaweed.com" target="_blank">Magic Seaweed</a>'s API (which I <a href="http://flyingsparx.net/post/2013/7/3" target="_blank">talked about</a> last time).</p>
<img src="https://flyingsparx.net/static/media/v1-4_location.png" class="blog-image right" alt="Location choices" />
<p>The surf forecasts are shown, for each day they are available, as a horizontal scroll-view, allowing users to scroll left and right within that day to view the forecast at different times of the day (in 3-hourly intervals).<br />
Location selection is handled by a dialog popup, which shows a labelled map and a list of the four available locations.</p>
<p>The <a href="https://github.com/flyingsparx/GowerTidesBackend" target="_blank">backend support</a> for the application was modified to now also support 30-minute caching of surf data on a per-location basis (i.e. new calls to Magic Seaweed would not be made if the requested <i>location</i> had been previously pulled in the last 30 minutes). The complete surf and weather data is then shipped back to the phone as one JSON structure.</p>
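<p>As a rough illustration of that caching logic, here is a minimal sketch in Python - the names are illustrative rather than taken from the actual backend source:</p>
<pre class="python">
import time

CACHE_TTL = 30 * 60          # 30 minutes, in seconds
surf_cache = {}              # location -> (timestamp, data)

def fetch_from_magicseaweed(location):
    # Placeholder for the real Magic Seaweed API call (not shown here)
    raise NotImplementedError

def get_surf_data(location):
    entry = surf_cache.get(location)
    if entry and time.time() - entry[0] < CACHE_TTL:
        return entry[1]      # cached data for this location is still fresh enough
    data = fetch_from_magicseaweed(location)
    surf_cache[location] = (time.time(), data)
    return data
</pre>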
<img src="https://flyingsparx.net/static/media/v1-4_tides.png" class="blog-image" alt="Tides view update" />
<p>Other updates were smaller but included an overhaul of the UI (the tide table now looks a bit nicer), additional licensing information, more speedy database interaction, and so on.</p>
<p>If you are interested in the source, then that is available <a href="https://github.com/flyingsparx/GowerTides" target="_blank">here</a>, and the app itself is on <a href="https://play.google.com/store/apps/details?id=net.willwebberley.gowertides&hl=en" target="_blank">Google Play</a>. If you have any ideas, feedback or general comments, then please let me know!</p>

20
_posts/2013-08-31-a-rather-french-week.md

@ -1,20 +0,0 @@
---
year: 2013
month: 8
day: 31
title: A rather French week
layout: post
---
<p>I recently spent a week in France as part of a holiday with some of my family. Renting houses for a couple of weeks in France or Italy each summer has almost become a bit of a tradition, and it's good to have a relax and a catch-up for a few days. They have been the first proper few days (other than the <a href="http://flyingsparx.net/blog/13/3/30/a-bit-of-light-construction-on-an-easter-weekend/" target="_blank">decking-building adventure</a> back in March) I have had away from University in 2013, so I felt it was well-deserved!</p>
<img src="/media/blog/french-house.JPG" class="large-image blog-image" />
<p>This year we stayed in the Basque Country of southern France, relatively near Biarritz, in a country farmhouse. Although we weren't really within walking distance to anywhere, the house did come with a pool in the garden, with a swimmable river just beyond, and an amazing, peaceful setting. </p>
<p>Strangely enough, there was no Internet installation at the house, and no cellular reception anywhere nearby. This took a bit of getting used to, but after a while it became quite relaxing not having to worry about checking emails, texts, and Twitter. The only thing to cause any stress was a crazed donkey, living in the field next door, who would start braying loudly at random intervals through the nights, waking everyone up.</p>
<img src="/media/blog/french-gorge.JPG" class="large-image blog-image" />
<p>As might be expected, the food and drink was exceptional. Although we did end up eating in the house each evening (to save having someone sacrifice themselves to be the designated driver), the foods we bought from the markets were very good, and the fact that wine cost €1.50 per bottle from the local Intermarché gave very little to complain about.</p>
<p>The majority of most days was spent away from the house, visiting local towns, the beaches and the Pyrenees. We spent a few afternoons walking in the mountains, with some spectacular scenery.</p>
<img src="/media/blog/french-pyrenes.JPG" class="large-image blog-image" />

18
_posts/2013-09-02-zoned-network-sound-streaming-the-problem.md

@ -1,18 +0,0 @@
---
year: 2013
month: 9
day: 2
title: "Zoned Network Sound-Streaming: The Problem"
layout: post
---
<p>For a while now, I have been looking for a reliable way to manage zoned music-playing around the house. The general idea is that I'd like to be able to play music from a central point and have it streamed over the network to a selection of receivers, which could be remotely turned on and off when required, while still allowing multiple receivers to play simultaneously.</p>
<p>Apple's <a href="http://www.apple.com/uk/airplay/" target="_blank">AirPlay</a> has supported this for a while now, but requires the purchasing of AirPlay compatible hardware, which is expensive. It's also very iTunes-based - which is something that I do not use.</p>
<p>Various open-source tools also allow network streaming. <a href="http://www.icecast.org/" target="_blank">Icecast</a> (through the use of <a href="https://code.google.com/p/darkice/" target="_blank">Darkice</a>) allows clients to stream from a multimedia server, but this causes pretty severe latency in playback between clients (ranging up to around 20 seconds, I've found) - not a good solution in a house!</p>
<p><a href="http://www.freedesktop.org/wiki/Software/PulseAudio/" target="_blank">PulseAudio</a> is partly designed around being able to work over the network, and supports the discovery of other PulseAudio sinks on the LAN and the selection a sound card to transmit to through TCP. This doesn't seem to support multiple sound card sinks very well, however.</p>
<p>PulseAudio's other network feature is its RTP broadcasting, and this seemed the most promising avenue for solving this problem. RTP utilises UDP, and PulseAudio effectively uses this to broadcast its sound to any devices on the network that might be listening on the broadcast address. This means that one server could be run and sink devices could be set up simply to receive the RTP stream on demand - perfect!</p>
<p>However, in practice, this turned out not to work very well. With RTP enabled, PulseAudio would entirely flood the network with sound packets. Although this isn't a problem for devices with a wired connection, any devices connected wirelessly to the network would be immediately disassociated from the access point due to the complete saturation of PulseAudio's packets being sent over the airwaves.</p>
<p>This couldn't be an option in a house where smartphones, games consoles, laptops, and so on require the WLAN. After researching this problem a fair bit (and finding many others experiencing the same issues), I found <a href="http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Network/RTP/" target="_blank">this page</a>, which describes various methods for using RTP streaming from PulseAudio and includes (at the bottom) the key that could fix my problems - the notion of compressing the audio into MP3 format (or similar) before broadcasting it.</p>
<p>Trying this technique worked perfectly, and did not flood the network anywhere near as severely as the uncompressed sound stream; wireless clients no longer lost access to the network once the stream was started and didn't seem to lose any noticeable QoS at all. In addition, when multiple clients connected, the sound output would be nearly entirely simultaneous (at least after a few seconds to warm up).</p>
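<p>For reference, the shape of the setup I ended up with is roughly as follows. This is a sketch rather than a copy of my exact configuration - the null-sink name, multicast address and bitrate are arbitrary examples:</p>
<pre class="shell">
# Create a null sink, and point the music player's output at it (e.g. via pavucontrol)
$ pactl load-module module-null-sink sink_name=stream

# Compress the sink's monitor stream to MP3 and broadcast it over RTP using VLC
$ cvlc -I dummy pulse://stream.monitor \
    --sout '#transcode{acodec=mp3,ab=128,channels=2}:rtp{mux=ts,dst=239.255.12.42,port=5004}'
</pre>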
<p>Unfortunately, broadcasting still didn't work well over WLAN (sound splutters and periodic drop-outs), so the master server and any sound sinks would need to be on a wired network. This is a small price to pay, however, and I am happy to live with a few Ethernet-over-power devices around the house. The next stage is to think about what to use as sinks. Raspberry Pis should be powerful enough and are <i>significantly</i> cheaper than Apple's equivalent. They would also allow me to use existing sound systems in some rooms (e.g. the surround-sound in the living room), and other simple speaker setups in others. I also intend to write a program around PulseAudio to streamline the streaming process and a server for discovering networked sinks.</p>
<p>I will write an update when I have made any more progress on this!</p>

29
_posts/2013-09-14-casastream.md

@ -1,29 +0,0 @@
---
year: 2013
month: 9
day: 14
title: CasaStream
layout: post
---
<p>In my <a href="http://flyingsparx.net/blog/2013/9/2/zoned-network-sound-streaming-the-problem" target="_blank">last post</a> I discussed methods for streaming music to different zones in the house. More specifically I wanted to be able to play music from one location and then listen to it in other rooms at the same time and in sync.</p>
<p>After researching various methods, I decided to go with using a compressed MP3 stream over RTP. Other techniques introduced too much latency, did not provide the flexibility I required, or simply did not fulfill the requirements (e.g. not multiroom, only working with certain applications and non-simultaneous playback).</p>
<p>To streamline the procedure of compressing the stream, broadcasting the stream, and receiving and playing the stream, I have started a project to create an easily-deployable wrapper around PulseAudio and VLC. The system, somewhat cheesily named <a href="https://github.com/flyingsparx/CasaStream" target="_blank">CasaStream</a> and currently written primarily in Python, relies on a network containing one machine running a CasaStream Master server and any number of machines running a CasaStream Slave server.</p>
<img src="/media/blog/casastream1.png" class="large-image blog-image" />
<p>The Master server is responsible for compressing and broadcasting the stream, and the Slaves receive and play the stream back through connected speakers. Although the compression is relatively resource-intensive (at least, for the moment), the Slave server is lightweight enough to be run on low-powered devices, such as the Raspberry Pi. Any machine that is powerful enough to run the Master could also simultaneously run a Slave, so a dedicated machine to serve the music alone is not required.</p>
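<p>Under the hood, a Slave is essentially doing little more than receiving that stream and playing it locally - something like the sketch below, where the multicast address and port just need to match whatever the Master is broadcasting on:</p>
<pre class="shell">
# Receive the Master's MP3-over-RTP broadcast and play it through the local speakers
$ cvlc rtp://@239.255.12.42:5004
</pre>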
<img src="/media/blog/casastream2.png" class="blog-image" />
<p>The Master server also runs a web interface, which allows the whole system to be enabled and individual Slaves to be enabled or disabled. Slave servers are automatically discovered by the Master, though the scan range can also be altered from the web interface. In addition, the selection of audio sources to stream (and their output volumes) and the renaming of Slaves are available as options. Sound sources are usually automatically detected by PulseAudio (if it is running), so there is generally no manual intervention required to 'force' the detection of sources.</p>
<p>My current setup consists of a Master server running on a desktop machine in the kitchen, and Slave servers running on various other machines throughout the house (including the same kitchen desktop connected to some orbital speakers and a Raspberry Pi connected to the surround sound in the living room). When all running, there is no notable delay between the audio output in the different rooms.</p>
<p>There are a few easily-installable dependencies required to run both servers. Both require Python (works on V2.*, but I haven't tested on V3), and both require the Flask microframework and VLC. For a full list, please see the <a href="https://github.com/flyingsparx/CasaStream/blob/master/README.md" target="_blank">README</a> at the project's home, which also provides more information on the installation and use.</p>
<p>Unfortunately, there are a couple of caveats: firstly, the system is not reliable over WLAN (the sound gets pretty choppy), so a wired connection is recommended. Secondly, if using ethernet-over-power to mitigate the first caveat, then you may experience sound dropouts every 4-5 minutes. To help with this problem, the Slave servers are set to restart the stream every four minutes (by default).</p>
<p>This is quite an annoying issue, however, since having short sound interruptions every few minutes is very noticeable. Some of my next steps with this project, therefore, are based around trying to find a better fix for this. In addition, I'd like to reduce the dependency footprint (the Slave servers really don't need to use a fully-fledged web server), reduce the power requirements at both ends, and to further automate the installation process.</p>

12
_posts/2013-10-05-workshop-presentation-in-germany.md

@ -1,12 +0,0 @@
---
year: 2013
month: 10
day: 5
title: Workshop Presentation in Germany
layout: post
---
<p>Last week I visited Karlsruhe, in Germany, to give a presentation accompanying a recently-accepted paper. The paper, "Inferring the Interesting Tweets in Your Network", was in the proceedings of the Workshop on Analyzing Social Media for the Benefit of Society (<a href="http://www.cs.cf.ac.uk/cosmos/node/12" target="_blank">Society 2.0</a>), which was part of the Third International Conference on Social Computing and its Applications (<a href="http://socialcloud.aifb.uni-karlsruhe.de/confs/SCA2013/" target="_blank">SCA</a>).</p>
<p>Although I only attended the first workshop day, there was a variety of interesting talks on social media and crowdsourcing. My own talk went well and there was some useful feedback from the attendees.</p>
<p>I presented my recent work on the use of machine learning techniques to help in identifying interesting information in Twitter. I rounded up some of the results from the Twinterest experiment we ran a few months ago and discussed how this helped address the notion of information <i>relevance</i> as an extension to global <i>interestingness</i>.</p>
<p>I hadn't been to Germany before this, so it was also a culturally-interesting visit. I was only there for two nights but I tried to make the most of seeing some of Karlsruhe and enjoying the traditional food and local beers!</p>

30
_posts/2014-01-07-llavac.md

@ -1,30 +0,0 @@
---
year: 2014
month: 1
day: 7
title: llavac
layout: post
---
<p>Have you ever wanted to be able write Java in Welsh? No? Neither have I. However, with half an hour spare, I thought it'd be a fun (yet relatively pointless) little project to help learn the basics of Perl.</p>
<p><span class="code">llavac</span> is a Perl script acting as a simple wrapper for the command-line Java compiler (<span class="code">javac</span>). It works by carrying out basic string replacements on a currently incomplete set of Welsh Java keywords in order to create a temporary 'English' Java source file, which is then compiled and deleted.</p>
<p>Below is a simple "helo!" Java program written in Welsh.</p>
<pre class="java">
cyhoedd dosbarth Example{
cyhoedd sefydlog ddi-rym main(String[] args){
System.out.println("helo!");
}
}
</pre>
<p>Use <span class="code">llavac</span> in place of directly running <span class="code">javac</span> to compile it, before running it as a normal Java program. Compiler errors will still be shown in English, however!</p>
<pre class="shell">
$ ./llavac.pl Example.java
$ java Example
</pre>
<p>The script is available from <a href="https://github.com/flyingsparx/llavac" target="_blank">this repository</a>.</p>

15
_posts/2014-01-17-direct-to-s3-uploads-in-node.js.md

@ -1,15 +0,0 @@
---
year: 2014
month: 1
day: 17
title: Direct-to-S3 Uploads in Node.js
layout: post
---
<p>A while ago I wrote an <a href="https://devcenter.heroku.com/articles/s3-upload-python" target="_blank">article</a> for <a href="https://heroku.com" target="_blank">Heroku</a>'s Dev Center on carrying out direct uploads to S3 using a Python app for signing the PUT request. Specifically, the article focussed on Flask but the concept is also applicable to most other Python web frameworks.</p>
<p>I've recently had to implement something similar, but this time as part of an <a href="http://nodejs.org" target="_blank">Node.js</a> application. Since the only difference between the two approaches is literally just the endpoint used to return a signed request URL, I thought I'd post an update on how the endpoint could be constructed in Node.</p>
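<p>The gist of such an endpoint is sketched below. This is a simplified illustration (using Express and the <span class="code">aws-sdk</span> module) rather than a copy of the article's code - the bucket name and AWS credentials are assumed to come from the environment, and the endpoint and query parameter names are just examples:</p>
<pre class="javascript">
var express = require('express');
var aws = require('aws-sdk');   // reads AWS credentials from the environment

var app = express();
var S3_BUCKET = process.env.S3_BUCKET;

// Return a signed PUT URL that the browser can upload to directly
app.get('/sign_s3', function(req, res) {
    var s3 = new aws.S3();
    var params = {
        Bucket: S3_BUCKET,
        Key: req.query.file_name,
        Expires: 60,                      // signed URL is valid for 60 seconds
        ContentType: req.query.file_type,
        ACL: 'public-read'
    };
    s3.getSignedUrl('putObject', params, function(err, signedUrl) {
        if (err) {
            console.log(err);
            return res.end();
        }
        res.write(JSON.stringify({
            signed_request: signedUrl,
            url: 'https://' + S3_BUCKET + '.s3.amazonaws.com/' + req.query.file_name
        }));
        res.end();
    });
});

app.listen(process.env.PORT || 5000);
</pre>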
<p>The front-end code in the companion repository demonstrates an example of how the endpoint can be queried to retrieve the signed URL, and is available <a href="https://github.com/flyingsparx/FlaskDirectUploader/blob/master/templates/account.html" target="_blank">here</a>. Take a look at that repository's README for information on the front-end dependencies.</p>
<p>The full example referenced by the Python article is in a <a href="https://github.com/flyingsparx/FlaskDirectUploader" target="_blank">repository</a> hosted by GitHub and may be useful in providing more context.</p>

15
_posts/2014-01-28-seminar-at-kings-college-london.md

@ -1,15 +0,0 @@
---
year: 2014
month: 1
day: 28
title: Seminar at King's College London
layout: post
---
<p>Last week, I was invited to give a seminar to the Agents and Intelligent Systems group in the <a href="http://www.kcl.ac.uk/nms/depts/informatics/index.aspx" target="_blank">Department of Informatics</a> at King's College London.</p>
<p>I gave an overview of my PhD research conducted over the past two or three years, from my initial research into retweet behaviours and propagation characteristics through to studies on the properties exhibited by Twitter's social graph and the effects that the interconnection of users have on message dissemination.</p>
<p>I finished by outlining our methods for identifying interesting content on Twitter and by demonstrating its relative strengths and weaknesses as were made clear by crowd-sourced validations carried out on the methodology results.</p>
<p>There were some very interesting and useful questions from the audience, some of which are now being taken into consideration in my thesis. It was also good to visit another computer science department and to hear about the work done independently and collaboratively by its different research groups.</p>
<p>The slides from the seminar are available <a href="http://flyingsparx.net/static/downloads/kcl_seminar_2014.pdf">here</a> and there is a <a href="http://inkings.org/2014/02/03/tweets-and-retweets" target="_blank">blog post</a> about it on the Department of Informatics' website.</p>

11
_posts/2014-03-17-node.js-contribution-to-herokus-dev-center.md

@ -1,11 +0,0 @@
---
year: 2014
month: 3
day: 17
title: Node.js Contribution to Heroku's Dev Center
layout: post
---
<p>I recently wrote a new article for Heroku's Dev Center on carrying out asynchronous direct-to-S3 uploads using Node.js.</p>
<p>The article is based heavily on the previous <a href="http://flyingsparx.net/blog/13/5/7/contribution-to-heroku-dev-center/">Python version</a>, where the only major change is the method for signing the AWS request. This method was outlined in an <a href="http://flyingsparx.net/blog/2014/1/17/direct-to-s3-uploads-in-node.js/">earlier blog post</a>.</p>
<p>The article is available <a href="https://devcenter.heroku.com/articles/s3-upload-node">here</a> and there is also a <a href="https://github.com/flyingsparx/NodeDirectUploader">companion code repository</a> for the example it describes.</p>

11
_posts/2014-03-26-talk-on-open-source-contribution.md

@ -1,11 +0,0 @@
---
year: 2014
month: 3
day: 26
title: Talk on Open-Source Contribution
layout: post
---
<p>Today I gave an internal talk at the School of Computer Science & Informatics about open-source contribution.</p>
<p>The talk <!--[slides available <a href="http://flyingsparx.net/static/downloads/open_source_contributions.pdf">here</a>]--> described some of the disadvantages of the ways in which hobbyists and the non-professional sector publicly publish their code. A lot of the time these projects do not receive much visibility or use from others.</p>
<p>Public contribution is important to the open-source community, which is driven largely by volunteers and enthusiasts, so the point of the talk was to try and encourage people to share expert knowledge through contributing documentation (wikis, forums, articles, etc.), maintaining and adopting packages, and getting more widely involved.</p>

24
_posts/2015-01-20-end-of-an-era.md

@ -1,24 +0,0 @@
---
year: 2015
month: 1
day: 20
title: End of an Era
layout: post
---
<p>I recently received confirmation of my completed PhD! I submitted my thesis in May 2014, passed my viva in September and returned my final corrections in December.</p>
<div id="phd_insta">
<blockquote class="instagram-media" data-instgrm-version="4" style=" background:#FFF; border:0; border-radius:3px; box-shadow:0 0 1px 0 rgba(0,0,0,0.5),0 1px 10px 0 rgba(0,0,0,0.15); margin: 1px; max-width:658px; padding:0; width:99.375%; width:-webkit-calc(100% - 2px); width:calc(100% - 2px);"><div style="padding:8px;"> <div style=" background:#F8F8F8; line-height:0; margin-top:40px; padding:50% 0; text-align:center; width:100%;"> <div style=" background:url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAACwAAAAsCAMAAAApWqozAAAAGFBMVEUiIiI9PT0eHh4gIB4hIBkcHBwcHBwcHBydr+JQAAAACHRSTlMABA4YHyQsM5jtaMwAAADfSURBVDjL7ZVBEgMhCAQBAf//42xcNbpAqakcM0ftUmFAAIBE81IqBJdS3lS6zs3bIpB9WED3YYXFPmHRfT8sgyrCP1x8uEUxLMzNWElFOYCV6mHWWwMzdPEKHlhLw7NWJqkHc4uIZphavDzA2JPzUDsBZziNae2S6owH8xPmX8G7zzgKEOPUoYHvGz1TBCxMkd3kwNVbU0gKHkx+iZILf77IofhrY1nYFnB/lQPb79drWOyJVa/DAvg9B/rLB4cC+Nqgdz/TvBbBnr6GBReqn/nRmDgaQEej7WhonozjF+Y2I/fZou/qAAAAAElFTkSuQmCC); display:block; height:44px; margin:0 auto -44px; position:relative; top:-22px; width:44px;"></div></div><p style=" color:#c9c8cd; font-family:Arial,sans-serif; font-size:14px; line-height:17px; margin-bottom:0; margin-top:8px; overflow:hidden; padding:8px 0 7px; text-align:center; text-overflow:ellipsis; white-space:nowrap;"><a href="https://instagram.com/p/xzT4r4EHl0/" style=" color:#c9c8cd; font-family:Arial,sans-serif; font-size:14px; font-style:normal; font-weight:normal; line-height:17px; text-decoration:none;" target="_top">A photo posted by Will Webberley (@flyingsparx)</a> on <time style=" font-family:Arial,sans-serif; font-size:14px; line-height:17px;" datetime="2015-01-13T17:00:22+00:00">Jan 13, 2015 at 9:00am PST</time></p></div></blockquote>
</div>
<script async defer src="//platform.instagram.com/en_US/embeds.js"></script>
<script>
setTimeout(function(){
var embed = document.getElementById("phd_insta").getElementsByClassName("instagram-media")[0];
if(embed != null){ embed.style.margin = "1px auto"; } // centre the Instagram embed once it has loaded
},2000);
</script>
<p>I was examined internally by <a href="http://burnap.org" target="_blank">Dr Pete Burnap</a> and also by <a href="http://www.iis.ee.ic.ac.uk/~j.pitt/Home.html" target="_blank">Dr Jeremy Pitt</a> of Imperial College London.</p>
<p>The whole PhD was an amazing experience, even during the more stressful moments. I learnt a huge amount across many domains and I cannot thank my supervisors, <a href="http://users.cs.cf.ac.uk/Stuart.M.Allen" target="_blank">Dr Stuart Allen</a> and <a href="http://users.cs.cf.ac.uk/R.M.Whitaker" target="_blank">Prof Roger Whitaker</a>, enough for their fantastic support and guidance throughout.</p>

26
_posts/2015-01-27-nhs-hack-day.md

@ -1,26 +0,0 @@
---
year: 2015
month: 1
day: 27
title: NHS Hack Day
layout: post
---
<p>This weekend I took part in the <a href="http://nhshackday.com" target="_blank">NHS Hack Day</a>. The idea of the event is to bring healthcare professionals together with technology enthusiasts in order to build stuff that is useful for those within the NHS and for those that use it. It was organised by <a href="https://twitter.com/amcunningham" target="_blank">AnneMarie Cunningham</a>, who did a great job in making the whole thing run smoothly!</p>
<img src="/media/blog/nhshackday2.jpg" class="large-image blog-image" />
<p class="small">This was our team! The image is released under a Creative Commons BY-NC2.0 license by <a href="https://www.flickr.com/photos/paul_clarke" target="_blank">Paul Clarke</a>.</p>
<p>I was asked to go along and give a hand by <a href="http://martinjc.com" target="_blank">Martin</a>, who also had four of his MSc students with him. <a href="http://mattjw.net" target="_blank">Matt</a>, previously from <a href="http://cs.cf.ac.uk" target="_blank">Cardiff CS&I</a>, also came to provide his data-handling expertise.</p>
<img src="/media/blog/nhshackday.png" class="large-image blog-image" />
<p>We built a webapp, called <a href="http://compjcdf.github.io/nhs_hack/app.html" target="_blank">Health Explorer Wales</a>, that attempts to visualise various data for health boards and communities in Wales. One of the main goals of the app was to make it maintainable, so that users in future could easily add their own geographic or numeric data to visualise. For this, it was important to decide on an extensible <a href="https://github.com/CompJCDF/nhs_hack/blob/master/data/descriptors.json" target="_blank">data schema</a> for describing data, and suitable data formats.</p>
<p>Once the schema was finalised, we were able to go ahead and build the front-end, which used <a href="http://d3js.org" target="_blank">D3.js</a> to handle the visualisations. This was the only third-party library we used in the end. The rest of the interface included controls, such as a dataset-selector and controls for sliding back through time (for timeseries data). The app is purely front-end, which means it can essentially be shipped as a single HTML file (with linked scripts and styles).</p>
<p>We also included an 'add dataset' feature, which allows users to add a dataset to be visualised, as long as the schema is observed. In true hackathon style, any exceptions thrown will currently cause the process to fail silently ;) The <a href="https://github.com/CompJCDF/nhs_hack" target="_blank">GitHub repository</a> for the app contains a wiki with some guidance on data-formatting. Since the app is front-end only, any data added is persisted using HTML5 local storage and is therefore user-specific.</p>
<p>Generally, I am pleased with the result. The proof-of-concept is (mostly) mobile-friendly, and allows for easily showing off data in a more comprehensible way than through just using spreadsheets. Although we focussed on visualising only two datatypes initially (we all <3 <a href="https://twitter.com/_r_309" target="_blank">#maps</a>), we hope to extend this by dropping in modules for supporting new formats in the future.</p>
<p>There were many successful projects completed as part of the event, including a new 'eye-test' concept involving a zombie game using an Oculus Rift and an app for organising group coastal walks around Wales. A full list of projects is available on the event's <a href="http://nhshackday.com/previous/events/2015/01/cardiff" target="_blank">website</a>. I really enjoyed the weekend and hope to make the next one in London in May!</p>

13
_posts/2015-02-05-developing-useful-apis-for-the-web.md

@ -1,13 +0,0 @@
---
year: 2015
month: 2
day: 5
title: Developing Useful APIs for the Web
layout: post
---
<p>Yesterday, I gave a talk about my experiences with developing and using RESTful APIs, with the goal of providing tips for structuring such interfaces so that they work in a useful and sensible way.</p>
<iframe style="display:block; margin:10px auto;" src="https://docs.google.com/presentation/d/1lKIx5LZNOWhUgv299sMev5lASTyxIiMC1XUZCsZgYUc/embed?start=false&loop=false&delayms=5000" frameborder="0" width="480" height="299" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
<p>I went back to first principles, with overviews of basic HTTP messages as part of the request-response cycle and of using sensible status codes in HTTP responses. I discussed the benefits of 'collection-oriented' endpoint URLs to identify resources that can be accessed and modified, and the use of HTTP methods to describe what to do with these resources.</p>
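<p>To give a flavour of this structure, the endpoints for a hypothetical 'articles' resource might look something like the following, with the status codes indicating the outcome of each request:</p>
<pre>
GET    /articles          200 - list the collection
POST   /articles          201 - create a new article
GET    /articles/42       200 - retrieve one article (or 404 if it doesn't exist)
PUT    /articles/42       200 - update an existing article
DELETE /articles/42       204 - remove it
</pre>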

19
_posts/2015-02-18-web-and-social-computing.md

@ -1,19 +0,0 @@
---
year: 2015
month: 2
day: 18
title: Web and Social Computing
layout: post
---
<p>This week I begin lecturing a module for <a href="http://cs.cf.ac.uk" target="_blank">Cardiff School of Computer Science and Informatics</a>' postgraduate MSc course in <a href="http://courses.cardiff.ac.uk/postgraduate/course/detail/p071.html" target="_blank">Advanced Computer Science</a>.</p>
<p>The module is called Web and Social Computing, with the main aim being to introduce students to the concepts of social computing and web-based systems. The course will include both theory and practical sessions in order to allow them to enhance their knowledge derived from literature with the practice of key concepts. We'll also have lots of guest lectures from experts in specific areas to help reinforce the importance of this domain.</p>
<p>As part of the module, I will encourage students to try and increase their web-presence and to interact with a wider community on the Internet. They'll do this by engaging more with social media and by maintaining a blog on things they've learned and researched.</p>
<p>Each week, the students will give a 5-minute <a href="http://en.wikipedia.org/wiki/Ignite_%28event%29" target="_blank">Ignite-format</a> talk on the research they've carried out. The quick presentation style will allow everyone in the group to convey what they feel are the most important and relevant parts in current research across many of the topics covered in the module.</p>
<p>We'll cover quite a diverse range of topics, starting from an introduction to networks and a coverage of mathematical graph theory. This will lead on to social networks, including using APIs to harvest data in useful ways. Over the last few weeks, we'll delve into subjects around socially-driven business models and peer-to-peer finance systems, such as BitCoin.</p>
<p>During the course, I hope that students will gain practical experience with various technologies, such as <a href="https://networkx.github.io" target="_blank">NetworkX</a> for modelling and visualising graphs in Python, <a href="http://www.cs.waikato.ac.nz/ml/weka" target="_blank">Weka</a> for some machine learning and classification, and good practices for building and using web APIs.</p>

30
_posts/2015-04-28-media-and-volume-keys-in-i3.md

@ -1,30 +0,0 @@
---
year: 2015
month: 4
day: 28
title: Media and volume keys in i3
layout: post
---
<p>As is the case with many people, all music I listen to on my PC these days plays from the web through a browser. I'm a heavy user of Google Play Music and SoundCloud, and using Chrome to handle everything means playlists and libraries (and the way I use them through extensions) sync up properly everywhere I need them.</p>
<p>On OS X I use <a href="http://beardedspice.com" target="_blank">BeardedSpice</a> to map the keyboard media controls to browser-based music-players, and the volume keys adjust the system volume as they should. Using <a href="https://i3wm.org" target="_blank">i3</a> (and other lightweight window managers) can make you realise what you take for granted when using more fully-fledged arrangements, but it doesn't take long to achieve the same functionality on such systems.</p>
<p>A quick search revealed <a href="https://github.com/borismus/keysocket" target="_blank">keysocket</a> - a Chrome extension that listens out for the hardware media keys and is able to interact with a large list of supported music websites. In order to get the volume controls working, I needed to map i3 through to <span class="code">alsa</span>, and this turned out to be pretty straight-forward too. It only required the addition of three lines to my i3 config to handle the volume-up, volume-down, and mute keys:</p>
<pre>
bindsym XF86AudioRaiseVolume exec amixer -q set Master 4%+ unmute
bindsym XF86AudioLowerVolume exec amixer -q set Master 4%- unmute
bindsym XF86AudioMute exec amixer -q set Master toggle
</pre>
<p>And for fun added the block below to <span class="code">~/.i3status.conf</span> to get the volume displayed on the status bar:</p>
<pre>
volume master {
format = "♪ %volume "
device = "default"
mixer = "Master"
mixer_idx = 0
}
</pre>

28
_posts/2015-05-01-using-weka-in-go.md

@ -1,28 +0,0 @@
---
year: 2015
month: 5
day: 1
title: Using Weka in Go
layout: post
---
<p>A couple of years ago I wrote a <a href="https://flyingsparx.net/blog/13/6/12/wekapy" target="_blank">blog post</a> about wrapping some of <a href="http://www.cs.waikato.ac.nz/ml/weka" target="_blank">Weka</a>'s classification functionality to allow it to be used programmatically in Python programs. A small project I'm currently working on at home is around taking some of the later research from my PhD work to see if it can be expressed and used as a simple web-app.</p>
<p>I began development in <a href="https://golang.org" target="_blank">Go</a> as I hadn't yet spent much time working with the language. The research work involves using a Bayesian network classifier to help infer a <a href="http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6686092&tag=1" target="_blank">tweet's interestingness</a>, and while Go machine-learning toolkits do <a href="http://biosphere.cc/software-engineering/go-machine-learning-nlp-libraries" target="_blank">exist</a>, I wanted to use my existing models that were serialized in Java by Weka.</p>
<p>I started working on <a href="https://github.com/flyingsparx/WekaGo" target="_blank">WekaGo</a>, which is able to programmatically support simple classification tasks within a Go program. It essentially just manages the model, abstracts the generation of <a href="http://www.cs.waikato.ac.nz/ml/weka/arff.html" target="_blank">ARFF</a> files, and executes the necessary Java to make it quick and easy to train and classify data:</p>
{% highlight go %}
model := wekago.NewModel("bayes.BayesNet")
...
model.AddTrainingInstance(train_instance1)
...
model.Train()
model.AddTestingInstance(train_instance1)
...
model.Test()
{% endhighlight %}
<p>Results from the classification can then be examined, as <a href="https://github.com/flyingsparx/WekaGo/blob/master/README.md" target="_blank">described</a>.</p>

31
_posts/2015-05-12-nintendos-hotspot-api.md

@ -1,31 +0,0 @@
---
year: 2015
month: 5
day: 12
title: Nintendo's Hotspot 'API'
layout: post
---
<p>Since getting a DS, I've found <a href="http://www.nintendo.com/3ds/built-in-software/streetpass" target="_blank">StreetPass</a> quite addictive. It's actually pretty fun checking the device after walking through town or using public transport to see a list of Miis representing the people you've been near recently, and the minigames (such as StreetPass Quest) that require you to 'meet' people in order to advance also make it more involved. Essentially the more you're out and about, the further you can progress - this is further accentuated through Play Coins, which can be used to help 'buy' your way forward and are earned for every 100 steps taken whilst holding the device.</p>
<img src="/media/blog/nintendozone2.png" class="blog-image" />
<p>The DS systems can also use relay points in Nintendo Zone hotspots to collect StreetPass hits. These zones are special WiFi access points hosted in certain commercial venues (e.g. in McDonalds and Subway restaurants), and allow you to 'meet' people around the world who also happen to be in another Nintendo Zone at the same time. As such, users can get a lot of hits very quickly (up to a maximum of 10 at a time). There are various ways people have <a href="https://gbatemp.net/threads/how-to-have-a-homemade-streetpass-relay.352645" target="_blank">found</a> to set up a 'home' zone, but Nintendo have also published a <a href="https://microsite.nintendo-europe.com/hotspots" target="_blank">map</a> to display official nearby zones.</p>
<p>However, their map seems a little clunky to use while out and about, so I wanted to see if there could be an easier way to get this information more quickly. When using the map, the network logs revealed <span class="code">GET</span> requests being made to:</p>
<pre>
https://microsite.nintendo-europe.com/hotspots/api/hotspots/get
</pre>
<p>The location for which to retrieve data is specified through the <span class="code">zoom</span> and <span class="code">bbox</span> parameters, which seem to map directly to the zoom level and the bounds reported by the underlying Google Maps API being used. For some reason, the parameter <span class="code">summary_mode=true</span> also needs to be set. As such, an (unencoded) request for central Cardiff may look like this:</p>
<pre>
/hotspots/api/hotspots/get?summary_mode=true&zoom=18&bbox=51.480043,-3.180592,51.483073,-3.173028
</pre>
<p>Where the coordinates (<span class="code">51.480043,-3.180592</span>) and (<span class="code">51.483073,-3.173028</span>) respectively represent the lower-left and upper-right corners of the bounding box. The response is in JSON, and contains a lat/lng for each zone, a name, and an ID that can be used to retrieve more information about the host's zone using this URL format:</p>
<pre>https://microsite.nintendo-europe.com/hotspots/#hotspot/&lt;ID&gt;</pre>
<p>When the map is zoomed-out (to prevent map-cluttering) a zone 'group' might be returned instead of an individual zone, for each of which the size is indicated. Zooming back in to a group then reveals the individual zones existing in that area.</p>
<img src="/media/blog/nintendozone1.png" class="blog-image right" />
<p>It seems that this server endpoint does not support cross-origin resource-sharing (CORS), which means that the data is not retrievable for a third-party web-app (at least, without some degree of proxying) due to browser restrictions. However, and especially since the endpoint currently requires no session implementation or other kind of authentication, the data seems very easily retrievable and manageable for non-browser applications and other types of systems.</p>
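<p>For example, the central Cardiff request above can be replayed with any non-browser HTTP client, such as <span class="code">curl</span>:</p>
<pre>
$ curl 'https://microsite.nintendo-europe.com/hotspots/api/hotspots/get?summary_mode=true&zoom=18&bbox=51.480043,-3.180592,51.483073,-3.173028'
</pre>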

17
_posts/2015-05-27-android-consuming-nintendo-hotspot-data.md

@ -1,17 +0,0 @@
---
year: 2015
month: 5
day: 27
title: "Android: Consuming Nintendo Hotspot Data"
layout: post
---
<img src="/media/blog/android-hotspot.png" class="left">
<p>I recently <a href="https://flyingsparx.net/blog/2015/5/12/nintendos-hotspot-api" target="_blank">blogged about</a> Nintendo Hotspot data and mentioned it could be more usefully consumable in a native mobile app.</p>
<p>As such, I wrote a small Android app for retrieving this data and displaying it on a Google Map. The app shows nearby hotspots, allows users to also search for other non-local places, and shows information on the venue hosting the zone.</p>
<p>The app is available on the <a href="https://play.google.com/store/apps/details?id=net.flyingsparx.spotpassandroid" target="_blank">Play Store</a> and its source is published on <a href="https://github.com/flyingsparx/NZone-finder" target="_blank">GitHub</a>.</p>
<div class="clear"></div>

26
_posts/2017-03-16-two-year-update.md

@ -1,26 +0,0 @@
---
title: Two Year Update
layout: post
---
I haven't written a post since summer 2015. It's now March 2017 and I thought I'd write an update very briefly covering the last couple of years.
I finished researching and lecturing full-time in the summer of 2015. It felt like the end of an era; I'd spent around a third of my life at the <a href="http://www.cardiff.ac.uk/computer-science" target="_blank">School of Computer Science & Informatics</a> at <a href="http://cf.ac.uk" target="_blank">Cardiff University</a>, and had experienced time there as an undergraduate through to postgrad and on to full-time staff. However, I felt it was time to move on and to try something new, although I was really pleased to be able to continue working with them on a more casual part-time basis - something that continues to today.
In that summer after leaving full-time work at Cardiff I went <a href="http://www.interrail.eu" target="_blank">interrailing</a> around Europe with my friend, Dan. It was an amazing experience through which I had a taste of many new European cities where we met lots of interesting people. We started by flying out to Berlin, and from there our route took us through Prague, Krakow, Budapest, Bratislava, Vienna, Munich, Koblenz, Luxembourg City, Brussels, Antwerp, and then finished in Amsterdam (which I'd been to before, but always love visiting).
<img src="/media/blog/interrailing.png" style="width:100%;max-height:none;height:auto;">
<p class="center-align"><em>Some photos from the Interrail trip taken from <a href="https://instagram.com/flyingsparx" target="_blank">my Instagram</a>.</em></p>
After returning, I moved to London to start a new full-time job with <a href="https://www.chaser.io" target="_blank">Chaser</a>. Having met the founders David and Mark at a previous <a href="https://www.siliconmilkroundabout.com" target="_blank">Silicon Milkroundabout</a>, Chaser was so great to get involved with - I was part of a fab team creating fin-tech software with a goal to help boost the cashflows in small-medium sized businesses. Working right in the City was fun and totally different to what seemed like a much quieter life in Cardiff. Whilst there, I learned loads more about web-based programming and was able to put some of the data-analysis skills from my PhD to use.
At the end of 2015 I moved back to South Wales to begin a new job at <a href="https://simplydo.io" target="_blank">Simply Do Ideas</a> as a senior engineer. Again, this was a totally different experience involving a shift from fin-tech to ed-tech and a move from the relentless busy-ness of London to the quieter (but no less fun) life of Caerphilly - where our offices were based. Since I was to head the technical side of the business, I was able to put my own stamp on the company and the product, and to help decide its future and direction.
<img src="/media/blog/sdi_bett.jpg" class="blog-image large-image">
<p class="center-align"><em>Myself and Josh representing Simply Do Ideas at Bett 2017 in London.</em></p>
In February 2016 I was honoured to be promoted to the Simply Do Ideas board and to be made the company's Chief Technology Officer. Over the last year, the rest of the team and I have been proud to be part of a company that has grown to be highly respected in a really interesting and exciting domain, and we're all very excited about what's to come in the near (and far) future!
I still continue to work with Cardiff University on some research projects and to help out with some of the final-year students there; I hope to write a little more about this work soon.
I feel so lucky to have been able to experience so much in such a short time frame - from academic research and teaching, to being a key part of two growth startups, heading a tech company's technology arm, being a member of a board alongside highly-respected and successful entrepreneurs and business owners, and getting to meet such a wide range of great people. I feel like I've grown and learned so much - both professionally and personally - from all of my experiences and from everyone I've met along the way.

39
_posts/2017-06-22-cenode.md

@ -1,39 +0,0 @@
---
title: CENode
layout: post
---
Whilst working on the [ITA Project](http://usukita.com) - a collaborative research programme between the UK MoD and the US Army Research Laboratory - over the last few years, one of my primary areas has been to research around controlled natural languages, and working with [Cardiff University](http://cf.ac.uk) and [IBM UK](https://www.ibm.com/uk-en)'s [Emerging Technology](https://emerging-technology.co.uk) team to develop CENode.
As part of the project - before I joined - researchers at IBM developed the [CEStore](https://github.com/ce-store/ce-store), which aims to provide tools for working with [ITA Controlled English](https://developer.ibm.com/open/2016/06/16/ce-store-and-controlled-english-puts-ita-science-library-in-the-spotlight). Controlled English (CE) is a subset of the English language which is structured in a way that attempts to remove ambiguity from statements, enabling machines to understand 'English'
inputs.
Such a language was developed partly to support multi-agent systems consisting of a mixture of humans and machines, and to allow each agent to be able to communicate with one another using the same protocol in coalition scenarios. In these systems, there may be agents on the ground who submit information to the CEStore in CE, which is able to parse and understand the inputs. The CEStore may then pass the information on to other interested parties or may give an agent (such as a drone,
camera, sensor, or other equipment) a task (follow, intersect, watch, etc.) based on the complement of the existing knowledge and the new input.
An [old example](https://pdfs.semanticscholar.org/d5d5/65fcadcb35579b5ee25cdaa713afa14f7835.pdf) we use combines the CEStore with a system capable of assigning missions to sensors or equipment (see [this paper](https://users.cs.cf.ac.uk/A.D.Preece/publications/download/spie2012a.pdf)). This example focuses on 'John Smith', who is known to the CE system as a HVT (high-value target) owning a black car with licence plate 'ABC 123'. A human agent on the ground may later observe a speeding car and issue information into the system through an interface on their mobile device or via a microphone;
`there is a car named car1 which has black as colour and has 'ABC 123' as licence plate and is travelling north on North Road`
The system receiving the message can put together that this speeding car most likely contains John Smith (since it's known that he owns a car with this licence plate), and so can task a nearby drone to follow it based on the coordinates of the road and the direction of travel.
A human agent being able to type or speak this precise type of English is unlikely, particularly in emergency or rapid-response scenarios, and so the CEStore has a level of understanding of 'natural' language, and is able to translate many sentences from natural language English into CE - enabling agents to, largely, speak in a more native fashion.
The usefulness of the CEStore project led us to consider possibilities of a (lighter) version of a CEStore that could run on mobile devices in a decentralised network of CE-capable devices without relying on a centralised node responsible for parsing and translating all CE inputs. Such a system would also have the benefit of supporting a network of distributed 'nodes', each with the ability to maintain their own distinct knowledge bases and to understand and 'speak' CE - and thus the
concept for CENode was produced.
A key motivation for this was to support those agents who may not have a consistent network connection to a central server, but who still need knowledge support and the ability to report information - thus building the local knowledge base and improving inferences. Then, once the agent can re-establish a connection to other nodes, new information can propagate through the network.
The [CENode](http://cenode.io) project (with [source hosted on GitHub](https://github.com/flyingsparx/CENode)) began with a focus on supporting our [SHERLOCK experiments](http://ieeexplore.ieee.org/abstract/document/7936494), which had traditionally been powered using the CEStore. Using CENode, users of SHERLOCK experienced benefits such as auto-correct and typing suggestions, the ability to continue working offline (with information syncing when a network is re-established), and the display of a personalised 'dashboard' indicating the local agent's view of the world represented by the game.
The SHERLOCK experiment was even [covered by the BBC](http://www.bbc.co.uk/news/technology-34423291).
Since then, the CENode project has grown, and many of the features enjoyed by the CEStore (which is written in Java and deployed using Apache Tomcat) have been re-implemented for CENode. The library supports rules that fire given specific inputs, simple natural language understanding and parsing, querying through CE inputs, the CE cards [blackboard architecture](https://pdfs.semanticscholar.org/d5d5/65fcadcb35579b5ee25cdaa713afa14f7835.pdf), and policies - enabling CENode instances to communicate with each other in different topologies.
CENode is written in JavaScript, since this allows it to be downloaded to and cached on any JavaScript-supporting browser (for example, on a mobile phone or tablet), and to run as a Node app.
In addition to using the CE-based ('cards') interfaces, CENode can be interacted with using the JavaScript bindings and can expose RESTful APIs when run as a Node app, enabling several types of CENode deployments to work together as part of a single system.
Check out a demo of the library [here](http://cenode.io/demo/index.html), which wraps a simple user interface around the library's JavaScript bindings. In the demo, the local CENode agent is preloaded with some knowledge about planets and stars. Try asking it questions or teaching it something new. Additionally, we have deployed a service called [CENode Explorer](http://explorer.cenode.io) which can launch cloud-based CENode instances and allows you to browse the knowledge base.
We hope to continue to maintain CENode as part of the project, and to discover more interesting use-cases. There are already clear pathways for its use in voice assistants, bots, and as a protocol for communication in IoT devices (some work for which is already underway). Those interested in developing with the library can get started using [the CENode Wiki](https://github.com/flyingsparx/CENode/wiki).

68
_posts/2017-06-26-cenode-iot.md

@ -1,68 +0,0 @@
---
title: CENode in IoT
layout: post
---
In my [previous post](/2017/06/22/cenode/) I discussed CENode and briefly mentioned its potential for use in interacting with the Internet of Things. I thought I'd add a practical example of how it might be used for this and for 'tasking' other systems.
I have a few [Philips Hue](http://www2.meethue.com/en-US) bulbs at home, and the Hue Bridge that enables interaction with the bulbs exposes a nice RESTful API. My aim was to get CENode to use this API to control my lights.
A working example of the concepts in this post is available [on GitHub](https://github.com/flyingsparx/CENode-IoT) (as a small webapp) and here's a short demo video (which includes a speech-recognition component):
<iframe src="https://player.vimeo.com/video/223169323" width="640" height="480" style="margin:20px auto;display:block; max-width: 100%;" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
The first step was to [generate a username for the Bridge](https://developers.meethue.com/documentation/configuration-api#71_create_user), which CENode can use to authenticate requests through the API.
I use [CE cards](https://pdfs.semanticscholar.org/d5d5/65fcadcb35579b5ee25cdaa713afa14f7835.pdf) to supply instructions to a CENode agent, since this is the generally recognised method for interaction between CE-capable devices. When instantiating a node, any number of CE 'models' may be passed in order to form a base knowledge set to work from. Here is such a model for giving CENode a view of the Hue 'world':
{% highlight javascript %}
const lightModel = [
'conceptualise a ~ hue bridge ~ h that has the value V as ~ address ~ and has the value W as ~ token ~',
'conceptualise a ~ hue bulb ~ h that has the value C as ~ code ~ and has the value V as ~ strength ~',
'conceptualise an ~ iot card ~ I that is a card and ~ targets ~ the hue bulb D and has the value P as ~ power ~ and has the value B as ~ brightness ~ and has the value S as ~ saturation ~ and has the value H as ~ hue ~ and has the value C as ~ colour ~',
'there is a hue bridge named bridge1 that has \'192.168.1.2\' as address and has \'abc123\' as token',
];
{% endhighlight %}
The model tells the node about Hue Bridges, bulbs, and a new type of card called an `iot card`, which supports properties for controlling bulbs. Finally, we instantiate a single bridge with an appropriate IP address and the username/token generated earlier.
Next the CENode instance needs to be created and its agent prepared:
{% highlight javascript %}
const node = new CENode(CEModels.core, lightModel);
const hueBridge = node.concepts.hue_bridge.instances[0];
updateBulbs();
node.attachAgent();
node.agent.setName('House');
{% endhighlight %}
The `updateBulbs()` function ([see it here](https://github.com/flyingsparx/CENode-IoT/blob/master/app.js)) makes a request to the Bridge to download data about known Hue bulbs, which are added to the node's knowledge base. For example;
```
there is a hue bulb named 'Lounge' that has '7' as code
```
The `code` property is the unique identifier the bridge uses to determine the bulb on the network.
Finally, all that was needed was to include a handler function for `iot card`s and to add this to the CENode agent:
{% highlight javascript %}
node.agent.cardHandler.handlers['iot card'] = (card) => {
if (card.targets){
const data = {};
if (card.power) data.on = card.power === 'on';
if (card.brightness) data.bri = parseInt(card.brightness)
if (card.saturation) data.sat = parseInt(card.saturation)
if (card.hue) data.hue = parseInt(card.hue)
request('PUT', hueBridge, '/lights/' + card.targets.code + '/state', data);
}
};
{% endhighlight %}
The function makes an appropriate request to the Hue Bridge based on the properties of the `iot card`. Now, we can submit sentences like this in order to interact with the system (e.g. to turn the 'Lounge' bulb on):
```
there is an iot card named card1 that is to the agent House and has 'instruction' as content and targets the hue bulb 'Lounge' and has 'on' as power
```
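For completeness, the `request()` helper referenced by the handler might look roughly like the sketch below. This is a hypothetical version rather than the demo's actual code - it assumes the bridge instance exposes its `address` and `token` values as plain properties, and uses Node's built-in `http` module:
{% highlight javascript %}
const http = require('http');

// Hypothetical sketch of the request() helper (the real one is in the demo repo).
// It PUTs a JSON body to the Hue Bridge's REST API, e.g.
//   PUT http://<bridge address>/api/<token>/lights/7/state   {"on": true}
function request(method, bridge, path, body) {
  const payload = JSON.stringify(body);
  const req = http.request({
    host: bridge.address,                // assumed accessor on the CENode instance
    method: method,
    path: '/api/' + bridge.token + path,
    headers: { 'Content-Type': 'application/json' },
  }, (res) => {
    res.resume();                        // the bridge's response isn't needed here
  });
  req.on('error', (err) => console.error('Hue request failed:', err.message));
  req.end(payload);
}
{% endhighlight %}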
And that's it, really. This post contains only the more interesting components of the experiment, but hopefully provides an indication of how the library may be used for simple inter-device communication. The [full demo](https://github.com/flyingsparx/CENode-IoT/blob/master/app.js) includes extra code to handle the UI for a webapp and extra utility functions.

38
_posts/2017-07-19-cenode-alexa.md

@ -1,38 +0,0 @@
---
title: "Alexa, ask Sherlock..."
layout: post
---
I have recently [posted about CENode](/2017/06/22/cenode/) and how it might be [used in IoT systems](/2017/06/26/cenode-iot/).
Since CENode is partially designed to communicate directly with humans (particularly those out and about or "in the field") it makes sense for inputs and queries to be provided via voice in addition to or instead of a text interface. Whilst this has been explored in the browser (including in the [previous Philips Hue control demo](/2017/06/26/cenode-iot/)), it made sense to also try to leverage the Alexa voice service to interact with a CENode instance.
The [Alexa Voice Service](https://developer.amazon.com/alexa-voice-service) and [Alexa Skills Kit](https://developer.amazon.com/alexa-skills-kit) are great to work with, and it was relatively straightforward to create a skill to communicate with CENode's [RESTful API](https://github.com/flyingsparx/CENode/wiki/CEServer-Usage).
The short video below demonstrates this through using an Amazon Echo to interact with a standard, non-modified CENode instance running on [CENode Explorer](http://explorer.cenode.io) that is partly pre-loaded with the "space" scenario used in our main [CENode demo](http://cenode.io/demo/index.html). The rest of the post discusses the implementation and challenges.
<iframe src="https://player.vimeo.com/video/226199106" width="640" height="480" style="margin:20px auto;display:block; max-width: 100%;" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
Typical Alexa skills are split into ["intents"](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference), which describe the individual ways people might interact with the service. For example, the questions "what is the weather like today?" and "is it going to rain today?" may be two intents of a single weather skill.
The skill logic is handled by [AWS Lambda](https://aws.amazon.com/lambda), which is used to associate each intent with an action. When someone gives a voice command, the Alexa Voice Service (AVS) determines which intent is being called for which service, and then passes the control over to the appropriate segment in the Lambda function. The function returns a response to the AVS, which is read back out to the user.
The strength of Alexa's ability to recognise speech is largely dependent on the information given to build each intent. For example, the intent "what is the weather like in {cityName}?", where `cityName` is a variable with several different possibilities generated during the build, will accurately recognise speech initiating this intent because the sentence structure is so well defined. A single intent may have several ways of calling it - "what's the weather like in...", "tell me what
the weather is in...", "what's the weather forecast for...", etc. - which can be bundled into the model to further improve the accuracy even in noisy environments or when spoken by people with strong accents.
Since CENode is designed to work with an entire input string, however, the voice-to-text accuracy is much lower, and thus determining the intent and its arguments is harder. Since we need CENode to handle the entire input, our demo only has a single intent with two methods of invocation (slots):
- `ask Sherlock {sentence}`
- `tell Sherlock {sentence}`
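For context, the 2017-era Skills Kit defined intents with a JSON schema plus sample utterances; a sketch of what a single free-form intent like this might look like is below. The intent name, and the use of a literal-style slot to capture the whole sentence, are my own illustration rather than the skill's actual configuration:
{% highlight json %}
{
  "intents": [
    {
      "intent": "SherlockIntent",
      "slots": [
        { "name": "sentence", "type": "AMAZON.LITERAL" }
      ]
    }
  ]
}
{% endhighlight %}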
Since 'Sherlock' is also provided as the invocation word for the service, both slots implicitly indicate both the service and the single intent to work with. I used 'Sherlock' as the name for the skill as it's a name we've used before for CENode-related apps and it is an easy word for Alexa to understand!
`sentence` is the complete body to be processed by CENode - e.g. "Jupiter is a planet" or "what is Jupiter?" - giving a typical full Echo invocation: "Alexa, tell Sherlock Jupiter is a planet". The `Alexa` segment tells the Echo to begin listening, the `tell Sherlock` component determines the skill and intent to use, and the remainder of the sentence is the body provided to CENode.
Since there is only a single intent, it makes no difference whether 'ask' or 'tell' is used in the invocation: it is CENode that works out what is meant from the sentence body - whether a question or an input of information. The two slots exist only for the benefit of the human user, so invocations such as "tell Sherlock what is Jupiter?" still work.
At this stage, the AWS Lambda function handling the intent makes a standard HTTP POST request to a CENode instance, and the response is passed straight back to the Alexa service to be read out to the user. As such, CENode itself handles all error cases and misunderstood inputs, leaving the Alexa skill and its Lambda function very 'thin' in this scenario.
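To give a flavour of how thin it is, a minimal sketch of such a Lambda handler is below. The hostname and `/sentences` endpoint are placeholders (the actual API is described in the CEServer wiki page linked above), and the response shape is the standard Alexa skill response format:
{% highlight javascript %}
// A minimal sketch of the Lambda handler, not the actual skill code.
// 'cenode.example.com' and '/sentences' are placeholders for wherever the
// CENode instance runs and however its REST API is exposed.
const http = require('http');

exports.handler = (event, context, callback) => {
  const sentence = event.request.intent.slots.sentence.value;

  const req = http.request({
    host: 'cenode.example.com',
    path: '/sentences',
    method: 'POST',
    headers: { 'Content-Type': 'text/plain' },
  }, (res) => {
    let reply = '';
    res.on('data', (chunk) => { reply += chunk; });
    res.on('end', () => {
      // Hand CENode's textual response straight back to the Alexa Voice Service
      callback(null, {
        version: '1.0',
        response: {
          outputSpeech: { type: 'PlainText', text: reply },
          shouldEndSession: true,
        },
      });
    });
  });
  req.write(sentence);
  req.end();
};
{% endhighlight %}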
<img src="/media/blog/cenode-alexa.png" style="width:100%;max-width:620px;max-height:none;height:auto;">