Compare commits

352 Commits

SHA1 Message Date [CI status]
(All commits authored by Will Webberley. CI status refers to the ci/woodpecker/push/woodpecker pipeline, where a run was recorded, unless noted otherwise.)

453491082b Add Taskfile 2025-01-07 15:33:21 +00:00
c861a0be37 Add rate limit info to Traefik note 2025-01-07 12:38:47 +00:00
7fd8a8f5e5 Small typo and editorials 2025-01-03 21:40:11 +00:00 [CI: passed]
4cb11d909d Small tweaks to OSA blog post 2025-01-01 11:16:36 +00:00 [CI: passed]
33b113f136 Add online safety act blog post 2024-12-30 23:39:45 +00:00 [CI: passed]
63458222ab Update build image 2024-12-20 20:01:31 +00:00 [CI: passed]
82d13c84b7 Add printer blog post 2024-12-20 17:31:29 +00:00 [CI: failed]
721ecd5e79 Update book updated at date 2024-12-08 19:13:24 +00:00 [CI: passed]
ad450ae6df Update books note. Add service status link. 2024-12-08 19:09:40 +00:00 [CI: passed]
7c7303b682 Update books note 2024-11-02 09:59:04 +00:00 [CI: passed]
30865981cb Add aerc blog post 2024-10-22 19:23:57 +01:00 [CI: passed]
90a88576ed Add aerc note 2024-10-21 20:57:24 +01:00 [CI: passed]
0028d82c81 Update books note 2024-10-20 19:52:45 +01:00 [CI: passed]
a5e99d0b0c Add obsidian note 2024-09-24 21:57:39 +01:00 [CI: passed]
792a062a4a Updated uses note 2024-09-23 19:34:12 +01:00 [CI: passed]
c5a7c9d53a Add nb note and blog post 2024-09-22 10:41:33 +01:00 [CI: passed]
73e4a7e06a Typo fix and small sentence restructure 2024-09-20 17:18:09 +01:00 [CI: passed]
813c55d230 Updated books note 2024-09-07 20:01:03 +01:00 [CI: passed]
8125bf377f Add Hypermedia APIs blog post 2024-09-07 19:56:51 +01:00 [CI: passed]
cf574b6b46 Add note on jrnl 2024-09-01 21:21:56 +01:00 [CI: passed]
4f1938e90d Add note on Tailscale sidecars 2024-09-01 20:24:57 +01:00 [CI: passed]
a93b877a6a Add Borgmatic blog post 2024-08-19 22:24:26 +01:00 [CI: passed]
ca245cb46c attempt to resolve some RSS issues 2024-08-11 19:20:25 +01:00 [CI: passed]
8e13059230 updated Vaultwarden and Photoprism notes 2024-07-29 20:31:16 +01:00 [CI: passed]
2bdb5c8e6c minor tweaks 2024-07-22 13:14:23 +01:00 [CI: passed]
93ac4419f5 use alternative Hugo build container 2024-07-15 22:37:20 +01:00 [CI: passed]
ab6cabde6e tweaks to adoption blog post 2024-07-15 22:31:31 +01:00 [CI: failed]
c54437370d first draft family blog post 2024-07-08 22:11:33 +01:00
4a1e8a4fbc update books and podcasts notes 2024-07-02 22:15:05 +01:00 [CI: failed]
8325801bca ensure images all present 2024-07-02 21:50:38 +01:00
092d5c9b2e Merge branch 'valuable-humans' 2024-07-02 21:48:54 +01:00
864c95db11 update gem capsule processor 2024-07-02 21:48:46 +01:00
00f686134b finalise blog post 2024-07-02 21:12:42 +01:00
c47e7e8fd0 small books update 2024-07-02 19:38:32 +01:00 [CI: failed]
0e9c7ca98e small books update 2024-07-02 19:36:29 +01:00
58ee4ed3c8 add new books 2024-05-03 16:23:04 +01:00 [CI: pending]
60c13b7216 add cloudfront note 2024-05-02 21:02:34 +01:00 [CI: passed]
08d341057c Updated books note 2023-12-18 18:47:40 +00:00 [CI: passed]
8525dfa052 Add Postgres note 2023-10-31 17:52:20 +00:00 [CI: failed]
ba36993f2f Add MongoDB setup note 2023-10-31 17:19:47 +00:00 [CI: failed]
4c7a391da8 Updated books note 2023-10-31 12:08:37 +00:00 [CI: failed]
feb6fd55c0 Add PDF tool link 2023-08-16 12:53:52 +02:00 [CI: passed]
03e6d3d9dc Add some finance and media links 2023-08-16 12:50:30 +02:00 [CI: passed]
96b9a5f8a5 Updated Backblaze URLs 2023-08-03 21:02:54 +01:00 [CI: passed]
285aefdfaa Add recent books 2023-08-01 15:33:39 +01:00 [CI: passed]
496c410340 updated bucket policy 2023-07-04 22:33:22 +01:00 [CI: passed]
40be105f75 Added books to the “books” note 2023-06-11 19:44:15 +01:00 [CI: passed]
68ad56f3c0 Add blog post about automatic gemini publication 2023-06-01 21:42:33 +00:00 [CI: passed]
1beda7d99d Use full width where possible 2023-05-20 12:18:08 +00:00 [CI: passed]
c9af302f36 Updated notes menu 2023-05-20 08:52:25 +00:00 [CI: passed]
907d749e13 Updated link rendering issue 2023-05-20 07:57:26 +00:00 [CI: passed]
bb8de14682 Updated homepage 2023-05-20 07:48:24 +00:00 [CI: passed]
23a35c163e Update styles for tags, notes, posts 2023-05-20 07:24:39 +00:00 [CI: passed]
e2eb4ef179 Updated book note 2023-05-05 19:10:04 +00:00 [CI: passed]
66fc355cb5 Rebuild woodpecker post with new slug 2023-04-23 18:07:06 +00:00 [CI: passed]
97a3468e36 Update s3cmd sync to include mime type 2023-04-23 17:15:02 +00:00 [CI: passed]
1625f63869 Open external links in a new tab, and style with an icon 2023-04-23 17:05:33 +00:00 [CI: passed]
6c1dcbd200 Add Woodpecker note and blog post 2023-04-23 16:52:24 +00:00 [CI: passed]
ac5da699de Update Bunny S3 hosting note 2023-04-23 15:33:59 +00:00 [CI: passed]
e208ff2326 Updated book feed 2023-04-14 09:20:36 +00:00 [CI: passed]
fd6830dc22 Better handle self-references in Gemini 2023-04-13 13:30:01 +00:00 [CI: passed]
a09af6a5e8 Add website links to gem capsule 2023-04-13 13:09:56 +00:00 [CI: passed]
f35324370d Fix typo in reference to About note 2023-04-13 12:42:28 +00:00 [CI: passed]
60a3b2320c Improved link handling 2023-04-13 12:36:08 +00:00 [CI: passed]
72fdc2ea65 Add better date info to gem posts/notes 2023-04-13 08:23:13 +00:00 [CI: passed]
c7a88f83a4 Add notes to gem capsule 2023-04-13 07:39:47 +00:00 [CI: passed]
d4e09f6800 Update ASCII art for Gemini 2023-04-12 18:33:38 +00:00 [CI: passed]
c7a311c892 Another update for bash compatibility 2023-04-12 18:29:30 +00:00 [CI: passed]
574dde8566 Fix issue in build 2023-04-12 18:27:41 +00:00 [CI: failed]
7dee3ef482 Offload Curl data to file 2023-04-12 18:26:28 +00:00
c924bf184f Update build file 2023-04-12 18:15:37 +00:00 [CI: failed]
c02e5f1b12 add deploy phase for Gemini 2023-04-12 18:12:31 +00:00
3f94261fa9 Add icon 2023-03-26 11:38:38 +00:00 [CI: passed]
7034b1c7c4 Attempt to add an RSS feed image 2023-03-26 11:36:45 +00:00 [CI: passed]
07ef3437cf Add an enclosure image to the RSS feed 2023-03-26 11:22:41 +00:00 [CI: passed]
c8bfff7703 Remove deploy script 2023-03-25 18:03:05 +00:00 [CI: passed]
c97615fa75 Attempt to fix deploy bug 2023-03-25 18:00:55 +00:00 [CI: passed]
07b52df9a3 Attempt to escape colon 2023-03-25 17:58:04 +00:00 [CI: passed]
aa31e1801c Remove curl line 2023-03-25 17:53:09 +00:00
dd73aadc6c Try renaming key 2023-03-25 17:51:09 +00:00
243735859e Re-add bunny 2023-03-25 17:48:38 +00:00
1ee4939fe4 try to remove Bunny bits 2023-03-25 17:47:36 +00:00
e42479364f Ensure only runs on main deploy 2023-03-25 17:42:05 +00:00
673cfd19ad Include purge 2023-03-25 17:39:56 +00:00
c50ce8abde Include purge step 2023-03-25 17:39:22 +00:00
3077bbb16d Explicitly load secrets 2023-03-25 17:36:56 +00:00 [CI: failed]
7a7b094428 Echo token 2023-03-25 17:35:35 +00:00 [CI: failed]
d1e3670a5f Check a cat 2023-03-25 17:33:13 +00:00 [CI: failed]
45c4969335 Another try 2023-03-25 17:32:03 +00:00 [CI: failed]
d4cacf7d91 Specify the config file 2023-03-25 17:28:10 +00:00 [CI: failed]
c981bbd205 Another attempt 2023-03-25 17:26:25 +00:00 [CI: failed]
aa3f968182 Update to s3cmd config 2023-03-25 17:22:53 +00:00 [CI: failed]
6c9612bd4c Update s3cmd config 2023-03-25 17:21:26 +00:00 [CI: failed]
c4db8a3954 Fixed typo 2023-03-25 17:14:36 +00:00 [CI: failed]
117f00eb1e Update depoloy stage 2023-03-25 17:13:36 +00:00 [CI: failed]
dda5c8bfb8 Add deploy stage 2023-03-25 16:58:17 +00:00 [CI: failed]
139160d45a Alpine step 2023-03-25 16:56:43 +00:00 [CI: passed]
5afbbd867e Try and install s3cmd 2023-03-25 16:53:59 +00:00 [CI: failed]
ecf8cbb207 Update Hugo image 2023-03-25 16:49:45 +00:00 [CI: passed]
1e27155fad Fix pipeline 2023-03-25 16:44:31 +00:00 [CI: failed]
b1d8d043a0 Trying woodpecker 2023-03-25 16:41:57 +00:00
a89fb05ce6 Add an action 2023-03-25 16:04:28 +00:00 [CI: failed (Explore-Gitea-Actions)]
df568d28ec Add Bunny security headers info 2023-03-24 22:02:34 +00:00
38e23a9b05 Updated podcasts note 2023-03-24 21:46:36 +00:00
2a879f4b86 Add Vaultwarden note 2023-03-24 20:35:39 +00:00
04001b0e18 Updated links note 2023-03-24 20:08:17 +00:00
2a4dc86a58 Add book logging blog post 2023-03-24 18:36:04 +00:00
e512b6546a Adding a couple of books 2023-03-24 17:16:07 +00:00
c0ccd7b2b9 Update book note format 2023-03-22 15:50:49 +00:00
07063e3823 Updated Gemini address 2023-03-22 08:04:28 +00:00
082805308f Add thank you note for exif stripping blog post 2023-03-11 14:27:57 +00:00
a3dd5df935 Fixed header issue 2023-03-07 13:35:21 +00:00
51c2d8141f Further style updates 2023-03-07 13:13:41 +00:00
7fe00a2367 Fixed issue in small horizontal scroll tweaks in the header 2023-03-07 13:07:01 +00:00
43f18ce050 Added blog photos for key posts 2023-03-07 13:00:36 +00:00
3ec8f0b6b8 Small tweaks 2023-03-06 08:56:00 +00:00
d8f8e34cc0 Self host encryption blog post 2023-03-05 09:36:25 +00:00
ee7b83b88b Fixed issue in EXIF tags blog post 2023-02-03 05:53:57 +00:00
bc99836fd8 Add Raspberry Pi note 2022-12-10 17:57:42 +00:00
4756512e6f Updated some notes 2022-12-10 17:16:49 +00:00
f02c35bb97 Added window-hiding workflow blog post 2022-12-10 16:09:10 +00:00
ba1900339d Updated mac setup note 2022-12-04 19:25:32 +00:00
7c720609eb added Mac setup note 2022-12-04 16:19:52 +00:00
af37036a23 Ensure footer is always pushed to the bottom of the viewport 2022-11-25 22:09:25 +00:00
c15c87b578 Add donate/donations page/notes 2022-11-25 21:58:49 +00:00
ebacd8be90 added Photoprism blog post and donations note 2022-11-25 21:12:45 +00:00
2d155140bb add tailscale virtual host blog post 2022-10-27 21:27:48 +01:00
54e7849e9b Added Joplin blog post 2022-10-02 21:06:10 +01:00
6822bcd86f Add s3 notes 2022-10-02 00:35:56 +01:00
8aafde9a41 Add S3 bunny note 2022-10-02 00:21:19 +01:00
b7d1125904 Add bunny hosting note 2022-10-01 23:55:54 +01:00
20567abfee Move note stuff to using partials 2022-09-29 00:11:55 +01:00
19d14821f9 Restructured notes page 2022-09-28 23:52:17 +01:00
fedc76f915 Added gitea note 2022-09-28 21:07:29 +01:00
770f4e743a Added note on volume encryption 2022-09-28 20:03:21 +01:00
68a8b83e07 Added Nextcloud note 2022-09-28 19:34:48 +01:00
fb3e91483f added Nitter note 2022-09-28 19:20:00 +01:00
c56f450a80 Added teddit note 2022-09-28 19:12:38 +01:00
beffda35a6 Added FreshRSS note 2022-09-28 19:04:22 +01:00
1623434b75 Added traefik note 2022-09-28 18:50:13 +01:00
b8534e4c33 Added Umami note 2022-09-28 18:36:13 +01:00
d4dafe911d add Alpine unzip note 2022-09-27 19:52:23 +01:00
1c912fdc7b Add Monica note 2022-09-27 19:30:09 +01:00
e3af93ee30 Updated deploy script 2022-09-25 20:46:06 +01:00
1c6863b4c2 Added Joplin note 2022-09-25 20:43:31 +01:00
a283b3cb06 Add section about lifecycle policies on backup note 2022-09-25 20:27:58 +01:00
d4114bd673 Add backups note 2022-09-23 08:22:50 +01:00
be70bb666d Added link to Photoprism 2022-09-20 20:47:09 +01:00
21346b887c Added Photoprism note 2022-09-20 20:44:01 +01:00
943c77eef9 Updated 'uses' note 2022-09-20 20:19:49 +01:00
31a0e13fac Changes to ensure improved lighthouse scores 2022-09-19 16:04:10 +01:00
1aadaee327 Merge branch 'hugo-post' 2022-09-03 18:12:08 +02:00
980c771ff6 remove un-needed files 2022-09-03 18:12:03 +02:00
fa83dc2446 completed hugo post 2022-09-03 18:11:35 +02:00
aaaa98e6c1 first draft progress 2022-09-03 16:57:56 +02:00
75399a76a3 add description metatags to pages 2022-09-02 13:57:43 +02:00
2eeae041b6 fixed wrap issue 2022-09-01 16:50:32 +02:00
a9c03e746e fixed slugs for recent posts 2022-08-29 16:24:44 +02:00
9ce267cc7d Fix RSS validation issues 2022-08-29 16:16:08 +02:00
9f47dc1adc remove un-needed files 2022-08-28 18:29:46 +02:00
4705669adf hugo merge 2022-08-28 18:06:13 +02:00
fcb5619dba removed un-needed file 2022-08-28 16:45:57 +02:00
68f8ef5615 improvements to RSS 2022-08-28 16:45:29 +02:00
7b1044188e Update date on piwigo photos post 2022-08-27 14:16:19 +02:00
9a9b21acc0 Minor updates 2022-08-27 13:46:38 +02:00
17b3484bb9 Markdownified research entries 2022-08-27 13:37:45 +02:00
7e409ad089 Markdown-ified projects 2022-08-27 13:06:40 +02:00
51f8c3a485 markdown-driven content 2022-08-26 17:42:04 +02:00
53b946a268 Self-highlighting navbar 2022-08-26 17:12:46 +02:00
7f7cf2d425 Configured blog post slugs 2022-08-26 13:25:10 +02:00
e4587deab1 Updated some blog slugs 2022-08-25 20:39:28 +02:00
1a3c6a52d1 Improved blog post formatting 2022-08-25 14:39:21 +02:00
ea53f8b0e0 Improved RSS formatting 2022-08-25 13:44:57 +02:00
93aa5e4938 Tweaks to RSS layout 2022-08-25 13:01:33 +02:00
1e561962c6 Added MTD blog post 2022-08-25 11:43:47 +01:00
95951aef28 enforce new gitignore 2022-08-24 18:36:36 +02:00
15dc02896c Added partials for blog and note headers 2022-08-24 18:23:31 +02:00
88fbc30f34 Tags page and added notes 2022-08-24 18:05:17 +02:00
f1f0187ebb Taxonomy update 2022-08-23 20:38:49 +02:00
4e625873d5 Display tags on blog posts 2022-08-23 20:27:48 +02:00
1c72267bd6 Improved image processing 2022-08-23 20:13:55 +02:00
892d4fe138 initial Hugo draft 2022-08-23 19:22:10 +02:00
fe60103df8 Fixed typos! 2022-07-20 21:54:03 +01:00
ea3160c1b6 Add workout blog post 2022-07-20 21:04:55 +01:00
f65e0caa86 added iPad coding blog post 2022-06-12 12:59:58 +01:00
5383575b7b Updates to build process 2022-06-11 13:24:02 +01:00
63eca5abb8 Make the “old post” limit 3 years 2022-05-25 18:14:39 +00:00
9664db6e5f Add Parcel post 2022-05-25 18:04:15 +00:00
501939b155 replace fortawesome with react-icons 2022-05-23 20:31:17 +02:00
b23612de2e Resolved lint issues 2022-05-23 17:51:11 +00:00
18b860977e Minor updates to notes header component 2022-05-23 17:49:50 +00:00
1d39d32951 Remove research header 2022-05-23 17:20:56 +00:00
7102e9166a Update yarn. Remove SA. 2022-05-23 17:12:51 +00:00
8f9c47f7a6 Update to use umami 2022-05-09 14:16:55 +00:00
7e1c73a46f Added Ledger Sankey post 2022-04-24 14:44:08 +00:00
00bb0342b5 Add contact details to RSS feed 2022-04-14 11:59:04 +00:00
fe899c7b23 Added alcohol blog post 2022-04-12 17:54:48 +00:00
3d46f0fb8a Re-ordered homepage 2022-04-11 13:47:39 +00:00
76b23af649 small tweaks 2022-04-11 08:08:12 +00:00
74529dd7f6 Added usage note 2022-04-09 19:58:49 +00:00
9e690dff0c Added more links and ideas to notes 2022-04-09 19:26:48 +00:00
15880eeb84 Updated styles and homepage 2022-04-09 13:54:00 +00:00
e50dc994cb Deployed final Traefik post draft 2022-04-09 13:23:08 +00:00
812adf9c0d First draft complete 2022-04-09 11:57:59 +00:00
b94c1dac5e Started Traefik post 2022-04-08 21:22:20 +00:00
21134f682a added Nextcloud OCC blog post 2022-01-28 19:19:28 +00:00
a10f2ac91f added web3 blog post 2022-01-21 08:32:07 +00:00
eefaf74c5f website updates 2022-01-13 19:35:15 +00:00
4110645fbe added 100DaysToOffload review post 2022-01-13 19:23:05 +00:00
903b37fae8 added Flutter web push blog post 2021-12-21 19:20:47 +00:00
665b0ebcee added 'Element One' blog post 2021-12-15 21:54:50 +00:00
9151c8a798 added nextcloud object storage blog post 2021-12-11 11:57:45 +00:00
940f59d65d added 'the Idiot Brain' blog post 2021-12-08 18:35:56 +00:00
4cebb88773 added 'incoming mail parse' blog post 2021-12-05 13:25:18 +00:00
ae3dca3fdb added 'open-source projects' blog post 2021-12-02 16:03:54 +00:00
c94102c374 added restic blog post 2021-11-27 10:22:48 +00:00
e2ccb4d1ea added Nightfall City blog post 2021-11-25 18:46:18 +00:00
a0834db0a2 small updates 2021-11-20 15:00:05 +00:00
bd958e3651 added Webzine blog post 2021-11-20 14:46:19 +00:00
b808e49a9f added 'Rebel Ideas' blog post 2021-11-17 13:00:10 +00:00
eb1d26c62a added website creativity blog post 2021-11-13 13:28:09 +00:00
91f8d19ec5 added 'bathroom DIY' blog post 2021-11-10 21:19:41 +00:00
7578ca9b2d added AV blog post 2021-11-08 09:31:19 +00:00
901a2c9076 added 'clam AV' blog post 2021-11-06 15:40:16 +00:00
fbc71a35b2 added 'thinking positive' blog post 2021-10-30 19:46:47 +01:00
97b17bba5b added FreeBSD blog post 2021-10-29 09:47:19 +01:00
ba6a3f81ea added Extraterrestrial and loft conversion blog posts 2021-10-24 21:48:24 +01:00
735df0d0ad added TWiT blog post 2021-10-17 21:17:27 +01:00
ac291671d4 added BSV Wales meetup blog post 2021-10-13 19:43:35 +01:00
70a81fb87a added Dotty blog post 2021-10-11 18:22:44 +01:00
d4dc2aad80 added 'This is Going to Hurt' blog post 2021-10-06 21:29:24 +01:00
47b23e3180 added 'Pinephone Phosh' blog post 2021-10-02 16:14:29 +01:00
8cba39c8b4 Added 'The Secret Barrister' blog post 2021-09-30 18:01:10 +01:00
73f9656c0f added Duolingo blog post 2021-09-25 20:15:16 +01:00
2845bf8386 added 'Accessibility is for everyone' blog post 2021-09-22 20:36:49 +01:00
6d916e9f8f added 'Pacman signature issues' blog post 2021-09-19 13:19:30 +01:00
5c9be3a724 added 'Twitter review' blog post 2021-09-18 11:06:33 +01:00
a87bfbe7f9 added 'Columbus Day' blog post 2021-09-12 12:26:34 +01:00
f78681e785 added telegram notifications blog post 2021-09-09 11:22:33 +01:00
b0211a5ff7 added 'addictive Twitter' blog post 2021-09-07 15:33:01 +01:00
53b5d09b0c added SSO Tools blog post 2021-09-02 21:01:52 +01:00
30d819deea updated darknet diaries blog post 2021-08-29 14:14:03 +02:00
0ac04e8617 added Darknet Diaires podcast blog post 2021-08-28 19:18:04 +02:00
0faca8f8c5 added 'exif stripping' blog post 2021-08-28 18:52:17 +02:00
e50b60f4d4 updated post title 2021-08-21 15:14:13 +02:00
fb9b57785c added 'react theming' blog post 2021-08-21 15:09:25 +02:00
1c04ae65d5 added bookwyrm blog post 2021-08-20 18:24:24 +02:00
ae63401d5f added 'gardn renovation' blog post 2021-08-14 16:15:58 +01:00
ee7d51af1e added Treadl blog post 2021-08-13 18:16:07 +01:00
028b0c859f added 'pinephone update' blog post 2021-08-09 18:56:16 +01:00
26794d7c51 added 'dev stack' blog post 2021-08-04 20:00:41 +01:00
1abf2a4288 added dog blog post 2021-08-03 20:02:43 +01:00
78178de666 added syntax highlighting blog post 2021-07-28 19:01:07 +01:00
7857f7d6e0 updated blog post timing 2021-07-27 18:54:26 +01:00
2a7107c4a3 added capsule.town blog post 2021-07-26 21:07:49 +01:00
a92a0492f1 added ATP blog post 2021-07-26 20:30:57 +01:00
d268e25532 added 'view previews' blog post 2021-07-19 18:16:16 +02:00
ae96dc027a added 'The Night Circus' blog post 2021-07-14 19:35:58 +02:00
a540eaad82 added FreshRSS blog post 2021-07-12 13:05:43 +02:00
218b075d1d added '5AM Club' blog post 2021-07-07 20:56:45 +01:00
ab514ec114 added 'client-side image-resizing' blog post 2021-07-06 22:15:06 +01:00
70308d44f1 added 'project hail mary' blog post 2021-07-05 20:48:57 +01:00
743596f5b7 added Blurhash blog post 2021-06-30 22:27:27 +01:00
b88d9ce13f added 'whoogle' blog post 2021-06-24 22:54:12 +01:00
5046bb2034 added 'Wales Tech Week' blog post 2021-06-23 23:25:58 +01:00
760f4ab92f added 'Anxious People' blog post 2021-06-19 19:37:30 +01:00
9e4aad7cb8 added tmuxinator blog post 2021-06-18 20:58:50 +01:00
2585ca259a added 'rss entire posts' blog post 2021-06-12 12:10:34 +01:00
e960b60462 resized images 2021-06-12 10:33:11 +01:00
2a65f83eb9 added beekeeping blog post 2021-06-11 20:02:39 +01:00
22b25113cc added married blog post 2021-06-10 09:07:27 +01:00
8b934779b1 added 'gaming' blog post 2021-06-02 22:05:04 +01:00
998f0c624d added HG Wells blog post 2021-05-26 22:57:40 +01:00
19d3f50e73 added Apple rant blog post 2021-05-20 20:34:39 +01:00
9d61aca17d added 'B2 backups' blog post 2021-05-18 23:25:57 +01:00
d8a1874e9d update to include capsule.town project 2021-05-14 22:33:26 +01:00
dfec15dc2b added 'running' blog post 2021-05-12 20:23:39 +01:00
49aa51ead2 added 'notes/todo lists' blog post 2021-05-09 22:35:32 +01:00
b5addc85ad amended 'data sovereignty' blog post 2021-05-05 20:59:26 +01:00
48c4d2c1da added 'data sovereignty' blog post 2021-05-05 20:56:12 +01:00
3bf67acd80 added 'Go Time' blog post 2021-05-04 19:49:11 +01:00
d16586cc46 added 'Starting with the Pinephone' blog post 2021-04-27 22:02:40 +01:00
bdc1f3e0b9 add '35 under 35' blog post 2021-04-26 11:51:16 +01:00
e12c6006d3 add Steve Jobs blog post 2021-04-25 17:45:09 +01:00
3f4414a37f added business account reporting with Ledger blog post 2021-04-18 15:04:20 +01:00
fe85f1e9c1 adjusted blog post timestamp 2021-04-18 13:10:07 +01:00
13c331f601 added giver of stars blog post 2021-04-18 13:09:07 +01:00
6aa3ba94bc added invisalign blog post 2021-04-12 22:54:12 +01:00
cd89dc0768 add 'facebook scraping fediverse' blog post 2021-04-07 20:53:12 +01:00
e4cf40b56f fixed typo 2021-04-04 14:19:13 +01:00
3dd9d10e84 added '3 mail clients' blog post 2021-04-04 12:15:57 +01:00
ccf3ce65b2 added 'http simplicity' blog post 2021-03-31 23:16:37 +01:00
1528bcc8cc added 'pinephone and pinetime' blog post 2021-03-27 16:43:46 +00:00
63ac26ff79 add the great alone blog post 2021-03-23 19:56:00 +00:00
81da23349b added matrix homeserver blog post 2021-03-22 11:56:22 +00:00
80899eca70 added "blood, sweat, and pixels" blog post 2021-03-17 19:31:03 +00:00
75afc90f6b refactored. fixed typo 2021-03-15 11:16:16 +00:00
3d49c810e7 tidied up post 2021-03-15 10:56:18 +00:00
de2c29f519 working draft 2021-03-15 09:08:34 +00:00
7b941ac07d added Red October blog post 2021-03-10 20:23:36 +00:00
4660a3aa6e added 'getting mail' blog post 2021-03-08 19:30:32 +00:00
9affe8080e add gatsby RSS blog post 2021-03-04 22:30:29 +00:00
f85181ba05 added flask serverless blog post 2021-02-28 22:11:15 +00:00
aafad31cd5 added google photos post 2021-02-25 14:33:52 +00:00
fa48804b8a updated blog post 2021-02-21 19:15:48 +00:00
281244b113 added phone-answering and solarpunk blog posts 2021-02-20 22:09:28 +00:00
9d207bcedc fixed theme for successful builds 2021-02-18 18:34:34 +00:00
fdc412bf15 small updates 2021-02-18 13:34:43 +00:00
e27b58cf07 removed all inline styles 2021-02-18 13:22:08 +00:00
8bae9b9efd improved Alert styles 2021-02-18 12:53:22 +00:00
dbb433f0f2 theme switching, default themes, storing settings 2021-02-18 11:04:26 +00:00
0d2074f1d4 self host recursive font 2021-02-15 11:34:47 +00:00
62f17269dc added midnight library blog post 2021-02-13 21:38:27 +00:00
c9f085ae24 added ssh jumping bastion server blog post 2021-02-10 23:31:34 +00:00
16f456db62 added styled components plugin 2021-02-09 23:15:32 +00:00
4792fb40cf removed pattern.css 2021-02-09 23:01:01 +00:00
99a7fd5e44 resized images for performance 2021-02-09 22:47:00 +00:00
19ba514cc6 added plain text accounting note 2021-02-09 22:25:56 +00:00
680da96914 added links note 2021-02-09 19:45:01 +00:00
f4d3509ad4 updated styling slightly 2021-02-07 20:35:56 +00:00
ee79610935 added monica blog post 2021-02-07 19:48:24 +00:00
df9da30262 add pettern.css dependency 2021-02-07 13:20:38 +00:00
71c1fea11d added zustand blog post 2021-02-06 00:47:58 +00:00
415c6b65c0 added zustand blog post 2021-02-05 23:46:34 +00:00
c6d3c5af1a improved global styles 2021-02-04 14:10:01 +00:00
3234a21fdd aded RSS opinion blog post 2021-02-03 22:38:23 +00:00
0135ebde64 ensure only blog posts are included in RSS 2021-02-03 15:41:20 +00:00
515ed8e035 add copy button for feeds URL 2021-02-03 00:09:52 +00:00
aee7e97cf2 add support for 'notes' 2021-02-03 00:01:19 +00:00
80d7b5e289 small tweaks 2021-02-02 22:19:19 +00:00
4ccc15b7cb add support for article em and strong tags styling 2021-02-02 21:38:22 +00:00
397e4ad433 added 'blogging for devs' post 2021-02-02 20:45:35 +00:00
47a1a260f1 add 'about this site' page 2021-02-02 00:31:18 +00:00
f29e63099a add suppport for linking to multiple feeds 2021-02-01 22:48:21 +00:00
0c6e5168bc added sqlite blog post 2021-02-01 21:33:05 +00:00
99503f2321 add extra RSS feeds 2021-01-31 23:32:48 +00:00
96ca7c6e3b added Jo Spain blog post 2021-01-31 18:58:17 +00:00
112c231d0c improved header/nav styles 2021-01-31 15:56:53 +00:00
ea2a6e39fa updated fonts to recursive 2021-01-31 00:50:52 +00:00
3d90555633 add recent posts to homepage 2021-01-31 00:07:57 +00:00
4d06d13c71 added tags, custom alert boxes, etc. 2021-01-30 23:54:36 +00:00
70b7f0360c reconfigured npm packages 2021-01-30 21:16:12 +00:00
b44c0a76ae updated blog post format, updated gatsby handlers 2021-01-30 21:03:31 +00:00
d1e42bcb89 removed un-needed image: 2021-01-30 15:36:34 +00:00
8b7674b897 added self-hosted gitea post 2021-01-30 15:35:18 +00:00
b309043863 restructured site 2021-01-29 19:25:26 +00:00
3c38f05227 add 100 days to offload blog post 2021-01-29 18:22:48 +00:00
a141f51c16 updated homepage 2021-01-29 00:26:24 +00:00
537a797d9a publish posts at 15:30 2021-01-28 23:17:30 +00:00
c3eb10ad89 added gemini blog post 2021-01-28 23:13:57 +00:00
aa053c82a9 updated mastodon link 2021-01-28 10:09:42 +00:00
ae4f5d0d7d added RSS 2021-01-27 20:11:40 +00:00
535 changed files with 12641 additions and 12138 deletions

.gitignore (19 changes)

@@ -1,12 +1,9 @@
# Project dependencies
# https://www.npmjs.org/doc/misc/npm-faq.html#should-i-check-my-node_modules-folder-into-git
.cache
node_modules
yarn-error.log
# Build directory
/public
public/
*.sw*
.now
resources/
hugo_build.lock
.DS_Store
.vercel
.venv
gemini/capsule/log
gemini/capsule/notes

.hugo_build.lock (new file, empty)

.nova/Artwork (new binary file, 6.6 KiB; not shown)

.nova/Configuration.json (new file, 4 lines)

@@ -0,0 +1,4 @@
{
  "workspace.art_style" : 1,
  "workspace.color" : 5
}

(deleted file: a Prettier-style configuration, 5 lines; file name not shown)

@@ -1,5 +0,0 @@
{
  "semi": false,
  "singleQuote": true,
  "trailingComma": "es5"
}

.woodpecker.yml (new file, 27 lines)

@@ -0,0 +1,27 @@
branches: main
pipeline:
  build:
    image: hugomods/hugo:latest
    commands:
      - hugo -D

  deploy:
    image: alpine
    secrets: [ S3_ACCESS_KEY, S3_SECRET_ACCESS_KEY, BUNNY_KEY ]
    commands:
      - apk update
      - apk add s3cmd curl
      - s3cmd --configure --access_key=$S3_ACCESS_KEY --secret_key=$S3_SECRET_ACCESS_KEY --host=https://eu-central-1.linodeobjects.com --host-bucket="%(bucket)s.eu-central-1.linodeobjects.com" --dump-config > /root/.s3cfg
      - s3cmd -c /root/.s3cfg sync --no-mime-magic --guess-mime-type public/* s3://wilw.dev
      - 'curl -X POST -H "AccessKey: $BUNNY_KEY" https://api.bunny.net/pullzone/907104/purgeCache'

  deploy_gemini:
    image: python:3.10
    secrets: [ CAPSULE_TOWN_KEY ]
    commands:
      - cd gemini
      - pip install python-frontmatter
      - python process_capsule.py
      - CAPSULE=$(tar -czf /tmp/c.tar.gz -C capsule . && cat /tmp/c.tar.gz | base64)
      - 'echo "{\"capsuleArchive\": \"$CAPSULE\"}" > /tmp/capsule_file'
      - 'curl -X PUT -H "Content-Type: application/json" -H "api-key: $CAPSULE_TOWN_KEY" -d @/tmp/capsule_file https://api.capsule.town/capsule'
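For reference, the deploy_gemini step can be reproduced from a local shell with a sketch like the one below (same commands as the pipeline above; it assumes CAPSULE_TOWN_KEY is exported, and it strips base64 line-wrapping so the JSON payload stays on one line):

    # Sketch: package the generated capsule and push it to capsule.town,
    # mirroring the deploy_gemini step above. Assumes CAPSULE_TOWN_KEY is set.
    cd gemini
    pip install python-frontmatter
    python process_capsule.py
    tar -czf /tmp/c.tar.gz -C capsule .
    CAPSULE=$(base64 < /tmp/c.tar.gz | tr -d '\n')  # no wrapping: keeps the JSON valid
    printf '{"capsuleArchive": "%s"}' "$CAPSULE" > /tmp/capsule_file
    curl -X PUT -H "Content-Type: application/json" \
         -H "api-key: $CAPSULE_TOWN_KEY" \
         -d @/tmp/capsule_file \
         https://api.capsule.town/capsule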

(deleted file, 1 line; file name not shown)

@@ -1 +0,0 @@
# Personal website

Taskfile.yml (new file, 39 lines)

@@ -0,0 +1,39 @@
version: '3'

tasks:
  default:
    desc: Run Hugo server
    deps:
      - hugo server

  deploy:
    desc: Full deployment
    deps:
      - deploy-web
      - deploy-gemini

  deploy-web:
    desc: Deploy website
    cmds:
      - hugo
      - aws --profile personal s3 sync public s3://wilw.dev
      - 'curl -X POST -H "AccessKey: $BUNNY_PERSONAL" https://api.bunny.net/pullzone/907104/purgeCache'

  deploy-gemini:
    desc: Deploy Gemini capsule
    dir: 'gemini'
    deps:
      - install-gemini-deps
    cmds:
      - bash -c "source .venv/bin/activate && python process_capsule.py"
      - 'export CAPSULE=$(tar -czf /tmp/c.tar.gz -C capsule . && cat /tmp/c.tar.gz | base64) && echo "{\"capsuleArchive\": \"$CAPSULE\"}" > /tmp/capsule_file'
      - 'curl -X PUT -H "Content-Type: application/json" -H "api-key: $CAPSULE_TOWN_KEY" -d @/tmp/capsule_file https://api.capsule.town/capsule'

  install-gemini-deps:
    desc: Install Python dependencies for Gemini
    dir: 'gemini'
    cmds:
      - cmd: python3.12 -m venv .venv
        ignore_error: true
      - cmd: bash -c "source .venv/bin/activate && pip install python-frontmatter"
        ignore_error: true
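With the Task runner (taskfile.dev) installed as task, the tasks defined above are invoked by name; a plain task run with no arguments executes the default task:

    task               # default task: run the local Hugo server
    task deploy-web    # build with Hugo, sync to S3, purge the Bunny pull zone
    task deploy        # full deployment: deploy-web plus deploy-gemini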

(deleted file, 1 line; file name not shown. From its contents, an ISP search-redirect page that had been saved in place of the blog image static/media/blog/acnh1.png)

@@ -1 +0,0 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html><head><meta http-equiv="refresh" content="0;url=http://advancedsearch2.virginmedia.com/main?ParticipantID=jqlc435patgs4w79dx7g33u8otdryt35&FailedURI=http%3A%2F%2Fstatic%2Fmedia%2Fblog%2Facnh1.png&FailureMode=1&Implementation=&AddInType=4&Version=pywr1.0&ClientLocation=uk"/><script type="text/javascript">url="http://advancedsearch2.virginmedia.com/main?ParticipantID=jqlc435patgs4w79dx7g33u8otdryt35&FailedURI=http%3A%2F%2Fstatic%2Fmedia%2Fblog%2Facnh1.png&FailureMode=1&Implementation=&AddInType=4&Version=pywr1.0&ClientLocation=uk";if(top.location!=location){var w=window,d=document,e=d.documentElement,b=d.body,x=w.innerWidth||e.clientWidth||b.clientWidth,y=w.innerHeight||e.clientHeight||b.clientHeight;url+="&w="+x+"&h="+y;}window.location.replace(url);</script></head><body></body></html>

archetypes/default.md (new file, 6 lines)

@@ -0,0 +1,6 @@
---
title: "{{ replace .Name "-" " " | title }}"
date: {{ .Date }}
draft: true
---
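Content created with hugo new is seeded from this archetype; for example (hypothetical post name):

    hugo new blog/my-new-post.md
    # creates content/blog/my-new-post.md with front matter from the archetype:
    #   title: "My New Post"  (file name, dashes replaced and title-cased)
    #   date:  <creation timestamp>
    #   draft: true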

assets/avatar.png (new binary file, 6.6 KiB; not shown)

assets/highlight.css (new file, 86 lines)

@@ -0,0 +1,86 @@
/* Background */ .bg { color:#f8f8f2;background-color:#272822; }
/* PreWrapper */ .chroma { color:#f8f8f2;background-color:#272822; }
/* Other */ .chroma .x { }
/* Error */ .chroma .err { color:#960050;background-color:#1e0010 }
/* CodeLine */ .chroma .cl { }
/* LineLink */ .chroma .lnlinks { outline:none;text-decoration:none;color:inherit }
/* LineTableTD */ .chroma .lntd { vertical-align:top;padding:0;margin:0;border:0; }
/* LineTable */ .chroma .lntable { border-spacing:0;padding:0;margin:0;border:0; }
/* LineHighlight */ .chroma .hl { background-color:#3c3d38 }
/* LineNumbersTable */ .chroma .lnt { white-space:pre;-webkit-user-select:none;user-select:none;margin-right:0.4em;padding:0 0.4em 0 0.4em;color:#7f7f7f }
/* LineNumbers */ .chroma .ln { white-space:pre;-webkit-user-select:none;user-select:none;margin-right:0.4em;padding:0 0.4em 0 0.4em;color:#7f7f7f }
/* Line */ .chroma .line { display:flex; }
/* Keyword */ .chroma .k { color:#66d9ef }
/* KeywordConstant */ .chroma .kc { color:#66d9ef }
/* KeywordDeclaration */ .chroma .kd { color:#66d9ef }
/* KeywordNamespace */ .chroma .kn { color:#f92672 }
/* KeywordPseudo */ .chroma .kp { color:#66d9ef }
/* KeywordReserved */ .chroma .kr { color:#66d9ef }
/* KeywordType */ .chroma .kt { color:#66d9ef }
/* Name */ .chroma .n { }
/* NameAttribute */ .chroma .na { color:#a6e22e }
/* NameBuiltin */ .chroma .nb { }
/* NameBuiltinPseudo */ .chroma .bp { }
/* NameClass */ .chroma .nc { color:#a6e22e }
/* NameConstant */ .chroma .no { color:#66d9ef }
/* NameDecorator */ .chroma .nd { color:#a6e22e }
/* NameEntity */ .chroma .ni { }
/* NameException */ .chroma .ne { color:#a6e22e }
/* NameFunction */ .chroma .nf { color:#a6e22e }
/* NameFunctionMagic */ .chroma .fm { }
/* NameLabel */ .chroma .nl { }
/* NameNamespace */ .chroma .nn { }
/* NameOther */ .chroma .nx { color:#a6e22e }
/* NameProperty */ .chroma .py { }
/* NameTag */ .chroma .nt { color:#f92672 }
/* NameVariable */ .chroma .nv { }
/* NameVariableClass */ .chroma .vc { }
/* NameVariableGlobal */ .chroma .vg { }
/* NameVariableInstance */ .chroma .vi { }
/* NameVariableMagic */ .chroma .vm { }
/* Literal */ .chroma .l { color:#ae81ff }
/* LiteralDate */ .chroma .ld { color:#e6db74 }
/* LiteralString */ .chroma .s { color:#e6db74 }
/* LiteralStringAffix */ .chroma .sa { color:#e6db74 }
/* LiteralStringBacktick */ .chroma .sb { color:#e6db74 }
/* LiteralStringChar */ .chroma .sc { color:#e6db74 }
/* LiteralStringDelimiter */ .chroma .dl { color:#e6db74 }
/* LiteralStringDoc */ .chroma .sd { color:#e6db74 }
/* LiteralStringDouble */ .chroma .s2 { color:#e6db74 }
/* LiteralStringEscape */ .chroma .se { color:#ae81ff }
/* LiteralStringHeredoc */ .chroma .sh { color:#e6db74 }
/* LiteralStringInterpol */ .chroma .si { color:#e6db74 }
/* LiteralStringOther */ .chroma .sx { color:#e6db74 }
/* LiteralStringRegex */ .chroma .sr { color:#e6db74 }
/* LiteralStringSingle */ .chroma .s1 { color:#e6db74 }
/* LiteralStringSymbol */ .chroma .ss { color:#e6db74 }
/* LiteralNumber */ .chroma .m { color:#ae81ff }
/* LiteralNumberBin */ .chroma .mb { color:#ae81ff }
/* LiteralNumberFloat */ .chroma .mf { color:#ae81ff }
/* LiteralNumberHex */ .chroma .mh { color:#ae81ff }
/* LiteralNumberInteger */ .chroma .mi { color:#ae81ff }
/* LiteralNumberIntegerLong */ .chroma .il { color:#ae81ff }
/* LiteralNumberOct */ .chroma .mo { color:#ae81ff }
/* Operator */ .chroma .o { color:#f92672 }
/* OperatorWord */ .chroma .ow { color:#f92672 }
/* Punctuation */ .chroma .p { }
/* Comment */ .chroma .c { color:#75715e }
/* CommentHashbang */ .chroma .ch { color:#75715e }
/* CommentMultiline */ .chroma .cm { color:#75715e }
/* CommentSingle */ .chroma .c1 { color:#75715e }
/* CommentSpecial */ .chroma .cs { color:#75715e }
/* CommentPreproc */ .chroma .cp { color:#75715e }
/* CommentPreprocFile */ .chroma .cpf { color:#75715e }
/* Generic */ .chroma .g { }
/* GenericDeleted */ .chroma .gd { color:#f92672 }
/* GenericEmph */ .chroma .ge { font-style:italic }
/* GenericError */ .chroma .gr { }
/* GenericHeading */ .chroma .gh { }
/* GenericInserted */ .chroma .gi { color:#a6e22e }
/* GenericOutput */ .chroma .go { }
/* GenericPrompt */ .chroma .gp { }
/* GenericStrong */ .chroma .gs { font-weight:bold }
/* GenericSubheading */ .chroma .gu { color:#75715e }
/* GenericTraceback */ .chroma .gt { }
/* GenericUnderline */ .chroma .gl { }
/* TextWhitespace */ .chroma .w { }

assets/main.scss (new file, 448 lines)

@@ -0,0 +1,448 @@
body {
  font-family: system-ui, -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, 'Open Sans', 'Helvetica Neue', sans-serif;
  margin: 0px;
  min-height: 100vh;
  display: flex;
  flex-direction: column;
}

header {
  max-width: 960px;
  margin: 10px auto;
  width: 100%;
  .main {
    display: flex;
    align-items: center;
    margin-bottom: 10px;
    padding: 0px 10px 0px 10px;
    a.avatar {
      display: inline-block;
      margin-right: 10px;
      img {
        height: 50px;
        width: 50px;
        border-radius: 50%;
        transition: opacity 0.2s;
        &:hover {
          opacity: 0.8;
        }
      }
    }
    .details {
      flex: 1;
      a.title {
        display: inline-block;
        font-size: 23px;
        margin-bottom: 5px;
        text-decoration: none;
        font-weight: bold;
        color: initial;
      }
      .socials {
        a {
          text-decoration: none;
          margin-right: 5px;
          margin-bottom: 5px;
          padding: 4px;
          border-radius: 5px;
          background: linen;
          color: rgba(0,0,0,0.75);
          display: inline-block;
          transition: background 0.2s;
          &:hover {
            background: lightskyblue;
          }
        }
      }
    }
  }
  nav {
    padding: 0px 10px 0px 10px;
    display: flex;
    background: rgb(247,244,244);
    border-radius: 4px;
    padding: 0px 10px;
    a {
      padding: 10px 10px;
      margin-right: 5px;
      display: inline-block;
      text-transform: uppercase;
      font-size: 15px;
      color: black;
      &.active {
        text-decoration: none;
        font-weight: bold;
        background: rgb(240,237,237);
      }
    }
  }
}

main {
  width: 100%;
  max-width: 960px;
  margin: 0px auto;
  flex: 1;
  .content-wrapper {
    padding: 0px 10px 10px;
  }
}

footer {
  border-top: 2px solid linen;
  padding: 10px 15px 10px 15px;
  margin-top: 30px;
  @media (min-width: 900px) {
    display: flex;
    justify-content: space-between;
    align-items: center;
  }
  .left {
    display: flex;
    align-items: baseline;
    @media (max-width: 550px) {
      justify-content: center;
      display: initial;
      text-align: center;
    }
    .copyright {
      font-size: 13px;
      color: rgb(100,100,100);
    }
    a {
      margin-left: 10px;
      font-size: 13px;
    }
  }
  .right {
    @media (min-width: 550px) {
      display: flex;
      justify-content: space-between;
      align-items: center;
    }
    .carbonbadge {
      @media (min-width: 550px) {
        margin-left: 10px;
      }
    }
    .kb-club {
      font-size: 1em;
      @media (max-width: 550px) {
        display: block;
        margin: 10px auto;
        text-align: center;
      }
      a {
        text-decoration: none;
        color: #212121;
        padding: .25rem 0;
      }
      .kb-club-bg, .kb-club-no-bg {
        border: 1px solid rgb(2, 90, 84);
        padding: 3px 6px;
      }
      .kb-club-no-bg {
        border-radius: 4px 0px 0px 4px;
      }
      .kb-club-bg {
        font-weight: bold;
        background: rgb(2, 90, 84);
        color: white;
        border-radius: 0px 4px 4px 0px;
      }
    }
  }
}

table {
  width: 100%;
  thead {
    background: rgb(230,230,230);
    th {
      padding: 5px;
    }
  }
  tbody {
    background: rgb(245,245,245);
    td {
      padding: 5px;
    }
  }
}

.alert {
  background-color: lightcyan;
  border-radius: 5px;
  max-width: 800px;
  margin: 20px auto;
  padding: 10px 10px 10px 20px;
  border-left: 10px solid rgba(0,0,0,0.2);
  h3 {
    margin-top: 0px;
  }
  &.green {
    background-color: #F5F5DC;
  }
  &.grey {
    background-color: rgb(240,240,240);
  }
  &.non-centered {
    margin: 20px 0px;
  }
}

.two-columns {
  display: grid;
  grid-column-gap: 20px;
  grid-template-columns: repeat(auto-fit, minmax(300px, 1fr));
}

.nav-layout {
  .nav {
    min-width: 250px;
    border-radius: 10px;
    background: rgb(247,244,244);
    .menu {
      a {
        display: block;
        margin-bottom: 8px;
        padding: 3px 5px;
      }
    }
  }
  @media only screen and (min-width: 501px) {
    display: flex;
    position: relative;
    .nav {
      position: sticky;
      top: 10px;
      margin-right: 15px;
      max-height: 80vh;
      overflow-y: scroll;
      h3 {
        display: none;
      }
      &::after {
        position: sticky;
        display: block;
        padding: 5px 0px;
        bottom: 0px;
        left: 0px;
        width: 100%;
        text-align: center;
        box-shadow: 0px 0px 10px rgba(0,0,0,0.2);
        background: rgb(220,217,217);
        font-size: 15px;
        color: rgb(50,50,50);
        content: '↕️ This menu is scrollable';
      }
    }
    .content {
      flex: 1;
    }
  }
  @media only screen and (max-width: 500px) {
    .nav {
      margin-bottom: 10px;
      margin-right: 0px;
      h3 {
        a {
          text-decoration: none;
          &:after {
            content: '⬇️';
            margin-left: 10px;
          }
        }
      }
      &:hover, &:active, &:focus {
        .menu {
          display: block;
        }
      }
      .menu {
        display: none;
      }
    }
  }
}

.project, .research-item {
  margin-top: 30px;
  display: flex;
  align-items: start;
  .logo {
    padding-top: 15px;
    margin-right: 20px;
    img {
      max-width: 100px;
      max-height: 100px;
    }
  }
  .details {
    flex: 1;
    h4 {
      margin-top: 0px;
      margin-bottom: 10px;
    }
    .platforms {
      a {
        margin-left: 5px;
      }
    }
    .journal {
      font-size: small;
      margin-top: 2px;
    }
  }
}

.blog-page-header {
  display: flex;
  justify-content: space-between;
  align-items: baseline;
}

.blog-summary, .note-summary {
  background-color: #FFFAF0;
  border-radius: 5px;
  padding: 10px 10px 10px 20px;
  border-left: 10px solid rgba(0,0,0,0.1);
  margin-bottom: 15px;
  display: flex;
  align-items: center;
  h3 {
    margin-top: 0px;
    .date {
      font-size: small;
      margin-left: 20px;
    }
  }
  .summary {
    * {
      font-size: smaller;
    }
    p {
      margin-bottom: 10px;
    }
    img {
      max-width: 100%;
      margin: 10px 0px;
    }
  }
  &::before {
    margin-right: 20px;
    content: '📝';
    font-size: 30px;
  }
  &.blog-summary {
    &::before {
      content: '📝';
    }
  }
  &.note-summary {
    background-color: #e3f0fc;
    &::before {
      content: '📔';
    }
  }
}

.blog-post, .note-entry {
  .details {
    text-align: center;
  }
  .header-image {
    display: block;
    max-width: 90%;
    margin: 20px auto;
  }
  .note-details {
    text-align: left;
  }
  .navigation {
    text-align: center;
    a {
      margin-right: 10px;
    }
  }
  article {
    padding-top: 20px;
    border-top: 1px solid rgb(230,230,230);
    font-size: large;
    margin-bottom: 100px;
    > *:not(.highlight) {
      display: block;
      max-width: 500px;
      margin-left: auto;
      margin-right: auto;
      code {
        background-color: #FDF6E3;
        border-radius: .3em;
        color: #657B83;
      }
    }
    img {
      display: block;
      margin: 20px auto;
      max-width: 100%;
      max-height: 600px;
    }
    blockquote {
      padding: 10px;
      background-color: rgb(245,245,230);
      p {
        margin-top: 0px;
      }
    }
    pre {
      white-space: pre-wrap;
      padding: 5px;
      box-shadow: inset 0px 0px 10px 0px rgba(0,0,0,0.1);
      border-radius: 4px;
    }
  }
}

.tag {
  display: inline-block;
  padding: 4px;
  margin-right: 15px;
  border-radius: 5px;
  background-color: azure;
  color: rgba(0,0,0,0.6);
  text-decoration: none;
  &::before {
    content: '🏷️';
    margin-right: 5px;
  }
}

main a, nav a {
  background: rgb(247,244,244);
  color: black;
  padding: 3px;
  display: inline-block;
  border-radius: 3px;
  transition: background 0.2s;
  &:hover {
    background: rgb(220,217,217);
  }
  &.active {
    font-weight: bold;
    text-decoration: none;
    background: rgb(240,237,237);
  }
  // Below courtesy of Christian Oliff (https://christianoliff.com/blog/styling-external-links-with-an-icon-in-css)
  &[href^="http"]::after, &[href^="https://"]::after {
    content: "";
    width: 11px;
    height: 11px;
    margin-left: 4px;
    background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='16' height='16' fill='currentColor' viewBox='0 0 16 16'%3E%3Cpath fill-rule='evenodd' d='M8.636 3.5a.5.5 0 0 0-.5-.5H1.5A1.5 1.5 0 0 0 0 4.5v10A1.5 1.5 0 0 0 1.5 16h10a1.5 1.5 0 0 0 1.5-1.5V7.864a.5.5 0 0 0-1 0V14.5a.5.5 0 0 1-.5.5h-10a.5.5 0 0 1-.5-.5v-10a.5.5 0 0 1 .5-.5h6.636a.5.5 0 0 0 .5-.5z'/%3E%3Cpath fill-rule='evenodd' d='M16 .5a.5.5 0 0 0-.5-.5h-5a.5.5 0 0 0 0 1h3.793L6.146 9.146a.5.5 0 1 0 .708.708L15 1.707V5.5a.5.5 0 0 0 1 0v-5z'/%3E%3C/svg%3E");
    background-position: center;
    background-repeat: no-repeat;
    background-size: contain;
    display: inline-block;
  }
}

assets/will.jpg (new binary file, 15 KiB; not shown)

bucket-policy.json (new file, 18 lines)

@@ -0,0 +1,18 @@
{
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::wilw.dev/*"
      ]
    }
  ]
}
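A sketch of applying this policy with s3cmd (the endpoint and bucket are taken from the .woodpecker.yml above; the command itself is illustrative and not part of the repo):

    s3cmd --host=https://eu-central-1.linodeobjects.com \
          --host-bucket="%(bucket)s.eu-central-1.linodeobjects.com" \
          setpolicy bucket-policy.json s3://wilw.dev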

config.toml (new file, 49 lines)

@@ -0,0 +1,49 @@
baseURL = 'https://wilw.dev'
languageCode = 'en-gb'
title = 'Will Webberley'

[params]
  author = 'Will Webberley'
  description = "Will Webberley's personal website."

[permalinks]
  blog = '/blog/:year/:month/:day/:slug/'

[menu]
  [[menu.main]]
    name = 'About'
    url = '/'
    pageRef = '/'
  [[menu.main]]
    name = 'Blog'
    url = '/blog/'
    pageRef = 'blog'
  [[menu.main]]
    name = '🌱 Notes'
    url = '/notes/'
    pageRef = 'notes'
  [[menu.main]]
    name = 'Projects'
    url = '/projects/'
    pageRef = 'projects'

[outputFormats]
  [outputFormats.RSS]
    mediatype = "application/rss"
    baseName = "rss"

[markup]
  [markup.highlight]
    anchorLineNos = true
    codeFences = true
    guessSyntax = false
    hl_Lines = ''
    hl_inline = false
    lineAnchors = ''
    lineNoStart = 1
    lineNos = false
    lineNumbersInTable = true
    noClasses = false
    noHl = false
    tabWidth = 2
    pygmentsUseClasses = true
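As a sketch of what this configuration produces at build time: the [permalinks] rule shapes blog URLs, and the RSS baseName renames the feed file (the post path below is illustrative):

    hugo
    # content/blog/woodpecker.md  ->  public/blog/2023/04/23/woodpecker/index.html
    # the RSS feed is written as rss.xml (baseName above) instead of Hugo's default index.xml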

content/_index.md (new file, 39 lines)

@@ -0,0 +1,39 @@
---
title: Home
description: Will Webberley's Personal Website
---
**Hello and welcome.** I'm a technology lead & enthusiast in Wales. I enjoy 🏋️‍♂️ fitness, I love to ✈️ travel, and I'm a 🐶 proud dog dad!
I'm into startups & small businesses, indie or open-source tech projects, and [self-hosting](/tags/selfhost).
## Background
💡 Since 2016 I have been Chief Technology Officer at enterprise SaaS company [Simply Do Ideas](https://www.simplydo.co.uk). Before this I was a software engineer at [Chaser](https://www.chaserhq.com).
📦 I build and maintain a number of projects - both [open-source](/projects) and commercial. I am a co-founder of [Trialflare](https://www.trialflare.com).
🎓 I completed [my PhD](/research#phd) at [Cardiff University](https://cardiff.ac.uk)'s [School of Computer Science & Informatics](https://www.cardiff.ac.uk/computer-science) in 2015.
🤓 I worked on the IBM-led UK MoD and US Army Research Labs coalition [ITA project](https://en.wikipedia.org/wiki/NIS-ITA) as a postdoctoral research associate.
👨‍🏫 I lectured the Advanced Computer Science MSc module Web & Social Computing and the Computer Science BSc module Human-Computer Interaction.
## What's on this website?
📝 I write about technology and things I find interesting on [my blog](/blog) *([📥 RSS feeds](/feeds) available).*
🌱 I curate a collection of [thoughts, links and notes](/notes).
🚀 I (occasionally) publish additional content on my geminispace at [gemini://wilw.capsule.town](gemini://wilw.capsule.town) *(see [this post](/blog/2021/01/20/project-gemini) for help with opening this link).*
👨‍💻 Some of my research publications are [available here](/research).
🪴 [Find out more](/this) about this website and its purpose.
## How to contact me
You can follow me on Mastodon (📣 [@wilw@fosstodon.org](https://fosstodon.org/@wilw)) and Pixelfed (🖼️ [@wilw@pixelfed.social](https://pixelfed.social/@wilw)).
You can also get in touch directly with me on Telegram ([@wilw88](https://t.me/wilw88)).

(new file, 32 lines: the "DigiSocial Hackathon" blog post; file name not shown)

@@ -0,0 +1,32 @@
---
date: "2012-09-20T09:15:00Z"
title: DigiSocial Hackathon
description: "My attendance at the DigiSocial Hackathon"
tags: [event, cardiffuniversity]
---
We recently held our DigiSocial Hackathon. This was a collaboration between the Schools of
Computer Science and Social Sciences and was organised by myself and a few others.
The website for the event is hosted [here](http://users.cs.cf.ac.uk/W.M.Webberley/digisocial/).
![DigiSocial logo](/media/blog/digisocial_logo.png)
The idea of the event was to try and encourage further ties between the different Schools of the University. The
University Graduate College (UGC) provide the funding for these events, which must be applied for, in the hope
that good projects or results come out of it.
We had relatively good responses from the Schools of Maths, Social Sciences, Medicine, and ourselves, and had a turnout of around 10-15
for the event on the 15th and 16th September. Initially, we started to develop ideas for potential projects. Because of the
nature of the event, we wanted to make sure they were as cross-disciplined as possible. A hackday, in itself, is pretty
computer science-y so we needed to apply a social or medical spin on our ideas.
Eventually, we settled into two groups: one working on a social-themed project based on crimes in an area (both in terms of
distribution and intensity) in relation to the food hygiene levels in nearby establishments; another focusing on hospital wait times
and free beds in South Wales. Effectively, then, both projects are visualisations of publicly-available datasets.
I worked on the social project with Matt Williams, Wil Chivers and Martin Chorley, and it is viewable [here](http://ukcrimemashup.nomovingparts.net/).
Overall the event was a reasonable success; two projects were
completed and we have now made links with the other Schools which will hopefully allow us to do similar events together in the
future.

(modified file: the "Seminar: Retweeting" blog post, converted from HTML to Markdown; file name not shown)

@@ -1,22 +1,16 @@
---
year: 2012
month: 10
day: 10
date: "2012-10-10T17:00:00Z"
title: "Seminar: Retweeting"
description: "Giving an seminar on my research"
layout: post
tags: [talk, research]
---
<p>
I gave a seminar on my current research phase. </p>
<p>
I gave a seminar on my current research phase.
I summarised my work over the past few months; in particular, the work on the network structure of Twitter, the way in which tweets
propagate through different network types, and the implications of this. I discussed the importance of precision and recall as metrics
for determining a timeline\'s quality and how this is altered through retweeting in different network types.
</p>
<p>
I concluded by talking about my next area of research; how I may use the model used for the previous experimentation to determine if
a tweet is particularly interesting based on its features. Essentially, this boils down to showing that tweets are siginificantly
interesting (or uninteresting) by looking at how they compare to their <i>predicted</i> retweet behaviours as produced by the model.</p>
<p>The slides for the talk (not much use independently!) are available
<a href="http://willwebberley.net/downloads/research-fts/presentation.html" target="_blank">here</a>.</p>
interesting (or uninteresting) by looking at how they compare to their _predicted_ retweet behaviours as produced by the model.

(modified file: the "Delving into Android" blog post, converted from HTML to Markdown; file name not shown)

@@ -1,50 +1,42 @@
---
year: 2012
month: 11
day: 13
date: "2012-11-13T17:00:00Z"
title: Delving into Android
description: "Starting some Android development"
layout: post
tags: [android, project, technology]
---
<img src="/media/blog/tides-main.png" alt="Tides Main Activity" class="blog-image"/>
<p>
![Tides Main Activity](/media/blog/tides-main.png)
I've always been interested in the development of smartphone apps, but have never really had the opportunity
to actually hava a go. Whilst I'm generally OK with development on platforms I feel comfortable with, I've always
considered there to be no point in developing applications for wider use unless you have a good idea about first thinking
about the direction for it to go.
</p>
<p>
My Dad is a keen surfer and has a watch which tells the tide changes as well as the time. It shows the next event (i.e. low- or high-tide)
and the time until that event, but he always complains about how inaccurate it is and how it never correctly predicts the tide
schedule for the places he likes to surf.</p>
schedule for the places he likes to surf.
<p>He uses an Android phone, and so I thought I'd try making an app for him that would be more accurate than his watch, and
He uses an Android phone, and so I thought I'd try making an app for him that would be more accurate than his watch, and
maybe provide more interesting features. The only tricky criterion, really, was that he needed it to predict the tides offline, since
the data reception is very poor in his area.</p>
the data reception is very poor in his area.
I got to work on setting up a database of tidal data, based around the location he surfs in, and creating a basic UI in which to display it.
When packaging the application with an existing SQLite database, this [helper class](https://github.com/jgilfelt/android-sqlite-asset-helper) was particularly useful.
![Tides Settings Activity](/media/blog/tides-settings.png)
<p>I got to work on setting up a database of tidal data, based around the location he surfs in, and creating a basic UI in which to display it.
When packaging the application with an existing SQLite database, this <a href="https://github.com/jgilfelt/android-sqlite-asset-helper" target="_blank">helper class</a> was particularly useful.</p>
<img src="/media/blog/tides-settings.png" alt="Tides Settings Activity" class="blog-image"/>
<p>
A graphical UI seemed the best approach for displaying the data, so I
tried <a href="http://androidplot.com/" target="_blank">AndroidPlot</a>, a highly-customisable graphing
tried [AndroidPlot]](http://androidplot.com/), a highly-customisable graphing
library, to show the tidal patterns day-by-day. This seemed to work OK (though not entirely accurately - tidal patterns form
more of a cosine wave rather than the zigzags my graph produced, but the general idea is there), so I added more features, such as
a tide table (the more traditional approach) and a sunrise and sunset timer.
</p>
<p>I showed him the app at this stage, and he decided it could be improved by adding weather forecasts. Obviously, preidcting the
weather cannot be done offline, so having sourced a decent <a href="http://www.worldweatheronline.com/" target="_blank">weather API</a>,
I showed him the app at this stage, and he decided it could be improved by adding weather forecasts. Obviously, preidcting the
weather cannot be done offline, so having sourced a decent [weather API](http://www.worldweatheronline.com/),
I added the weather forecast for his area too. Due to the rate-limiting of World Weather Online, a cache is stored in a database
on the host for this website, which, when queried by the app, will make the request on the app's behalf and store the data until
it is stale.</p>
it is stale.
<p>I added a preferences activity for some general customisation, and that's as far as I've currently got. In terms of development,
I added a preferences activity for some general customisation, and that's as far as I've currently got. In terms of development,
I guess it's been a good introduction to the ideas behind various methodologies and features, such as the manifest file, networking,
local storage, preferences, and layout design. I'll create a Github repository for it when I get round to it.</p>
local storage, preferences, and layout design. I'll create a Github repository for it when I get round to it.

View File

@ -0,0 +1,11 @@
---
date: "2013-01-21T17:00:00Z"
title: Research Poster Day
description: "Attending the research poster day"
tags: [research, cardiffuniversity]
---
Each January the School of Computer Science hosts a poster day in order for the research students to demonstrate their current work to
other research students, research staff and undergraduates. The event lets members of the department see what other research is being done outside of their own group and gives researchers an opportunity to defend their research ideas.
This year, I focused on my current research area, which is to do with inferring how interesting a Tweet is based on a comparison between simulated retweet patterns and the propagation behaviour demonstrated by the Tweet in Twitter itself. The poster highlights recent work in the build-up to this, a general overview of how the research works, and finishes with where I want to take this research in the future.

View File

@ -0,0 +1,24 @@
---
date: "2013-02-18T17:00:00Z"
title: ScriptSlide
description: "Small JS library: scriptslide"
tags: [javascript, project, technology]
---
I've taken to writing most of my recent presentations in plain HTML (rather than using third-party software or services). I used
JavaScript to handle the appearance and ordering of slides.
I bundled the JS into a single script, `js/scriptslide.js`, which can be configured using the `js/config.js` script.
There is a [GitHub repo](https://github.com/willwebberley/ScriptSlide) for the code, along with example usage and instructions.
Most configuration can be done through `js/config.js`, which supports many features, including:
- Set the slide transition type (appear, fade, slide)
- Set the logos, page title, etc.
- Configure the colour scheme
Then simply create an HTML document, set some other styles (there is a template in `css/styles.css`), and
put each slide inside `<section>...</section>` tags. The slide menu is then generated automatically
when the page is loaded.

View File

@ -0,0 +1,15 @@
---
date: "2013-03-07T17:00:00Z"
title: Gower Tides App Released
description: "Announcing the release of my Gower Tides Android app"
tags: [android, project, technology]
---
A [few posts back](/blog/2012/11/13-delving-into-android), I talked
about the development of an Android app for tide predictions for South Wales. This app is now on [Google Play](https://play.google.com/store/apps/details?id=net.willwebberley.gowertides).
If you live in South Wales and are vaguely interested in tides/weather, then you should probably download it :)
The main advantage is that the app does not need any data connection to display the tidal data, which is useful in areas
with low signal. In future, I hope to add further features, such as a more accurate tide graph (using a proper 'wave'),
surf reports, and just general UI updates.

View File

@ -0,0 +1,10 @@
---
date: "2013-03-30T17:00:00Z"
title: Decking Building
description: "Building a deck for the garden"
tags: [life]
---
![A new decking](/media/blog/decking.png)
I managed to turn about two tonnes of material into something vaguely resembling 'decking' in my back garden this weekend. It makes the area look much nicer, but whether it actually stays up is a completely different matter.

View File

@ -0,0 +1,46 @@
---
date: "2013-04-05T17:00:00Z"
title: "AJAX + Python + Amazon S3"
description: "Direct to S3 uploads using Python and AWS S3"
tags: [python, aws, s3, technology]
slug: ajax-+-python-+-amazon-s3
---
I wanted a way in which users can seamlessly upload images for use in the Heroku application discussed in previous posts.
Ideally, the image would be uploaded through AJAX as part of a data-entry form, but without having to refresh the page or anything else that would disrupt the user's experience. As far as I know, barebones jQuery does not support AJAX uploads, but [this handy plugin](http://www.malsup.com/jquery/form/#file-upload) does.
### Handling the upload (AJAX)
I styled the file input nicely (in a similar way to [this guy](http://ericbidelman.tumblr.com/post/14636214755/making-file-inputs-a-pleasure-to-look-at)) and added the JS so that the upload is sent properly (and to the appropriate URL) when a change is detected to the input (i.e. the user does not need to click the 'upload' button to start the upload).
### Receiving the upload (Python)
The backend, as previously mentioned, is written in Python as part of a Flask app. Since Heroku's customer webspace is read-only, uploads would have to be stored elsewhere. [Boto](http://boto.s3.amazonaws.com/index.html)'s a cool library for interfacing with various AWS products (including S3) and can easily be installed with `pip install boto`. From this library, we're going to need the `S3Connection` and `Key` classes:
```
from boto.s3.connection import S3Connection
from boto.s3.key import Key
```
Now we can easily handle the transfer using the `request` object exposed to Flask's routing methods:
```
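# Grab the uploaded file from the request, then stream it into a new S3 key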
file = request.files['file_input_name']
con = S3Connection('<AWS_KEY>', '<AWS_SECRET>')
key = Key(con.get_bucket('<BUCKET_NAME>'))
key.set_contents_from_file(file)
```
Go to the next step for the AWS details and the bucket name. Depending on which AWS region you chose (e.g. US, Europe, etc.), your file will be accessible at something like `https://s3-eu-west-1.amazonaws.com/<BUCKET_NAME>/<FILENAME>`. If you want, you can also set, among other things, the file's MIME type and access permissions:
```
key.set_metadata('Content-Type', 'image/png')
key.set_acl('public-read')
```
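Putting those pieces together, a minimal sketch of the whole receiving endpoint might look something like this (the route and input names are illustrative, not lifted from my actual app):
```
from flask import Flask, request
from boto.s3.connection import S3Connection
from boto.s3.key import Key

app = Flask(__name__)

@app.route('/upload', methods=['POST'])
def upload():
    # The file sent by the AJAX form plugin
    file = request.files['file_input_name']
    # Connect and upload (credentials and bucket name are placeholders)
    con = S3Connection('<AWS_KEY>', '<AWS_SECRET>')
    key = Key(con.get_bucket('<BUCKET_NAME>'))
    key.key = file.filename
    key.set_metadata('Content-Type', file.mimetype)
    key.set_contents_from_file(file)
    key.set_acl('public-read')
    return 'https://s3-eu-west-1.amazonaws.com/<BUCKET_NAME>/' + file.filename
```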
### Setting up the bucket (Amazon S3)
Finally you'll need to create the bucket. Create or log into your AWS account, go to the AWS console, choose your region (if you're in Europe, then the Ireland one is probably the best choice) and enter the S3 section. Here, create a bucket (the name needs to be globally unique). Now, go to your account settings page to find your AWS access key and secret and plug these, along with the bucket name, into the appropriate places in your Python file.
And that's it. For large files, this may tie up your Heroku dynos a bit while they carry out the upload, so this technique is best for smaller files (especially if you're only using the one web dyno). My example of a working implementation of this is available [in this file](https://github.com/willwebberley/niteowl-web/blob/master/api.py).

View File

@ -0,0 +1,12 @@
---
date: "2013-04-11T17:00:00Z"
title: Cardiff Open Sauce Hackathon
description: "Attending the Cardiff Open Sauce Hackathon"
tags: [event, cardiffuniversity]
---
Next week I, along with others in a team, am taking part in [Cardiff Open Sauce Hackathon](http://www.cs.cf.ac.uk/hackathon/).
If you're in the area and feel like joining in for the weekend then sign up at the link above.
The hackathon is a two-day event in which teams work to 'hack together' smallish projects, which will be open-sourced at the end of the weekend. Whilst we have a few ideas already for potential projects, if anyone has any cool ideas for something relatively quick, but useful, to make, then please let me know!

View File

@ -0,0 +1,30 @@
---
date: "2013-04-16T17:00:00Z"
title: Trials of Eduroam
description: "Connecting to Eduroam using Arch Linux"
tags: [linux, wifi, technology]
---
I've been having trouble connecting to Eduroam, at least reliably and persistently, without heavy desktop environments or complicated network managers. Eduroam is the wireless networking service used by many Universities in Europe, and whilst it would probably work fine using the tools provided by heavier DEs, I wanted something that could just run quickly and independently.
Many approaches require the editing of loads of config files (especially true for `netcfg`), which would need altering again after things like password changes. The approach I used (for Arch Linux) is actually really simple and involves the use of the user-contributed `wicd-eduroam` package available in the [Arch User Repository](https://aur.archlinux.org/packages/wicd-eduroam/).
Obviously, `wicd-eduroam` is related to, and depends on, `wicd`, a handy network connection manager, so install that first:
```
# pacman -S wicd
$ yaourt -S wicd-eduroam
```
(If you don't use `yaourt` download the [tarball](https://aur.archlinux.org/packages/wi/wicd-eduroam/wicd-eduroam.tar.gz) and build it using the `makepkg` method.)
`wicd` can conflict with other network managers, so stop and disable them before starting and enabling `wicd`. This will allow it to start up at boot time, e.g.:
```
# systemctl stop NetworkManager
# systemctl disable NetworkManager
# systemctl start wicd
# systemctl enable wicd
```
Now start `wicd-client` (or set it to autostart), let it scan for networks, and edit the properties of the network `eduroam`. Set the encryption type as `eduroam` in the list, enter the username and password, click OK and then allow it to connect.

View File

@ -0,0 +1,16 @@
---
date: "2013-04-23T17:00:00Z"
title: flyingsparx.net On Digital Ocean
description: "Deploying my personal website to Digital Ocean"
tags: [digitalocean, technology]
---
My hosting for [my website](http://www.willwebberley.net) has nearly expired, so I have been looking for renewal options.
These days I tend to need to use servers for more than simple web-hosting, and most do not provide the flexibility that a VPS would. Having (mostly) full control over a properly-maintained virtual cloud server is so much more convenient, and allows you to do tonnes of stuff beyond simple web hosting.
I have some applications deployed on [Heroku](https://www.heroku.com), which is definitely useful and easy for this purpose, but I decided to complement this for my needs by buying a 'droplet' from [Digital Ocean](https://www.digitalocean.com).
Droplets are DO's term for a server instance, and are super quick to set up (55 seconds from first landing at their site to a booted virtual server, they claim) and very reasonably priced. I started an Arch instance, quickly set up nginx, Python and uwsgi, and started this blog and site as a Python app running on the Flask microframework.
So far, I've had no issues, and everything seems to work quickly and smoothly. If all goes to plan, over the next few months I'll migrate some more stuff over, including the backend for the Gower Tides app.

View File

@ -0,0 +1,16 @@
---
date: "2013-04-25T17:00:00Z"
title: eartub.es
description: "Working on eartub.es at the Cardiff Open Sauce Hackathon"
tags: [event, project, cardiffuniversity]
---
Last weekend I went to [CFHack Open Sauce Hackathon](http://www.cs.cf.ac.uk/hackathon). I worked in a team with [Chris](http://christopher-gwilliams.com), [Ross](https://twitter.com/OnyxNoir) and [Matt](http://users.cs.cf.ac.uk/M.P.John/).
We started work on [eartub.es](http://eartub.es), which is a web application for suggesting movies based on their soundtracks. We had several ideas for requirements we wanted to meet but, due to the nature of hackathons, we didn't do nearly as much as we thought we would!
For now, eartub.es allows you to search for a movie (from a 2.5 million movie database) and view other movies with similar soundtracks. This is currently based on cross-matching the composer between movies, but more in-depth functionality is still in the works. We have nearly completed Last.fm integration, which would allow the app to suggest movies from your favourite and most listened-to music, and are working towards genre-matching and other, more complex, learning techniques. The registration functionality is disabled while we add this extra stuff.
The backend is written in Python and runs as a Flask application. Contrary to my usual preference, I worked on the front end of the application, but also wrote our internal API for Last.fm integration. It was a really fun experience, in which everyone got on with their own individual parts, and it was good to see the project come together at the end of the weekend.
The project's source is on [GitHub](https://github.com/encima/eartubes).

View File

@ -0,0 +1,14 @@
---
date: "2013-05-07T17:00:00Z"
title: Contribution to Heroku Dev Center
description: "Contributing a blog post on direct uploads to S3 to the Heroku Dev Center"
tags: [contribution, heroku, python, aws, s3]
---
The [Heroku Dev Center](https://devcenter.heroku.com) is a repository of guides and articles to provide support for those writing applications to be run on the [Heroku](https://heroku.com) platform.
I recently contributed an article for carrying out [Direct to S3 File Uploads in Python](https://devcenter.heroku.com/articles/s3-upload-python), as I have previously used a very similar approach to interface with Amazon's Simple Storage Service in one of my apps running on Heroku.
The approach discussed in the article focuses on avoiding as much server-side processing as possible, with the aim of preventing the app's web dynos from becoming too tied up and unable to respond to further requests. This is done by using client-side JavaScript to asynchronously carry out the upload directly to S3 from the web browser. The only necessary server-side processing involves the generation of a temporarily-signed (using existing AWS credentials) request, which is returned to the browser in order to allow the JavaScript to successfully make the final `PUT` request.
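For a flavour of that server-side step, here's a rough sketch (not the article's exact code - the key name and expiry are illustrative) of generating the temporarily-signed URL with boto:
```
from boto.s3.connection import S3Connection

# Credentials and names are placeholders
con = S3Connection('<AWS_KEY>', '<AWS_SECRET>')
signed_url = con.generate_url(
    expires_in=300,  # the signature is only valid for five minutes
    method='PUT',
    bucket='<BUCKET_NAME>',
    key='uploads/example.png',
    headers={'Content-Type': 'image/png'},
)
# Return signed_url to the browser, which then PUTs the file to S3 itself
```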
The guide's [companion git repository](https://github.com/willwebberley/FlaskDirectUploader) hopes to demonstrate a simple use-case for this system. As with all of the Heroku Dev Center articles, if you have any feedback (e.g. what could be improved, what helped you, etc.), then please do provide it!

View File

@ -0,0 +1,14 @@
---
date: "2013-05-26T17:00:00Z"
title: Gower Tides Open-Sourced
description: "Open-sourcing Gower Tides Android app"
tags: [android, project, technology]
---
This is just a quick post to mention that I have made the source for the [Gower Tides](https://play.google.com/store/apps/details?id=net.willwebberley.gowertides) app on Google Play public.
The source repository is available on [GitHub](https://github.com/willwebberley/GowerTides). From the repository I have excluded:
- **Images & icons** - It is not my place to distribute graphics not owned or created by me. Authors are credited in the repo's README and in the application.
- **External libraries** - The app requires a graphing package and a class to help with handling locally-packaged SQLite databases. Links to both are also included in the repo's README.
- **Tidal data** - The tidal data displayed in the app has also been excluded. However, the format for the data stored by the app should be relatively obvious from its access in the [source](https://github.com/willwebberley/GowerTides/blob/master/src/net/willwebberley/gowertides/utils/DayDatabase.java).

View File

@ -0,0 +1,28 @@
---
date: "2013-06-12T17:00:00Z"
title: WekaPy
description: "Weka bindings for Python"
tags: [weka, python, machinelearning, project, technology]
---
Over the last few months, I've started to use Weka more and more. [Weka](http://www.cs.waikato.ac.nz/ml/weka/) is a toolkit, written in Java, that I use to create models with which to make classifications on data sets.
It features a wide variety of different machine learning algorithms (although I've used the logistic regressions and Bayesian networks most) which can be trained on data in order to make classifications (or 'predictions') for sets of instances.
Weka comes as a GUI application and also as a library of classes for use from the command line or in Java applications. I needed to use it to create some large models and several smaller ones, and using the GUI version makes the process of training the model, testing it with data and parsing the classifications a bit clunky. I needed to automate the process a bit more.
Nearly all of the development work for my PhD has been in Python, and it'd be nice to just plug in some machine learning processes over my existing code. Whilst there are some wrappers for Weka written for Python ([this project](https://github.com/chrisspen/weka), [PyWeka](https://pypi.python.org/pypi/PyWeka), etc.), most of them feel unfinished, are under-documented or are essentially just instructions on how to use [Jython](http://www.jython.org/).
So, I started work on [WekaPy](https://github.com/willwebberley/WekaPy), a simple wrapper that allows efficient and Python-friendly integration with Weka. It basically just involves subprocesses to execute Weka from the command line, but also includes several areas of functionality aimed to provide more of a seamless and simple experience to the user.
I haven't got round to writing proper documentation yet, but most of the current functionality is explained and demo'd through examples [here](https://github.com/willwebberley/WekaPy#example-usage). Below is an example demonstrating its ease of use.
```
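# Train a Bayesian network on train.arff, then classify the instances in test.arff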
model = Model(classifier_type = "bayes.BayesNet")
model.train(training_file = "train.arff")
model.test(test_file = "test.arff")
```
All that is needed is to instantiate the model with your desired classifier, train it with some training data and then test it against your test data. The predictions can then be easily extracted from the model as shown [in the documentation](https://github.com/willwebberley/WekaPy#accessing-the-predictions).
I hope to continue updating the library and improving the documentation when I get a chance! Please let me know if you have any ideas for functionality.

View File

@ -0,0 +1,14 @@
---
date: "2013-06-20T17:00:00Z"
title: Accidental Kernel Upgrades on Digital Ocean
description: "Issues when accidentally upgrading the kernel in Arch Linux on Digital Ocean"
tags: [linux, digitalocean, technology]
---
Today I issued a full upgrade of the server at flyingsparx.net, which is hosted by [Digital Ocean](https://www.digitalocean.com). By default, on Arch, this will upgrade every currently-installed package (where there is a counterpart in the official repositories), including the Linux kernel and the kernel headers.
Digital Ocean maintain their own kernel versions and do not currently allow kernel switching, which is something I completely forgot. I rebooted the machine and tried re-connecting, but SSH couldn't find the host. Digital Ocean's website provides a console for connecting to the instance (or 'droplet') through VNC, which I used, through which I discovered that none of the network interfaces (except the loopback) were being brought up. I tried everything I could think of to fix this, but without being able to connect the droplet to the Internet, I was unable to download any other packages.
Eventually, I contacted DO's support, who were super quick in replying. They pointed out that the upgrade may have also updated the kernel (which, of course, it had), and that therefore the modules for networking weren't going to load properly. I restored the droplet from one of the automatic backups, swapped the kernel back using DO's web console, rebooted and things were back to where they should be.
The fact that these things can be instantly fixed from their console and their quick customer support make Digital Ocean awesome! If they weren't possible then this would have been a massive issue, since the downtime also took out this website and the backend for a couple of mobile apps. If you use an Arch instance, then there is a [community article](https://www.digitalocean.com/community/articles/pacman-syu-kernel-update-solved-how-to-ignore-arch-kernel-upgrades) on their website explaining how to make pacman ignore kernel upgrades and to stop this from happening.

View File

@ -0,0 +1,14 @@
---
date: "2013-07-03T17:00:00Z"
title: Magic Seaweed's Awesome New API
description: "Making use of the Magic Seaweed web API for surf data"
tags: [project, android, technology]
---
Back in March, I emailed [Magic Seaweed](http://magicseaweed.com) to ask them if they had a public API for their surf forecast data. They responded that they didn't at the time, but that it was certainly on their to-do list. I am interested in the marine data for my [Gower Tides](https://play.google.com/store/apps/details?id=net.willwebberley.gowertides) application.
Yesterday, I visited their website to have a look at the surf reports and some photos, when I noticed the presence of a [Developer](http://magicseaweed.com/developer/api) link in the footer of the site. It linked to pages about their new API, with an overview describing exactly what I wanted.
Since the API is currently in beta, I emailed them requesting a key, which they were quick to respond with and helpfully included some further example request usages. They currently do not have any strict rate limits in place, but instead have a few [fair practice terms](http://magicseaweed.com/developer/terms-and-conditions) to discourage developers from going a bit trigger happy on API requests. They also request that you use a hyperlinked logo to accredit the data back to them. Due to caching, I will not have to make too many requests (since the application will preserve 'stale' data for 30 minutes before refreshing from Magic Seaweed, when requested), so hopefully that will keep the app's footprint down.
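The caching idea is roughly sketched below (this is illustrative, not the actual backend code - the endpoint constant and names are placeholders):
```
import time
import requests

MSW_ENDPOINT = '<MAGIC_SEAWEED_API_URL>'  # placeholder for the real API URL
CACHE = {}  # location ID -> (fetched_at, data)
MAX_AGE = 30 * 60  # keep data for 30 minutes

def surf_data(location_id):
    cached = CACHE.get(location_id)
    # Serve the 'stale' data while it is under 30 minutes old
    if cached and time.time() - cached[0] < MAX_AGE:
        return cached[1]
    # Otherwise refresh from Magic Seaweed and cache the response
    data = requests.get(MSW_ENDPOINT, params={'spot_id': location_id}).json()
    CACHE[location_id] = (time.time(), data)
    return data
```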
I have written the app's new [backend support](https://github.com/willwebberley/GowerTidesBackend) for handling and caching the surf data ready for incorporating into the Android app soon. So far, the experience has been really good, with the API responding with lots of detailed information - almost matching the data behind their own [surf forecasts](http://magicseaweed.com/Llangennith-Rhossili-Surf-Report/32/). Hopefully they won't remove any of the features when they properly release it!

View File

@ -0,0 +1,28 @@
---
date: "2013-07-31T17:00:00Z"
title: Gower Tides v1.4
description: "Announcing the latest version of Gower Tides Android app"
tags: [android, technology]
---
![Surf forecasts](https://will.now.sh/static/media/v1-4_surf.png)
Last week I released a new version of the tides Android app I'm currently developing.
The idea of the application was initially to simply display the tidal times and patterns for the Gower Peninsula, and that this should be possible without a data connection. Though, as time has gone by, I keep finding more and more things that can be added!
The latest update saw the introduction of 5-day surf forecasts for four Gower locations - Llangennith, Langland, Caswell Bay, and Hunts Bay. All the surf data comes from [Magic Seaweed](http://magicseaweed.com)'s API (which I [talked about](/blog/2013/07/03/magic-seaweeds-awesome-new-api/) last time).
![Location choices](https://flyingsparx.net/static/media/v1-4_location.png)
The surf forecasts are shown, for each day they are available, as a horizontal scroll-view, allowing users to scroll left and right within that day to view the forecast at different times of the day (in 3-hourly intervals).
Location selection is handled by a dialog popup, which shows a labelled map and the four available locations in a list view.
The [backend support](https://github.com/willwebberley/GowerTidesBackend) for the application was modified to now also support 30-minute caching of surf data on a per-location basis (i.e. new calls to Magic Seaweed would not be made if the requested _location_ had been previously pulled in the last 30 minutes). The complete surf and weather data is then shipped back to the phone as one JSON structure.
![Tides view update](https://flyingsparx.net/static/media/v1-4_tides.png)
Other updates were smaller but included an overhaul of the UI (the tide table now looks a bit nicer), additional licensing information, speedier database interaction, and so on.
If you are interested in the source, then that is available [here](https://github.com/willwebberley/GowerTides), and the app itself is on [Google Play](https://play.google.com/store/apps/details?id=net.willwebberley.gowertides&hl=en). If you have any ideas, feedback or general comments, then please let me know!

View File

@ -0,0 +1,22 @@
---
date: "2013-08-31T17:00:00Z"
title: A rather French week
description: "Away in France for a week"
tags: [life, holiday]
---
I recently spent a week in France as part of a holiday with some of my family. Renting houses for a couple of weeks in France or Italy each summer has almost become a bit of a tradition, and it's good to have a relax and a catch-up for a few days. These have been the first proper few days (other than the [decking-building adventure](/blog/13/3/30/a-bit-of-light-construction-on-an-easter-weekend/) back in March) I have had away from University in 2013, so I felt it was well-deserved!
![The house](/media/blog/french-house.JPG)
This year we stayed in the Basque Country of southern France, relatively near Biarritz, in a country farmhouse. Although we weren't really within walking distance of anywhere, the house did come with a pool in the garden, with a swimmable river just beyond, and an amazing, peaceful setting.
Strangely enough, there was no Internet installation at the house, and no cellular reception anywhere nearby. This took a bit of getting used to, but after a while it became quite relaxing not having to worry about checking emails, texts, and Twitter. The only thing to cause any stress was a crazed donkey, living in the field next door, who would start braying loudly at random intervals through the nights, waking everyone up.
![French Gorge](/media/blog/french-gorge.JPG)
As might be expected, the food and drink was exceptional. Although we did end up eating in the house each evening (to save having someone sacrifice themselves to be the designated driver), the foods we bought from the markets were very good, and the fact that wine cost €1.50 per bottle from the local Intermarché gave very little to complain about.
The majority of most days was spent away from the house, visiting local towns, the beaches and the Pyrenees. We spent a few afternoons walking in the mountains, with some spectacular scenery.
![Pyrenees](/media/blog/french-pyrenes.JPG)

View File

@ -0,0 +1,26 @@
---
date: "2013-09-02T17:00:00Z"
title: "Zoned Network Sound-Streaming: The Problem"
description: "Multi-room audio simultaneous playback"
tags: [linux, technology]
---
For a while now, I have been looking for a reliable way to manage zoned music-playing around the house. The general idea is that I'd like to be able to play music from a central point and have it streamed over the network to a selection of receivers, which could be remotely turned on and off when required, but still allow for multiple receivers to play simultaneously.
Apple's [AirPlay](http://www.apple.com/uk/airplay/) has supported this for a while now, but requires the purchasing of AirPlay compatible hardware, which is expensive. It's also very iTunes-based - which is something that I do not use.
Various open-source tools also allow network streaming. [Icecast](http://www.icecast.org/) (through the use of [Darkice](https://code.google.com/p/darkice/)) allows clients to stream from a multimedia server, but this causes pretty severe latency in playback between clients (ranging up to around 20 seconds, I've found) - not a good solution in a house!
[PulseAudio](http://www.freedesktop.org/wiki/Software/PulseAudio/) is partly designed around being able to work over the network, and supports the discovery of other PulseAudio sinks on the LAN and the selection of a sound card to transmit to through TCP. This doesn't seem to support multiple sound card sinks very well, however.
PulseAudio's other network feature is its RTP broadcasting, and this seemed the most promising avenue for progression in solving this problem. RTP utilises UDP, and PulseAudio effectively uses this to broadcast its sound to any devices on the network that might be listening on the broadcast address. This means that one server could be run and sink devices could be set up simply to receive the RTP stream on demand - perfect!
However, in practice, this turned out not to work very well. With RTP enabled, PulseAudio would entirely flood the network with sound packets. Although this isn't a problem for devices with a wired connection, any devices connected wirelessly to the network would be immediately disassociated from the access point due to the complete saturation of PulseAudio's packets being sent over the airwaves.
This couldn't be an option in a house where smartphones, games consoles, laptops, and so on require the WLAN. After researching this problem a fair bit (and finding many others experiencing the same issues), I found [this page](http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Network/RTP/), which describes various methods for using RTP streaming from PulseAudio and includes (at the bottom) the key that could fix my problems - the notion of compressing the audio into MP3 format (or similar) before broadcasting it.
Trying this technique worked perfectly, and did not cause network floods anywhere near as severely as the uncompressed sound stream; wireless clients no longer lost access to the network once the stream was started and didn't seem to lose any noticeable QoS at all. In addition, when multiple clients connected, the sound output would be nearly entirely simultaneous (at least after a few seconds to warm up).
Unfortunately, broadcasting still didn't work well over WLAN (sound splutters and periodic drop-outs), so the master server and any sound sinks would need to be on a wired network. This is a small price to pay, however, and I am happy to live with a few Ethernet-over-power devices around the house. The next stage is to think about what to use as sinks. Raspberry Pis should be powerful enough and are _significantly_ cheaper than Apple's equivalent. They would also allow me to use existing sound systems in some rooms (e.g. the surround-sound in the living room), and other simple speaker setups in others. I also intend to write a program around PulseAudio to streamline the streaming process and a server for discovering networked sinks.
I will write an update when I have made any more progress on this!

View File

@ -0,0 +1,28 @@
---
date: "2013-09-14T17:00:00Z"
title: CasaStream
description: "Discussing a solution for multi-room synchronous audio playback"
tags: [project, linux, technology]
---
In my [last post](/blog/2013/09/02/zoned-network-sound-streaming-the-problem) I discussed methods for streaming music to different zones in the house. More specifically I wanted to be able to play music from one location and then listen to it in other rooms at the same time and in sync.
After researching various methods, I decided to go with using a compressed MP3 stream over RTP. Other techniques introduced too much latency, did not provide the flexibility I required, or simply did not fulfil the requirements (e.g. not multiroom, only working with certain applications and non-simultaneous playback).
To streamline the procedure of compressing the stream, broadcasting the stream, and receiving and playing the stream, I have started a project to create an easily-deployable wrapper around PulseAudio and VLC. The system, somewhat cheesily named [CasaStream](https://github.com/willwebberley/CasaStream) and currently written primarily in Python, relies on a network containing one machine running a CasaStream Master server and any number of machines running a CasaStream Slave server.
![Casastream interface](/media/blog/casastream1.png)
The Master server is responsible for compressing and broadcasting the stream, and the Slaves receive and play the stream back through connected speakers. Although the compression is relatively resource-intensive (at least, for the moment), the Slave server is lightweight enough to be run on low-powered devices, such as the Raspberry Pi. Any machine that is powerful enough to run the Master could also simultaneously run a Slave, so a dedicated machine to serve the music alone is not required.
![Casastream interface](/media/blog/casastream2.png)
The Master server also runs a web interface, which allows the system to be enabled and individual Slaves to be enabled or disabled. Slave servers are automatically discovered by the Master, though the scan range can also be altered from the web interface. In addition, the selection of audio sources to stream (and their output volumes) and the renaming of Slaves are available as options. Sound sources are usually automatically detected by PulseAudio (if it is running), so there is generally no manual intervention required to 'force' the detection of sources.
My current setup consists of a Master server running on a desktop machine in the kitchen, and Slave servers running on various other machines throughout the house (including the same kitchen desktop connected to some orbital speakers and a Raspberry Pi connected to the surround sound in the living room). When all are running, there is no noticeable delay between the audio output in the different rooms.
There are a few easily-installable dependencies required to run both servers. Both require Python (works on V2.*, but I haven't tested on V3), and both require the Flask microframework and VLC. For a full list, please see the [README](https://github.com/willwebberley/CasaStream/blob/master/README.md) at the project's home, which also provides more information on the installation and use.
Unfortunately, there are a couple of caveats: firstly, the system is not reliable over WLAN (the sound gets pretty choppy), so a wired connection is recommended. Secondly, if using ethernet-over-power to mitigate the first caveat, then you may experience sound dropouts every 4-5 minutes. To help with this problem, the Slave servers are set to restart the stream every four minutes (by default).
This is quite an annoying issue, however, since having short sound interruptions every few minutes is very noticeable. Some of my next steps with this project, therefore, are based around trying to find a better fix for this. In addition, I'd like to reduce the dependency footprint (the Slave servers really don't need to use a fully-fledged web server), reduce the power requirements at both ends, and to further automate the installation process.

View File

@ -0,0 +1,14 @@
---
date: "2013-10-05T17:00:00Z"
title: Workshop Presentation in Germany
description: "Presenting research in Germany"
tags: [talk, research]
---
Last week I visited Karlsruhe, in Germany, to give a presentation accompanying a recently-accepted paper. The paper, "Inferring the Interesting Tweets in Your Network", was in the proceedings of the Workshop on Analyzing Social Media for the Benefit of Society ([Society 2.0](http://www.cs.cf.ac.uk/cosmos/node/12)), which was part of the Third International Conference on Social Computing and its Applications ([SCA](http://socialcloud.aifb.uni-karlsruhe.de/confs/SCA2013/)).
Although I only attended the first workshop day, there was a variety of interesting talks on social media and crowdsourcing. My own talk went well and there was some useful feedback from the attendees.
I presented my recent work on the use of machine learning techniques to help in identifying interesting information in Twitter. I rounded up some of the results from the Twinterest experiment we ran a few months ago and discussed how this helped address the notion of information _relevance_ as an extension to global _interestingness_.
I hadn't been to Germany before this, so it was also a culturally-interesting visit. I was only there for two nights but I tried to make the most of seeing some of Karlsruhe and enjoying the traditional food and local beers!

View File

@ -0,0 +1,14 @@
---
date: "2014-01-17T17:00:00Z"
title: Direct-to-S3 Uploads in Node.js
description: "Uploading assets directly to S3 using Node.js"
tags: [heroku, javascript, technology]
---
A while ago I wrote an [article](https://devcenter.heroku.com/articles/s3-upload-python) for [Heroku](https://heroku.com)'s Dev Center on carrying out direct uploads to S3 using a Python app for signing the PUT request. Specifically, the article focussed on Flask but the concept is also applicable to most other Python web frameworks.
I've recently had to implement something similar, but this time as part of a [Node.js](http://nodejs.org) application. Since the only difference between the two approaches is literally just the endpoint used to return a signed request URL, I thought I'd post an update on how the endpoint could be constructed in Node.
The front-end code in the companion repository demonstrates an example of how the endpoint can be queried to retrieve the signed URL, and is available [here](https://github.com/willwebberley/FlaskDirectUploader/blob/master/templates/account.html). Take a look at that repository's README for information on the front-end dependencies.
The full example referenced by the Python article is in a [repository](https://github.com/willwebberley/FlaskDirectUploader) hosted by GitHub and may be useful in providing more context.

View File

@ -0,0 +1,16 @@
---
date: "2014-01-28T17:00:00Z"
title: Seminar at King's College London
description: "Giving a seminar on my research at KCL"
tags: [talk, kcl, research]
---
Last week, I was invited to give a seminar to the Agents and Intelligent Systems group in the [Department of Informatics](http://www.kcl.ac.uk/nms/depts/informatics/index.aspx) at King's College London.
I gave an overview of my PhD research conducted over the past two or three years, from my initial research into retweet behaviours and propagation characteristics through to studies on the properties exhibited by Twitter's social graph and the effects that the interconnection of users have on message dissemination.
I finished by outlining our methods for identifying interesting content on Twitter and by demonstrating its relative strengths and weaknesses as were made clear by crowd-sourced validations carried out on the methodology results.
There were some very interesting and useful questions from the audience, some of which are now being taken into consideration in my thesis. It was also good to visit another computer science department and to hear about the work done independently and collaboratively by its different research groups.
The slides from the seminar are available [here](http://flyingsparx.net/static/downloads/kcl_seminar_2014.pdf) and there is a [blog post](http://inkings.org/2014/02/03/tweets-and-retweets) about it on the Department of Informatics' website.

View File

@ -0,0 +1,12 @@
---
date: "2014-03-17T17:00:00Z"
title: Node.js Contribution to Heroku's Dev Center
description: "Contributing another article to the Heroku Dev Center"
tags: [contribution, heroku, javascript]
---
I recently wrote a new article for Heroku's Dev Center on carrying out asynchronous direct-to-S3 uploads using Node.js.
The article is based heavily on the previous [Python version](/blog/13/5/7/contribution-to-heroku-dev-center/), where the only major change is the method for signing the AWS request. This method was outlined in an [earlier blog post](/blog/2014/1/17/direct-to-s3-uploads-in-node.js).
The article is available [here](https://devcenter.heroku.com/articles/s3-upload-node) and there is also a [companion code repository](https://github.com/willwebberley/NodeDirectUploader) for the example it describes.

View File

@ -0,0 +1,12 @@
---
date: "2014-03-26T17:00:00Z"
title: Talk on Open-Source Contribution
description: "Internal seminar on contributing to open-source projects"
tags: [talk, opensource]
---
Today I gave an internal talk at the School of Computer Science & Informatics about open-source contribution.
The talk described some of the disadvantages of the ways in which hobbyists and the non-professional sector publicly publish their code. A lot of the time these projects do not receive much visibility or use from others.
Public contribution is important to the open-source community, which is driven largely by volunteers and enthusiasts, so the point of the talk was to try and encourage people to share expert knowledge through contributing documentation (wikis, forums, articles, etc.), maintaining and adopting packages, and getting more widely involved.

View File

@ -0,0 +1,12 @@
---
date: "2015-01-20T17:00:00Z"
title: End of an Era
description: "Completing my PhD"
tags: [life, phd, research]
---
I recently received confirmation of my completed PhD! I submitted my thesis in May 2014, passed my viva in September and returned my final corrections in December.
I was examined internally by [Dr Pete Burnap](http://burnap.org) and also by [Dr Jeremy Pitt](http://www.iis.ee.ic.ac.uk/~j.pitt/Home.html) of Imperial College London.
The whole PhD was an amazing experience, even during the more stressful moments. I learnt a huge amount across many domains and I cannot thank my supervisors, [Dr Stuart Allen](http://users.cs.cf.ac.uk/Stuart.M.Allen) and [Prof Roger Whitaker](http://users.cs.cf.ac.uk/R.M.Whitaker), enough for their fantastic support and guidance throughout.

View File

@ -0,0 +1,26 @@
---
date: "2015-01-27T17:00:00Z"
title: NHS Hack Day
description: "Taking part in the 2015 NHS Hack Day"
tags: [event, nhs]
---
This weekend I took part in the [NHS Hack Day](http://nhshackday.com). The idea of the event is to bring healthcare professionals together with technology enthusiasts in order to build stuff that is useful for those within the NHS and for those that use it. It was organised by [AnneMarie Cunningham](https://twitter.com/amcunningham), who did a great job in making the whole thing run smoothly!
![NHS Hack Day](/media/blog/nhshackday2.jpg)
**This was our team! The image is released under a Creative Commons BY-NC2.0 license by [Paul Clarke](https://www.flickr.com/photos/paul_clarke).**
I was asked to go along and give a hand by [Martin](http://martinjc.com), who also had four of his MSc students with him. [Matt](http://mattjw.net), previously from [Cardiff CS&I](http://cs.cf.ac.uk), also came to provide his data-handling expertise.
![NHS Hack Day 2](/media/blog/nhshackday.png)
We built a webapp, called [Health Explorer Wales](http://compjcdf.github.io/nhs_hack/app.html), that attempts to visualise various data for health boards and communities in Wales. One of the main goals of the app was to make it maintainable, so that users in future could easily add their own geographic or numeric data to visualise. For this, it was important to decide on an extensible [data schema](https://github.com/CompJCDF/nhs_hack/blob/master/data/descriptors.json) for describing data, and suitable data formats.
Once the schema was finalised, we were able to go ahead and build the front-end, which used [D3.js](http://d3js.org) to handle the visualisations. This was the only third-party library we used in the end. The rest of the interface included controls, such as a dataset-selector and controls for sliding back through time (for timeseries data). The app is purely front-end, which means it can essentially be shipped as a single HTML file (with linked scripts and styles).
We also included an 'add dataset' feature, which allows users to add a dataset to be visualised, as long as the schema is observed. In true hackathon style, any exceptions thrown will currently cause the process to fail silently ;) The [GitHub repository](https://github.com/CompJCDF/nhs_hack) for the app contains a wiki with some guidance on data-formatting. Since the app is front-end only, any data added is persisted using HTML5 local storage and is therefore user-specific.
Generally, I am pleased with the result. The proof-of-concept is (mostly) mobile-friendly, and allows for easily showing off data in a more comprehensible way than through just using spreadsheets. Although we focussed on visualising only two datatypes initially (we all <3 [#maps](https://twitter.com/_r_309)), we hope to extend this by dropping in modules for supporting new formats in the future.
There were many successful projects completed as part of the event, including a new 'eye-test' concept involving a zombie game using an Oculus Rift and an app for organising group coastal walks around Wales. A full list of projects is available on the event's [website](http://nhshackday.com/previous/events/2015/01/cardiff). I really enjoyed the weekend and hope to make the next one in London in May!

View File

@ -0,0 +1,10 @@
---
date: "2015-02-05T17:00:00Z"
title: Developing Useful APIs for the Web
description: "Internal seminar on developing useful and effective web APIs"
tags: [talk, webapi]
---
Yesterday, I gave a talk about my experiences with developing and using RESTful APIs, with the goal of providing tips for structuring such interfaces so that they work in a useful and sensible way.
I went back to first principles, with overviews of basic HTTP messages as part of the request-response cycle and using sensible status codes in HTTP responses. I discussed the benefits of 'collection-oriented' endpoint URLs to identify resources that can be accessed and modified and the use of HTTP methods to describe what to do with these resources.
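To make that concrete, here's a minimal sketch of the style in Flask (the resource and payload here are purely illustrative):
```
from flask import Flask, jsonify, request

app = Flask(__name__)
ARTICLES = {}  # hypothetical in-memory store: id -> article

@app.route('/articles', methods=['GET'])
def list_articles():
    # GET on the collection lists its resources
    return jsonify(articles=list(ARTICLES.values())), 200

@app.route('/articles', methods=['POST'])
def create_article():
    # POST on the collection creates a new resource
    article = request.get_json()
    ARTICLES[article['id']] = article
    return jsonify(article), 201

@app.route('/articles/<int:article_id>', methods=['GET'])
def get_article(article_id):
    # GET on a member retrieves it; a sensible status code signals absence
    if article_id not in ARTICLES:
        return jsonify(error='not found'), 404
    return jsonify(ARTICLES[article_id]), 200
```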

View File

@ -0,0 +1,18 @@
---
date: "2015-02-18T17:00:00Z"
title: Web and Social Computing
description: "Lecturing masters students on Web and Social Computing"
tags: [cardiffuniversity, teaching]
---
This week I begin lecturing a module for [Cardiff School of Computer Science and Informatics](http://cs.cf.ac.uk)' postgraduate MSc course in [Advanced Computer Science](http://courses.cardiff.ac.uk/postgraduate/course/detail/p071.html).
The module is called Web and Social Computing, with the main aim being to introduce students to the concepts of social computing and web-based systems. The course will include both theory and practical sessions in order to allow them to enhance their knowledge derived from literature with the practice of key concepts. We'll also have lots of guest lectures from experts in specific areas to help reinforce the importance of this domain.
As part of the module, I will encourage students to try and increase their web-presence and to interact with a wider community on the Internet. They'll do this by engaging more with social media and by maintaining a blog on things they've learned and researched.
Each week, the students will give a 5-minute [Ignite-format](http://en.wikipedia.org/wiki/Ignite_%28event%29) talk on the research they've carried out. The quick presentation style will allow everyone in the group to convey what they feel are the most important and relevant parts in current research across many of the topics covered in the module.
We'll cover quite a diverse range of topics, starting from an introduction to networks and a coverage of mathematical graph theory. This will lead on to social networks, including using APIs to harvest data in useful ways. In the final few weeks, we'll delve into subjects around socially-driven business models and peer-to-peer finance systems, such as Bitcoin.
During the course, I hope that students will gain practical experience with various technologies, such as [NetworkX](https://networkx.github.io) for modelling and visualising graphs in Python, [Weka](http://www.cs.waikato.ac.nz/ml/weka) for some machine learning and classification, and good practices for building and using web APIs.

View File

@ -0,0 +1,29 @@
---
date: "2015-04-28T17:00:00Z"
title: Media and volume keys in i3
description: "Keybinds for media control and volume in i3 window manager"
tags: [linux, i3, technology]
---
As is the case with many people, all music I listen to on my PC these days plays from the web through a browser. I'm a heavy user of Google Play Music and SoundCloud, and using Chrome to handle everything means playlists and libraries (and the way I use them through extensions) sync up properly everywhere I need them.
On OS X I use [BeardedSpice](http://beardedspice.com) to map the keyboard media controls to browser-based music-players, and the volume keys adjust the system volume as they should. Using [i3](https://i3wm.org) (and other lightweight window managers) can make you realise what you take for granted when using more fully-fledged arrangements, but it doesn't take long to achieve the same functionality on such systems.
A quick search revealed [keysocket](https://github.com/borismus/keysocket) - a Chrome extension that listens out for the hardware media keys and is able to interact with a large list of supported music websites. In order to get the volume controls working, I needed to map i3 through to `alsa`, and this turned out to be pretty straightforward too. It only required the addition of three lines to my i3 config to handle the volume-up, volume-down, and mute keys:
```
bindsym XF86AudioRaiseVolume exec amixer -q set Master 4%+ unmute
bindsym XF86AudioLowerVolume exec amixer -q set Master 4%- unmute
bindsym XF86AudioMute exec amixer -q set Master toggle
```
And for fun, I added the block below to `~/.i3status.conf` to get the volume displayed on the status bar:
```
volume master {
    format = "♪ %volume "
    device = "default"
    mixer = "Master"
    mixer_idx = 0
}
```

View File

@ -0,0 +1,25 @@
---
date: "2015-05-01T17:00:00Z"
title: Using Weka in Go
description: "Weka bindings for Go"
tags: [weka, golang, machinelearning, technology]
---
A couple of years ago I wrote a [blog post](/blog/13/6/12/wekapy) about wrapping some of [Weka](http://www.cs.waikato.ac.nz/ml/weka)'s classification functionality to allow it to be used programmatically in Python programs. A small project I'm currently working on at home is around taking some of the later research from my PhD work to see if it can be expressed and used as a simple web-app.
I began development in [Go](https://golang.org) as I hadn't yet spent much time working with the language. The research work involves using a Bayesian network classifier to help infer a [tweet's interestingness](http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6686092&tag=1), and while Go machine-learning toolkits do [exist](http://biosphere.cc/software-engineering/go-machine-learning-nlp-libraries), I wanted to use my existing models that were serialized in Java by Weka.
I started working on [WekaGo](https://github.com/willwebberley/WekaGo), which is able to programmatically support simple classification tasks within a Go program. It essentially just manages the model, abstracts the generation of [ARFF](http://www.cs.waikato.ac.nz/ml/weka/arff.html) files, and executes the necessary Java to make it quick and easy to train and classify data:
```
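// Train a Bayesian network on some instances, then classify a test set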
model := wekago.NewModel("bayes.BayesNet")
...
model.AddTrainingInstance(train_instance1)
...
model.Train()
model.AddTestingInstance(test_instance1)
...
model.Test()
```
Results from the classification can then be examined, as [described](https://github.com/willwebberley/WekaGo/blob/master/README.md).

View File

@ -0,0 +1,36 @@
---
date: "2015-05-12T17:00:00Z"
title: Nintendo's Hotspot 'API'
description: "Using Nintendo's 3DS Hotspot API for Streetpass and Spotpass"
tags: [android, nintendo, technology]
---
Since getting a DS, [StreetPass](http://www.nintendo.com/3ds/built-in-software/streetpass) has become quite addictive. It's actually pretty fun checking the device after walking through town or using public transport to see a list of Miis representing the people you've been near recently, and the minigames (such as StreetPass Quest) that require you to 'meet' people in order to advance also make it more involved. Essentially the more you're out and about, the further you can progress - this is further accentuated through Play Coins, which can be used to help 'buy' your way forward and are earned for every 100 steps taken whilst holding the device.
![Nintendo Zone](/media/blog/nintendozone2.png)
The DS systems can also use relay points in Nintendo Zone hotspots to collect StreetPass hits. These zones are special WiFi access points hosted in certain commercial venues (e.g. in McDonalds and Subway restaurants), and allow you to 'meet' people around the world who also happen to be in another Nintendo Zone at the same time. As such, users can get a lot of hits very quickly (up to a maximum of 10 at a time). There are various ways people have [found](https://gbatemp.net/threads/how-to-have-a-homemade-streetpass-relay.352645) to set up a 'home' zone, but Nintendo have also published a [map](https://microsite.nintendo-europe.com/hotspots) to display official nearby zones.
However, their map seems a little clunky to use while out and about, so I wanted to see if there could be an easier way to get this information more quickly. When using the map, the network logs revealed `GET` requests being made to:
```
https://microsite.nintendo-europe.com/hotspots/api/hotspots/get
```
The location for which to retrieve data is specified through the `zoom` and `bbox` parameters, which seem to map directly to the zoom level and the bounds reported by the underlying Google Maps API being used. For some reason, the parameter `summary_mode=true` also needs to be set. As such, an (unencoded) request for central Cardiff may look like this:
```
/hotspots/api/hotspots/get?summary_mode=true&zoom=18&bbox=51.480043,-3.180592,51.483073,-3.173028
```
Here, the coordinates (`51.480043,-3.180592` and `51.483073,-3.173028`) respectively represent the lower-left and upper-right corners of the bounding box. The response is in JSON, and contains a lat/lng for each zone, a name, and an ID that can be used to retrieve more information about the host's zone using this URL format:
```
https://microsite.nintendo-europe.com/hotspots/#hotspot/<ID>
```
When the map is zoomed out (to prevent map-cluttering), a zone 'group' might be returned instead of an individual zone, with the size of each group indicated. Zooming back in to a group then reveals the individual zones existing in that area.
![Nintendo Zone 2](/media/blog/nintendozone1.png)
It seems that this server endpoint does not support cross-origin resource-sharing (CORS), which means that the data is not retrievable for a third-party web-app (at least, without some degree of proxying) due to browser restrictions. However, and especially since the endpoint currently requires no session implementation or other kind of authentication, the data seems very easily retrievable and manageable for non-browser applications and other types of systems.
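As a quick sketch, pulling the data from outside a browser is as simple as the following (bearing in mind this is an unofficial endpoint whose details could change at any time):
```
import requests

# Bounding box for central Cardiff (lower-left and upper-right corners)
params = {
    'summary_mode': 'true',
    'zoom': 18,
    'bbox': '51.480043,-3.180592,51.483073,-3.173028',
}
resp = requests.get(
    'https://microsite.nintendo-europe.com/hotspots/api/hotspots/get',
    params=params,
)
# Each entry describes a zone (name, lat/lng, ID) or a zone group when zoomed out
print(resp.json())
```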

View File

@ -0,0 +1,14 @@
---
date: "2015-05-27T17:00:00Z"
title: "Android: Consuming Nintendo Hotspot Data"
description: "Using the Nintendo Streetpass API in an Android app"
tags: [android, nintendo, project, technology]
---
I recently [blogged about](/blog/2015/5/12/nintendos-hotspot-api) Nintendo Hotspot data and mentioned it could be more usefully consumable in a native mobile app.
![Android Hotspot](/media/blog/android-hotspot.png)
As such, I wrote a small Android app for retrieving this data and displaying it on a Google Map. The app shows nearby hotspots, allows users to also search for other non-local places, and shows information on the venue hosting the zone.
The app is available on the [Play Store](https://play.google.com/store/apps/details?id=net.flyingsparx.spotpassandroid) and its source is published on [GitHub](https://github.com/willwebberley/NZone-finder).

View File

@ -0,0 +1,30 @@
---
date: "2017-03-16T17:00:00Z"
title: Two Year Update
description: "Updating my blog after two years"
tags: [travel, life]
---
I haven't written a post since summer 2015. It's now March 2017 and I thought I'd write an update very briefly covering the last couple of years.
I finished researching and lecturing full-time in the summer of 2015. It felt like the end of an era; I'd spent around a third of my life at the [School of Computer Science and Informatics](http://www.cardiff.ac.uk/computer-science) at [Cardiff University](http://cf.ac.uk), and had experienced time there as an undergraduate through to postgrad and on to full-time staff. However, I felt it was time to move on and to try something new, although I was really pleased to be able to continue working with them on a more casual part-time basis - something that continues to today.
In that summer after leaving full-time work at Cardiff I went [interrailing](http://www.interrail.eu) around Europe with my friend, Dan. It was an amazing experience through which I had a taste of many new European cities and met lots of interesting people. We started by flying out to Berlin, and from there our route took us through Prague, Krakow, Budapest, Bratislava, Vienna, Munich, Koblenz, Luxembourg City, Brussels, Antwerp, and then finished in Amsterdam (which I'd been to before, but always love visiting).
![Interrailing](/media/blog/interrailing.png)
_Some photos from the Interrail trip._
After returning, I moved to London to start a new full-time job with [Chaser](https://www.chaser.io). Having met the founders David and Mark at a previous [Silicon Milkroundabout](https://www.siliconmilkroundabout.com), Chaser was so great to get involved with - I was part of a fab team creating fin-tech software with the goal of helping boost cashflow in small and medium-sized businesses. Working right in the City was fun and totally different to what seemed like a much quieter life in Cardiff. Whilst there, I learned loads more about web-based programming and was able to put some of the data-analysis skills from my PhD to use.
At the end of 2015 I moved back to South Wales to begin a new job at [Simply Do Ideas](https://simplydo.co.uk) as a senior engineer. Again, this was a totally different experience, involving a shift from fin-tech to ed-tech and a move from the relentless busyness of London to the quieter (but no less fun) life of Caerphilly - where our offices were based. Since I headed the technical side of the business, I was able to put my own stamp on the company and the product, and to help decide its future and direction.
![Simply Do team](/media/blog/sdi_bett.jpg)
_Myself and Josh representing Simply Do Ideas at Bett 2017 in London._
In February 2016 I was honoured to be promoted to the Simply Do Ideas board and made the company's Chief Technology Officer. Over the last year, the rest of the team and I have been proud to be part of a company growing highly respected in a really interesting and exciting domain, and we're all very excited about what's to come in the near (and far) future!
I still continue to work with Cardiff University on some research projects and to help out with some of the final-year students there, and I hope to write a little more about this work soon.
I feel so lucky to have experienced so much in such a short time frame - from academic research and teaching, to being a key part of two growth startups, heading a tech company's technology arm, sitting on a board alongside highly respected and successful entrepreneurs and business owners, and getting to meet such a wide range of great people. I feel like I've grown and learned so much - both professionally and personally - from all of my experiences and from everyone I've met along the way.

View File

@ -1,7 +1,8 @@
---
date: "2017-06-22T17:00:00Z"
title: CENode
description: "A library for machine-machine and human-machine conversations, with Cardiff University and IBM"
layout: post
tags: [cenode, javascript, cardiffuniversity, ibm, ita, research]
---
Whilst working on the [ITA Project](http://usukita.com) - a collaborative research programme between the UK MoD and the US Army Research Laboratory - over the last few years, one of my primary areas has been research around controlled natural languages, working with [Cardiff University](http://cf.ac.uk) and [IBM UK](https://www.ibm.com/uk-en)'s [Emerging Technology](https://emerging-technology.co.uk) team to develop CENode.

View File

@ -1,16 +1,16 @@
---
date: "2017-06-26T17:00:00Z"
title: CENode in IoT
description: "Explaining how CENode can be used around the home and for interacting with IoT devices"
layout: post
tags: [cenode, iot, hue, project]
slug: cenode-iot
---
In a [previous note](/notes/2017/06/22/cenode/) I discussed CENode and briefly mentioned its potential for use in interacting with the Internet of Things. I thought I'd add a practical example of how it might be used for this and for 'tasking' other systems.
In a [previous note](/blog/2017/06/22/cenode/) I discussed CENode and briefly mentioned its potential for use in interacting with the Internet of Things. I thought I'd add a practical example of how it might be used for this and for 'tasking' other systems.
I have a few [Philips Hue](http://www2.meethue.com/en-US) bulbs at home, and the Hue Bridge that enables interaction with the bulbs exposes a nice RESTful API. My aim was to get CENode to use this API to control my lights.
A working example of the concepts in this note is available [on GitHub](https://github.com/willwebberley/CENode-IoT) (as a small webapp) and here's a short demo video (which includes a speech-recognition component):
<iframe src="https://player.vimeo.com/video/223169323" width="640" height="480" style="margin:20px auto;display:block; max-width: 100%;" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
A working example of the concepts in this note is available [on GitHub](https://github.com/willwebberley/CENode-IoT) (as a small webapp) and [here's a short demo video](https://player.vimeo.com/video/223169323) (which includes a speech-recognition component):
The first step was to [generate a username for the Bridge](https://developers.meethue.com/documentation/configuration-api#71_create_user), which CENode can use to authenticate requests through the API.
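For a rough idea of what that flow looks like (a sketch only - the Bridge IP address and device name below are placeholders): with the Bridge's link button pressed, a POST to `/api` returns a username, which is then included in the path of subsequent requests.

```js
// 1. With the Bridge's link button pressed, register a username:
fetch('http://192.168.1.2/api', {
  method: 'POST',
  body: JSON.stringify({ devicetype: 'cenode_app#demo' }),
})
  .then((res) => res.json())
  .then(([result]) => {
    const username = result.success.username;
    // 2. Authenticated requests then include the username in the path;
    //    for example, turning off light 1:
    return fetch(`http://192.168.1.2/api/${username}/lights/1/state`, {
      method: 'PUT',
      body: JSON.stringify({ on: false }),
    });
  });
```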

View File

@ -1,7 +1,9 @@
---
date: "2017-07-19T17:00:00Z"
title: "Alexa, ask Sherlock..."
description: "Explaining how CENode can be used to interact with additional IoT devices."
layout: post
tags: [cenode, alexa, iot, project]
slug: cenode-alexa
---
I have recently [posted about CENode](/2017/06/22/cenode/) and how it might be [used in IoT systems](/2017/06/26/cenode-iot/).
@ -10,9 +12,7 @@ Since CENode is partially designed to communicate directly with humans (particul
The [Alexa Voice Service](https://developer.amazon.com/alexa-voice-service) and [Alexa Skills Kit](https://developer.amazon.com/alexa-skills-kit) are great to work with, and it was relatively straightforward to create a skill to communicate with CENode's [RESTful API](https://github.com/willwebberley/CENode/wiki/CEServer-Usage).
The short video below demonstrates this through using an Amazon Echo to interact with a standard, non-modified CENode instance running on [CENode Explorer](http://explorer.cenode.io) that is partly pre-loaded with the "space" scenario used in our main [CENode demo](http://cenode.io/demo/index.html). The rest of the post discusses the implementation and challenges.
<iframe src="https://player.vimeo.com/video/226199106" width="640" height="480" style="margin:20px auto;display:block; max-width: 100%;" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>
[This short video](https://player.vimeo.com/video/226199106) demonstrates this through using an Amazon Echo to interact with a standard, non-modified CENode instance running on [CENode Explorer](http://explorer.cenode.io) that is partly pre-loaded with the "space" scenario used in our main [CENode demo](http://cenode.io/demo/index.html). The rest of the post discusses the implementation and challenges.
Typical Alexa skills are split into ["intents"](https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-interaction-model-reference), which describe the individual ways people might interact with the service. For example, the questions "what is the weather like today?" and "is it going to rain today?" may be two intents of a single weather skill.
@ -34,6 +34,6 @@ Since we only have a single intent, using either 'ask' or 'tell' in the invocati
At this stage, the AWS Lambda function handling the intent makes a standard HTTP POST request to a CENode instance, and the response is passed directly back to the Alexa service for reading out to the user. As such, CENode itself handles all of the error cases and misunderstood inputs, making the Alexa service combined with the Lambda function, in this scenario, very 'thin'.
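For a flavour of just how thin, here's a very rough sketch of the handler shape (the CENode endpoint path, agent name, and slot name are hypothetical placeholders - see the CEServer wiki linked above for the real API):

```js
const https = require('https');

exports.handler = (event, context, callback) => {
  // The utterance captured by the skill's intent slot (slot name assumed).
  const input = event.request.intent.slots.input.value;

  const req = https.request({
    hostname: 'explorer.cenode.io', // wherever the CENode instance lives
    path: '/agent/sentences',       // hypothetical endpoint path
    method: 'POST',
  }, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
      // Hand CENode's reply straight back for Alexa to read out.
      callback(null, {
        version: '1.0',
        response: { outputSpeech: { type: 'PlainText', text: body } },
      });
    });
  });

  req.end(input);
};
```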
<img src="/media/blog/cenode-alexa.png" style="width:100%;max-width:620px;max-height:none;height:auto;">
![CENode and Alexa](/media/blog/cenode-alexa.png)
The skill has not yet been published to the Alexa skills store for general use, but the code for this project, including the Alexa Skills Kit configuration and the AWS Lambda code (written using their Node environment) is [available on GitHub](https://github.com/willwebberley/cenode-alexa).

View File

@ -1,13 +1,16 @@
---
date: "2017-08-18T17:00:00Z"
title: "Hue: Security Lights"
description: "Working with the Philips Hue bridge"
tags: [hue, iot, project, technology]
slug: security-lights
---
A [previous note about Philips Hue bulbs](/notes/2017/06/26/cenode-iot) got me thinking that the API exposed by the bridge might be used to warn if the house lights are left on too late at night, or even if they get turned on at unexpected times - potentially for security.
A [previous note about Philips Hue bulbs](/blog/2017/06/26/cenode-iot) got me thinking that the API exposed by the bridge might be used to warn if the house lights are left on too late at night, or even if they get turned on at unexpected times - potentially for security.
I put together a simple program that periodically checks the status of known Hue bulbs late at night. If any bulbs are discovered to be powered on during such times then an email notification is sent. It runs as a `systemd` service on a Raspberry Pi.
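The core check itself is simple. Here's a minimal JavaScript sketch of the idea (the Bridge address, username, schedule, and notification step are all placeholders for illustration):

```js
// A stand-in HTTP client; any would do (fetch here is from node-fetch).
const fetch = require('node-fetch');

const BRIDGE = 'http://192.168.1.2'; // placeholder Bridge address
const USERNAME = 'replace-with-bridge-username';
const LATE_HOURS = [0, 1, 2, 3, 4]; // assuming "late" means midnight to 5am

// Stand-in for the real email notification step.
const notify = (message) => console.log('ALERT:', message);

async function checkLights() {
  if (!LATE_HOURS.includes(new Date().getHours())) return;
  const res = await fetch(`${BRIDGE}/api/${USERNAME}/lights`);
  const lights = await res.json();
  const on = Object.values(lights).filter((light) => light.state.on);
  if (on.length > 0) notify(`${on.length} bulb(s) still powered on!`);
}

setInterval(checkLights, 10 * 60 * 1000); // check every ten minutes
```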
<img class="small-image" src="/media/blog/security-lights.png">
![Security lights](/media/blog/security-lights.png)
Currently the project is quite basic, but it could be further extended - perhaps to implement ignore lists or to automatically turn off specific sets of bulbs if they are found to be powered on.

View File

@ -1,6 +1,9 @@
---
date: "2019-08-20T17:00:00Z"
title: "Go backends on Now"
description: "Some of the pitfalls using Go as a function language runtime on Now Vercel"
tags: [golang, vercel, technology]
slug: go-now
---
ZEIT's [Now](https://zeit.co/now) service is great for deploying apps and APIs that are able to make use of serverless execution models, and I use it for many of my projects (including this website, at the time of writing).

View File

@ -1,6 +1,9 @@
---
date: "2020-02-02T17:00:00Z"
title: "Kubernetes Cluster: Essentials"
description: "Setting up a Kubernetes cluster from scratch"
tags: [kubernetes, devops, technology]
slug: kube-cluster
---
This note documents the set-up of a k8s cluster from scratch, including ingress and load-balanced TLS support for web applications. It's mainly for myself to revisit and reference later on. The result of this note is not (quite) production-grade, and additional features (e.g. firewalls/logging/backups) should be enabled to improve its robustness.

View File

@ -1,6 +1,9 @@
---
date: "2020-05-23T17:00:00Z"
title: "Command-line bookkeeping in Animal Crossing"
description: "Learning Ledger command-line bookkeeping and accountancy using Animal Crossing."
tags: [ledger, finance, technology]
slug: command-line-bookkeeping-acnh
---
I recently stumbled across [an article](https://www.csun.io/2020/05/17/gnucash-finance.html) on Hacker News discussing the pros of basic personal accounting using [GnuCash](https://www.gnucash.org/) - a free and open-source desktop accounting program. The article was interesting as the data geek in me resonated with the notion of being able to query the information in useful ways, particularly after having used the system for enough time to accumulate enough financial data.

View File

@ -1,9 +1,9 @@
---
year: 2020
month: 10
day: 10
date: "2020-10-10T17:00:00Z"
title: "JS Tidbit: Optional Chaining"
description: "Using the optional chaining JavaScript operator to improve code conciseness."
tags: [javascript, technology]
slug: optional-chaining
---
JavaScript has lots of handy tools for creating concise code and one-liners. One such tool is the optional chaining operator.
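For a quick illustrative example:

```js
const user = { profile: { name: 'Will' } };

// Without optional chaining, each level needs guarding manually:
const city1 = user && user.address && user.address.city; // undefined

// With optional chaining, the expression short-circuits to undefined
// as soon as a link in the chain is null or undefined:
const city2 = user?.address?.city; // undefined
const name = user?.profile?.name;  // 'Will'

// It also works for array elements and method calls:
const firstFriend = user.friends?.[0]; // undefined
const age = user.getAge?.();           // undefined (method doesn't exist)
```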

View File

@ -1,9 +1,9 @@
---
year: 2020
month: 11
day: 20
date: "2020-11-20T17:00:00Z"
title: "JS Tidbit: Nullish Coalescing"
description: "Using the JavaScript Nullish Coalescing operator to improve code conciseness"
tags: [javascript, technology]
slug: nullish-coalescing
---
This short post introduces a useful JavaScript operator to help make your one-liners even more concise.
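As a quick illustration of how it differs from the more familiar `||` operator:

```js
// `??` only falls back when the left-hand side is null or undefined,
// unlike `||`, which also rejects legitimate falsy values like 0 or ''.
const count = 0;

const withOr = count || 10;      // 10 - the 0 was discarded
const withNullish = count ?? 10; // 0  - the 0 is kept

const missing = undefined ?? 'default'; // 'default'
```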

View File

@ -1,9 +1,8 @@
---
year: 2020
month: 12
day: 15
date: "2020-12-15T17:00:00Z"
title: "React Query"
description: "Making use of the React Query library for interacting with web APIs in React apps, and for caching responses"
tags: [javascript, react, webapi, technology]
---
If you write React web apps that interface with a backend web API then definitely consider trying [React Query](https://react-query.tanstack.com).
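To give a flavour of why (a minimal sketch - the endpoint and data shape here are made up):

```jsx
import React from 'react';
import { useQuery } from 'react-query';

function Todos() {
  // React Query fetches, caches, and shares the response under the
  // 'todos' key, handling loading and error states for us.
  const { data, isLoading, error } = useQuery('todos', () =>
    fetch('/api/todos').then((res) => res.json())
  );

  if (isLoading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data.map((todo) => (
        <li key={todo.id}>{todo.title}</li>
      ))}
    </ul>
  );
}
```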

View File

@ -1,9 +1,9 @@
---
year: 2021
month: 1
day: 3
date: "2021-01-03T17:00:00Z"
title: "Scaling serverless apps: some lessons learned"
description: "Potential pitfalls in writing serverless apps for AWS Lambda, Google Cloud Functions and Cloudflare Workers."
tags: [100daystooffload, serverless, aws, lambda, analysis, technology, opinion]
slug: scaling-serverless
---
Building apps on serverless architecture has been a game-changer for me and for developers everywhere, enabling small dev teams to cheaply build and scale services from MVP through to enterprise deployment.

View File

@ -0,0 +1,36 @@
---
date: "2021-01-20T17:00:00Z"
title: "Project Gemini"
description: "An introduction to the Gemini protocol and Gemini Space. Gemini clients, capsules, and search."
tags: [100daystooffload, gemini, technology]
---
Over the past few months I have been trying to use centralised "big tech" social media platforms less and instead immerse myself into the more community-driven "fediverse" of decentralised services that are connected ("federated") using common protocols (e.g. [ActivityPub](https://en.wikipedia.org/wiki/ActivityPub)). If you like, you can follow me on Mastodon ([@wilw@fosstodon.org](https://fosstodon.org/@wilw), recently migrated over from my [old mastodon.social account](https://mastodon.social/@will88)) and Pixelfed ([@wilw@pixelfed.social](https://pixelfed.social/@wilw)).
I've loved spending my time on these platforms - mainly due to the lack of noise and fuss, and more of a focus on sharing relevant content and interesting interactions with likeminded people (though of course this does depend on the [instance you join](https://joinmastodon.org)).
One of the things I've seen talked about more and more is [Gemini](https://gemini.circumlunar.space) - and having learned about it and participated myself - I have come to love the ideas behind it.
Some people will remember the [Gopher protocol](https://en.wikipedia.org/wiki/Gopher_(protocol)) - a lighter alternative to the web that was ultimately sidelined by most of the world in favour of the HTTP-based World Wide Web in the early 90s. The [Gemini protocol](https://en.wikipedia.org/wiki/Gemini_(protocol)) is newer, having started in 2019, but is inspired by Gopher. In particular, it aims to solve some of the problems of the modern HTTP(S) web we know today - around complexity, privacy, and "bloat" - and focuses on providing a graph of usefully connected _content_.
Gemini "sites" (known as "capsules" or "stars"/"starships") - the resources that form [Geminispace](https://en.wikipedia.org/wiki/Gemini_space) - are located using the `gemini://` URL scheme. Servers typically listen on port 1965: a homage to the [NASA's Gemini Project](https://en.wikipedia.org/wiki/Project_Gemini). Gemini text resources are similar to traditional HTML web pages in the sense that they can include links to other resources and provide structure and hierarchy through a markdown-like syntax. All Gemini resources must also be transferred using TLS.
**The image below shows my own capsule (found at [gemini://wilw.capsule.town](gemini://wilw.capsule.town)). If you can't open that link yet then read to the end of this post.**
![My Gem Capsule running in the Amfora client](/media/blog/amfora.png)
However, there are also significant differences. These `.gmi` files (and files with similar extensions served with the `text/gemini` MIME type) cannot include any styling instructions (as HTML often does with CSS), instead leaving the display and rendering of the file up to the client. In addition, whilst images can be served over the protocol, they cannot be included in (and rendered within) Gemini text files like they can in HTML. Similarly, there is no client-side dynamic behaviour for these resources, such as the JavaScript included with most HTML web pages. It's simple: the client just renders whatever is provided by the server, and that's it.
The simplicity of the protocol - without styling, embedded images, client-side scripts, and more - offers a lightweight, ad-free, and content-oriented experience that is also available for low-powered devices and machines on slower networks. There is more of a focus on privacy (servers can't track or fingerprint you beyond knowing your IP address), and the relative "smallness" of it and absence of big-tech presence certainly brings back some of the fun and novelty of the early web as we remember it.
I certainly recommend visiting the [project's website](https://gemini.circumlunar.space) for more useful information.
## Getting involved
Since Gemini resources are not served using HTTP(S), you can't access them using a normal web browser (although you can use HTTP proxies such as [Mozz's Portal](https://portal.mozz.us/gemini/wilw.capsule.town)).
Instead, you'll need a Gemini client. I use [Amfora](https://github.com/makeworld-the-better-one/amfora) on my Mac and the [Gemini Browser](https://apps.apple.com/gb/app/the-gemini-browser/id1514950389) on my phone.
Once you have a client, you can view my own capsule by visiting [gemini://wilw.capsule.town](gemini://wilw.capsule.town). If you're interested in starting your own and want to see how mine is formed, you can view the [project source](https://git.wilw.dev/wilw/gemini-capsule).
I also recommend trying out the Gemini search engine at [gemini://gus.guru](gemini://gus.guru) to discover what else lies in Geminispace.

View File

@ -0,0 +1,15 @@
---
date: "2021-01-29T17:46:00Z"
title: "100 Days to Offload Challenge"
description: "In 2021 I hope to be able to write 100 posts on my blog as part of the #100DaysToOffload Challenge."
tags: [100daystooffload, life]
slug: 100-days-to-offload
---
I know that I've been a bit crap at updating my blog properly and consistently over the past few years. One of my new year's resolutions _this year_ is to get into the habit of writing more, and so [#100DaysToOffload](https://100daystooffload.com) seems like a good opportunity to challenge myself to make sure I do.
The guidelines for, and the ideas behind, the challenge are [on the challenge's website](https://100daystooffload.com). There aren't really any rules, but the essential message is to "Just. Write.". So, I'll do my best before the end of 2021, and given that I've already published two posts this year I'll count this as _number 3_.
I will try to keep things tech-related as much as possible. There's only so much I can write about there, though, so I will probably also include bits from my life: books I read, things I've watched, etc.
If you want to follow along, you can [subscribe to my RSS feed](/rss.xml). If you need an RSS reader, I can definitely recommend [Reeder 5](https://www.reederapp.com), which is available for [macOS](https://itunes.apple.com/app/id1529448980) and [iOS](https://apps.apple.com/app/id1529445840) - it's fab. If you're trying the challenge too, then [let me know](https://fosstodon.org/@wilw) so I can check out your posts!

View File

@ -0,0 +1,33 @@
---
date: "2021-01-30T15:31:00Z"
title: "Out with the Old: Moving to Gitea"
description: "Why and how I moved from using GitHub to a self-hosted Gitea service."
tags: [selfhosted, 100daystooffload, selfhost, gitea, github, analysis, technology, opinion]
slug: moving-to-gitea
---
If you've visited my geminispace ([gemini://wilw.capsule.town](gemini://wilw.capsule.town)) you'll have noticed that I've recently been on a mission to decentralise the every-day tools and services I use, and will understand the reasons why. This post will likely become part of a series of posts in which I talk about taking control and responsibility for my own data.
One of the changes I've made more recently is to move many of my own personal projects (including the [source for this site](https://git.wilw.dev/wilw/wilw.dev)) over to a self-hosted [Gitea](https://gitea.com) service. I chose Gitea personally, but there are many other self-hosted solutions available ([see this post for examples and comparisons](https://www.paritybit.ca/blog/choosing-a-self-hosted-git-service)).
### The "problem" with GitHub
I've been a [GitHub member](https://github.com/willwebberley) for as long as I can remember, and will continue to be so - actively using it in my more professional work and when contributing to other projects. However, I don't think I'm alone in that, although I try to develop things in public and keep many home projects open-source, I **usually** don't do it with the _intention_ of receiving contributions from others. The discoverability on GitHub is great (though some may argue that its size means things can get [a bit "diluted"](https://slashdev.space/posts/2021-01-23-signal-to-noise)), but many of the projects I develop are for my own use - and while anyone is free to take the code and use it as they want, the powerful tools offered by GitHub (and other centralised services) just never get used for these types of projects.
The other thing is that GitHub seems to have gradually become the LinkedIn of the software world, and many people use it as the basis of their CV or portfolio. This is great in that it allows other people and potential employers to get an idea of the kinds of things a developer works on, their coding style, and so on, but there's always a certain feeling of _pressure_ (or sometimes subconscious competitiveness) that people can get on any socially-focused platform.
When Twitter introduced their Fleets feature they mentioned that one of the motivators behind the project was that some people [get a fear of posting tweets](https://blog.twitter.com/en_us/topics/product/2020/introducing-fleets-new-way-to-join-the-conversation.html) when things feel so public. I've seen the same thing with GitHub, in that people feel put off contributing or publishing their own work in public repositories "in case someone sees" - is this a barrier to entry for more introverted developers? Conversely, re-engagement mechanisms - like the contributions graph on each user's profile - may make developers publish just for the sake of it.
None of these things are necessarily problems or wrong (private repos are always an option, for example), but these days it just feels more appropriate to take responsibility for your own data as much as possible - especially when alternatives can provide so much - and it's always good to use and encourage alternative options so that one service doesn't become the expected norm.
### My experience so far
Since migrating many projects over to the smaller "world" that is my own git server, I get the feeling that things are slower ([in a good way](https://jackcheng.com/essays/the-slow-web)) and I have been spending more time curating projects and working on the things I actually want to work on (though many are still private "for now"!).
If you're interested in trying your own self-hosted Gitea server, it's pretty straightforward if you have a VPS (I just used the official Docker images, for which there are instructions [in the documentation](https://docs.gitea.io/en-us/install-with-docker)).
To move existing repositories over, it's as simple as changing the `remote` (or adding a new one) in your local git configuration for the project and then re-pushing. Gitea also includes a migration service to automatically pull repositories through, and can also be set up to mirror other remote repos.
In terms of performance, I've found it quick to use and navigate (certainly faster than GitHub's web interface) on a $10 VPS from Linode that I had anyway and on which I host many other services too.
It's definitely worth a try if this is something you're interested in. [Let me know](https://fosstodon.org/@wilw) how you get on.

View File

@ -0,0 +1,19 @@
---
date: "2021-01-31T18:15:00Z"
title: "Dirty Little Secrets by Jo Spain"
description: "A short review of the murder mystery book Dirty Little Secrets by Jo Spain."
tags: [100daystooffload, book]
slug: dirty-little-secrets
---
Recently I finished reading [Dirty Little Secrets](https://www.goodreads.com/book/show/38120306-dirty-little-secrets). This is the first book I have read by [Jo Spain](https://www.goodreads.com/author/show/14190033.Jo_Spain) and the first time I have known of the author.
![Dirty Little Secrets cover](/media/blog/dirtylittlesecrets.jpg)
The book at first appears to be a typical murder mystery set in a relatively wealthy gated community in Ireland; however, the intricacies of the characters and narrative quickly made it hard to put down. The story begins with the discovery of the long-dead body of the woman who lived at number 4, and continues with the involvement of the detectives as they investigate the strange incident.
The narrative primarily focuses on, and is told from the perspectives of, the neighbours and the police. It becomes clear that everyone - including the detectives - has a hidden background, and the story cleverly intertwines past and present timelines (along with later repeated scenes told from different viewpoints) such that open-ended questions and arcs are often eventually resolved.
I really enjoyed this book, which I listened to as an audiobook, well narrated by Michele Moran. It helps reinforce the reality that everyone has obscured backgrounds or secret parts to them, which can be prematurely forced into the open by external events.
Interestingly, another recent book I read - [The Guest List by Lucy Foley](https://www.goodreads.com/book/show/51933429-the-guest-list) - is a similar murder mystery also set in Ireland (Goodreads' content recommender systems clearly working at their best). Although on paper it is similar (in terms of its location and multiple character-based perspectives), and it is similarly well reviewed by others, I personally didn't really enjoy it. _The Guest List_ is certainly more of a suspenseful "thriller" in the traditional sense (largely given its setting) and its conclusion is probably more shocking, so I am not surprised it received good reviews. However, I just found the story to be a little dull and the characters a bit uninteresting and unrelatable. Each to their own, but I found Jo Spain's storytelling and character development far more compelling.

View File

@ -0,0 +1,37 @@
---
date: "2021-02-01T21:06:00Z"
title: "Why not SQLite?"
description: "An open-ended post about why (or why not) use SQLite in your projects rather than a fully-fledged DBMS server."
tags: [100daystooffload, technology, opinion]
slug: why-not-sqlite
---
If you need a database for your next project, why not first consider if [SQLite](https://sqlite.org) might be a good option? And I don't mean just for getting an MVP off the ground or for small personal systems; I mean for "real" production workloads.
![Why not Sqlite?](/media/blog/sqlite.jpg)
Many people will be quick to jump on this with chimes of "it's not designed for production", but I think it depends on what is actually _meant_ by "production". Sure, it's not the right choice for every scenario - it wouldn't work well in distributed workloads or for services expected to receive a very high volume of traffic - but it has been used successfully in many real-world cases.
What made me feel the need to write this article was seeing this sentence in the [README of the Synapse Docker repo](https://hub.docker.com/r/matrixdotorg/synapse/):
> By default it uses a sqlite database; for production use you should connect it to a separate postgres database. - [matrixdotorg/synapse](https://hub.docker.com/r/matrixdotorg/synapse/)
Don't get me wrong. I totally get its meaning, but at the same time do personal Matrix servers or [home Nextcloud servers](https://help.nextcloud.com/t/nextcloud-and-sqlite/34304) not count as "production"?
[Pieter Levels](https://levels.io) famously used SQLite to help drive revenues from some of his products to [well over six-digit dollar values](https://www.nocsdegree.com/pieter-levels-learn-coding), and SQLite's [own 'appropriate uses' list](https://www.sqlite.org/whentouse.html) explains where it can be useful:
> SQLite works great as the database engine for most low to medium traffic websites (which is to say, most websites) - [sqlite.org](https://www.sqlite.org/whentouse.html)
Even if your site or service does eventually outgrow SQLite (which will be a nice problem to have), your application code will still be using SQL and so it should be relatively easy to migrate to something like [PostgreSQL](https://www.postgresql.org).
As [Paul Graham said](http://paulgraham.com/ds.html), "do things that don't scale".
Of course, it is backed by disk and so is subject to the usual I/O constraints applicable to any file, but nearly all VPS providers offer SSD-backed instances these days and SQLite [claims to be faster than filesystem I/O](https://sqlite.org/fasterthanfs.html) anyway.
It's worth remembering that there can be huge overheads and costs in setting up "production-ready" database servers. You'll need to think about provisioning the instance itself, installation of dependencies, certificates, the usual networking hardening (firewalls, ports, etc.) - and then keeping all of this up-to-date too. Even when using managed database services there are still user roles, authentication and rotating credentials to worry about, along with securely provisioning your applications with the connection strings.
Having all of these things to worry about carries the additional risk of encouraging people to become lazy, or to not have the time needed to make sure everything is done properly - an easy way to accidentally introduce security issues. Plus, if you have multiple environments (e.g. for staging or testing) then these factors, and the associated costs, amplify.
There is also some interesting discussion on the topic in this [Hacker News thread](https://news.ycombinator.com/item?id=23281994) from last year.
I just think it's definitely worth a go before jumping straight into heavier alternatives. It's free, has a [smaller footprint](https://sqlite.org/footprint.html), has easily accessible bindings for many languages, and you can get started in minutes - [all you need is a file](https://sqlite.org/onefile.html).
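To illustrate just how little ceremony is involved, here's a minimal sketch using Node's `better-sqlite3` package (just one of many available bindings):

```js
// yarn add better-sqlite3
const Database = require('better-sqlite3');

// This single line creates (or opens) the database: one file on disk.
const db = new Database('app.db');

db.exec('CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)');
db.prepare('INSERT INTO users (name) VALUES (?)').run('Will');

const users = db.prepare('SELECT * FROM users').all();
console.log(users); // [ { id: 1, name: 'Will' } ]
```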

View File

@ -0,0 +1,30 @@
---
date: "2021-02-02T20:31:00Z"
title: "Blogging for Devs"
description: "'Blogging for Devs' is an excellent course by Monica Lent for gaining confidence in writing about technology, growing your audience, and blog strategy."
tags: [100daystooffload, life]
---
A few months ago I discovered [Blogging for Devs](https://bloggingfordevs.com) - I think through [Product Hunt](https://www.producthunt.com/posts/blogging-for-devs) when it made it to #1 Product of the Day back in August last year.
At the time, blogging was something I had been thinking about quite a lot. I actively followed several other blogs - both from people I know and from others in the tech community - and it was clear that, in addition to producing content that was interesting for others to read, writing was something these bloggers actually enjoyed and found valuable for their own learning and engagement with the community.
I have also always enjoyed writing (you have to if you're ever involved in research!). I was still posting things occasionally, and had been doing so for several years, but blogging had just never really become part of my normal routine. It was certainly something I wanted to do more of - to write, to engage more with likeminded people, and for all the other personal and professional benefits associated with consistent and frequent writing - and so this was clearly a habit I needed to form.
![Blogging for Devs website](/media/blog/bloggingfordevs.png)
Blogging for Devs is a course and newsletter created by [Monica Lent](https://monicalent.com) and, with all of this running through my head, I signed up almost straight away.
I don't want to give away too much about Monica's course or content (it's free to [sign up yourself!](https://bloggingfordevs.com)), but one thing I found really valuable actually happened right at the start of the course. After I signed up I received an automated email asking _why_ I had chosen to sign up and what I wanted to learn. Of course, I know this is largely to help Monica shape her course and to get an understanding of people's needs, but I actually found it a super-helpful prompt for self-reflection.
Why _didn't_ I blog more? What was blocking me, even though it was something I actively wanted to do? After a while of thinking, it boiled down to one main thing: my **confidence** - in particular, a fear of what people would think if they read my writing (especially if they knew me!) and a worry about writing about things no-one is actually interested in ("why would anyone want to read this?"). I summarised this in a reply to Monica's email, and she got back to me with a nice personal response not long after.
The course covers lots of topics - from SEO and branding through to actual blog content. However, my issue was still very much the whole confidence thing. One thing that became clear to me during the course is that the most important step in getting over that barrier, and then forming a habit - whether it's getting up early, doing more exercise, or writing blog posts - is just to **start doing it**.
And I don't mean tomorrow or next week, I mean **today**. Just pick something to write about. If you're just getting started it can be a quick post introducing yourself ([WriteFreely](https://writefreely.org) is a great platform if you need one). If you've already got something going and want to write more (like me) then write a short post about something you've learned today - tech or not. The important thing is just to start doing it.
Of course, not everything you post will be enjoyed by everyone, but that's OK. It's not always solely about your audience; you're doing it for yourself too, remember.
And also remember to sign up to [Blogging for Devs](https://bloggingfordevs.com) today too. It's a fantastic course. If you look back at my [writing history](/blog) you'll notice the difference it's made for me: I've blogged much more consistently and effectively since taking the course, and I'm still working through some of the content even today.
Even if you're already a seasoned blogger I'm sure you'll pick up some extra tips and helpful insights, and the Blogging for Devs website also has a great [Trends section](https://bloggingfordevs.com/trends) to help you discover new blogs to follow.

View File

@ -0,0 +1,65 @@
---
date: "2021-02-03T22:16:00Z"
title: "RSS: The Rise and Fall... and Rise Again"
description: "An opinion piece on RSS, its popularity over the last couple of decades, and how it can make a resurgence again."
tags: [100daystooffload, technology, opinion]
slug: rss-rise-fall-rise
---
Many people would consider RSS - Really Simple Syndication - to be a relic of the past. However I think it has been making a comeback.
RSS is a mechanism by which people can automatically receive updates from individual websites, similar to how you might follow another user on a social networking service. Software known as RSS _readers_ can be used to subscribe to RSS _feeds_ in order to receive these updates. As new content (e.g. a blog post) is published to an RSS-enabled website, its feed is updated and your RSS reader will show the new post the next time it refreshes. Many RSS readers have an interface similar to an email client, with read/unread states, folders, favourites, and more.
## The rise
RSS was [first released](https://en.wikipedia.org/wiki/RSS) in early 1999, and it steadily gained popularity amongst content producers and consumers, with adoption by media outlets and software implementations making their way into early Internet Explorer and Firefox versions, amongst others. These were the days before the "real" [Web 2.0](https://en.wikipedia.org/wiki/Web_2.0) hit, when websites were very much more like silos of information. Tools like RSS were powerful then because they enabled the easy _aggregation_ of information from multiple sources.
Not too long after this (Web 2.0 'began' in the mid-2000s), and during the years ever since, mainstream social networks became ubiquitous. Many people flock(ed) to these as a way to share, and to subscribe (by following others) to receive updates in real time, several times a day, from lots of different people and organisations. These services enabled features far beyond aggregation - easy sharing, rating (e.g. likes), and commenting - such that today they have become the primary means of sharing and receiving information and news for many people.
## The fall(?)
At that time RSS was still very much "a thing" for many people (though the [discontinuation of the hugely popular Google Reader in 2013](https://en.wikipedia.org/wiki/Google_Reader#Discontinuation) was a bit of a bummer for these communities). However, new people joining the web scene were now far more likely to instead engage with these extremely well-funded, well-marketed, and _centralised_ social platforms - [perfectly engineered to be addictive](https://www.thesocialdilemma.com), entirely driven and propagated by [FOMO](https://en.wikipedia.org/wiki/Fear_of_missing_out), and focused on content-sharing (even if the content is often [misinformation](https://www.theguardian.com/technology/2021/jan/30/facebook-letting-fake-news-spreaders-profit-investigators-claim)) - where _you_ are the product, rather than spend the time researching and subscribing to individual RSS feeds.
To some commentators in this space the concept behind all of these social platforms is known as the _[fast web](https://jackcheng.com/essays/the-slow-web/#the-fast-web)_ - a web that tells you when and what information to consume rather than letting you make that decision for yourself. Facebook, Twitter, Instagram, and others all started as just a _chronological_ timeline of interesting content from friends and family. On all of these services today the "algorithm" determines what (and who) goes in your timeline, and it constantly learns what to feed you - and when - in order to get those few extra minutes from you each day. This is literally its business model.
Twitter's "favouriting" mechanism, which used to be an innocent bookmarking tool, is now essentially a game of __retweet roulette__ in which the algorithm will every now and again choose to include your "bookmarks" (not just retweets) in the feeds of people who follow you. If that's not anxiety-inducing or user-hostile then I don't know what is!
Of course this is something Facebook has done for a while too, except perhaps in a more sinister way - such as implying [a user has liked something when they haven't at all](https://www.baekdal.com/thoughts/facebook-graph-search-privacy-woes).
Other social networking tools can be more user-friendly. For example, the open-source [Mastodon](https://en.wikipedia.org/wiki/Mastodon_(software)) software powers distributed social networks that aren't fuelled by addiction and instead give you more control over what you receive and where your posts go. However, these tools still have some way to go before becoming anywhere near mainstream.
I want to caveat some of the above: I obviously don't think any of this is the fault of the individual. These social platforms are fantastically easy places to set up a web presence. Creating an Instagram, Facebook, or TikTok account for you (or your business) takes mere seconds. Within a minute you can have your profile set up, be following a dozen people, and already be getting engagement and "reactions" from others (remember those "Your friend, X, is now on Instagram!" type notifications?).
With all of this power and efficiency at their fingertips, it's no wonder that people don't create their own personal websites anymore, or feel the need to actively keep up with other such sites. What's the point in re-inventing the wheel when I can easily create a Facebook page for myself that includes an inbuilt blog "feed", a space for links, photos, and more? And it's "free"! The barrier to creating a self-owned personal space on the internet is considered too high for most people, and is probably still seen as "geeky" - even if it does come with all the benefits of privacy and control.
And I'm not saying that self-owned spaces, RSS, and that whole ecosystem are related, or opposed, to mainstream social media; more that the comparison is a useful way to contrast different ways of accessing and disseminating information, and the level of control one has over this.
This probably feels like I'm going way off-piste - and I sort of have - but my key point here is that for several years the concept of RSS has evaporated from popular knowledge because people haven't _needed_ it, either as a tool for receiving _or_ for disseminating information. Ask your non-tech friends and family if they've heard of RSS (and know what it is, for a bonus point) - I bet the positive response rate will be low in most cases, especially amongst younger respondents.
Also, I don't think this is solely the fault of the social giants. Online media outlets - which would have relied on RSS for years before online social media became more mainstream - now often completely ignore it or treat it as a second-class citizen.
The [BBC News website](https://www.bbc.co.uk/news) happily displays large friendly icons for Facebook, Twitter, and the like, but no mention of RSS (try `ctrl-F`). In fact, you'll probably need to search the web for "bbc rss" in order to find the RSS feeds that are [listed on a page that hasn't been updated for over a decade](https://www.bbc.co.uk/news/10628494) and which still lists IE7 and the long-discontinued Google Reader as sensible options (though ironically I suppose this does indicate the stability and robustness of the RSS system).
## The rise again
Anyway, all that sounds a bit doom and gloom, but I definitely think we are starting to see a shift in people's attitude towards - and, importantly, trust in - these big tech companies. Facebook's recent attitude towards information collection (and subsequent sharing) has [hit mainstream headlines](https://www.independent.co.uk/life-style/gadgets-and-tech/facebook-update-apple-privacy-ads-b1795916.html) and everyone must have seen [WhatsApp's popup about data sharing](https://www.techradar.com/news/whatsapps-new-privacy-policy-requires-you-to-share-data-with-facebook). Too much uncertainty undermines the trust in these platforms, and people have understandably sought out other options. A few weeks ago Telegram reported [25 million new users within 72 hours](https://www.androidpolice.com/2021/01/12/telegram-adds-25-million-new-users-in-just-72-hours-as-rival-apps-falter) as a result of these policy "changes".
My parents aren't really tech-aware at all but even they were telling me last week on a video call about this "new app Signal" they had downloaded and begun to use with their friends - without any of my input.
I'm not sure what it is, but people seem to _care_ more about their data these days - whether that's because of GDPR, the fact that coronavirus means people aren't endlessly scrolling through social feeds on their daily commutes anymore, something else, or a mixture of everything. And that extends to being more picky about the information they receive, too.
Either way, I've noticed more and more [posts like this](https://atthis.link/blog/2021/rss.html) (and the subsequent [reactions and discussions](https://news.ycombinator.com/item?id=26014344)) recently, and the [#100DaysToOffload](https://100daystooffload.com) movement has brought about a surge in people - myself included, really - creating their own longer-form content, for which RSS is a perfect distribution mechanism.
I think we're on the brink of a general - but real - change in people's attitude towards their data, and towards the time they choose to give to these now lesser-trusted platforms. It is our responsibility to help educate people about the alternative options so that those around us can make their own decisions. Whilst I am relatively new to RSS in the grand scheme of things (having only really started properly engaging with it about a year ago), it already makes me feel more in control of what I view, and when.
Whilst this concept doesn't need to be limited to RSS, it's a great starting point as it's easy to understand. It "feels" friendly, and it helps power connections to the decentralised and [small web](https://ar.al/2020/08/07/what-is-the-small-web/).
As a concept, it has no business model. Of course you can pay for the software you use, and websites can make money through ads, but at least you have a _choice_ regarding who you subscribe to and the software you use to do it ([and there are lots of choices](https://en.wikipedia.org/wiki/Comparison_of_feed_aggregators)). You aren't tied into anything, and it respects your privacy - you don't need to "sign up" or provide your details, and sites don't know that _you_ personally have subscribed.
RSS may be age-old, but it is an excellent way to still get the information you need as you begin to use mainstream social media less, and - although it doesn't need to be slow in itself - it is a fantastic tool to combine with the growing and user-respecting world of the [slow web](https://jackcheng.com/essays/the-slow-web#timely-vs-real-time), in which timeliness (where you're in control) is far more important than "real-time".
---
### Edit
I've received some replies to this post about the lack of mentions of the [Atom standard](https://en.wikipedia.org/wiki/Atom_(Web_standard)) and podcasts. RSS certainly is (and has been) a fantastic way to subscribe to podcasts; its flexibility and ease of use have made it a great tool for both content creators and consumers, and have helped to build the ecosystem of podcast apps and services we see today. And of course, there are other very useful distribution mechanisms and standards available for distributing information, such as Atom. This post was focused more on contrasting this family of systems with what many people may consider "mainstream" services, and how the wide adoption of the latter has perhaps had an effect on the former.

View File

@ -0,0 +1,82 @@
---
date: "2021-02-05T23:46:00Z"
title: "React State Management with Zustand"
description: "How to manage your JavaScript React app's global state using the zustand library."
tags: [100daystooffload, technology, javascript, react]
slug: react-state-zustand
---
## React state
React state management is what gives the library its reactiveness. It's what makes it so easy to build performant data-driven applications that dynamically update based on the underlying data. In this example the app would automatically update the calculation result as the user types in the input boxes:
```jsx
import React, { useState } from 'react';
function MultiplicationCalculator() {
const [number1, setNumber1] = useState(0);
const [number2, setNumber2] = useState(0);
return ( <>
<input value={number1} onChange={e => setNumber1(parseInt(e.target.value))} />
<input value={number2} onChange={e => setNumber2(parseInt(e.target.value))} />
<p>The result is {number1 * number2}.</p>
</> );
}
```
![The resultant React app, showing two text inputs and a result line](/media/blog/zustand1.png)
The entire function re-runs on each state change (via the `setNumber1` and `setNumber2` functions) in order to reactively update the result text. The multiplication itself could be calculated in a `useEffect`, but it is simpler to look at it as shown.
This is totally fine for many apps; however, it quickly becomes unmanageable when you need to share state (e.g. `number1`) between this component and another - and ensure that a state change in the former is reflected in the latter - whether it's an ancestor, descendant, or a more distant component. Of course, you can pass the state variables (and the associated `setState` functions) from a parent down as `props` to child components, but as soon as you're doing this more than a handful of times, or in cases where state needs to be shared across distant components, it quickly becomes hard to maintain or understand.
An example of shared state might be to store the details about the currently logged-in user in an app. A navigation bar component would need to know about the user state to show a link to the correct profile page, and another component may need access to the same state in order to allow the user to change their name.
## Context and Redux
This is by no means a new problem. Many of these issues are solved using React's [Context API](https://reactjs.org/docs/context.html), and there are also libraries like Redux that are useful in more complex scenarios - though Redux is much more opinionated and involves a fair bit of extra code that may be overkill in many apps. Adding just a small piece of state (e.g. a new text input), and the ability to alter it, involves updating reducers, creating an action, dispatchers, and wiring things through to your components using `connect`, `mapStateToProps`, and `mapDispatchToProps`. Plus, you'll need the relevant provider higher up the tree.
Redux is certainly a fantastic library, however, and I use it in many apps. [This post](https://changelog.com/posts/when-and-when-not-to-reach-for-redux) is useful and discusses the cases in which you may (or may not) want to use Redux.
## Zustand
In this post I want to talk about another option that is perhaps quicker and easier to use, especially for those newer to React (though it's also great for more seasoned React developers) - [zustand](https://github.com/pmndrs/zustand). Not only is this the German word for "state", it's also a nice and succinct state-management library for React.
The zustand library is pretty concise, so you shouldn't need to add too much extra code. To get started just add it as a dependency to your project (e.g. `yarn add zustand`). Now let's rewrite the earlier multiplication example but using zustand.
First, define a _store_ for your app. This will contain all of the values you want to keep in your global state, as well as the functions that allow those values to change (_mutators_). In our store, we'll extract out the state for `number1` and `number2` we used in our component from earlier, and the appropriate update functions (e.g. `setNumber1`), into the store:
```jsx
import React from 'react';
import create from 'zustand';
const useStore = create((set) => ({
number1: 0,
number2: 0,
setNumber1: (x) => set(() => ({ number1: x })),
setNumber2: (x) => set(() => ({ number2: x })),
}));
```
Now - in the same file - we can go ahead and rewrite our component such that it now uses this store instead of its own local state:
```jsx
function MultiplicationCalculator() {
const { number1, number2, setNumber1, setNumber2 } = useStore();
return ( <>
<input value={number1} onChange={e => setNumber1(parseInt(e.target.value))} />
<input value={number2} onChange={e => setNumber2(parseInt(e.target.value))} />
<p>The result is {number1 * number2}.</p>
</> );
}
```
That's it - we now have a React app that uses zustand. As before, the component function runs each time the store's state changes, and zustand ensures things are kept up-to-date.
In the example above the two blocks of code are in the same file. However, the power of zustand becomes particularly useful when the store is shared amongst several components across different parts of your app to provide "global state".
For example, the `useStore` variable could be declared and exported from a file named `store.js` somewhere in your app's file structure. Then, when a component needs to access its variables or mutator functions, it just needs to import it (e.g. `import useStore from 'path/to/store'`) and then use [object destructuring](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment) (as in the `MultiplicationCalculator` component above) to pull out the needed variables and functions, as sketched below.
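As a minimal sketch of that split (the file names here are just examples):

```jsx
// store.js
import create from 'zustand';

const useStore = create((set) => ({
  number1: 0,
  number2: 0,
  setNumber1: (x) => set(() => ({ number1: x })),
  setNumber2: (x) => set(() => ({ number2: x })),
}));

export default useStore;
```

```jsx
// Result.js - a component anywhere else in the app
import React from 'react';
import useStore from './store';

function Result() {
  const { number1, number2 } = useStore();
  return <p>The result is {number1 * number2}.</p>;
}
```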
It's worth checking out [the documentation](https://github.com/pmndrs/zustand) since zustand is super flexible and can be used in ways that help improve performance, such as taking advantage of memoizing and state slicing. It also makes what can be tricky in other such libraries - e.g. asynchronous state updates - trivial.
If you've already got an established app using another state management system it may not be worth migrating everything over. But give zustand a go in your next project if you're looking for straightforward, yet powerful, state management.

View File

@ -0,0 +1,43 @@
---
date: "2021-02-06T19:34:00Z"
title: "Add icing to your websites using pattern.css"
description: "How to use the great pattern.css library to easily add subtle patterns and backgrounds to your websites and web apps."
tags: [100daystooffload, technology, css]
slug: pattern-css
---
Shapes and patterns can be leveraged in user interfaces to guide your users, draw attention to content, lend weight or emphasis, or just for aesthetics and decoration.
Layout and styling on the web is typically handled using CSS; however, mastering CSS to the level where you can confidently take advantage of its more advanced features is definitely not easy. I've been developing for the web almost full-time for a decade and I'm still pretty crap when it comes to doing complex stuff with CSS.
That said, some people have done some [mindblowing things using CSS and just a single `div` element](https://a.singlediv.com).
The [Subtle Patterns](https://www.toptal.com/designers/subtlepatterns) website has been around for years - it's a great resource for discovering nice textures and backgrounds for your creations. There are also some nice CSS libraries that let you describe patterns programmatically, which comes with the added performance advantage of letting the browser do the rendering work (browsers are pretty performant when it comes to CSS).
[pattern.css](https://bansal.io/pattern-css) (created by [bansal-io](https://github.com/bansal-io)) is a great little CSS-only library for adding simple, but effective, patterns to your websites - for example, backgrounds for elements, false "block" shadows, or even patterns within the text itself. All that's needed is a few extra classes on your elements, and the small (less than 1KB when gzipped) library will do the rest.
To get started, you can add the library to your project using your normal JavaScript package manager (e.g. `yarn add pattern.css`). Then either include the CSS file in your HTML or, if you're using React or another framework/builder that allows you to import CSS directly, you can:
```jsx
import 'pattern.css/dist/pattern.css'
```
Once that's done it's just a matter of [adding classes](https://bansal.io/pattern-css#usage) to your markup. All the `pattern.css` classes start with `pattern`, followed by the type of pattern (e.g. `-diagonal-stripes`), followed by the "size" of the pattern (e.g. `-sm`).
For example, to build a `div` with a chunky zig-zag patterned background you just need to use:
```html
<div class="pattern-zigzag-lg">
...
</div>
```
To change the colour of the pattern, just set a `color` style on the element. If the element also has a `background-color` then this will show through the transparent parts:
```html
<div class="pattern-diagonal-stripes-md" style="color: red; backgroundColor: yellow">
...
</div>
```
Have a read through [the documentation](https://bansal.io/pattern-css#hero) for examples and further pattern types. It's quick to get the hang of, and far easier than hand-rolling patterns if - like me - you find some of the complexities of CSS hard to get your head around!

View File

@ -0,0 +1,53 @@
---
date: "2021-02-07T19:31:00Z"
title: "Using Monica to Help Manage your Personal Relationships"
description: "Why you need a 'personal relationship manager' and how to set-up Monica on your own server."
tags: [100daystooffload, technology, life, selfhost]
slug: monica-personal-crm
---
Many people no longer feel comfortable using Facebook. Whether you were never a member to begin with, you've had an account but chosen to remove yourself from the service, or you've simply tried to start using it less - either way, it's no surprise, given the way that Facebook, across its family of products (including Instagram and WhatsApp), operates in terms of your own data and time.
This is a huge subject on its own and it's really up to everyone to make their own mind up when it comes to their own stance. It's been widely discussed pretty much everywhere, and there are [loads of resources available on this handy website](https://www.quitfacebook.org) if you're interested in understanding more about what goes on behind the scenes on these platforms.
## Staying in the loop
Anyway, this isn't another post about Facebook, but one of the things that _is_ useful about that particular platform is its birthday reminder system, in which you automatically receive an email from Facebook if it happens to be one of your friends' birthdays that day. In itself, this is of course simply a mechanism to try and get you to re-engage with the platform - such as to send your friend a direct message on Messenger or to post something on their timeline.
However, it is nice to get messages on your birthday, and nice to imagine that someone you only speak to a couple of times a year has the headspace to _remember_ that today is your special day. Even though you both know that it's because Facebook has sent a reminder with an easy CTA.
The good news is that there are still lots of services that help you remember key events without needing to rely on Facebook. Of course you can set up calendars (many mail providers have built-in calendar facilities that can sync to your client with CalDAV), but you may want to remember other things too - such as anniversaries, friends' pets' names, that time you helped your cousin move house, and more. Quickly, all of this info ends up distributed between a number of systems and becomes hard to look up and manage (unless you're super organised).
## Monica: the "Personal Relationship Manager"
What we need is a personal _CRM_ ("customer relationship manager"), which can do all of this for us. And thankfully such systems exist - such as [Monica](https://www.monicahq.com).
> Monica is the single best investment you can make to have better relationships. - [monicahq.com](https://www.monicahq.com/pricing).
Monica is a piece of [open-source software](https://github.com/monicahq/monica) that can handle all of this for you as a "Personal Relationship Manager" (in their words) - and much more. You can sign up on [their website](https://app.monicahq.com/register) and pay a small ongoing subscription fee to cover the server costs. Alternatively, you can easily self-host it on your own server.
![The Monica dashboard homepage](/media/blog/monica.png "This is what my Monica homepage looks like")
I've been using it (the self-hosted option) for some time now, and [love its features](https://github.com/monicahq/monica#features). I get automatic email notifications in-time to remind me about key events, I can keep track of the birthdays of my friends' kids, remember gifts I have been given, friend life events, jobs, and more.
Although I still want to spend some further time setting it up and adding more details about the people I know, it already helps me to include richer information when I message friends and family, and to remember the things I really should remember anyway.
Monica looks great, works fine on my phone web browser as well as my desktop browser, and also has an API that allows you to build your own workflows or to connect it to other services.
If you find yourself forgetting birthdays and important information about friends and family, or if you just want to log relationships more effectively, then I certainly recommend giving it a go.
## How to self-host Monica
I host Monica on a relatively small VPS. It's lightweight and it happily runs alongside a few other services.
I usually prefer using Docker to host things like this as it helps keep things isolated when running multiple services on the same machine. I have an Nginx container (with several virtual hosts) that proxies requests through to the appropriate services.
The Monica team kindly maintain an official [Docker image](https://hub.docker.com/_/monica). I went for the Apache version (as I already have Nginx in place for TLS, etc.), for which there is an example [Docker Compose](https://docs.docker.com/compose) config available on the official Monica image page. The documentation also explains how to get your first user set up.
One of the main advantages of Monica is its ability to keep you updated without you needing to login and check-up on things. It does this by sending you emails, and for this to work you'll need to add a bit of extra configuration to your Docker Compose file, as [described on this page](https://github.com/monicahq/monica/blob/master/docs/installation/mail.md). Just add the extra variables to your `environment` section in `docker-compose.yml`. The article mentions Amazon SES, however you can use your own mail provider's SMTP/IMAP server settings here (e.g. [Mailgun](https://www.mailgun.com)).
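For illustration, here's a minimal sketch of that `environment` section - the variable names follow the mail documentation linked above but may differ between Monica versions, and all of the values are placeholders for your own provider's details:
```yml
services:
  app:
    image: monica
    environment:
      # Placeholder SMTP details - substitute your own provider's values
      - MAIL_DRIVER=smtp
      - MAIL_HOST=smtp.example.com
      - MAIL_PORT=587
      - MAIL_USERNAME=monica@example.com
      - MAIL_PASSWORD=changeme
      - MAIL_ENCRYPTION=tls
      - MAIL_FROM_ADDRESS=monica@example.com
      - MAIL_FROM_NAME=Monica
```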
If you plan to use [Linode](https://www.linode.com) to host your Monica service (which is a great choice), you may just need to open up a quick support ticket with them so that they can make sure your account is allowed to send traffic on standard email ports (e.g. 25 and 587), which they sometimes restrict on new accounts to help fight spam.
## Contribute
If you want to contribute to this great open-source project, then there are [guides available on GitHub](https://github.com/monicahq/monica#contribute).

View File

@ -0,0 +1,88 @@
---
date: "2021-02-10T22:11:00Z"
title: "SSH Jumping and Bastion Hosts"
description: "Working with secure network architectures with bastion hosts and SSH jumping."
tags: [100daystooffload, technology, security]
image: header-ssh-jumping-bastion-hosts.png
imageDescription: AI generated pixel art of astronauts and cats jumping over computers.
slug: ssh-jumping-bastion-hosts
---
For many small or personal services running on a VPS in the cloud, administration is often done by connecting directly to the server via SSH. Such servers should be hardened: run a firewall, use an SSHd config that denies root and password-based login, install [fail2ban](https://www.fail2ban.org), and follow other good practices.
Linode has some [great getting-started guides](https://www.linode.com/docs/guides/securing-your-server) on the essentials of securing your server.
## Protecting sensitive servers
In more complex production scenarios heightened security can be achieved by isolating application (webapp, API, database, etc.) servers from external internet traffic. This is usually done by placing these "sensitive/protected" servers in a private [subnet](https://en.wikipedia.org/wiki/Subnetwork), without direct internet-facing network interfaces. This means that the server is not reachable from the outside world.
In this type of scenario, outbound traffic from the sensitive server can be routed through a [NAT gateway](https://en.wikipedia.org/wiki/Network_address_translation) and inbound traffic can be funnelled through a [load-balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing)) or reverse proxy server. In both these cases the NAT gateway and load-balancer would exist in public subnets (with internet-facing network interfaces) and can reach the sensitive server through private network interfaces in order to forward requests (e.g. web traffic).
![Diagram of public and private subnets, with a NAT gateway and load balancer](/media/blog/ssh1.png)
Now the question is how one _does_ manage the services running on the protected server, since it is no longer directly reachable. Traditionally this is done by introducing _bastion hosts_ into your network.
## Bastion hosts
[Bastion hosts](https://en.wikipedia.org/wiki/Bastion_host) - like the NAT gateway and load balancers - sit in the public subnet and so are available to the outside world. They typically accept SSH connections, from which one can "jump" through to the protected servers via the bastion's private network interface.
![Adding a bastion host to the cloud infrastructure](/media/blog/ssh2.png)
Bastion hosts should be hardened as much as possible (with firewalls and other network rules), and should run a limited set of services - in many cases simply SSHd.
This server then enables administrators to connect through to the protected servers in order to carry out maintenance, upgrades, or other tasks.
## Connecting through a bastion host
SSH port-forwarding is a widely-used concept, in which a secure tunnel to a service running on the protected server is opened via a port on the local machine (using `ssh`'s `-L` option).
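As a quick sketch (the hostnames and ports here are hypothetical), forwarding a Postgres database on the protected server to a local port via the bastion might look like:
```shell
# Expose the protected server's Postgres (5432) on localhost:5433,
# tunnelled through the bastion
ssh -L 5433:protected.company-internal.com:5432 bastion.company.com
```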
Another option is to use proxy _jumping_ (with the `-J` option):
```shell
ssh -J bastion.company.com protected.company-internal.com
```
In this example the user connects via the bastion through to the protected server at `protected.company-internal.com`. Since you should be using key-based authentication to connect, you may also need to specify the private key path (with the `-i` option), and also tell SSH to forward your agent to the bastion (using `-A`) so it can continue the connection.
All of these options make for a long command that's hard to remember each time. You could wrap it in a script, but it's probably easier to use SSH's own local configuration. To do so, add the following to your local `~/.ssh/config` file:
```
Host *.company-internal.com
  ProxyJump bastion.company.com
  User username
  IdentityFile /home/username/.ssh/identity
```
With that in place you can now simply run the following when you want to connect to the protected server:
```shell
ssh protected.company-internal.com
```
_Note: depending on your system you may need to add the key to your local agent first, but you just need to do this once per login (`ssh-add ~/.ssh/identity`)_.
## A note on DNS
Generally I would probably avoid assigning public domain names to a bastion host, as this may invite unwanted attention and traffic (even if the host is secured). Instead you can just include the IP address directly in the `ProxyJump` line of `.ssh/config`. _I used domain names in the examples above to make the process clearer_.
Also, in the above example I refer to the `company-internal.com` domain for use within the private network. This domain should only be resolvable by members of the private network - either by using an internal DNS server or by simply modifying `/etc/hosts` on the bastion. Alternatively you can just use the private IP address for the protected server on the `Host` line of `.ssh/config`.
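For example, a `.ssh/config` entry using only IP addresses (both made up here) might look like the following, after which `ssh protected` would connect you straight through:
```
Host protected
  HostName 10.0.2.20
  ProxyJump 203.0.113.10
```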
## Additional notes
In setups like this you may also want to consider the following:
### Private keys
Don't provision these on your bastion host. Instead use agent forwarding (as described above). You'll need to add your public keys to both the bastion and protected servers.
### Restrict source network
For extra security you can restrict SSH connections to your bastion only from trusted networks (e.g. your office network or a VPN).
Similarly, restrict protected servers such that they only accept SSH traffic from the bastion, and not from other servers on the network.
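As a sketch of the idea using `ufw` (the address ranges here are hypothetical):
```shell
# On the bastion: only accept SSH from the office/VPN range
ufw allow from 203.0.113.0/24 to any port 22 proto tcp

# On the protected server: only accept SSH from the bastion's private IP
ufw allow from 10.0.1.10 to any port 22 proto tcp
```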
### Make use of managed services if/when possible
Where possible, make use of managed services. For example, if you use AWS then you can combine VPCs, subnets, NAT gateways, elastic load balancing, security groups, Route 53, and other services to secure your hosts and control your network. You can of course set this up on your own servers without relying on managed services.
Either way, I hope this post has helped shed light on some simple ways to improve network security for your applications and services.

View File

@ -0,0 +1,29 @@
---
date: "2021-02-13T21:17:00Z"
title: "The Midnight Library by Matt Haig"
description: "Some thoughts on the novel 'The Midnight Library' by Matt Haig, and my personal takeaways."
tags: [100daystooffload, book]
slug: midnight-library
---
Last week I read [The Midnight Library](https://www.goodreads.com/book/show/52578297-the-midnight-library) by [Matt Haig](https://www.goodreads.com/author/show/76360.Matt_Haig). The book won the 2020 [Goodreads Choice Award for Fiction](https://www.goodreads.com/award/show/21332-goodreads-choice-award).
![The Midnight Library cover](/media/blog/midnight-library.jpg)
"Set" in Bedford, England, the story starts by introducing the main character - Nora Seed - who feels completely down. She is depressed and thinks that she has nothing further to contribute to her own life or to the lives of the few people around her.
On the day she decides she no longer wants to live, she is fired from her job, her cat dies, and other events occur which help cement her decision. However, as she dies she is transported to a place that exists between life and death: The Midnight Library.
Here she is presented with the infinite number of books that make up the lives that could have been had she made different choices in the past - whether those were big or small (such as choosing whether to have a tea or coffee) or something more obviously impactful. Either way, they can contribute to a complete change in life direction.
She has the option to begin living these different lives by considering the _regrets_ she has about the decisions she made in her root life. As she "visits" her other lives she reflects on the decisions that led her to that point, and also realises the power in the choices she makes in their ability to also drastically affect the lives of those around her.
Whilst I feel that the book was perhaps not as deep as it could have been, I think this was an intentional choice by the author: it leaves the story as more of a canvas onto which readers can project their own reflections.
One of my key takeaways was that whilst the decisions one person makes may benefit them, they may not be beneficial for everyone. It is of course important to consider your own happiness and wellbeing, as well as that of those affected by your decisions, but you also need to live and experience the variety of life without feeling paranoid about making the choices you feel are right.
The book made me reflect on some of my own decisions. I know the grass isn't always greener but that the choices are always there to be made if I want or need a change - it is never too late.
The premise of the story sounds like it could be depressing, however I did not find that at all. In many ways, it was the complete opposite: having an understanding of the power in your choices helps you realise that even when things feel at their worst, you are not powerless. There is always something you can do to make a change and choices to be made to gear yourself towards where you need to be.
We all have regrets in our own lives, and decisions we wish we had (or hadn't) made, but these should not be dwelled upon or worried about. Instead we can treat them as the useful tools they are, helping us make different or better decisions as we look forward and continue into the future.

View File

@ -0,0 +1,33 @@
---
date: "2021-02-18T21:01:00Z"
title: "A Year Without Answering my Phone"
description: "Why I stopped answering my phone, and what my experience has been like since."
tags: [100daystooffload, life, opinion]
slug: no-phone-answering
---
This month marks a year since I decided to (_mostly_ - see below) stop answering my phone. This was not because I wanted to be antisocial (quite the opposite), but because it has become the wrong form of communication for me.
## Why did I stop?
Like many people, I am inundated with sales-y and spammy phone calls. I have had the same mobile phone number since 2001 (that's 20 years this year), which I am sort of proud of and would prefer to keep. However, careless (or malicious) entities over the years (and more than likely mistakes made by my younger self) have meant that my number and name are now in the databases of many different types of agents - from insurance/legal company sales teams through to dodgy Bitcoin spam companies.
It got to the point that the signal/noise ratio ("real" phone calls vs. unwanted) probably dropped to around 5%. At first, spam calls were easier to spot (they'd call from random UK cities), but recently calls started to come in from numbers starting with "07" (which designates a mobile number in the UK) and also more and more from the [area code](https://en.wikipedia.org/wiki/List_of_dialling_codes_in_the_United_Kingdom) of the city where I live - probably in the hope of appearing more legitimate to me.
I also find talking on the phone sort of _stressful_. I'm sure I'm not alone in that the _Phone_ app is probably the least-used part of my smart"phone". For some reason, to me it just doesn't feel natural, and - with the exception of close friends and family (and even them sometimes) - I'd much rather "talk" to people via IM or live text chat.
I'm naturally pretty introverted so I get on better with channels that enable me to think and formulate comms in my own time.
Unexpected and unscheduled calls are also pretty _rude_, I think. Stephen Fry sums up what I feel about this [in this short but great clip from QI](https://youtu.be/7xXSw07zrio?t=211): phoning someone out of the blue is really the equivalent of going up to that person and yelling, "speak to me now, speak to me now, speak to me now" - without caring that they might be busy, stressed, not in the right frame of mind, or any number of other states.
This is incredibly invasive to do to someone you don't even _know_.
## What was the result?
In the end I made a pact with myself that I would no longer answer the phone unless it was a pre-arranged call or from a number I recognised - and even then only close friends and family.
I feel far more empowered and in control of my own time when I hear/see my phone ring - and I just silence it and let it ring out. The decision has already been made to purposefully miss the call and so there is no need for any anxiety that might accompany such unexpected calls.
I sometimes choose to avoid calls from numbers I _do_ recognise. These callers (usually businesses I deal with) just follow up with an email anyway, which I can respond to when I'm ready - usually within the hour. If there is an emergency they can leave a voicemail, which I will get notified about and can then choose how best to respond. Friends and family either feel the same as me or know me well enough that I don't need to miss their calls.
Either way, I haven't (knowingly) missed any events, appointments, insurance renewals, or anything else. I am going to carry on as I have been, and I can certainly recommend this approach if you feel the same way as me about unwanted phone calls.

View File

@ -0,0 +1,47 @@
---
date: "2021-02-20T21:42:00Z"
title: "The Glamour of Cyberpunk and the Road to Solarpunk"
description: "What is Solarpunk, and can we make it a reality?"
tags: [100daystooffload, opinion]
slug: solarpunk
---
A few months ago I stumbled across this article: [Beyond Cyberpunk: Towards a Solarpunk Future](https://thedorkweb.substack.com/p/towards-a-solarpunk-future). It was posted on the excellent blog _Tales from the Dork Web_, by Steve Lord, which I can certainly recommend [subscribing to](https://thedorkweb.substack.com/subscribe).
I had never heard the term "Solarpunk" before, but the more I read up on it the more intrigued I became. Essentially it is defined - more or less - as the _opposite_ of the [Cyberpunk](https://en.wikipedia.org/wiki/Cyberpunk) subculture, and I think we're at a bit of a fork in the road from which either future could become a reality.
## Cyberpunk
Cyberpunk (_not_ the game by CD Projekt) is a term that describes a potential future setting that is pretty dystopian: there is a large "wealth gap" between the rich and poor; people live in dark and cramped accommodations, have mostly unhealthy existences, and are governed by a small number of large private corporations. The growth of these companies, however, allows citizens of the Cyberpunk future to be equipped with some pretty nice pieces of technology for communication, leisure & media, travel, automation, and anything else.
In a nutshell, it's often described as "high-tech, low-life".
Whilst it sounds (to some?) like a gloomy outlook, I love the dark and lonely imagery, and the artwork, stories, and subculture that have emerged from other people who are also fascinated by this movement. You've probably seen such scenes yourself in pictures, movies, books, and games that adopt the Cyberpunk setting. The [r/ImaginaryCyberpunk subreddit](https://www.reddit.com/r/ImaginaryCyberpunk) community also often posts excellent and emotive content.
I love this image: [Oris City by Darko Mitev](https://www.artstation.com/artwork/R3QNee) and I can certainly recommend checking out more of his work and tutorials too. I love all of the atmosphere and detail.
Despite the "glamour", interesting and exciting stories and movies, politics, and other cultural pieces that emerge from it, Cyberpunk describes a gloomy future that I imagine most people do not want to actually experience.
## Solarpunk
I think we're at a bit of a weird, but pivotal, point in time right now - from (geo-)political, societal and technological perspectives - in that the Cyberpunk dystopia is becoming a little unblurred. With ever-mounting consumerism, capitalism, bad choices regarding energy production, mass surveillance (from both private companies and governments), and much more, our reality certainly feels as though it is moving towards a point where some of the elements that comprise Cyberpunk do not feel too far-fetched at all.
The present feels pivotal because, whilst there are excellent efforts being made to reverse some of these positions around the world (from local recycling schemes and zero-waste manufacturers through to fights for human rights and rallies around liberal activists), these processes only become effective and impactful if they are considered and actioned by society _as a whole_. While there are still enough members who wallow in seemingly-backward ideologies and refuse to become involved or make any of the needed adjustments, change as a society cannot happen.
However, on a more positive note, if such challenges can be solved - and the right choices made now and in the near future - then a whole new potential future opens its doors: one that might be described as _Solarpunk_.
In a [Solarpunk future](https://en.wikipedia.org/wiki/Solarpunk) humanity is much more in-tune with the world around it, maintaining a focus on sustainability (in terms of energy production, consumerism, ecology, and _education_), locality (in terms of sourcing materials and food, manufacturing, and the "do it yourself" movement), and - perhaps most importantly - an _attitude_ that promotes sharing and positivity.
To me it's not "hippyish" or necessarily to do with the adoption of socialism or the outright rejection of capitalism and associated ideologies - it's more concerned with sensible _balances_ across many facets of society and its politics. Competitiveness and drives to "do better" are part of what makes us human, and can very much live hand-in-hand with the other points and aesthetics we're talking about here.
Nor is it a rejection of technology. In fact, from a technological perspective, forward-thinking efforts surrounding the [free and open-source software](https://en.wikipedia.org/wiki/Free_and_open-source_software) movement and privacy-first companies are certainly components I see that can help contribute to (and become a focus within) a more sustainable and fair world. Technology can continue to innovate, develop, and improve in either setting.
> Solarpunk isn't about doing your bit to save the world from climate collapse. Solarpunk is about building the world you want your grandchildren to grow old in. - [Steve Lord](https://thedorkweb.substack.com/p/towards-a-solarpunk-future)
We've already seen some fantastic real-world efforts that can be considered part of this movement - from architecture and transport through to self-repair and home agriculture. I love the [bottle farm](https://containergardening.wordpress.com/2011/09/07/bottle-tower-gardening-how-to-start-willem-van-cotthem) idea included in the post I mentioned at the start of this article, and want to try this myself.
There are also the more obvious reflections, such as fully embracing solar energy (and other renewables) as a source of power - at both an individual and an industrial scale - and efforts concerned with maintaining green spaces in developing and urban areas. I think that the more mainstream and ubiquitous we can make all of these actions, the more realistic a Solarpunk world becomes.
---
_Note: this article only scratches the surface of the Cyberpunk and Solarpunk subcultures. It is aimed to be more of a primer to introduce the concepts behind these ideas and to perhaps pique the interest of readers enough to continue their own research._

View File

@ -0,0 +1,79 @@
---
date: "2021-02-24T13:46:00Z"
title: "Migrating from Google Photos: Nextcloud, Piwigo, Mega, and pCloud"
description: "My experiences with trying to move away from Google Photos."
tags: [100daystooffload, technology, opinion]
image: header-google-photos-pcloud.png
imageDescription: AI generated pixel art representing photo apps on smartphones.
slug: google-photos-pcloud
---
By now I'm sure everyone has heard the horror stories about people (seemingly) randomly losing access to their Google accounts. The account closures are often reported to be accompanied by vague automated notifications from Google complaining that the account-holder violated their terms in some way, but without any specific details, offer of appeal, or process to resolve the "issues" and reinstate the accounts.
As such, these events usually mark the end of the road for the victims' presence and data on Google platforms - including Gmail, Drive, Photos, YouTube - without having any option to extract the data out first. This could be years' worth of documents, family photos, emails, Google Play purchases, and much more (ever used "Sign in with Google" on another service, for example?).
Some affected people are fortunate enough to have a large social media following, so that their posts describing this treatment can traverse the networks and reach someone close to Google who can escalate the issue and get the account reinstated. For most people, however, this is not possible.
The [creator of Stardew Valley](https://twitter.com/Demilogic) recently [found himself locked out of his 15-year-old Google account](https://twitter.com/Demilogic/status/1358661840402845696) - even whilst involved in a key ongoing deal with Stadia, which he has since pulled out of due to feeling mistreated. There are many similar stories out there, and probably thousands more we never hear about.
Of course, I am sure there are legitimate reasons for many accounts to be removed and that the original intentions behind these automated systems were good. Either way, this still just worries me. Whilst I haven't (at least, I don't think?) done anything to violate any terms, I just don't want to take the risk and wake up one morning to find I have lost 15 years' worth of emails, photos, and documents.
## What are the alternatives?
These days there are so many services that compete with Google's own offerings. For example, [DuckDuckGo](https://duckduckgo.com) is excellent for web search - though several times a day I do need to fall back to Google search for more complex queries (which I can do by prefixing DuckDuckGo search queries with `!g`). There are [many websites that list good alternatives to Google services](https://restoreprivacy.com/google-alternatives), and I won't bang on about these ideas here - it's up to you what you prefer to use, of course, and this type of thing has been covered many times before.
Personally, I've used my own domain to send and receive email (using [Fastmail](https://www.fastmail.com)) for several years now, and self-host my files and documents using [Nextcloud](https://nextcloud.com). I don't really use YouTube or have much data tied-up in the other Google offerings.
However the one service I do rely on still is Google Photos. To be fair, this is a fantastic service - the apps seamlessly back everything up, the search is great (it's Google's bread-and-butter, after all), and I can easily and instantly find specific photos from two decades ago, or from any time in-between. It's also super fast. I'd never found a good-enough replacement for media storage and so I never made the leap.
## The problems with media storage
Images and videos - especially with modern cameras and phones - take up a _huge_ amount of space. I take a few pictures every day, and on my messenger apps I sometimes like to save images I receive from friends and family too.
This has resulted in a collection of over 84,000 pictures and videos in a mostly-continuous stream since 1998 - the year our family got our first digital camera. There are also digitised versions of photos from as early as 1959 on there too. Whilst this is not a massive collection by any standards these days, it forms a significant part of my own data footprint.
Whilst I was happy with using Google for this one area, I would get so nervous every time I read one of those "deleted accounts" stories that it got to the point where last month I finally committed to making a change.
In the meantime I needed to try and get my stuff out of Google Photos. The service lets you download 500 images at a time from the web interface, but that would have taken forever. The other option was to use [Google Takeout](https://google.com/takeout). I did this and shortly after received an email containing links to download 48 different archives of data.
![Email from Google Takeout listing lots of download buttons](/media/blog/google-takeout-email.png)
When I downloaded a couple of examples, I saw that they seemed to contain a lot of JSON metadata files and not many actual photos. I imagined I'd have to download the whole lot to try and make sense of it all and manually piece bits together. I thought I'd leave that for now whilst I continued my search for an alternative service, and come back to that problem later.
## The search for a Google Photos alternative
The first job was to identify a new process/system for media storage. I had a few acceptance criteria in mind:
- It needed to be affordable (not necessarily as cheap as [Google One](https://one.google.com), but not bank-breaking either).
- I needed it to be quickly and easily navigable (i.e. to easily move to a particular date to find photos).
- It had to have some type of auto-sync from my phone's photo gallery (I am too lazy to remember to manually "back-up" things - I need automation!).
I was already using Nextcloud for my documents anyway, and the Nextcloud app (which is brilliant) also includes an auto-upload feature for photos. However, I find Nextcloud gets a bit slow - grinding my server to a halt when viewing large, recently-uploaded photos (I guess it processes things in the background?). Also, my VPS provider's pricing (not unreasonable) would mean forking out the best part of $400 a year for the required block storage - and this would only increase every year as my library gets bigger.
I also considered [Piwigo](https://piwigo.org), which looks great and is [reported to be very fast](https://piwigo.org/testimonials). However the self-hosted option would have the same pricing implications as Nextcloud (above), and the hosted offering would be [significantly more](https://piwigo.com/pricing) if I was to include videos too. I think Piwigo is aimed more at photographers maintaining and sharing albums rather than for use as a personal photo storage solution.
I [recently tooted](https://fosstodon.org/web/statuses/105692084325464954) out to the community about this problem and got some great responses back. One idea in particular caught my eye: [Mega](https://mega.nz). I had used Mega a while back, and the apps and web interfaces seem to have come a long way in recent years. After a bit of research I decided to choose this option. It seemed secure (with client-side encryption), quick, and the apps had the features I needed.
I went to pay for Mega (using the web browser), and it redirected me to a very dodgy-looking payment website - this threw me a little. I went back to the checkout page to see if I had clicked the wrong thing, clicked "confirm" again, and this time it took me to an entirely _different_ (but still sort of dodgy-looking) payment site. I've set up [Stripe](https://stripe.com) a few times before, and know it's pretty trivial these days to accept card payments on your own site, and so alarm bells began to ring. My paranoid, security-focused self was put off enough to continue my search.
## Migrating to pCloud
That's when I stumbled upon the Swiss-based [pCloud](https://www.pcloud.com) on a Reddit thread discussing storage alternatives. It seems to be pretty feature-matched with Mega, despite not offering client-side encryption out-of-the-box - but then neither does Google Photos. Additionally, pCloud offers both US and European servers.
pCloud's apps have similar functions to Mega, and the service also has the added bonus of offering a Google Drive integration! Hopefully this would mean I wouldn't need to spend ages traversing that Google Takeout mess. The service also offers integrations with Dropbox, OneDrive, and some social networking platforms.
I signed up and paid - without being redirected to any dodgy sites. I then linked my Google account and waited for the magic to happen.
It was a little slow. I know there was a fair amount of data, and I imagine Google rate-limiting and other factors contributed to the slow pace too. I checked the progress every few hours; there was a sort of indicator (a folder count), but otherwise no real way to check what was going on. After a couple of days I noticed it had stopped (or "aborted") by itself.
![Screenshot of pCloud, showing Google Drive import aborted](/media/blog/pcloud-google-drive.png)
I had a quick browse through what pCloud had brought through and could see it had got to around July 2019 before it had had enough. This was OK - it had imported the vast majority and I was happy enough to run through the last couple of years' worth of content on Google Photos, downloading 500 photos at a time to manually upload to pCloud in order to plug the gap.
I then un-linked my Google account from pCloud. I turned off Google Photos auto-upload from my phone and instead all new media now gets auto-uploaded to pCloud. Job done.
## Final thoughts
pCloud's navigation seems to be pretty quick, and uploading content is also very fast. It's not _perfect_, though (is anything?) - viewing photos on the app can take a few seconds to generate/retrieve thumbnails, and it doesn't have the smoothness that Google Photos offers.
However, it's great for now. I have a _"tangible"_ folder of media that feels more portable in case I ever need to move again. pCloud also has clear channels for communication if I do ever need to get in touch, and I certainly feel as though I am less subject to automated judgments from unruly algorithms.

View File

@ -0,0 +1,153 @@
---
date: "2021-02-28T22:05:00Z"
title: "Making your Python Flask app serverless"
description: "How you can deploy your existing Flask app on a scalable serverless architecture."
tags: [100daystooffload, technology, python]
image: header-flask-serverless.png
imageDescription: Pythons, flasks, and laptops!
slug: flask-serverless
---
Python's [Flask framework](https://flask.palletsprojects.com) is an easy and excellent tool for writing web applications. Its in-built features and ecosystem of supporting packages let you create extensible web APIs, handle data and form submissions, render HTML, handle websockets, set up secure account management, and much more.
It's no wonder the framework is used by everyone from individuals and small teams through to large enterprises. A very simple, yet still viable, Flask app with a couple of endpoints looks as follows.
```python
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, World!'


@app.route('/<name>')
def greet(name):
    return 'Hello, ' + name
```
Flask apps like this can easily be deployed to a server (e.g. a VPS) or to an app-delivery service (e.g. [Heroku](https://www.heroku.com), [AWS Elastic Beanstalk](https://aws.amazon.com/elasticbeanstalk), and [Digital Ocean App Platform](https://www.digitalocean.com/products/app-platform)). In these scenarios, the server/provider often charges the developer for each hour the app is running. Additionally, as traffic increases or reduces, the provider can automatically scale up and down the resources powering your app in order to meet demand. However this scaling can sometimes be a slow process and also means that the developer is charged even when the app is not being used.
If you want your app to scale almost instantly from zero to thousands of concurrent users, to cost nothing while nobody is using it, to be highly-available (keeping uptime high to meet SLAs), and to require no server set-up or maintenance (with nothing for bad actors to try and SSH into), then migrating to a more serverless architecture might be of interest to you.
Also, given that most providers offer a pretty generous free tier for serverless apps, you may not end up paying much at all (up to a few dollars max a month) until you start generating enough traffic.
_Note: in this article I use Flask as an example, however the same should apply to any WSGI-compatible framework, such as Bottle and Django, too._
## What is a serverless web app?
"Serverless" is the generic term for a family of cloud-based execution models where the developer does not need to worry about provisioning, managing, and maintaining the servers that run their application code. Instead, the developer can focus on writing the application and can rely on the cloud _provider_ to provision the needed resources and ensure the application is kept highly-available.
Although services such as Heroku and [Digital Ocean App Platform](https://www.digitalocean.com/products/app-platform) can be considered "serverless" too (in that there is no server to configure by the developer), I refer more to delivery via _function as a service_ as the particular serverless model of interest in this article, since this offers the benefits listed at the end of the previous section.
"Function as a service" (FaaS) - as its name suggests - involves writing _functions_, which are deployed to a FaaS provider and can then be _invoked_. Such systems are _event-driven_, in that the functions are called as a result of a particular event occurring - such as on a periodic schedule (e.g. a cron job) or, in the web application case, an HTTP request.
There are many FaaS providers, such as [Azure Functions](https://docs.microsoft.com/en-us/azure/azure-functions/functions-overview), [Google Cloud Functions](https://cloud.google.com/functions), [Cloudflare Workers](https://workers.cloudflare.com), and [IBM Cloud Functions](https://www.ibm.com/cloud/functions).
Probably the most famous (and first major) FaaS offering is [AWS Lambda](https://aws.amazon.com/lambda). In this article I will focus on using Lambda as the tool for deploying Flask apps, but many of the concepts discussed are generic across providers.
Serverless apps written using AWS Lambda usually also involve [Amazon API Gateway](https://aws.amazon.com/api-gateway/features), which handles the HTTP request/response side of things and passes the information through as code to the Lambda function. The `event` argument received by the function describes - among other things - the information about the request that can be used to generate an appropriate response, which is then returned by the function.
```python
import json


def lambda_handler(event, context):
    # 'queryStringParameters' holds the parsed query string passed in by API Gateway
    name = event['queryStringParameters']['name']
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"Hello": name})
    }
```
Applications on Lambda can use a separate function for each request path and method combination, or one function for _all_ invocations from API Gateway (using the `event` parameter and code logic to decide on the needed actions) - as long as each function returns a valid object from which API Gateway can generate a response.
Either way, this is a different pattern to how Flask structures its functions, requests, and responses. As such, we can't simply deploy our Flask app as-is to Lambda. I'll now talk about how we _can_ do it without too much extra work.
## Using Serverless framework to describe a basic app
The [Serverless framework](https://www.serverless.com), along with its extensive [library of plugins](https://www.serverless.com/plugins), is a well-established tool for provisioning serverless applications on a number of providers. It bundles your code and automates the deployment process, making it easy to create a serverless app.
Configuration of apps deployed using Serverless is done through the `serverless.yml` file. The example configuration below would, when deployed, create an API Gateway interface and a Lambda function using the code in `app.py`, and would invoke the `lambda_handler` function (above) each time a `GET` request is made to `/hello`:
```yml
service: my-hello-app

provider:
  name: aws
  runtime: python3.8
  region: eu-west-1
  memorySize: 512

functions:
  hello:
    handler: app.lambda_handler
    events:
      - http:
          path: hello
          method: get
```
## Deploying an existing Flask app to AWS Lambda
The good news is that we can also leverage the Serverless framework to deploy Flask apps - and without needing much change to the existing project. This section assumes that you have an AWS account already that you can use. If not, then you can sign-up from [their website](https://aws.amazon.com).
First off, we need to install the Serverless framework itself. This can be achieved through NPM: `npm install -g serverless`.
Next, we need to configure credentials that will allow Serverless to interact with your AWS account. To do so, use the IAM manager on the AWS console to generate a set of keys (an access key and secret access key) and then use the following command to configure Serverless to use them:
```shell
serverless config credentials --provider aws --key <ACCESS_KEY> --secret <SECRET_ACCESS_KEY>
```
While you should try and restrict access as much as possible, the fastest (yet riskiest) approach is to use an IAM user with Administrator Access permissions. If you want to configure more security I recommend reading the [Serverless docs](https://www.serverless.com/blog/abcs-of-iam-permissions).
Once the above groundwork has been completed, you can proceed to create a new `serverless.yml` file in the root of your Flask project:
```yml
service: my-flask-app

provider:
  name: aws
  runtime: python3.8

plugins:
  - serverless-wsgi

functions:
  api:
    handler: wsgi_handler.handler
    events:
      - http: ANY /
      - http: ANY {proxy+}

custom:
  wsgi:
    app: app.app
```
Don't worry too much about the `wsgi_handler.handler` and `events` parts - essentially these ensure that all HTTP requests to the service are routed through to your app via a special handler that Serverless will set up for us.
This setup assumes your root Flask file is named `app.py` and that the Flask instance within it is also named `app` - hence the `app.app` value in the `custom.wsgi` attribute above. Change this if it doesn't match your project (e.g. `server.application` for an instance named `application` in a hypothetical `server.py`).
Another thing to note is the new `plugins` block. Here we declare that our application requires the [`serverless-wsgi`](https://www.serverless.com/plugins/serverless-wsgi) plugin, which will do much of the heavy lifting.
To make use of the plugin, you'll need to add it to your project as a dependency by running `serverless plugin install -n serverless-wsgi`. As long as your Flask project dependencies are listed in a `requirements.txt` file, you can now deploy your app by simply running `serverless deploy`. After a few minutes, the framework will complete the deployment and will print out the URL to your new service.
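The plugin also provides a way to serve your app locally while emulating the API Gateway environment - useful for checking everything is wired up before you deploy (see the plugin's documentation for the specifics):
```shell
serverless wsgi serve
```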
## Tweaking the deployment
There are various ways to adjust the environment of your deployed service. For example, you can change the amount of memory assigned to your function, make use of environment variables (e.g. for database connection strings or mail server URLs), define roles for your functions to work with other AWS services, and much more.
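As a rough sketch of what some of these options look like in `serverless.yml` (the environment variable names here are just placeholders):
```yml
provider:
  name: aws
  runtime: python3.8
  memorySize: 1024   # MB assigned to the function
  timeout: 30        # max execution time in seconds
  environment:
    DATABASE_URL: ${env:DATABASE_URL}
    MAIL_SERVER_URL: ${env:MAIL_SERVER_URL}
```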
I recommend taking a look at the [Serverless documentation](https://www.serverless.com/framework/docs/providers/aws/guide/serverless.yml) to understand more about what options are available.
If you want to use a custom domain for your service, then you can either set this up yourself in API Gateway through the AWS console or by using the [`serverless-domain-manager`](https://github.com/amplify-education/serverless-domain-manager) plugin. Either way you will need to have your domain managed using [Route 53](https://aws.amazon.com/route53).
## Serverless caveats
Whilst the benefits offered by serverless delivery are strong, there are also some things to bear in mind - particularly when it comes to avoiding unexpected costs. Lambda bills for the execution time of each invocation, and functions are subject to a maximum duration (configurable on the function, up to 15 minutes at the time of writing) - and API Gateway imposes its own response timeout of around 29 seconds - so long-running requests may be cut short.
Additionally, if your Flask app makes use of concurrency (e.g. if you use threads to background longer-running tasks, like email-sending), then this may not play nicely with Lambda, since the function may get terminated once a response is generated and returned.
I outlined some extra things to watch out for [in a recent article](/blog/2021/01/03/scaling-serverless), so take a look through that if you want to read more on these.
Generally speaking, however, serverless apps are quite a cheap and risk-free way to experiment and get early prototypes off the ground. So, if you're familiar with Flask (or other WSGI frameworks) and want an easy and scalable way to deploy your app, then perhaps this approach could be useful for your next project.

View File

@ -0,0 +1,197 @@
---
date: "2021-03-04T22:17:00Z"
title: Easily set up discoverable RSS feeds on a Gatsby website
description: "How to set up multiple discoverable RSS feeds for your static Gatsby website."
tags: [100daystooffload, technology, javascript]
slug: gatsby-rss
---
RSS has had a [bit of a resurgence](/blog/2021/02/03/rss-rise-fall-rise) for personal websites and blogs in recent years, especially with the growing adoption of [Small Web](https://ar.al/2020/08/07/what-is-the-small-web) and [IndieWeb](https://indieweb.org) ideologies.
Many static site generators - including [Hugo](https://gohugo.io), [Jekyll](https://jekyllrb.com), and [Eleventy](https://www.11ty.dev) - can easily support the automatic generation of RSS feeds at build time (either directly, or through plugins).
The same is true for [Gatsby](https://www.gatsbyjs.com) - the framework currently used to build this static website - and the good news is that setting up one feed, or multiple ones for different categories, only takes a few minutes.
## Your Gatsby blog structure
This article talks about RSS feeds for blogs (a typical use-case), but is also relevant for other notes, podcasts, or anything else that is published periodically to your Gatsby site.
In Gatsby, the typical blog set-up involves the blog entries in markdown format, and a [template "page"](https://www.gatsbyjs.com/docs/tutorial/part-seven), which is used to render the markdown blog posts.
You'll also probably have a "blog" page which lists or paginates your posts for visitors to find them, and a `createPages` function in your `gatsby-node.js` that generates the pages from the template and markdown.
All this sounds way more complicated than it is in practice, and there are lots of [guides available](https://blog.logrocket.com/creating-a-gatsby-blog-from-scratch) to help set this up.
At the very least, this article assumes you have blog posts written in a directory containing markdown for each post similar to the following:
```yaml
---
date: "2021-03-04T22:17:00Z"
title: "Easily set up discoverable RSS feeds on a Gatsby website"
description: "How to set up multiple discoverable RSS feeds for your static Gatsby website."
tags: [100daystooffload, technology, javascript]
---
The post content starts here...
```
The metadata (frontmatter) doesn't need to be exactly as shown, but having useful metadata (e.g. tags) in-place helps make your feeds richer.
## Creating your feeds
To create the feeds, we'll use a Gatsby plugin called [`gatsby-plugin-feed`](https://www.gatsbyjs.com/plugins/gatsby-plugin-feed), which will do most of the heavy-lifting for us (as long as you have a blog in place structured similarly to the way described above).
First off, add the plugin as a dependency: `yarn add gatsby-plugin-feed`. I also recommend installing `moment` to help with formatting dates for the feed (as we'll see later): `yarn add moment`.
Next, you'll need to add some code to `gatsby-config.js`. If you have a blog then you likely already have content in this file (e.g. `gatsby-source-filesystem` configuration). Your file probably looks a little like the following:
```javascript
module.exports = {
  siteMetadata: {
    title: 'My Cool Website',
    siteUrl: 'https://my.cool.website',
  },
  plugins: [
    {
      resolve: 'gatsby-source-filesystem',
      options: { ... },
    },
    'gatsby-plugin-react-helmet',
  ],
};
};
```
Along with any other plugins you may have.
To create the feed we'll make use of a GraphQL query, and a function which will create a feed object. If we define these separately (as below), it will give us more flexibility later.
In the same file (`gatsby-config.js`), at the top, first `require` the `moment` library we installed earlier, define the query we'll use, and a function to create a feed object:
```javascript
const moment = require('moment');

// Query for all blog posts ordered by filename (i.e. date) descending
const rssPostQuery = `
{
  allMarkdownRemark(
    sort: { order: DESC, fields: [fileAbsolutePath] },
    filter: { fields: { slug: { regex: "/blog/" } } }
  ) {
    edges {
      node {
        html
        fields { slug }
        frontmatter {
          title
          description
          date
          tags
        }
      }
    }
  }
}
`;

// Map a markdown node ("edge") to an RSS feed item object
const createRssPost = (edge, site) => {
  const { node } = edge;
  const { slug } = node.fields;
  return Object.assign({}, node.frontmatter, {
    description: node.frontmatter.description,
    date: moment.utc(`${node.frontmatter.date}`, 'YYYY/MM/DDTHH:mmZ').format(),
    url: site.siteMetadata.siteUrl + slug,
    guid: site.siteMetadata.siteUrl + slug,
    custom_elements: [{ "content:encoded": node.html }],
  });
};
```
The `rssPostQuery` assumes your blog posts are rendered at `/blog/filename` in your built site. If not, then just change this value in the regex. Likewise, the `createRssPost` function assumes the dates in the frontmatter of your posts are formatted like `YYYY/MM/DDTHH:mmZ` - if not, just change this string to match your own format (I use UTC here as we're dealing with global audiences!).
Essentially, the GraphQL query string returns all markdown files ordered by descending filename (I title my blog posts by date, so this gives a reverse chronological ordering of posts, with the newest first), and gives us the post content, slug ("path"), and selected fields from the posts' frontmatters.
We use a regex in the query to discern between different types of markdown files. For example, you may have a collection of notes - also written in markdown - which we want to ignore for the purposes of creating an RSS feed for _just_ blog posts.
The `createRssPost` function (which we'll call later), accepts a markdown file (`edge`) and information about the website (`site`), and returns a fresh object representing this information to be eventually embedded in the feed.
The `guid` field is a globally-unique ID for this post on your blog and reader software will use this to, for example, determine if the user has already seen the post and should mark it as "read". Since all of my posts have a unique path ("slug"), I just use this for the ID.
Finally, we need to add a section to our `plugins` array to tell `gatsby-plugin-feed` how to build our feed using the query and function we created above. In the same file, make the following changes:
```javascript
module.exports = {
  siteMetadata: { ... }, // omitted for brevity
  plugins: [
    {
      resolve: 'gatsby-source-filesystem',
      options: { ... }, // omitted for brevity
    },
    { // Add this object to your "plugins" array:
      resolve: 'gatsby-plugin-feed',
      options: {
        feeds: [
          {
            serialize: ({ query: { site, allMarkdownRemark } }) =>
              allMarkdownRemark.edges.map(e => createRssPost(e, site)),
            query: rssPostQuery,
            output: '/rss.xml',
            title: 'My Cool Blog',
            description: 'All of my blog posts',
          },
        ],
      },
    },
    ...
  ],
};
```
The `gatsby-plugin-feed` plugin only runs when the site is actually _built_. If you have your Gatsby site running locally, just run `gatsby build` in a separate Terminal window and then navigate to `/rss.xml` on your local development website to view the feed.
## Creating multiple feeds
The example configuration in the previous section creates a single feed containing all blog posts.
However, you may have noticed that the `feeds` attribute is an array; this means that the plugin can be used to create multiple feeds. I do exactly that on [this website](/feeds): I have different feeds for different audiences (e.g. for technology, life, books, etc.).
Since we've already broken our code out into a separate query and function, it is easy to add new feeds by `filter`ing on the markdown edges before passing them to `map` in the `serialize` function.
If you modify the same file again (`gatsby-config.js`), you can create a feed for all of your posts that contain a tag named "technology" as follows:
```javascript
... // omitted for brevity
{
  resolve: 'gatsby-plugin-feed',
  options: {
    feeds: [
      { ... }, // omitted for brevity
      {
        serialize: ({ query: { site, allMarkdownRemark } }) =>
          allMarkdownRemark.edges.filter(e => {
            const tags = e.node.frontmatter.tags;
            return tags && tags.length > 0 && tags.indexOf('technology') > -1;
          }).map(e => createRssPost(e, site)),
        query: rssPostQuery,
        output: '/technology.xml',
        title: 'My Technology Blog',
        description: 'Posts in my blog tagged with "technology".',
      },
    ],
  },
},
...
```
This will create a new feed at `/technology.xml` containing these tech posts.
Since it's just plain old JavaScript, you can use any of the available information to craft a number of flexible feeds for your visitors to subscribe to. You can then list these feeds on a page on your site, like [this one](/feeds).
## Feed discovery
The `gatsby-plugin-feed` plugin has one more trick up its sleeve: without any extra work it will automatically inject the relevant `<link />` tags to your site's HTML at build-time to list the feeds that you have configured.
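The injected tags look something like the following (one per configured feed; the values here are illustrative):
```html
<link rel="alternate" type="application/rss+xml" title="My Cool Blog" href="https://my.cool.website/rss.xml" />
```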
This means that your visitors just need to add your site's root URL (e.g. "https://my.cool.website") into their feed reader and it will suggest the available feeds to them.
![A screenshot showing the Reeder app auto-listing my website's feeds](/media/blog/reeder-feeds.png)
The image above shows the [Reeder macOS app](https://www.reederapp.com) automatically listing the available feeds on my website after entering just the root URL for the site. Visitors can then just add the ones they want.

View File

@ -0,0 +1,70 @@
---
date: "2021-03-08T19:23:00Z"
title: "Thoughts on minimalism, and what happens when I get mail"
description: "My processes for handling physical mail, receipts, and other paperwork."
tags: [100daystooffload, life, technology]
slug: getting-mail
---
## Minimising possessions
Like many people, these days I try to live a minimal life when it comes to possessions. Having more _stuff_ means a greater level of responsibility to look after it all. I love the principles involved in "owning less".
Although I am in a very different situation to [Pieter Levels](https://levels.io), I find the ideas behind his [100 Thing Challenge](https://levels.io/the-100-thing-challenge) (and [other related pieces](https://levels.io/tag/minimalism)) to be inspiring.
Although my home contains items that are technically mine - furniture, kitchenware, decorations, etc. - I consider these as belonging to the _house_ itself rather than as my personal belongings. Personal items are essentially the things I can fit into my backpack and are things I actually _need_ on a daily or weekly basis: my laptop, my phone, some of my clothes, my toothbrush, passport, and a few other smaller items.
Non-essential things - although a "luxury" - are also a _liability_ (and an anchor).
This also helps to keep emotional attachment out of ownership. I know that if I were to lose or break my phone, I could get another and continue on as before. The main concepts here for me are _portability_ and _replaceability_.
I consider my data and communications to be personal belongings, too. For example, emails I've sent and received, documents, images, and so on. Since these are all digital, I can just stick them on my Nextcloud (or [pCloud](/blog/2021/02/24/google-photos-pcloud)), and I can access them any time through my phone or laptop.
I strive for digital minimalism too, where possible. However, since this data storage methodology is scalable and keeps things very organised, I don't mind holding onto data and documents that might be useful in the future. Even with thousands of stored documents, the collection is still _portable_ and it fits with my model.
## Mail and paperwork
Many of the world's organisations - including insurance companies, banks, lawyers, and public services - still love doing business with physical documents and through physical mail. Also, these are typically the types of documents you are supposed to keep hold of for long periods of time for the purposes of financial records, insurance certification, and so on. Over time this paperwork builds up and quickly becomes disorganised.
Some people keep boxes or filing cabinets of documents and mail. This turns into something else to be responsible for. It's not portable (in the "backpack" sense mentioned earlier) or replaceable. If there were a fire it would be lost, and when moving home it's something else to "worry" about.
Until a couple of years ago, I kept documents in ring-binders. My process involved hole-punching documents (retro, I know), finding the most appropriate section of the ring-binder for each document, filing it, and then putting the ring-binders back on the shelf.
I had years' worth of utility bills, insurance documents, bank statements, pay-slips, and more that I would need to bring with me whenever I moved, and I always had to ensure there was a physical space for them in my life somewhere.
I began to realise that - for the vast majority of these documents - I would never really need the _original_ version. Apart from things like my passport and paper certificates containing security features, document _copies_ would be fine. And since I already had a system for storing digital documents, I could extend this to maintain a more organised (and searchable) collection of digitised paper documents too.
## Digitising paperwork
Phone cameras these days are more than capable of creating high-quality digital replicas of paper documents. There are also many scanner apps available to make this easier.
I personally use [Scanner Pro](https://apps.apple.com/app/apple-store/id333710667) on my iPhone, which is very useful. It automatically detects paper edges (even on documents with unusual dimensions) and straightens the image sensibly too. It also has settings for further configuration; for example, I only need greyscale copies rather than the highest resolution - both of which help decrease the size of the eventual file.
The official iOS [Files app](https://apps.apple.com/us/app/files/id1232058109) also has a "Scan Documents" feature, which looks pretty good. I've not used this extensively myself yet.
After downloading the scanner app, I went through my ring-binders and piled up all the documents to throw out - stuff I just didn't need any record of but had, for some reason, kept anyway. I then went through each remaining section in turn and scanned each document in - storing each PDF to my Nextcloud.
The process was surprisingly quick and by the end I had a nicely organised collection of files on Nextcloud and a large pile of paper documents I could throw out. As I mentioned earlier, about the only physical things I _did_ keep were certificates, my passport, and a handful of other items.
It was a weirdly therapeutic exercise!
## My process now
Jumping back to the present and my more minimalism-focused self, I am now very strict about what paperwork I keep. In fact, I don't think I've kept hold of a physical document that I've received in the last year (and probably longer).
I have a simple process:
1. I receive the document/paperwork and open it;
1. I use my phone to scan the document;
1. I sync the file to a `0 Unfiled` directory on my Nextcloud, titled by date, sender, and short subject (e.g. `2021-03-02_BritishGas_Statement.pdf`);
1. I throw the document out (shredding first if sensitive);
1. If the paperwork requires action, I either do so immediately or set a reminder to do so;
1. Once a month or so I go through my `0 Unfiled` directory and categorise properly according to my personal filesystem.
I use a "holding" directory (`0 Unfiled`) to make the process quicker (for example, if there are several documents to scan) and it ensures I have actually actioned the files once I come round to organising them later. I use a `0` at the start of the directory name so that it sits at the top of my filesystem root in order to improve efficiency (and I try and use the [Johnny.Decimal](https://johnnydecimal.com) concepts as much as possible).
I also use the holding directory for other important documents - such as email attachments I want to include in this system. To me, it doesn't matter which medium was used to receive the document: it's all just data to be categorised and stored.
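As a rough sketch, the top of my filing structure looks something like this (the category names below are made up for illustration):
```
0 Unfiled/                              <- holding directory, processed monthly
  2021-03-02_BritishGas_Statement.pdf
10-19 Finance/
  11 Bank/
  12 Utilities/
20-29 Home/
  21 Insurance/
```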
It's a satisfying process. I now feel more organised, I can easily find a particular document - even from several years ago - without needing to trawl through piles of paper; I can ensure _longevity_ and _integrity_ of the data (i.e. it can't get torn or damaged); I can back the collection up with added _redundancy_; and I can easily view and share the documents from anywhere.
If you currently keep lots of paper records and are interested in minimising your physical footprint then I can recommend trying a similar process yourself.

View File

@ -0,0 +1,23 @@
---
date: "2021-03-10T20:18:00Z"
title: "The Hunt for Red October by Tom Clancy"
description: "My thoughts on The Hunt for Red October by Tom Clancy"
tags: [100daystooffload, book, opinion]
slug: red-october
---
I recently finished reading [The Hunt for Red October](https://www.goodreads.com/book/show/19691.The_Hunt_for_Red_October) by [Tom Clancy](https://www.goodreads.com/author/show/3892.Tom_Clancy).
![Book cover for The Hunt for Red October](/media/blog/red-october.jpg)
This genre of novel (sort of military thriller fiction) is not my usual choice, and this is the first Clancy book I have read. That being said, the book had been on my "to-read" list for a fair amount of time, so I am glad I got round to reading it.
I also hadn't seen [the movie](https://www.imdb.com/title/tt0099810) (starring Sean Connery and Alec Baldwin) by the time I read it, so I didn't have any preconceived ideas about the story and could read it afresh.
Side note: I have since watched the movie, and whilst the core plot is mostly the same there are many differing details throughout (in terms of both angle and storyline), so I can certainly recommend both, whether you've previously seen one, the other, or neither.
In general, I very much enjoyed the book. It was an exciting read from start to finish, with interesting characters, relationships and story arcs. I was fascinated by all of the technical detail and also felt that it helped explain and justify many of the core concepts and features of the story. The character development was good, and you quickly build a connection with many of the different people involved.
Though I do not think this is a fault of the author (I imagine the work is an accurate reflection of the time of the setting), I would hope that if it were written in modern times there would be improved gender diversity and more female representation in the novel - as it is, I do not remember there being a single female character (aside from mentions of wives and family members who do not appear in the story directly).
Either way, I can certainly recommend the book to others who also enjoy an exciting story and lots of technical detail. I thought the run-up to the ending was great and I am definitely intrigued to further my reading in this genre.

View File

@ -0,0 +1,45 @@
---
date: "2021-03-15T11:05:00Z"
title: "The Tildeverse"
description: "Why the Tildeverse is interesting and why you might want to join in."
tags: [100daystooffload, technology]
slug: tildeverse
---
## The last twenty years of internet evolution
Although I was only somewhere between single-digit age and my young teens back in the '90s and early '00s, I fondly remember discovering and becoming a small part of the flourishing community of personal, themed, and hobby websites that connected the web.
We were even given basic server space in school; the wider internet was thriving with [GeoCities](https://en.wikipedia.org/wiki/Yahoo!_GeoCities), and communities grew around services like [Neopets](http://www.neopets.com). Every day, after school, we'd go home and continue our playground conversations over [MSN Messenger](https://en.wikipedia.org/wiki/Windows_Live_Messenger) (after waiting for the dial-up modem to complete its connection, of course). The internet felt small and personal (even if you didn't use your real name or identity) and _exciting_.
For those more tech-aware than I during those days there were also the established [BBS systems](https://en.wikipedia.org/wiki/Bulletin_board_system), [IRC](https://en.wikipedia.org/wiki/Internet_Relay_Chat) (which is still very much in active use), and several other types of available internet and communication services.
Over the years since then we've obviously seen the introduction and growth of tech companies, which have exploded into nearly every corner of the internet. Some have come and gone, but many are still here and continue to grow. We're now at a point where many of these services are almost a full "internet" (as it was back in the day) by themselves: on Facebook you can host a page for yourself or your business, you can engage with any number of other apps through your Facebook account, you can chat in real-time with individuals or groups of friends and family, and much more.
In the developing world, many people see the [internet and Facebook](https://medium.com/swlh/in-the-developing-world-facebook-is-the-internet-14075bfd8c5e) as being entirely analogous, such that new mobile handsets are sold with the app pre-installed and cellular carriers [sometimes provide free access to the platform](https://www.fool.com/investing/2020/05/22/facebook-expanded-internet-access-africa-1-billion.aspx) as part of their data plan.
This boom (invasion?) has completely changed the way the internet works for day-to-day users. Although these companies and their huge marketing teams have facilitated the growth of adoption of technology for community and communication, it has come at a cost. When using these services, the internet no longer feels personal and exciting.
For many people - particularly those who grew up with this state of the world or those who never fully engaged before Web 2.0 - this is fine and not a problem. They would likely laugh at the simplicity and "slowness" of the "old internet" compared to the flashy, speedy and engaging platforms they are used to interacting with for several hours every day.
## Community through the _Tildeverse_
However, there are also many of us who miss the _quality_ and _meaningfulness_ of the smaller and slower web. Since joining Mastodon a couple of years back, it's been great to be part of a movement that actively encourages the growth and maintenance of personal websites, blogs, distributed systems, and the self-hosted services that help promote these ideologies.
Movements and concepts such as the [Small Web](https://ar.al/2020/08/07/what-is-the-small-web), the [Indie Web](https://indieweb.org), and even initiatives like [Project Gemini](https://gemini.circumlunar.space) have all helped to raise awareness of the fact that there is still a large number of people interested in promoting the ideas around the [slow web](https://jackcheng.com/essays/the-slow-web), and in building a real sense of _community_.
Also part of this movement is the notion of the _Tildeverse_. The [Tildeverse](https://tildeverse.org) draws some inspiration from [PubNix](https://www.pubnix.net) and stems from building community through "belonging" - similar to how one might feel when interacting with the [Fediverse](https://en.wikipedia.org/wiki/Fediverse).
The Tildeverse is an opportunity for people to _donate_ server resources by provisioning and managing a \*nix system (e.g. Linux, BSD, or similar), on which members of that _tilde community_ can have a user account that they can access using programs such as [SSH](https://en.wikipedia.org/wiki/SSH_(Secure_Shell)).
The name is derived from the fact that the tilde symbol (`~`) is used to denote a user's _home directory_ on UNIX-like systems that offer multiuser functionality (e.g. `~will`). On such servers, users can use their account and home directory to publish a website, a Gemini capsule, use tools to chat with other members via IRC or internal mail, or take advantage of any number of other services the server administrators may offer.
To join, it is recommended to first identify a community you feel you can contribute positively towards. Many servers don't require payment to join (although there are often options to make donations to help contribute towards the running costs), but it is usually expected that you help foster the sense of community by actively engaging with others, posting interesting or useful content, or by abiding by other "rules" that may be in place.
If you have found a community you'd like to join, a typical registration is often achieved by emailing the server administrators with your desired username and an SSH public key. If and when your registration is accepted, you can then use the corresponding private key to login and begin to engage with the community.
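As a concrete sketch of that flow (the key filename, email address, and username here are hypothetical, with tilde.club as an example host):
```
# Generate a dedicated key pair and share the public half with the admins:
ssh-keygen -t ed25519 -f ~/.ssh/tilde -C "you@example.com"
cat ~/.ssh/tilde.pub   # include this in your registration email
# Once accepted, log in with the corresponding private key:
ssh -i ~/.ssh/tilde yourname@tilde.club
```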
Many such communities, such as [tilde.club](https://tilde.club), list some of the users' home directories as webpages. This lets you get an idea of the community before choosing to join. Many homepages (though this isn't limited to the Tildeverse) include a _webring_, which you can use to navigate to other user websites belonging to the same webring.
Others, such as [tanelorn.city](https://tanelorn.city), are more focused on publishing Gemini content, if that is of more interest to you.
Either way, I'd recommend browsing from [tildeverse.org](https://tildeverse.org) as a starting point if you're interested in getting involved. It helps explain some of the concepts and lists some of the Tildeverse _member_ servers.

View File

@ -0,0 +1,27 @@
---
date: "2021-03-17T19:23:00Z"
title: "Blood, Sweat, and Pixels by Jason Schreier"
description: "My thoughts on the book 'Blood, Sweat, and Pixels' by Jason Schreier"
tags: [100daystooffload, book]
slug: blood-sweat-pixels
---
This post contains some of my thoughts on the book _[Blood, Sweat, and Pixels](https://www.goodreads.com/book/show/34376766-blood-sweat-and-pixels)_ by [Jason Schreier](https://www.goodreads.com/author/show/16222011.Jason_Schreier).
![Blood, Sweat, and Pixels book cover](/media/blog/blood-sweat-pixels.jpg)
This book contains a number of stories about how some of the most well-known (and other less well-known) video games are made. The book's subtitle, "_The Triumphant, Turbulent Stories Behind How Video Games Are Made_", sums it up pretty well.
Working in the software industry myself, I often hear about the notion of "crunch time" - a term we've borrowed from the game development industry - at times when critical updates, fixes, or deadlines are pressing. However, after reflecting on the stories in this book, I realise that the "crunches" we suffer are nothing compared to the crunch and stress experienced by game developers in many small teams and large development studios alike.
Every chapter explains in detail the pain and reward faced by game developers and management teams on an ongoing basis. The developer skill and expertise required by game studios, and the time and resources involved, help to explain the huge financial stakes of these projects.
It's no wonder such harsh deadlines are set. In many cases it's a matter of "life or death": either the game gets released on time, or there is no game at all and everyone loses their job - even in large, well-funded companies.
I loved the stories of the groups of developers that ended up leaving their well-paid (but stressful) jobs in order to start something by themselves as a smaller group - not quite realising at the start what they were letting themselves in for.
I enjoyed the story behind the development of the game _Stardew Valley_. This is a game I love and have played for hours on my Switch - without really knowing (or fully appreciating) where the game came from, all the time spent by its solo developer, and the stress that went on behind the scenes.
The background to the development of _The Witcher 3_ was also fascinating: how the relatively small but super-ambitious studio [CD Projekt Red](https://en.cdprojektred.com) successfully brought Poland's much-loved fantasy world to the world stage.
The book was great, and well-narrated by [Ray Chase](https://en.wikipedia.org/wiki/Ray_Chase_(voice_actor)) (I listened to the [Audible version](https://www.audible.co.uk/pd/Blood-Sweat-and-Pixels-Audiobook/B075KG1SBW)). I only wish there were more stories (it only took a few days to get through), but I appreciate the effort the author put into researching and interviewing some of the key people involved. It is an excellent insight into how parts of the game industry work.

View File

@ -0,0 +1,136 @@
---
date: "2021-03-22T11:50:00Z"
title: "Running your own Matrix homeserver"
description: "A rough guide on how to run your own Matrix homeserver."
tags: [100daystooffload, technology, selfhost]
image: header-host-matrix.png
imageDescription: AI artwork of a home server.
slug: host-matrix
---
# Why use decentralised communication services?
Centralised communication services, such as Telegram, Signal, and Whatsapp, offer convenient means to chat with friends and family using your personal devices. However, these services also come with a number of pitfalls that are worth considering. For example:
- Many of these services are linked to your phone number, which can affect your privacy.
- They can be invasive with your contacts (_"Jane Doe is now using Telegram!"_).
- They usually require you to use proprietary client software. If your OS/platform isn't supported then you can't use that service.
- They typically require that everyone using the service has to use the same client software.
- They can be unreliable (Whatsapp frequently has downtime).
- They are invasive and collect data about you (particularly Whatsapp). If you don't pay for the service, then _you_ are the product.
- Even though Signal is encrypted end-to-end, its servers are based in the US and are subject to the laws there. Also, their open-source server-side software appears to [not have been updated](https://github.com/signalapp/Signal-Server) for some time.
There are, of course, other factors on both sides that you may want to consider. It can be hard to move away from these services - after all, there's no point using a system that no-one else you need to talk to uses.
However, for some people, being able to avoid these issues can be important. One way to do so is to participate in a (preferably open-source) decentralised communication service in which the entire network is not owned by a single entity and where data collection is not the business model. This also helps prevent instability and downtime, since there is no single point of failure.
This is analogous to using services such as Mastodon and Pixelfed over Twitter and Instagram, respectively - the underlying software is open-source and anyone can host an "instance". In these cases, each instance can communicate with others using the [ActivityPub](https://en.wikipedia.org/wiki/ActivityPub) protocol. In this post I will talk about another protocol that offers decentralised and federated encrypted communication.
# The Matrix protocol
The [Matrix protocol](https://www.matrix.org) is one example of a standard for real-time decentralised communication. Since the standard is open, anyone can build server and client software that enables end-to-end encrypted communication between two or more people. Another example of a similar protocol is [XMPP](https://en.wikipedia.org/wiki/XMPP), which is also very popular and has been around (in its earlier forms) since 1999.
When using Matrix, you belong to a "homeserver". This is where your messages and some account details are stored. However, since Matrix is a _federated_ protocol, you can use your account to communicate with others on your homeserver as well as people from other homeservers that federate with yours.
The standard was introduced back in 2014, and by now there is an established ecosystem of software available for use. In fact, you can use [Element](https://element.io/get-started) on your device and get started by joining an existing homeserver right now.
Additionally, if you don't want the hassle of self-hosting yet another service, then [Element also provides plans](https://element.io/matrix-services) that allow you to run your own homeserver on managed hosting.
# Self-hosting a Matrix homeserver
If you want more control over your data, you may opt to self-host your own homeserver that implements the Matrix standard. Even if you self-host you can still take advantage of the protocol's federation features and communicate with people on other homeservers.
The resource requirement for Matrix servers is a bit on the heavier side (especially when compared to the lighter XMPP servers). However if you already run a small-ish VPS anyway (as I do for things like Nextcloud), and if you only expect one or two people to be enrolled directly on your homeserver, then you can certainly host Matrix on that same VPS without too much trouble. For reference, I have a single $10 server from [Linode](https://www.linode.com), which happily runs Matrix alongside a number of other services.
The [Synapse project](https://github.com/matrix-org/synapse) is probably one of the most robust and feature-complete homeserver implementations, and is the one I'll talk about in this post. They also offer an officially supported [Docker image](https://hub.docker.com/r/matrixdotorg/synapse), which is what I would recommend using to keep things in one place.
## Homeserver name
Firstly, I'd recommend setting up a domain (either an existing one or a new one) and then updating your DNS such that the relevant entry points to your server.
It is important to think about the domain name you choose for your homeserver, since this cannot be changed later. [Matrix recommends](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#choosing-your-server-name) using your root domain name itself rather than a subdomain for your homeserver name. However if you already host a website using your full domain name you will need some extra configuration to make it work properly. I personally don't, as I wanted an easier setup!
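For illustration, the DNS change is usually just a single record pointing your chosen homeserver name at your server (the IP below is a placeholder):
```
; hypothetical A record for the homeserver name
example.com.    300    IN    A    203.0.113.10
```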
## Exposing ports and preparing TLS certificates
In order to configure HTTPS, I'd recommend setting up an Nginx container or server as a reverse proxy and issuing certificates using Let's Encrypt. The Matrix protocol uses standard port 443 for communication with clients (e.g. from an app) - known as the "client port" - and port 8448 for communication with other homeservers (the "federation port").
You may wish to read some of the [official documentation](https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md) on setting up a reverse-proxy, but I'll run through roughly what I do below.
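For the certificates themselves, one option (an assumption on my part - any ACME client will do) is certbot in standalone mode, run while nothing else is listening on port 80:
```
certbot certonly --standalone -d example.com
# certificates are then typically written to /etc/letsencrypt/live/example.com/
```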
Depending on your Nginx setup, you may need a couple of `server` blocks similar to the following to configure your reverse proxy (assuming your homeserver name is "example.com"):
```
server {
listen 80;
listen [::]:80;
server_name example.com;
return 301 https://$host$request_uri;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
listen 8448 ssl;
listen [::]:8448 ssl;
server_name example.com;
ssl_certificate /path/to/fullchain.pem;
ssl_certificate_key /path/to/privkey.pem;
location / {
proxy_pass http://synapse:8008;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
client_max_body_size 50M;
}
}
```
If you run Nginx as a Docker container remember also to expose port 8448 alongside 443.
Synapse uses port 8008 for HTTP communication, to which we forward requests received on both the secure client and federation ports. In the example above, `synapse` is the name of the container that runs my homeserver, as we'll cover next. Again, depending on your setup, and whether you choose to use Docker, you may need to change this value so that your reverse proxy can route through to port 8008 on your homeserver.
## Generate a configuration file
The next step is to generate your homeserver config file. I recommend firstly creating a directory to hold your Synapse data (e.g. `mkdir synapse_data`). We'll mount this to `/data` on the target container so that the configuration file can be created there.
The configuration file can be generated using Docker:
```
# Generate homeserver.yaml into ./synapse_data (bind-mounted to /data):
docker run -it --rm \
-v $(pwd)/synapse_data:/data \
-e SYNAPSE_SERVER_NAME=example.com \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse:latest generate
```
Once this completes, your `synapse_data` directory should contain a `homeserver.yaml` file. Feel free to read through this and check out the [documentation](https://github.com/matrix-org/synapse) for ways in which it can be modified.
## Run the homeserver
Finally, we can now run the homeserver. Depending on your reverse proxy setup (and whether you are containerising anything else), you may need to configure your Docker networks, but generally you can just execute the following to get your homeserver running:
```
# Run Synapse in the background, reusing the generated config in ./synapse_data:
docker run -d \
-v $(pwd)/synapse_data:/data \
--name synapse \
--restart always \
matrixdotorg/synapse:latest
```
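If your reverse proxy is also a container, the two will likely need to share a Docker network so that Nginx can resolve the `synapse` hostname used in the proxy config above (a sketch; `nginx` is an assumed container name):
```
docker network create matrix
docker network connect matrix synapse
docker network connect matrix nginx
```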
If everything went well (and assuming your reverse proxy is also now up and running), you should be able to use your web browser to visit your Matrix domain (we used "example.com" above) and see a page that looks like this:
![Matrix homeserver confirmation page](/media/blog/matrix.png)
## Creating your user account
As long as your homeserver is configured to accept user registrations (via the `enable_registration` directive in `homeserver.yaml`), you should be able to [download a client](https://matrix.org/clients) (or use the [Element webapp](https://app.element.io)) and register your first user account.
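For reference, the directive is a single key in the generated config (value shown here for illustration):
```
# in synapse_data/homeserver.yaml
enable_registration: true
```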
Once logged in, you can join rooms, invite people, and begin communicating with others.
# Conclusion
This post aims to be a rough introduction to running your own Matrix homeserver. The Synapse software offers a variety of ways to tailor your instance, and so it is certainly worth becoming familiar with some of [the documentation](https://github.com/matrix-org/synapse) to ensure you have configured things the way you need.
If you want to get in touch then you can send me a message using Matrix (@wilw:matrix.wilw.dev) or on Mastodon ([@wilw@fosstodon.org](https://fosstodon.org/@wilw)).

View File

@ -0,0 +1,21 @@
---
date: "2021-03-23T19:45:00Z"
title: "The Great Alone by Kristin Hannah"
description: "My thoughts on the book 'The Great Alone' by Kristin Hannah."
tags: [100daystooffload, book]
slug: the-great-alone
---
[_The Great Alone_](https://www.goodreads.com/book/show/34912895-the-great-alone) by [Kristin Hannah](https://www.goodreads.com/author/show/54493.Kristin_Hannah) is a book set in the Alaskan wild. It tells the story of a young family that moves there in order to live off-the-grid after the father returns from being a prisoner of war in the Vietnam War.
![The Great Alone book cover](/media/blog/the_great_alone.jpg)
The book mostly focuses on the viewpoint of the daughter, Leni, who is thirteen years old when she moves with her mother and father. The story tells how Leni adapts and grows into her new Alaskan life over the years, whilst at the same time trying to navigate some of the perils at home in her family cabin. Leni and her family meet and grow close to different members of the local community, in which there are a variety of views regarding the types of people that should be allowed to come to Alaska.
The book certainly has its dark moments, and there is an ongoing sense of violence and intensity. At the same time, the author wonderfully describes the peacefulness of the environment, and the wildness of the Alaskan landscape, the wildlife, the weather, the sky, and the sea. It is clearly a place where humans and nature meet, and a place where - if people are to live off the land - they must learn and respect it and all it has to offer.
After all, in Alaska you can only ever make one mistake. The second one will kill you.
I loved the book and its intertwining themes of love, family drama (and more), forgiveness, wilderness, comradeship, and escapism. The author makes you feel frustrated with some of the decisions made by the characters in one moment, and the next you are cheering them on from behind the pages.
With everything that goes on in the story - the town and its community of interesting characters - it isn't always obvious where the title of the book comes from. However, as you progress further you realise that it's not just the landscape and geography that can evoke loneliness; the feeling can be more the result of the actions of others and having to keep secrets about what goes on behind closed doors.

View File

@ -0,0 +1,31 @@
---
date: "2021-03-27T16:31:00Z"
title: "PinePhone and PineTime"
description: "Why I pre-ordered the PinePhone and a bit of talk about the PineTime, an open-source and hackable smartwatch."
tags: [100daystooffload, technology, pinephone, life]
slug: pinephone-pinetime
---
## Pre-ordering the PinePhone Beta
Earlier this week I ordered a [PinePhone](https://www.pine64.org/pinephone), which recently became [available as a Beta Edition](https://pine64.com/product-category/smartphones).
I've been excitedly following the progress of the PinePhone for some time now. I've joined various Matrix rooms, subscribed to [blogs](https://linmob.net), and started listening to the [PineTalk podcast](https://www.pine64.org/pinetalk). The phone is a hackable device that runs plain old Linux - not an Android variant - and thus helps users escape from the grasp of the Google and Apple ecosystems.
Other similar devices exist - such as the [Librem 5 from Purism](https://puri.sm/products/librem-5) - however the unopinionated nature of the PinePhone, and its cost ($150 compared to the Librem's $800), make the Pine64 offering much more attractive to me.
I understand that the phone and software are still under very active development, and I fully expect that the phone is not yet ready to become a daily driver. However I am excited to try it out, support the project, and contribute where I can. The potential of this movement is huge.
## Some thoughts on PineTime
Whilst researching the PinePhone, I stumbled across the [PineTime smartwatch](https://www.pine64.org/pinetime). This is a wearable device also from Pine64, which aims to offer an open-source and hackable system in a similar vein to the PinePhone.
Pine64 offers the device for purchase but fully acknowledges that it is not yet ready for daily use, and encourages interested people to instead purchase the [Development Kit](https://pine64.com/product/pinetime-dev-kit) so that they can learn more or contribute to the project.
The device aims to offer health-tracking features (it includes a step counter and heart-rate sensor) and notifications, and so the intention is for it to offer a similar experience to other smartwatches - except with much more freedom.
The open and community-driven nature of the device could take it any number of ways.
> We envision the PineTime as a companion for not only your PinePhone but also for your favorite devices — any phone, tablet, or even PC - pine64.org/pinetime
This vision seems to embody the Pine64 philosophy that we see across all of their products. I'm not the right person to be able to contribute much to the project at its current stage (I don't have much experience with developing on embedded operating systems), but I look forward to seeing how it progresses and hopefully getting more involved slightly further down the line.

View File

@ -0,0 +1,69 @@
---
date: "2021-03-31T22:00:00Z"
title: "The simplicity and flexibility of HTTP for APIs"
description: "Understanding RESTful HTTP web APIs, and some frustrations with non-compliant services."
tags: [100daystooffload, technology, opinion]
slug: http-simplicity
---
# Simple and RESTful HTTP APIs
The [HTTP standard](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol) is an expressive system for network-based computer-computer interaction. It's a relatively old standard - it started life as HTTP/1.0 in 1996 and the HTTP/1.1 standard was formally specified in 1999. HTTP/2 (2015) introduced efficiencies around _how_ the data is transmitted between computers, and the still in-draft HTTP/3 builds further on these concepts.
I won't go into the nuts and bolts of it, but - essentially - for most applications and APIs, the developer-facing concepts haven't really changed since HTTP/1.1. By this version, we had all the useful methods required to build powerful and flexible APIs.
When writing a web service (e.g. a website or a web-based REST API), actions are based around _resources_. These are the "things" or "concepts" we are concerned with. For example, if one was to write a to-do list app, two of the concepts might be "to-do list" and "to-do list item". Generally, such an app might also maintain user accounts and so may have "user" and "session" resources, too, along with others if required.
In such a service, resources are usually indicated by a _path_. This is the bit that comes after the host name (e.g. `example.com`) in the URL, and resource names are usually written in the _plural_.
For example, in our to-do list example, a resource which indicates _all_ available lists might be given simply by `/lists`, and a specific list with ID `aaa-bbb-ccc` would be available at `/lists/aaa-bbb-ccc`.
This system allows the engineer to indicate _hierarchy_ or ownership in the data model. For example, to address all of the list items in a specific list one might use `/lists/aaa-bbb-ccc/items`. Then to access an item with ID `xxx-yyy-zzz` inside this list you'd use `/lists/aaa-bbb-ccc/items/xxx-yyy-zzz`. In many cases, for a simple web service of this type, this would be sufficient - it may not be appropriate to enable addressing a to-do list item directly without the context of its "parent" list.
Paths may sometimes include other metadata, such as the API version being called, but this can be simply included and described in the documentation.
In HTTP, _methods_ describe _what_ the API consumer wants to do with a resource. Some of the most widely used methods are `GET`, `POST`, `PUT`, `DELETE`, and `OPTIONS`. These methods are defined in the spec and some clients may handle requests differently based on the method being used. Unlike resources, you can't define your own methods to use. However, the flexibility provided as-is allows for most services to be built without requiring this level of customisation.
Some of these have pretty obvious semantic meaning. `POST` is typically used to create a new resource (e.g. "post a new to-do list item") and `PUT` is used to update an existing resource.
This means that, given the combination of our resource addressing and methods, we can express a powerful web service. Structuring your to-do list app using the system described here caters well to typical to-do list actions: creating lists and items (e.g. `POST /lists/aaa-bbb-ccc/items`), crossing-off items (probably `PUT /lists/aaa-bbb-ccc/items/xxx-yyy-zzz`), and retrieving, updating, and deleting things in a similar way using the appropriate methods.
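Putting the paths and methods together, a sketch of the resulting API surface might look like the following (the IDs and headers are illustrative):
```
GET    /lists                                  # retrieve all to-do lists
POST   /lists                                  # create a new list
GET    /lists/aaa-bbb-ccc/items                # retrieve the items in one list
PUT    /lists/aaa-bbb-ccc/items/xxx-yyy-zzz    # update an item (e.g. cross it off)
DELETE /lists/aaa-bbb-ccc/items/xxx-yyy-zzz    # delete an item

# Requests can carry headers such as:
#   Authorization: Bearer <token>
#   Accept: application/json
```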
HTTP request _headers_ can be used to provide authentication information, describe how the client wants information to be returned in the response, along with other ways to further annotate the request being made and to customise the expected response. Of course, the effectiveness of supplying these request headers depends on the server's own capability and configuration. However, the use of headers should certainly be considered by the engineer whilst planning and building out the service.
Using standards like these - resources, methods, and headers - in your APIs enables your users (_consumers_) to more easily learn and understand how to use your service. This saves them time, helps your service to grow, and means you'll spend less time dealing with support requests (unless your documentation is really good).
# Custom implementations
I think the system above is the ideal, most expressive, learnable, and _expected_ way of building web services.
However, HTTP is flexible, and your server-side code can - in theory - do whatever you want it to, no matter what the request path, method, and headers are. But I don't really understand why one would want to.
I recently [migrated my photos collection to pCloud](/blog/2021/02/24/google-photos-pcloud), and wanted to explore their API to see if I could also use the service for programmatically backing-up other things, too.
Unfortunately I am unable to actually use their API, since I use two-factor authentication on pCloud and the API doesn't seem to work if this extra layer of security is in-place. However, whilst researching I discovered that pCloud's API is an example of a service that seems to defy the standards one is usually familiar with.
For example, it appears that it's perfectly acceptable to use `POST https://api.pcloud.com/deletefile?fileid=myfile` to delete a file or `GET https://api.pcloud.com/renamefolder?path=oldfolder&topath=newfolder` to rename a folder.
There's nothing _technically_ wrong with this implementation, especially given the fact that I'm sure it works. It perhaps makes it easier to route requests through to the correct internal functions. However it just feels _inelegant_ to me, and it seems to focus more on what's easier for them rather than their users.
The [page that lists file operations](https://docs.pcloud.com/methods/file) could instead show a couple of simple example paths and then rely on request _methods_ and parameters to describe available options.
I don't mean to pick on pCloud - the service itself is great and I'm sure the API works nicely. I plan to continue using the service via its web UI and official clients. I only bring it up because it seems odd to re-invent the wheel.
I'm completely on-board with the notion of discouraging system and process monopoly, but I don't think this is the same thing. The web is formed from a set of open standards that anyone can comment on or help contribute to.
# "Good" implementation examples
The web is full of services that expose sensible and learnable APIs.
An example I always love is the Stripe API - arguably a much more complex service than pCloud. However its [simple "compliant" API](https://stripe.com/docs/api/charges) makes credit card payments - and loads more - very easy to integrate with.
The [Spotify web API](https://developer.spotify.com/documentation/web-api/reference) also looks useful, though I haven't used that before myself.
# Beyond REST
REST has been a cornerstone of the web over the past couple of decades, and I think there is still very much a space for it - both now and in the near future. Its flexibility has allowed it to remain useful across industries and settings - from small private IoT setups through to highly-secure enterprise-to-enterprise systems.
There are movements to begin using other technologies that may be better suited to the future of the web and communication - particularly as things continue to scale. Efforts such as [GraphQL](https://graphql.org), [Netflix's Falcor project](https://netflix.github.io/falcor), and even [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call) provide alternatives for when REST isn't the most appropriate solution.
However, if you're building a web API that you want other people to use, and which is well suited to REST, then I think it's always worth sticking to these HTTP standards as much as possible.

View File

@ -0,0 +1,80 @@
---
date: "2021-04-04T11:10:00Z"
title: "From Apple Mail to Spark to Thunderbird"
description: "Three months: three mail clients. Some thoughts."
tags: [100daystooffload, technology, opinion]
slug: applemail-spark-thunderbird
---
Like many people, I own and manage multiple email accounts - for example, some are for work, for home, or for specific projects. I used to rely solely on web-based email clients (such as Gmail's or Fastmail's web apps) for each of my accounts. However, the number of tabs I needed to keep open for all of this grew to the point where things became unmanageable - both in terms of needing to check multiple tabs several times per day, and also frustrations when the browser would restart or I'd lose my tab setup for some other reason.
I needed a proper client, and although I knew that web-based software like [Roundcube](https://roundcube.net) and [Rainloop](https://www.rainloop.net) existed - which I could self-host - they just never felt stable or feature-ful enough.
This post is a short round-up of three mail clients I've been trying over the past few months.
## Apple Mail
For several years I've been an Apple Mail user on both my Mac and iPhone - mainly because it was the default on the devices I have but also because it's generally quite smooth and works reliably.
It's relatively painless to get set up, and the accounts sync well across the mail, contacts, and calendar apps. For Gmail (and other larger providers) there is an authentication "wizard" to help get the accounts configured. Fastmail allows you to [install profiles](https://www.fastmail.help/hc/en-us/articles/1500000279941-Set-up-iOS-devices-iOS-12) that automatically configure everything for you.
However, over time I began to find the interface a bit unintuitive. On iOS the general "Accounts" setting - which was useful as a single source of truth - seemed to disappear, and for some mailboxes it wouldn't let me add alias send-from addresses. I'm sure there was a reason for this, but I sometimes find that the iOS settings UIs overcomplicate things in their efforts for simplicity.
Whilst the Mac (currently) still has a dedicated Accounts setting in System Preferences, it had other problems too. Several times a day I'd frustratingly have my workflow interrupted by warnings of network problems.
![Apple Mail 'accounts offline' warning](/media/blog/apple-mail-issue.png)
I still think Apple Mail is a pretty decent app, but I thought that there must be something else out there that would work better, and be less frustrating, for me.
## Spark Mail
Back in February some of my colleagues recommended the [Spark mail app](https://sparkmailapp.com) from [Readdle](https://readdle.com). I've used some of Readdle's other software in the past (see [this post](/blog/2021/03/08/getting-mail), for example), and generally find it quite useful and intuitive. Like Apple Mail, it's also available for Mac and iOS.
Spark is free to get started (and I imagine most individuals would fit into their [free plan](https://sparkmailapp.com/pricing) long-term too). One of the features I immediately liked was that all of your mail accounts are tied to a single account. That means that if you get a new computer or phone, you don't need to go through the tedious business of setting up all the mail accounts again - just log in with your main email and everything else gets pulled through.
Email management is easy, search is lightning-fast, and the settings are useful.
Spark also comes bundled with a calendar that syncs well and automatically with services like Google Calendar and Fastmail Calendar. Like Apple Mail, there are dedicated setup wizards for email and calendar with the larger providers, and an option for manual entry for others. The calendar's event creator is nice, and also allows you to automatically schedule a video meeting.
![Spark calendar video meeting picker](/media/blog/spark-calendar-meeting.png)
One drawback is that there doesn't seem to be any way to view or manage contacts, and neither does it seem to integrate with the system contacts. I imagine it works directly with the relevant provider's contacts service.
Another frustration I had was in managing shared calendars. I think I'm a bit of a calendar power-user, but I imagine this must affect other people too. If someone else - who also shares their calendar with you - creates an event and invites you to it, there does not seem to be any way to select your own entry in order to interact with it (e.g. to accept or decline the invitation).
In the event below, if my calendar was the "green" one, for example, there is no way for me to select that in order to accept or decline. Again, I may be missing something but I've been trying to find a way for a while now without needing to "hide" my colleagues' calendars first.
![Spark calendar event with multiple attendees](/media/blog/spark-calendar-selection.png)
Then there is security. Whilst I "trust" Readdle - in that I imagine they have decent security practices in place - we know that even the most secure companies can become compromised. The account-sync feature mentioned earlier is certainly useful; however, it must mean that Readdle stores the Gmail access keys or IMAP connection details on their own servers in a centralised location. Your email is the last thing you want to get compromised - since it likely controls a number of your other online accounts - and so this risk is a bit of a concern.
Readdle [claim that](https://sparkmailapp.com/blog/privacy-explained) everything is encrypted at various levels but it still feels a little risky to me. Having the sync and push notifications is useful, and so it's up to the individual to choose what works best for them.
## Thunderbird
The last client I want to mention in this post is [Mozilla's Thunderbird](https://www.thunderbird.net). This is a bit of a re-visit for me, since this is the client I used consistently during my University years.
In all honesty, the client doesn't seem to have changed a huge amount over the last decade - but then again, neither have the underlying email technologies themselves. It's an open-source client available on a number of operating systems - but not yet ([or ever?](https://support.mozilla.org/en-US/questions/990147)) for mobile.
Despite the slower development, I find Thunderbird to be a very powerful client. It has great support for email, calendar, and contacts straight out of the box. Things seem clearly organised, and account-management is super easy. There are no dedicated setups for Gmail, Outlook, etc., but it was able to automatically detect the relevant IMAP/SMTP servers for all of my accounts.
It's very unopinionated about ordering, views, threading, and much more - which allows you to set things up the way that works best for you. The interface doesn't try to be flashy or too clean and I find I am very productive when using it.
The calendar is easy to use and works with open standards like CalDAV.
It also has built-in support for chat through systems like IRC and XMPP (if you use these types of things), and there's a rich ecosystem of plugins to add extra functionality too. It's certainly the most flexible and powerful of the desktop mail apps I've used.
A few of its frustrations are around performance. When adding a new account, it automatically downloads all of the mail headers for that account and stores them locally in its databases. This enables searching and other local tasks, but the process causes the app to run slowly whilst in progress. If you change computer often, or have several machines to set up, this could be a pain.
When opening large mail folders containing perhaps several hundred thousand messages - for example, my combined "Archive" folder - things get _very_ slow, to the point of being unusable. However, I don't really ever need these views, so this isn't too much of a problem for me - but for some people it could be a blocker.
When compared to Apple Mail and Spark, the search function seems very slow. The results returned are quite accurate, though, and the fact that results are shown in their own "tab" means that your flow isn't interrupted elsewhere in the app. This is a nice feature.
Generally, I love Thunderbird. Mozilla is renowned for being privacy-centric and the fact that everything is stored locally gives me more confidence about its security. Of course, it has drawbacks which will put some people off, but it's good to be supporting open-source software where possible.
## Conclusion
As with web browsers, I think people should feel free to keep trying alternative mail clients as their needs and the software's features change. The clients mentioned above are just a small sample, and are focused largely on the Apple ecosystem, since those are the devices I happen to be using at the moment.
Some others I'd like to try are [Airmail](https://airmailapp.com) and [Polymail](https://polymail.io). However, it'd be great to get some feedback on what other people are using. If you have any suggestions then please get in touch using Matrix (@wilw:matrix.wilw.dev) or on Mastodon ([fosstodon.org/@wilw](https://fosstodon.org/@wilw)).

View File

@ -0,0 +1,33 @@
---
date: "2021-04-07T19:44:00Z"
title: "Is Facebook scraping the Fediverse?"
description: "Is Facebook using the Fediverse to suggest posts to users?"
tags: [100daystooffload, technology]
slug: is-facebook-scraping-fediverse
---
I don't use Facebook often. In fact, I only have an account currently because our company uses the "Login with Facebook" functionality in order to offer an additional single sign-on option for some customers.
I logged in today as we needed to update some of the app's configuration in the Facebook Developer portal, and I went via the Facebook homepage feed to get there. A couple of "Suggested for you" posts near the top of my feed were unusual and caught my eye.
![Facebook Suggested Post Tram picture](/media/blog/facebook_tram_1.png)
There wasn't just one. As I scrolled further, more and more showed up - all seemingly from the same user.
![Another Facebook Suggested Post Tram picture](/media/blog/facebook_tram_2.png)
![Yet another Facebook Suggested Post Tram picture](/media/blog/facebook_tram_3.png)
The page ("Nostalgia Vienna") doesn't seem to be selling anything in these posts, and I've never interacted with them before. I also don't have any content on Facebook and use browser plugins such as [Firefox Containers](https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers), [Privacy Badger](https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17), and others to try and prevent inadvertent data sharing with the social platform.
I know Facebook potentially has other ways of gathering user information, but I simply don't have a big interest in Viennese trams (or trams in general). I don't really know why it is so keen to show me a new picture of a tram every few posts down the home feed.
I then realised that I recently [posted a picture to Pixelfed](https://pixelfed.social/p/wilw/267598377078886400) of a tram that I took on a trip to Basel a few years back.
![A screenshot of my tram photo from Pixelfed](/media/blog/pixelfed_tram.png)
My [Pixelfed account](https://pixelfed.social/wilw) is not explicitly tied to my own name or identity, but my bio there does contain a link to [my website](https://wilw.dev).
Interestingly, the styles and images of the Viennese trams suggested by Facebook are not a million miles away from my own post of the Swiss tram. The link feels tenuous but I can't think of anything else that might cause Facebook's algorithm to so strongly suggest this type of content to me.
I just wonder whether there is some clever scraping going on behind the scenes to further bolster Facebook's knowledge of its users.

View File

@ -0,0 +1,37 @@
---
date: "2021-04-12T21:38:00Z"
title: "Six months of Invisalign"
description: "How my Invisalign treatment went: the process and my thoughts."
tags: [100daystooffload, life]
slug: invisalign
---
Back in November I started an [Invisalign](https://www.invisalign.com) course to help straighten my teeth. Invisalign works like traditional braces, but is instead formed from transparent teeth "trays" that others can only really notice up-close. Given my personal situation, this seemed like a better approach than the traditional metal braces.
![My Invisalign goodie bags](/media/blog/invisalign.png)
In all honesty, my teeth weren't that bad to begin with but - like many people - as I got older I was beginning to notice a little more "crowding" (where teeth bunch together and start to move out of place). Invisalign was something I had wanted to try for a while, and whilst the UK was in lockdown and I couldn't see anyone anyway, it felt like a good time to go ahead with it.
# The process
I had a couple of initial appointments with my dentist just to ensure I was dentally fit for orthodontic work and in order to take a scan of my teeth. The scan was cool - it showed exactly what my teeth looked like and the software then uses the result to design a series of aligners that would bring the teeth back into line. I also got access to a website on which I could see how my teeth would be moving over time.
After my scans, I went back to the dentist a couple of weeks later in order for some attachments to be added to my teeth and to collect my newly-manufactured aligners. In total I was given 22 sets of aligners, with the aim being to start with set number 1 and then proceed to the next one each week - every change in aligner gradually moving the teeth into line.
I was also given a scanbox, into which I could place my phone in order to submit photos of my teeth every week through an app to my dentist. This enabled him to track the progress each week and to ensure I moved onto the next aligner set at the right time.
For the next two to three months I wore my aligners for 22 hours each day. Every week I scanned my teeth and was instructed to move onto the next set of aligners in order to progress the treatment. In February I had to go back to the dentist for some additional filing between some of my teeth so they could move into position properly.
I then continued for another few months - until today. I completed my last set of aligners last week and had a check-up this afternoon to see how things went. I was pleased with the result, and we agreed that no more movement was needed. My dentist removed the attachments from my teeth and we ordered the retainers, which I will need to continue to wear full-time for a few months and then beyond that just at night - in order to ensure things stay in place.
# My thoughts
In general, the process was super easy. For the first few days of the treatment I didn't think I would be able to keep it up for six months - the aligners felt pretty uncomfortable and were a little painful for a couple of days every time I switched to a new set. The extra work needed when brushing my teeth, and having to remove the aligners between meals, also seemed inconvenient.
However, after a few weeks it all became second nature. It now feels weird when I don't have them in!
The treatment is also quite expensive. However, it is cheaper (I think?) than traditional braces, it's a shorter treatment period, and I preferred to have the almost-invisible aligners rather than metal braces in front of my teeth.
In addition, given that the sets of post-treatment retainers included in the treatment plan last for years, it feels like the treatment is a "one off" (🤞) - as long as I keep wearing the retainers properly then the teeth should now stay in place.
All in all, it was (and is still) a good experience and I am glad to have done it.

View File

@ -0,0 +1,77 @@
---
date: "2021-04-17T14:00:00Z"
title: "Reporting business accounts using Ledger"
description: "Why I switched from Xero to Ledger, and how I could still report business accounts."
tags: [100daystooffload, finance, technology, ledger]
slug: business-accounts-ledger
---
As is the case with many countries, all businesses in the UK must report the state of their financial accounts to the relevant inland revenue service at their year-end (in the UK, this is [HMRC](https://www.gov.uk/government/organisations/hm-revenue-customs)).
This is also the case if you are a freelancer or sole trader (or if you've made other untaxed income - e.g. from investments). In these cases, this is called your [Self Assessment](https://www.gov.uk/self-assessment-tax-returns). Self Assessments are pretty straightforward, and can usually be completed online by the individual themselves - as long as they have kept good accounts and know their numbers.
However, the required year-end _business_ accounts are different: they are more complex, since they must account for the many different operating models and varieties of business type. There are also various rules for businesses of different sizes, and if you don't know what you're doing you may end up paying too much or too little tax.
As such, it is generally advisable to appoint an accountant to help you at your year-end (even if you're making a loss!). It gives you peace of mind and also saves you time.
My business year-end passed recently. Historically I've used [Xero](https://xero.com) to track business finances - since it is a good one-stop shop for issuing invoices, getting paid, tracking bills and finances, and automatically reconciling against your bank. It's a great tool for small businesses as it helps you make sure everything is correctly accounted for, and it allows your accountant to easily get the information they need in order to make their reports for your business to HMRC.
However, it is a paid-for service, and if you've paused trading at least temporarily - like me - or if you're going through a financial dry patch, it feels a waste to pay for something that you're not using.
About a year ago I got quite heavily into [plain-text accounting](/notes/plain-text-accounting) - it feels logical and keeps you in control. I was already using it for some of my personal finances, and so I thought I'd switch my business bookkeeping over to the [Ledger](https://www.ledger-cli.org) approach too.
I exported my Xero accounts into a new Ledger file and paused my Xero subscription. Every month I would run through my bank statement, invoices, and bills, update the ledger, and reconcile against the business bank account. As such, when it came round to year-end, I had a full set of books for the relevant tax period.
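For the unfamiliar, the books themselves are just a plain-text journal of dated transactions. Below is a minimal sketch of what a couple of entries might look like - the payees, dates, and amounts are hypothetical, but the account names mirror those in the reports further down:
```
; Hypothetical entries - Ledger infers the elided amount on the
; final posting of each transaction automatically.
2020/03/02 Acme Ltd - invoice 0042 paid
    Assets:Bank 1                 £1200.00
    Income:Sales:Product

2020/03/10 Monthly server bill
    Expenses:Hosting                £25.00
    Assets:Bank 1
```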
This is where I worried a little. The lady who normally files my accounts has access to my Xero and can run everything from there (many small-business accountants in the UK recommend - and sometimes only work with - Xero). I didn't want to have to find and begin working with a new accountant, and so I looked to see whether I could get Ledger to output balance sheets and P&Ls in a similar way to Xero.
The Ledger tool offers a number of reporting mechanisms. The most useful are perhaps the `balance` and `register` commands, which respectively show the balance across your accounts and a transaction log.
After running a few of these simple Ledger commands, I had the files I needed: a balance sheet (covering all accounts), a profit & loss account (essentially a balance sheet covering income and expense accounts), and a transaction register. Examples describing how I generated these are shown below (in this case assuming a year-end of 31st December).
**Balance sheet:** To generate the balance sheet I used `ledger balance -b 2020/01/01 -e 2021/01/01`, which outputs something along the lines of:
```
           £-XXXX.XX  Assets
            £XXXX.XX    Bank 1
           £-XXXX.XX    Bank 2
            £XXXX.XX  Equity:Shareholder:Dividends
            £XXXX.XX  Expenses
             £XXX.XX    Advertising
              £XX.XX    Compliance
              £XX.XX    Domains
             £XXX.XX    Hosting
             £XXX.XX    Services
             £XXX.XX      Accounting
               £X.XX      Banking
              £XX.XX      Legal
             £XXX.XX      Software
            £XXXX.XX    Tax:Corporation
            £-XXX.XX  Income:Sales:Product
--------------------
                   0
```
**Profit & loss account:** The rough "P&L" was generated with `ledger balance -b 2020/01/01 -e 2021/01/01 income expenses`:
```
            £XXXX.XX  Expenses
             £XXX.XX    Advertising
              £XX.XX    Compliance
              £XX.XX    Domains
             £XXX.XX    Hosting
             £XXX.XX    Services
             £XXX.XX      Accounting
               £X.XX      Banking
              £XX.XX      Legal
             £XXX.XX      Software
            £XXXX.XX    Tax:Corporation
            £-XXX.XX  Income:Sales:Product
--------------------
            £XXXX.XX
```
(The final line indicates the overall balance between income and expenses - i.e. the profit or loss.)
**Transaction log:** The register was generated using `ledger register -b 2020/01/01 -e 2021/01/01`. I won't include a sample below, as a transaction log is mostly self-explanatory. I also generated it in CSV format in case this made things easier for the accountant: `ledger csv -b 2020/01/01 -e 2021/01/01`.
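In practice, producing everything took just a handful of commands redirected to files - something along these lines (assuming, hypothetically, that the journal lives in `business.ledger`, passed via the `-f` flag):
```
# Year-end reports for the accountant (2020 tax year)
ledger -f business.ledger balance  -b 2020/01/01 -e 2021/01/01                 > balance-sheet.txt
ledger -f business.ledger balance  -b 2020/01/01 -e 2021/01/01 income expenses > pnl.txt
ledger -f business.ledger register -b 2020/01/01 -e 2021/01/01                 > register.txt
ledger -f business.ledger csv      -b 2020/01/01 -e 2021/01/01                 > register.csv
```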
I sent these files to the accountant, who was then able to submit the company accounts without needing Xero. This was a great experience, and it has given me confidence in the end-to-end functionality of Ledger (and other similar command-line accounting tools). Writing and keeping books using plain-text files is quicker than Xero (which can be quite clunky), and now I know I can easily get the information out at the other end too. And it's free!


@@ -0,0 +1,19 @@
---
date: "2021-04-18T12:09:00Z"
title: "The Giver of Stars by Jojo Moyes"
description: "Some thoughts on the book 'The Giver of Stars' by Jojo Moyes."
tags: [100daystooffload, book]
slug: giver-of-stars
---
[The Giver of Stars](https://www.goodreads.com/book/show/43925876-the-giver-of-stars) by [Jojo Moyes](https://www.goodreads.com/author/show/281810.Jojo_Moyes) tells the story of a young English woman - Alice - who marries an American man and moves to a small town in Kentucky in the late 1930s.
![The Giver of Stars book cover](/media/blog/giver_of_stars.jpg)
Not long after arriving in Kentucky, Alice realises she may have made a mistake when it comes to her new husband. However, the real story centres on a job Alice takes with the local library.
The library begins offering a new service, in which the (female) librarians travel around the local area (often hard to traverse due to the mountainous terrain) on horseback to deliver books to those unable to get to town or who wouldn't usually engage with the library. The concept is based on a real project - the [Pack Horse Library Project](https://en.wikipedia.org/wiki/Pack_Horse_Library_Project) - and Alice and the other women encounter many different personalities on their rounds.
The story touches on racism, sexism, misogyny, domestic abuse, murder, and much more, and the librarians face a number of hugely difficult situations both at work and at home.
The story was fantastic and engaging. I enjoyed the scene-setting, and could easily picture the local town and the surrounding countryside. You feel an undeniable sense of unfairness as the story progresses - rich white men nearly always get their own way - but the bond that builds between the characters, and their shared experiences, show that this can be overcome.


@@ -0,0 +1,23 @@
---
date: "2021-04-25T16:41:00Z"
title: "Steve Jobs by Walter Isaacson"
description: "Some thoughts on the Steve Jobs biography by Walter Isaacson."
tags: [100daystooffload, book]
slug: steve-jobs
---
I was recently asked whether Steve Jobs was someone that inspired me. It's a difficult question, I find; he's definitely an inspiring person in terms of his work ethic, the products he envisages, and his way of understanding the needs of the target customer better than they know them themselves.
As a person, however, I find his personality and the way he treats others less inspiring. I try to be empathetic to others and take into account the emotional and psychological position of someone else when interacting with them. In a professional workplace this (hopefully) contributes towards creating a space that enables people to grow and develop whilst also emboldening colleagues to put forward their own thoughts and opinions in a more risk-free environment.
Jobs, on the other hand, has his own vision and - although his visions, if executed, are bound to be successful - you need to be on _his_ train in order to succeed in working with him.
![Steve Jobs biography book cover](/media/blog/steve_jobs.jpg)
The reason my colleague asked me this question was because I was reading the [Steve Jobs biography](https://www.goodreads.com/book/show/11084145-steve-jobs) by [Walter Isaacson](https://www.goodreads.com/author/show/7111.Walter_Isaacson) at the time. The biography's subject is not a hero of mine in any way, but he is indisputably a legend in the consumer technology space, and so his story deserves to be known (whatever your particular stance).
Although I knew the rough story of his life - his co-founding of Apple with Steve Wozniak, his time and successes at Pixar, his founding of NeXT and his subsequent return to Apple, and his eventual battle with cancer - understanding how individual products came to be imagined and created was fascinating.
His relationships with others - friends, colleagues, competitors, and romances - undoubtedly helped shape his life and his successes. His obsessions over food and art (including the appearance of products, both outside and within), and his focus on work right to the end, were certainly areas I did not know about, but it's clear that these all contributed towards what he managed to achieve.
I know that a lot of people don't like Jobs, or don't agree with the type of closed, end-to-end technology he pioneered and obsessed over (myself included), but his achievements - even by the age of 30 - and his focus on the end goal should be an inspiration to all technologists.


@@ -0,0 +1,11 @@
---
date: "2021-04-26T10:48:00Z"
title: "My appearance in the Wales \"35 Under 35\""
description: "Back in December I was lucky enough to be included in the Wales '35 Under 35'."
tags: [100daystooffload, life]
slug: 35-under-35
---
This is a bit of a vanity post, but back in December I was lucky enough to be included in the 2020 [WalesOnline "35 Under 35"](https://www.walesonline.co.uk/news/wales-news/walesonline-35-under-35-top-19351410).
This list aims to present the "best young businessmen in Wales" for the year. It was definitely an honour to be included, and it's great to see the efforts of the whole team at [Simply Do](https://www.simplydo.co.uk) reflected in it. We're still only at the beginning of our journey, so we have an exciting few years ahead!


@@ -0,0 +1,55 @@
---
date: "2021-04-27T21:00:00Z"
title: "Starting out with the Pinephone"
description: "My initial plans with the Pinephone."
tags: [100daystooffload, technology, pinephone]
slug: pinephone
---
As you may know, I [recently purchased the beta edition of the Pinephone](/blog/2021/03/27/pinephone-pinetime). It arrived last week in the _Pinephone Beta Edition_ box shown below.
![Pinephone beta box](/media/blog/pinephone.jpg)
As mentioned in my previous post on the subject, I bought the phone for purely experimental purposes, to get involved in the community, and to be a part of the freedom and Linux-on-phone movement.
I fully understand that the device is not yet really considered ready for everyday, reliable production use (especially when compared to my current iPhone 11 Pro Max). However, [the Pinephone](https://wiki.pine64.org/index.php/PinePhone) is less than 20% of the price of my iPhone, and comes with the freedom to do so much more - without the restrictions of Apple's "walled garden".
I am very excited to see what _can_ be done with it. At the end of the day, it's just an ARM-based _computer_ with support for running mainline Linux, plus the added benefit of cellular capabilities for making phone calls and handling data connections.
It's also super easy to try out different [operating systems](https://wiki.pine64.org/wiki/PinePhone_Software_Releases) by simply `dd`-ing an image to an SD card - much easier than the tedious root-recovery-flash song and dance often required in the Android ecosystem.
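Writing an image to an SD card from another Linux machine looks something like the sketch below - the image filename and device node are placeholders, so check yours with `lsblk` before writing:
```
# Decompress an OS image and write it straight to the SD card.
# pinephone-image.img.xz and /dev/sdX are placeholders - verify first!
xzcat pinephone-image.img.xz | sudo dd of=/dev/sdX bs=1M status=progress
sync   # flush all writes before removing the card
```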
# The next few weeks
Anyway, I'm going a little off-topic. My initial plan isn't to try out new operating systems just yet (although I am excited to). Instead, I'd like to spend the first few weeks tinkering with the out-of-the-box (beta) system and seeing how well it _does_ handle my day-to-day tasks as-is (i.e. without needing to change SD cards).
The beta edition comes pre-installed with [Manjaro Linux](https://manjaro.org) on the eMMC along with the [KDE Plasma Mobile](https://www.plasma-mobile.org) desktop environment, so this is what I'll stick with for now. Upon initial boot-up I can already see that it comes pre-installed with some useful packages (e.g. Telegram messenger and the [Megapixels camera application](https://git.sr.ht/~martijnbraam/megapixels)).
Below is a list of day-to-day tasks I can do on my current phone, and which I will try to accomplish using the device over the next few weeks.
- Basic calls and texts.
- 4G cellular data connectivity.
- WiFi connectivity.
- Bluetooth connectivity (including headphones).
- Photo- and video-taking using both front- and rear-facing cameras.
- Web browsing.
- Podcast subscribing, listing, and listening.
- Audiobook downloading and listening.
- Music-playing (preferably through Spotify).
- Mastodon (tooting and reading my timelines).
- Twitter.
- RSS (viewing my feeds from my FreshRSS server).
- Email reading and sending.
- Telegram messaging.
- Password-management.
I've purposefully kept a couple of things off this list - including WhatsApp, my bank's app, and some enterprise apps I use for work - since these systems are proprietary in nature, and so it would not be fair to expect them of the phone. One could argue that this impacts its viability as a daily driver, however that is not my current goal. Presently I am just looking to see how well some basic tasks can be accomplished before trying to take it further for full daily use.
I also want to document the journey for myself and others wanting to get involved in this project.
Projects like [Anbox](https://linmob.net/2020/08/15/anbox-on-the-pinephone.html) look like potential routes for getting additional things working in a pinch. However, I'll save that for another time.
# Next
Check back in a few weeks to see how I get on. If you have any advice for starting out in this way then please let me know!
After this initial period I will look to try out other shells and underlying systems. The [ARM Arch with Phosh project](https://github.com/dreemurrs-embedded/Pine64-Arch) looks like a good start point for when I come to this.


@@ -0,0 +1,25 @@
---
date: "2021-05-04T18:44:00Z"
title: "Go Time"
description: "Some thoughts on the Go Time podcast."
tags: [100daystooffload, technology, podcast]
slug: gotime
---
I listen to a number of podcasts each week. One of these is [Go Time](https://changelog.com/gotime).
![Go Time logo](/media/blog/go-time.png)
The Go Time podcast releases episodes every Thursday. Its format mostly comprises panel discussions and interviews with founders and specialists from the community about the [Go programming language](https://golang.org). Episodes are usually between 60 and 90 minutes long.
I don't program in Go much myself these days, though I do have one or two older [projects](/projects) written in the language. However, I find that the content is often broadly relevant even for non-full-time gophers like myself.
The episodes include discussions around a diverse variety of topics - such as testing, networking, web apps, tooling, startups, programming principles, and much more. Many of these concepts are interesting to gophers and non-gophers alike, as they touch on the broader problems as well as discussing how Go can specifically be used to solve them.
Recently I have started using the [Rust language](https://www.rust-lang.org) more and more, particularly on [this side project](https://git.wilw.dev/wilw/capsule-town), which I have used as a mechanism for learning the ins and outs. Although the two languages (Go and Rust) are by no means the same, they do share a number of similar attributes, and I have found that the Go Time podcast often touches on topics relevant to both.
Episodes also feature interesting guests from a variety of backgrounds - from specialists in the community through to startup founders. Hearing their stories is always great. Additionally, the show hosts are engaging and add light-heartedness to what can be deep technical conversations.
If you're a programmer - even if you're not a gopher yourself - I recommend checking out a few of the episodes to see if you agree.
It should come up in your podcast app if you search for "Go Time". I use [Overcast](https://overcast.fm) on iOS, and if you do too, you can subscribe via [this link](https://overcast.fm/itunes1120964487/go-time).


@@ -0,0 +1,20 @@
---
date: "2021-05-05T19:50:00Z"
title: "Data Sovereignty"
description: "Brief thoughts on the meanings of 'data sovereignty'."
tags: [100daystooffload, technology]
---
The term 'data sovereignty' is something we hear much more about these days, and increasingly I've heard it being used in different contexts.
We've seen it more in the world of enterprise SaaS, particularly among UK-based public-sector organisations amid post-Brexit data-flow policies. More and more organisations are getting stricter about the geographic location of their users' data. Whereas before most organisations were happy as long as the data was stored somewhere within the EU, many now require it to be stored onshore within the UK.
They call this _data sovereignty_. At our company we're lucky to be agile enough to adapt our service offering to enable UK-only data processing and storage; however, I can imagine many larger organisations experiencing more inertia. Interestingly, finding a UK-only mail provider isn't as easy as it sounds - most such services offer "EU" or "US" servers, but stop there (there's a potential SaaS offering in that gap: a UK-based mail provider).
The other place I've been hearing the term is in the indie-tech and self-hosted community. In this case the concept relates more to data _ownership_: the individual maintains control over their own data - where it is stored and how it is processed - and often goes as far as keeping that data at home (for example, in self-hosted setups on home servers).
I'm definitely in this camp too; whilst I don't keep stuff stored at home, I do keep my own data - when possible - on private servers in a secure datacentre, or on services I trust. Things just feel more under control with this approach.
Without data sovereignty, people are at risk of losing data they think they "own". For example, someone [recently lost access to their iCloud data](https://dcurt.is/apple-card-can-disable-your-icloud-account) because of issues with an unrelated service.
There's not much more to this post. I just think it's interesting that we're hearing more and more of the same phrase being used in different contexts by different groups of people and organisations.


@@ -0,0 +1,65 @@
---
date: "2021-05-09T21:31:00Z"
title: "Self-hosted notes and to-do lists"
description: "How I keep notes and to-do lists."
tags: [100daystooffload, technology, selfhost]
slug: notes-todos
---
In this post I will talk a little about how I handle my digital notes and to-do lists. In the spirit of my last post on [data sovereignty](/blog/2021/05/05/data-sovereignty), the focus will be on self-hosted approaches.
## To-do list management
It feels odd that the first task many new technical frameworks guide users through, by way of a tutorial, is building a simple to-do list; yet finding great, production-ready examples of such software can be challenging.
It's a pretty personal space. Although there are awesome time management processes out there (such as the [pomodoro technique](https://en.wikipedia.org/wiki/Pomodoro_Technique)), at the end of the day everyone is unique and what works for one person doesn't necessarily work for others.
A few years ago I got quite heavily into [Todoist](https://todoist.com). It's a very feature-rich platform with great apps across web, desktop, and mobile. It supports tagging, projects, deadlines, sub-tasks, and much more.
However, it's almost _too_ feature-rich, and I find this can distract from the intended simplicity of to-do lists. Whilst it's important to set up a process that allows you to work effectively, spending too long configuring and reconfiguring things is counter-productive.
Using such a service also means that your data is held elsewhere, out of your control. A better solution might be one that you can keep local and sync, or self-host.
There are [a few examples of open-source to-do list alternatives](https://github.com/awesome-selfhosted/awesome-selfhosted#task-managementto-do-lists) that you can self-host. The one I use is [Taskwarrior](https://taskwarrior.org).
> Taskwarrior is Free and Open Source Software that manages your TODO list from the command line. It is flexible, fast, and unobtrusive. It does its job then gets out of your way. - _taskwarrior.org_
I like Taskwarrior for many reasons. But mainly it's the speed and clarity of use - it really does just "get out of your way". Tags and projects are created automatically for you as you go, and querying feels as fast and sensible as [Ledger](https://www.ledger-cli.org) is for accounts.
I have my terminal open all the time anyway, and so at any moment I can quickly view my current list (by running `task`) and see my in-play tasks listed right at the top.
I can also query by tag (`task ls +tagname`), or to-dos for a specific project (`task ls project:projectname`).
Adding to-dos is just as easy, and arguably quicker than in commercial offerings like Todoist. For example, if I wanted to add a new task to buy a gift for my friend (and tag it as "life"), I could just run `task add +life Buy gift for Sam` and then forget about the task for now. I can then check my "life" to-dos (`task ls +life`) when I'm out of work and have time to actually complete them.
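To illustrate the flow, here's a rough session - the task ID and output below are approximate, as Taskwarrior's exact formatting varies by version and configuration:
```
# (Illustrative session - IDs and column layout are approximate.)
$ task add +life Buy gift for Sam
Created task 12.

$ task ls +life
ID A Description
12   Buy gift for Sam

$ task 12 done
Completed 1 task.
```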
I'm on a Mac, and so I just used `brew install task` to install it. There is likely a [package for your own distribution](https://taskwarrior.org/download) too.
In terms of self-hosting for multi-device setups, there is the [Taskserver](https://github.com/GothenburgBitFactory/taskserver) project from the same developers, and this is the recommended approach. I only use Taskwarrior on one device, however, and so I back up my tasks by simply syncing them to my Nextcloud. To do so, I just edited the relevant line in `~/.taskrc`:
```
...
data.location=~/Nextcloud/tasks
...
```
There is much more you can do with Taskwarrior, should you wish (including things like theming and task prioritising). I can certainly recommend taking a look through [the documentation](https://taskwarrior.org/docs) for more information.
## Notes (and notebooks)
Sometimes you just can't beat an old fashioned pen-and-paper notebook. The process of physically writing things down definitely seems to have a psychological effect on my ability to remember things. However, this approach isn't really compatible with my other information-oriented habits (particularly backup paranoia and [minimalism](/blog/2021/03/08/getting-mail)).
The same concepts around organising notes into notebooks, and keeping things logically organised, can still be applied to digital note-taking too.
A number of free and commercial offerings exist. [Simplenote](https://simplenote.com) is great (though perhaps a little _too_ simple). For Apple users, [Bear](https://bear.app) is also good, but potentially locks you into the Apple ecosystem.
For some time I've used [Obsidian](https://obsidian.md). I like Obsidian as it just uses your filesystem as a way of organising notes (directories are "notebooks" and each note is a simple markdown file). This approach also makes syncing over Nextcloud super easy (just set your Obsidian vault to a directory in your local Nextcloud sync folder, and away you go). There is also a mobile app that's currently in closed beta.
Recently I've been trying to get more into [Joplin](https://joplinapp.org). I like this software because it is open-source, has a terminal interface as well as GUI ones, and has mobile apps available for note-taking on-the-go.
Joplin also has [native sync-ability with Nextcloud](https://joplinapp.org/#nextcloud-synchronisation), which is useful for backup and cross-device access. I find searching quick and intuitive, and the note editor uses Vim (by default, at least), which is great for easy editing.
All in all, I still teeter on the edge between Obsidian and Joplin - both are great options, and both are worth exploring for your own use.
## Open to ideas
I'm definitely open to other ideas for both note-taking and to-do list management. If you have any good examples of software to help with either of these then please get in touch!


@@ -0,0 +1,64 @@
---
date: "2021-05-12T19:20:00Z"
title: "Running"
description: "Workouts, adopting a dog, and getting back into running."
tags: [100daystooffload, life]
---
## The effects of working from home
The UK went into its first proper COVID-induced lockdown around March of last year. At that point, our company locked its office doors and we all began working from home. About 14 months later we're all still working remotely, and will continue to do so for the foreseeable future.
Before we closed the office, I used to walk across my city - Cardiff - to get to work. It's about a 3km walk, which would take me about 30 minutes each way. I enjoyed it - I could stop for coffee on the way through, and the distance meant I could take different routes on different days if I wanted a change of scene.
![Walking through Cardiff](/media/blog/running1.jpg)
Now, and since last March, my daily commute simply involves walking down the stairs to the corner of my living room that is my home "office". Whilst this is definitely convenient (and I would certainly prefer it to being back in an office full-time), it has its downsides.
For the first few weeks, I just felt _lazy_. I was working hard (we all were, and were performing great as a remote team), but my body almost craved that morning walk. The walk was time that enabled my mind to sort itself out ready for the day of work, meetings, decisions, and everything else.
Without that walk time I felt my starts to the working day were slower, and I was more easily distracted in the mornings. To alleviate this a little, I began walking around a nearby park each evening after work - this definitely helped me wind down, and the effects lasted into the following day.
## 🏋️‍♂️ Workouts
Around the same time I began working from home full-time, my brother told me about an app - [Fitbod](https://fitbod.me) - that aims to be like a mini personal trainer. It's not the only app of its type around, but it caught me at the right time.
I thought that having an additional exercise goal each day - as well as the evening walk - would help in making me feel more invigorated. I began using it in the afternoons after I had finished my main work for the day (before my walk).
Daily workouts, just simple ones at home following the app's instructions, definitely had a positive effect on my mental wellbeing - it felt almost like personal meditation time for me.
It wasn't long before I switched the routine to morning workouts (before work or after my first meetings of the day). This definitely helped my work too. I've been doing the same thing ever since (I think I've only missed 10 or so days of workouts in total for the whole of the last year).
## 🐶 Adopting a dog
In early December we adopted a dog, and this flipped things on their head a bit. Suddenly I had someone else to be responsible for and to think of - often before myself. I'll write more about my dog in a later post, but for now, back to exercise.
Since getting the dog, I no longer had time for nice leisurely walks after work or workouts in the morning. I now had a new member of the family who needed to be walked once or twice a day, and entertained during the times at home.
People who tell you that you do more exercise when you have a dog are lying. When "walking" him, my time is mostly spent standing around whilst he runs and plays with his friends in the park. It's the only way he can get real exercise - walking a dog with as much energy as mine on a lead along my usual route just does not give him the exercise he needs.
I wanted to find a way to maintain my level of exercise whilst also giving me the time to go to the parks for 1-2 hours each day to allow my dog to run around properly off the lead. (Note: we live in a city, and it's not very convenient to have to drive out to countryside trails every day).
## 🏃‍♂️ (Re-)starting to run
Some of the people I met in my local dog-walking friend group are quite heavily into running. I used to love running in my mid-20s, and would jog 15km or so three times a week. However, I was put off after being told by some people that it can cause long-lasting damage to knees and other joints, and so I stopped for several years.
Coincidentally, I had recently been researching the long-term effects of running, and the results are mixed; some studies back up what I had heard from others (about joint issues), but many talk about the benefits of building leg muscle and how this might even protect the joints. It also turns out that running with proper form and good equipment (i.e. trainers) makes a big positive difference.
I thought that running could be a good replacement for my walk and some of my workout time - it burns calories, helps maintain fitness, and has many positive psychological effects too - especially if I could do it a few times a week.
The dog-walking friends mentioned a shop nearby that could run some gait analysis and suggest the most appropriate running trainers for me. I booked an appointment, did the analysis, ordered the trainers, and within a week I had collected them and brought them home.
## The first few weeks
I'm now about three weeks back into running, so I thought I'd report on how it's going.
I thought I'd be much more of a mess than I actually am. I'm by no means quick (I do about 5:30 per km on a good day), but I'm getting faster and definitely feel fitter. There's certainly some muscle memory there after all these years.
I run an average of three times per week, covering about 6km each time. I run first thing in the morning, before doing my workout and starting work. This then gives me the time I need after work to give the dog a chance to run around.
On the days I don't run in the morning, I instead go for a 30 minute walk with the dog.
The routine is good (I ❤️ routine), I get the same amount of exercise as before (if not more), and my dog gets more running time too. It means I need to get up earlier in the morning (more about that in a future post), but I actually quite enjoy that.
The main thing is that I no longer feel the _laziness_ I felt before. I start work with a good hour's worth of solid exercise done every day, a nice cup of coffee, and much more focus.
