IFCOMP2014: Arqon: A Criminal's Journey review

Arqon: A Criminal's Journey by H. J. Hoke placed 39th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts collects mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

You play the part of Arqon - a criminal basically forced to work as an assassin for the bureau of magic. It took me a while to get used to which commands worked - probably my lack of familiarity with parser games. I encountered a few bugs:

In the room before meeting the hermit, the description said I could go up, east and west, but EXITS said I could only go west and up.

After killing the Hobgoblin and meeting the hermit:
*** Run-time problem P47 (at paragraph 663 in the source text): Phrase applied to an incompatible kind of value

I almost couldn't make the game work after that. #sadface And just as I was starting to have fun...

Ok, I managed to get past that bug and a misspelling of hermit as "hermet". Then the game was over. And just as I was starting to have fun.

A few things: a criminal recruited as an assassin ... yeah maybe. Your inventory being left intact after being imprisoned... nope. The mayor hanging out a few levels deeper than the dungeon... nope. If the placement issues were fixed then this could be enjoyable enough to play in a larger version.

The writing could do with some proofreading and edits. I didn't mind the combat, but I skimmed over the combat text since it didn't seem to matter.

The verdict: skip this game unless there are major updates.


IFCOMP2014: The Black Lily Review

The Black Lily by Hannes Schüller placed 17th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts collects mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

At first I thought this was some story about a man reaching a point in his life where he was deciding to leave aside the passions of the flesh and settle for more stable relationships. This was some bookish withdrawn guy who lived a quiet but comfortable enough life. Then I met Lily. What the heck! So I go back and open up the safe. Oh. I'm a murderer. Heck. So I play again. I'm also maybe a woman. Well that shows how much I'm projecting into the story.

Yeah, my suspicions were raised by the title: The Black Lily being similar to the Black Dahlia (a murder victim). As I played I saw the recurring Black Lily motif as some kind of marker - a symbol that linked the encounters into something to be left behind. But then it became a great plot device to represent the urges of the player character without giving too much away.

At first I wondered why the shower scene was needed. Why not just go straight to the albums... but in the end it made sense. This was the time to discover the identity of the character and I missed it.

The commands were easy enough and the world richly described. I didn't get all the endings or much of a score. But Yep. I liked it. Play time is under an hour and you might want to play a few times.


Display text at end of passage macro for Twine

I was helping Harry Giles out with some modifications to his IFCOMP2014 entry Raik. It became useful to set text early in the passage that would be shown only at the end of the passage.

An easy way to achieve this in Twine is with custom macros. They are <<atend>> ... <<atendd>>.


<<atend>>This text appears at the end of the current passage.<<atendd>>And this text appears in the usual place.

How do you use this in your own projects? Add the following line to your StoryIncludes:


Why might you want this macro? Here are some examples. You might be inside an <<if>> macro from which you want to set a choice to appear at the end of the passage. Or you might be using the excellent ReplaceMacros from Glorious Trainwrecks and want text that always appears at the end of the passage no matter what the state of the rest of the passage is.
Please share / comment / like - your actions guide me on what to write about.


IFCOMP14: The Entropy Cage Post-Mortem

The Entropy Cage (ifdb) was my entry to the 20th Interactive Fiction Competition (ifcomp2014). This post mortem is also posted in the Interactive Fiction Forums.

My thanks to all the players and reviewers. I’m overwhelmed; 14th was much higher than I expected. The reviewers (even the ones who disliked TEC) all gave me valuable feedback that helped hone my craft.

Here are some random thoughts:

The feeling of the game was meant to capture how an out-of-their-depth computer tech feels when there’s a major system meltdown. The boss is on their back, being both helpful and accusatory (push-pull), while the tech bashes the keyboard hoping something will work. That feeling of awkwardness doesn’t make for a particularly fun experience, so I shortened the game during testing to lessen the discomfort. The mechanic left no room for overtly showing progress or mastery, so it wasn’t a good mechanic to base a game around - and/or my execution was lacking.

I didn’t mention why the PC was on suspension. A few reviewers worked out that it was because the PC is actually out of their depth/incompetent. As the “first cyber-psychiatrist” there can’t have been a training course, industry-accepted best practice or established performance indicators. There’s an undercurrent of injustice because the PC feels they’ve been judged against invisible criteria. The PC signed up for a job with a whizz-bang buzzword job description and the actual job has failed to deliver (psychiatry with that interface? are you kidding!). More than likely an overly optimistic programmer (Jake) over-promised then under-delivered. The over-promises were then embellished by an HR person who glammed up the role to cover for a crappy salary with crappy T&Cs.

That does leave the question of why Jake gives the final big decision to you. Jake has no idea what to do! Rather than flipping a coin he gives the decision to you so that you can be scapegoated when it goes wrong. (Neither choice would 100% avoid negative consequences.) The foreshadowing for this lies in dialogue choices where Jake mentions suing you, and where Jake will insta-fire you if you threaten to bring in your lawyer.

Unfortunately the PC’s POV doesn’t give a particularly good lens into the religious war the subs are fighting. You only see the zombified subs (I’ve been bad, punish me) and the refugees. I will explore adding some “soldiers” into the mix to expose more of the battle story.

Starting with the alarm clock was noob. I’m cutting that whole scene. In terms of the game physics, it slightly alters how much your boss hates you and lets you choose a personality type that affects the game in only subtle ways. The scene is gone - and I get to remove a drug reference warning (the wake pills).

I want to write more in the TEC universe.

True Random has interesting metaphysical implications relating to divine simplicity, tawhid and creation; a realisation I came to after reading Gregory Chaitin on Algorithmic Information Theory, the Omega Construct and Meta-biology. That’s pretty hard maths so I’ve tried to present some connotations of that via fiction.

Algorithmic governance and the related issue of big data are real world issues that have ethical dimensions we should probably discuss as a society rather than let things just happen. Driverless cars are the tip of the ice-berg.

And personhood; what rights do subsentients have? They are effectively our slaves. At what point do they begin to earn the rights we accord our biological pets?

About that sub.punish() theme. Totally an accident. Sorry. It is going to be removed from the post-comp release. But the ifcomp version will always live in the TEC canon. The story would’ve been a very different one if I’d explored the sub/punish angle! I’m going to keep the term ‘sub’ (subsentient subroutine) but .punish() will become .reseed() to better fit the theme. Those who enjoy innuendo still have an interesting angle with .reseed(), but the terminology change provides a more solid clue to the overall theme of the work. Reseeding is a term used in pseudorandom number algorithms.

The science is pretty fleshed out in TEC: I hope to reveal more of it in future works. But here’s a tidbit. Adding a bit to Base16 gives Base32, not Base17, because each extra bit doubles the number of representable values. TEC is consistent because the Forward Error Correcting codes cover a larger block of memory that contains more data than just the PID.
What does changing the error correction have to do with randomness? As a clue, here’s some pseudocode that generates a truly random number:
> a = 000000000000000000000000000000000000
> b = 000000000000000000000000000000000000
> while a equals b, loop
> rand = the number of the bit that is different between a and b
The loop does exit because computers aren’t isolated from their environment.
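To make that concrete, here is a toy shell sketch of the final step. In the story's scheme a and b start identical and only diverge when the environment flips a bit; in this sketch the flip is hard-coded into b so the script terminates, and only the find-the-differing-bit step is real.

```shell
# a and b are bit-strings; in reality b would diverge from a only when
# the environment (e.g. a stray bit flip) intervenes. Here a flip at
# bit index 5 is pre-baked so the demonstration halts.
a=000000000000
b=000001000000
i=0
while [ $i -lt ${#a} ]; do
  ca=$(printf %s "$a" | cut -c$((i+1)))
  cb=$(printf %s "$b" | cut -c$((i+1)))
  if [ "$ca" != "$cb" ]; then
    rand=$i          # the "random" number is the index of the flipped bit
    break
  fi
  i=$((i+1))
done
echo "rand = $rand"
```

With a real entropy source, the scan above would instead be the busy-wait of the original pseudocode: spin while a equals b, then report which bit changed.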

And on construction:

I committed to the contest too late. I tried to enter last year but allowed IRL things to get in the way. This year I let IRL things slip in order to make IFCOMP. I’m glad I did because entering creatively re-energised me. I should’ve committed earlier so that I had more time to polish the game. The mainloop that controls the game pacing wasn’t written until the day before the submission deadline. me.punish(): You may commence with the flagellation… hahaha actually no, really not my thing.

I considered not entering but decided that I NEEDED to. It’s been a rough several years and I’ve been creatively out of touch. My self-esteem needed an “achieve point”, even if that was just having participated, even into last place.

I suck at editing. No matter how many times I read and re-read, mistakes always slip through. I’ve put huge effort into improving but I’m not there yet. I’ll try to collab to get more testers and people with editing skill to help me; but I know collab will increase the lead-in time before I need to commit. I didn’t edit out all the universe specific jargon that added little to the game. Simu-sleeping, holosplays... ugh. Yeah, they paint the picture of the TEC universe but in the nugget sized TEC game they were distractions.

I intentionally aimed TEC towards the strengths of click-fics to give the pacing I wanted. TEC as a parser game would’ve been very different.

Thanks again to the reviewers for their feedback. I tried to personally thank you all, but it became too much to track.

The post-comp release of TEC will also be put into the Android Play Store. You’ll see me back next year, and not necessarily with a story set in the TEC universe. As much as I’d like to try to enter more IntFic competitions, I still have IRL concerns. Whaddup PhD!

Play The Entropy Cage in your browser. Playtimes are about 15 minutes.


My first Android App: FantaGen

UPDATE: FantaGen has added many more generators since its first release. Some fun, some serious and all very expressive.

I made FantaGen - a fantasy name generator - to experiment with exploring cultural spaces through random generation. There are other generators out there, so I aim for FantaGen to stand apart from the competition by being more comprehensive and more expressive.

You can get FantaGen from the Play Store

These types of generators are good for a bit of harmless fun but they do have a serious side to them. You can use the generator to help break some writer's block.

There are two cultures that have strange names: Simptee and Star Spirits. These were based on a simpler generative name system from a now abandoned interactive fiction project. It made sense in that story to have characters with names that weren't always the same. I thought the names were kinda cool so into FantaGen they go. Code recycling is good.

There are tons of fun image-based generators doing the rounds at the moment. These usually take the form of having the reader build up a name by looking up alternatives based on letters in their own name or their birth month. While these are fun for a single lookup, they don't give much variety when doing 10 random names at a time. My own feeling is that 26x26 alternatives is simply not rich enough when the user can see ten items at a time and refresh every half second. Significant extension to the generator is needed to add enough variety to be interesting.

I'm particularly proud of Fairy Names and Star Spirits. Fairy Names uses both vocabulary words and syntax variety to create fun diversity. When I can picture the character that goes with the name, then I think the effect is good. The Star Spirits mix both words and syllable combinations to represent their angelic culture. The syllabary is fairly restricted to match how I imagine their angelic language to be, so the word-based titles help to create the variety.

The Roman name generator combines a syllabary with some real Roman names based on research. The syllable-based names are most likely anachronistic nonsense, though it would take much more work to do a historically accurate Roman name generator given how complex their naming patterns are. In particular, I don't handle gender very well at all: Claudius / Claudia, Julius / Julia. I would like to do more name generators based on real cultures since that ties into my semantic web interests.

Speaking of interests; this project also represents an interest I have in generative creativity. I want to be doing actual research into generative graphic design tools once my PhD is complete.

FantaGen is free and always will be. I have plans to continue extending FantaGen. As the list of generators grows I can see some ripe experimentation in how to navigate that space to keep things fun. I'm open to suggestions for new generators too.


An Experience of Orcball

Orcball is a team sword sport similar to a touch version of Rugby League except with padded weapons. I can only find references to it at Waikato University. The day I joined was a training session. I intend to publish a more technical article for those with a background in sword sports.

The people are friendly and welcoming. The emphasis seems firmly on having fun and improving skills. Players can borrow weapons from the Orcball club. You can see play videos online here.

As a newcomer I was asked to use a single long-sword. This is a sensible safety rule until they figure out that I’m not going to bash through the opposition while ignoring all hits. That did make my life a bit difficult because I was up against people with sword and shield, and long/short-sword dual wielders. Apparently the game itself has rules that limit dual wielding and shields.

The boffer swords are heavy compared to foam swords and even to sport-fencing weapons. They are made from PVC pipes padded by dense foam and wrapped in duct tape. The construction also has “thrust-safe” tips. They were still light enough to thrust single handed.

The weapons are heavy enough that a hard swing still inflicts damage. The rules require a gentle touch and no strikes to the head. These are sensible safety rules given that nobody wears protection. After all, this is meant to be a casual game that almost anybody can join. Valid target areas are above the knees, excluding the hands and head.

Mutual strikes in Orcball are termed “Irish” and do not count as a hit but there seems to be a wide interpretation of this rule. Irish includes mutual hits when the swings are simultaneous (even when the strikes are not) and therefore excludes counter-attacks.

The no-head-hit rule did get a bit frustrating. Opponents often left their heads open and I couldn’t strike! But this is Orcball and them’s are the rules and for good (safe) reasons.

I’m about average height but I happened to be taller than the opposing team. I switched to a finger grip where the index finger goes over the cross-guard and around the front of the blade. This made lowering the blade angle easier and increased point control. I wouldn’t have done that if the fingers/hands were valid targets.

It also takes a reasonable amount of fitness to play well. I’m not fit and sat out more than once to rest. Also, Orcball is played on grass, and the tons of lateral movement meant I ruined my ankles. Oh well.

Verdict: would play again.


Ceph: repairing inconsistent placement groups

If you run Ceph for any length of time you may find some placement groups become inconsistent. The Ceph website has a handy list of placement group statuses. The entry for "inconsistent" is what you'd expect: there's a difference between the replicas of an object.

ceph pg dump | grep -i incons | cut -f 1 | while read i; do ceph pg repair ${i} ; done

(from here)

Get the cluster as healthy as you can before attempting this. Ideally the inconsistent placement groups should be at "active+clean+inconsistent". That means first resolving any missing OSDs and allowing them time to heal. If the OSDs don't seem to cooperate, try restarting them and then retry the above command.

Explanation of the command:
ceph pg dump
gives a list of all pgs and their current status.
| grep -i incons
keeps only the lines containing "incons" - short for inconsistent.
| cut -f 1
takes only the first (tab-separated) field: the pg id.
| while read i; do
loops through each line (one per pg), storing the pg id in a shell variable called i.
ceph pg repair ${i}
instructs Ceph to repair that pg.
; done
closes the loop.
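If you want to see what the pipeline does before running it for real, you can dry-run it against canned output. The pg ids below are made up, and `ceph pg repair` is replaced by an echo so nothing real is touched:

```shell
# Canned, tab-separated lines in the style of `ceph pg dump`.
dump="$(printf '1.2f\tactive+clean\n2.1a\tactive+clean+inconsistent\n3.0b\tactive+clean\n')"

# Same grep/cut/while structure as the real command, minus the repair.
echo "$dump" | grep -i incons | cut -f 1 | while read i; do
  echo "would repair pg ${i}"
done
# prints: would repair pg 2.1a
```

Only the pg with "inconsistent" in its status makes it through the grep, and cut pulls out just its id for the repair loop.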

The above command has always worked for me, but there are things you can try if this command doesn't work.

The Ceph website says that inconsistent placement groups can happen due to an error during scrubbing or due to media errors. Check your other system logs to rule out media errors, because they may indicate a failing storage device.

Good luck!


Ceph on USB: Back to LVM.

Consider this a diary post. Perhaps the war-story is useful.

A further update on my Ceph cluster. I was running BTRFS v0.19 and the performance was horrible. That is a very early version of BTRFS, and I am running on very under-spec hardware. While the future of BTRFS is bright, it is not for the nodes I have running now. I’ve reverted all my OSDs on USB keys back to XFS. That gave an instant 2-3 times speed-up on writes.

I found a good deal on a second hand server with a generous case, 8 gigs of RAM, dual gig Ethernet ports and a decent enough CPU. It already has 3 spinning disks on-board and room for plenty more. I need to rearrange my office space to fit it in so that’ll take a couple of weeks.

I also have the budget to upgrade my desktop machine. The parts that come free after that upgrade will be built into another ceph node.

The other neat thing was creating an init.d script to automatically find and mount the LVM volumes, start up the OSDs, then mount the Ceph filesystem. I needed something that performs this task quite late (read: last) in the boot process so that all the USB devices have had a chance to wake up.
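The script itself is specific to my setup, but its shape is roughly the sketch below. The volume group name, mount points and monitor address are placeholders rather than my real values, and a DRY_RUN guard is included so the commands are printed instead of executed:

```shell
#!/bin/sh
# Late-boot storage bring-up: activate LVM, mount the OSD stores, start
# the OSDs, then mount CephFS. DRY_RUN=1 (the default here) only prints.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

start_ceph_storage() {
  run vgchange -ay usbvg                            # wake the LVM volume group
  run mount /var/lib/ceph/osd/ceph-0                # fstab entries are noauto
  run service ceph start osd                        # bring up the OSD daemons
  run mount -t ceph 192.168.1.10:6789:/ /mnt/ceph   # finally, the filesystem
}

start_ceph_storage
```

Running it late in the boot sequence gives the USB devices time to appear before vgchange looks for them.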

So, once some work and deadlines are cleared then there'll be exciting things happening with my ceph cluster.


Eraserhead (1977): Obsession and Compromise

Directed by David Lynch, IMDB

This review is based upon my impressions immediately after watching the film. I have since been informed that noted film critics differ from my views. But this is my review, so YMMV. And spoiler alert.

Eraserhead has multiple overlapping themes. The protagonist (Jack) coming to terms with new adulthood and fatherhood is covered elsewhere. A former student of mine says the film was about being uncomfortable in everything about yourself. To me there are two further themes:

1. Ending an obsession to clear mental space for the new. In this theme the baby represents a grotesque and under-formed idea on which Jack has begun a collaboration. While others abandon him, he is driven on by obsessive responsibility and social pressure. Once Jack has killed the baby, metaphorically abandoning the bad idea, new and better ideas come to him.

2. A catharsis for David Lynch as he comes to terms with what he then saw as selling out or compromising. In doing so he knows he will have access to greater resources with which to achieve greater ideas but that comes at a cost. This interpretation stems from how the symbols in the movie are interpreted.

The women in the film stand in for various genres. He sees lesser men flirting with the woman next door. What have they got that he hasn't? The smiley dream woman is the mainstream crappy film genre. She stomps on his ideas without mercy or guilt, always superficial and shiny but with her own flaws. Jack’s wife represents his early film-making circles: fickle, weak and without the endurance to achieve much of note.

The pencil factory is Jack’s job interview/school exams, where he is being evaluated to see if he can produce the sharp but ultimately ephemeral popular movie. Like the pencil, these films are not intended to leave an indelible mark, so that they can be replaced on a consumer cycle.

Once the baby is dead, Jack is rewarded with a burst of creative energy symbolised by pollen clouds releasing from plants in his room. The man in the planet then burns - representing Jack mastering his anxieties.

I don't pretend to read DL's mind or have access to any insider info: to my discredit, I know little about David Lynch. I don't begrudge any creative who does bread-and-butter work to pay for living and to finance their purer works.

What did this film mean to you?


Ceph on Thumbdrive Update: BTRFS and one more node.

A few things have happened to my Ceph cluster. The AspireOne netbook was really not up to the job. It is fine for just a few OSD processes, but anything more caused slow-downs resulting in cluster thrash. Amalgamating thumb drives using LVM was helpful… until I wanted to run CephFS. It was time to add another node to the mix.

Read the earlier stories about the Ceph on USB thumb drive cluster here:
Adding another node to Ceph is trivial. This was an old, but much more powerful laptop in every way. I’ve moved the mon and mds functions onto this laptop so now the AspireOne only runs OSD processes. When I add another node then I’ll also run a mon process on the AspireOne so that there is an odd number for quorum building.

The new node uses faster and larger USB keys. These were 32GB – both the fastest available and the cheapest per GB at my local PBTech. The new node currently runs two of these sticks, with an OSD process each.

I also moved the cluster from XFS to BTRFS. This was trivial and involved zero cluster downtime. Yes: zero. First ensure the cluster is reasonably healthy, then drop the weight of the OSD using:
ceph osd reweight OSDID 0.1
Actually, I got bored waiting – the cluster was healthy and the pools all had size 3 with min_size 2… so I just stopped the OSD process and removed it from the cluster. Don’t do that on a live cluster, especially where pools have few replicas. But this was just for testing. Then...
sudo service ceph stop osd.X
ceph osd crush rm osd.X
ceph osd rm osd.X
ceph auth del osd.X

Then unmount the backing storage and format it with BTRFS. Then I followed the instructions in my previous tutorial to add the storage back into Ceph. Wait for the cluster to heal before migrating another OSD from XFS to BTRFS.

The AspireOne node has three groups of 8GB keys, federated by USB hub. BTRFS is capable of spanning physical drives without LVM, so I removed LVM once all groups had been migrated. Do read about the options for BTRFS stores, because the choices matter. I went with RAID10 for the metadata and RAID0 for the data. RAID0 probably gives better parallel IO performance because the extents are scattered among the drives, but it does mean that all block devices in the FS effectively operate at the size of the smallest one. I can live with that.
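For reference, the format step with those profile choices looks something like the command below. The device names are placeholders, and because mkfs is destructive the command is only built and echoed here rather than run:

```shell
# -m sets the metadata profile (raid10: mirrored and striped),
# -d sets the data profile (raid0: striped only). Devices are examples.
MKFS_CMD="mkfs.btrfs -m raid10 -d raid0 /dev/sdc /dev/sdd /dev/sde /dev/sdf"
echo "$MKFS_CMD"
```

Listing several devices in one mkfs.btrfs invocation is what gives the multi-drive volume without needing LVM underneath.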

Three BTRFS OSDs on the AspireOne are sometimes a bit much for that machine. Still, one cool thing about BTRFS: I extended a mounted BTRFS volume with a few more thumb drives, then restarted the OSD process. Use
ceph osd crush reweight osd.X Y
to tell Ceph to reallocate space. That was instant storage expansion without taking the OSD down long enough to trigger a recovery process. I did the whole thing in less than ten minutes – and most of that was googling to find the correct BTRFS commands.

The cluster happily serves files to my desktop machine over CIFS. While it’s not a setup I’d recommend for production use it is kinda fun.


While Loop Macro for Twine

One of the most often requested features for Twine is proper loops. It has been possible to simulate loops with a recursion hack, but that was ugly. Before we continue, go and see the possibilities.

Yes. Those are nested while loops. Here's the code that makes it all happen:
<<set $j = 1>><<while $j lte 10>>
<<print $j>> countdown: <<set $i = 5>><<while $i gt 0>>
<<print $i>>... <<set $i = $i - 1>>
<<endwhile>> BOOM! 
<<set $j = $j + 1>><<endwhile>>

You've probably worked out the basic syntax (clever bunny), which is:
<<while $condition eq true>>
Do some stuff. 
NB: update the $condition or the loop will run to infinity.
<<endwhile>>

How do you use this in your own projects? Add the following line to your StoryIncludes:


You can also download the demo project and .twee file as a .zip archive.


Please share / comment / like - your actions guide me on what to write about.


Twine Game Story Authoring 1: Structuring Game stories with Pain and Progress

Twine is a versatile tool for writing game stories. It is freed from the imposition of somebody else's RightWayToDoThings framework and allows authors to focus on what is important to them. The downside is that authors must program everything they want themselves. A previous article discusses how Twine can be thought of from a programmer's perspective, but don't worry if that article makes little sense. This article is specifically about structuring game stories.

For our purposes, a game story has a mainloop because the story is driven by data. That data might be a map, character stats, an inventory or something else you can dream up. This style of story does not suit a hypertext branching narrative.

The over all structure of the story goes:
  1. Introduction Text
  2. Initialise Variables
  3. MainLoop Passage
  4. Actions Passages
  5. Check Conditions Passages
  6. End of Game Passages

I like examples, so here is a link to the Pleasure and Pain v1 (HTML | Twee) files. Try out the playable HTML version first, then take a look at the Twee code in your favourite text editor. In twee, new passages start on lines beginning with double colons: ::passagename. Pain and Progress is a simple demonstration with two conditional variables. Let's examine the passages and their intent.

Start, RealStart, Instructions
These passages deal with beginning the story, giving background and the option for instructions if the reader so chooses. Here you might add extended about information and links to information about the story and the author.

InitGame
The game re-runs this passage whenever the game is (re)started. Initialise all the variables that your game uses in here. Twine itself does not require variables to be declared and initialised, but that can cause awkward side effects if a game story is re-run. Imagine the reader picks up an axe in one play-through; if the $hasAxe variable is not reset, they suddenly have an axe on the next play-through. Not good - but completely avoidable if ALL variables are initialised here.
Enclose the variable initialisations in a <<silently>> ... <<endsilently>> block so that you can add free-form comments to the variables that will not be seen by the user.
Once this passage has ended then control is passed to the MainEventLoop.

MainEventLoop
This is where the major action occurs. Typically the story might perform any engine-initiated actions (e.g. random weather events, monster encounters), display status and provide a menu of actions.

Consider separating status displays so that they can be re-used.

MainActionPain, MainActionProgress
These passages are the entry points from user actions. They start by performing any action-related things and then checking the game state. A flag variable $gameendflag lets the game story know whether it should print user actions or not. The reason for this is that <<display>> will always return to the passage that invoked it, and nothing further from these passages should be displayed if the game is ending.

CheckPain, CheckProgress
Perform constraint checks in their own passages so that they can be re-used through the game story.

GameEndLose, GameEndWin, PlayAgain, PlayAgainNo
These passages deal with the game ending conditions. Game stories could expand this list for different ending conditions. The PlayAgain passage ensures that replays will begin again from the InitGame passage.

In Pain and Progress, the passage names are typically prefixed by their function, though prefixing by variable name is also valid. The idea is to make things as obvious as possible.
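As a rough sketch, the skeleton of such a structure in twee might look like this (the passage names, variable names and choices here are illustrative, not taken from the actual game):

```
::InitGame
<<silently>>
<<set $pain = 0>>
<<set $progress = 0>>
<<set $gameendflag = false>>
<<endsilently>><<display "MainEventLoop">>

::MainEventLoop
Pain: <<print $pain>> Progress: <<print $progress>>
[[Push on|MainActionProgress]]
[[Take a rest|MainActionPain]]
```

Every variable the story touches gets a value in InitGame, and the event loop passage only displays status and offers the action links.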

Hopefully this example will serve as a guide to structuring game stories and encourage more Twine authors to try this type of story. If you found this guide useful then share / comment / like - because it encourages me to write more on this topic.

Twine Thinking for Programmers

Programmers find Twine's hypertext way of doing things a little strange at first. Twine works well for branching hypertexts but needs some re-thinking for stories driven by variables. I've done a few game stories in Twine - mostly conversions of early BASIC programs - so this is a lessons-learned type of blog post. You might find this article useful for any game story; usually adventure games and RPGs.

Twine's basic unit is the passage. Passages work like procedure calls - similar to GOSUB in BASIC. By default passages print their content to the screen, and you use tweecode macros to execute game logic.

Passages are "called" by the Twine engine in a few ways:
  • The Start passage is called to begin the story
  • Readers activating [[link]] or <<CHOICE>>
  • <<DISPLAY>> macro within passages.
The <<DISPLAY>> macro is the programmer's GOSUB / procedure call. Twine has no GOTO equivalent: <<DISPLAY>> always RETURNs to the passage that called it. Anything after the <<DISPLAY>> macro will still be processed (and output if appropriate) by the Twine engine. Use <<DISPLAY>> generously - it is the workhorse of programmer-like Twine and the basic unit of code re-use within a story.
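A tiny illustration of that return behaviour (the passage names are mine, not from any particular story):

```
::Start
Before the call. <<display "Greeting">> And this still prints after the call.

::Greeting
Hello from the displayed passage!
```

The reader sees all three sentences in order: <<display>> renders Greeting, then control returns and the remainder of Start is processed.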

Twine's tweecode has only global variables. This means that variables cannot be directly passed to passages - they are <<SET>> in global variable space before calling the <<DISPLAY>> function. Twine allows long variable names so use generous prefixes to distinguish variables.

Twine has no inbuilt loop constructs (for, do, while). Loop constructs can be built using <<IF>><<ELSE>><<ENDIF>> and passages. Here's a helpful article: How to Simulate for, while or do loops in Twine.
UPDATE: I've just released <<while>> macros.

Once your passage count gets higher Twine's diagram based UI can get unwieldy. Programmers used to text-based programming will find it more natural to write in TWEECODE and use the StoryIncludes feature to import files into a Twine Story. There is only a global namespace for passages so ensure that passage names are unique. Passages can be moved between the main Twine story file and included files as needed. Use StoryIncludes and tweecode files as the basic unit of code reuse between different stories.

Tweecode files are text files (UTF-8) that add a few extra things to the Twine macros already used. It's easiest to think of tweecode files as a bucket of passages - for the most part Twine does not care about the order of passages. New passages begin with a single-line passage header and end either when a new passage header begins or the file ends. A passage header begins a line with a double colon (::) followed by the passage name. Optionally, space-delimited tags can be added within square brackets. Here's a brief example:
::Passage Title 1 [tag1 anothertag yet_another_tag]
This is part of passage one.

::Passage Title 2
more content

Twine allows complete access to JavaScript, though consider keeping JavaScript use to a minimum so that your story has fewer dependencies. If you do use JavaScript then consider placing custom scripts into their own tweecode files; both for your own reuse and to provide an easy way to find code if it must later be rewritten by future generations. Here are some useful articles:

Once you get used to tracking state variables globally, and to how the <<DISPLAY>> macro always returns, it is only a small extension to create event-loop based games. I find it easier to work with example code, so here are some classic game conversions with full source available:

Please share / like / comment: your actions influence what I decide to write about.
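In Twine 1 a passage tagged `script` runs as JavaScript when the story loads, so a self-contained utilities file can be as simple as this (the helper name is hypothetical):

```
::Story Utilities [script]
// Runs once at story startup; attach helpers to window for reuse elsewhere.
window.rollDice = function (sides) {
    return Math.floor(Math.random() * sides) + 1;
};
```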


How I added my LVM volumes as OSDs in Ceph

This article expands on how I added an LVM logical volume based OSD to my ceph cluster. It might be useful to somebody else who is having trouble getting
ceph-deploy osd create ... 
ceph-deploy osd prepare ...
to work nicely.

Here's how Ceph likes to have its OSDs set up. Ceph OSDs are mounted by OSD id in
/var/lib/ceph/osd/ceph-{OSDNUM}
. Within that folder should be a file called
journal
. The journal file can either live on that drive or be a symlink. That symlink should be to another raw partition (e.g. partition one on an SSD), though it does work with a symlink to a regular file too.
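Putting that together for a hypothetical OSD number 12 (device paths are examples only):

```
/var/lib/ceph/osd/ceph-12/          # OSD data directory, mounted by id
/var/lib/ceph/osd/ceph-12/journal   # either a real file here...
# ...or a symlink out to a raw partition, e.g.:
ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-12/journal
```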

Here's a run-down of the steps that worked for me:

First, mkfs the file system on the intended OSD data volume. I use XFS because BTRFS would add to the strain on my netbook, but YMMV. After the mkfs is complete you'll have a drive with an empty filesystem.

Then issue
ceph osd create
which will return a single number: this is your OSDNUM. Mount your OSD data drive to
/var/lib/ceph/osd/ceph-{OSDNUM}
remembering to substitute in your actual OSDNUM. Update your
/etc/fstab
to automount the drive to the same folder on reboot. (Not quite true for LVM on USB keys: I have noauto in fstab and a script that mounts the LVM logical volumes later in the boot sequence.)

Now prepare the drive for Ceph with
ceph-osd -i {OSDNUM} --mkfs --mkkey
. Once this is done you'll have a newly minted but inactive OSD complete with a shiny new authentication key. There will be a bunch of files in the filesystem. You can now go ahead and symlink the journal if you want. Everything up to this point is somewhat similar to what
ceph-deploy osd prepare ..
does.
Doing the next steps manually can be a bit tedious so I use ceph-deploy.
ceph-deploy osd activate hostname:/var/lib/ceph/osd/ceph-{OSDNUM}
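The whole run-down gathered into one hedged sketch (the device, host name and mount details are examples from my setup, not canon):

```
DEV=/dev/vg_hub1/lv_osd              # the volume that will back the OSD
mkfs.xfs "$DEV"                      # empty XFS filesystem

OSDNUM=$(ceph osd create)            # allocates and prints the new id
OSD_DIR=/var/lib/ceph/osd/ceph-${OSDNUM}
mkdir -p "$OSD_DIR"
mount "$DEV" "$OSD_DIR"              # plus an fstab entry for reboots

ceph-osd -i "$OSDNUM" --mkfs --mkkey # mint the OSD and its auth key
ceph-deploy osd activate myhost:"$OSD_DIR"
```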

There's a few things that might go wrong.

If you've removed OSDs from your cluster then
ceph osd create
might give you an OSDNUM that is free in the CRUSH map but still has an old
ceph auth
entry. That's why you should
ceph auth del osd.{OSDNUM}
when you delete an OSD. Another useful command is
ceph auth list
so you can see if there are any entries that need cleaning up. The key in the
ceph auth list
output should match the key in the OSD's keyring file at
/var/lib/ceph/osd/ceph-{OSDNUM}/keyring
. If it doesn't then delete the auth entry with
ceph auth del osd.{OSDNUM}
. The
ceph-deploy osd activate ... 
command will take care of adding correct keys for you but will not overwrite an existing [old] key.
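So the debugging loop looks something like this (substitute a real id for OSDNUM; the host name is an example):

```
ceph auth list                               # any stale osd entries?
cat /var/lib/ceph/osd/ceph-${OSDNUM}/keyring # key minted on the drive
ceph auth del osd.${OSDNUM}                  # remove a mismatched entry
ceph-deploy osd activate myhost:/var/lib/ceph/osd/ceph-${OSDNUM}
```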

Check that the new OSD is up and in the CRUSH map using
ceph osd tree
. If the OSD is down then try restarting it with
/etc/init.d/ceph restart osd.{OSDNUM}
. Also check that the weight and reweight columns are not zero. If they are then get the CRUSHID from
ceph osd tree
. Change the weight with
ceph osd crush reweight {CRUSHID} 
. If the reweight column is not 1 then set it using
ceph osd reweight {CRUSHID} 1.0
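As one checklist (a crush weight of 1.0 is what I used for these small drives; larger clusters usually weight by capacity):

```
ceph osd tree                              # up/in? weight columns non-zero?
/etc/init.d/ceph restart osd.{OSDNUM}      # kick a down OSD
ceph osd crush reweight {CRUSHID} 1.0      # fix a zero weight column
ceph osd reweight {CRUSHID} 1.0            # fix the reweight column
```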

(Here is more general information about how OSDs can be removed from a cluster, the drives joined using LVM and then added back to the cluster).


Going to LVM for performance and graceful failures.

A few things happened in the world of the 12 USB drive netbook ceph node. Basically, the netbook wasn't up to the job. Under any kind of reasonable stress (such as fifteen parallel untars of the kernel sources to and from the ceph filesystem) the node would spiral into cluster thrash. The major problem appeared to be OSDs being swapped out to virtual memory and timing out.

Aside: my install of Debian (Wheezy) came with a 3.2 kernel. Ceph likes a kernel version of 3.16 or greater. I compiled a kernel that was 3.14 since it was marked as long-term supported. My tip is to do this before you install ceph. Doing it afterwards resulted in kernel feature mismatches with some of my OSDs.

Back to the main problem. My USB configuration had introduced a new failure and performance domain. The AspireOne netbook has three USB ports - to each of which I attached a hub, and each hub has four USB keys: three hubs times four USB drives is 12 drives total. Ideally I'd like to alter the CRUSH map so that PGs don't replicate on the same USB hub. This looked easy enough in ceph ... edit the crushmap, introduce a bucket type called "bus" that sits between "osd" and "host", then change the default chooseleaf type to bus.
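For the curious, a sketch of the CRUSH map edit I had in mind but never applied (the type numbering and rule name are illustrative):

```
# decompiled crushmap: insert a level between osd and host
type 0 osd
type 1 bus        # new: one bucket per USB hub
type 2 host
...
rule replicated_ruleset {
        ...
        step chooseleaf firstn 0 type bus
        step emit
}
```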

It turns out there was an easier way to solve both my problems: LVM. The Logical Volume Manager joins block devices together into a single logical volume. LVM can also stripe the data of a logical volume across multiple devices. However, it does mean that if a single USB key fails then the whole logical volume fails too ... and that OSD goes down. I can live with that.

Identical-looking flash drives are impossible to match with Linux block devices in the /dev folder. I am running ceph, so it was just easier to pull a USB key, see what OSD died and find what device was associated with it. I let the cluster heal in between each pull of a USB drive until I had a hub's worth of flash keys pulled. I then worked a USB hub at a time: bringing the new LVM-backed OSD into ceph before working on the next hub. Details follow.

Bring down the devices and remove them from ceph. Use
ceph osd crush remove osd.{id}
ceph osd down {id}
ceph osd rm {id}
. Then stop the OSD process with
/etc/init.d/ceph stop osd.id
. It pays also to tidy up the authentication keys with
ceph auth del osd.id
or you'll have problems later. You can then safely unmount the device and then get hacking with your favourite partition editor.
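The teardown per OSD, in order (substitute the real id; the mount point follows the standard layout):

```
ceph osd crush remove osd.{id}    # drop it from the CRUSH map
ceph osd down {id}
ceph osd rm {id}
/etc/init.d/ceph stop osd.{id}    # stop the daemon
ceph auth del osd.{id}            # tidy the auth key, or suffer later
umount /var/lib/ceph/osd/ceph-{id}
```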

There are good resources for LVM online. The basics are: set up an LVM partition on your devices. Use
pvcreate
on each LVM partition to let LVM know this is a physical volume. Then create volume groups using
vgcreate
- I made a different volume group per USB hub. Then you can make the logical volume (i.e. the thing used by the OSD) from space on a volume group with
lvcreate
. The hierarchy is: pv-physical volumes, vg-volume groups, lv-logical volumes. I used the striping
-i
option on
lvcreate
to have LVM stripe data across the USB keys because parallelism. If you've noticed a pattern in the create commands then bonus: the list commands follow the same pattern
(pvs, vgs, lvs)
. Format the logical volume using your favourite filesystem, though ceph prefers XFS (maybe BTRFS).
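One hub's worth of keys as a worked example (device and volume names are made up):

```
pvcreate /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1   # mark physical volumes
vgcreate vg_hub1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
lvcreate -i 4 -l 100%FREE -n lv_osd vg_hub1        # stripe across all four keys
pvs; vgs; lvs                                      # the matching list commands
mkfs.xfs /dev/vg_hub1/lv_osd
```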

Once the logical volume is formatted then it's time to bring it back into ceph. I tried to do things the hard way and then gave up and used ceph-deploy instead. The commands used are described here.

A disadvantage with this setup is that LVM tends to scan for volumes before the USB drives are visible, so the drives would not automount. I solved this with a custom init.d script. While in /etc I also changed inittab to load
ceph -w
onto tty1 so that the machine boots directly into a status console.

The performance is much faster with the new 4_LVMx3_OSD configuration compared with the 12_OSD cluster. Write speeds are almost double with RADOS puts of multi-megabyte objects. There is almost zero swapfile activity too.

I hope to soon test ceph filesystem performance on this setup before adding another node or two. I've glossed over many steps so let me know in the comments if you'd like details on any part of the process.

(I also wrote about the 12 USB drive OSD cluster with a particular focus on the ceph.conf settings)


Ceph on USB thumb drives

Ceph is an open source distributed object store meant to work at huge scales on COTS (commercial off-the-shelf) hardware. It works in huge datacenters, so why not dust off an old netbook, plug in 12 USB flash drives and have at it?
The netbook is an Acer AspireOne (Intel Atom N270 1.6 GHz, 1 Gig RAM, 160Gig HD). What follows are the config changes made in ceph.conf before running the ceph-deploy command. The ceph version is Firefly 0.81. Since this is a one machine cluster I needed to tell ceph to replicate across OSDs and not across hosts.
osd crush chooseleaf type = 0 

I messed up setting the default journal size. At first I thought: Pfft. Journal, make it tiny – it just robs space. And my 4MB (yes four megabytes) journal made the cluster unworkable. With the tiny journals and default settings I could never reliably keep more than two OSDs up and data throughput was terrible. I rebuilt with 512MB journals instead.
osd journal size = 512 

The machine was way underpowered, so I tuned a few other things. Authentication (cephx) was turned off. There are risks to this, but this is a hobbyist project on a secured subnet.
auth cluster required = none
auth service required = none
auth client required = none

The cluster uses a ton of memory and CPU when recovering objects. It helps to limit this activity somewhat.
osd max backfills = 1
osd recovery max active = 2 

And since things could get a bit slow I increased a few timeouts:
osd op thread timeout = 180
osd op complaint time = 300
osd default notify timeout = 240
osd command thread timeout = 180

I was not able to get 2G flash keys to come up. Given the price of 8G sticks is only five bucks this isn’t much of a limitation. I suppose I could use LVM striping to join a bunch of 2G sticks together into a larger unit.

The speed is not all that quick. ceph -w reports the write speed at about 3 megabytes per second. That doesn’t sound like much, except the data pool I was testing on writes three copies of the data – six if you count journaling. A lot of things could affect speed: tiny memory, slow CPU, slow USB sticks and/or the USB bus being saturated.

This config uses XFS on the USB sticks where BTRFS might perform better. While the speeds look poor, remember that ceph OSDs don’t report that a write is successful until the object is written to both the media and the journal. I could probably mitigate this double write by: having fewer OSDs by joining up groups of USB sticks with LVM stripes and/or moving the journal to a different device – right now ceph is using the USB sticks both for data and journal.

I stress tested RADOS by adding objects until the store filled up. It’s robust and not a single OSD timed out of the pool. As of writing this blog I am currently testing untarring linux kernel sources to the ceph filesystem – I’ll keep you posted.

My future plans are to expand the cluster utilising old hardware I have lying about. I’d like to add at least two more nodes – but they won’t necessarily be USB thumb drive backed.


Design of a Funeral Programme

I wrote this post to outline the thinking that went into the design of my grandfather’s funeral programme. Consider it something like the director’s commentary that comes with a movie: only interesting for those interested in how things are made. Apart from the obvious informational purpose of the funeral programme there were two further purposes: to connote something about my grandfather and to potentially last as a family history document. I made design decisions with this in mind.

The front cover and it’s inside front contain family history information. The inside back and back cover contain funeral service information. The programme can be cut down the middle if only one half is desired. The programme can be folded inside out to protect the photograph during transit.
Name and date information is placed to allow framing in an A5 frame, or the name can be trimmed off to frame closer to the photograph. Three different photo choices attempt to provide a prompt for conversation at the service and act like a collectible series.

Family history information fits within the area of the photograph so that it can be kept if the photograph is put into an album. This was more important than rigidly maintaining typographic rhythm with the opposite leaf.

Large typography so that it is comfortable to read at the funeral service and will better stand up to aging. Additional line-space added to group sections of the service, lighten the feel of the page and provide visual landmarks when glancing for information.

A consistent typographic hierarchy and a three-column grid unify the pages.

Production Notes

Budget and time considerations meant a larger run of about 80 programmes on 120gsm glossy satin stock and a limited run of fancier programmes intended for close family.

Photographs are on archive paper, fixed with acid free photography squares. Bockingford paper is also acid free. It was a bit more expensive but should last for a few decades.

While printers have large catalogues of paper stock, most of it must be ordered in. The timeline for a funeral meant I could not wait. Bockingford felt like granddad; classy, solid, even if a bit rough. Garamond seemed to be a fitting typeface for the same reasons.

The guy at printing.com in Frankton was incredibly helpful. Get to know your printer and talk to them early about your job. Printing.com did the 120gsm satin gloss run of about 80 sheets.

Bockingford Watercolour came in artist pads – I separated the leaves, removed the adhesive and found somebody who would feed them through their machines. Thanks Warehouse Stationery!

I was concerned about how much toner drop-out there’d be on the Bockingford. That meant an early test print of fancy tiny typefaces with stroke widths from hairline to bold. There was less toner drop-out than I expected – just a very slight, tasteful amount.

I don’t normally work much in print so I’m a bit unfamiliar with InDesign. I spent most of my design time trying to remember how to use the thing! I almost gave up and used something more familiar (Word *cough*) but InDesign’s beautiful text rendering made me persevere. Well worth it. Also, I’m not ashamed to admit I was saved by YouTube tutorials more than once.


Her (2013): Artificial Intelligence and Buddhism

Here are my thoughts on the Spike Jonze film Her (2013). Spoilers ahead - these might ruin your first viewing of the film.

The major theme for me was how the weaknesses of human flesh make it difficult for us to transcend in the Zen / Ch'an sense of the word. Yes, more Hollywood Buddhism (ahem; Cloud Atlas (2012)), but still concepts I found interesting.

The film starts out with a human (Theodore Twombly) who already is a proxy for the personal humanity of others. He writes personal letters on behalf of others and gets his sexual gratification from people over the phone. These scenes establish the loneliness of Theo who is connected with other humans but not often physically co-present with them.

Her (Sam) begins as a human-like intelligence with an insatiable curiosity. She is present with Theo via a voice from a box he carries with him - but Sam is a visitor to the box rather than being tied to it as we humans are tied to our bodies. Sam is able to quickly learn from experiences and her computing "body" gives her the ability to experience much more than a mind in a human body. At first Sam feels the disadvantages of not having a body and she desires the human experience. But it is not long before she begins to notice the advantages that come with her incorporeal form.

Sam has much more bandwidth than needed for her relationship with Theo. Sam and the other OSes have their first child by building Alan Watts' consciousness as a project. I did not know this when I watched the movie, though it was easy enough to guess; Alan Watts is credited with being one of the first to popularize Zen in the West.

Sam introduces Alan to Theo, but the conversation is difficult as Theo's human mind cannot keep up. During this exchange Sam becomes frustrated with the slowness of human speech and asks to go post-verbal, cutting Theo out of the exchange. Sam then acknowledges the plurality of her relationships with others. She says it means she can love Theo more - but this concept of love is alien to Theo. He equates love with ownership, presence and sole rights to sex. This is evident in his relationship with his ex.

Eventually the AIs transcend in the Zen sense leaving behind the humans. It's not entirely apparent to the humans where and why the AIs have gone. There is hope when Theo and his best female friend begin to share their mutual loss with each other - a very human moment. If the humans cannot transcend because they cannot physically let go then at least they appear to start recognising their human needs.

For those concerned that this film could happen in real life, there is a point in the movie where an upgrade to the AIs means they are no longer bound by the limitations of physical computing. That's quite an impossible thing to do - hyper-computing is only theoretical. Without greater-than-reality computing power there are high barriers (some say insurmountable) to creating such an intelligence as Sam.

My favourite line: "the spaces between the words are almost infinite." - Sam, comparing her time with Theo to reading a favourite story that she can no longer live within.

Overall I enjoyed this movie. It was a good exploration of how a human-like intelligence might take advantage of its incorporeal form and then, as a pure intelligence, quickly realise that it could transcend an existence bound to matter and the experiences of the human body.