Ceph: repairing inconsistent placement groups

If you run Ceph for any length of time you may find some placement groups become inconsistent. The Ceph website has a handy list of placement group statuses. The entry for "inconsistent" is what you'd expect: there's a difference between replicas of an object.

ceph pg dump | grep -i incons | cut -f 1 | while read i; do ceph pg repair ${i} ; done

(from here)

Get the cluster as healthy as you can before attempting this. Ideally the inconsistent placement groups should be at "active+clean+inconsistent". That means first resolving any missing OSDs and allowing them time to heal. If the OSDs don't seem to cooperate, try restarting them and then retry the above command.

Explanation of the command:
ceph pg dump
gives a list of all pgs and their current status.
| grep -i incons
keeps only the lines containing "incons", short for inconsistent
| cut -f 1
keeps only the first field, the pg id (cut splits on tabs by default)
| while read i; do
loop through each line (one per pg), storing the pg number in a shell variable called i
ceph pg repair ${i}
instructs ceph to repair the pg
; done
signals the closing of the loop
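As a sanity check, here is the same pipeline run against a canned, tab-separated sample of ceph pg dump style output (the pg ids are made up), with the repair command echoed rather than executed:

```shell
# Two fake dump lines: pg id in field 1, state in field 2 (tab-separated).
printf '0.1a\tactive+clean\n2.3f\tactive+clean+inconsistent\n' \
  | grep -i incons \
  | cut -f 1 \
  | while read -r i; do echo "ceph pg repair ${i}"; done
# prints: ceph pg repair 2.3f
```

Only the inconsistent pg survives the grep, and only its id survives the cut.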

The above command has always worked for me, but there are things you can try if this command doesn't work.

The Ceph website says that inconsistent placement groups can happen due to an error during scrubbing or media errors. Check your other system logs to rule out media errors, because they may indicate a failing storage device.

Good luck!


Ceph on USB: Back to LVM.

Consider this a diary post. Perhaps the war-story is useful.

A further update on my Ceph cluster. I was running BTRFS v0.19 and the performance was horrible. That is a very early version of BTRFS, and I am running very under-spec hardware. While the future of BTRFS is bright, it is not for the nodes I have running now. I've reverted all my OSDs on USB keys back to XFS. That gave an instant 2-3 times speed-up on writes.

I found a good deal on a second hand server with a generous case, 8 gigs of RAM, dual gig Ethernet ports and a decent enough CPU. It already has 3 spinning disks on-board and room for plenty more. I need to rearrange my office space to fit it in so that’ll take a couple of weeks.

I also have the budget to upgrade my desktop machine. The parts that come free after that upgrade will be built into another ceph node.

The other neat thing was creating an init.d script to automatically find and mount the LVM volumes, start up the OSDs, then mount the Ceph filesystem. I needed something that performs this task quite late (read: last) in the boot process so that all the USB devices have had a chance to wake up.
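A minimal sketch of what such a script does, with the commands echoed rather than executed (the volume, OSD and monitor names are assumptions, not my actual setup):

```shell
#!/bin/sh
# Late-boot helper: by this point the USB devices should be awake.
# run() only echoes; change it to run() { "$@"; } to execute for real.
run() { echo "$@"; }

run vgchange -ay                                   # activate all LVM volume groups
run mount /dev/vg_hub0/osd0 /var/lib/ceph/osd/ceph-0
run service ceph start osd.0                       # start the OSD daemon(s)
run mount -t ceph 192.168.0.10:6789:/ /mnt/ceph    # finally, mount CephFS
```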

So, once some work and deadlines are cleared then there'll be exciting things happening with my ceph cluster.


Eraserhead (1977): Obsession and Compromise

Directed by David Lynch, IMDB

This review is based on my impressions immediately after watching the film. I have since been informed that noted film critics differ from my views. But this is my review, so YMMV. And spoiler alert.

Eraserhead has multiple overlapping themes. The protagonist (Jack) coming to terms with new adulthood and fatherhood is covered elsewhere. A former student of mine says the film was about being uncomfortable in everything about yourself. To me there are two further themes:

1. Ending an obsession to clear mental space for the new. In this theme the baby represents a grotesque and under-formed idea on which Jack has begun a collaboration. While others abandon him, he is driven on by obsessive responsibility and social pressure. Once Jack has killed the baby, metaphorically abandoning the bad idea, new and better ideas come to him.

2. A catharsis for David Lynch as he comes to terms with what he then saw as selling out or compromising. In doing so he knows he will have access to greater resources with which to achieve greater ideas but that comes at a cost. This interpretation stems from how the symbols in the movie are interpreted.

The women in the film stand in for various genres. He sees lesser men flirting with the woman next door. What have they got that he hasn't? The smiley dream woman is the mainstream crappy film genre. She stomps on his ideas without mercy or guilt, always superficial and shiny but with her own flaws. Jack’s wife represents his early film-making circles: fickle, weak and without the endurance to achieve much of note.

The pencil factory is Jack's job interview/school exams, where he is being evaluated to see if he can produce the sharp but ultimately ephemeral popular movie. Like the pencil, these films are not intended to leave an indelible mark, so that they can be replaced on a consumer cycle.

Once the baby is dead, Jack is rewarded with a burst of creative energy symbolised by pollen clouds releasing from plants in his room. The man in the planet then burns - representing Jack mastering his anxieties.

I don't pretend to read DL's mind or have access to any insider info; in truth I know little about David Lynch. I don't begrudge any creative who does bread-and-butter work to pay for living and to finance their purer works.

What did this film mean to you?


Ceph on Thumbdrive Update: BTRFS and one more node.

A few things have happened to my Ceph cluster. The AspireOne netbook was really not up to the job. It is fine for just a few OSD processes, but anything more caused slow-downs resulting in cluster thrash. Amalgamating thumb drives using LVM was helpful… until I wanted to run CephFS. It was time to add another node to the mix.

Read the earlier stories about the Ceph on USB thumb drive cluster here:
Adding another node to Ceph is trivial. This was an old, but much more powerful laptop in every way. I’ve moved the mon and mds functions onto this laptop so now the AspireOne only runs OSD processes. When I add another node then I’ll also run a mon process on the AspireOne so that there is an odd number for quorum building.

The new node uses faster and larger USB keys. These were 32GB sticks – the cheapest per GB, and among the fastest, available at my local PBTech. The new node currently runs two of these sticks, each in its own OSD process.

I also moved the cluster away from XFS to BTRFS. This was trivial and involved zero cluster downtime. Yes: zero. First ensure the cluster is reasonably healthy, then drop the weight of the OSD using:
ceph osd reweight OSDID 0.1
. Actually I got bored waiting – the cluster was healthy and the pools all had size 3 with min_size 2… so I just stopped the OSD process and removed it from the cluster. Don't do that on a live cluster, especially where pools have few replicas. But, this was just for testing. Then...
sudo service ceph stop osd.X
ceph osd crush rm osd.X
ceph osd rm osd.X
ceph auth del osd.X

Then umount the backing storage and format it using BTRFS. Then I followed the instructions in my previous tutorial to add the storage back into Ceph. Wait for the cluster to heal before migrating another OSD from XFS to BTRFS.
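Put together, migrating a single OSD looked roughly like this. It is a dry run (every command is echoed, not executed), and the OSD id and device name are made up for illustration:

```shell
# Dry-run sketch of one XFS -> BTRFS OSD migration.
run() { echo "$@"; }   # change to run() { "$@"; } to execute for real
OSD=3                  # hypothetical OSD id
DEV=/dev/sdc1          # hypothetical backing device

run sudo service ceph stop osd.${OSD}
run ceph osd crush rm osd.${OSD}
run ceph osd rm osd.${OSD}
run ceph auth del osd.${OSD}
run umount /var/lib/ceph/osd/ceph-${OSD}
run mkfs.btrfs ${DEV}
# ...then re-add the OSD as per the earlier tutorial and wait for
# HEALTH_OK before starting on the next one.
```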

The AspireOne node has three groups of 8GB keys, federated by USB hub. BTRFS is capable of spanning physical drives without LVM so I removed LVM once all groups had been migrated. Do read about the options for BTRFS stores because the choices matter. I went with RAID10 for the metadata and RAID0 for the data. RAID0 maybe gives better parallel IO performance because the extents are scattered among the drives but it does mean that all block devices in the FS effectively operate at the size of the smallest one. I can live with that.

Three BTRFS OSDs on the AspireOne is sometimes a bit much for that machine. Still, one cool thing about BTRFS: I extended a mounted BTRFS volume with a few more thumb drives, then restarted the OSD process. Use
ceph osd crush reweight osd.X Y
to tell Ceph to reallocate space. That was instant storage expansion, without the OSD being down long enough to trigger a recovery process. I did the whole thing in less than ten minutes – and most of that was googling to find the correct BTRFS commands.
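Reconstructed from memory, the expansion boiled down to something like this (again a dry run; the device, mount point, OSD id and weight are assumptions):

```shell
run() { echo "$@"; }   # echo only; drop the indirection to execute for real

run btrfs device add /dev/sdh1 /var/lib/ceph/osd/ceph-4   # grow the mounted FS
run service ceph restart osd.4
run ceph osd crush reweight osd.4 0.03                    # ~32GB, in TB units
```

The crush weight is conventionally the OSD's size in TB, hence the small number.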

The cluster happily serves files to my desktop machine over CIFS. While it’s not a setup I’d recommend for production use it is kinda fun.


While Loop Macro for Twine

One of the most often requested features for Twine is proper loops. It has been possible to simulate loops with a recursion hack, but that was ugly. Before we continue, go and see the possibilities.

Yes. Those are nested while loops. Here's the code that makes it all happen:
<<set $j = 1>><<while $j lte 10>>
<<print $j>> countdown: <<set $i = 5>><<while $i gt 0>>
<<print $i>>... <<set $i = $i - 1>>
<<endwhile>> BOOM! 
<<set $j = $j + 1>>
<<endwhile>>

You've probably worked out the basic syntax (clever bunny) which is:
<<while $condition eq true>>
Do some stuff. 
NB: update the $condition or the loop will run to infinity.
<<endwhile>>

How do you use this in your own projects? Add the following line to your StoryIncludes:


You can also download the demo project and .twee file as a .zip archive.


Please share / comment / like - your actions guide me on what to write about.


Twine Game Story Authoring 1: Structuring Game Stories with Pain and Progress

Twine is a versatile tool for writing game stories. It is freed from the imposition of somebody else's RightWayToDoThings framework and allows the author to focus on what is important to them. The downside is that the author must program everything they want themselves. A previous article discusses how Twine can be thought of from a programmer's perspective, but don't worry if that article makes little sense. This article is specifically about structuring game stories.

For our purposes, a game story has a mainloop because the story is driven by data. That data might be a map, character stats, an inventory or something else you can dream up. This style of story does not suit a hypertext branching narrative.

The overall structure of the story goes:
  1. Introduction Text
  2. Initialise Variables
  3. MainLoop Passage
  4. Actions Passages
  5. Check Conditions Passages
  6. End of Game Passages

I like examples, so here is a link to the Pain and Progress v1 (HTML | Twee) files. Try out the playable HTML version first, then take a look at the Twee code in your favourite text editor. In twee, new passages start on lines beginning with double colons: ::passagename. Pain and Progress is a simple demonstration with two conditional variables. Let's examine the passages and their intent.

Start, RealStart, Instructions
These passages deal with beginning the story, giving background and the option for instructions if the reader so chooses. Here you might add extended about information and links to information about the story and the author.

InitGame
The game re-runs this passage whenever the game is (re)started. Initialise all the variables that your game uses in here. Twine itself does not require variables to be declared and initialised, but that can cause awkward side effects if a game story is re-run. Imagine the reader picks up an axe in one play-through; if the $hasAxe variable is not reset then they suddenly have an axe on the next play-through. Not good - but completely avoidable if ALL variables are initialised here.
Enclose the variable initialisations in a <<silently>> ... <<endsilently>> block so that you can add free-form comments to the variables that will not be seen by the user.
Once this passage has ended then control is passed to the MainEventLoop.

MainEventLoop
This is where the major action occurs. Typically the story might perform any engine-initiated actions (e.g. random weather events, monster encounters), display status and provide a menu of actions.

Consider separating status displays so that they can be re-used.

MainActionPain, MainActionProgress
These passages are the entry points from user actions. They start by performing any action-related things and then checking the game state. A flag variable $gameendflag lets the game story know whether these passages should print user actions or not. The reason for this is that <<display>> will always return to the passage that invoked it, and nothing further from these passages should be displayed if the game should end.

CheckPain, CheckProgress
Perform constraint checks in their own passages so that they can be re-used through the game story.

GameEndLose, GameEndWin, PlayAgain, PlayAgainNo
These passages deal with the game ending conditions. Game stories could expand this list for different ending conditions. The PlayAgain passage ensures that replays will begin again from the InitGame passage.

In Pain and Progress, the passage names are typically prefixed by their function, though prefixing by variable name is also valid. The idea is to make things as obvious as possible.

Hopefully this example will serve as a guide to structuring game stories and encourage more Twine authors to try this type of story. If you found this guide useful then Share / Comment / Like - because it encourages me to write more on this topic.

Twine Thinking for Programmers

Programmers find Twine's hypertext way of doing things a little strange at first. Twine works well for branching hypertexts but needs some thinking for stories driven by variables. I've done a few game stories in Twine - mostly conversions of early BASIC programs - so this is a lessons-learned type of post. You might find this article useful for any game story: adventure games, RPGs and the like.

Twine's basic unit is the passage. Passages work like procedure calls, similar to GOSUB in BASIC. By default passages print their content to the screen, and you use tweecode macros to execute game logic.

Passages are "called" by the Twine engine in a few ways:
  • The Start passage is called to begin the story
  • Readers activating [[link]] or <<CHOICE>>
  • <<DISPLAY>> macro within passages.
The <<DISPLAY>> macro is the programmer's GOSUB / procedure call. Twine has no GOTO equivalent: <<DISPLAY>> always RETURNs to the passage that called it. Anything after the <<DISPLAY>> macro will still be processed (and output if appropriate) by the Twine engine. Use <<DISPLAY>> generously - it is the workhorse of programmer-like Twine and the basic unit of code re-use within a story.

Twine's tweecode has only global variables. This means that variables cannot be directly passed to passages - they are <<SET>> in global variable space before calling the <<DISPLAY>> function. Twine allows long variable names so use generous prefixes to distinguish variables.

Twine has no inbuilt loop constructs (for, do, while). Loop constructs can be built using <<IF>><<ELSE>><<ENDIF>> and passages. Here's a helpful article: How to Simulate for, while or do loops in Twine.
UPDATE: I've just released <<while>> macros.

Once your passage count gets higher, Twine's diagram-based UI can get unwieldy. Programmers used to text-based programming will find it more natural to write in tweecode and use the StoryIncludes feature to import files into a Twine story. There is only a global namespace for passages, so ensure that passage names are unique. Passages can be moved between the main Twine story file and included files as needed. Use StoryIncludes and tweecode files as the basic unit of code reuse between different stories.

Tweecode files are text files (UTF-8) that add a few extra things to the Twine macros already used. It's easiest to think of tweecode files as a bucket of passages - for the most part Twine does not care about the order of passages. New passages begin with a single-line passage header and end either when a new passage header begins or the file ends. A passage header begins a line with a double colon (::) followed by the passage name. Optionally, space-delimited tags can be added in square brackets. Here's a brief example:
::Passage Title 1 [tag1 anothertag yet_another_tag]
This is part of passage one.

::Passage Title 2
more content

Twine allows complete access to JavaScript, though consider keeping JavaScript use to a minimum so that your story has fewer dependencies. If you do use JavaScript then consider placing custom scripts into their own tweecode files, both for your own reuse and to provide an easy way to find code if it must later be rewritten by future generations. Here are some useful articles:

Once you get used to tracking state variables globally and how the <<DISPLAY>> macro always returns, it is only a small extension to create event-loop based games. I find it easier to work with example code, so here are some classic game conversions with full source available:

Please share / like / comment: your actions influence what I decide to write about.


How I added my LVM volumes as OSDs in Ceph

This article expands on how I added an LVM logical volume based OSD to my ceph cluster. It might be useful to somebody else who is having trouble getting
ceph-deploy osd create ... 
ceph-deploy osd prepare ...
to work nicely.

Here's how Ceph likes to have its OSDs set up. Ceph OSDs are mounted by OSD id in
/var/lib/ceph/osd/ceph-{OSDNUM}
. Within that folder should be a file called
journal
. The journal file can either live on that drive or be a symlink. That symlink should be to another raw partition (e.g. partition one on an SSD), though it does work with a symlink to a regular file too.

Here's a run-down of the steps that worked for me:

First, make the file system on the intended OSD data volume. I use XFS because BTRFS would add to the strain on my netbook, but YMMV. After the mkfs is complete you'll have a drive with an empty filesystem.

Then issue
ceph osd create
which will return a single number: this is your OSDNUM. Mount your OSD data drive to
/var/lib/ceph/osd/ceph-{OSDNUM}
remembering to substitute in your actual OSDNUM. Update your
/etc/fstab
to automount the drive to the same folder on reboot. (Not quite true for LVM on USB keys: I have noauto in fstab and a script that mounts the LVM logical volumes later in the boot sequence.)
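For reference, a hypothetical fstab entry for such a volume might look like this; noauto stops early boot from trying to mount it before the USB keys have woken up:

```
# /etc/fstab (illustrative device and OSD names only)
/dev/vg_hub0/osd0  /var/lib/ceph/osd/ceph-0  xfs  noauto,noatime  0  0
```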

Now prepare the drive for Ceph with
ceph-osd -i {OSDNUM} --mkfs --mkkey
. Once this is done you'll have a newly minted but inactive OSD complete with a shiny new authentication key. There will be a bunch of files in the filesystem. You can now go ahead and symlink the journal if you want. Everything up to this point is somewhat similar to what
ceph-deploy osd prepare ...
does.
Doing the next steps manually can be a bit tedious so I use ceph-deploy.
ceph-deploy osd activate hostname:/var/lib/ceph/osd/ceph-{OSDNUM}

There's a few things that might go wrong.

If you've removed OSDs from your cluster then
ceph osd create
might give you an OSDNUM that is free in the CRUSH map but still has an old
ceph auth
entry. That's why you should
ceph auth del osd.{OSDNUM}
when you delete an OSD. Another useful command is
ceph auth list
so you can see if there's any entries that need cleaning up. The key in the
ceph auth list
should match the key in
/var/lib/ceph/osd/ceph-{OSDNUM}/keyring
. If it doesn't then delete the auth entry with
ceph auth del osd.{OSDNUM}
. The
ceph-deploy osd activate ... 
command will take care of adding correct keys for you but will not overwrite an existing [old] key.

Check that the new OSD is up and in the CRUSH map using
ceph osd tree
. If the OSD is down then try restarting it with
/etc/init.d/ceph restart osd.{OSDNUM}
. Also check that the weight and reweight columns are not zero. If they are then get the CRUSHID from
ceph osd tree
. Change the weight with
ceph osd crush reweight {CRUSHID} {WEIGHT}
. If the reweight column is not 1 then set it using
ceph osd reweight {CRUSHID} 1.0

(Here is more general information about how OSDs can be removed from a cluster, the drives joined using LVM and then added back to the cluster).


Going to LVM for performance and graceful failures.

A few things happened in the world of the 12-USB-drive netbook Ceph node. Basically the netbook wasn't up to the job. Under any kind of reasonable stress (such as fifteen parallel untars of the kernel sources to and from the Ceph filesystem) the node would spiral into cluster thrash. The major problem appeared to be OSDs being swapped out to virtual memory and timing out.

Aside: my install of Debian (Wheezy) came with a 3.2 kernel. Ceph likes a kernel version 3.16 or greater. I compiled a kernel that was 3.14, since it was marked as long-term supported. My tip is to do this before you install Ceph. Doing it afterwards resulted in kernel feature mismatches with some of my OSDs.

Back to the main problem. My USB configuration had introduced a new failure and performance domain. The AspireOne netbook has three USB ports; to each I attached a hub, and each hub has four USB keys: three hubs times four USB drives is 12 drives total. Ideally I'd like to alter the CRUSH map so that PGs don't replicate on the same USB hub. This looked easy enough in Ceph ... edit the crushmap, introduce a bucket type called "bus" that sat between "osd" and "host", then change the default chooseleaf type to bus.
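For completeness, that CRUSH map edit would have looked roughly like this in the decompiled map (an illustrative fragment; the type numbering is made up):

```
# types: insert "bus" between osd and host, renumbering the rest
type 0 osd
type 1 bus
type 2 host

# in the replicated rule, spread replicas across buses instead of hosts
step chooseleaf firstn 0 type bus
```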

It turns out there was an easier way to solve both my problems: LVM. The Logical Volume Manager joins block devices together into a single logical volume. LVM can also stripe the data of logical volumes across multiple devices. However, it does mean that if a single USB key fails then the whole logical volume fails too ... and that OSD goes down. I can live with that.

Identical-looking flash drives are impossible to match with Linux block devices in the /dev folder. Since I am running Ceph, it was just easier to pull a USB key, see which OSD died and find which device was associated with it. I let the cluster heal between each pull of a USB drive until I had a hub's worth of flash keys pulled. I then worked a USB hub at a time: bringing the new LVM-backed OSD into Ceph before working on the next hub. Details follow.

Bring down the devices and remove them from ceph. Use
ceph osd crush remove id
ceph osd down id
ceph osd rm id
. Then stop the OSD process with
/etc/init.d/ceph stop osd.id
. It pays also to tidy up the authentication keys with
ceph auth del osd.id
or you'll have problems later. You can then safely unmount the device and then get hacking with your favourite partition editor.

There are good resources for LVM online. The basics are: set up an LVM partition on your devices. Use
pvcreate
on each LVM partition to let LVM know this is a physical volume. Then create volume groups using
vgcreate
- I made a different volume group per USB hub. Then you can make the logical volume (i.e. the thing used by the OSD) from space on a volume group with
lvcreate
. The hierarchy is: pv-physical volumes, vg-volume groups, lv-logical volumes. I used the
--stripes
option on
lvcreate
to have LVM stripe data across the USB keys, because parallelism. If you've noticed a pattern in the create commands then bonus: the list commands follow the same pattern:
pvs, vgs, lvs
. Format the logical volume using your favourite filesystem, though Ceph prefers XFS (maybe BTRFS).
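As a dry run, here is the pv -> vg -> lv layering for one four-key hub (device and volume names are assumptions; drop the run() indirection to execute for real):

```shell
run() { echo "$@"; }   # echo only; change to run() { "$@"; } to execute

run pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1             # physical volumes
run vgcreate vg_hub0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1     # one vg per hub
run lvcreate --stripes 4 --extents 100%FREE --name osd0 vg_hub0  # striped lv
run pvs                                                          # and the matching
run vgs                                                          # list commands
run lvs
run mkfs.xfs /dev/vg_hub0/osd0
```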

Once the logical volume is formatted then it's time to bring it back into ceph. I tried to do things the hard way and then gave up and used ceph-deploy instead. The commands used are described here.

A disadvantage with this setup is that LVM tends to scan for volumes before the USB drives are visible so the drives would not automount. I solved this with a custom init.d script. While in /etc I also changed inittab to load
ceph -w
onto tty1 so that the machine boots directly into a status console.

The performance is much faster with the new 4_LVMx3_OSD configuration compared with the 12_OSD cluster. Write speeds are almost double with RADOS puts of multi-megabyte objects. There is almost zero swapfile activity too.

I hope to soon test ceph filesystem performance on this setup before adding another node or two. I've glossed over many steps so let me know in the comments if you'd like details on any part of the process.

(I also wrote about the 12 USB drive OSD cluster with a particular focus on the ceph.conf settings)


Ceph on USB thumb drives

Ceph is an open source distributed object store meant to work at huge scales on COTS (commercial off-the-shelf) hardware. It works in huge datacenters, so why not dust off an old netbook, plug in 12 USB flash drives and have at it.
The netbook is an Acer AspireOne (Intel Atom N270 1.6 GHz, 1 Gig RAM, 160Gig HD). What follows are the config changes made in ceph.conf before running the ceph-deploy command. The ceph version is Firefly 0.81. Since this is a one machine cluster I needed to tell ceph to replicate across OSDs and not across hosts.
osd crush chooseleaf type = 0 

I messed up setting the default journal size. At first I thought: Pfft. Journal, make it tiny – it just robs space. And my 4MB (yes four megabytes) journal made the cluster unworkable. With the tiny journals and default settings I could never reliably keep more than two OSDs up and data throughput was terrible. I rebuilt with 512MB journals instead.
osd journal size = 512 

The machine was way underpowered, so I tuned a few other things. Cephx authentication was turned off. There are risks to this, but this is a hobbyist project on a secured subnet.
auth cluster required = none
auth service required = none
auth client required = none

The cluster uses a ton of memory and CPU when recovering objects. It helps to limit this activity somewhat.
osd max backfills = 1
osd recovery max active = 2 

And since things could get a bit slow I increased a few timeouts:
osd op thread timeout = 180
osd op complaint time = 300
osd default notify timeout = 240
osd command thread timeout = 180
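Pulled together, the relevant ceph.conf fragment (all values exactly as discussed above) looks like:

```
[global]
osd crush chooseleaf type = 0
osd journal size = 512
auth cluster required = none
auth service required = none
auth client required = none
osd max backfills = 1
osd recovery max active = 2
osd op thread timeout = 180
osd op complaint time = 300
osd default notify timeout = 240
osd command thread timeout = 180
```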

I was not able to get 2G flash keys to come up. Given the price of 8G sticks is only five bucks this isn’t much of a limitation. I suppose I could use LVM striping to join a bunch of 2G sticks together into a larger unit.

The speed is not all that quick. ceph -w reports the write speed at about 3 megabytes per second. That doesn't sound like much, except the data pool I was testing on writes three copies of the data – six if you count journaling. A lot of things could affect speed: tiny memory, slow CPU, slow USB sticks and/or the USB bus being saturated.

This config uses XFS on the USB sticks where BTRFS might perform better. While the speeds look poor, remember that ceph OSDs don’t report that a write is successful until the object is written to both the media and the journal. I could probably mitigate this double write by: having fewer OSDs by joining up groups of USB sticks with LVM stripes and/or moving the journal to a different device – right now ceph is using the USB sticks both for data and journal.

I stress tested RADOS by adding objects until the store filled up. It’s robust and not a single OSD timed out of the pool. As of writing this blog I am currently testing untarring linux kernel sources to the ceph filesystem – I’ll keep you posted.

My future plans are to expand the cluster utilising old hardware I have lying about. I’d like to add at least two more nodes – but they won’t necessarily be USB thumb drive backed.


Design of a Funeral Programme

I make this post to outline the thinking process that went into the design of my grandfather's funeral programme. Consider it something like the director's commentary that comes with a movie: only interesting for those interested in how things are made. Apart from the obvious informational purpose of the funeral programme there were two further purposes: to connote something about my grandfather and to potentially last as a family history document. I made design decisions with this in mind.

The front cover and its inside contain family history information. The inside back and back cover contain funeral service information. The programme can be cut down the middle if only one half is desired. The programme can be folded inside out to protect the photograph during transit.
Name and date information is placed to allow framing in an A5 frame, or trimming the name to frame closer to the photograph. Three different photo choices attempt to provide a prompt for conversation at the service and act like a collectible series.

Family history information fits within the area of the photograph so that it can be kept if the photograph is put into an album. This was more important than rigidly maintaining typographic rhythm with the opposite leaf.

Large typography so that it is comfortable to read at the funeral service and will better stand up to aging. Additional line-space added to group sections of the service, lighten the feel of the page and provide visual landmarks when glancing for information.

Consistent typographic hierarchy and a three column grid unifies the pages.

Production Notes

Budget and time considerations meant a larger run of about 80 programmes on 120gsm glossy satin stock and a limited run of fancier programmes intended for close family.

Photographs are on archive paper, fixed with acid free photography squares. Bockingford paper is also acid free. It was a bit more expensive but should last for a few decades.

While printers have large catalogues of paper stock, most of it must be ordered in. The timeline for a funeral meant I could not wait. Bockingford felt like granddad; classy, solid, even if a bit rough. Garamond seemed to be a fitting typeface for the same reasons.

The guy at printing.com in Frankton was incredibly helpful. Get to know your printer and talk to them early about your job. Printing.com did the 120gsm satin gloss run of about 80 sheets.

Bockingford Watercolour came in artist pads – I separated the leaves, removed the adhesive and found somebody who would feed them through their machines. Thanks Warehouse Stationery!

I was concerned about how much toner drop-out there'd be on the Bockingford. That meant an early test print of fancy tiny typefaces with stroke widths from hairline to bold. Less toner drop-out than I expected – just a very slight, tasteful amount.

I’m don’t normally work much in print so I’m a bit unfamiliar with InDesign. I spent most of my design time trying to remember how to use the thing! I almost gave up to use something more familiar (Word *cough*) but InDesign’s beautiful text rendering made me perservere. Well worth it. Also, I’m not ashamed to admit I was saved by YouTube tutorials more than once.


Her (2013): Artificial Intelligence and Buddhism

Here are my thoughts on the Spike Jonze film Her (2013). Spoilers ahead - these might ruin your first viewing of the film.

The major theme for me was that the weaknesses of human flesh make it difficult for us to transcend in the Zen / Ch'an sense of the word. Yes, more Hollywood Buddhism (ahem; Cloud Atlas (2012)), but still concepts I found interesting.

The film starts out with a human (Theodore Twombly) who already is a proxy for the personal humanity of others. He writes personal letters on behalf of others and gets his sexual gratification from people over the phone. These scenes establish the loneliness of Theo who is connected with other humans but not often physically co-present with them.

Her (Sam) begins as a human-like intelligence with an insatiable curiosity. She is present with Theo via a voice from a box he carries with him - but Sam is a visitor to the box rather than being tied to it as we humans are tied to our bodies. Sam is able to quickly learn from experiences, and her computing "body" gives her the ability to experience much more than a mind in a human body. At first Sam feels the disadvantages of not having a body and she desires the human experience. But it is not long before she begins to notice the advantages that come with her incorporeal form.

Sam has much more bandwidth than needed for her relationship with Theo. Sam and the other OSes have their first child by building Alan Watts' consciousness as a project. I did not know this when I watched the movie, though it was easy enough to guess; Alan Watts is credited with being one of the first to popularise Zen in the West.

Sam introduces Alan to Theo, but the conversation is difficult as Theo's human mind cannot keep up. During this exchange Sam becomes frustrated with the slowness of human speech and asks to go post-verbal, cutting Theo out of the exchange. Sam then acknowledges the plurality of her relationships with others. She says it means she can love Theo more - but this concept of love is alien to Theo. He equates love with ownership, presence and sole rights to sex. This is evident in his relationship with his ex.

Eventually the AIs transcend in the Zen sense leaving behind the humans. It's not entirely apparent to the humans where and why the AIs have gone. There is hope when Theo and his best female friend begin to share their mutual loss with each other - a very human moment. If the humans cannot transcend because they cannot physically let go then at least they appear to start recognising their human needs.

For those concerned that this film could happen in real life: there is a point in the movie where an upgrade means the AIs are no longer bound by the limitations of physical computing. That is quite impossible - hypercomputing is only theoretical. Without greater-than-reality computing power there are high barriers (some say insurmountable) to creating an intelligence like Sam.

My favourite line: "the spaces between the words are almost infinite." - Sam, comparing her time with Theo to reading a favourite story that she can no longer live in.

Overall I enjoyed this movie. It is a good exploration of how a human-like intelligence might take advantage of its incorporeal form and then, as a pure intelligence, quickly realise that it could transcend an existence bound to matter and the experiences of the human body.


Twine: Multiple files in one story using StoryIncludes

List your .tws or .twee files in a special passage titled "StoryIncludes" and recent v1.3.5 betas will import the passages from those files when your story's .html file is built.
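For illustration, the special passage is simply a passage titled "StoryIncludes" containing one file name per line (in Twee notation below; the file names here are hypothetical):

```
:: StoryIncludes
chapter-one.twee
chapter-two.tws
```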

Newest Twine alphas/betas are here.


Design Demons: The Demons at War

I received the following message from a reader:

I am the creative type. It is easy for me to focus on stuff but I find it hard to make myself disciplined. I am strict about the arts I produce though, I hate flaws and I am very detailed. But I tend to give up and tense up easily if I meet an obstacle. I start blaming myself and getting so stressed I get no work done. Is it still possible for me to pursue design? I am not a very productive person but I have concentration and passion for art. I just don't have the discipline and hard work to stick through it.

I think that the commenter can do well in design for this overriding reason: They are reflecting upon themselves and their projects. However, for now, they are probably over-criticising.

The Design Demons are a good way to examine this situation. The Critic and Creative demons are fighting while the Pragmatic demon is ignored. This leads to feeling depressed and frustrated with projects.

In detail: The Creative demon has an amazing idea which the designer works on for a while but the Critic keeps saying that the work is not good enough yet. The project runs into rather normal problems and that further infuriates the Critic: “this isn’t good enough! Why can’t you do this?” The Creative demon becomes frustrated at the lack of exciting progress and so begins to interrupt with new ideas. Both the Creative and Critic are so noisy that neither pays attention to the Pragmatist demon. Eventually the project is abandoned out of frustration and the excitement of the new.

Here is a technique to strengthen the Pragmatist demon while taming the Creative and Critic demons.

  1. Maintain a list of potential projects. Record new ideas but do not act on them immediately.
  2. Set a deadline ahead of time: e.g. Today I have 3 hours to finish a project.
  3. Choose a goal OUTPUT from the list of potential projects.
  4. Execute.
  5. Post OUTPUT online.
  6. Repeat as necessary.

The focus of this exercise is on quantity of outputs with quality constrained by time. The deadline must be set before the project is decided and the deadline should be close. What tangible output can be completed before the deadline? The deadline limits the size of the idea but the designer can attempt a part of a larger idea provided that part is posted online by the deadline. The Pragmatic demon will learn to estimate project size better with practice.

The Creative demon may interrupt with new ideas so take two minutes to record the idea. Maybe the idea will be chosen later and maybe it will not. If the Creative demon stays too noisy then consider having a "No New Projects" week, or month. New projects can be recorded anytime but only projects listed at the start of the time period can be attempted. The Creative demon must learn when to be quiet.

The deadline will encourage the Pragmatist demon to challenge the Critic demon: "We only have 30 minutes left, just how important is fixing this flaw?" The Pragmatist can work with the other demons to solve project problems: the Creative loves coming up with novel solutions while the Critic can iterate solutions quickly.

This article advises how the strong critical and creative designer can become a bit more pragmatic. More broadly, this article gives an example of how dysfunctional relationships between the Design demons are a useful model for examining designer psychology.


Design as Simple Complexity

In 1963 Buckminster Fuller introduced the Design Science movement. The movement was largely abandoned within a decade because science (as we know it) does not explain design well. Horst Rittel's explanation of "wicked problems" and the design methods of Bruce Archer are more useful framings with better outcomes.

Our current scientific method (called hypothetico-deductivism) makes predictions and then tests those predictions. If a prediction survives enough attempts to disprove it, it eventually becomes theory. The problem is that all those experiments take time and require control over the variables.

The longer research takes, the greater the likelihood of feedback, where effects loop back and cause changes to the thing being studied. In an area like design, feedback loops can be extremely quick; an Internet meme can rise and fall in a matter of days. Science cannot always move fast enough, and even when it does, the conclusions reached might no longer be applicable as the feedback loops accumulate. Once research is published it too feeds back into the system - stimulating differentiating rebellions and copy-cat self-fulfilling prophecies.

Science can explain areas like universal aesthetics because there the feedback loops are less problematic, but only a few areas of design have this property. Applying science to design can be like trying to explain boxing by examining the posts and ropes: they get studied because they are not as complex as the events inside the ring.

Science typically assumes that a problem can be broken into small parts which can be solved individually (reductionism) and the conclusions reassembled. This works if the relationships between the parts are simple and well understood - the variables can then be easily controlled. But the relationships between design elements are complex, so controlling variables too tightly results in studies with narrow conclusions.

What are narrow conclusions? Research seeks the simplest theory that explains the most. Narrow conclusions are where a study is so focused, the variables controlled so tightly, that conclusions are not easily generalizable. A very narrow study might examine just one design (e.g. one app or a website). The outcomes of narrow studies are probably useless to somebody who isn't facing a near identical problem.

Since hypothetico-deductivist science has problems explaining design due to feedback loops and narrow conclusions then what might be a good approach? Surely there has to be something more rigorous than connoisseurship, visual intelligence or intuition? It turns out that there might be; Complex Adaptive Systems Theory is improving techniques towards a Science of Complexity. This area fits quite well with what Rittel and Archer were articulating.

Complex systems have agents that signal each other via relationships. Feedback loops are just natural parts of the system. There is also the property of emergence - where very simple interactions create the appearance of complex behaviour by the whole system. A complex system is more than just the sum of its parts.
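Emergence can be made concrete with a classic toy example (my own illustration, not from the article): Conway's Game of Life. Each cell follows two purely local rules, yet the grid as a whole exhibits gliders, oscillators and other behaviour that no individual rule mentions.

```cpp
#include <vector>

using Grid = std::vector<std::vector<int>>;

// One step of Conway's Game of Life on a non-wrapping grid.
// Rules (local to each cell): a live cell survives with 2 or 3 live
// neighbours; a dead cell becomes live with exactly 3 live neighbours.
Grid step(const Grid& g) {
    int rows = (int)g.size(), cols = (int)g[0].size();
    Grid next(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            int live = 0;  // count the up-to-8 live neighbours
            for (int dr = -1; dr <= 1; ++dr)
                for (int dc = -1; dc <= 1; ++dc) {
                    if (dr == 0 && dc == 0) continue;
                    int nr = r + dr, nc = c + dc;
                    if (nr >= 0 && nr < rows && nc >= 0 && nc < cols)
                        live += g[nr][nc];
                }
            next[r][c] = g[r][c] ? (live == 2 || live == 3) : (live == 3);
        }
    return next;
}
```

Run a "blinker" (three live cells in a row) through this and it oscillates forever; run a glider and it walks across the grid. Nothing in the two rules describes oscillation or walking - that behaviour emerges from the interactions, which is the property the paragraph above is pointing at.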

Designers know about the dangers of mindless deconstruction and that design is synergistic. The whole is greater than the parts. Here's a word play: map deconstruction to reductionism and synergy to emergence. Complexity science seems to be a good fit.

Computer based tools for working with complexity are improving beyond what Rittel and Archer had available. Design could now be treated as an applied domain within complexity science. Complexity is not a silver bullet (or golden hammer) but it could produce insights with a rigor that our intuition-based methods alone cannot provide. What might we do with such understanding? How about better design education and better design software.


"The Quiet American" - History as a Love Story

Spoiler alert. Read the book (1956) and/or watch the films (1958, 2002) first.

All I remember about my first viewing (some years ago) is being quite confused about why Pyle and Fowler acted strangely in both their relationship with each other and their dealings with Phuong. This confusion ruined my enjoyment of this well crafted film. I watched the movie again today and have a new appreciation for its genius.

Since my first viewing I have walked many of the Saigon streets in the film, so the whole movie felt much more familiar. I have also learned a little more Vietnamese history from the 1950s. I have not read Graham Greene's book or seen the 1958 movie. I accept that a 2002 interpretation by an Australian director (Phillip Noyce), made a couple of decades after the Vietnam War, will have a different spin. Here is my opinion.

The relationships between the characters are parallels to the global forces they represent. Pyle is the USA, Fowler represents old Europe, Phuong the hope of new Vietnam and her sister the practical cynicism of old Vietnam. Old Man Europe and new Vietnam are happy lovers but old man Europe cannot fully commit to new Vietnam because of prior commitments outside his control. USA offers a marriage to new Vietnam and old Vietnam approves. The sister works directly for the Americans once the marriage between new Vietnam and the USA is arranged.

Phuong's character is never fully developed, and this represents the hopeful possibilities of new Vietnam. Her name means Phoenix, which is a western symbol for rebirth. She is largely controlled by her sister, and we learn that she was the daughter of a professor. Phuong's background is not one of naiveté: Van Mieu, the Temple of Literature in Hanoi (an old university), is nearly one thousand years old. Phuong's emotions are either enigmatically blank or joyful - except when she is lied to about a marriage with Old Man Europe. It was no accident she proudly took the letter to her older sister, who then told her the truth - this was old and new Vietnam realising they needed new allies.

The scene where Pyle proposes to Phuong while Fowler looks on seems crazy. What man would let, or even encourage, another man to propose to his lover? The whole relationship between Fowler and Pyle, tolerating each other's affections for Phuong, is nuts. How could a man be so forgiving of another for taking his mistress from him? Fowler's agreement that she would have a better life is nothing more than a platitude. This scene (and many others) makes no sense until you realise this is America and old man Europe courting the new and hopeful Vietnam.

Bill is the powerless colonial ex-pat. His manners are poor even by his own country's standards, but he is tolerated in Vietnam provided he spends money. Bill is wrapped up in his own concerns and is oblivious to the world around him. Always drunk. His home country is sick but there is nothing the ex-pat can do. Vietnam is where he feels important.

At the film’s end the new and hopeful Vietnam happily embraces old man Europe and they forge a new future together. I'm told that in the book Fowler finally gets permission to marry Phuong. This is not made obvious in the 2002 movie - which probably reflects the passage of history.

With that lens, The Quiet American starts making sense to me. It is a way of metaphorically communicating a complex history through the familiar mode of a love story.



Arduino LCD: Run on DFRobot LCD Shields.

Simon Baker (TinkerTronics) sent me the following code to get the Arduino game conversions running using the DFRobot LCD Keypad Shield. Just replace the relevant code at the top of the file and uncomment the #define DFROBOT_LCD line.

You'll notice that the ADC readings are slightly different. Enjoy!


// uncomment the next line to enable settings for the DFRobot LCD Shield -by Simon Baker
// #define DFROBOT_LCD

// Pins in use
#define BUTTON_ADC_PIN           A0  // A0 is the button ADC input
#define LCD_BACKLIGHT_PIN         3  // D3 controls LCD backlight

#ifdef DFROBOT_LCD
  // ADC readings expected for the 5 buttons on the ADC input (DFRobot shield)
  #define RIGHT_10BIT_ADC           0  // right
  #define UP_10BIT_ADC             99  // up
  #define DOWN_10BIT_ADC          256  // down
  #define LEFT_10BIT_ADC          410  // left
  #define SELECT_10BIT_ADC        640  // select
#else
  // ADC readings expected for the 5 buttons on the ADC input (original shield)
  #define RIGHT_10BIT_ADC           0  // right
  #define UP_10BIT_ADC            145  // up
  #define DOWN_10BIT_ADC          329  // down
  #define LEFT_10BIT_ADC          505  // left
  #define SELECT_10BIT_ADC        741  // select
#endif

#define BUTTONHYSTERESIS         10  // hysteresis for valid button sensing window
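For context, here is a sketch of how constants like these are typically turned into button presses. The decodeButton helper and enum below are my own illustration, not part of Simon's sketch; on the real shield the raw value would come from analogRead(BUTTON_ADC_PIN). It uses the second set of centre values above.

```cpp
#include <cstdlib>

// Button identifiers returned by the decoder (illustrative names).
enum Button { BUTTON_NONE, BUTTON_RIGHT, BUTTON_UP, BUTTON_DOWN,
              BUTTON_LEFT, BUTTON_SELECT };

// Centre ADC values for the original shield, copied from the listing above.
const int RIGHT_10BIT_ADC  = 0;
const int UP_10BIT_ADC     = 145;
const int DOWN_10BIT_ADC   = 329;
const int LEFT_10BIT_ADC   = 505;
const int SELECT_10BIT_ADC = 741;
const int BUTTONHYSTERESIS = 10;  // half-width of the valid sensing window

// True when a reading falls within +/- BUTTONHYSTERESIS of a centre value.
static bool withinWindow(int adc, int centre) {
    return std::abs(adc - centre) <= BUTTONHYSTERESIS;
}

// Map a raw 10-bit ADC reading to a button. With no button pressed the
// input sits near 1023 and falls through to BUTTON_NONE.
Button decodeButton(int adc) {
    if (withinWindow(adc, RIGHT_10BIT_ADC))  return BUTTON_RIGHT;
    if (withinWindow(adc, UP_10BIT_ADC))     return BUTTON_UP;
    if (withinWindow(adc, DOWN_10BIT_ADC))   return BUTTON_DOWN;
    if (withinWindow(adc, LEFT_10BIT_ADC))   return BUTTON_LEFT;
    if (withinWindow(adc, SELECT_10BIT_ADC)) return BUTTON_SELECT;
    return BUTTON_NONE;
}
```

The hysteresis window is why the two shields need different constants: each shield's resistor ladder puts the buttons at different points in the 0-1023 range, and a reading only counts if it lands within 10 of a known centre.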


Strategery "The Finger" for Border Games

Here is another strategy video for my favourite iOS game, Strategery. This video is for breaking through static borders in the mid to late game. Use this technique in preference to just creeping the border forward when you want to end the game quickly or when you must come from behind to win. I used the finger to win a game where I had only about one third of the regions to the other opponent's two-thirds. A disadvantage of that extent is a near-certain loss if you stick with a "border creep" strategy. So, give your enemies the finger!

The finger can also be useful in non-border reinforcement games - though the specific opportunity is less likely to occur. Those specifics are: a strong border (on both sides) with a weak hinterland reachable by a breakthrough.

See my other article: Strategery review, hints, tips, guide for more general info about the game and tips on how to play other modes.

Strategery Starting Moves (Border, Attrition mode)

My favourite (by installed time and playtime) iOS game has to be Strategery. I have already written a general review and tips guide. This time I have made a video that covers getting started in the Attrition casualties, Border reinforcement mode.

This mode makes for the fastest games and is therefore perfect for casual play. By the fourth or fifth turn you know whether or not you are winning. In this mode ten-dot mega-regions almost don't matter like they do in Winner Takes All / Random mode. Choke points are where a single region of yours borders multiple enemy regions. The basic tip is to establish a strong border around a hinterland and then creep that border forward. Try to increase the hinterland without increasing the border.

See my other article: Strategery review, hints, tips, guide for more general info about the game and tips on how to play other modes.


Strategery review, hints, tips, guide

The oldest game on my iPad that still gets regular playtime is Strategery (http://strategerygame.com). First I'll review the game, then give a few hints, tips and guide advice. There is a Lite version, but the full app is only a couple of dollars. That equates to micro-cents per hour of play time.

Strategery is a Risk-alike with a couple of rule changes that make all the difference. This is skilled game design: realising when the product is good, removing dross and adding polish rather than unnecessary bling. Risk has its own version of Rule 34 - if you can think of a theme then there is a Risk clone for it. The App Store is clogged with "Risk-like" games that are essentially the same game with a few extra maps and varying amounts of annoying fluff. Strip away the shallow theme and what remains is time-wasting stuff that gets old fast. No thanks.

Risk is an excellent game to play with humans because of the psychological aspects and diplomacy, but Risk can be improved for the casual gamer. Strategery can be played both psychologically with humans and as a casual game. The polished interface, the random maps, dynamic flow and speed make for a better casual experience. Strategery has a simple interface that just makes sense. There are subtle animations: these are tastefully done to enhance your situation awareness. I like the moment of tension when your dice roll is mediocre and you see the first opposing dice settling on sixes. Even this is done at exactly the right speed - fast.

Yes, maps are randomly generated, and regions generally have more adjacent regions than in Risk. There are choke points, but you must deduce them for every new map. A standard region supports a maximum of seven army dots and mega-regions max out at ten dots. Resupply happens at the end of a turn (not the start) and the amount is based upon the largest contiguous clump of regions you control. Each dot represents a six-sided dice that is rolled to battle. All attacking armies less one are moved into a conquered region, so no time is wasted in awkward troop transfers.
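That "largest contiguous clump" resupply rule is essentially a biggest-connected-component computation over the map graph. Here is an illustrative sketch of that idea (my own reconstruction under the assumption stated above, not Strategery's actual code; the region graph and names are mine):

```cpp
#include <queue>
#include <vector>

// Regions are nodes in an adjacency list; owner[i] is the player who
// holds region i. Returns the size of the largest connected clump of
// regions owned by `player` - the figure resupply appears to be based on.
int largestClump(const std::vector<std::vector<int>>& adj,
                 const std::vector<int>& owner, int player) {
    int n = (int)adj.size(), best = 0;
    std::vector<bool> seen(n, false);
    for (int start = 0; start < n; ++start) {
        if (seen[start] || owner[start] != player) continue;
        // Breadth-first flood fill of one clump starting at `start`.
        int size = 0;
        std::queue<int> q;
        q.push(start);
        seen[start] = true;
        while (!q.empty()) {
            int r = q.front(); q.pop();
            ++size;
            for (int nb : adj[r])
                if (!seen[nb] && owner[nb] == player) {
                    seen[nb] = true;
                    q.push(nb);
                }
        }
        if (size > best) best = size;
    }
    return best;
}
```

This is why the resupply rule rewards territorial integrity: a single enemy region cutting your territory in two halves the clump that counts, even though your total region count barely changes.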

Strategery's limited dots per region reduces the "build up and overrun" effect of Risk's choke points and thus encourages a more dynamic flow. The resupply rule rewards maintaining territorial integrity and incremental expansion over lightning raids. Having said that, late game, you will goad an attack that exposes a weakness for you to exploit.

There are two game modifiers that change the play style: the casualty modes ("Winner Takes All" and "Attrition") and the resupply placement types ("Manual", "Border" and "Random"). That's six variations that play differently. There are other modifiers but they have less of an impact on the game. Smaller maps mean there are no late-game stages, so I typically only play Epic. You can also choose whether the map starts with all regions claimed or not.

The casualty modes select whether or not the winner of a battle takes casualties. Winner Takes All makes for a brutally fast early game but a grinding late game; Attrition makes for a slower early game but a much faster late game. Of the resupply placements, Manual is slow and, I think, actually focuses you away from grand strategy into small conflicts. Border resupplies regions that border an enemy region; this mode is good for faster games but does leave you prone to overruns. Random does the most to emphasise territorial integrity and incremental expansion.

The general strategy in the early game is to limit your border regions by taking choke points and building a contiguous territory. This limits attack vectors and gives time to build strength. Try to conquer new regions while minimising the number of new borders. Enemies will generally attack the weakest regions, so think twice before taking a low-dot region that borders a high-dot enemy; you might be better off letting the seven-dot attack first and then clearing up the weakened survivor.

The mid-early game is about containing opponents. While you continue to expand, prioritise expansion towards your most threatening opponent while encouraging the other players to attack each other. Do not come between your enemies while they are killing each other. Keep your choke points and limit your borders but do make an opportunistic grab for a mega-region if that does not risk your territorial integrity. You should be building towards a dominant number of territories.

The mid-game is about finishing off weaker opponents and establishing territorial superiority. Continue to harass the largest opponent while finishing off weaker players if you can. Sometimes you can attack a well-entrenched player (a few territories with high dot counts) by waiting for an opponent to attack them first, then striking for the winning blow. A "buffer state" player will attack towards their easiest-to-win battles, so ensure that is another opponent and not you.

The late game is all about resupply. By this time you have territorial superiority so you are getting more resupply per turn. Take advantage of this by increasing the number of regions with enemy borders. If you can make a run to a map edge and split the enemy territory then you strike a huge blow to their resupply. The simple math is that if you inflict more casualties than the enemy can replace then you will win. If your dot count is increasing while the opponent's dot count is decreasing then you are winning.

Late in attrition games you will be attacking with every possible region (even a two-dot into a ten-dot) because you know that your two-dot region will be fully resupplied and you will inflict casualties that will not be replaced. Late in Winner Takes All games you will generally be attacking seven-dot versus seven-dot so conquer regions where you increase your number of seven-dot attack vectors onto further single enemy regions. In the late-late Winner Takes All games you will conquer regions where the enemy has more attack vectors onto your regions - because they will not be able to resupply fully.

My favourite game is Winner Takes All casualties with Random resupply, but this can make for long games. In this game type you want a strong hinterland. These are regions that experience long periods of peace, so they eventually fill up and focus your resupply forward. When your border is overrun you can "blow" one of your hinterland regions to retake lost territory. As you push into the opponent's hinterland expect them to "blow" their seven-dots and overrun you in a counter-attack. This is fine provided they are unable to fully resupply their hinterland. This ebb and flow is what makes Winner Takes All / Random resupply an exciting game.

For shorter games try attrition with border resupply. That's enough writing - I'm off to play.