Play The Maryland Extra Credit Problem

A screenshot of a class problem from the University of Maryland has been doing the rounds. The teacher invites the students to vote to receive extra credit:

Here you have the opportunity to earn some extra credit on your final paper grade. Select whether you want 2 points or 6 points added onto your final paper grade. But there's a small catch: if more than 10% of the class selects 6 points, then no one gets any points. Your responses will be anonymous to the rest of the class; only I will see the responses.

This situation is a little different from the Prisoner's Dilemma made famous in game theory because nobody stands to lose anything: all outcomes are either neutral or a gain. From that vantage point the best course of action is always to vote for six points. However, I think there are some political dynamics at play that might persuade you to vote in a way that decreases the likelihood of the neutral outcome.
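To make the payoff structure concrete, here's a tiny sketch in Python (the class size and votes are invented for illustration):

```python
def payoffs(votes, threshold=0.10):
    """Everyone keeps the points they asked for, unless more than
    `threshold` of the class asked for 6 - then nobody gets anything."""
    n = len(votes)
    if votes.count(6) > threshold * n:
        return [0] * n
    return list(votes)

# In a class of 20, two 6-voters is exactly 10% - still safe:
print(payoffs([6, 6] + [2] * 18))    # everyone keeps their points
# A third 6-voter tips it over and wipes out the bonus for all:
print(payoffs([6, 6, 6] + [2] * 17))  # all zeros
```

Run it with different vote mixes and you can see why a handful of trolls is all it takes.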

Depending on your expected final grade, here's my advice on how to play:

If you're a troll then go for six points. #YOLO

If you're a high-scoring student (A / A+) then select 2 points. You don't need the extra marks. You're doing so well that you're above all this competitive stuff. Give the other people a chance for a few extra points. Unless you reckon that people should have to earn their place and you think the mountain top has room for only you... then go for 6 points.

Jo Beeplus: Always go for 6 points. You stand to lose nothing if nobody ends up getting any bonus points, and you might just get the six points to put you into A territory. You work hard; you deserve a shot at an A, right?

Jo SeePlus: Go for the 2 points. You might need the bonus marks to ensure you pass, so you don't want to risk getting zero bonus points.

Jo "NearFail": Go for six points. You might get them, and two points or zero points won't make a difference.

Jo "TotalFailure": You're so far behind that you should give others a shot to shine: go for 2 points. Unless you're spiteful.

Bonus evilness karma if you vote for 6 points while talking a big game about the virtues of choosing 2 points.

Go play!


Ceph and BitRot

BitRot is the tendency for data to degrade over time on storage devices. CERN reports error rates at the 10^-7 level, so BitRot is a significant problem at scale. This short article talks about how to deal with BitRot on Ceph clusters.
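As a back-of-the-envelope illustration (assuming, as I read the CERN figure, a per-byte corruption rate on the order of 10^-7):

```python
error_rate = 1e-7      # corrupted fraction of bytes, per the CERN figure
terabyte = 10**12      # bytes in a TB

expected_bad = error_rate * terabyte
print(expected_bad)    # roughly 100,000 corrupted bytes per TB at rest
```

That's a lot of silently rotten bytes per terabyte, which is why scrubbing matters.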

Ceph's main weapon against BitRot is the deep scrubbing process. Deep scrubbing verifies data in a placement group against its checksums. If an object fails this test then the placement group is marked inconsistent and the administrator should repair it. Note that a deep-scrub only detects an inconsistency; it does not attempt an automatic repair. By contrast, a normal scrub only checks object sizes and attributes.

Deep scrubbing is resource intensive and can cause a noticeable performance drop. You can temporarily disable scrubbing and deep-scrubbing:

ceph osd set noscrub
ceph osd set nodeep-scrub
And then re-enable scrubbing with:
ceph osd unset noscrub
ceph osd unset nodeep-scrub

The configuration options for scrubbing allow the administrator to suggest how quiet the cluster should be before initiating a scrub, how long the cluster is allowed to go before it must scrub, and how many scrubs can run in parallel.
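As a sketch, those knobs live in ceph.conf under the [osd] section. The option names below are from the Ceph documentation of this era; the values are only illustrative, so check the docs for your release:

```ini
[osd]
osd scrub load threshold = 0.5    # only start a scrub when system load is below this
osd scrub min interval = 86400    # may scrub as often as daily, load permitting (seconds)
osd scrub max interval = 604800   # force a scrub after a week regardless of load
osd deep scrub interval = 604800  # deep-scrub weekly
osd max scrubs = 1                # concurrent scrub operations per OSD
```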

It's also possible to manually trigger scrubs via the command line. This means that an administrator who doesn't mind writing code could scrub the placement groups in different pools according to different policies. This article scrubs on a seven-day cycle at night-time.
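Such a policy boils down to a small predicate evaluated per placement group. Here's a hedged Python sketch of just the decision logic - the function name, cycle length and night window are my own inventions, and the actual scrub would still be kicked off with ceph pg deep-scrub:

```python
from datetime import datetime, timedelta

def due_for_deep_scrub(last_deep_scrub, now, cycle_days=7,
                       night_start=22, night_end=6):
    """True when the PG's last deep scrub is older than the cycle
    and the current time falls inside the quiet night window."""
    overdue = now - last_deep_scrub >= timedelta(days=cycle_days)
    at_night = now.hour >= night_start or now.hour < night_end
    return overdue and at_night

# A PG last deep-scrubbed 8 days ago, checked at 2am: due.
print(due_for_deep_scrub(datetime(2015, 3, 1, 2, 0),
                         datetime(2015, 3, 9, 2, 0)))   # True
```

A cron job could run this over the output of ceph pg dump, with different cycle lengths per pool.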

Another source of data degradation is corruption in RAM before the data is written into the primary placement group. The best way to guard against this is to use ECC RAM. This particular problem is not unique to Ceph, but it is exacerbated because clusters, by their nature, increase the number of potential corruption points in the chain between application and storage device.

Ceph uses an underlying filesystem as a backing store, and this in turn sits on a block device. There are choices an administrator might make in those layers to help guard against BitRot - but there are also performance trade-offs. For example, ext4 and XFS do not protect against BitRot, but ZFS and btrfs can if they are configured correctly. Ars Technica has an excellent article on the topic called BitRot and Atomic COWs: Inside next-gen filesystems. Also, don't expect that RAID will detect or repair BitRot.
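For illustration only, here's the principle behind those checksumming filesystems in a few lines of Python - ZFS and btrfs do this per block in their metadata, and can repair from a redundant copy rather than merely detect:

```python
import hashlib

payload = bytearray(b"precious family photos")
stored_sum = hashlib.sha256(payload).hexdigest()  # recorded at write time

payload[5] ^= 0x01  # BitRot: a single flipped bit, years later

# On read, re-checksum and compare against the stored value:
corrupted = hashlib.sha256(payload).hexdigest() != stored_sum
print(corrupted)  # True - detected, though repair still needs a good replica
```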

For more technical details you can read this Q&A post by Sage Weil of Inktank on the Ceph mailing list.


Discussing Human-Centric and Sentience-First Ethics

This blog post comes out of a discussion on the morality of eating meat. The no-meat camp uses a sentience-first moral justification for rejecting the eating of meat. My position is different. Please be aware that I am not a professional philosopher and have no formal training in the topic. Those new to ethics should also be aware that disagreeing on moral justification doesn't necessarily make our daily behaviour different. For example, I would generally agree that we eat too much meat, though I don't go as far as to say "eat zero meat" is the only morally justified position.

My definition of human-centric morality has to presuppose that humans are sentient. Though the declaration of sentience as core to morality is an arbitrary claim.

I think my position comes from moral skepticism (anti-realist? nominalist?). That is, I think that any moral system includes some axiomatic declarations: at the base of it we have to declare "X is good" and derive from there. Whether there's a declared ontic good, some deontological axiom or other... I don't believe that these moral claims exist as anything other than emergent abstract objects. In other words, our morality does not exist without us.

A contributor raised the point that moral skepticism can derive consistent systems but these are not morals per se (rough paraphrase). I'd counter that all moral systems make axiomatic claims but not all axioms are created equal - we can measure them by making some epistemological assumptions.

Mine is roughly:
Survival of my species is good (axiom).
How do I know this: morals evolved as decision-making shortcuts to guide behaviours that benefit survival. If our morals didn't broadly fulfill this goal then we wouldn't be here to have morals. (There's the self-reference.)

What about sentience-first? No strawman intended.
Sentience is good (axiom).
How do I know this: I think, therefore I am. If I wasn't, then that's not good. (There's the self-reference.) It's no great stretch to assume that other sentiences exist. If I want my sentience respected then I should also respect the sentience of others.

I'd agree with this so far. But sentience-first embeds two further assumptions that I don't think are justified:

  1. Sentience is binary; you either have it or you don't.
    I don't think we have a scientific basis for drawing a hard line between what is and is not a conscious/sentient being. It might be easy when we talk about humans, mammals, reptiles, insects... but it gets harder in the relevantly-edible edge cases: Are colony organisms conscious? What if my computer becomes conscious?*
  2. All sentiences are worthy of equal "don't eat me" consideration. Why, especially if 1) is not clear?
*FWIW I think of sentience/consciousness as nominal objects; it's useful to talk about them but they don't actually exist except as emergent phenomena. To think otherwise might open the door to mind/body dualism.

A common rebuttal to the meat eaters is to claim that it is not necessary for humans to eat meat. I asked whether we should then attempt to convert the other omnivores to vegetarianism. The most logically consistent response I got was "Yes, we should, but to do so would lead to BadStuff". That begs the question: we probably agree that such a conversion would cause BadStuff, but what is it about the BadStuff that makes it bad? How does that badness link back to sentience-first? I'm interested in thoughts on the matter.


Ceph Cluster Diary March 2015

I am decommissioning all the flash-drive backed OSDs. On the old hardware that I have, the USB flash OSDs are much slower than USB spinning drives. With newer hardware I might get the full USB 3.0 speeds that these flash drives are capable of. This does not mean I am done with Ceph; quite the opposite: I am migrating my bulk network storage to Ceph, so I need speeds comparable to my current RAID6 NAS box. This is why I have attached USB spinner drives: 2 x 2TB, 1 x 1TB and 1 x 300GB. Ceph reports I have just under 5TB of usable space.

I will remove the netbook from the cluster because it’s too underpowered for Ceph and I have other projects that can use it. That leaves a single older USB2 Toshiba Satellite to run Ceph until I get my two other planned nodes online. Those nodes are in decent-sized tower cases with plenty of internal bays for more drives.

At present I’m not pressed for storage space. I back up the NAS to the Ceph cluster using rsync. That will do for now.

My future plan is to consider an SSD-based cache tier if I try another big data project. The speed I get from the object store will be the biggest driver of how much effort / budget goes in this direction.

I’m also watching developments in single-board computers (SBCs). They’re almost getting cheap enough, and maybe even powerful enough, to consider running an OSD on an SBC. Place an SBC and a hard drive into a stackable enclosure and there’s an easy way to grow a home storage cluster a node at a time. If conditions are right I could even try placing a remote Ceph node or two at a friend’s premises for automatic offsite redundancy over a VPN.

About those USB flash drives: they were a great way to learn about Ceph for not much money. You could do the same thing with virtual machines and virtual devices, I guess. But now – it’s time to be serious.


Forcing AIO on Ceph OSD journals

My Ceph cluster doesn't run all that quickly. On reads it was about 20% slower than my RAID5 NAS, and writes were 4x slower! Ouch. A good part of that is probably down to using USB flash keys, but...

Upon starting the OSDs I see this message:

journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
My OSDs are XFS-backed, which supports async writing to the journal, so let's set that up.

First, ssh into the ceph-deploy node and get the running version of the .conf file, replacing {headnode} with the hostname of the main monitor:

ceph-deploy --overwrite-conf config pull {headnode}
You can skip this step if your .conf is up to date.

Next, edit the .conf file and add the following. If you already have an [osd] section then update it accordingly.

[osd]
journal aio = true
journal dio = true
journal block align = true
journal force aio = true
This will try to apply these settings to all OSDs. You can control this on a per-OSD basis by adding sections named after individual OSDs, e.g. for osd.0:
[osd.0]
journal aio = true
journal dio = true
journal block align = true
journal force aio = true
And here's the official documentation.

Next, push the config back out to all the Ceph nodes and restart your OSDs. Separate hostnames with spaces.

ceph-deploy --overwrite-conf config push {headnode} {cephhost1} {cephhost2} ....
A word of warning: the --overwrite-conf flag is destructive. I'll leave it to you to take backups.
Then SSH into the various nodes, restarting the Ceph service as you go. I just restarted all Ceph services:
sudo service ceph restart
But it's probably okay to just restart the OSDs:
sudo service ceph restart osd

I experienced an almost 2x speed increase on writes, until the journals fill. Still slow, but getting much better. My fault for not having better hardware!

Read more about my Ceph Cluster.


Howto Deep-Scrub on All Ceph Placement Groups

Ceph automatically takes care of deep-scrubbing all placement groups periodically. The exact timing of that is tunable but you're probably here because you want to force deep-scrubs.

The basic command for deep-scrubbing is:

ceph pg deep-scrub <pg-id>

and you can find the placement group ID using:
ceph pg dump

And if you want to instruct all placement groups to deep-scrub, use the same script from repairing inconsistent PGs. Basically loop over all the active PGs, instructing each to deep-scrub:

ceph pg dump | grep -i active | cut -f 1 | while read i; do ceph pg deep-scrub ${i}; done

The repair article explains how this line of script works.

You can be more specific about which PGs are deep-scrubbed by altering the grep part of the script. For example, to only scrub active+clean PGs:
ceph pg dump | grep -i active+clean | cut -f 1 | while read i; do ceph pg deep-scrub ${i}; done

Some general caveats are in order. Repair your PGs before attempting to deep-scrub; it's safer to only scrub PGs that are active and clean. You can use

ceph pg dump_stuck
ceph health detail
to help find out what's going on. Here's a link to Ceph placement group statuses.

Good luck!


Bringing back an LVM backed volume

What can you do when an LVM-backed logical volume goes offline? This happens on my slower netbook with an LVM logical volume spanning about 20 USB flash drives. Sometimes those PVs go missing and the filesystem stops! Here are the steps I take to fix this problem without a reboot. My volume group is called "usb" and my logical volume is called "osd.2".

Since my volume is part of a Ceph cluster, I should ensure that the Ceph OSD is stopped: service ceph stop osd.2. You probably don't need to do this since the OSD probably exited once it saw errors on the filesystem.

Next, unmount the filesystem and mark the logical volume as inactive. We use the -f -l switches to force the unmount and deal with it lazily in the background. Without those switches the umount might freeze.
umount -f -l /dev/mapper/usb-osd.2
Marking the logical volume as inactive can be done in two ways. Prefer the first method since it is more specific; the second method will mark inactive all unmounted logical volumes, which might be overkill.
lvchange -a n usb/osd.2 -or- vgchange -a n

At this point I unplug all the USB drives and check the hubs. Plug in the USB keys a few at a time and use pvscan as you go to ensure that each USB key is being recognised. If you have a dead USB key then try again in another port. If that doesn't work then check the hubs have power - even replug the hub. Failing that try a reboot. Failing that... attempt to repair the LVM volume some other way. Since ceph already replicates data I don't bother running the LVM backed logical volumes on RAID - I just overwrite the LV and make a new one from the remaining USB flash drives.

Once all the PVs have come back, pvscan one last time, then vgscan. Now you should see that your volume groups have all their PVs in place. It's time to reactivate the logical volumes. Both methods will work, but again I prefer the first one since it is more specific.
lvchange -a y usb/osd.2 -or- vgchange -a y

All things going well, the logical volume is now active. It's a good idea to do a filesystem consistency check before you remount the drive. Since I use XFS I'll carry on with the steps for that. You should use whatever tools work for your filesystem.
mount /dev/mapper/usb-osd.2 - mounting the drive allows the journal to replay, which usually fixes any file inconsistency problems.
umount /dev/mapper/usb-osd.2 - unmount the drive before checking.
xfs_check /dev/mapper/usb-osd.2 - check the drive, and use xfs_repair /dev/mapper/usb-osd.2 if there are any errors.

Now we're ready to mount the logical volume again: mount /dev/mapper/usb-osd.2

And since I'm running Ceph I want to restart the OSD process: service ceph restart osd.2


Read more about my ceph cluster running on USB drives.


A Quick Review of USB flash drives from Apacer, Sandisk and Strontium

In the course of building my USB thumbdrive-based Ceph cluster I tried USB keys from three different manufacturers, with a total of five different varieties of USB drives. Here are my impressions of them.

Apacer: I have used 3 of the 8GB and 3 of the 32GB drives, both of the USB 3.0 Pen-Cap model (PBTech: 8GB | 32GB). One of the 8GB and 2 of the 32GB sticks failed (50%!). I have personal data on them so I don't want to return them for a refund in case the failures are not total. I have had great feedback about these sticks from others and I did love the speed. However, they do run quite hot and perhaps the heavy IO loads of Ceph melted them. They have a blinky activity LED that is a gentle blue. The drives will stack on top of each other but are too wide to stack side-by-side; stacking does increase the heat problem, however. The actual usable space is about 28-29GB, which was low compared to competitors, but the drives tended to be a bit cheaper.
The price probably makes them great for your briefcase/backpack but I wouldn't recommend them for high usage.

SanDisk: I have 12 of the 8GB Cruzer Blade drives and one of the 32GB larger slide-cover Ultra3 thumb drives (PBTech: 8GB | 32GB). The Cruzer Blade style drives do not have an LED, but thankfully I have had zero failures. The Cruzer Blade drives stack both horizontally and vertically in USB ports with a tiny bit of touching. The Ultra drive is too wide and tall to stack in USB ports, but it does have a small blue activity LED. The Ultra3 would be my favourite USB drive for the briefcase/backpack because you get more usable storage than Apacer, the price is not much more and there's no cap to lose.

Strontium: I have four of the 32GB JET USB DRIVEs (PBTech: 32GB). These are much too large to stack multiples in standard USB ports, though you might squeeze them in stacked vertically. I love the price, performance and reliability. The Strontium thumb drives have a red activity LED and they have never failed me. These are my favourite drives for Ceph when I want reliability. They do come with a cap, though, which I don't like for a briefcase/backpack drive.

It should be said that the economics of running Ceph on USB flash drives doesn't add up. USB hard drives give better price per GB and probably better performance too (particularly on my old laptops).

Read more about my Ceph Cluster.

IFCOMP2014: With Those We Love Alive

With Those We Love Alive, written by Porpentine and scored by Brenda Neotenomie, placed 5th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

Porpentine is one of the few authors I know much about. I have played CyberQueen, Cry$tal Warrior Ke$ha and Howling Dogs. Those works, and Porpentine’s interviews and posts, keep me thinking about the relationships between the tools, the medium, the story, and how these produce a final work. Hagiography over.

I particularly liked WTWLA for the rich symbolism invoked with small amounts of text. I like how this game looks like it could go well onto a small screen. I’m not sure if the "select my preference" purple links had an effect on the underlying game, but that didn’t matter because they helped ME construct a coherent picture of the world and how the relationships within it worked.

The game invites you to draw symbols onto your skin. I did not do this but I see how it could enhance the game. It would further immerse the player into the world and fit well with the way that time passes in the game.

The game is technically well crafted as prior actions etch onto other parts of the environment. The recognition of my prior craft reinforced how much my character had supported the machine from which I wanted to escape. Going along to go on. The music and colour changes support the story well.

Overall I was interested in where the story was going and how it got there. There are some lovely symbolic moments that will mean different things to many. I particularly like the princess uprising. Beautiful.

Highly recommended.

IFCOMP2014: Laterna Magica review

Laterna Magica by Jens Byriel placed 42nd (last) in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

This work occupies an interesting place for me. I’m not sure it intends to be FICTION. I read this as a dialogue between yourself and yourself, a way of exploring / confronting your own thinking about new-age spirituality. YMMV on subject matter like this... I found myself having to just buy in to certain beliefs in order to continue. Ugh, fine, it’s fiction.

As an IF work: I like the idea of using IF in a dialogue manner; a form of education, a conversation a reader can have to learn about things. This is different to Wikipedia, where knowledge is presented in chunked totality. This mode of education is an unfolding journey. I’d love to see work where the conversation changes based on what has come before – that would unlock great educational potential.

I initially couldn’t find an ending and went around in loops trying to explore as many answers as I could. I get that this was the whole point; to come and go as I choose, but there is no end to this quest/questioning. I’m going to admit to cheating and eventually reading the source code to make sure I got everything – ha! It turns out there is a proper ending - this is sort of a maze game after all.

I don't recommend this game.

IFCOMP2014: ICEPUNK review

ICEPUNK by pageboy placed 31st in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

A neat post-apocalyptic loner back story and an interesting narrative where you go around slurping up data out of the landscape. The interface riffs on old text console games and features some retro ASCII art. The map is randomised. It was also a bit clunky and slow to navigate – probably a feature of my impatience and my older computer.

The task of slurping up data got a bit tedious. It became more score-keeping than a chance to revisit the items of culture the game presented as data for the taking - less an unfolding story than a chore to complete. I did encounter one potential show-stopper bug that gave a totally blank screen. I got around this with some console tricks.

Booting up the computer gave a simple victory screen. About what you’d expect.

A story-line with the inhabitants of another bunker basically opting out of the game wasn’t particularly well followed up. Neither were the initial screens of gender selection.

I score this game well in technical merit. The back story was cool, but too much was exposed via exposition in the opening scenes rather than discovered as the story unfolded. I suspect the author had high ambitions but unfortunately ran out of time. I don’t want to sound at all discouraging, because this could be a great game story if polished and honed.

IFCOMP2014: Begscape review

Begscape by Porpentine placed 28th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

I went for this game because of its simple aesthetic. It feels like the PeopleSoft style text games from the 1970s (e.g. Chris Gaylo’s Highnoon). Those games helped me learn to program! I have done some fairly faithful conversions of these old games to Twine and Arduino. This means I already have an affinity for the style of game.

The choices are brutal and limited, even by the standards of the 1970s games. But those limits are the story. Poverty is not only about money, though money becomes the basic means of maintaining health. Poverty is about resources: health, mental state, charisma, and connectedness. That this game doesn’t let PC-me do a bunch of things RL-me could do (borrow, ask a relative or friend, apply for state benefits, work for food, etc.) speaks exactly to that.

Perfectly sized and I love it.

IFCOMP2014: Hill 160 review

Hill 160 by Mike Gerwat placed 36th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

This is the first parser game I've played in quite some time, so it took me a while to get used to things. Three hours in and... well, I opened up the walkthru and realised I'm not even a quarter of the way there. But I really want to finish this game - to find out what is going on.

It's probably my noob-ness with modern parsers that makes things a bit slow. Getting killed happened often, but it was great that I got instant retries. Some of the continuity got a bit patchy - e.g. I was able to put things back into a locked cabinet without opening it.

Research was generally great, except that Sergeant is a non-commissioned officer rank while the story-line assumes Grant is a junior officer. It might work better to change Grant to a Lieutenant and have Grant enter the army as an officer cadet. I was able to overlook this slip, though, to enjoy what bit of the story time allowed me to play.

I get the feeling that CYOA might've suited the game better than parser because the narrative pacing could be better controlled. While parser gave the illusion of freedom, the "narrative rails" the game was on were quite apparent. It wasn't like I had the agency to do much except what the story next required of me. Still, this is my first parser play in quite some time, so take what I said with a grain of salt.

Big game. Very engaging. A bit more testing and polishing and it would be great.

IFCOMP2014: The Secret Vaults of Kas the Betrayer review

The Secret Vaults of Kas the Betrayer by A. E. Jackson placed 33rd in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

As I started to play I got the strong feeling of a Fighting Fantasy novel - nothing to do with the author's surname. Cool - I have boxes and boxes of FF books in my garage, so count me as a genre fan.

There's an interesting enough back story with enough depth for a FF story. The room-with-options-that-you-return-to is common enough in FF stories, but SVKB gives too much freedom to move backwards and forwards. FF generally made the PC move the narrative forward, with only the occasional looping room. A looping room having to repeat its text is a limitation of the dead-tree format. In SVKB the looping text felt nostalgic at first but quickly became annoying. I like computer-based IF because it frees me from too much looping text. I wanted a short-cut back to the poem from the puzzles. Eventually I just opened the poem in another browser window.

The writing could do with edits. The coding is generally well crafted. The parchment-styled CSS was nice enough, though the default Responsive story format grey header/footer should've also been brought into theme.

I think basing the game on tightly built rooms with options made the game feel small. There are 66 Twine passages, though the density of <<if>>s makes comparison with a 400-passage FF book not exactly useful. However, Fighting Fantasy made the 400-paragraph limit feel like a large world because the decisions had a mixture of actions with large and small effects. FF covered a lot of ground in passages like "you walk for a few hours and ...." or "you pass through a village of nondescript buildings that look much like each other."

At times I was a bit confused about which links would examine things and which would perform actions on those things. Typically an examination action does not change state (except time, or triggering something as you walk to the NOUN). FF made this clear by usage; usually clear verb phrases. I like the style where links embedded in text examine / think (don't change state) and links at the bottom of passages DO things (change state). There isn't a correct answer though; do something that makes sense!

I keep coming back to the genre thing. I wonder if a tightly woven room puzzler with freedom to move back and forth might have better suited Inform. As it stands emulating the rooms in Twine worked well enough.

I did enjoy myself. That FF nostalgia was too strong for me to resist.


IFCOMP2014: Arqon: A Criminal's Journey review

Arqon: A Criminal's Journey by H. J. Hoke placed 39th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

You play the part of Arqon - a criminal basically forced to work as an assassin for the bureau of magic. It took me a while to get used to which commands worked - probably my lack of familiarity with parser games. I encountered a few bugs:

In the room before meeting the hermit, the description said I could go up, east and west, but "exits" said I could only go west and up.

After killing the Hobgoblin and meeting the hermit:
*** Run-time problem P47 (at paragraph 663 in the source text): Phrase applied to an incompatible kind of value

I almost couldn't make the game work after that. #sadface And just as I was starting to have fun...

OK, I managed to get past that bug and a misspelling of hermit as "hermet". Then the game was over. Again, just as I was starting to have fun.

A few things: a criminal recruited as an assassin... yeah, maybe. Your inventory being left intact after being imprisoned... nope. The mayor hanging out a few levels deeper than the dungeon... nope. If the placement issues were fixed then this could be enjoyable enough to play in a larger version.

The writing could do with some proofreading and edits. I didn't mind the combat, but I glossed over the combat text since it didn't seem to matter.

The verdict: skip this game unless there are major updates.


IFCOMP2014: The Black Lily Review

The Black Lily by Hannes Schüller placed 17th in the 20th Interactive Fiction Competition (IFCOMP2014). You can play online at ifdb. This series of blog posts is a set of mini-reviews I wrote as a fellow author to document my impressions of other games.

Spoilers below

At first I thought this was some story about a man reaching a point in his life where he was deciding to leave aside the passions of the flesh and settle for more stable relationships. This was some bookish withdrawn guy who lived a quiet but comfortable enough life. Then I met Lily. What the heck! So I go back and open up the safe. Oh. I'm a murderer. Heck. So I play again. I'm also maybe a woman. Well that shows how much I'm projecting into the story.

Yeah, my suspicions were raised by the title: The Black Lily being similar to the Black Dahlia (a murder victim). As I played I saw the recurring Black Lily motif as some kind of marker - a symbol that linked the encounters into something to be left behind. But then it became a great plot device to represent the urges of the player character without giving too much away.

At first I wondered why the shower scene was needed. Why not just go straight to the albums... but in the end it made sense. This was the time to discover the identity of the character and I missed it.

The commands were easy enough and the world richly described. I didn't get all the endings or much of a score. But Yep. I liked it. Play time is under an hour and you might want to play a few times.


Display text at end of passage macro for Twine

I was helping Harry Giles out with some modifications to his IFCOMP2014 entry Raik. It became useful to set text early in a passage that would be shown only at the end of that passage.

An easy way to achieve this in Twine is with custom macros. They are <<atend>> ... <<atendd>>.


<<atend>>This text appears at the end of the current passage.<<atendd>>And this text appears in the usual place.

How do you use this in your own projects? Add the following line to your StoryIncludes:


Why might you want this macro? Here are some examples. You might be inside an <<if>> macro from which you want to set a choice to appear at the end of the passage. Or you might be using the excellent ReplaceMacros from Glorious Trainwrecks and want text that always appears at the end of the passage no matter what the state of the rest of the passage.
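The effect of the macro pair can be described as a simple text transformation: whatever sits between <<atend>> and <<atendd>> is lifted out and re-attached after the rest of the passage. Here is a minimal Python sketch of that observable behaviour (the real macro is JavaScript running inside Twine, so this is only an illustration, not the macro's code):

```python
def apply_atend(passage_source):
    """Move text between <<atend>> and <<atendd>> to the end of the passage.

    A sketch of the macro's observable behaviour only; the actual macro
    is implemented in JavaScript inside the Twine engine.
    """
    open_tag, close_tag = "<<atend>>", "<<atendd>>"
    start = passage_source.find(open_tag)
    if start == -1:
        return passage_source  # no deferred text; passage is unchanged
    end = passage_source.find(close_tag, start)
    # Text wrapped by the macro pair is set aside...
    deferred = passage_source[start + len(open_tag):end]
    # ...the passage is rendered without it...
    rest = passage_source[:start] + passage_source[end + len(close_tag):]
    # ...and the deferred text is appended at the very end.
    return rest + deferred

source = ("<<atend>>This text appears at the end of the current passage."
          "<<atendd>>And this text appears in the usual place.")
print(apply_atend(source))
# → And this text appears in the usual place.This text appears at the end of the current passage.
```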
Please share / comment / like - your actions guide me on what to write about.


IFCOMP14: The Entropy Cage Post-Mortem

The Entropy Cage (ifdb) was my entry to the 20th Interactive Fiction Competition (ifcomp2014). This post-mortem is also posted in the Interactive Fiction Forums.

My thanks to all the players and reviewers. I’m overwhelmed; 14th was much higher than I expected. The reviewers (even the ones who disliked TEC) all gave me valuable feedback that helped hone my craft.

Here are some random thoughts:

The feeling of the game was meant to capture how an out-of-their-depth computer tech feels when there’s a major system meltdown. The boss is on their back, being both helpful and accusatory (push-pull), while the tech bashes the keyboard hoping something will work. That feeling of awkwardness doesn’t make for a particularly fun experience, so I shortened the game during testing to lessen the discomfort. The mechanic left no room for overtly showing progress or mastery, so it wasn’t a good choice to base a game around - and/or my execution was lacking.

I didn’t mention why the PC was on suspension. A few reviewers worked out that it was because the PC is actually out of their depth/incompetent. As the “first cyber-psychiatrist” there can’t have been a training course, industry-accepted best practice or established performance indicators. There’s an undercurrent of injustice because the PC feels they’ve been judged against invisible criteria. The PC signed up for a job with a whizz-bang buzzword description, and the actual job failed to deliver (psychiatry with that interface? are you kidding!). More than likely an overly-optimistic programmer (Jake) over-promised and under-delivered. The over-promises were then embellished by an HR person who glammed up the role to cover for a crappy salary with crappy T&Cs.

That does leave the question of why Jake gives the final big decision to you. Jake has no idea what to do! Rather than flipping a coin, he gives the decision to you so that you can be scapegoated when it goes wrong. (Neither choice would 100% avoid negative consequences.) The foreshadowing for this is in the dialog choices where Jake mentions suing you, and where Jake will insta-fire you if you threaten to bring in your lawyer.

Unfortunately the PC’s POV doesn’t give a particularly good lens into the religious war the subs are fighting. You only see the zombified subs (I’ve been bad, punish me) and the refugees. I will explore adding some “soldiers” into the mix to expose more of the battle story.

Starting with the alarm clock was noob. I’m cutting that whole scene. In terms of the game physics, it slightly alters how much your boss hates you and lets you choose a personality type that affects the game in only subtle ways. The scene is gone - and I get to remove a drug reference warning (the wake pills).

I want to write more in the TEC universe.

True Random has interesting metaphysical implications relating to divine simplicity, tawhid and creation; a realisation I came to after reading Gregory Chaitin on Algorithmic Information Theory, the Omega Construct and Meta-biology. That’s pretty hard maths so I’ve tried to present some connotations of that via fiction.

Algorithmic governance and the related issue of big data are real world issues that have ethical dimensions we should probably discuss as a society rather than let things just happen. Driverless cars are the tip of the ice-berg.

And personhood: what rights do subsentients have? They are effectively our slaves. At what point do they begin to deserve the rights we accord our biological pets?

About that sub.punish() theme. Totally an accident. Sorry. It is going to be removed from the post-comp release. But, the ifcomp version will always live in the TEC canon. The story would’ve been a very different one if I’d explored the sub/punish angle! I’m going to keep the term ‘sub’ (subsentient subroutine) but .punish() will become .reseed() to better fit with the theme. Those who enjoy innuendo still have an interesting angle with .reseed(), but the terminology change provides a more solid clue to the overall theme of the work. Reseeding is a term used in pseudorandom number algorithms.

The science is pretty fleshed out in TEC: I hope to reveal more of it in future works. But here’s a tidbit. Adding a bit to Base16 gives Base32, not Base17. TEC is consistent because the Forward Error Correcting codes cover a larger block of memory that contains more data than just the PID.
Why does changing error correction have anything to do with randomness? As a clue, here’s some pseudocode that generates a truly random number:
> a = 000000000000000000000000000000000000
> b = 000000000000000000000000000000000000
> while a equals b, loop
> rand = position of the bit that is different between a and b
The loop does exit because computers aren’t isolated from their environment.
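The pseudocode above can be sketched in Python. On real hardware the loop body would do nothing at all and simply wait for the environment (a cosmic ray, a RAM fault) to flip a bit; here the environmental upset is simulated with a pseudorandom flip so the loop actually terminates. The 36-bit width and the `flip` hook are my own illustrative choices, not anything from TEC:

```python
import random

def environmental_random(width=36, flip=None):
    """Two buffers start identical; the 'random' value is the index of
    the first bit that ends up differing between them.

    Real hardware would idly wait for a stray environmental bit flip.
    This simulation injects one via `flip()` (or `random`) so that the
    loop exits; the structure of the loop is what matters.
    """
    a = [0] * width
    b = [0] * width
    while a == b:
        # Reality: do nothing and wait for the environment.
        # Simulation: flip one bit of b.
        idx = flip() if flip else random.randrange(width)
        b[idx] ^= 1
    # The position of the differing bit is the random number.
    return next(i for i in range(width) if a[i] != b[i])
```

With a deterministic `flip` you can see the mechanics: `environmental_random(flip=lambda: 7)` returns 7, because bit 7 is the one the "environment" disturbed.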

And on construction:

I committed to the contest too late. I tried to enter last year but allowed IRL things to get in the way. This year I let IRL things slip in order to make IFCOMP. I’m glad I did because entering creatively re-energised me. I should’ve committed earlier so that I had more time to polish the game. The mainloop that controls the game pacing wasn’t written until the day before submission deadline. me.punish(): You may commence with the flagellation… hahaha actually no, really not my thing.

I considered not entering but decided that I NEEDED to. It’s been a rough several years and I’ve been creatively out of touch. My self-esteem needed an “achieve point”, even if that was just having participated, even in last place.

I suck at editing. No matter how many times I read and re-read, mistakes always slip through. I’ve put huge effort into improving but I’m not there yet. I’ll try to collab to get more testers and people with editing skill to help me; but I know collab will increase the lead-in time before I need to commit. I didn’t edit out all the universe specific jargon that added little to the game. Simu-sleeping, holosplays... ugh. Yeah, they paint the picture of the TEC universe but in the nugget sized TEC game they were distractions.

I intentionally aimed TEC towards the strengths of click-fics to give the pacing I wanted. TEC as a parser game would’ve been very different.

Thanks again to the reviewers for their feedback. I tried to personally thank you all, but it became too much to track.

The post-comp release of TEC will also be put into the Android Play Store. You’ll see me back next year, and not necessarily with a story set in the TEC universe. As much as I’d like to try to enter more IntFic competitions, I still have IRL concerns. Whaddup PhD!

Play The Entropy Cage in your browser. Playtimes are about 15 minutes.


My first Android App: FantaGen

UPDATE: FantaGen has added many more generators since its first release. Some fun, some serious and all very expressive.

I made FantaGen - a fantasy name generator to experiment with exploring cultural spaces with random generation. There are other generators so I aim for FantaGen to stand apart from the competition by being more comprehensive and more expressive.

You can get FantaGen from the Play Store

These types of generators are good for a bit of harmless fun but they do have a serious side to them. You can use the generator to help break some writer's block.

There are two cultures that have strange names: Simptee and Star Spirits. These were based on a simpler generative name system from a now abandoned interactive fiction project. It made sense in that story to have characters with names that weren't always the same. I thought the names were kinda cool so into FantaGen they go. Code recycling is good.

There are tons of fun image-based generators doing the rounds at the moment. These usually take the form of having the reader build up a name by looking up alternatives based on letters in their own name or their birth month. While these are fun for a single lookup, they don't give much variety for doing 10 random names at a time. My own feeling is that 26x26 alternatives is simply not rich enough when the user can see ten items at a time and refresh every half second. Significant extension to the generator is needed to add enough variety to be interesting.

I'm particularly proud of Fairy Names and Star Spirits. Fairy names use both vocabulary words and syntax variety to create fun diversity. When I can picture the character that goes with the name, I think the effect is good. The Star Spirits mix both words and syllable combinations to represent their angelic-like culture. The syllabary is fairly restricted to match how I imagine their angelic language to be, so the word-based titles help to create the variety.
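The word-plus-syllable approach can be sketched in a few lines of Python. The syllables and title words below are invented for illustration; they are not FantaGen's actual data:

```python
import random

# Toy data standing in for a restricted syllabary and word-based titles.
# These are made-up examples, not FantaGen's real lists.
SYLLABLES = ["ae", "li", "ra", "sel", "ith", "or"]
TITLES = ["Dawn", "Ember", "Star", "Whisper"]

def star_spirit_name(rng=random):
    """Combine a restricted syllabary with a word-based title."""
    # Two or three syllables gives a short, consistent-sounding core name.
    core = "".join(rng.choice(SYLLABLES) for _ in range(rng.randint(2, 3)))
    # The title word carries most of the variety and flavour.
    return f"{rng.choice(TITLES)} {core.capitalize()}"

print(star_spirit_name())  # output varies each run
```

Even this toy version shows why mixing the two sources helps: 6 syllables in cores of 2-3 gives 6² + 6³ = 252 cores, and 4 titles lifts that to 1008 combinations, far richer than a single 26x26 lookup table.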

The Roman name generator combines a syllabary with some real Roman names based on research. The syllable-based names are most likely anachronistic nonsense, though it would take much more work to do a historically accurate Roman name generator given how complex their naming patterns were. In particular, I don't handle gender very well at all: Claudius / Claudia, Julius / Julia. I would like to do more name generators based on real cultures since that ties into my semantic web interests.

Speaking of interests; this project also represents an interest I have in generative creativity. I want to be doing actual research into generative graphic design tools once my PhD is complete.

FantaGen is free and always will be. I have plans to continue extending FantaGen. As the list of generators grows I can see some ripe experimentation in how to navigate that space to keep things fun. I'm open to suggestions for new generators too.


An Experience of Orcball

Orcball is a team sword sport similar to a touch version of Rugby League except with padded weapons. I can only find references to it at Waikato University. The day I joined was a training session. I intend to publish a more technical article for those with a background in sword sports.

The people are friendly and welcoming. The emphasis seems firmly on having fun and improving skills. Players can borrow weapons from the Orcball club. You can see play videos online here.

As a newcomer I was asked to use a single long-sword. This is a sensible safety rule until they figure out that I’m not going to bash through the opposition while ignoring all hits. That did make my life a bit difficult because I was up against people with sword and shield and long/short sword dual wielders. Apparently the game itself has rules that reduce dual wielding and shields.

The boffer swords are heavy compared to foam swords and even to sport-fencing weapons. They are made from PVC pipes padded by dense foam and wrapped in duct tape. The construction also has “thrust-safe” tips. They were still light enough to thrust single handed.

The weapons are heavy enough that a hard swing still inflicts damage. The rules require a gentle touch and no strikes to the head. These are sensible safety rules given that nobody wears protection. Afterall, this is meant to be a casual game that almost anybody can join. Valid target areas are: above the knees excluding the hands and head.

Mutual strikes in Orcball are termed “Irish” and do not count as a hit but there seems to be a wide interpretation of this rule. Irish includes mutual hits when the swings are simultaneous (even when the strikes are not) and therefore excludes counter-attacks.

The no-head-hit rule did get a bit frustrating. Opponents often left their heads open and I couldn’t strike! But this is Orcball, and them’s the rules, for good (safe) reasons.

I’m about average height but I happened to be taller than the opposing team. I switched to a finger grip where the index finger goes over the cross-guard and around the front of the blade. This made lowering the blade angle easier and increased point control. I wouldn’t have done that if the fingers/hands were valid targets.

It also takes a reasonable amount of fitness to play well. I’m not fit and sat out more than once to rest. Also, Orcball is played on grass, and the tons of lateral movement ruined my ankles. Oh well.

Verdict: would play again.