2014-08-27

While Loop Macro for Twine

One of the most often requested features for Twine is proper loops. It has been possible to simulate loops with a recursion hack, but that was ugly. Before we continue, go and see the possibilities.

Yes. Those are nested while loops. Here's the code that makes it all happen:
<<set $j = 1>><<while $j lte 10>>
<<print $j>> countdown: <<set $i = 5>><<while $i gt 0>>
<<print $i>>... <<set $i = $i - 1>>
<<endwhile>> BOOM! 
<<set $j = $j + 1>>
<<endwhile>>

You've probably worked out the basic syntax (clever bunny) which is:
<<while $condition eq true>>
Do some stuff. 
NB: update the $condition or the loop will run to infinity.
<<endwhile>>

How do you use this in your own projects? Add the following line to your StoryIncludes:

http://github.com/tweecode/TwineQuest/raw/master/macros/9999%20Macro%20while.twee

You can also download the demo project and .twee file as a .zip archive.

Enjoy!

Please share / comment / like - your actions guide me on what to write about.

2014-08-25

Twine Game Story Authoring 1: Structuring Game Stories with Pain and Progress

Twine is a versatile tool for writing game stories. It is freed from the imposition of somebody else's RightWayToDoThings framework and allows the author to focus on what is important to them. The downside is that the author must program everything they want themselves. A previous article discusses how Twine can be thought of from a programmer's perspective, but don't worry if that article makes little sense. This article is specifically about structuring game stories.

For our purposes, a game story has a main loop because the story is driven by data. That data might be a map, character stats, an inventory or something else you can dream up. This structure does not suit a purely branching hypertext narrative.

The overall structure of the story goes:
  1. Introduction Text
  2. Initialise Variables
  3. MainLoop Passage
  4. Actions Passages
  5. Check Conditions Passages
  6. End of Game Passages

I like examples, so here is a link to the Pain and Progress v1 (HTML | Twee) files. Try out the playable HTML version first, then take a look at the Twee code in your favourite text editor. In twee, new passages start on lines beginning with double colons: ::passagename. Pain and Progress is a simple demonstration with two conditional variables. Let's examine the passages and their intent.

Start, RealStart, Instructions
These passages deal with beginning the story, giving background and offering instructions if the reader wants them. Here you might add extended "about" information and links to more about the story and the author.

InitGame
The game re-runs this passage whenever the game is (re)started. Initialise all the variables that your game uses here. Twine itself does not require variables to be declared and initialised, but skipping this can cause awkward side effects if a game story is re-run. Imagine the reader picks up an axe in one play-through; if the $hasAxe variable is not reset they suddenly have an axe on the next play-through. Not good - but completely avoidable if ALL variables are initialised here.
Enclose the variable initialisations in a <<silently>> ... <<endsilently>> block so that you can add free-form comments to the variables that will not be seen by the user.
Once this passage has ended then control is passed to the MainEventLoop.
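
As a sketch, an InitGame passage might look like the following ($pain and $progress are illustrative names; $hasAxe and $gameendflag are the flags discussed in this article):
::InitGame
<<silently>>
Reset every variable on each (re)start. Text inside a silently block is hidden, so free-form comments like this are fine.
<<set $hasAxe = false>>
<<set $gameendflag = false>>
<<set $pain = 0>>
<<set $progress = 0>>
<<endsilently>>
<<display "MainEventLoop">>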

MainEventLoop
This is where the major action occurs. Typically the story might perform any engine initiated actions (e.g. random weather events, monster encounters), display status and provide a menu of actions.
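
A minimal sketch of such a passage (the link text is illustrative):
::MainEventLoop
<<display "MainStatus">>
What will you do?
[[Endure the pain|MainActionPain]]
[[Push for progress|MainActionProgress]]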

MainStatus
Consider separating status displays so that they can be re-used.

MainActionPain, MainActionProgress
These passages are the entry points from user actions. They start by performing any action-related things and then checking the game state. A flag variable $gameendflag lets the game story know whether it should print further user actions or not. The reason for this is that <<display>> will always return to the passage that invoked it, and nothing further from these passages should be displayed if the game is over.
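
A sketch of the pattern (the action and flavour text are illustrative):
::MainActionPain
<<set $pain = $pain + 1>>
<<display "CheckPain">>
<<if $gameendflag eq false>>
You grit your teeth and press on.
<<display "MainEventLoop">>
<<endif>>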

CheckPain, CheckProgress
These passages perform the constraint checks. Keeping the checks in their own passages means they can be re-used throughout the game story.
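
For example (the threshold of 10 is illustrative):
::CheckPain
<<if $pain gte 10>>
<<set $gameendflag = true>>
<<display "GameEndLose">>
<<endif>>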

GameEndLose, GameEndWin, PlayAgain, PlayAgainNo
These passages deal with the game ending conditions. Game stories could expand this list for different ending conditions. The PlayAgain passage ensures that replays will begin again from the InitGame passage.

In Pain and Progress, the passage names are typically prefixed by their function, though prefixing by variable name is also valid. The idea is to make things as obvious as possible.

Hopefully this example will serve as a guide to structuring game stories and encourage more Twine authors to try this type of story. If you found this guide useful then Share / Comment / Like - because it encourages me to write more on this topic.

Twine Thinking for Programmers

Programmers find Twine's hypertext way of doing things a little strange at first. Twine works well for branching hypertexts but takes some rethinking for stories driven by variables. I've done a few game stories in Twine - mostly conversions of early BASIC programs - so this is a lessons-learned type of blog. You might find this article useful for any game story: usually adventure games and RPGs.

Twine's basic unit is the passage. Passages work like procedure calls - similar to GOSUB in BASIC. By default passages print their content to the screen, and you use tweecode macros to execute game logic.

Passages are "called" by the Twine engine in a few ways:
  • The Start passage is called to begin the story
  • Readers activating [[link]] or <<CHOICE>>
  • <<DISPLAY>> macro within passages.
The <<DISPLAY>> macro is the programmer's GOSUB / procedure call. Twine has no GOTO equivalent: <<DISPLAY>> always RETURNs to the passage that called it. Anything after the <<DISPLAY>> macro will still be processed (and output if appropriate) by the Twine engine. Use <<DISPLAY>> generously - it is the workhorse of programmer-like Twine and the basic unit of code re-use within a story.

Twine's tweecode has only global variables. This means that variables cannot be passed directly to passages - they are <<SET>> in global variable space before calling the <<DISPLAY>> macro. Twine allows long variable names, so use generous prefixes to distinguish variables.
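
For example, to "pass" values to a reusable passage, set prefixed globals first (the passage and variable names here are illustrative):
<<set $combatEnemyName = "goblin">>
<<set $combatEnemyDamage = 3>>
<<display "ResolveCombat">>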

Twine has no inbuilt loop constructs (for, do, while). Loop constructs can be built using <<IF>><<ELSE>><<ENDIF>> and passages. Here's a helpful article: How to Simulate for, while or do loops in Twine.
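
As a sketch, the simulated-loop pattern is a passage that re-displays itself while a condition holds (passage and variable names illustrative; set $i before displaying it):
::Countdown
<<print $i>>...
<<set $i = $i - 1>>
<<if $i gt 0>><<display "Countdown">><<endif>>
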
UPDATE: I've just released <<while>> macros.

As your passage count grows, Twine's diagram-based UI can get unwieldy. Programmers used to text-based programming will find it more natural to write in tweecode and use the StoryIncludes feature to import files into a Twine story. There is only a global namespace for passages, so ensure that passage names are unique. Passages can be moved between the main Twine story file and included files as needed. Use StoryIncludes and tweecode files as the basic unit of code reuse between different stories.

Tweecode files are text files (UTF-8) that add a few extra things to the Twine macros already used. It's easiest to think of tweecode files as a bucket of passages - for the most part Twine does not care about the order of passages. New passages begin with a single-line passage header and end either when a new passage header begins or the file ends. A passage header begins a line with a double colon (::) followed by the passage name. Optionally, space-delimited tags can be added within square brackets. Here's a brief example:
::Passage Title 1 [tag1 anothertag yet_another_tag]
This is part of passage one.

::Passage Title 2
more content

Twine allows complete access to JavaScript, though consider keeping JavaScript use to a minimum so that your story has fewer dependencies. If you do use JavaScript then consider placing custom scripts into their own tweecode files, both for your own reuse and to provide an easy way to find the code if it must later be rewritten by future generations.

Once you get used to tracking state variables globally and to the way the <<DISPLAY>> macro always returns, it is only a small extension to create event-loop based games. I find it easier to work with example code, so there are some classic game conversions with full source available.

Please share / like / comment: your actions influence what I decide to write about.

2014-08-15

How I added my LVM volumes as OSDs in Ceph

This article expands on how I added an LVM logical-volume backed OSD to my ceph cluster. It might be useful to somebody else who is having trouble getting ceph-deploy osd create ... or ceph-deploy osd prepare ... to work nicely.

Here's how Ceph likes to have its OSDs set up. Ceph OSDs are mounted by OSD id in /var/lib/ceph/osd/ceph-*. Within that folder should be a file called journal. The journal file can either live on that drive or be a symlink. That symlink should be to another raw partition (e.g. partition one on an SSD), though it does work with a symlink to a regular file too.

Here's a run-down of the steps that worked for me:

First mkfs the filesystem on the intended OSD data volume. I use XFS because BTRFS would add to the strain on my netbook, but YMMV. After the mkfs is complete you'll have a drive with an empty filesystem.
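
For example, with XFS (the logical volume name is illustrative):
mkfs.xfs /dev/vg_hub1/lv_osd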

Then issue ceph osd create, which will return a single number: this is your OSDNUM. Mount your OSD data drive to /var/lib/ceph/osd/ceph-{OSDNUM}, remembering to substitute in your actual OSDNUM. Update your /etc/fstab to automount the drive to the same folder on reboot. (Not quite true for LVM on USB keys: I have noauto in fstab and a script that mounts the LVM logical volumes later in the boot sequence.)
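
Roughly, reusing the illustrative volume name from above:
OSDNUM=$(ceph osd create)
mkdir -p /var/lib/ceph/osd/ceph-${OSDNUM}
mount /dev/vg_hub1/lv_osd /var/lib/ceph/osd/ceph-${OSDNUM}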

Now prepare the drive for Ceph with ceph-osd -i {OSDNUM} --mkfs --mkkey. Once this is done you'll have a newly minted but inactive OSD, complete with a shiny new authentication key. There will be a bunch of files in the filesystem. You can now go ahead and symlink the journal if you want. Everything up to this point is somewhat similar to what ceph-deploy osd prepare does.
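
If you do symlink the journal, a sketch of the idea is below (/dev/sda1 is an illustrative SSD partition; --mkjournal rebuilds the journal at the new location):
rm /var/lib/ceph/osd/ceph-{OSDNUM}/journal
ln -s /dev/sda1 /var/lib/ceph/osd/ceph-{OSDNUM}/journal
ceph-osd -i {OSDNUM} --mkjournal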

Doing the next steps manually can be a bit tedious, so I use ceph-deploy:
ceph-deploy osd activate hostname:/var/lib/ceph/osd/ceph-{OSDNUM}

There are a few things that might go wrong.

If you've removed OSDs from your cluster then ceph osd create might give you an OSDNUM that is free in the CRUSH map but still has an old ceph auth entry. That's why you should ceph auth del osd.{OSDNUM} when you delete an OSD. Another useful command is ceph auth list, which lets you see if there are any entries that need cleaning up. The key shown in ceph auth list should match the key in /var/lib/ceph/osd/ceph-{OSDNUM}. If it doesn't, delete the auth entry with ceph auth del osd.{OSDNUM}. The ceph-deploy osd activate ... command will take care of adding correct keys for you but will not overwrite an existing (old) key.

Check that the new OSD is up and in the CRUSH map using ceph osd tree. If the OSD is down then try restarting it with /etc/init.d/ceph restart osd.{OSDNUM}. Also check that the weight and reweight columns are not zero. If they are, get the CRUSHID from ceph osd tree and change the weight with ceph osd crush reweight {CRUSHID} {WEIGHT}. If the reweight column is not 1 then set it with ceph osd reweight {CRUSHID} 1.0.
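
Put together, a quick check of OSD 3 (the number is illustrative) might run:
ceph osd tree                        # is the new OSD up and in?
/etc/init.d/ceph restart osd.3       # only if it shows down
ceph osd crush reweight osd.3 1.0    # only if the weight column is zero
ceph osd reweight 3 1.0              # only if the reweight column is zero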


(Here is more general information about how OSDs can be removed from a cluster, the drives joined using LVM and then added back to the cluster).

2014-08-14

Going to LVM for performance and graceful failures.

A few things happened in the world of the 12-USB-drive netbook ceph node. Basically the netbook wasn't up to the job. Under any kind of reasonable stress (such as fifteen parallel untars of the kernel sources to and from the ceph filesystem) the node would spiral into cluster thrash. The major problem appeared to be OSDs being swapped out to virtual memory and timing out.

Aside: my install of debian (Wheezy) came with a 3.2 kernel. Ceph likes a kernel version of 3.16 or greater. I compiled a 3.14 kernel since it was marked as long-term supported. My tip is to do this before you install ceph. Doing it afterwards resulted in kernel feature mismatches with some of my OSDs.

Back to the main problem. My USB configuration had introduced a new failure and performance domain. The AspireOne netbook has three USB ports; to each I attached a hub, and each hub has four USB keys: three hubs times four USB drives is 12 drives total. Ideally I'd like to alter the CRUSH map so that PGs don't replicate on the same USB hub. This looked easy enough in ceph: edit the crushmap, introduce a bucket type called "bus" that sat between "osd" and "host", then change the default chooseleaf type to bus.

It turns out there was an easier way to solve both my problems: LVM. The Logical Volume Manager joins block devices together into a single logical volume. LVM can also stripe data across the underlying devices. However, it does mean that if a single USB key fails then the whole logical volume fails too ... and that OSD goes down. I can live with that.

Identical-looking flash drives are impossible to match with linux block devices in the /dev folder. Since I was running ceph it was easier to just pull a USB key, see which OSD died and find which device was associated with it. I let the cluster heal in between each pull of a USB drive until I had a hub's worth of flash keys pulled. I then worked a USB hub at a time: bringing the new LVM-backed OSD into ceph before working on the next hub. Details follow.

Bring down the devices and remove them from ceph. Use ceph osd crush remove osd.{id}, then ceph osd down {id} and ceph osd rm {id}. Then stop the OSD process with /etc/init.d/ceph stop osd.{id}. It also pays to tidy up the authentication keys with ceph auth del osd.{id} or you'll have problems later. You can then safely unmount the device and get hacking with your favourite partition editor.
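
As a single sequence, removing OSD 3 (the number is illustrative) looks something like:
ceph osd crush remove osd.3
ceph osd down 3
ceph osd rm 3
/etc/init.d/ceph stop osd.3
ceph auth del osd.3
umount /var/lib/ceph/osd/ceph-3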

There are good resources for LVM online. The basics are: set up an LVM partition on each of your devices. Use pvcreate on each LVM partition to let LVM know it is a physical volume. Then create volume groups using vgcreate - I made a different volume group per USB hub. Then you can make the logical volume (i.e. the thing used by the OSD) from space on a volume group with lvcreate. The hierarchy is: pv (physical volumes), vg (volume groups), lv (logical volumes). I used the -i option on lvcreate to have LVM stripe data across the USB keys, because parallelism. If you've noticed a pattern in the create commands then bonus: the list commands follow the same pattern: pvs, vgs, lvs. Format the logical volume using your favourite filesystem, though ceph prefers XFS (maybe BTRFS).
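
For one hub's worth of keys that might look like this (device and volume names are illustrative):
pvcreate /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1           # mark each key's LVM partition as a physical volume
vgcreate vg_hub1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # one volume group per hub
lvcreate -i 4 -l 100%FREE -n lv_osd vg_hub1                # -i 4 stripes across the four keys
mkfs.xfs /dev/vg_hub1/lv_osd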

Once the logical volume is formatted then it's time to bring it back into ceph. I tried to do things the hard way and then gave up and used ceph-deploy instead. The commands used are described here.

A disadvantage with this setup is that LVM tends to scan for volumes before the USB drives are visible, so the drives would not automount. I solved this with a custom init.d script. While in /etc I also changed inittab to load ceph -w onto tty1 so that the machine boots directly into a status console.

Performance is much better with the new 4_LVMx3_OSD configuration than with the 12_OSD cluster. Write speeds are almost double with RADOS puts of multi-megabyte objects. There is almost zero swapfile activity too.

I hope to soon test ceph filesystem performance on this setup before adding another node or two. I've glossed over many steps so let me know in the comments if you'd like details on any part of the process.

(I also wrote about the 12 USB drive OSD cluster with a particular focus on the ceph.conf settings)

2014-08-11

Ceph on USB thumb drives

Ceph is an open source distributed object store meant to work at huge scale on COTS (commercial off-the-shelf) hardware. It works in huge datacenters, so why not dust off an old netbook, plug in 12 USB flash drives and have at it.
The netbook is an Acer AspireOne (Intel Atom N270 1.6 GHz, 1 Gig RAM, 160Gig HD). What follows are the config changes made in ceph.conf before running the ceph-deploy command. The ceph version is Firefly 0.81. Since this is a one machine cluster I needed to tell ceph to replicate across OSDs and not across hosts.
[global]
osd crush chooseleaf type = 0 

I messed up setting the default journal size. At first I thought: Pfft. Journal? Make it tiny - it just robs space. And my 4MB (yes, four megabytes) journal made the cluster unworkable. With the tiny journals and default settings I could never reliably keep more than two OSDs up, and data throughput was terrible. I rebuilt with 512MB journals instead.
[osd]
osd journal size = 512 

The machine was way underpowered, so I tuned a few other things. Authentication (cephx) was turned off. There are risks to this, but this is a hobbyist project on a secured subnet.
[global]
auth cluster required = none
auth service required = none
auth client required = none

The cluster uses a ton of memory and CPU when recovering objects. It helps to limit this activity somewhat.
[osd]
osd max backfills = 1
osd recovery max active = 2 

And since things could get a bit slow I increased a few timeouts:
[osd]
osd op thread timeout = 180
osd op complaint time = 300
osd default notify timeout = 240
osd command thread timeout = 180

I was not able to get 2G flash keys to come up. Given that 8G sticks cost only five bucks, this isn't much of a limitation. I suppose I could use LVM striping to join a bunch of 2G sticks together into a larger unit.

The speed is not all that quick: ceph -w reports a write speed of about 3 megabytes per second. That doesn't sound like much, except the data pool I was testing on writes three copies of the data - six if you count journaling. A lot of things could affect speed: tiny memory, slow CPU, slow USB sticks and/or a saturated USB bus.

This config uses XFS on the USB sticks, where BTRFS might perform better. While the speeds look poor, remember that ceph OSDs don't report that a write is successful until the object is written to both the media and the journal. I could probably mitigate this double write by having fewer OSDs (joining up groups of USB sticks with LVM stripes) and/or moving the journal to a different device - right now ceph is using the USB sticks for both data and journal.

I stress tested RADOS by adding objects until the store filled up. It's robust: not a single OSD timed out of the pool. As of writing this blog I am testing untarring the linux kernel sources to the ceph filesystem - I'll keep you posted.

My future plans are to expand the cluster utilising old hardware I have lying about. I’d like to add at least two more nodes – but they won’t necessarily be USB thumb drive backed.

2014-08-06

Design of a Funeral Programme

I make this post to outline the thinking that went into the design of my grandfather's funeral programme. Consider it something like the director's commentary that comes with a movie: only interesting for those interested in how things are made. Apart from the obvious informational purpose of the funeral programme there were two further purposes: to connote something about my grandfather and to potentially last as a family history document. I made design decisions with this in mind.


The front cover and it’s inside front contain family history information. The inside back and back cover contain funeral service information. The programme can be cut down the middle if only one half is desired. The programme can be folded inside out to protect the photograph during transit.
Name and date information placed to allow framing in an A5 frame or to trim the name and frame closer to the photograph Three different photo choices attempt to provide a prompt for conversation at the service and act like a collectible series.


Family history information fits within the area of the photograph so that it can be kept if the photograph is put into an album. This was more important than rigidly maintaining typographic rhythm with the opposite leaf.


Large typography so that it is comfortable to read at the funeral service and will better stand up to aging. Additional line-space added to group sections of the service, lighten the feel of the page and provide visual landmarks when glancing for information.


Consistent typographic hierarchy and a three column grid unifies the pages.

Production Notes

Budget and time considerations meant a larger run of about 80 programmes on 120gsm satin gloss stock and a limited run of fancier programmes intended for close family.

Photographs are on archive paper, fixed with acid-free photography squares. The Bockingford paper is also acid-free. It was a bit more expensive but should last for a few decades.

While printers have large catalogues of paper stock, most of it must be ordered in. The timeline for a funeral meant I could not wait. Bockingford felt like granddad; classy, solid, even if a bit rough. Garamond seemed to be a fitting typeface for the same reasons.

The guy at printing.com in Frankton was incredibly helpful. Get to know your printer and talk to them early about your job. Printing.com did the 120gsm satin gloss run of about 80 sheets.

Bockingford Watercolour came in artist pads - I separated the leaves, removed the adhesive and found somebody who would feed them through their machines. Thanks Warehouse Stationery!

I was concerned about how much toner drop-out there'd be on the Bockingford. That meant an early test print of fancy tiny typefaces with stroke widths from hairline to bold. Less toner drop-out than I expected - just a very slight, tasteful amount.

I’m don’t normally work much in print so I’m a bit unfamiliar with InDesign. I spent most of my design time trying to remember how to use the thing! I almost gave up to use something more familiar (Word *cough*) but InDesign’s beautiful text rendering made me perservere. Well worth it. Also, I’m not ashamed to admit I was saved by YouTube tutorials more than once.