Once we start demanding test flexibility and stability via clean starts, we inevitably begin to obsess about startup
time. I’ve done a few easy bits in the last few posts, but now it’s time to go in a slightly different direction.
One of the game server’s startup tasks is reading in the world maps. A collection of ASCII files (still very similar to
the original hand-assembled maps that players compiled in the 1980s) contains the geography of the various towns and
dungeons and such. For most of the testing, I replaced these with very simple test maps, but those would need to expand
in order to test more of the game logic. A map pre-processing step could be a useful optimization and, perhaps more
importantly, fun to do.
//Doorway into Ydmos' tower is 1 way entrance
//Teleport in SW corner of -70 goes to NW part of surface Morrigans Island
//Teleport in SE corner of -70 goes to SE part of surface Annwens Glade
<z>0</z><x>0</x> <y>9</y> <f>heavy</f> <n>Annwn Surface</n> outdoor=true
/\/\/\/\/\/\/\/\/\/\/\/\/\
/\/\/\/\/\/\/\~r~r~r~r~r~r~r~r~r~r~r/\/\
/\/\~r~r~r~r~r~r~r~r~r~r~r~r~r~r~r~r~r~r/\
Some comment lines, then a header (XML-style tags, followed by key-value pairs), then the actual map cells (two
characters each). Repeat for each level of the map…
// name of the z plane
if (s.IndexOf("<n>") != -1 && s.IndexOf("</n>") != -1)
{
    if (!zNames.ContainsKey(z))
        zNames.Add(z, s.Substring(s.IndexOf("<n>") + 3, s.IndexOf("</n>") - (s.IndexOf("<n>") + 3)));
}
A forestry level (which only makes sense sometimes) and a name for the z-level. The key-value pairs are clearly
booleans which apply to that whole z-level, and are false if not present.
To tidy things up, we’ll make all tags except f mandatory, in the order that’s most common, and order the key-value
pairs alphabetically when present.
Why so strict? Not just for efficiency. If we want to do some simple round-trip
testing of map parsing code, having the ability to output a format which exactly matches the input is very helpful.
Why not more strict? Nostalgic attachment to these old ASCII files; we’re trying not to alter them too much.
Tweaking the files works fine for a while, adding some zero offsets and naming z-levels after their z coordinate where
tags were missing. Then I come across the map file for Torii:
Which looks fine until you scroll that top line all the way to the right.
A rogue section in the Torii map file.
Not sure when that happened; it dates back to the first commit in the repo, but I could easily have messed it up at
some point prior to that. But if I fix it, will that y-offset still be correct? Or has it been incorrect all this time?
Let’s just shift that line down and leave the offset alone, start up the server, and see what happens.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 321 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 323 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 332 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 333 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 334 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 335 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 336 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 337 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 338 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 339 did not find a suitable spawning point after 40 attempts.
06/28/2023 20:07:59: {SystemWarning} SpawnZone 396 did not find a suitable spawning point after 40 attempts.
And what if I back off the y-offset to 35 to make up for the extra line? No more spawning failures. That settles it.
And now we have a set of map files that are a bit more consistent. What should we do with them?
So now let’s have some fun: code up a utility to consume one of these map files, split it into its component parts, and
then output the exact same file. I feel like
Go will turn out to be a good fit for this
project, so we’ll start using it now.
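Just to have something compiling from the start, a bare-bones skeleton might look like this (the command name and arguments are placeholders of mine, and for now it only copies the file verbatim; the actual splitting and re-emitting is what the rest of this section works toward):

package main

import (
    "log"
    "os"
)

func main() {
    if len(os.Args) != 3 {
        log.Fatal("usage: maptool <input map> <output map>")
    }

    data, err := os.ReadFile(os.Args[1])
    if err != nil {
        log.Fatal(err)
    }

    // For now this is a straight copy; splitting the file into comments,
    // header, and cells (and re-emitting them) comes next.
    if err := os.WriteFile(os.Args[2], data, 0o644); err != nil {
        log.Fatal(err)
    }
}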
Each map level has the header info that’s global to that level, then a map of cell graphics keyed by their
coordinates. Tagging the fields of the MapHeader structure with XML element names will make processing the header
almost trivial, as long as it’s wrapped in an outer tag (here we use <mapheader>):
func processXMLHeader(xmlString string, header MapHeader) (MapHeader, error) {
    err := xml.Unmarshal([]byte(xmlString), &header)
    if err != nil {
        return header, err
    }
    parts := strings.Fields(xmlString)
    for _, part := range parts {
        keyValue := strings.Split(part, "=")
        if len(keyValue) == 2 {
            switch keyValue[0] {
            case "outdoor":
                header.Outdoor, _ = strconv.ParseBool(keyValue[1])
            case "norecall":
                header.NoRecall, _ = strconv.ParseBool(keyValue[1])
            case "alwaysdark":
                header.AlwaysDark, _ = strconv.ParseBool(keyValue[1])
            case "townlimits":
                header.TownLimits, _ = strconv.ParseBool(keyValue[1])
            default:
                return header, errors.New("Unknown key: " + keyValue[0])
            }
        }
    }
    if header.ZName == "" {
        return header, errors.New("no z name")
    }
    return header, nil
}
Indeed, unmarshalling the XML part is a one-liner. Handling the key-value pairs is a little more awkward; if there were
more than four of them, it’d be worth doing some refactoring (perhaps incorporating them into XML before the
Unmarshal() call?).
func mapStyleMapHeader(mapHeader MapHeader) string {
    var sb strings.Builder

    // Add comments
    for _, comment := range mapHeader.Comments {
        sb.WriteString("//" + comment + "\n")
    }

    // XML format
    sb.WriteString("<z>" + strconv.Itoa(mapHeader.ZCoord) + "</z> ")
    sb.WriteString("<x>" + strconv.Itoa(mapHeader.XOffset) + "</x> ")
    sb.WriteString("<y>" + strconv.Itoa(mapHeader.YOffset) + "</y> ")
    if mapHeader.ForestryLevel != "" {
        sb.WriteString("<f>" + mapHeader.ForestryLevel + "</f> ")
    }
    sb.WriteString("<n>" + mapHeader.ZName + "</n>")

    // Add key-value pairs
    kvs := []string{}
    if mapHeader.AlwaysDark {
        kvs = append(kvs, "alwaysdark=true")
    }
    if mapHeader.NoRecall {
        kvs = append(kvs, "norecall=true")
    }
    if mapHeader.Outdoor {
        kvs = append(kvs, "outdoor=true")
    }
    if mapHeader.TownLimits {
        kvs = append(kvs, "townlimits=true")
    }
    if len(kvs) > 0 {
        sb.WriteString(" ")
        sb.WriteString(strings.Join(kvs, " "))
    }
    sb.WriteString("\n")

    return sb.String()
}
A little more complicated to make sure we recreate the style from the map files. Comments first, then the XML-style
part with optional forestry, and each of the key-value pairs that aren’t false. We don’t return any errors, assuming
we can create a string from any valid header struct.
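For reference, the MapHeader struct implied by these two functions would look roughly like this; the field names come straight from the code above, but the exact tags are my own sketch rather than the real definition:

type MapHeader struct {
    // Parsed by xml.Unmarshal via the element tags.
    ZCoord        int      `xml:"z"`
    XOffset       int      `xml:"x"`
    YOffset       int      `xml:"y"`
    ForestryLevel string   `xml:"f"`
    ZName         string   `xml:"n"`
    // Handled by hand, so hidden from the XML package.
    Outdoor       bool     `xml:"-"`
    NoRecall      bool     `xml:"-"`
    AlwaysDark    bool     `xml:"-"`
    TownLimits    bool     `xml:"-"`
    Comments      []string `xml:"-"`
}

A full map level would presumably pair one of these with the grid of two-character cells keyed by coordinate, but that’s a separate concern from the header round trip.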
The tests are pretty much the same with the input and output swapped, with the addition of comments and newlines.
Now that we have both sides of the coin, we can have some fun with round-trip
tests. First we do header->struct->header, and then struct->header->struct.
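As a sketch of the first direction (assuming a plain Go test file with "testing" imported; the <mapheader> wrapping and the sample line are my assumptions, not the real test data):

func TestHeaderRoundTrip(t *testing.T) {
    line := "<z>0</z> <x>0</x> <y>9</y> <f>heavy</f> <n>Annwn Surface</n> outdoor=true"

    // header -> struct
    header, err := processXMLHeader("<mapheader> "+line+" </mapheader>", MapHeader{})
    if err != nil {
        t.Fatalf("processXMLHeader: %v", err)
    }

    // struct -> header, which should reproduce the original line exactly
    if got, want := mapStyleMapHeader(header), line+"\n"; got != want {
        t.Errorf("round trip mismatch:\n got %q\nwant %q", got, want)
    }
}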
These are a little less verbose, given that the input and desired output are the same. Other than that, though, what am
I going to do with this round-trip testing?
Over in the Ruby land of tests, I’ve never had much luck making Rubocop completely happy. On a whim, I decided to
install
Spotify’s VSCode Ruby extension pack
and work on it for a little while.
Though I did opt to override Rubocop with a lot of my own opinions:
AllCops:
  NewCops: enable

# Nostalgia aside, I find 100 to be a more reasonable limit.
Layout/LineLength:
  Max: 100

# %x is more clear than backticks when glancing at code.
Style/CommandLiteral:
  EnforcedStyle: mixed

# I use 'fail' in test code for catastrophic failure.
Style/SignalException:
  Enabled: false

# Allow long(ish) methods for now
Metrics/MethodLength:
  Max: 40
Metrics/BlockLength:
  Max: 40
Metrics/AbcSize:
  Max: 30

Naming/MethodParameterName:
  AllowedNames: ["x", "y", "z"]
Naming/AccessorMethodName:
  Enabled: false
Naming/VariableName:
  Enabled: false

Metrics/CyclomaticComplexity:
  Max: 10
Metrics/PerceivedComplexity:
  Max: 10
There are certainly engineers who argue that using an opinionated linter with no exceptions is the only way to go, but
I struggle to do so with Rubocop. However, it’s a good place to start. Then every time the linter complains, you can
consider whether the rule makes sense for your project and team.
Code coverage is similar: Rather than doing too much tweaking to guarantee 100%, exceptions can be considered with
appropriate context.