301 Days (redux)

Another metric leap year of gamedev experiments and such

New Day 6 - Lots of Tests

Jun 8, 2023 - 18 minute read - NewDays, seitan-spin

Put our service to the test!

In which we get our old test suites to run in this new world of containers.

A Barrel Full of Ruby Cucumbers

Remember all those Cucumber tests we wrote so very long ago?

124 scenarios (124 passed)
1241 steps (1241 passed)
50m36.933s

Now that we have the database and game server running in Docker containers, might as well get the tests running in that environment as well.

Previously, the tests interacted with the game server in a few ways:

  1. Directly launching/killing the DragSpinExp executable.
  2. Writing to the .config file.
  3. Reading/writing to the game database.
  4. Telnetting into the game and interacting as a user would.

So let’s see how we have to address these in the test code.

Game Server Goes Up, Game Server Goes Down

The Before Times: Co-located on a Windows server with the game server executable, using taskkill and Process.spawn() to bring it up and down:

features/lib/hooks.rb GitLab
Before('@reset_server') do
  # Kill the server if it's already running
  result = %x(taskkill /F /T /IM DragSpinExp.exe)
  debug_msg "Result of taskkill: #{result}"
features/lib/hooks.rb GitLab
After('@kill_server_after') do
  # Kill the server
  result = %x(taskkill /F /T /IM DragSpinExp.exe)
  debug_msg "Result of taskkill: #{result}"
end
features/lib/server_helper.rb GitLab
def start_server()
  # Start up the server - we don't wait for it to come up in this method
  result = false
  Dir.chdir("test_env/test001") do
    result = Process.spawn("DragSpinExp.exe", [:out, :err]=>["DragSpinExp.log", "w"], :close_others=>true)
    debug_msg "Result of start: #{result}"
  end
  result
end
features/steps/server_steps.rb GitLab
Given(/^I use the "([^"]*)" database as\-is$/) do |database_name|
  # Kill the server if it's already running
  result = %x[taskkill /F /T /IM DragSpinExp.exe]
  debug_msg "Result of taskkill: #{result}"
features/steps/server_steps.rb GitLab
Given(/^I use the "([^"]*)" database$/) do |database_name|
  # Kill the server if it's already running
  result = %x[taskkill /F /T /IM DragSpinExp.exe]
  debug_msg "Result of taskkill: #{result}"

Let’s replace those with the appropriate Docker commands, calling them using Open3.capture3() so we can easily get stdout and stderr to use in debugging:

features/lib/hooks.rb GitLab
Before('@reset_server') do
  # Kill the server if it's already running
  stdout, stderr, status = Open3.capture3("docker compose down")
  debug_msg "Result of docker compose down: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr

  @server_database = 'minimal'
  # Bring up the containers
  stdout, stderr, status = Open3.capture3("docker compose up --detach")
  debug_msg "Result of docker compose up: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr
features/lib/server_helper.rb GitLab
def start_server()
  containers_up = verify_containers_up()

  # Start up the game server
  stdout, stderr, status = Open3.capture3("docker exec --detach seitan-spin-gameserver-1 \"./entrypoint.sh\"")
  debug_msg "Result of docker exec: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr

  containers_up && (status == 0)
end

def verify_containers_up()
  # Make sure the containers are up
  stdout, stderr, status = Open3.capture3("docker compose up --detach")
  debug_msg "Result of docker compose up: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr

  status == 0
end
features/steps/server_steps.rb GitLab
Given(/^I use the "([^"]*)" database as\-is$/) do |database_name|
  verify_containers_up()
  
  # Kill the server if it's already running
  stdout, stderr, status = Open3.capture3("docker compose stop gameserver")
  debug_msg "Result of docker compose stop gameserver: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr
features/steps/server_steps.rb GitLab
Given(/^I use the "([^"]*)" database$/) do |database_name|
  # Kill the server if it's already running
  stdout, stderr, status = Open3.capture3("docker compose down")
  debug_msg "Result of docker compose down: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr

  @server_database = database_name

  # Bring up the containers
  stdout, stderr, status = Open3.capture3("docker compose up --detach")
  debug_msg "Result of docker compose up: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" if stdout
  debug_msg "  stderr: #{stderr}" if stderr

Hmm, could use some refactoring. But wait a minute, why docker exec --detach seitan-spin-gameserver-1 "./entrypoint.sh"? In order to bounce the game server executable while retaining access to the .config file, we need to split that container’s “running” status into “container running” and “game server executable running”. So we override the entrypoint in docker-compose.yml:

docker-compose.yml GitLab
  gameserver:
    image: registry.gitlab.com/libipljoe/drag-spin-exp:latest
    entrypoint: tail -f /dev/null

And now starting the container doesn’t start the executable, and we can call ./entrypoint.sh when needed (once we’ve set the .config file and database up correctly for the test).
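As for that refactoring: the repeated capture3-and-log dance could collapse into a single helper. A hypothetical sketch (the `debug_msg` stub here is an assumption, included only so the snippet stands alone):

```ruby
require 'open3'

# Stand-in for the suite's debug_msg, so this sketch runs on its own.
def debug_msg(msg)
  puts msg if ENV['TEST_DEBUG'] == 'true'
end

# Hypothetical helper: run a shell command, log its output streams,
# and report success as a boolean.
def run_cmd(cmd)
  stdout, stderr, status = Open3.capture3(cmd)
  debug_msg "Result of #{cmd}: #{status.inspect}"
  debug_msg "  stdout: #{stdout}" unless stdout.empty?
  debug_msg "  stderr: #{stderr}" unless stderr.empty?
  status.success?
end
```

With something like that, start_server() shrinks to a couple of chained run_cmd("docker compose up --detach") && run_cmd(...) calls.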

Reconfigure as Needed

There are a number of configuration options for the game server that live in the .config file:

App.config GitLab
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <appSettings>
        <add key="APP_VERSION" value="2.0.99.19" />
        <add key="CLIENT_VERSION" value="0.0.1" />
        <add key="APP_NAME" value="Drag Spin Exp" />
        <add key="NEWS" value="^8/1/2015^Fork - This experimental version is a fork of version 2.0.16.5 of Dragon's Spine^^" />
        <add key="APP_PROTOCOL" value="Kesmai" />
        <add key="ClearStores" value="False"/>
        <add key="RestockStores" value="True"/>
        <add key="ColorLog" value="True"/>
        <add key="RequireMakeRecallReagent" value="False"/>
        <add key="DisplayCombatDamage" value="True"/>
        <add key="UnderworldEnabled" value="False"/>
        <add key="NPCSkillGain" value="True"/>
        <add key="SQL_CONNECTION"
            value="User ID='sa';Password='TempBadPassw0rd';Initial Catalog='TempDatabaseName';Data Source='tcp:sql,1433';Connect Timeout=15"/>
    </appSettings>
  <runtime>
    <gcServer enabled="True"/>
    <disableCachingBindingFailures enabled="1"/>
  </runtime>
</configuration>

Since the executable only reads this file at startup, any changes need to be in place before the server is launched. To that end, we’ll need to map that folder outside of the container.

docker-compose.yml GitLab
volumes:
  gameserver:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/gameserver
docker-compose.yml GitLab
    volumes:
      - type: volume
        source: gameserver
        target: /source/DragonsSpine/bin/Debug/

I did find the process of doing this in Docker Compose a bit confusing, to be honest. It’s possible (probable) that this isn’t actually the best way to do it.

With that volume set up, and making sure the container is running, we know where to find the file and how to adjust it:

features/lib/configfile_helper.rb GitLab
def set_value_in_configfile(config_key, config_value)
  debug_msg "Setting #{config_key} to #{config_value} in configfile."
  filename = 'gameserver/DragSpinExp.exe.config'
  doc = File.open(filename) { |f| Nokogiri::XML(f) }
  sql_connection = doc.at_xpath("//appSettings//add[@key='#{config_key}']")
  sql_connection['value'] = config_value
  File.write(filename, doc.to_xml)
end

Once the config file is set up however we need it, we can then call start_server() to actually launch the game server executable.
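Incidentally, the same edit works with Ruby’s stdlib REXML if you’d rather not depend on Nokogiri. A sketch on an in-memory document, assuming the same appSettings shape as the App.config above:

```ruby
require 'rexml/document'

# Sketch: flip an appSettings value using stdlib REXML instead of Nokogiri.
def set_app_setting(xml, key, value)
  doc = REXML::Document.new(xml)
  node = REXML::XPath.first(doc, "//appSettings/add[@key='#{key}']")
  node.attributes['value'] = value
  doc.to_s
end

config = <<~XML
  <configuration>
    <appSettings>
      <add key="ColorLog" value="True"/>
    </appSettings>
  </configuration>
XML

updated = set_app_setting(config, 'ColorLog', 'False')
```

Same idea as set_value_in_configfile(), minus the gem dependency; Nokogiri’s error handling and output fidelity are better, which is presumably why the suite uses it.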

A Base for Data

We were already operating at arm’s length from the database, using TinyTds for the connection and sending raw SQL queries.

features/lib/db_helper_sql.rb GitLab
def connect_to_db(database)
  @db_client ||= nil
  @db_hash ||= nil
  if db_connect_hash(database) == @db_hash
    debug_msg "Same hash"
    if @db_client && @db_client.active?
      debug_msg "Already connected!"
      return @db_client
    end
  end
  @db_hash = db_connect_hash(database)
  try = 0
  begin
    client = @db_client = TinyTds::Client.new(@db_hash)
  rescue TinyTds::Error => e
    puts "#{try} connect_to_db #{e.inspect}"
    try += 1
    try <= 5 ? retry : raise
  end
end

We had been using calls to sqlcmd for the heavy lifting, like loading the entire database; but since we have the databases baked into the container image, we don’t need to do that anymore.

The new issue with talking to the database is that the game server and the tests will do so in different ways. On the container network, the game server will make calls to sql on port 1433. From the outside, the tests will connect to port 27999.

docker-compose.yml GitLab
services:
  sql:
    image: registry.gitlab.com/301days/mssql-dragspinexp:latest
    ports:
      - 27999:1433
run_tests.sh GitLab
export DB_DATASERVER=tcp:sql,1433
export DB_PORT_OVERRIDE=27999

Wait a minute, why not just map the same port, so it’s 1433 regardless? It’s a little safety tip I’ve picked up: make sure that the code you write is executing where you think it is. If I write something that I expect to run in the test context, and it runs inside one of the containers instead, the wrong port number will cause it to fail, and I’ll know my mistake. This rule of thumb has saved me hours of debugging on many occasions.
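That split shows up in the connection settings. The suite’s db_connect_hash isn’t quoted here, so this is only a guess at its shape: a sketch that assembles TinyTds::Client options (those hash keys are standard TinyTds options) from the environment overrides exported by run_tests.sh:

```ruby
# Hypothetical sketch of db_connect_hash: tests reach the database on the
# host-mapped port, so host and port come from environment overrides, while
# the game server keeps dialing tcp:sql,1433 inside the Compose network.
def db_connect_hash(database)
  {
    host: ENV.fetch('DB_HOST_OVERRIDE', 'localhost'),
    port: Integer(ENV.fetch('DB_PORT_OVERRIDE', '27999')),
    username: ENV.fetch('DB_USERNAME', 'sa'),
    password: ENV['DB_PASSWORD'],
    database: database,
    timeout: 15
  }
end
```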

Log In and Play

Similarly, we map the game server’s telnet port 3000 to 27998 outside the containers:

docker-compose.yml GitLab
  gameserver:
    image: registry.gitlab.com/libipljoe/drag-spin-exp:latest
    entrypoint: tail -f /dev/null
    ports:
      - 27998:3000

Simple enough to just introduce a DS_PORT environment variable for it:

features/lib/misc_helper.rb GitLab
DEFAULT_TELNET = {
  "Host" => ENV['DS_HOST'],
  "Port" => ENV['DS_PORT'],
  "Prompt" => "Login:",
  "Telnetmode" => false,
  "Timeout" => 20,
  "Waittime" => 1
}.freeze

Make It Work

Now we put together the bash version of the run_tests.cmd script.

run_tests.sh GitLab
#!/bin/bash

# set DS_HOST to "localhost" for local runs, "docker" for CI runs.
# set DB_HOST_OVERRIDE to "localhost" for local runs, "docker" for CI runs.
domain=$(hostname -d)
echo $domain
if [[ $domain == "localdomain" || $domain == "local" ]]; then
  export DS_HOST=localhost
  export DB_HOST_OVERRIDE=localhost
else
  export DS_HOST=docker
  export DB_HOST_OVERRIDE=docker
fi

export DS_PORT=27998
export DB_PORT_OVERRIDE=27999
export TEST_DEBUG=false
export TELNET_TRACE=false
export CUCUMBER_PUBLISH_QUIET=true

export DB_USERNAME=sa
export DB_PASSWORD=Rapunz3l!
export DB_DATASERVER=tcp:sql,1433 # How the game server will contact the database.

export SQLSERVR_SA_PASSWORD=${DB_PASSWORD}
export SQLSERVR_DB_NAME=minimal

# Clear and recreate the dir for the gameserver volume.
rm -rf gameserver
mkdir gameserver

# Make sure the Docker Compose environment is valid.
docker compose up --quiet-pull --detach

# Make sure we have all of the right gems.
bundle install

set -o pipefail # So tee doesn't overwrite a failure code
bundle exec cucumber $1 --tags 'not @WIP' --tags 'not @slow' --tags 'not @bug' --format html --out cucumber.html --format pretty | tee cucumber.out

We’ll come back to that domain logic in a bit. But does it work when I run it locally?

$ docker images -a
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
$ ./run_tests.sh features/temp.feature    
localdomain
[+] Running 2/2
 ⠿ gameserver Pulled                                                                                                                       17.3s
 ⠿ sql Pulled                                                                                                                              21.5s
[+] Running 3/3
 ⠿ Network seitan-spin_default         Created                                                                                              0.1s
 ⠿ Container seitan-spin-sql-1         Healthy                                                                                              5.9s
 ⠿ Container seitan-spin-gameserver-1  Started                                                                                              6.1s
Using rake 13.0.6
Using ansi-sys-revived 0.8.4
...
Using webrick 1.8.1
Using win32-process 0.10.0
Bundle complete! 10 Gemfile dependencies, 41 gems now installed.
Use `bundle info [gemname]` to see where a bundled gem is installed.
@kill_server_after
Feature: Temporary scenarios

  Scenario: Multiple character effects stack correctly                # features/temp.feature:4
    Given I use the "minimal" database                                # features/steps/server_steps.rb:21
Getting ID from player TestSubject01_name
Player ID for TestSubject01_name is 2
    And I add player and character "TestSubject01"                    # features/steps/account_steps.rb:12
Getting ID from player TestSubject01_name
Player ID for TestSubject01_name is 2
    And I inflict fear on the character for "2" turns                 # features/steps/character_steps.rb:217
Getting ID from player TestSubject01_name
Player ID for TestSubject01_name is 2
    And I blind the character for "4" turns                           # features/steps/character_steps.rb:222
    And the server is started                                         # features/steps/server_steps.rb:39
    When I log on as "TestSubject01"                                  # features/steps/login_steps.rb:190
    And I enter the game                                              # features/steps/login_steps.rb:266
    Then I saw a message "You are scared!"                            # features/steps/character_steps.rb:145
    And after "2" turns I no longer see the message "You are scared!" # features/steps/character_steps.rb:339
    Then I saw a message "You are blind!"                             # features/steps/character_steps.rb:145
    And after "2" turns I no longer see the message "You are blind!"  # features/steps/character_steps.rb:339

1 scenario (1 passed)
11 steps (11 passed)
1m16.731s

And we even get nice-looking HTML output:

Temp Tests HTML Cucumber Output

Testing in the CI Environment

In order to have this run in one of GitLab’s handy CI containers, we have to revisit our good friend, the Docker-in-Docker container (GitLab’s docs). While the tests run in one Docker container, they bring up the server containers in another. The hostname for that one will be docker, so we use it instead of localhost when interacting with the mapped ports (27998 and 27999).

So that explains this part of the run_tests.sh script:

run_tests.sh GitLab
#!/bin/bash

# set DS_HOST to "localhost" for local runs, "docker" for CI runs.
# set DB_HOST_OVERRIDE to "localhost" for local runs, "docker" for CI runs.
domain=$(hostname -d)
echo $domain
if [[ $domain == "localdomain" || $domain == "local" ]]; then
  export DS_HOST=localhost
  export DB_HOST_OVERRIDE=localhost
else
  export DS_HOST=docker
  export DB_HOST_OVERRIDE=docker
fi
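An alternative to sniffing the DNS domain, purely as a hedge: key off the CI environment directly. GitLab runners export GITLAB_CI, so a helper like this (an assumption about the runner setup, not what the script above does) could make the same choice from inside the test code:

```ruby
# Hedged alternative to the DNS-domain check: GitLab CI exports GITLAB_CI,
# so pick the dind service's hostname only when running under CI.
def default_service_host
  ENV['GITLAB_CI'] ? 'docker' : 'localhost'
end
```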

Your mileage may vary, especially if your home servers have a DNS domain other than local or localdomain. The .gitlab-ci.yml file got kind of interesting:

.gitlab-ci.yml GitLab
.common_test:
  stage: test
  image: docker:20.10.16
  variables:
    # When using dind service, you must instruct Docker to talk with
    # the daemon started inside of the service. The daemon is available
    # with a network connection instead of the default
    # /var/run/docker.sock socket.
    DOCKER_HOST: tcp://docker:2376
    #
    # The 'docker' hostname is the alias of the service container as described at
    # https://docs.gitlab.com/ee/ci/services/#accessing-the-services.
    # If you're using GitLab Runner 12.7 or earlier with the Kubernetes executor and Kubernetes 1.6 or earlier,
    # the variable must be set to tcp://localhost:2376 because of how the
    # Kubernetes executor connects services to the job container
    # DOCKER_HOST: tcp://localhost:2376
    #
    # Specify to Docker where to create the certificates. Docker
    # creates them automatically on boot, and creates
    # `/certs/client` to share between the service and job
    # container, thanks to volume mount from config.toml
    DOCKER_TLS_CERTDIR: "/certs"
    # These are usually specified by the entrypoint, however the
    # Kubernetes executor doesn't run entrypoints
    # https://gitlab.com/gitlab-org/gitlab-runner/-/issues/4125
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
    DS_HOST: docker
    DB_HOST_OVERRIDE: docker
  services:
    - docker:20.10.16-dind
  before_script:
    - uname -a
    - nproc
    - apk update
    - apk upgrade
    - apk add ruby ruby-dev
    - apk add freetds-dev freetds
    - apk add libffi-dev libffi
    - apk add build-base bash
    - docker info
    - ruby -v
    - gem install bundler --no-document
    - bundle install --jobs $(nproc)
    - bundle update
  artifacts:
    when: always
    paths:
      - cucumber.html
      - cucumber.out

config_file:
  extends: .common_test
  script:
    - ./run_tests.sh features/config_file.feature

connect:
  extends: .common_test
  script:
    - ./run_tests.sh features/connect.feature

That’s not even the whole thing, but we can stop there for now.

In the .common_test definition that’s common to all of the test jobs:

  1. Set the environment variables that make the whole Docker-in-Docker arrangement work (with lots of explanatory comments, copied from GitLab’s docs).
  2. Note the services definition that actually brings up the separate “docker” container.
  3. In the before_script, install all of the pieces we’ll need to build and run the set of Ruby gems we rely upon (some of which need to build native extensions).
  4. Lastly, make sure we always keep the Cucumber output files as artifacts.

Then we define some separate test jobs that extend .common_test, with the only real difference being the feature file that they take their scenarios from. These can all run in parallel, in theory. So far so good.

.gitlab-ci.yml GitLab
slow:
  extends: .common_test
  needs:
    - config_file
    - connect
    - login
    - player_effects
    - temp
  script:
    - ./run_slow.sh
  artifacts:
    when: always
    paths:
      - cucumber_slow.html
      - cucumber_slow.out

wip:
  extends: .common_test
  needs:
    - config_file
    - connect
    - login
    - player_effects
    - temp
  script:
    - ./run_wip.sh
  artifacts:
    when: always
    paths:
      - cucumber_wip.html
      - cucumber_wip.out
  allow_failure: true

The slow and wip jobs are a little different. They have their own bash scripts, and we use the needs clause to run them only if all of the standard tests pass. In theory (again), slow contains the long-runtime tests; no need to waste the time if other stuff is broken. And wip should contain tests that aren’t quite reliable yet; not helpful to run on top of a shaky foundation.

In the run_wip.sh script we also have all of the debug noise turned up to 11:

run_wip.sh GitLab
export TEST_DEBUG=true
export TELNET_TRACE=html

Not Quite Perfect Yet

GitLab shows job flow, with player_effects test failing

The player_effects test job is failing, for a kind of surprising reason:

      | Protection_from_Blind_and_Fear | 7   | Blind     | 1 | | |
Getting ID from player TestRP01_name
Player ID for TestRP01_name is 2
Getting ID from player TestRP01_name
Player ID for TestRP01_name is 2
      | Protection_from_Blind_and_Fear | 8   | Fear      | 1 | | |
ERROR: Job failed: execution took longer than 1h0m0s seconds

I guess my assessment of which tests are “slow” might be outdated. Or I just need to bundle them differently. Tomorrow.

One More Thing

features/login.feature GitLab
@slow
Scenario: Logging in a new user to check stats - Mu
Given a new random user
And a telnet connection
When I create an account to verify that race "M" has the following stat constraints:
    | stat         | min  | max |
    | strength     | 5    | 18  |
    | dexterity    | 3    | 18  | 
    | intelligence | 3    | 17  | 
    | wisdom       | 3    | 17  | 
    | constitution | 4    | 18  | 
    | charisma     | 3    | 18  | 
Then I can play the game

Why is this @slow? Can’t computers roll dice really quickly? And at this point we’re not in the game, so we’re not waiting for the round timer. Even though we’re rolling many times to try to (somewhat) verify the constraints, it shouldn’t be taking that long.

Normally we need to be careful with the telnet connection, because we can easily get false positives about when the server is done sending text. We wait for a prompt, but what if that prompt also appears somewhere in the output? In the case of rolling stats, though, we know we’re done when the Roll again (y,n): comes across. So we specifically set the Waittime to zero:

features/steps/login_steps.rb GitLab
  until mins_needed.empty? && maxes_needed.empty? do
    stats_string = telnet_command(@connection, "y", /\(y,n\)\: /, {"Waittime" => 0}) unless rolls == 1
    stats = parse_stats(stats_string)

Just to be sure, let’s time that telnet_command() call, see if it’s taking longer than we’d expect:

features/lib/misc_helper.rb GitLab
def telnet_command(connection, string, match, overrides = {})
  debug_msg "Sending #{string} and waiting for #{match.inspect}..."
  start_time = Time.now
  @last_resp = resp = connection.cmd({"String" => string, "Match" => match}.merge(overrides)) do |c| 
    telnet_out(c)
  end
  end_time = Time.now
  elapsed_time = end_time - start_time
  debug_msg "Time waited for telnet response: #{elapsed_time}"
  resp
end
$ grep "Time waited for telnet" cucumber_wip.out 
Time waited for telnet response: 1.15628657 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.10263682 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.040668261 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.10236742 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.100364496 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.102931419 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.122063689 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.104850838 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.103263606 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.107096273 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.10310023 # features/lib/misc_helper.rb:114:in `telnet_command'
Time waited for telnet response: 1.104339874 # features/lib/misc_helper.rb:114:in `telnet_command'
...

Never less than a second. It certainly doesn’t look like our override of Waittime is helping much. So let’s check out the source of the cmd() method in the net/telnet library.

lib/net/telnet.rb GitHub
    def cmd(options) # :yield: recvdata
      match    = @options["Prompt"]
      time_out = @options["Timeout"]
      fail_eof = @options["FailEOF"]

      if options.kind_of?(Hash)
        string   = options["String"]
        match    = options["Match"]   if options.has_key?("Match")
        time_out = options["Timeout"] if options.has_key?("Timeout")
        fail_eof = options["FailEOF"] if options.has_key?("FailEOF")
      else
        string = options
      end

      self.puts(string)
      if block_given?
        waitfor({"Prompt" => match, "Timeout" => time_out, "FailEOF" => fail_eof}){|c| yield c }
      else
        waitfor({"Prompt" => match, "Timeout" => time_out, "FailEOF" => fail_eof})
      end
    end

Oh, Waittime is not actually one of the options that it pays attention to in the parameter hash. So my one-second default is in effect even when I try to override it with a zero. I can, however, pass it directly to waitfor():

lib/net/telnet.rb GitHub
    def waitfor(options) # :yield: recvdata
      time_out = @options["Timeout"]
      waittime = @options["Waittime"]
      fail_eof = @options["FailEOF"]

      if options.kind_of?(Hash)
        prompt   = if options.has_key?("Match")
                     options["Match"]
                   elsif options.has_key?("Prompt")
                     options["Prompt"]
                   elsif options.has_key?("String")
                     Regexp.new( Regexp.quote(options["String"]) )
                   end
        time_out = options["Timeout"]  if options.has_key?("Timeout")
        waittime = options["Waittime"] if options.has_key?("Waittime")
        fail_eof = options["FailEOF"]  if options.has_key?("FailEOF")

Now I can write a telnet_command_fast() that bypasses cmd() and goes straight to waitfor() with a zero wait time:

features/lib/misc_helper.rb GitLab
# Used only if there is no chance of seeing the prompt string before it's actually the prompt.
def telnet_command_fast(connection, string, match)
  debug_msg "Sending #{string} and waiting for #{match.inspect}..."
  start_time = Time.now

  connection.puts(string)
  @last_resp = resp = connection.waitfor({"Prompt" => match, "Waittime" => 0})
  telnet_out(resp)

  end_time = Time.now
  elapsed_time = end_time - start_time
  debug_msg "Time waited for telnet response: #{elapsed_time}" if elapsed_time > 1.0
  resp
end
...
Time waited for telnet response: 0.102208506 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.103160697 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.098152807 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.138335645 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.061805784 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.103061699 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.140917246 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.05900543 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.098719397 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
Time waited for telnet response: 0.14280107 # features/lib/misc_helper.rb:129:in `telnet_command_fast'
...

Much better.
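One refinement worth considering (not what the code above does): Time.now is wall-clock time, which can jump under NTP adjustments; the monotonic clock is the safer instrument for elapsed-time measurements like these. A small sketch:

```ruby
# Sketch: time a block using the monotonic clock, which never jumps backwards.
# Returns the block's result along with the elapsed seconds.
def time_block
  start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
  [result, elapsed]
end

resp, secs = time_block { sleep 0.05 }
```

Wrapping the connection.waitfor(...) call this way would give the same debug numbers without the wall-clock caveat.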


More to come

seitan-spin New Day 6 code