Runner Progress

My questions about the app might be best explained with this proposed UI change:

The side navigation would be used to select one or more runners, benchmarks, or combinations of the two (for comparison).

For example, select Ninefold (runner) and the Discourse Benchmark (benchmark).
Then select two different runs to compare, e.g. ruby-funny-falcon-patch vs. ruby-2.1.0-p-1.

In my mind, the models for this example would look something like this (diff = different from the current app); a rough Rails sketch follows below:

  • Runner

    • has many benchmarks (diff)
    • description (diff): where is it running? Who is running it?
    • name
    • hardware
    • token
  • Benchmark (diff - a model as opposed to an attribute on the result)

    • human-readable description (what is the benchmark trying to measure?)
    • link to the source of the benchmark (GitHub)
    • has_many runs
  • Run

    • has many results
    • belongs to benchmark (diff - an “instance” of the benchmark)
    • belongs_to runner
    • rails version
    • ruby version
  • Result (diff and not sure about this*)

    • page “home”
    • percentile “50”
    • response_time_in_ms “49”

*This would work if we take Sam’s example from above, but how would one generalize it for different benchmarks? I think we should be fine if we just get this working with the current Discourse benchmark and worry about generalizing later.
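To make that concrete, here is a rough ActiveRecord sketch of the proposed models. Every class, column, and association name is just an illustrative assumption (for instance, I have wired “has many benchmarks” through runs), not something the current app already has:

```ruby
# Purely illustrative sketch of the proposed models; all names are assumptions.
class Runner < ActiveRecord::Base
  # columns: name, description, hardware, token
  has_many :runs
  has_many :benchmarks, through: :runs
end

class Benchmark < ActiveRecord::Base
  # columns: description, source_url
  has_many :runs
end

class Run < ActiveRecord::Base
  # columns: ruby_version, rails_version
  belongs_to :runner
  belongs_to :benchmark
  has_many :results
end

class Result < ActiveRecord::Base
  # columns: page, percentile, response_time_in_ms
  belongs_to :run
end
```

(One caveat: a model literally named `Benchmark` would clash with Ruby’s built-in `Benchmark` module, so it would probably need a different name in practice.)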

before

    home:
      50: 49
      75: 50
      90: 52
      99: 114

after

    home:
      50: 47
      75: 48
      90: 50
      99: 105
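As an aside on the generalization worry above: output in this shape maps onto the proposed Result rows fairly mechanically, whatever pages or percentiles a given benchmark happens to report. A tiny, purely illustrative sketch:

```ruby
require "yaml"

# Hypothetical raw output from a run, in the shape shown above.
raw = <<-YAML
home:
  50: 47
  75: 48
  90: 50
  99: 105
YAML

YAML.load(raw).each do |page, percentiles|
  percentiles.each do |percentile, ms|
    # Each (page, percentile, time) triple would become one Result row.
    puts "page=#{page} percentile=#{percentile} response_time_in_ms=#{ms}"
  end
end
```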

    runner     => Ninefold
    benchmark  => Discourse Benchmark

    run_before => 2.1.0-p-1
      result1  => page: "home", percentile: 50, response_time: 49
      result2  => page: "home", percentile: 75, response_time: 50

    run_after  => 2.1.0-p-1-funny-falcon_patch
      result1  => page: "home", percentile: 50, response_time: 47
      result2  => page: "home", percentile: 75, response_time: 48
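Under that sketch, creating and then comparing those records might look roughly like this (again, every model and column name is an assumption, not the current schema):

```ruby
# Illustrative only; model and column names come from the sketch above.
runner    = Runner.create!(name: "Ninefold")
benchmark = Benchmark.create!(description: "Discourse Benchmark")

run_before = Run.create!(runner: runner, benchmark: benchmark,
                         ruby_version: "2.1.0-p-1")
run_before.results.create!(page: "home", percentile: 50, response_time_in_ms: 49)
run_before.results.create!(page: "home", percentile: 75, response_time_in_ms: 50)

run_after = Run.create!(runner: runner, benchmark: benchmark,
                        ruby_version: "2.1.0-p-1-funny-falcon_patch")
run_after.results.create!(page: "home", percentile: 50, response_time_in_ms: 47)
run_after.results.create!(page: "home", percentile: 75, response_time_in_ms: 48)

# The comparison view from the UI idea above could then simply load both runs:
Run.where(runner: runner, benchmark: benchmark).includes(:results)
```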

Could you explain how one would model this use case in the current application? I hope I’m not making a terrible fool out of myself; I’m sure you’ve thought about this stuff.

I’m sorry if this is super confusing; I’m really tired and shouldn’t be typing…


**Edit:** Perhaps a better approach to handling the results (which might look different depending on which benchmark they came from) would be to store them directly in a column (as JSON) in the `runs` table?
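If we went that route, a minimal sketch (assuming Postgres and a made-up column name `results_json`) could be:

```ruby
# Hypothetical migration; the column name and type are assumptions.
class AddResultsJsonToRuns < ActiveRecord::Migration
  def change
    add_column :runs, :results_json, :json
  end
end

# Storing the raw benchmark output on the run itself:
run.update!(results_json: { "home" => { "50" => 47, "75" => 48, "90" => 50, "99" => 105 } })

# And reading it back:
run.results_json["home"]["50"] # => 47
```

That would keep each run self-contained regardless of the benchmark’s output format, at the cost of making it harder to query individual percentiles across runs.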

