Runner Progress

The runner side of the project has been moving a lot slower than the UI side, but I do feel I have made some interesting progress.

I provisioned an “old” desktop I have to run the bench. It is running Ubuntu 12.04.3 LTS (x64), which is in line with what most servers out there should be running.

I also had to spend an inordinate amount of time getting cpufrequtils to play nicely. I originally thought I could disable all CPU scaling in the BIOS, but that just does not work reliably.

Instead, I force the CPU into performance mode, which gives very even and consistent results. Repeated runs of the Discourse bench come back with results within 1ms of each other.
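As a sanity check, something like this can verify the governor stuck before a run (a hypothetical Ruby snippet reading the standard Linux sysfs files; it is not part of the bench):

    # Hypothetical pre-run check: make sure every core reports the
    # "performance" governor via the standard Linux sysfs interface.
    Dir.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor") do |path|
      governor = File.read(path).strip
      abort "#{path} is '#{governor}', expected 'performance'" unless governor == "performance"
    end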

I also added a -b (“best of”) option to the Discourse bench so we can repeat tests multiple times and pick the best result.
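The idea is roughly this (a minimal sketch of the approach, not the actual bench code):

    # Minimal "best of" sketch: run the measurement n times and keep the
    # lowest timing recorded for each page.
    def best_of(n)
      runs = Array.new(n) { yield }   # each run returns e.g. { "home" => 49, ... }
      runs.reduce { |best, run| best.merge(run) { |_page, a, b| [a, b].min } }
    end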

I have a very large number of Ruby builds installed now and have decided to standardize on chruby.

Initially I was using rbenv, but rbenv performs way too much magic, and rbenv rehash was taking forever.


Next week I will see if I can script a huge run of the Discourse bench across all the builds so we can graph it. However, in the meantime I have a project for @andypike, @dmathieu and the UI team.

Recently, funny-falcon came up with a pretty awesome optimisation that makes method caching in Ruby much faster. We are trying to gather information about how much it improves things. ko1 is open to merging it once all the concerns are addressed. You can read about it here: http://bugs.ruby-lang.org/issues/9262

I think it would be completely awesome if we could highlight the differences between before and after in a pretty bar chart generated from the UI side. It would open up a completely new usage pattern for the bench UI, and it would be incredibly useful. There are multiple perf patches pending, and providing great before/after comparisons is key to getting stuff accepted.

Here are the results of running:

# runs the Discourse bench 5 times, 300 requests per page tested
$ ruby script/bench.rb -b 5 -i 300

### BEFORE

---
home:
  50: 49
  75: 50
  90: 52
  99: 114
categories:
  50: 75
  75: 77
  90: 83
  99: 149
home_admin:
  50: 57
  75: 59
  90: 60
  99: 127
topic:
  50: 12
  75: 13
  90: 13
  99: 83
categories_admin:
  50: 88
  75: 90
  90: 96
  99: 161
topic_admin:
  50: 22
  75: 23
  90: 25
  99: 99
timings:
  load_rails: 3029
ruby-version: 2.1.0-p-1
rss_kb: 253160
architecture: amd64
operatingsystem: Ubuntu
kernelversion: 3.8.0
memorysize: 23.55 GB
physicalprocessorcount: 1
processor0: Intel(R) Core(TM) i7 CPU         960  @ 3.20GHz
virtual: physical

### AFTER

---
topic_admin:
  50: 20
  75: 22
  90: 24
  99: 30
topic:
  50: 12
  75: 13
  90: 14
  99: 37
home_admin:
  50: 54
  75: 55
  90: 58
  99: 121
categories_admin:
  50: 80
  75: 82
  90: 89
  99: 156
categories:
  50: 69
  75: 71
  90: 98
  99: 159
home:
  50: 47
  75: 48
  90: 50
  99: 105
timings:
  load_rails: 2970
ruby-version: 2.1.0-p-1
rss_kb: 255392
architecture: amd64
operatingsystem: Ubuntu
kernelversion: 3.8.0
memorysize: 23.55 GB
physicalprocessorcount: 1
processor0: Intel(R) Core(TM) i7 CPU         960  @ 3.20GHz
virtual: physical

Is there any way we can get this graphed in the UI as a bar chart, so we can attach it to the ticket?

(note: 99 = 99th percentile, just as ab returns it)
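For the record, a throwaway script along these lines would turn the two dumps into the numbers for such a chart (hypothetical; it assumes the outputs above are saved as before.yml and after.yml):

    # Hypothetical comparison script: print the per-page delta at each
    # percentile, i.e. the raw numbers a bar chart would plot.
    require "yaml"

    before = YAML.load_file("before.yml")
    after  = YAML.load_file("after.yml")

    %w[home categories home_admin topic categories_admin topic_admin].each do |page|
      before[page].each do |percentile, ms|
        puts format("%-17s p%-3d %4dms -> %4dms (%+dms)",
                    page, percentile, ms, after[page][percentile],
                    after[page][percentile] - ms)
      end
    end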

We’re not generating bar charts for now, only line charts displaying the evolution over time.
I guess the feature you are looking for in the UI here is the ability to compare two results, for which generating a bar chart would then make sense.
I just created an issue about that.

As for your comparison here: since there’s no production deployment for now, and this would just be creating a runner for only two data points, wouldn’t it be easier to generate an image with the graph (which doesn’t need the UI, only spreadsheet software)?

Sure, but the motivation here is to promote the project. It’s not critical; we can wait for a larger data set.

I’m wondering where the runner’s code is.
I can only find https://github.com/ruby-bench/ruby-bench, which seems to be only a web interface. I suppose there must be a runner that can run the benchmarks and send the results to ruby-bench, right?

Thanks

@lazywei I’m sure you’ve figured it out by now, but the runner @sam was talking about is available at:

Is someone working on this? As far as I can tell this functionality is not in ruby-bench yet. I would get started if that’s fine with you guys.

@wpp, I started some work here: https://github.com/lazywei/ruby-bench-docker

We tried to build some Docker environments to run benchmarks. Perhaps you can integrate your work into this?

Thanks.

Any chance we can see some results? I would be happy to run something on my bench box if you provide me with instructions.

Sorry for getting back so late.

I was motivated to get something up over the weekend, but the app currently uses a few technologies I’m not very familiar with (Haml, CoffeeScript, Wisper and Docker?).
I was also trying to contact @dmathieu on Twitter and the IRC channel because I had/have a couple of questions about the models and the overall state of the project, but couldn’t get in touch with him.

I quickly whipped up a new Rails app to make a screenshot and give you an idea of what I have in mind:
Screenshot on Imgur (sorry, I can’t upload it directly because I’m a new user).

Regarding your original post:

More hardware and a nice domain name would also be awesome.

I got ruby-bench.com just in case (and of course I’ll transfer it).

First is information gathering, writing the scripts needed to collect all the historical data into a database of sorts.

I think @tenderlove might have some scripts which he used to measure Rails perf (AdequateRecord, mentioned in one of his talks). Maybe we can get something from him?

So long story short:
I think this project is super important and I’m eager to get something real up and running, although I probably won’t have significant time until the upcoming weekend.

You mean ruby-bench? Please let me know if you have questions about this Rails app; I might be able to answer some of them.

Can we integrate this into the original Rails app (ruby-bench)?

Absolutely. I really don’t want to fork the project; it was and is just a “mock-up” app because I need to learn Haml and Coffee.

Yes ruby-bench. I’ll send you my questions when I get home, thanks!
(@work now).

Cheers

Sorry about that. I completely missed your Twitter mentions.

We can absolutely integrate any improvement you’d want to make to the ruby-bench Rails app.
The current state of things is that @lazywei is finishing a Google Summer of Code project on the worker. You can find his work here: https://github.com/lazywei/ruby-bench-docker
Once this worker is ready to be used, I hope we can move it to the ruby-bench organization (@sam is the only one with access to the organization, though), so we can go on to the next step of hosting it and making it run.

I think we can start trying to run some simple benchmarks. At the same time, I’d love to transfer it to the ruby-bench organization.

My questions about the app might be best explained with this proposed UI-change:

The side navigation would be used to select one or more runners/benchmarks and combinations of the two (for comparison).

E.g. select Ninefold (runner) and Discourse Benchmark (benchmark).
Now select two different runs to compare, e.g. ruby-funny-falcon-patch vs. ruby-2.1.0-p-1.

In my mind the models for this example would look something like this (diff = different from the current app):

  • Runner

    • has_many :benchmarks (diff)
    • description (diff): where is it running? who is running it…
    • name
    • hardware
    • token
  • Benchmark (diff - a model as opposed to an attribute on Result)

    • human-readable description (what is the benchmark trying to measure?)
    • link to the source of the benchmark (GitHub)
    • has_many :runs
  • Run

    • has_many :results
    • belongs_to :benchmark (diff - an “instance” of the benchmark)
    • belongs_to :runner
    • rails version
    • ruby version
  • Result (diff, and not sure about this*)

    • page “home”
    • percentile “50”
    • response_time_in_ms “49”

*This would work if we take sam’s example from above, but how would one generalize it for different benchmarks? I think we should be fine if we just get this working with the current Discourse benchmark and worry about generalizing it later.

before

home:
50: 49
75: 50
90: 52
99: 114

after

home:
50: 47
75: 48
90: 50
99: 105

runner => Ninefold
benchmark => Discourse Benchmark

run_before => 2.1.0-p-1
result1 => page: "home", percentile: 50, response_time: 49
result2 => page: "home", percentile: 75, response_time: 50

run_after => 2.1.0-p-1-funny-falcon_patch
result1 => page: "home", percentile: 50, response_time: 47
result2 => page: "home", percentile: 75, response_time: 48
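Expressed as code, the models above might look roughly like this (a hypothetical ActiveRecord sketch, not code from the current app; the has_many :benchmarks on Runner is assumed to go through runs):

    # Hypothetical ActiveRecord sketch of the proposed models.
    class Runner < ActiveRecord::Base
      has_many :runs
      has_many :benchmarks, through: :runs   # "has_many :benchmarks (diff)"
    end

    class Benchmark < ActiveRecord::Base
      has_many :runs
    end

    class Run < ActiveRecord::Base
      belongs_to :runner
      belongs_to :benchmark                  # an "instance" of the benchmark
      has_many :results
    end

    class Result < ActiveRecord::Base
      belongs_to :run                        # page, percentile, response_time_in_ms
    end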

Could you explain how one would model this use case in the current application? I hope I’m not making a terrible fool of myself; I’m sure you have thought about this stuff.

I’m sorry if this is super confusing; I’m really tired and shouldn’t be typing…


Edit: Perhaps a better approach to handling the results (which might look different depending on which benchmark they came from) would be to store them directly in a column (as JSON) in the `runs` table?
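If we went that route, the change could be as small as this (a hypothetical migration, assuming a JSON-capable database such as Postgres):

    # Hypothetical migration: store raw benchmark output as JSON on runs,
    # replacing the normalized Result rows.
    class AddResultsToRuns < ActiveRecord::Migration
      def change
        add_column :runs, :results, :json
      end
    end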

Here is an image

@wpp @lazywei @dmathieu @tgxworld @richard_ludvigh

Moving all RubyBench discussion to: http://community.rubybench.org/