Hello Sergei, Elena and all,
I just answered the question that Elena asked about my GSoC proposal. I'm sorry for the late reply.
Please let me know if there is anything else I should answer before the deadline.

Regards and Thank you!

Pablo


On Fri, Mar 21, 2014 at 4:50 PM, Pablo Estrada <polecito.em@gmail.com> wrote:
Hello Sergei and all,
I have made a final update to my proposal and submitted it. I added more specifics regarding the scope of the project (running as part of the buildbot), as well as the data to leverage and the potential learning algorithms to use, depending on the data the project will have access to in the end (only test result history, or also code change history, etc.).

If anyone has any changes to suggest before the deadline, let me know. I'll be glad to look into them.
All the best, all : )

Pablo


On Tue, Mar 11, 2014 at 6:53 PM, Sergei Golubchik <serg@mariadb.org> wrote:
Hi, Pablo!

First: please always cc: maria-developers@lists.launchpad.net,
don't send emails to me only.

Now:

On Mar 11, Pablo Estrada wrote:
> Hello!
> I am getting ready to submit my proposal. Is there any feedback for my
> application document? Anything that would be good to add, remove, rephrase,
> etc?

Go ahead and submit it. There will be a comment form under your proposal;
if I want you to change something, I will leave my comments there.

Please mention what tools/languages you think you'll be using.

> Also, if I could see the data available (the history of test results), I
> could write more in detail : )

Hm. Okay. Attached is the preprocessed table that I used two years ago.
It lists test failures in buildbot in a given period of time.
Columns are (as far as I remember):
* time of the test run start
* code branch that was tested
* revision id that was tested
 - it's fake; our buildbot tables don't store revision ids. I've
   constructed it from the set of changed files, revision number, etc.
   Hopefully it's as unique as a real revision id.
* build number
* platform where the tests were run
* test type
 - basically, the set of tests to run and which protocol to use. One test
   may be run with different protocols
* test name
* test variant
 - or combination. Certain tests may be run in many combinations
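
For illustration only (not the exact script I used; the file name and
column labels below are made up), something along these lines could load
and summarize the table, assuming it is a CSV with the columns in the
order listed above:

import csv
from collections import Counter

# Assumed column order, matching the list above; adjust to the real file.
COLUMNS = ["run_start", "branch", "revision_id", "build_number",
           "platform", "test_type", "test_name", "test_variant"]

failures = Counter()
with open("test_failures.csv", newline="") as f:  # hypothetical file name
    for row in csv.reader(f):
        rec = dict(zip(COLUMNS, row))
        # each row records one test failure, so counting rows per
        # (test name, variant) gives a per-test failure count
        failures[(rec["test_name"], rec["test_variant"])] += 1

for (name, variant), count in failures.most_common(10):
    print(name, variant, count)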

Using this file I tried to rearrange the test execution order for every
specific build number (using failure data from previous builds in the
same table) to detect test failures as early as possible. I managed to
have 90% of all test failures happen within the first 10% of the tests.
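
Not the exact method I used, but a sketch of that kind of reordering,
assuming per-test failure counts from earlier builds are available as a
Counter like the one above:

from collections import Counter

def prioritize(scheduled_tests, failure_history):
    # Sort the tests scheduled for this build so that those that failed
    # most often in previous builds run first; ties keep their original
    # order because Python's sort is stable.
    return sorted(scheduled_tests,
                  key=lambda t: failure_history[t],
                  reverse=True)

# e.g. prioritize([("t1", "a"), ("t2", "a")], Counter({("t2", "a"): 5}))
# puts ("t2", "a") first.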

The goal of this project is to have something we can put back into
buildbot, so that it could actually run only 10% of the tests while still
having a 90% probability of discovering a failure (or 20%, or 95%,
whatever).
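
As a rough illustration of that trade-off (a made-up helper, not buildbot
code): given a prioritized order and the set of tests that actually
failed, the fraction of failures caught by running only the first part of
the schedule is

def recall_at(ordered_tests, failed_tests, fraction):
    # Run only the first `fraction` of the prioritized schedule and see
    # how many of the actual failures fall inside that prefix.
    cutoff = int(len(ordered_tests) * fraction)
    executed = set(ordered_tests[:cutoff])
    caught = sum(1 for t in failed_tests if t in executed)
    return caught / len(failed_tests) if failed_tests else 1.0

# recall_at(order, failed, 0.10) == 0.9 would correspond to catching 90%
# of failures within the first 10% of the tests.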

See also http://buildbot.askmonty.org/buildbot/
You can click on a link for any branch, for example,
http://buildbot.askmonty.org/buildbot/grid?branch=10.0
then on any particular build, for example,
http://buildbot.askmonty.org/buildbot/builders/kvm-fulltest/builds/2297
and on a stdio log for a particular test run:
http://buildbot.askmonty.org/buildbot/builders/kvm-fulltest/builds/2297/steps/test_6/logs/stdio

On these pages you'll see what a platform (builder) is, as well as build
numbers, test types, combinations, etc.

Regards,
Sergei