Anton Malinskiy
1 min read · Oct 22, 2019


Hey Alexey!

I think what you’re searching for is a different flakiness strategy, one that ignores flaky tests once they are detected.

Although this is not supported at the moment, it would be trivial to implement such a strategy, but I have to warn you: in my experience usually only 10-15% of tests are stable, so by following the logic of `if my test is flaky then I ignore it` you’d effectively stop running most of your tests.

In my opinion the decision to ignore a test should be manual, i.e. made by a human. This can be implemented, for example, by introducing a specific annotation in your code (`@Flaky`) and then filtering it out via the FilterConfiguration. Another approach would be to specify a list of tests you want to ignore using a blacklist. My point is that these decisions should be made by a human, as opposed to marathon automatically not running a test.
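Here is a minimal sketch of the annotation approach, assuming plain Kotlin and JUnit 4; the annotation name, test names and test bodies are made up for illustration:

```kotlin
import org.junit.Assert.assertEquals
import org.junit.Test

// Hypothetical marker annotation; the name is up to you, but it must be
// retained at runtime so the test runner can see it when filtering.
@Retention(AnnotationRetention.RUNTIME)
@Target(AnnotationTarget.CLASS, AnnotationTarget.FUNCTION)
annotation class Flaky

class CheckoutTest {

    // A human has decided this test is flaky and marked it explicitly.
    @Flaky
    @Test
    fun checkoutFlowSurvivesSlowNetwork() {
        assertEquals(4, 2 + 2)
    }

    // Unmarked tests keep running as usual.
    @Test
    fun addsItemsToCart() {
        assertEquals(4, 2 + 2)
    }
}
```

With something like this in place you can point marathon’s filtering configuration at the annotation (or keep a plain blacklist of test names); the exact configuration keys for the Marathonfile are described in the documentation on the GitHub page.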

You would also lose the proper calculation of the probability of a test passing, and on top of that, how would you even recover from this scenario? How would marathon know when to resume running a test that was automatically marked as flaky and never run again, because it’s now infinitely flaky?

Hope I answered some of your questions. Please feel free to reach out via Medium, messenger or Slack (it’s mentioned on the main GitHub page) to continue this discussion.
