Comment by tsv_ 4 days ago


I'm working on Vedro, a Python testing framework as a pytest alternative without the magic and with clear output.

The main idea is that tests should just be Python: plain `assert` statements instead of custom matchers, no fixture magic, and when tests fail you get readable diffs that actually show what went wrong. Tests can be simple functions or structured with steps that self-document in the output.

https://vedro.io

I would be very happy to receive any feedback!

erezsh 3 days ago

I like the promise, and it looks nice. But I'm not sure what the selling points are.

- pytest already works with assert. Why brag about something that is already commonplace?

- It could help if your docs explained the alternative to using fixtures. I assume it would be done by re-using givens, but you could make it clearer what the preferred way to do it is, and what is gained or lost by doing it that way.

- Can you explain how your diff is better than the pytest diff? (I'm asking as someone who hates the pytest diff)

  • tsv_ 2 days ago

    Thanks for the feedback, it helps me see things from a different perspective.

    These are excellent questions, and you're absolutely right that they should be clear from the landing page. I'll work on fixing that.

    Short answers:

    1. Good point about asserts. When writing the benefits, I was targeting a broader audience (unittest users, people coming from other languages like JS), but the reality is most visitors are probably "pytest escapers" who already know pytest uses assert. I'll reorganize the selling points to focus on what actually differentiates Vedro.

    2. The main philosophy is "all you need is functions and their compositions", no special decorators or dependency injection magic. But this is indeed missing from the index page. Will definitely add clear examples showing how to handle common fixture use cases with plain functions.

    3. One diff example on the landing page clearly isn't enough. I'll add more comparisons. Since you hate pytest's diff output too, I'd love to hear what specifically bothers you about it; your pain points would be incredibly valuable for improving how I present Vedro's approach.
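
     On point 2, the "functions and their compositions" idea can be sketched roughly like this (all names here are illustrative, not Vedro's actual API):

```python
# Instead of fixtures injected by name, setup helpers are ordinary
# functions you call and compose. Every name below is hypothetical.
def given_db():
    return {"users": []}

def given_registered_user(db, name="alice"):
    user = {"name": name}
    db["users"].append(user)
    return user

def test_registered_user_is_stored():
    db = given_db()                   # explicit setup, no injection
    user = given_registered_user(db)  # composition replaces DI
    assert user in db["users"]

test_registered_user_is_stored()
```

     A test that needs both a database and a user just calls both helpers, so the data flow is visible in the test body instead of being wired up behind the scenes.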

benji-york 3 days ago

As someone who loves Python and hates pytest, you have my support.

(Although I don't like using bare `assert`s in tests; maybe you'll convince me.)

  • tsv_ 2 days ago

    Thanks for the support! It means a lot, especially from someone who shares the pytest frustration.

    About bare `assert`s. Vedro is actually flexible enough to use any matchers you prefer, but let me share why I stick with plain asserts:

    1. In most editor themes, `assert` jumps out with distinct syntax highlighting. When scanning tests, I can quickly spot the assertions and understand what's being tested.

    2. The expressions feel cleaner to me:

       assert error_code not in [400, 500]
       # vs
       assert_that(error_code, is_not(any_of(400, 500)))  # hamcrest
    
    3. I like that there's nothing new to learn, the expressions work exactly like they do in any Python code, with no special test behavior or surprises.
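
     As a small runnable illustration of point 3, any Python expression that evaluates to a bool works as-is:

```python
# Plain asserts are ordinary Python expressions: membership tests,
# chained comparisons, comprehensions -- no matcher vocabulary needed.
response = {"status": 204, "errors": []}

assert response["status"] in range(200, 300)
assert 200 <= response["status"] < 300
assert all(isinstance(e, str) for e in response["errors"])
```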

    Would love to hear what specifically bothers you about bare asserts; I'm always looking to understand different perspectives on testing ergonomics!

    • benji-york 10 hours ago

      Your first and second points make sense. They don't matter much to me, but I see how others could value those things.

      Aside: I also don't like the hamcrest syntax. I don't love unittest's syntax either, but it's OK and it's pervasive (i.e., available in the stdlib).

      The third point is where I start to disagree more strongly.

      > I like that there's nothing new to learn, the expressions work exactly like they do in any Python code, with no special test behavior or surprises.

      This doesn't seem true to me.

      > the expressions work exactly like they do in any Python code

      Not to my mind. In normal Python, an assertion communicates something that is unequivocally believed to be true, not something that may or may not be true (a test). Let me see if I can explain it this way: I often use asserts in tests to show (and enforce) something that I believe to be true and must be true before the test can have any meaning. E.g.,

        assert test_condition() == False
        invoke_the_code_under_test()
        self.assertTrue(test_condition())

      The `assert` communicates that this is a precondition; the `self.assertTrue` communicates that this is a test.

      I can 100% see that others might not see/care about the distinction, but I think it is important.

      > no special test behavior

      Well, that's not quite true. You have to handle the AssertionError specially and do some fairly magical work to figure out the details of the expression that failed. The unittest-style assertions just report the values passed into them.

      I don't really like that magic, both from an aesthetic standpoint and from a least-complexity-in-my-tooling standpoint. Again, I can understand others making different tradeoffs.