Comment by crabmusket 3 hours ago
I applaud your desire to write better commit messages and not be lazy. Not every commit deserves that level of attention, but being able to switch into "I am definitely going to leave a precise record for the next person who sees this diff" mode is a great skill to have.
However, I feel like your approach here is a little backwards. By getting the AI to come up with the commit messages, you're actually removing the chance for the human, you, to practise and improve.
I'm a real fan of Kahneman's "thinking fast" and "thinking slow" paradigm. By asking the human to review and approve the commit message, you're allowing them to "think fast" instead of doing the challenging, deliberative "thinking slow" of actually writing what they mean.
While getting the LLM to ask you questions about what you did and why is better than just one-shotting the commit message from the diff, it still lets you reply "reactively" and instinctively, using your "fast" gut thinking, instead of engaging the slower, attentive processes required to write from scratch.
Now there are a couple of other posters here critiquing the commit messages in this repo's history. I think that's fair, but by your own admission you are learning, and this is a small and new project! Probably most commits should be along the lines of "getting a thing working", not essays about the intricacies of character encoding:
https://dhwthompson.com/2019/my-favourite-git-commit
But the commits we can see already demonstrate some of the pitfalls of LLM-generated language.
From a recent commit,
"This update enhances user interaction by explicitly addressing scenarios with large diffs, directing users towards feasible actions and maintaining workflow continuity."
This comes after a detailed breakdown of the diff. The sentence is too vague to stand alone without the preceding detail (e.g. the 40k character limit), yet it doesn't explain that detail either. Why 40k characters? Why any limit at all? Words like "enhances" and "feasible" are filler - be concrete instead.
This Wikipedia article has fantastic advice about the ways LLM writing fails, along the lines of what I've just pointed out:
https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
Writing well is hard, never "effortless" as your readme advertises. Sadly, good results come only from hundreds of hours of hard and uncomfortable work. Truth is rare and precious and difficult to come by, and even when we glimpse it, turning it into words is a whole other story. I hope you can continue to develop this tool to help you learn and train your own writing, rather than avoid it.
> Probably most commits should be along the lines of "getting a thing working", not essays about the intricacies of character encoding:
I'm not a fan of that commit as a commit, although it would make a great start for a blog post. The explanation of how the issue was tracked down is not helpful in understanding what the issue is. On the other hand, while the author describes finding (and replacing) a non-ASCII character masquerading as a space, it would have been more interesting to know exactly which character it was.
I agree that explaining the "why" is useful, but in this case I don't think it deserves much more detail than "ensure the file uses only ASCII characters to avoid a text encoding error while running tests in this specific manner". (I can also see the argument for including the actual error message, for the sake of later searches...)
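
For what it's worth, if the goal really is just "ensure the file uses only ASCII characters", a few lines of Python will point straight at the offending characters. This is only a sketch of how I'd check; I'm assuming the file still reads as UTF-8, and I have no idea what the author actually ran:

    import sys

    # Report every non-ASCII character in a file, with its position and code point.
    path = sys.argv[1]
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                if ord(ch) > 127:
                    print(f"{path}:{lineno}:{col}: U+{ord(ch):04X} ({ch!r})")

Run that against the suspect file and the "space that isn't a space" (often U+00A0, a non-breaking space) shows up immediately.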