Comment by RestartKernel 6 hours ago
I much prefer this over those AI generated commit messages that just say "refactored X" every single commit.
God I can't stand it when I get this kind of output from Claude, they really need to train it out for Claude 5.
"[Tangentially related emoji] I have completed this fully functional addition to the project that is now working perfectly! There are now zero bugs and the system is ready for deployment to production! [Rocketship emoji]"
Then of course you test it out and it doesn't work at all! It's very grating. It would be more bearable if it hedged its claims a bit more (though maybe that would hurt the quality of the results: if training a model to output insecure code also makes it a murderous Hitler admirer, then, since humans who hedge tend to produce less-than-perfect output, training it to hedge might push the model to output code that is less than perfect).
It is missing the (to me) most important part: the reason why these changes are made.
True, you need to instruct the AI agents to include this.
In our case the agent has access to Jira and has wider knowledge. For commit messages I don't bother that much anymore (I realise, typing this), but for the MRs I do. Here I have to instruct it to remove implementation details.
> you need to instruct the AI agents to include this.
The agent can't do that if you told Claudepilotemini directly to make some change without telling it why you were prompting it to make such a change. LLMs might appear magic, but they aren't (yet) psychic.
I think you're missing context.
He's saying that he likely has an MCP connected to Jira on the LLM he's developing with.
Hence the prompt will have already referenced the Jira ticket, which will include the why - and if not, you've got a different issue. Now the LLM will only need something like "before committing, check the Jira ticket we're working on and create a commit message ...".
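A minimal sketch of what that instruction could look like, assuming an agent rules file (e.g. CLAUDE.md) and a Jira MCP server; the ticket key and exact wording are purely illustrative:

    Before committing, look up the Jira ticket for the current branch
    (e.g. PROJ-123) via the Jira MCP tool. Write the commit message as a
    one-line summary, a blank line, and a short "why" paragraph based on
    the ticket description. Leave implementation details out.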
But whether you actually want that is a different story. You're of the opinion it's useful; I'd say it's rarely going to be valuable, because requirements change, making this point-in-time rationale mostly interesting in an academic sense, but not actually valuable for the development you're doing.
It depends on a ton of factors, and I at least would put so little stock in the validity of the commit message that it might as well not exist. (And this is from the perspective of human-written ones, not AI.)
GitHub Copilot for one, and I'm pretty sure JetBrains' offering does the same.
Every time I've tried to use AI for commit messages, its designers couldn't be bothered to get it to take into account previous commit messages.
I use conventional commit formats for a reason, and the AI can’t even attempt it. I’m not even sure I’d trust it to get the right designation, like “fix(foo)!: increase container size”.
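For reference, Conventional Commits messages follow a "type(scope): description" shape, with "!" marking a breaking change and an optional body and footers; the example below is made up for illustration:

    fix(foo)!: increase container size

    The previous limit truncated large payloads and caused intermittent
    failures under load.

    BREAKING CHANGE: the default container size limit is now larger.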
What kind of AI are you using that generates shitty commit messages? This is a common kind of message from Claude / Augment: