Discussion about this post

pascal martin

I would like to add that a review involves boring coding-minutiae checks that are better left to AI: dangerous code constructs, security blunders, out-of-sequence function calls, etc. I have no problem, really, with a first review pass being automated. This already existed before AI.
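(To make the "first pass" concrete: long before LLMs, tools like linters and static analyzers did exactly this kind of check. A minimal sketch, using Python's standard `ast` module and a hypothetical deny-list of call names, assuming the code under review is Python:)

```python
import ast

# Hypothetical deny-list of constructs a first-pass reviewer might flag.
DANGEROUS_CALLS = {"eval", "exec"}

def flag_dangerous_calls(source: str) -> list[str]:
    """Return one warning per dangerous call found in `source`."""
    warnings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Only match plain-name calls like eval(...), not attribute calls.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            warnings.append(f"line {node.lineno}: call to {node.func.id}()")
    return warnings

snippet = "x = eval(user_input)\nprint(x)\n"
print(flag_dangerous_calls(snippet))  # → ["line 1: call to eval()"]
```

Real pre-AI tools (pylint, flake8, Coverity, etc.) are far more thorough, but the principle is the same: mechanical checks that need no human judgment.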

The second step of a review is to verify that the change meets the functional, performance, maintenance, and ease-of-administration needs. That is where review time can grow exponentially with the size of the change, unless the design choices are already well known. Given AI's reputation for code duplication and for rapidly generating large amounts of code, there is no way a human reviewer can keep up.

Which leads us to the question: if something fails, how do we handle it? Maybe the failure is infrastructure unable to meet the traffic, an application logic problem, or missing input validation. Who will know what to look for?

If nobody knows how it works, I can foresee teams rushing to the AI prompt and piling on more requirements. It might not take long before conflicts are created, causing AI output that is merely 'statistically valid'.

I still remember xAI rushing to make their engine both please the boss and still avoid grossly offensive output.

pascal martin

"It replace junior reviews": too bad, since these are a critical part of training juniors. I see a trend here: junior hatred at its highest.

Otherwise, my experience is that reviewing a large body of code is quasi-impossible. I have had cases where I had no choice but to limit the depth and scope of my reviews. A review needs to happen early or in the middle of development to be most useful.

Reviewing AI-generated code must be a hopeless hell. I am so happy I retired 18 months ago.
