According to the Stanford video, the only cases (statistically speaking) where that happened were high-complexity tasks in legacy / low-popularity languages, no? I would imagine that's a small minority of projects. Indeed, the video puts the overall productivity boost at 15-20%, IIRC.
Question for discussion - what steps can I take as a human to set myself up for success, where success is defined as AI making me faster, more efficient, etc.?
In many cases (though not all) it's the same things that make for great engineering managers:
being smart generalists with a lot of depth in a couple of areas (so they have an appreciation for depth and complexity) but enough breadth to effectively manage other specialists,
and having great technical communication skills: being able to communicate what you want done, and how, without over-specifying every detail or under-specifying the task in important ways.
>where success is defined as AI making me faster, more efficient, etc.?
I think this attitude is itself part of the problem: you're not aiming to be faster or more efficient (and using AI to get there); you're aiming to use AI (in order to be faster and more efficient).
A sincere approach to improvement wouldn't insist on a tool first.