I don't think there's any slow reinvention going on in the way you're describing -- the adoption of ideas from (for example) game development has been a total non-secret since the dawn of the React age. There's definitely slow reimplementation going on though, and unfortunately it's having to be done at the scripting level, rather than the engine level.
Yeah, you're right, I guess there is a distinction between reinventing and reimplementing. The authors even mentioned that concurrent mode is inspired by "double buffering" from graphics.
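To make the analogy concrete, here's a rough sketch of the graphics technique (purely illustrative, not React's actual internals, and all the names are made up): you draw the next frame into an off-screen back buffer while the front buffer stays on screen, then swap the two in a single atomic step, so nobody ever sees a half-drawn frame. As I understand it, the current/work-in-progress tree pair in concurrent React is the same trick applied to the component tree.

```typescript
// Illustrative double-buffering sketch; buffer contents and names are invented.
type Buffer = { pixels: Uint8ClampedArray };

class DoubleBuffer {
  private front: Buffer; // what is currently visible
  private back: Buffer;  // where the next frame gets drawn

  constructor(size: number) {
    this.front = { pixels: new Uint8ClampedArray(size) };
    this.back = { pixels: new Uint8ClampedArray(size) };
  }

  // All the (possibly slow, incremental) drawing work targets the back buffer.
  draw(render: (target: Buffer) => void): void {
    render(this.back);
    this.swap(); // the only user-visible step, and it's atomic
  }

  private swap(): void {
    [this.front, this.back] = [this.back, this.front];
  }

  visible(): Buffer {
    return this.front;
  }
}
```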
I am just trying to support the grandparent comment's point that the current optimization work is happening at the wrong level of abstraction. React sits at the level of a "scene graph" (the DOM). If it's slow, the first technique to reach for should be something like "occlusion culling".
That's largely analogous to what people attempt to do with virtualisation, but the fact that elements can resize and reflow unpredictably throws a few spanners into the works. My understanding (at least until recently; things might be different now) is that most game engine optimisations depend on being able to make a lot of assumptions that usually hold true, or on being able to calculate most of your scene-graph optimisations during a build step.
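For what it's worth, the core of list virtualisation is just a bit of arithmetic over an assumed-constant row height, which is exactly the kind of assumption that unpredictable resizing breaks. A minimal sketch (names and numbers are invented, not from any particular library):

```typescript
// Minimal list-virtualisation sketch. It only works because of the
// fixed-row-height assumption baked into `rowHeight`.
interface WindowParams {
  scrollTop: number;       // current scroll offset of the container, in px
  viewportHeight: number;  // visible height of the container, in px
  rowHeight: number;       // assumed constant height of every row, in px
  rowCount: number;        // total number of rows in the list
  overscan?: number;       // extra rows rendered above/below to hide pop-in
}

// Compute which slice of rows actually needs to exist in the DOM.
function visibleRange({ scrollTop, viewportHeight, rowHeight, rowCount, overscan = 3 }: WindowParams) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(rowCount - 1, Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan);
  return { first, last };
}

// Example: 10,000 rows of 24px in a 600px viewport scrolled to 12,000px
// needs only ~30 rows in the DOM instead of all 10,000.
console.log(visibleRange({ scrollTop: 12_000, viewportHeight: 600, rowHeight: 24, rowCount: 10_000 }));
```

The moment row heights actually vary (images loading, text wrapping), `rowHeight` becomes a guess and you're into measuring, caching, and re-estimating offsets, which is where the spanners go in.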