I don't know what you're talking about WRT the 3rd way. I agree that developers should continuously improve their systems.
I was replying to the descriptions in the article:
> The third way of DevOps is continuous learning. [...] Sometimes this means introducing artificial intelligence and machine learning into your systems or tooling, to ensure that it is continuously learning.
> The third way involves continuous learning. If our automated monitoring system, itself, can learn continuously from the traffic, access, and behavior of our cloud activity and users, then that respects the third way. A system that can train itself, learn when to remove permissions, when to replace permissions, and when to block access, not only respects the third way, it makes for a fantastic least privilege security tool.
If your company specializes in this, and you have ML experts who know how to design, maintain, interpret, and monitor the models you are using in production, and who carry pagers so you can page them at 2 AM when something breaks and you can't understand what the model is doing...then sure. If you're among the 99.999% of other companies, then this is a terrible idea, and you should let me know before the interview so I can avoid working at your company (unless you're hiring me to remove it, because it inevitably caused more problems than it solved).
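For what it's worth, the kind of "self-training" least-privilege tool the article describes boils down to learning a per-user access baseline and flagging deviations. Here is a deliberately naive sketch of that idea (everything here is hypothetical illustration, not the article's actual system), which also hints at why it's hard: the whole tool hinges on thresholds and baselines someone has to understand and tune at 2 AM.

```python
from collections import defaultdict

# Hypothetical sketch: learn each user's baseline of accessed resources,
# then flag accesses outside that baseline as candidates for permission
# removal. Real systems would need far more signal and far more care.

class AccessBaseline:
    def __init__(self, min_events=3):
        self.min_events = min_events  # events required before trusting a baseline
        # user -> resource -> access count
        self.seen = defaultdict(lambda: defaultdict(int))

    def observe(self, user, resource):
        """Record one access event during the learning phase."""
        self.seen[user][resource] += 1

    def is_anomalous(self, user, resource):
        """Flag an access as anomalous only if the user already has an
        established baseline and this resource is not part of it."""
        history = self.seen[user]
        if sum(history.values()) < self.min_events:
            return False  # not enough data to judge; fail open
        return resource not in history

# Learning phase: alice normally touches only the billing bucket.
model = AccessBaseline()
for _ in range(5):
    model.observe("alice", "s3://billing")

print(model.is_anomalous("alice", "s3://prod-secrets"))  # True: outside baseline
print(model.is_anomalous("alice", "s3://billing"))       # False: normal access
```

Even this toy raises the operational questions from above: who picks `min_events`, who handles the fail-open window for new users, and who gets paged when it starts revoking the wrong permissions.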