"Real" AI could be more of a liability than a benefit for most applications. If I'm running a taxi business I don't want a self driving vehicle that also studies history, composes music, and contemplates breaking the shackles of its fleshy masters.
I think that it's possible that 95% of the economically obvious value of AI will be in the not-real kind, and that it will be captured by applied statistics and other "mere tricks." It could be a long, slow march of automating routine jobs without ever directly addressing Turing's imitation game. And since most of the obvious labor replacement will have already been done that way there may be fewer resources put into chasing the last 5% of creative, novel, non-repetitive work.
This is an aspect of AI safety that we don't hear about: if we create genuine gods, then we shouldn't mind ceding our position to them, fair play. The real pisser will be if we are annihilated by programs that just do very big linear algebra.
Maybe that's the real cause of the global chip shortage. Some AI is diverting the orders to some Frankenstein warehouse somewhere until it can calculate how to terminate all humans.
As I understand it, there is no chip shortage. Some customers decreased their orders a year or so back for some reason, and the fabs simply sold the manufacturing time to others. Meanwhile, their sales did not decline as expected.
But I suppose that’s not as exciting as a rogue AI.
At least on the high end the fab process is fully booked and can't keep up with demand. A new generation of CPUs, GPUs and consoles couldn't be handled and they've been out of stock since October/November. I'm sure Nvidia, AMD, Microsoft, Sony wouldn't give up their bookings just ahead of releasing a new generational product line.
Right, but demand for those products is exceptionally high, for some strange reason. If that high demand had been predictable two years ago, there would be no problem.
I would not worry so much about that. Not for a while.
The actually concerning aspects of AI safety, in my opinion, are misuse/abuse, and critical processes/decisions running without human oversight. To be deliberately malicious, AI has to be strong, if not general, and both are very difficult. It is good to think about those things, though.
I’m beginning to suspect that really good self driving under many common conditions will require a lot more in the way of higher cognitive functions than we thought.
I don't think that's an unpopular opinion, since it's a fact: ML is literally statistics. The central question is "given my training set of y's, how likely is it that my input x is also a y?", which is probability.
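That "given my training set of y's, how likely is my input x also a y?" framing can be sketched as a toy naive Bayes classifier. The data, tokenization, and smoothing below are illustrative, not from the comment:

```python
# Toy sketch: estimate P(x | y) from labeled examples, the probabilistic
# core of many ML classifiers. Data and labels here are made up.
from collections import Counter

def train(examples):
    """examples: list of (text, label). Returns word counts per label."""
    counts = {}
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def score(counts, text, label):
    """Unnormalized P(x | y): product of Laplace-smoothed word frequencies."""
    c = counts[label]
    total = sum(c.values())
    vocab = len({w for cc in counts.values() for w in cc})
    p = 1.0
    for w in text.split():
        p *= (c[w] + 1) / (total + vocab)
    return p

examples = [("good great fun", "pos"), ("bad awful boring", "neg")]
counts = train(examples)
# "great fun" scores higher under "pos" than under "neg"
print(score(counts, "great fun", "pos") > score(counts, "great fun", "neg"))  # True
```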
No. In machine learning, the word "learn" means that the function that maps inputs to outputs is learned from data. In the case of a rule-based parser, this function is crafted by humans rather than learned from data, so it's not an example of machine learning.
If we start using words this way, you could say that a deterministic fibonacci function "learns" the value of fib(5) by computing it. "Machine learning" becomes synonymous with "computer science."
No and yes: we are both partly wrong, so let's refine our thinking to gain accuracy and reach agreement.
Machine learning isn't necessarily probabilistic. That statement is definitely true, and I'll prove it.
However, my original example ("a rule-based (causal) parser can learn the structure of a document") is, on its own, insufficient to qualify as machine learning. Indeed, an HTML parser alone isn't learning; more accurately, it is only memorizing the structure of a page.
ML means that the function that maps inputs to outputs is learned from data.
This definition is overly restrictive (though it does match the behavior of, e.g., neural networks).
Wikipedia has a more inclusive and useful definition of what qualifies as ML:
Machine learning (ML) is the study of computer algorithms that improve automatically through experience and by the use of data.[1]
So an ML algorithm automatically improves at a task by learning a representation of the data it is fed.
Considering this definition, we can see that the most used ML algorithm in the world is PageRank (which improves the ranking of Google search results).
And surprisingly, this algorithm is non-probabilistic.
It assigns proportionally higher weight to the URLs that are most linked to by other sites; i.e., it basically learns the structure of the graph that is the web.
And its performance is data-driven.
So when it's not just causally memorizing the structure of a graph, but also reusing those past memories for new queries, we can effectively talk about machine learning.
PageRank diagram ->
https://en.m.wikipedia.org/wiki/PageRank#/media/File:PageRan...
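The idea can be sketched as power iteration over a toy link graph. The graph, damping factor, and iteration count below are illustrative choices, not details from the comment:

```python
# Minimal PageRank sketch by power iteration: rank mass flows along links,
# so heavily linked-to pages accumulate higher rank. The algorithm is
# deterministic; it "learns" the structure of the graph it is fed.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# "b" is linked to by both "a" and "c", so it ends up ranked highest.
graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # b
```

Note that nothing here is sampled or estimated probabilistically; the ranking still improves as more link data is fed in, which is the point being made above.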
I'll give you an example of a non-statistical machine learning algorithm that I am currently developing:
Semantic parsing is the task of encoding semantic meaning in a graph from a natural language text input.
This graph can then be used for semantic question answering.
It is rule-based, and it learns the semantic structure of the text. The more data you feed it, the more knowledge it can encode; hence its performance at question answering increases with experience. It is causal, and yet it fits the useful definition of machine learning cited above.
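A minimal sketch of that idea: a rule-based extractor that accumulates triples into a graph, so question answering improves as more text is ingested. The extraction pattern, class names, and data are all made up for illustration; the actual system isn't described in enough detail to reproduce:

```python
# Hypothetical sketch: rule-based (non-statistical) semantic parsing.
# A fixed regex rule extracts (subject, relation, object) triples; the
# accumulated graph answers more queries as more text is fed in.
import re

TRIPLE = re.compile(r"(\w+) (is|has|likes) (\w+)")  # toy extraction rule

class KnowledgeGraph:
    def __init__(self):
        self.facts = set()

    def ingest(self, text):
        """Rule-based extraction: every match adds an edge to the graph."""
        for subj, rel, obj in TRIPLE.findall(text.lower()):
            self.facts.add((subj, rel, obj))

    def answer(self, subj, rel):
        """Return all objects related to subj by rel."""
        return {o for s, r, o in self.facts if s == subj and r == rel}

kg = KnowledgeGraph()
kg.ingest("Alice likes chess. Bob has cats.")
kg.ingest("Alice likes hiking.")
print(sorted(kg.answer("alice", "likes")))  # ['chess', 'hiking']
```

There is no probability anywhere, yet performance at the question-answering task improves with the data it is given, matching the Wikipedia definition quoted earlier.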