I just think any statement that "AI will never be able to do ___ as well as a person because the human element is irreplaceable" is really missing the reality of what it can already do and how rapidly development is proceeding.
Again, while I agree with you that we should never say never (mainly because there is no reason to presume the human mind is particularly sacred), and I think it's perfectly sensible that we might one day find a model that produces Artificial General Intelligence, I think it's extremely important to reiterate that existing AI/LLMs are not magic, nor are they AGI. They are based on straightforward math, and LLMs are still very dumb because they don't actually have what we would consider a mind behind them.
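To make "straightforward math" concrete, here's a rough sketch of the core operation inside a transformer LLM: scaled dot-product attention, written out in plain NumPy. The shapes and values are purely illustrative, not any particular model's implementation.

```python
# A minimal sketch of scaled dot-product attention, the core operation
# inside transformer LLMs. Shapes and values here are illustrative only.
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Each token's query is compared against every key, the scores are
    # normalized with softmax, and the values are mixed accordingly.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # (tokens, tokens) similarity matrix
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted blend of value vectors

# Toy example: 4 "tokens", each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(attention(Q, K, V).shape)  # (4, 8)
```

Matrix multiplications, a softmax, and a lot of learned weights; impressive results, but nothing mystical.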
https://humsci.stanford.edu/feature/study-finds-chatgpts-latest-bot-behaves-humans-only-better
https://www.pnas.org/doi/10.1073/pnas.2313925121

You're not going to find a bigger fan of Alan Turing than me (the movie they made about him was unfortunately absolute garbage and literally slander), but the Turing test itself is philosophically problematic. For one, a Turing test shouldn't have a defined beginning and end, and any prescribed version of the test can obviously be built around. It also ignores the philosophical problem of qualia and consciousness, though it's arguable that those might be illusions.
A real test would be literally any question. We know that ChatGPT's LLM can't handle spatial relationships very well, and it's easy to exploit its tokenization system to get wrong answers, so it's not really close to passing the Turing test. It is, however, obviously extremely impressive.
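As a concrete illustration of the tokenization point, here's a small sketch using OpenAI's tiktoken library. The encoding name is real, but the letter-counting example is just my illustration of the general issue, and the exact split depends on the tokenizer.

```python
# Sketch: why character-level questions are awkward for an LLM.
# Requires: pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several OpenAI models

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a handful of integer IDs, not ten letters
print(pieces)     # subword chunks, something like 'str', 'aw', 'berry'

# The model sees those chunk IDs, not individual characters, which is one
# reason questions like "how many r's are in strawberry?" can go wrong.
```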
Again, it is not magic. It's a designed model. The model does some things very well, but it's not anything approaching AGI, which is what everyone keeps conflating with LLMs in these conversations. That isn't to say it couldn't happen one day, but even if it did, we don't actually know how much better it could design a golf course than a person could. There are a lot of philosophical questions about the limits of intelligence that just aren't settled. I generally think it's reasonable to believe that the speed limit of scientific knowledge is empirical research, not the human mind. That said, here's hoping (assuming it doesn't kill us all, accidentally or on purpose).