#ai #algorithm #technology #machine-learning
idea
GPT-3 is an LLM (large language model) implementation. ChatGPT uses GPT-3 in a chat setting.
GPT stands for Generative Pre-trained Transformer. It's a program that accepts some text and generates a continuation of that text.
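The "generate a continuation" loop can be sketched with a toy model. A real GPT predicts the next token with a transformer network trained on huge corpora; the sketch below is only an illustration that swaps in simple bigram frequencies learned from a tiny corpus, but the autoregressive loop (predict next token, append, repeat) is the same shape.

```python
import random
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, how often each following word occurs."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, prompt, max_new_words=10, seed=0):
    """Autoregressively extend the prompt: repeatedly sample a
    continuation of the last word, the way a GPT-style model
    repeatedly predicts the next token."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_new_words):
        followers = counts.get(words[-1])
        if not followers:
            break  # no known continuation for this word
        # sample proportionally to observed frequency
        choices, weights = zip(*followers.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(generate(model, "the cat"))
```

This toy also makes the "probabilistic mimicry" point below concrete: the model outputs whatever is statistically likely to follow, with no notion of truth or meaning.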
limitations
While the technology is generating a lot of enthusiasm, it remains a conversational tool rather than actual AI, and it exhibits the limitations that come with that. It is in fact akin to a more advanced search engine: it works probabilistically, mimicking what a conversation on a topic looks like[1]. It is incapable of building a system of beliefs[3], of moral reasoning[4], or of critical thinking. It cannot infer or deduce[5]. It doesn't "learn" the way humans do[2].
links
references
Noam Chomsky, Ian Roberts and Jeffrey Watumull, "The False Promise of ChatGPT" (discussion on HN)
[1]: The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I.
[2]: The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
[3]: Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round [...] For this reason, the predictions of machine learning systems will always be superficial and dubious
[4]: True intelligence is also capable of moral thinking
[5]: Note, for all the seemingly sophisticated thought and language, the moral indifference born of unintelligence. Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators. In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.