Update

Colima Press 2023-06-30 00:00:08 -07:00
parent 31392b14a2
commit 3d0e57b1c8
2 changed files with 3 additions and 3 deletions


@@ -36,13 +36,13 @@
<h1>Links</h1><ul class="list">
<li class="link" id="94">
<h1><a class="anchor" href="#94"></a><a class="name" target="_blank" href="https://chomsky.info/20230503-2/">Noam Chomsky Speaks on What ChatGPT Is Really Good For</a></h1>
<p class="dates"><span class="created">28/06/2023</span><span class="updated">28/06/2023</span></p>
<p class="dates"><span class="created">28/06/2023</span><span class="updated">29/06/2023</span></p>
<p class="description">Entrevista a Chomsky sobre ChatGPT y otros Modelos Grandes de Lenguaje</p>
<p class="tags"><a>#ai</a> <a>#chatgpt</a> <a>#chomsky</a></p>
<p class="social"><a target="_blank" href="https://web.archive.org/web/*/https://chomsky.info/20230503-2/">Ver archivado</a></p>
<details class="info">
<summary>Read my note</summary><h1>Chomsky's opinion on Large Language Models like ChatGPT</h1>
-<p>In this English-language interview conducted by Polychroniou, Chomsky once again expresses his skepticism about <a target="_blank" href="https://w.wiki/6tS6">Large Language Models</a> (LLMs). In order of appearance, my favorite quotes were (emphasis mine):</p>
+<p>In this English-language interview (<a target="_blank" href="https://ctxt.es/es/20230501/Politica/42967/Polychroniou-Common-Dreams-Chatgpt-Chomsky-chatbot-OpenAI.htm">here in Spanish</a>) conducted by Polychroniou, Chomsky once again expresses his skepticism about <a target="_blank" href="https://w.wiki/6tS6">Large Language Models</a> (LLMs). In order of appearance, my favorite quotes were (emphasis mine):</p>
<blockquote><p>As I understand them, the founders of AI—Alan Turing, Herbert Simon, Marvin Minsky, and others—regarded it as science, part of the then-emerging cognitive sciences, making use of new technologies and discoveries in the mathematical theory of computation to advance understanding. <i>Over the years those concerns have faded and have largely been displaced by an engineering orientation</i>.</p><p>&#8288;</p><p>Engineering projects can be useful, or harmful. Both questions arise in the case of engineering AI. Current work with Large Language Models (LLMs), including chatbots, <i>provides tools for disinformation, defamation, and misleading the uninformed</i>. The threats are enhanced when they are combined with artificial images and replication of voice.</p><p>&#8288;</p><p>These considerations bring up a minor problem with the current LLM enthusiasm: <i>its total absurdity</i>, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.</p><p>&#8288;</p><p>One is that the LLM systems are designed in such a way that <i>they cannot tell us anything about language, learning, or other aspects of cognition</i>, a matter of principle, irremediable. Double the terabytes of data scanned, add another trillion parameters, use even more of California's energy, and the simulation of behavior will improve, while revealing more clearly the failure in principle of the approach to yield any understanding. The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.</p><p>&#8288;</p><p>Data of performance provide evidence about the nature of the internal system, particularly so when they are refined by experiment, as in standard field work. But <i>even the most massive collection of data is necessarily misleading in crucial ways</i>. It keeps to what is normally produced, not the knowledge of the language coded in the brain, the primary object under investigation for those who want to understand the nature of language and its use.</p><p>&#8288;</p><p>Unless carefully controlled, <i>AI engineering can pose severe threats</i>. Suppose, for example, that care of patients was automated. The inevitable errors that would be overcome by human judgment could produce a horror story. Or suppose that humans were removed from evaluation of the threats determined by automated missile-defense systems. As a shocking historical record informs us, that would be the end of human civilization.</p><p>&#8288;</p><p><i>I can easily sympathize with efforts to try to control the threats posed by advanced technology, including this case. I am, however, skeptical about the possibility of doing so</i>. I suspect that the genie is out of the bottle. Malicious actors—institutional or individual—can probably find ways to evade safeguards. Such suspicions are of course no reason not to try, and to exercise vigilance.</p></blockquote>
</details>
</li>

File diff suppressed because one or more lines are too long