perrotuerto.blog/public/perro.json

{"acerca": "Hola, soy perro tuerto. Mi formaci\u00f3n acad\u00e9mica es en Filosof\u00eda, mi\n profesi\u00f3n es en la edici\u00f3n de publicaciones (libros, fanzines, revistas\u2026)\n y mi programaci\u00f3n se enfoca en el desarrollo de metodolog\u00edas libres para la\n publicaci\u00f3n. Soy fan de las humanidades, la paleoantropolog\u00eda y las\n ciencias de la computaci\u00f3n, as\u00ed como soy voluntario en organizaciones sobre\n edici\u00f3n, *software* y cultura libres, como\n [Programando LIBREros](https://programando.li/breros),\n [Miau](https://t.me/miau2018), [Cuates](https://cuates.net/) o\n [Wikipedia](https://wikimedia.mx/). Doy soporte t\u00e9cnico a la\n [Academia Mexicana de la Lengua](https://academia.org.mx/) y puedo ayudarte\n en tus proyectos.\n\n **En este espacio comparto enlaces que me parecen ch\u00e9veres.**", "contacto": {"site": "https://perrotuerto.blog", "gitlab": "https://gitlab.com/perrotuerto", "cuates": "https://git.cuates.net/perro", "wikipedia": "https://es.wikipedia.org/wiki/Usuario:Perrotuerto", "github": "https://github.com/perrotuerto", "email": "hi@perrotuerto.blog"}, "enlaces": [{"id": 94, "url": "https://chomsky.info/20230503-2/", "title": "Noam Chomsky Speaks on What ChatGPT Is Really Good For", "description": "Entrevista a Chomsky sobre ChatGPT y otros Modelos Grandes de Lenguaje.", "notes": "# Opini\u00f3n de Chomsky sobre los Modelos Grandes de Lenguaje como ChatGPT\r\n\r\nEn esta entrevista en ingl\u00e9s realizada por Polychroniou, Chosmky de nueva cuenta manifiesta su escepticismo ante los [Modelos Grandes de Lenguage](https://w.wiki/6tS6) (LLM, por sus siglas en ingl\u00e9s). En orden de aparaci\u00f3n mis citas favoritas fueron (el resaltado es m\u00edo):\r\n\r\n> As I understand them, the founders of AI---Alan Turing, Herbert Simon, Marvin Minsky, and others---regarded it as science, part of the then-emerging cognitive sciences, making use of new technologies and discoveries in the mathematical theory of computation to advance understanding. *Over the years those concerns have faded and have largely been displaced by an engineering orientation*.\r\n\r\n> ...\r\n\r\n> Engineering projects can be useful, or harmful. Both questions arise in the case of engineering AI. Current work with Large Language Models (LLMs), including chatbots, *provides tools for disinformation, defamation, and misleading the uninformed*. The threats are enhanced when they are combined with artificial images and replication of voice.\r\n\r\n> ...\r\n\r\n> These considerations bring up a minor problem with the current LLM enthusiasm: *its total absurdity*, as in the hypothetical cases where we recognize it at once. But there are much more serious problems than absurdity.\r\n\r\n> ...\r\n\r\n> One is that the LLM systems are designed in such a way that *they cannot tell us anything about language, learning, or other aspects of cognition*, a matter of principle, irremediable. Double the terabytes of data scanned, add another trillion parameters, use even more of California\u2019s energy, and the simulation of behavior will improve, while revealing more clearly the failure in principle of the approach to yield any understanding. 
The reason is elementary: The systems work just as well with impossible languages that infants cannot acquire as with those they acquire quickly and virtually reflexively.\r\n\r\n> ...\r\n\r\n> Data of performance provide evidence about the nature of the internal system, particularly so when they are refined by experiment, as in standard field work. But *even the most massive collection of data is necessarily misleading in crucial ways*. It keeps to what is normally produced, not the knowledge of the language coded in the brain, the primary object under investigation for those who want to understand the nature of language and its use.\r\n\r\n> ...\r\n\r\n> Unless carefully controlled, *AI engineering can pose severe threats*. Suppose, for example, that care of patients was automated. The inevitable errors that would be overcome by human ju