Current Affairs: Discussion Group - ChatGPT, fear it or embrace it?

GPT (Generative Pre-trained Transformer – generative because it generates human-like text) is an AI-based language tool able to produce in-depth texts in response to complex questions and to respond to a wide array of requests. Basically, ChatGPT is a free, cover-all chatbot built on this technology.

But can AI and its associated tools ever hope to approximate human consciousness? As we all know, for the time being at least, AI cannot understand human emotional intelligence well enough to mimic it, cannot really understand the world, and cannot comprehend abstract concepts; it can recognize information but not understand its meaning, with all the insinuation, nuance, and irony embedded in human language. Reasoning and finding solutions the way the human brain does is therefore still beyond AI's capabilities. It can carry out multiple tasks, but it can also produce erroneous facts, known in the jargon as hallucinations: statements that sound plausible but are false, and that humans may not recognize as false.

ChatGPT offers exciting opportunities and is extremely useful in many respects, but it is also surrounded by doubts and fears that software engineers and researchers are working to dispel: a worry for educators, a threat to transparent science (or, as some argue, the ethical way forward for it), bias, false facts, ethical considerations, and rights issues surrounding the extraction of collective data (“The era of machine learning effectively renders individual denial of consent meaningless” – Martin Tisné, Luminate). All these issues and more make for an interesting discussion.

Many of you will have your own sources of information and thoughts to bring to the discussion, but below are just a few interesting articles to read on the matter.

The meeting will be in person and will take place in the downtown area of Buenos Aires. If you wish to attend, please respond to phyllisbarrantes@gmail.com. As promised last year, tea, coffee and cakes will be provided. For reasons of space, the session will be limited to 12 participants. The exact address will be sent to those who RSVP. 

The Promise and Peril of Generative AI – Diane Coyle, Project Syndicate, April 2023

Exploring the impact of language models – Brookings Institution

Ban on listing of ChatGPT as co-author on papers – The Guardian

AI chatbots are coming to search engines - can you trust the results? – Chris Stokel-Walker, Nature, Feb 2023

What kind of mind does ChatGPT have? – Cal Newport, The New Yorker, April 2023

What if we could just ask AI to be less biased?

Can generative AI’s stimulating powers extend to productivity? – John Thornhill, Financial Times, Jan 2023

Whispers of AI’s modular future – James Somers, The New Yorker, Feb 2023

The race of the AI labs heats up – The Economist, Feb 2023

Why ChatGPT could be disastrous for truth in journalism – Emily Bell, The Guardian, March 2023

Why open-source generative AI models could be the ethical way forward for science – Arthur Spirling, Nature, April 2023

The Data Delusion – Martin Tisné, Luminate, July 2020

Can A.I. Treat Mental Illness? – Dhruv Khullar, The New Yorker, March 2023