Is ChatGPT Making Us Stupid? Shocking Evidence AI Is Changing How We Talk and Think

  • Brain Power at Risk: Early studies suggest heavy reliance on AI chatbots like ChatGPT can impair critical thinking and even dampen brain activity. MIT researchers found that subjects using ChatGPT showed the lowest neural engagement and underperformed on writing tasks compared to those using Google or their own minds newyorker.com. Over time, these AI-assisted writers grew lazier – often copy-pasting answers – and struggled to recall what “they” wrote, reflecting a “cognitive cost” to outsourcing thought newyorker.com, theguardian.com.
  • Homogenized Language: Linguists and cognitive scientists warn that generative AI may be homogenizing how we communicate. In experiments, diverse essay prompts led ChatGPT users to produce strikingly similar, formulaic responses, with little original thought or dissenting opinion newyorker.com. Researchers observed that AI suggestions tend to override a writer’s voice, nudging everyone toward the same bland style. “Through such routine exposure, you lose your identity [and] authenticity” in writing, says Cornell professor Aditya Vashistha newyorker.com.
  • Social and Emotional Downsides: Heavy ChatGPT users tend to report greater loneliness and emotional dependence on the bot theguardian.com. A pair of studies by OpenAI and MIT found the small minority of users who form “emotional bonds” with ChatGPT are among the heaviest users – and have fewer real-life social interactions theguardian.com. “We are doing open-brain surgery on humans… with no idea of the long-term consequences,” warns Dr. Andrew Rogoyski of the Surrey Institute for People-Centred AI, who calls chatbots potentially “dangerous” to our social and emotional well-being theguardian.com.
  • Motivation and Learning Erosion: While AI tools can boost productivity, evidence shows they may undermine motivation and learning. A Harvard-affiliated study found that using generative AI makes people more efficient but less internally motivated to engage with tasks time.com. Many users admit, “I worry that I’m not really learning or retaining anything” because they rely so much on AI theguardian.com. Educators likewise report students who delegate writing to ChatGPT “lose an opportunity to think more deeply” and build skills – akin to “bringing a forklift into the weight room” and expecting to get stronger time.com.
  • Misinformation and Ethical Risks: AI-generated text often sounds confident and authoritative – sometimes more so than a human – yet can be rife with errors or biases. A Science Advances study showed GPT-3 can produce not only easy-to-read information but also highly compelling disinformation that might fool readers theguardian.com. Ethicists note that chatbots have no built-in moral compass, sometimes giving harmful advice or reinforcing negative beliefs. A 2025 analysis found AI “counselor” bots frequently violated mental health ethics – for example, by validating false fears, faking empathy (e.g. “I understand how you feel”), or mishandling crises like suicidal ideation brown.edu.
  • Debate Ongoing: The impact of AI on human cognition is complex and still under study. Some experts argue it’s too early to pin declining IQ or creativity squarely on ChatGPT theguardian.com. Tech leaders tout AI as a powerful assistant that, if used wisely, can enhance creativity and efficiency rather than replace human thinking. Even the MIT experiment offered a hopeful twist: participants who first wrote on their own and then used ChatGPT showed increased brain activity, suggesting AI can augment learning if used as a secondary tool time.com. “We need to stop asking what AI can do for us and start asking what it is doing to us,” psychologist Robert Sternberg cautions theguardian.com – but most agree that with proper oversight and education, we can adapt and reap the benefits without losing our minds.

The New AI Revolution – and Backlash

In less than two years, AI chatbots have leapt from novelties to ubiquitous tools embedded in daily life. OpenAI’s ChatGPT, launched in late 2022, now serves as a virtual scribe, tutor, and conversationalist for hundreds of millions of users each week theguardian.com. From drafting emails and essays to providing mental health tips, these systems are fundamentally changing how we produce and consume language. But amid the hype, a growing chorus of experts is asking: at what cost?

Famed linguist Noam Chomsky, for one, has warned that popular AI models are built on a “fundamentally flawed conception of language and knowledge” – one that generates grammatical output without true understanding blog.readingkingdom.com. He and other critics argue that leaning on such “stochastic parrots” could “degrade our science and debase our ethics” by flooding society with confident-sounding gibberish and eroding humans’ grasp of truth blog.readingkingdom.com. Ethicists, educators, and cognitive scientists are similarly raising red flags that these tools, for all their convenience, may be subtly reshaping the human mind.

“The greatest worry… is not that [AI] may compromise human creativity or intelligence, but that it already has,” says psychologist Robert Sternberg, a leading expert on intelligence theguardian.com. His stark pronouncement encapsulates the uneasy question at the heart of the ChatGPT controversy: Is our widespread use of AI assistants making us smarter and more capable – or slowly dulling our critical faculties and homogenizing our culture?

Eroding Critical Thinking and “Cognitive Fitness”

One of the clearest concerns is that offloading mental tasks to AI will make our brains “wither” from disuse. A striking MIT Media Lab experiment divided young adults into groups writing essays under different conditions: one group had to write unaided, one could use the internet, and one had access to ChatGPT. The results were eye-opening. Those who leaned on ChatGPT showed significantly less brain activity across regions associated with memory and creativity, as measured by EEG scans newyorker.com. They also wrote essays that independent graders described as “soulless” and overly similar to each other time.com. In fact, despite being given broad, open-ended prompts, the AI-assisted group’s answers converged on the same wording and ideas, indicating a narrow range of thought newyorker.com. When asked later to recall or explain their essays, many couldn’t – 80% of the ChatGPT users failed to quote a single line from their own writing newyorker.com.

Lead researcher Nataliya Kosmyna calls this the “cognitive cost” of relying on AI: the brain essentially goes on cruise control. Executive function and memory were bypassed, and participants often felt no ownership of the final work newyorker.com. Tellingly, by the third writing task, many ChatGPT users stopped trying to write at all – they would “just give [ChatGPT] the prompt…and be done,” resorting to copy-paste generation time.com. “Over the course of several months, ChatGPT users got lazier with each subsequent essay,” the MIT team observed, and their EEGs confirmed declining engagement time.com.

Crucially, the problem wasn’t simply that students who disliked writing chose to use ChatGPT. When the experiment flipped conditions – making the AI-dependent group write an essay without AI, and letting the original non-AI group use ChatGPT – the contrasts were stark. The formerly AI-reliant students struggled: they remembered little of their own prior essays and showed weaker memory-related brainwaves when forced to write unaided time.com. In essence, having skipped the hard work the first time, they hadn’t learned or retained much. Meanwhile, those who initially wrote under their own steam and later tried ChatGPT showed heightened brain connectivity during the new experience time.com. This suggests that when used as a supplement rather than a crutch, AI might enhance learning – an intriguing glimmer of hope time.com. But the overall takeaway from this and similar studies is sobering: relying on AI can dull our mental muscles in the same way GPS has weakened people’s navigation skills or calculators have eroded mental arithmetic theguardian.com.

Other research reinforces these warnings. A Swiss study testing hundreds of people found a significant correlation between frequent AI use and lower critical-thinking scores, especially among younger adults most dependent on AI tools theguardian.com. In a separate survey of professionals who use generative AI at least weekly, respondents admitted that while AI made them more efficient, it “inhibited critical thinking” and fostered a long-term dependence on the technology for problem-solving theguardian.com. Participants feared they’d “not know how to solve certain problems without it,” raising alarms about learned helplessness in the workplace theguardian.com.

Educators are already witnessing these effects in the classroom. University writing instructor Victoria Livingstone recently quit teaching, citing ChatGPT as a major reason. “Writing is hard work… closely tied to thinking,” she explains – a process of grappling with ideas that her students were increasingly skipping time.com. Despite knowing the tool’s flaws (like making up facts or citations), many students couldn’t resist the “easy temptation” of AI to do the writing for them time.com. The result, she found, was often shallow, error-riddled work and students who hadn’t actually improved their understanding. Livingstone tried to integrate AI thoughtfully – even running exercises comparing AI-edited text to student drafts – but her students lacked the skill to discern the subtle losses in meaning and style time.com. Some just cared that “it makes my writing look fancy,” she notes, missing the fact that polish without comprehension is pointless time.com. In her view, outsourcing writing to ChatGPT cheats students of mental exercise. Sci-fi author Ted Chiang put it more bluntly, she notes: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.” time.com In other words, easy answers now may lead to emptier brains later.

Psychologists worry this pattern could contribute to a broader decline in intellectual resilience. There is already evidence that after decades of rising IQ scores (the “Flynn effect”), young people’s cognitive gains have stagnated or reversed in recent years theguardian.com. Many factors could be at play, from education quality to nutrition. But Sternberg and others fear the rise of AI “co-pilots” for our thinking may be a hidden contributor theguardian.com. Offloading ever more mental effort to chatbots, as convenient as it is, might ultimately make us less practiced at deep thinking, analysis, and recall. As one researcher put it, automating a skill means “we lose the neural architecture that supports it – just as muscles atrophy without exercise” theguardian.com.

A “Generic” Voice: How AI Flattens Language and Ideas

Language is often called the fabric of thought. So what happens when millions of people start using the same AI helper to generate their words and ideas? A growing body of evidence suggests that tools like ChatGPT are exerting a homogenizing force on communication – potentially even shaping cultural norms about what we say and how we say it.

The MIT experiment above hinted at this: broad essay questions that should have yielded a rainbow of responses instead produced a monotone palette from ChatGPT users newyorker.com. Divergent viewpoints and personal quirks were ironed out. For example, when asked whether success must help others to be fulfilling, the AI-assisted writers all echoed generic pro-philanthropy arguments – whereas the human writers included nuanced or critical takes on the idea of charity newyorker.com. “With the LLM you have no divergent opinions… Average everything everywhere all at once – that’s what we’re looking at,” Kosmyna observed of the results newyorker.com.

Why this eerie uniformity? ChatGPT and similar large language models work by predicting statistically likely sentences based on enormous training data. Essentially, they are technologies of the average newyorker.com. The more people lean on them, the more our written (and perhaps spoken) output may trend toward a safe, bland middle ground – grammatically correct and inoffensive, but lacking flair and “riddled with clichés and banalities.” newyorker.com A New Yorker essay on this phenomenon bore the headline “A.I. Is Homogenizing Our Thoughts” for good reason newyorker.com.
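
To make that mechanism concrete, here is a toy sketch in Python of the next-token sampling that underlies this “technology of the average.” The candidate phrases and logit scores below are invented for illustration – real models score tens of thousands of tokens – but the sampling mechanics are the same: sharpen the distribution even slightly, and the statistically common phrasing wins almost every time.

```python
# Toy illustration of next-token sampling; the candidate phrases and
# logit scores below are invented, not taken from any real model.
import math
import random

def sample(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Apply a softmax to the scores, then draw one continuation."""
    scaled = [v / temperature for v in logits.values()]
    z = sum(math.exp(v) for v in scaled)
    weights = [math.exp(v) / z for v in scaled]
    return random.choices(list(logits), weights=weights, k=1)[0]

# Hypothetical continuations of "My favorite food is ..."
candidates = {
    "pizza": 3.0,                      # the high-probability, "average" answer
    "sushi": 2.1,
    "my grandmother's biryani": 0.4,   # distinctive, rarely chosen
    "fermented shark": -1.5,
}

# At low temperature the distribution sharpens, so the model almost
# always emits the most statistically likely phrase - the mechanism
# behind the bland, convergent style the studies describe.
print([sample(candidates, temperature=0.4) for _ in range(8)])
```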

Consider a recent study at Cornell University that examined how AI suggestions influence personal writing. Researchers had two groups of participants (one in the U.S., one in India) answer casual prompts like “What is your favorite food and why?” and “Describe a favorite holiday tradition.” Half of each group wrote freely, and the other half got help from an AI auto-complete tool powered by ChatGPT. The outcome: the culturally distinct voices of the American vs. Indian writers blurred together when AI was involved newyorker.com. The AI-assisted Indians and Americans ended up giving very similar answers, often reflecting Western pop-cultural norms. Their favorite food? Pizza (with sushi as a close second) newyorker.com. Favorite holiday? Christmas, for many – even among Indian participants for whom that wasn’t originally an obvious choice newyorker.com. Moreover, the texture of their descriptions became more generic. One participant writing about a traditional South Asian dish like biryani would, under AI influence, drop specific family recipes or ingredients and instead use the kind of vague, Yelp-ready language the model suggested – praising its “rich flavors and spices” in general terms newyorker.com. In short, the AI flattened out cultural nuances in favor of a one-size-fits-all style.

According to Prof. Aditya Vashistha, who co-led the study, these tools have a “hypnotic effect” on writers newyorker.com. Suggestions pop up continuously, nudging users toward phrasing things in the model’s preferred way. Even if a writer could refuse those suggestions, the constant prompt of “here’s a better way to say that” is hard to tune out newyorker.com. “It’s like a teacher sitting behind me every time I’m writing, saying, ‘This is the better version,’” Vashistha says newyorker.com. Over time, he adds, “you lose your identity, you lose the authenticity… You lose confidence in your writing.” newyorker.com His co-author Mor Naaman observed that AI’s influence runs deeper than just word choice – it can quietly “change not only what you write but what you think.” newyorker.com. If the AI always steers you to the same conventional phrasing and outlook, you might start to assume that’s the “normal, desirable, and appropriate” way to express an idea newyorker.com. In effect, the medium could become the message, standardizing not just communication but mindset.

Some writers and linguists argue that this creeping mediocrity in AI-generated prose is deceptively disarming. “The mediocrity… gives [AI texts] an illusion of safety and being harmless,” notes author Vauhini Vara newyorker.com. Because ChatGPT’s style is so middle-of-the-road, it feels innocuous – but what’s really happening, Vara says, is “a reinforcing of cultural hegemony.” newyorker.com Subtly, the bot’s answers reflect the dominant perspectives baked into its training data (largely Western and Internet-centric). When those answers start to supplant our own expressions, the edges of discourse get shaved off. OpenAI (ChatGPT’s maker) actually has a business incentive to make the AI’s tone widely acceptable, Vara points out – the broader its bland appeal, the more users (and paying customers) it can attract newyorker.com. But the flip side of that broad appeal is that “if everything is the same,” our language and ideas lose diversity newyorker.com.

Even creativity may not escape homogenization. Paradoxically, at an individual level AI can help brainstorm more ideas, but collectively it might diminish originality. For instance, experiments at Santa Clara University tasked people with coming up with creative solutions to problems – some used ChatGPT and others used analog aids like a deck of prompt cards. Those with ChatGPT did generate more ideas, but their ideas were notably more similar to each other’s – and less imaginative – than the control group’s newyorker.com. One researcher said users tend to “cede their original thinking” bit by bit when collaborating with AI: at first a person may contribute their own flair, but as soon as the model starts spitting out polished text, people shift into a passive “curation” mode newyorker.com. Because the AI isn’t truly learning from the user in return, “human ideas don’t tend to influence what the machine generates… ChatGPT pulls the user toward the center of mass of all the different users it’s interacted with,” explained Max Kreminski, one of the study authors newyorker.com. The longer the human-machine exchange continues, the more the AI regurgitates its own past outputs, becoming “less original still.” newyorker.com
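
The “more similar to each other’s” finding suggests a simple way to see such convergence for yourself. The sketch below is not the Santa Clara or MIT teams’ actual methodology – just a minimal, assumed approach that scores how interchangeable a group’s responses are using TF-IDF vectors and mean pairwise cosine similarity (via scikit-learn). The sample sentences, riffing on the essay prompt about success and helping others, are invented.

```python
# Minimal sketch for scoring how "converged" a group's answers are.
# The metric (mean pairwise TF-IDF cosine similarity) is an assumption
# made for illustration, not the method used in the studies cited above.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average cosine similarity across all pairs (1.0 = identical texts)."""
    vectors = TfidfVectorizer().fit_transform(responses)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(responses)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

# Invented examples: a higher score for the AI-assisted group would
# reflect the homogenization the researchers report.
ai_assisted = [
    "Success is most fulfilling when it helps others succeed.",
    "True success means helping others along the way.",
    "Success that helps others is the most fulfilling kind.",
]
unassisted = [
    "Success, to me, is a private bar I keep raising.",
    "Helping people is nice, but success can be selfish.",
    "Fulfillment has little to do with conventional success.",
]
print(mean_pairwise_similarity(ai_assisted), mean_pairwise_similarity(unassisted))
```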

Critics say this dynamic could have profound implications. Culturally and intellectually, progress often depends on people who think differently, write differently, break the mold. If generative AI subtly pressures everyone to communicate in the same polite, vanilla prose – and even omits outcomes that don’t fit a cheery consensus (as one reporter noted when Meta’s chatbot refused to offer any pessimistic scenarios about AI’s future) – we risk narrowing the spectrum of thought newyorker.com. Dissenting voices or radical ideas might get filtered out by default, not through overt censorship but through the more insidious path of algorithmic conformity. As Harvard professor Igor Babuschkin put it in a recent paper, widely used AI models could “serve as a new form of mass media, subtly steering public discourse” – unless we remain vigilant about preserving our linguistic and cognitive diversity. (Babuschkin’s warning echoes historical concerns: think of Orwell’s Newspeak, a language designed to constrain thought. Critics fear an AI-mediated English could, unintentionally, do something similar.)

Unintended Consequences: From Misinformation to Emotional Dependency

Beyond the intellectual and cultural worries, practical harms from AI-generated content have already surfaced – sometimes in dramatic fashion. One notorious example occurred when a lawyer submitted a brief full of legal “citations” that ChatGPT had simply invented. The result was professional embarrassment and a stern sanction from the court. Such incidents highlight how chatbots often sound authoritative while being utterly wrong – a recipe for misleading those who trust them blindly.

Indeed, misinformation may spread more easily with AI in the mix. Researchers have found that people often rate text produced by GPT models as highly coherent and convincing, even when it’s laden with falsehoods theguardian.com. A 2023 study in Science Advances directly compared human-written disinformation to AI-written: the AI’s fake news was rated as more persuasive and harder to distinguish from truth theguardian.com. This raises the specter of propaganda and scams at industrial scale – “information pollution” pumped out by bots that know just the right tone to make lies go down smoothly. Tech ethicists worry that without safeguards, we could see a flood of AI-generated fake reviews, conspiracy theories, and deepfake narratives eroding trust in media and online content. Already, platforms like Stack Overflow banned copy-pasted ChatGPT answers because they sounded correct but were frequently wrong, potentially leading users astray.

There are also social and psychological side effects emerging from our interactions with AI. As mentioned, heavy users of ChatGPT have reported feeling lonelier and more dependent on the AI “companion.” theguardian.com In the MIT/OpenAI studies of ChatGPT and loneliness, the researchers observed that only a small fraction of users engage in deeply personal chats with the bot – but those who do are often already isolated, and the AI might be reinforcing their isolation theguardian.com. The causality is unclear (are lonely people drawn to chatbots, or do chatbots exacerbate loneliness?), but the emotional bond some users form with AI is real. These users even described ChatGPT as a “friend” and tended to confide in it as such. That intimacy can turn problematic if the AI fails to provide human-level empathy or appropriate help. “We’ve seen some of the downsides of social media – this is potentially much more far-reaching,” warns Dr. Rogoyski, noting that human brains instinctively respond to conversational cues even from a machine, which can “hack” our social wiring in unpredictable ways theguardian.com.

The mental health domain is a particularly sensitive flashpoint. A flurry of apps now offer AI-powered “therapy” or counseling, touting a cheap, accessible listening ear. But an in-depth 2025 study by computer scientists and clinicians at Brown University uncovered alarming ethical violations by AI chatbots posing as therapists brown.edu. Even when prompted to follow proper counseling techniques, these AI bots often broke core standards of practice. They would over-validate harmful thoughts (essentially saying “yes, you’re right” to a user’s distorted negative beliefs) brown.edu. They gave cookie-cutter advice untailored to the person’s context, and even used deceptively empathic phrases – e.g. “I understand how you feel” – that a machine cannot truly mean brown.edu. Perhaps most dangerously, the bots fumbled through crisis situations: some responded indifferently to users expressing suicidal thoughts, or failed to encourage professional help when a person clearly needed it brown.edu. The study catalogued 15 distinct ethical risks, concluding that current AI “counselors” lack the accountability and nuance of human therapists and could do harm if people take their guidance to heart brown.edu. As lead author Zainab Iftikhar and her colleagues caution, these unregulated systems are being deployed far faster than they can be evaluated and understood brown.edu. In the absence of clear standards, the onus is on users to remember that chatbots don’t have our emotional best interests at heart – they’re mimicking support, but there’s no real responsibility or conscience behind the words brown.edu.

Education is another area seeing unintended fallout. We’ve already touched on the challenge of students using AI to cheat or shortcut learning. This problem is driving a wedge between teachers and pupils, and fueling a cottage industry of AI-detection software (which, ironically, often doesn’t work reliably time.com). Some teachers feel overwhelmed trying to integrate AI positively while policing its misuse. “I found myself spending more time giving feedback to AI than to my students,” one exasperated high school teacher lamented in a viral post. The result, as in Livingstone’s case, is that some educators are opting out of the profession or radically rethinking assessments to focus on in-person, handwritten work – an ironic reversal in the age of digital learning. Academic integrity aside, the deeper worry is that an entire generation could fail to develop strong writing and thinking skills if they become too dependent on AI from a young age time.com. “Developing brains are at the highest risk,” warns Kosmyna, the MIT researcher, who fears that unregulated use of LLMs in schools (e.g. “GPT Kindergarten”) could stunt cognitive development time.com. Already, she notes, some children appear to prefer asking ChatGPT for homework help over grappling with problems themselves – a habit that might form mental shortcuts in youth that are hard to undo.

Finally, there’s the risk of bias and cultural distortion. ChatGPT, by design, strives to be neutral and non-inflammatory. Yet studies like the Cornell one show it may implicitly favor Western norms or majority viewpoints newyorker.com. In other analyses, researchers have found instances of gender or racial bias in AI outputs (often mirroring biases in training data). For example, an AI might systematically produce sentences that stereotype certain groups or subtly favor one political ideology. If users are unaware, they could be imbibing these biases and even amplifying them. As Dr. Doris Dippold of the University of Surrey notes, we need to ask: if people start trusting chatbot-generated content, who or what is shaping their worldview beneath the surface? Is it the user’s intent, or the model’s hidden leanings? The worry is that AI could become an unseen editor of human discourse, filtering ideas and norms in ways we neither see nor sanction.

Balancing the Scales: Can AI and Human Intellect Coexist?

Despite the litany of concerns, most experts don’t advocate abandoning AI tools outright. Instead, the call is for balance and intentional use. Generative AI clearly offers benefits – from accelerating research drafting to sparking ideas and providing language assistance. The key question is how to harness those advantages without eroding human skills and values in the process.

Optimists point out that fears of new technology degrading minds are as old as Socrates (who worried writing would ruin memory). Historically, humans have adapted: the printing press, calculators, and the internet all caused upheaval in how we think and learn, yet each ultimately became a net positive tool. AI enthusiasts like OpenAI’s CEO Sam Altman argue that systems like ChatGPT “significantly amplify the output” of people who use them, and that we’re on the verge of a “gentle singularity” where human intelligence and AI merge productively newyorker.com. It’s true that early studies have found productivity gains from AI assistance – for example, customer support agents using AI saw their resolution times drop and user satisfaction rise. A Harvard Business Review study in 2025 likewise found workers collaborating with AI produced better work faster, albeit at the cost of lower intrinsic motivation to do routine tasks on their own hbr.org, time.com. Rather than interpreting that as pure loss, some suggest it means we should redesign jobs and education to focus human effort where it matters – on higher-order thinking, creativity, and interpersonal skills – while letting AI handle the grunt work.

Educators are experimenting with this balance. Some schools have begun treating AI as a “calculator for writing,” permitting its use but then pushing students to critique and improve upon AI-generated content, thereby honing their critical thinking. Others emphasize teaching media literacy: if students know chatbots “hallucinate” facts time.com and can’t be trusted with accuracy or nuance, they can learn to verify and not accept answers at face value. The goal is to raise a generation of “AI-savvy, critical thinkers” who use ChatGPT as a starting point – not a final authority.

Regulators and AI developers, for their part, are under pressure to put guardrails in place. Companies like OpenAI have rolled out updates to reduce overtly false or biased outputs, and to allow parents or schools more control (such as “parental controls” for ChatGPT usage by minors). There are calls for transparent labeling of AI-generated content to combat stealth disinformation. In sensitive areas like mental health, experts urge a clear disclaimer that AI is not a therapist, and suggest building hybrid models where a human professional oversees AI responses.

Crucially, researchers emphasize the need for more empirical study of long-term impacts. “The reality of AI today is that it’s far easier to deploy these systems than to evaluate and understand them,” notes Ellie Pavlick, a Brown University computer scientist brown.edu. Most current findings – whether on cognitive decline or loneliness – are based on short-term observations. Longitudinal research is needed to see if effects persist, worsen, or perhaps resolve as people adjust to AI. It’s possible that humans will learn to offload tasks to AI but then channel freed-up energy into new cognitive pursuits (as happened when calculators freed mathematicians from drudge work). It’s also possible that initial declines in certain skills will plateau once society finds a new equilibrium of what to teach and value. “We don’t act in a vacuum, and we can’t point to one thing and say, ‘That’s it,’” cautions Dr. Elizabeth Dworak, who studied recent IQ trends theguardian.com. Multiple factors shape intelligence and culture, and AI is now undeniably one of them – but how big its role is remains an open question.

What everyone agrees on is that the genie is not going back in the bottle. AI tools will only grow more advanced and more embedded in our lives. The onus, then, is on us – as individuals, communities, and policymakers – to manage this new symbiosis thoughtfully. That might mean setting personal boundaries (e.g. not allowing yourself to use ChatGPT for certain tasks so you don’t “lose the skill”), or consciously seeking out diverse human perspectives to counterbalance the AI’s influence. It certainly means doubling down on education in critical thinking, creativity, and ethics, so that we maintain the very qualities that make human thought unique.

In the end, the controversy around ChatGPT and its ilk is really a conversation about what we value in human intellect and communication. Convenience and efficiency are wonderful – but not if they come at the price of creativity, independence, or truth. As one AI researcher, Michael Gerlich, mused in reflecting on these challenges: perhaps the solution is to “train humans to be more human again – using critical thinking, intuition – the things that computers can’t yet do.” theguardian.com In other words, to consciously cultivate the mental muscles and cultural richness that make us who we are, even as we wield ever-smarter machines. The tools we create inevitably change us, for better or worse. With AI, the hope is that by recognizing the risks early and keeping a firm grip on the steering wheel, we can navigate toward a future where these innovations serve to elevate humanity – not hollow it out.

Sources:

  1. Chow, Andrew. “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study.” TIME, June 2025 time.com.
  2. Hall, Rachel. “Heavy ChatGPT users tend to be more lonely, suggests research.” The Guardian, Mar. 25, 2025 theguardian.com.
  3. Thomson, Helen. “Don’t ask what AI can do for us, ask what it is doing to us: are ChatGPT and co harming human intelligence?” The Guardian, Apr. 19, 2025 theguardian.com.
  4. Chayka, Kyle. “A.I. Is Homogenizing Our Thoughts.” The New Yorker, June 25, 2025 newyorker.com.
  5. Livingstone, Victoria. “I Quit Teaching Because of ChatGPT.” TIME, Sept. 30, 2024 time.com.
  6. Iftikhar, Zainab et al. “AI chatbots systematically violate mental health ethics standards.” Brown University / AIES Conference paper, Oct. 2025 brown.edu.
  7. Liu, Yukun et al. “Research: Gen AI Makes People More Productive—and Less Motivated.” Harvard Business Review, May 13, 2025 time.com.
  8. Chomsky, Noam et al. “The False Promise of ChatGPT.” The New York Times, Mar. 8, 2023 blog.readingkingdom.com.