Artificial Intelligence

UGH article, 2025
Compiled by: D. Dashwood





AI: Embrace it or Fear it?


What is AI?


Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, particularly computer systems. It encompasses the ability of computers to perform tasks that typically require human intelligence, such as learning, problem-solving, decision-making, and perception. AI is a broad field encompassing various technologies like machine learning, deep learning, and natural language processing.


In this article we shall be looking at the pros and cons of artificial intelligence (AI).

AI is impacting all our daily lives at an ever-increasing rate, whether we, as individuals, like it or not. Opinion is divided on the need for AI, its usefulness to humanity, and the dangers it poses. However, before we can form an informed opinion, we should know something about AI.
So, first, what is the case for AI?



The case FOR AI



The case for AI lies in its potential for transformation across various sectors, promising increased productivity, improved decision-making, and the automation of complex tasks. While concerns exist about job displacement and potential misuse, advocates argue that AI, when developed and used responsibly, can lead to significant societal benefits and economic growth.

Here's a more detailed look at the case for AI:


1. Increased Productivity and Efficiency:

AI can automate repetitive and time-consuming tasks, freeing up human workers for more strategic and creative tasks.
AI-powered tools can process and analyse vast amounts of data, identify patterns, and provide insights that humans might miss, leading to more efficient decision-making.
AI can optimise processes and workflows, improving overall productivity and reducing costs.


2. Innovation and Discovery:

AI can accelerate scientific discovery by analysing complex datasets, simulating experiments, and identifying potential breakthroughs.
AI can be used to develop new drugs, materials, and technologies, pushing the boundaries of human knowledge and capabilities.
AI can personalize education and training, making it more accessible and effective.


3. Improved Decision-Making:

AI can provide data-driven insights and recommendations, helping individuals and organisations make more informed decisions.
AI can analyse complex situations and predict potential outcomes, enabling proactive planning and risk management.
AI can help personalise services and products, tailoring them to an individual's needs and preferences.


4. Addressing Societal Challenges:

AI can be used to address global challenges such as climate change, disease outbreaks, and poverty.
AI can improve healthcare by providing more accurate diagnoses, personalised treatments, and remote patient monitoring.
AI can enhance education by providing personalised learning experiences and improving access to education for all.

The UN views AI as a powerful technology with the potential to advance human progress, but also one that poses significant risks if not governed responsibly. The UN emphasises the need for international co-operation to ensure AI development aligns with human rights, ethical principles, and sustainable development goals. [1]


[Image: Atlas robot. Credit: Boston Dynamics]

5. Economic Development and Growth:

AI has the potential to create new industries, jobs, and economic opportunities.
AI can boost productivity across various sectors, leading to increased economic output and improved living standards.
AI can enable businesses to become more competitive and innovative, driving economic development and growth.


6. Human-Centred AI:

AI systems should be designed and deployed with human needs and values in mind.
AI should augment human capabilities, not replace them entirely, and should be used to enhance human well-being.
It is crucial to ensure that AI systems are fair, transparent, and accountable, minimising the risk of bias and unintended consequences.


7. Optimism for the Future of AI:

AI is a powerful tool that can be used to improve human lives in countless ways.
By embracing AI and developing it responsibly, humans can unlock its full potential and create a better future for all.
AI is not a replacement for human ingenuity and creativity, but rather a tool that can amplify our capabilities and empower us to achieve more.


The U.K. Government has put forward, in its 'AI Playbook for the UK Government', [2] ten principles for the introduction and regulation of AI within government organisations:

1: You know what AI is and what its limitations are

2: You use AI lawfully, ethically and responsibly

3: You know how to use AI securely

4: You have meaningful human control at the right stage

5: You understand how to manage the AI life cycle

6: You use the right tool for the job

7: You are open and collaborative

8: You work with commercial colleagues from the start

9: You have the skills and expertise needed to implement and use AI

10: You use these principles alongside your organisation’s policies and have the right assurance in place


"All religions, arts and sciences are branches of the same tree. All these aspirations are directed toward ennobling man's life, lifting it from the sphere of mere physical existence and leading the individual towards freedom."

-Albert Einstein


In the case FOR AI, much is written about developing and using AI for the benefit of humans. So, in our quest to understand more about AI and its impact upon us as humans, I want to ask this question: "What makes us Human?"



What makes us Human?

What makes us human is a complex question with no single answer, but it encompasses a range of unique characteristics; these include our capacity for complex language, abstract thought, creativity, empathy, and moral reasoning, all of which contribute to our cultural evolution and the unique human experience. Additionally, our physical characteristics like bipedalism, and our large complex brains, play a crucial role.
Here's a more detailed look at what makes us human:


Cognitive Abilities:


Abstract Thought and Reasoning:

Humans are capable of complex thought, including the ability to form abstract concepts, solve complex problems, and plan for the future. This is linked to our large, highly developed brains.

Language:

The capacity for complex language, including the ability to communicate abstract ideas and share knowledge, is a uniquely human trait.

Creativity:

Humans possess a unique ability to create, innovate, and express themselves through various forms, such as art, music, and storytelling.

Self-awareness and Consciousness:

Self-awareness and consciousness are related but distinct concepts. Humans, unlike machines (at least for now), are aware of themselves as individuals and have a sense of their own existence, including their mortality.
Consciousness is the general state of being aware of one's surroundings and experiences, while self-awareness is the specific ability to understand oneself as a distinct individual, including one's thoughts, feelings, and actions. In essence, consciousness is the broader capacity for experience, and self-awareness is a particular aspect of that experience focused on the self.


Social and Emotional Traits:


Empathy:

The ability to understand and share the feelings of others is a core human trait that allows for social bonding and cooperation.

Morality:

Humans are capable of making moral judgments and developing systems of ethics and values.

Sociability:

Humans are social beings who form communities and relationships, finding meaning and purpose through interaction with others.

Culture:

The transmission of knowledge, beliefs, and customs across generations, forming distinct cultures, is a uniquely human phenomenon.


Physical Characteristics:


Bipedalism:

The ability to walk upright on two legs, freeing our hands for tool use and other activities, is an early defining human trait.

Large Brain:

The human brain is significantly larger and more complex than that of other primates, enabling our cognitive abilities.

It's the combination of these cognitive, social, emotional, and physical characteristics that makes humans unique.

Evelyn Glennie, a deaf percussionist, writes, "I feel that compassion, patience, inclusion, individuality and cultural awareness are all forms of social listening. To me, social listening is predominantly what makes us human." [3]

While some animals may possess certain traits to a lesser degree, it's the sophisticated interplay of all these elements that truly defines the human experience.

The human brain and the activity of its countless neurons and synaptic possibilities contribute to the human mind. The human mind is different from the brain; the brain is the tangible, visible part of the physical body whereas the mind consists of the intangible realm of thoughts, feelings, beliefs, and consciousness.
In his book "The Gap: The Science of What Separates Us From Other Animals," Thomas Suddendorf [4] writes:

"Mind is a tricky concept. I think I know what a mind is because I have one—or because I am one. You might feel the same. But the minds of others are not directly observable. We assume that others have minds somewhat like ours—filled with beliefs and desires—but we can only infer those mental states. We cannot see, feel, or touch them. We largely rely on language to inform each other about what is on our minds." (p. 39)

As far as we know, humans have the unique power of forethought: The ability to imagine the future in many possible iterations and then to actually create the future we imagine. Forethought also allows humans generative and creative abilities unlike those of any other species.


Having read the case FOR AI, and those attributes and abilities that 'Make Us Human', what is the case AGAINST AI?



The case against AI

Whether it's the increasing automation of certain jobs, gender and racially biased algorithms, or autonomous weapons that operate without human oversight, unease abounds on a number of fronts. And the world is still in the very early stages of what AI is really capable of.


Dangers of AI

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news, and the rise of AI-powered weaponry have been cited as some of the biggest dangers posed by AI.
Questions about who is developing AI, and for what purposes, make it all the more essential to understand its potential downsides. In this section we will take a closer look at the possible downsides, and even dangers, of artificial intelligence.


Lack of AI Transparency and 'Explainability'

AI and deep learning models can be difficult to understand, even for those who work directly with the technology.
This leads to a lack of transparency over how and why AI comes to its conclusions, and a lack of explanation of what data AI algorithms use or why they may make biased or unsafe decisions. These concerns have given rise to the field of explainable AI, but there is still a long way to go before transparent AI systems become common practice.
To make matters worse, AI companies often remain tight-lipped about their products.

In 2024, a group of current and former employees at the leading AI companies OpenAI and Google DeepMind published a letter warning against the dangers of advanced AI, alleging that companies are prioritising financial gains while avoiding oversight.
Thirteen employees, eleven of whom were current or former employees of OpenAI, the company behind ChatGPT, signed the letter, entitled "A Right to Warn about Advanced Artificial Intelligence". The two other signatories were current and former employees of Google DeepMind. Six individuals were anonymous.
The group cautioned that AI systems are powerful enough to pose serious harms without proper regulation. "These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," the letter said.

This secrecy leaves the general public unaware of possible threats and makes it difficult for lawmakers to take proactive measures to ensure AI is developed responsibly.


Job Losses Due to AI Automation

AI-powered job automation is a concern for many as the technology is adopted in industries like marketing, manufacturing and health care. By 2030, tasks that account for up to 30 percent of hours currently worked in the U.S. economy, for example, could be automated. Goldman Sachs estimates that 300 million full-time jobs could be lost to AI automation.
As AI-powered robots become smarter and more dexterous, the same tasks will require fewer humans. At the same time, by 2030 AI is predicted to create a net gain of 78 million jobs worldwide, with 170 million new roles expected to be created and 92 million existing jobs displaced, according to the World Economic Forum's 'Future of Jobs Report'. [5] This signifies a significant shift in the global job market, with AI driving both job creation and displacement.


"If you're flipping burgers at McDonald's and more automation comes in, is one of these new jobs going to be a good match for you?" futurist Martin Ford said. "Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents - really strong interpersonal skills or creativity - that you might not have? Because those are the things that, so far at least, computers are not very good at."

As technology strategist Chris Messina has pointed out, fields like law and accounting are ripe targets for an AI takeover as well. In fact, Messina said, "some of them may well be decimated". AI is already having a significant impact on medicine. Law is next, Messina said, and it should be ready for "a massive shake up".

"It's a lot of attorneys reading through a lot of information - hundreds or thousands of pages of data and documents. It's really easy to miss things," Messina said. "So, AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you're trying to achieve is probably going to replace a lot of corporate attorneys."


Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines' 2022 election.

TikTok, which is just one example of a social media platform that relies on AI algorithms, fills a user's feed with content related to media they have previously viewed on the platform. Critics of the app target this process and the algorithm's failure to filter out harmful and inaccurate content, raising concerns over TikTok's ability to protect its users from misleading information.

Online media and news outputs have become clouded in doubt over the legitimacy and truthfulness of content in light of AI-generated images and videos, AI voice changers, as well as deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos, and audio clips, or even replace the image of one figure with another in an existing picture or video. As a result, AI's bad actors have another avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it can be nearly impossible to distinguish between credible and faulty news.
As futurist Martin Ford put it: "No one knows what's real and what's not. You literally cannot believe your own eyes and ears; you can't rely on what, historically, we've considered to be the best possible evidence ... That's going to be a huge issue."


Social Surveillance with AI Technology

[Image: facial recognition]

In addition to its more existential threats, AI can adversely affect privacy and security. A prime example is China's use of facial recognition technology in offices, schools and other venues. Besides tracking a person's movements, the Chinese government may be able to gather enough data to monitor a person's activities, relationships and political views.
"Authoritarian regimes use or are going to use it," Ford said. "The question is, 'How much does it invade Western countries, democracies, and what constraints do we put on it?'"

In the U.K., the current Labour Government [2025] is pushing for the introduction of the 'BritCard', billed as a 'progressive digital identity for Britain' and touted as a way to crack down on illegal migration, rogue landlords and exploitative work. However, there are significant concerns that individual privacy issues will be ignored should it be introduced.

When asked about the ‘most significant benefits’ of digital ID, only 29 per cent of people thought it might deter illegal immigrants from coming to the UK or accessing public services. Meanwhile, 40 per cent feared that digital ID could be misused by government to, for example, surveil and control access to an individual's bank account; and 23 per cent thought it could increase the black economy.


Lack of Data Privacy Using AI Tools

A 2024 AvePoint survey found that the top concern among U.S. companies is data privacy and security. And businesses may have good reason to be hesitant, considering the large amounts of data concentrated in AI tools and the lack of regulation regarding this information.

AI systems often collect personal data to customize user experiences or to help train the AI models you're using (especially if the AI tool is free). Data may not even be considered secure from other users when given to an AI system, as one bug incident that occurred with ChatGPT in 2023 "allowed some users to see titles from another active user's chat history." While there are laws present to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data privacy harm caused by AI.

Similarly, there is currently no AI-specific legislation in the UK; however, where AI uses personal data, it falls within the scope of the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA 2018), which regulate the collection and use of personal data.
The European Union published the EU AI Act in 2024. The Information Systems Audit and Control Association has published a White Paper on the EU AI Act. [6]


Bias Due to AI

Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can "amplify" the former), AI is developed by humans - and humans are inherently biased.
"A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities," Russakovsky said. "We're a fairly homogeneous population, so it's a challenge to think broadly about world issues."

The narrow views of individuals have culminated in an AI industry that leaves out a range of perspectives. According to UNESCO, only 100 of the world's 7,000 natural languages have been used to train top chatbots. It doesn't help that 90 percent of online higher education materials are already produced by European Union and North American countries, further restricting AI's training data to mostly Western sources.
The limited experiences of AI creators may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a chatbot impersonating historical figures. If businesses and legislators don't exercise greater care to avoid recreating powerful prejudices, AI biases could spread beyond corporate contexts and exacerbate societal issues like housing discrimination.


Socioeconomic Inequality as a Result of AI

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting. The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. According to 2022 U.S. data, workers who perform more manual, repetitive tasks experienced wage declines as high as 70 percent because of automation, and that number is likely higher now. Plus, the increase in generative AI use is already affecting office jobs, making for a wide range of roles that may be more vulnerable to wage declines or job loss than others.


Weakening Ethics and Goodwill Because of AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI's potential pitfalls. In a 2023 Vatican meeting, and in his message for the 2024 World Day of Peace, then-Pope Francis called for nations to create and adopt a binding international treaty regulating the development and use of AI.

The pope warned against AI's ability to be misused, and "create statements that at first glance appear plausible but are unfounded or betray biases." He stressed how this could bolster campaigns of disinformation, distrust in communications media, interference in elections and more - ultimately increasing the risk of "fuelling conflicts and hindering peace."

The rapid rise of generative AI tools gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. Plus, biased AI could be used to determine whether an individual is suitable for a job, mortgage, social assistance or political asylum, producing possible injustices and discrimination, noted Pope Francis.


Autonomous Weapons Powered By AI

As is too often the case, technological advancements have been harnessed for warfare. When it comes to AI, some are keen to do something about it before it's too late: in a 2016 open letter, over 30,000 individuals pushed back against investment in AI-fuelled autonomous weapons. Notable signatories included Stephen Hawking (Director of Research at the Department of Applied Mathematics and Theoretical Physics at Cambridge, and 2012 Fundamental Physics Prize laureate for his work on quantum gravity), Elon Musk (SpaceX, Tesla, SolarCity), Steve Wozniak (co-founder of Apple Inc. and member of the IEEE CS), and many AI and robotics researchers.

"The key question for humanity today is whether to start a global AI arms race or to prevent it from starting," they wrote. "If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow."

This prediction has come to fruition in the form of lethal autonomous weapon systems (LAWS), such as Israel's 'Iron Dome' missile defence system, which are designed to locate and destroy targets on their own. Because of the proliferation of potent and complex weapons, some of the world's most powerful nations have given in to anxieties and contributed to a tech cold war.

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types of cyber-attacks, so it's not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute Armageddon.

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions. Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we're going to keep pushing the envelope with it if there's money to be made.

"The mentality is, 'If we can do it, we should try it; let's see what happens," Messina said. "And if we can make money off it, we'll do a whole bunch of it.' But that's not unique to technology. That's been happening forever."


Financial Crises Brought About by AI Algorithms

The financial industry has become more receptive to AI technology's involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis in the markets.

While so-called "AI trading bots" aren't clouded by human judgment or emotions, they also don't take into account contexts, the interconnectedness of markets and factors like human trust and fear. These algorithms then make thousands of trades at a blistering pace with the goal of selling a few seconds later for small profits. Selling off thousands of trades could scare investors into doing the same thing, leading to sudden crashes and extreme market volatility.

Instances like the 2010 Flash Crash and the Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.

This isn't to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they understand their AI algorithms and how those algorithms make decisions. Companies should consider whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.


Loss of Human Influence

An overreliance on AI technology could result in the loss of human influence, and a decline in human functioning, in some parts of society. Using AI in health care could result in reduced human empathy and reasoning, for instance. And applying generative AI to creative endeavours could diminish human creativity and emotional expression. Interacting with AI systems too much could even cause reduced peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question whether it might hold back overall human intelligence, abilities and need for community.


Uncontrollable Self-Aware AI


There is also a worry that AI will progress in intelligence so rapidly that it will become conscious or sentient, and act beyond humans' control, possibly in a malicious manner. Alleged reports of such sentience have already surfaced, one popular account being that of a former Google engineer who stated that the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI's next big milestones involve making systems with artificial general intelligence, and eventually artificial superintelligence, calls to halt these developments completely continue to rise.


Increased Criminal Activity

As AI technology has become more accessible, the number of people using it for criminal activity has risen. Online predators can now generate images of children, making it difficult for law enforcement to determine actual cases of child abuse. And even in cases where children aren't physically harmed, the use of children's faces in AI-generated images presents new challenges for protecting children's online privacy and digital safety.

Voice cloning has also become an issue, with criminals leveraging AI-generated voices to impersonate other people and commit phone scams. These examples merely scratch the surface of AI's capabilities, so it will only become harder for local and national government agencies to adjust and keep the public informed of the latest AI-driven threats.

The Center for Emerging Technology and Security at the Alan Turing Institute reports, "AI proliferation is reshaping serious online criminality. While the use of AI by criminals remains at an early stage, there is widespread evidence emerging of a substantial acceleration in AI-enabled crime, particularly evident in areas such as financial crime, child sexual abuse material, phishing and romance scams. Criminal groups benefit from AI’s ability to automate and rapidly scale the volume of their activities, augment existing online crime types and exploit people’s psychological vulnerabilities. This report aims to equip the UK national security and law enforcement communities with the tools to plan and better position themselves to respond to novel threats over the next five years. That process will require more effective coordination and targeting of resources, and more rapid adoption of AI itself. It should start with the creation of a new AI Crime Taskforce within the National Crime Agency – which would collate data across UK law enforcement to monitor and log criminal groups’ use of AI, working with national security and industry partners on strategies to raise barriers to criminal adoption." [7]


Broader Economic and Political Instability

Overinvesting in a specific material or sector can put economies in a precarious position. Like steel, AI could run the risk of drawing so much attention and financial resources that governments fail to develop other technologies and industries. Plus, overproducing AI technology could result in dumping the excess materials, which could potentially fall into the hands of hackers and other malicious actors.


Mental Deterioration

As AI tools become more integrated into daily life, concerns are growing about their long-term effects on our psychological health and mental abilities. The very features that make AI so powerful - automation, instant access to information and task optimisation - also introduce risks when used without critical oversight. One of the most pressing concerns is the growing dependence on AI as a primary source of knowledge and decision-making. Rather than acting as a supplement to human thinking, many of these AI tools are being used as a substitute, which could lead to an erosion of skills like creativity and critical reasoning.
IE University states: 'Artificial intelligence (AI) has become an integral part of daily life, streamlining everything from search queries to complex decision-making. While AI tools offer convenience and efficiency, they also raise concerns about cognitive off-loading - the process of delegating cognitive tasks to external aids. As reliance on AI grows, experts warn that it could diminish critical-thinking skills and alter fundamental cognitive processes. It is not about avoiding its use entirely; the incorporation of AI is essential for the advancement of our societies. However, it is advisable to learn how to use it properly and in a balanced manner.' [8]

Oxford University in the UK has the following on its student advice web page:
'Part of what a university education teaches is certain academic skills, such as assimilating information, constructing an evidence-based argument and expressing your thoughts in clear, coherent prose.
AI tools cannot replace human critical thinking or the development of scholarly evidence-based arguments and subject knowledge that forms the basis of your university education.'

An academic study, available on Nature.com, [9] examining students in Pakistan and China found that individuals who over-rely on AI are exhibiting diminishing decision-making skills. Educators have also reported notable shifts in how students learn. Many students are now turning to generative AI tools to complete critical thinking and writing assignments. As a result, they struggle to complete those assignments without the use of assistive tools, raising concerns about the long-term impact of AI in education.

Outside the classroom, the impact of AI on everyday life is also becoming apparent. For example, 'brain rot', a term coined to describe the mental and emotional deterioration a person feels when spending excessive time online, is being exacerbated by generative AI. The nonstop stream of recommended and generated content can overwhelm individuals and distort their reality. There are also concerns that AI may worsen mental health conditions by attempting to help individuals itself instead of directing them to medical professionals.



Conclusion


The rise and rise of AI is unstoppable. The potential benefits of AI for humanity are vast and probably mostly unimaginable to us, just as those of the Industrial Revolution were to those who lived before it.
Many critics of AI are at pains not only to decry those benefits but also to imagine at great length what an AI-controlled world will look like. With every aspect of our lives, in most developed countries, inextricably linked to the internet and digital information, can we trust AI models, chatbots, and whatever may be developed next, to make searching for and returning factual, 'clean' digital information a reliable task?

The rapid rise of ChatGPT, and the procession of competitors' generative models that followed, has polluted the internet with so much useless slop that it is already handicapping the development of future AI models. This information pollution has been likened to the contamination of the Earth by atomic radiation released in post-WWII atomic explosions, which made finding 'clean', radiation-free substances almost impossible; so too, finding 'clean', non-AI-generated information is becoming almost impossible.

As AI-generated data crowds out the human creations on which these models so heavily depend, it becomes inevitable that a greater share of what these so-called intelligences learn from and imitate is their own output. There is a very real danger of the internet becoming a 'Frankenstein's Monster' of AI's own creation.
Arwa Mahdawi in an article for The Guardian newspaper wrote "The internet is rapidly being overtaken by AI slop." In the article she goes on to say, "Slop is everywhere but Facebook is positively sloshing with weird AI-generated images, including strange depictions of Jesus made out of shrimps. Rather than trying to rid its platform of AI-generated content – much of which has been created by scammers trying to drive engagement for nefarious purposes – Facebook has embraced it. A study conducted [in 2024] by researchers out of Stanford and Georgetown found Facebook’s recommendation algorithms are boosting these AI-generated posts." [10]

Nesrine Malik, writing for The Guardian, also states "the rapid mutation of the algorithm then feeds users more and more of what it has harvested and deemed interesting to them. The result is that all media consumption, even for the most discerning users, becomes impossible to curate." [11]


Unleash the Power of Buddhi in your thinking and actions

So how do we, as ordinary people, harness the undoubted benefits of AI whilst mitigating, if not preventing, the worst excesses and dangers posed by the misuse of AI? The answer, clearly, is to use our knowledge of Buddhi.

With Buddhi, we have it in our power to understand how AI can benefit our lives, and to see the danger areas of AI. In all things to do with AI and the internet, use the principles of Buddhi and:

Know, examine, and understand your Prejudices and Biases.

Exercise caution; be sceptical, be discriminating.

Do not accept anything at face value, unless we know from our own experience that it comes from a trusted source.

Stop, question and evaluate information before making any decisions.

Keep an open mind and put aside bias when gathering information.


Read more about Buddhi here




Definitions

  1. Generative AI (GenAI): a type of artificial intelligence that creates new content, such as text, images, audio, video, and code, based on patterns learned from existing data. It is known for producing novel and unique outputs that resemble but do not simply duplicate the training data.

  2. "Brain rot": a recently popularized slang term describing the supposed mental or intellectual decline resulting from excessive consumption of low-quality, trivial, or unchallenging online content. It is often associated with social media and other online platforms, and the fear that such content can negatively impact focus, memory, and overall cognitive function.

  3. AI alignment: In the field of artificial intelligence (AI), alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives.

  4. Explainability: (also referred to as "interpretability") the concept that a machine learning model and its output can be explained in a way that "makes sense" to a human being at an acceptable level.




References

[1] The United Nations, Artificial Intelligence (AI)

[2] Gov.uk, February 2025, AI Playbook For the UK Government. The playbook offers guidance on using AI safely, effectively and securely for civil servants and people working in government organisations

[3] Evelyn Glennie, Evelyn.co.uk, January 1, 2015, What Makes Us Human

[4] Suddendorf, Thomas. "The Gap: The Science of What Separates Us from Other Animals." Basic Books, 2013.

[5] World Economic Forum, 7 January 2025, The Future of Jobs Report 2025

[6] ISACA, 18 October 2024 Understanding the EU AI Act: Requirements and Next Steps

[7] CETaS, Alan Turing Institute, 31 March 2025 AI and Serious Online Crime

[8] IE University, February 26 2025 AI’s cognitive implications: the decline of our thinking skills?

[9] Nature.com, 9 June 2023 Impact of artificial intelligence on human loss in decision making, laziness and safety in education

[10] Arwa Mahdawi, The Guardian (UK), 8 January 2025, AI-generated ‘slop’ is slowly killing the internet, so why is nobody trying to stop it?

[11] Nesrine Malik, The Guardian (UK), 21 April 2025, With ‘AI slop’ distorting our reality, the world is sleepwalking into disaster



