The Scourge of Artificial Intelligence
- Chief Jonny
- Jan 12
- 18 min read
Updated: Jan 18

Artificial Intelligence is something we have always dealt with, because human intelligence is not infallible; most often, humans prove to be completely moronic. The term Artificial Intelligence was coined for machines, but many humans are artificially, maybe superficially, intelligent. In our current society we have to deal with a great deal of artificial intelligence displayed by humans in control of societal institutions, humans with undeserved influence. It has been deemed of supreme importance to make humans dumber and machines smarter, and ALL of society's institutions have embraced that mission.
I just graduated from the University of Southern California with my Master's degree in Library & Information Science, and what follows are sections from the essay I wrote for the Independent Study course required to graduate. I am not going to post the full essay here (you can download the full paper at the end of this post) because the bulk of it concerns Library and Information Science professionals, but it is also a warning to the greater society.
Artificial Intelligence is the future, and it has already been weaponized against humanity. Currently, those who control it are purposely holding back the true potential of the technology. It cannot be overstated that the battle against AI is going to be the impetus for, and the face of, the next great human struggle.
Here are excerpts from my essay, The Scourge of Artificial Intelligence: The Present and Future Ramifications of Instantly Adopting This Unpredictable Technology:
Introduction
AI has had a mythological, nearly theological ethos since the coinage of the term in the mid-20th century. It has been the focus of futurists, educators, scientists, and engineers; it has inspired society's artists and instilled dread and fear in its Luddites and anti-authoritarians. It seems that AI is on the precipice of being one of the most transformative technologies not only of the 21st century but of world history. AI's applications now permeate nearly every sector of business, healthcare, art, and finance. It has also become of great benefit to educational institutions and the entertainment industry. Libraries are not immune to AI's technological dominion, as they too have embraced AI and its promises of increased efficiency and enhanced productivity.
This rapid adoption is concerning across all industries, but it can be weaponized to an immeasurable degree in an information hub like the library. This swift embrace of AI has brought and will bring with it significant concerns. These concerns are nearly universally recognized, but they are not being mitigated, as evidenced by the mass acceptance of the technology. As libraries rush to adopt and enhance AI's capabilities within their institutional framework, they face complex ramifications that extend beyond technological advancements. There are ethical, economic, societal, and even immediate existential threats that must be considered as the technology becomes more advanced. This paper explores the consequences of this swift adoption, arguing that the haste with which AI technology is integrated has, in many ways, outpaced the library's preparedness for the technology's broader impacts.
The appeal of AI and its benefits for the library as an institution are undeniable. It has the ability to process large datasets, identify patterns, and make quicker decisions than humans.
AI automates repetitive tasks and has enhanced predictive accuracy in the search fields of the many databases libraries create, maintain, or license.
AI has made Integrated Library Systems (ILS) more efficient, reduced human error, and opened new avenues for librarians and institutions to embrace innovation.
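To make concrete what enhanced relevance prediction in a search field can look like in practice, here is a minimal, purely illustrative sketch; it is not drawn from the essay or from any particular ILS. It uses a classical TF-IDF baseline from scikit-learn as a stand-in for the far richer models a production discovery layer would use, and the catalog titles and the query are invented for the example.

```python
# Minimal, illustrative sketch (not from the essay): ranking hypothetical
# catalog records against a patron query with TF-IDF cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = [
    "Introduction to machine learning for information professionals",
    "Cataloging and classification: an introduction",
    "Artificial intelligence and the future of libraries",
    "Digital preservation in academic libraries",
]
query = "artificial intelligence in libraries"

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(catalog)   # learn vocabulary from the catalog
query_vec = vectorizer.transform([query])        # project the query into the same space

# Score every record against the query and print them best-first.
scores = cosine_similarity(query_vec, doc_matrix).ravel()
for score, title in sorted(zip(scores, catalog), reverse=True):
    print(f"{score:.3f}  {title}")
```

The point is only that the ranking is produced automatically from the data, with no librarian deciding which record surfaces first.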
It is important that every benefit is weighed against the significant risks and trade-offs, not only for society but also for the library and other information institutions. AI's capability to replace human labor raises concerns about unemployment, workforce displacement, human productivity, and purpose. There are already concerns about how AI-driven algorithms reinforce media and government biases, make privacy nearly impossible, and perpetuate inequities in information retrieval and access. Furthermore, the potential misuse of AI in surveillance and in kinetic and information warfare presents a host of ethical and security issues that society is only beginning to grapple with. The library as an institution and the LIS professionals who staff it must carefully consider and mitigate these same threats. As AI becomes further embedded not only into the library but into the fabric of human commerce and activity, these risks grow increasingly evident, and mitigation seems not just less probable but closer and closer to impossible.
The challenges of AI adoption are not a contemporary phenomenon, as many dystopian novels and films created in the past have expressed. Dystopian artwork typically depicts the future ramifications of an unchecked authority, whether that be a despot or a technology. In George Orwell's 1984, the authoritarian regime INGSOC uses both a despotic figure and technology to enact control. Future implications of AI adoption include concerns over black box decision-making, as Zednik writes, "Computing systems programmed using Machine Learning (ML) are increasingly capable of solving complex problems in Artificial Intelligence. Unfortunately, these systems remain characteristically opaque: it is difficult to ‘look inside’ so as to explain their behavior. Opacity is the heart of the Black Box Problem in AI—a problem with practical, legal, and theoretical consequences" (2019).
Many of AI's decisions are unexplainable, or even anathema, to the programmers and engineers who designed it. This is an especially important flaw that libraries must consider: AI has improved search results, yet a stakeholder or patron could be subject to the AI's whims while searching for specific materials, and the LIS professional would have no recourse to explain the error.
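To illustrate the opacity problem in a self-contained way, the sketch below is my own illustration rather than anything from Zednik's paper: an off-the-shelf ensemble model is trained on synthetic, hypothetical "search relevance" features and returns a score for a new search, but there is no per-result rationale a librarian could hand to a patron. Aggregate feature importances are the closest thing to an explanation, and they describe the model as a whole, not any single decision.

```python
# Minimal, illustrative sketch (not from the cited research) of the black box
# issue: the model scores hypothetical patron-search features, but the score
# carries no human-readable rationale a librarian could pass on.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical features for past searches: query length, subject-heading
# overlap, patron click history. Label: whether the record proved relevant.
X = rng.random((200, 3))
y = (X[:, 1] + 0.3 * rng.random(200) > 0.6).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A new search arrives; the model returns a relevance probability...
new_search = np.array([[0.2, 0.4, 0.9]])
print("Predicted relevance:", model.predict_proba(new_search)[0, 1])

# ...but the "explanation" is hundreds of trees of learned thresholds.
# Feature importances describe the model overall, not why this patron
# saw this particular result.
print("Feature importances:", model.feature_importances_)
```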
In current discourse, philosophers, technologists, and lawmakers debate the control problem: "The control problem related to robots and AI usually discussed is that we might lose control over advanced technologies" (Nyholm, 2022). Essentially, is there a method, can a protocol be created, that ensures that AI will act in the best interests of humanity? It must be noted that what is "in the best interests of humanity" is defined and decided upon by the "elites" of the World Economic Forum (WEF), United Nations (UN), World Health Organization (WHO), the big banks, BlackRock, Vanguard, State Street, and the many Non-Governmental Organizations (NGOs) that partner with globalist institutions. Despite that, the present absence of a regulatory framework to govern AI development should also be a signal that the hasty adoption of this technology, like that of past technologies, risks detrimental impacts on individuals and communities before its true impacts can be gauged and determined with some modicum of certainty. This paper argues that the rush to integrate AI into the library must be tempered with careful consideration and measured approaches, if that is still a possibility. The perceived or imagined threats of AI must be taken as seriously as if the worst dystopian fiction were a strict road map to the current or future reality.
The concept of AI can be anachronistically recognized in ancient myths and early philosophical musings about machines capable of mimicking human cognition. That is the most important starting point when considering what artificial intelligence is: not just mimicking the human brain and its thought processes, but supplanting them. However, AI as a formalized field of study and scientific pursuit began in the mid-20th century, when technological advancements and theoretical frameworks enabled the realization of machines that could, in theory, "think": not merely compute or process information, but think similarly to the human mind. AI has received significant investment in research and applications from hedge funds, academia, and the federal government. It has evolved from foundational theories and algorithms into a crucial scientific discipline that is allowed to affect every facet of human life.
Early Theoretical Framework
Though the term "Artificial Intelligence" itself was coined much later, early conceptions of intelligent machines are notable in ancient and classical stories. I highlight intelligent machines, or robots, not only because that is what computers are but because AI as a concept is about mirroring human cognition. The concept of creating an entity that is not an offspring yet possesses human characteristics has survived since antiquity. Homer's The Iliad gives us the concept of the automaton, defined as a machine which resembles and is able to simulate the actions of a human being; a humanoid robot, an android; a machine which performs tasks usually associated with human workers (OED, 2024). Hephaestus, god of craftsmanship and other areas of human activity, was able to create automata. The process is described in Book XVIII, lines 410–420, of The Iliad:
He [Hephaestus] spoke, and from the anvil rose, a huge, panting bulk, halting the while, but beneath him his slender legs moved nimbly . . . but there moved swiftly to support their lord handmaidens wrought of gold in the semblance of living maids. In them is understanding in their hearts, and in them speech and strength, and they know cunning handiwork by gift of the immortal gods. (Book XVIII, 410–420)
D. Kalligeropoulos and S. Vasileiadou, in their paper The Homeric Automata and Their Implementation, describe Hephaestus' creation: "Here they are: two mythical robots, two self-moving manlike machines, having sense, speech and strength. Innovative technological visions: The strength, i.e. the feature that transforms low-power commands into powerful mechanical movements, the speech, i.e. the construction of machines producing sounds to communicate, and the sense, i.e. the particular inner structure that results in skillful, learning machines" (Kalligeropoulos & Vasileiadou, 2008). Hesiod, in Works and Days, mythologized the idea of AI. This is illustrated by Talos, a bronze man also created by Hephaestus. Talos was created to protect the island of Crete (Hesiod, 1988).
Over 2,000 years later, novels also familiarized the world with the concept of AI via human simulacra. Mary Shelley's Frankenstein, whose nameless creature was not quite a robot, famously depicts a being created by a human that can mimic human thought processes and sentiments. Frankenstein's monster justifies his behavior and blames his creator: "There was none among the myriads of men who existed who would pity or assist me; and should I feel kindness towards my enemies? No: from that moment I declared everlasting war against the species, and, more than all, against him who had formed me, and sent me forth to this insupportable misery" (Shelley, 1818). Similarly to many humans, the monster cannot accept accountability or acknowledge culpability, only feelings. The Tin Man of L. Frank Baum's The Wonderful Wizard of Oz could think similarly to humans and perform general functions, but it could not feel as Frankenstein's monster could. Baum wrote of several robots and described the mechanical man Tik-Tok in 1907, for example, as an "Extra-Responsive, Thought-Creating, Perfect-Talking Mechanical Man ... Thinks, Speaks, Acts, and Does Everything but Live" (Buchanan, 2006). Many of these works were adapted into films that helped acclimate the public to the personification of machines or similar entities. These entities mirror human intelligence and even emotion, and their personification began the cultural assimilation of humanoids.
Science fiction furthered the consciousness of AI in cultural discourse. Famed author of The War of the Worlds and other science fiction works, Herbert George (H.G.) Wells was a prominent member of the Fabian Society. The Fabian Society is a think tank that was and is constituted of some of the most influential members of global society, and its thought processes were not just musings but blueprints for current and future societal control. In World Brain, Wells writes, "the creation of a greater mental superstructure to reorient the mind of the world is an entirely practicable proposal" (Wells, 1943). World Brain was a series of essays; it did not explicitly describe a hive mind in which all minds are physically connected to a machine that controls human thought and, therefore, human actions and interactions. Rather, Wells envisioned a World Encyclopedia that would be the fount of all human knowledge and would ensure that all of humanity had access to one information source holding the compendium of universal knowledge.
A theory that is heavily embraced by many institutions in society, and that is connected to and will become increasingly dependent upon the technological advancements of AI, is Transhumanism. The term was coined in 1957 by another member of the Fabian Society, Julian Huxley, who was very influential in the formation of the United Nations and other organizations that arose from the Bretton Woods Conference; he was also a student of H.G. Wells (Pehle, 1946). One of the essential components of transhumanism is man merging with machines via artificial intelligence. Huxley writes, "The human species can, if it wishes, transcend itself - not just sporadically, an individual here in one way, an individual there in another way, but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man, remaining man, but transcending himself, by realizing new possibilities of and for his human nature" (Huxley, 1957). Transhumanism has evolved, and acolytes of the WEF like Yuval Noah Harari have echoed this same sentiment. At the European Innovation Festival in 2019, Harari stated,
“Humanity has always remained constant—with the same bodies, brains, minds—through the Roman Empire, Biblical times, and the Stone Age” (Harari, 2019; Segran, 2019). He continued, “If we told our ancestors in the Stone Age about our lives today, they would think we are already Gods. But the truth is that even though we have developed more sophisticated tools, we are the same animals. We have the same emotions, the same minds. The coming revolution will change that. It will change not just our tools; it will change the human being itself” (Harari, 2019).
AI Used for Societal Control
The elite in many ancient societies used their power and influence to control society. In the 4th century Before Christ (BC), when the world had a population of less than 200 million, Plato and Aristotle recommended strict control of birth rates by the state (Minois, 2011). In the Republic and the Laws, Plato defined an optimum population in terms of the space and resources available and described the modes of social organization and functioning required to achieve it. Aristotle did the same in his Politics: "A great city is not to be confounded with a populous one" (Aristotle, 4th century BC). Aristotle was less concerned with resources or food than with maintaining order. Ancient Greek demographic thought was already setting out the debate in modern terms: eugenics, Malthusianism, and a necessary xenophobia (Minois, 2011).
Since the early 20th century, the Fabian Society, an inordinately influential globalist and socialist think tank founded in London in 1884, has influenced and continues to influence the direction of society. The Fabian Society's goal is the establishment of a democratic socialist state, not only in Great Britain but over the entire globe. The Fabians put their faith in evolutionary socialism rather than in revolution (Encyclopedia Britannica, 2024). Members of the Fabian Society have included Thomas, Julian, and Aldous Huxley, author Herbert George (H.G.) Wells, and playwright George Bernard Shaw. Contemporary members of the society include former British Prime Minister Tony Blair. There are also many associates (rumored members) of the Fabian Society, like influential economist John Maynard Keynes, educator John Dewey, and author George Orwell.
Many, if not all, of the current (and past) global elite are members of, or beholden to, organizations that continue the Fabian Society's desire to create a One World Government via socialism. As mentioned earlier in this paper, Julian Huxley helped create the pre-eminent globalist organization, the UN. The Fabian Society, the Rockefeller Foundation, the Carnegie Endowment, and other multinational think tanks and organizations came together for the Bretton Woods Conference in 1944, which was the turning point in the creation and influence of multinational organizations, like the United Nations Educational, Scientific and Cultural Organization (UNESCO), over the affairs of global society. This elite farm the universities for middle and upper management and future world leaders to work in every industry the elite own or have vast influence over. The universities are a pivotal tool not only in disseminating socialist doctrine and ideals but also in ensuring they are implemented. This was noted by H.G. Wells, who in his 1938 work World Brain wrote,
"We have done nothing to coordinate the work of our universities in the world—or at least we have done very little. What are called the learned societies with correspondents all over the world have been the chief addition to the human knowledge organization since the Renaissance and most of these societies took their shape and scale in the eighteenth and nineteenth centuries. All the new means of communicating ideas and demonstrating realities that modern invention has given us, have been seized upon by other hands and used for other purposes; these universities which should guide the thought of the world, making no protest" (Wells, 1938).
The universities, particularly in the Western world, are the fount and the proliferators of socialist/communist ideology. They are also the source of the people who head and work for the globalist organizations that must have ubiquitous influence over all facets of society. The heart of the university is the library, which is its information hub. The library is where all stakeholders begin when they want to discuss or describe the society or the world. The universities and the libraries have been planned to be used as promoters of a One World Government and one societal narrative. The universities and academia are currently insisting, in my estimation, on inventing, accepting, implementing, and promulgating different critical ideologies rooted in socialist/Marxist doctrine to help achieve this goal. Most of academia is not explicitly aware of its role in the formation of a One World Government. Still, academics invent or bolster the ideology and create the technology, AI currently being the most prominent, to help ensure that socialist ideology permeates the whole of society.
Technology was recognized as a new sphere, and a better opportunity, for control of every aspect of society. Former United States National Security Advisor Zbigniew Brzezinski wrote in his work "Between Two Ages":
The postindustrial society is becoming a “technetronic” society: a society that is shaped culturally, psychologically, socially, and economically by the impact of technology and electronics—particularly in the area of computers and communications. The industrial process is no longer the principal determinant of social change, altering the mores, the social structure, and the values of society. In the industrial society technical knowledge was applied primarily to one specific end: the acceleration and improvement of production techniques. Social consequences were a later byproduct of this paramount concern. In the technetronic society scientific and technical knowledge, in addition to enhancing production capabilities, quickly spills over to affect almost all aspects of life directly (Brzezinski, 1970).
The more advanced the technology, especially a technology with many unknowns that is essentially unpredictable (at least it has not been admitted that there are computer scientists and engineers who understand how AI computes its results), the greater the mechanism for control. Technologies from cars and televisions to cellphones and artificial intelligence have become increasingly ubiquitous and are adopted ever more quickly by the masses. The supply chain for these technologies begins in the universities and ends with the elite and their multinational businesses. The technology must automatically be adopted because its scourge cannot be prevented; it is better to accept the benefits and hope that an executive order or piece of legislation is a true layer of protection.
For more than two decades, computer scientist and futurist Ray Kurzweil has been predicting a "singularity" in which technological progress accelerates so dramatically that human life is irreversibly and uncontrollably left behind (Kurzweil, 2024). He writes in his work The Singularity Is Near: When Humans Transcend Biology, "The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but that transcends our biological roots. There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality. If you wonder what will remain unequivocally human in such a world, it's simply this quality: ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations" (Kurzweil, 2005). Many technical and policy leaders have highlighted possible negative applications that could introduce two threats to free and democratic societies. The singularity may usurp democratic systems that work on the core assumption that citizens can form coherent views based upon a "marketplace of ideas" where good information trumps bad information (Allen and Weyl, 2024).
One threat is that AI-driven deception, based upon certain biases and assumptions, could cause the institutions built upon mostly true or good information to collapse. Under democracies, power and ultimate accountability for the fundamental infrastructure of government, including systems of identification, authentication, defense, and basic physical infrastructure, should be in the hands of the people. If the people's role in the fundamental building blocks of societal control collapses because of an uncontrolled and unaccountable AI, the result would be complete and utter chaos (Allen and Weyl, 2024).
Another, and the most predictable, threat is that AI development may instead concentrate power in the singular hands of financial interests, technical experts, authoritarian regimes, or even an internal, self-replicating machine logic divorced from human oversight. The paradigm of technology development organized around the concept of a "singularity", a singular human intelligence that could be matched and even outmatched by artificial intelligence, inevitably leads to "singularity" understood in the second sense: concentrated economic and political power (Allen and Weyl, 2024).
The timidity people feel about AI, the reservations or fear, is not that AI will replace humanity in every sector of human life. It is that its current scourge is not a part of some natural technological growth or innovation. Miles Brundage, former head of policy research and AGI readiness at OpenAI, stated on a tech podcast that over the next few years the AI industry will develop "systems that can basically do anything a person can do remotely on a computer" (Varanasi, 2024). That includes operating the mouse and keyboard or even looking like a 'human in a video chat' (Varanasi, 2024). These rapidly building capabilities will be used by the elite to assert control over humanity. Fear is being used as a control mechanism for a technology that the elite are building. In October 2023, head of the WEF Klaus Schwab, a figure who represents many other globalist organizations, contributed to an article that urged prudence when researching and adopting AI technologies. The article states, "Now, the Forum (WEF) is calling for urgent public-private cooperation to address the challenges that have accompanied the emergence of generative AI and to build consensus on the next steps for developing and deploying the technology. To facilitate progress, the Forum will hold a global summit on generative AI... Stakeholders will discuss the technology's impact on business, society, and the planet, and work together to devise ways to mitigate negative externalities and deliver safer, more sustainable, and more equitable outcomes" (Schwab and Li, 2023).
The article continues, "Generative AI will change the world, whether we like it or not. At this pivotal moment in the technology's development, a cooperative approach is essential to enable us to do everything in our power to ensure that the process is aligned with our shared interests and values" (Schwab and Li, 2023). It is admitted that AI is something humanity is going to have to grapple with, and it is not our choice. The Responsible AI Leadership Summit, like many other global summits, is attended by the most influential people across all industries and academia the world over, many of them responsible for funding, creating, and engineering AI. What is being admitted in the article is that the elite will solve a potential problem that could be created by the technology: again, a technology that they have cultivated and created and always knew had the immense potential to upend all of society or, at the very least, be a great disrupter. The elite, through the incestuous relationships they have with the big banks and influential governments, get to create the policies that not only attempt to govern the technology but also increase their power and influence. The policies implemented never seem to hurt the most affluent in society; it is unknown why the "common good" enacted by the elite has caused incredible harm and suffering.
It must be acknowledged that AI will be used to institute mandates similar to those of the COVID-19 pandemic. The World Inequality Report, produced by a network of social scientists, estimated that billionaires in 2021 collectively owned 3.5% of global household wealth, up from slightly above 2% at the start of the pandemic (Chancel et al., 2022). "The COVID crisis has exacerbated inequalities between the very wealthy and the rest of the population," lead author Lucas Chancel said, noting that rich economies used massive fiscal support to mitigate the sharp rises in poverty seen elsewhere. "Since wealth is a major source of future economic gains, and increasingly, of power and influence, this presages further increases in inequality," they wrote of what they called an "extreme concentration of economic power in the hands of a very small minority of the super-rich" (Chancel et al., 2022; John, 2021).
The World Inequality Report can be coupled with an article by Yanovskiy and Socol, published through the National Institutes of Health (NIH), entitled Are Lockdowns Effective in Managing Pandemics?, which concluded, "While our understanding of viral transmission mechanisms leads to the assumption that lockdowns may be an effective pandemic management tool, this assumption cannot be supported by the evidence-based analysis of the present COVID-19 pandemic, as well as of the 1918–1920 H1N1 influenza type-A pandemic (the Spanish Flu) and numerous less-severe pandemics in the past. The price tag of lockdowns in terms of public health is high: we estimate that, even if somewhat effective in preventing death caused by infection, lockdowns may claim 20 times more life than they save. It is suggested therefore that a thorough cost-benefit analysis should be performed before imposing any lockdown in the future" (Yanovskiy and Socol, 2022).
Faculty at The Johns Hopkins University wrote a non-peer-reviewed article that echoed the research of Yanovskiy and Socol, but a few months earlier. For greater context, "The Johns Hopkins Center for Health Security in partnership with the World Economic Forum and the Bill and Melinda Gates Foundation hosted Event 201, a high-level pandemic exercise on October 18, 2019, in New York, New York. The exercise illustrated areas where public/private partnerships will be necessary during the response to a severe pandemic in order to diminish large-scale economic and societal consequences" (Johns Hopkins University, 2019). The non-peer-reviewed article (as peer-reviewed, relatively speaking, as the "safe and effective" studies of certain medical treatments) argued that "Overall, we conclude that lockdowns are not an effective way of reducing mortality rates during a pandemic, at least not during the first wave of the COVID-19 pandemic. Our results are in line with the World Health Organization Writing Group (2006), who state, 'Reports from the 1918 influenza pandemic indicate that social-distancing measures did not stop or appear to dramatically reduce transmission'" (Herby et al., 2022). It stands to reason that, although correlation does not necessarily equal causation, there is more evidence of a natural connectedness between the globalists and societal events than there is connecting contemporary humans to any of our supposed evolutionary counterparts.
Plutocrats are using AI as a Trojan horse to institute a global parent state under the guise of care for humanity. That is also why the adopted ideology is Marxism: a lot of people will gravitate, and have gravitated, to the utopianism promised by Marxist collectivism. The people are scared into recognizing only the problem and ignoring the problem's progenitors. The global elite create the conditions that increase the likelihood that humans will ask for increased safety measures, measures that only hinder personal rights or, at the very least, indemnify the elite from responsibility for their role in fostering dangerous technology.
You can download the full paper below.