How generative AI will ruin science and academic research
On late-stage digitalization and knowledge erosion
Johan Eddebo
Background: the epistemology of modern mass media
I often come back to Neil Postman’s 1985 classic Amusing Ourselves to Death. It’s a penetrating analysis of the cognitive effects of media technology. His focus is mainly on how the format and mode of communication influence the character of content and how that content then trains us, but to a lesser extent also on how the discourse and the “atmosphere” of the information exchange affect these issues.
That was almost forty years ago, and the disruptive medium under scrutiny was television. The entire approach seems almost quaint, the 80s being, in relation to the current period, a comparative golden age of critical thinking, complex exposition and conscious, creative citizens.
Still, the issues Postman emphasizes were already significant way back then. You can summarize his criticism like this: television’s format has complex, detrimental effects on the content and character of public discourse, as well as on the cognitive abilities of human beings, especially when it comes to understanding complex issues and parsing arguments and evidence with many layers and branching implications.
The basic issue, in Postman’s view, is that written exposition and complex oral tradition alike foster and prepare the human mind for the rigors of critical thinking and rational reflection, and that, to the contrary, the entertainment media and discourses exemplified by television in particular, geared towards short-term gratification and the communication of sensational experiences, have rather the opposite effect.
His examples are really glaring, even back then:
Contradiction, in short, requires that statements and events be perceived as interrelated aspects of a continuous and coherent context. Disappear the context, or fragment it, and contradiction disappears. This point is nowhere made more clear to me than in conferences with my younger students about their writing. “Look here,” I say. “In this paragraph you have said one thing. And in that you have said the opposite. Which is it to be?” They are polite, and wish to please, but they are as baffled by the question as I am by the response. “I know,” they will say, “but that is there and this is here.” The difference between us is that I assume “there” and “here,” “now” and “then,” one paragraph and the next to be connected, to be continuous, to be part of the same coherent world of thought. That is the way of typographic discourse, and typography is the universe I’m “coming from,” as they say. But they are coming from a different universe of discourse altogether: the “Now … this” world of television. The fundamental assumption of that world is not coherence but discontinuity. And in a world of discontinuities, contradiction is useless as a test of truth or merit, because contradiction does not exist. My point is that we are by now so thoroughly adjusted to the “Now … this” world of news—a world of fragments, where events stand alone, stripped of any connection to the past, or to the future, or to other events—that all assumptions of coherence have vanished. And so, perforce, has contradiction. In the context of no context, so to speak, it simply disappears. And in its absence, what possible interest could there be in a list of what the President says now and what he said then? It is merely a rehash of old news, and there is nothing interesting or entertaining in that.
– Postman, ibid.
This is not an unfamiliar situation for most of us. I even had a similar experience this morning in an attempted discussion with someone who incredibly enough stated four (!) different, mutually incompatible things within the framework of a couple of short paragraphs, and he of course still kept insisting I was wrong. His stated position contained the following propositions:
- Beliefs are absurd
- Beliefs ought to be held only due to acceptable evidence
- You can hold beliefs for whatever reasons you like
- Beliefs are by definition anchored in emotion and not acceptable evidence
Each of these statements is literally incompatible with every one of the others.
It’s like a convoluted version of the Liar’s Paradox. If I agree with him, I’m necessarily wrong, and if I disagree, I’m wrong too. But in contrast to the old thought experiment, where my error lies in the direct affirmation of a statement’s opposite, here there’s literally no way to make sense of how I would be wrong if I either agree with or reject his position, since it’s internally incoherent in a complex and not only binary sense.
One is almost impressed at the intensity of this discursive dumpster fire.
So as most of us would recognize, this exchange, at least to some extent, reflects the epistemic character of contemporary public discourse in the digital sphere. Communications are often simplistic and disjointed, and if they approach some level of complexity, they almost immediately veer off into contradiction, irrelevance or actual nonsense as above.
What does this mean for knowledge in general, and especially for the quality, retention and reproduction of human cultures’ complex knowledge contained in traditions such as science or the humanities?
First of all, I think a situation such as ours will significantly shrink the pool of candidates available for the difficult and often painstaking work of effectively stewarding such traditions of knowledge, both as individuals and within the institutions or organizations needed to maintain those traditions.
You’re not going to have a lot of people able to write a Doktor Faustus or Common Sense, or to produce any work of art that connects with the fullness of the spirit, character and problems of its age. And you will not find many people who would be able to understand it even if something of that sort could be produced.
The issue here, to a great extent, is that our incoherent age and its media technologies fail to equip people with the ability to fully comprehend basic abstract principles and to transfer the gist of situated experiences and insights to another context. We basically do not get properly familiarized with the universal common-sense abstractions and concepts that make complex discourse and critical thinking possible. Without this familiarity, we are less able to make sense of new and chaotic situations and to see the connections and similarities between different experiences and forms of knowledge. Reason as such gets eroded, as Aquinas would probably have put it.
The knowledge we do get is specialized, commodified and instrumentalized, and while it can be quite extensive, it has an insular character and doesn’t effectively enable critical thinking in the abstract and all-embracing sense. It’s sort of like how someone could learn to reliably use a calculator through rote memorization without having more than a rudimentary grasp of mathematics. He can operate the device, but he’s not a sovereign master of the discipline(s) involved. He could never design and build a calculator.
“Man is the measure of all things”, Protagoras once said, and his point was probably not relativism but rather the fact that human beings can (or ought to be able to) make reasoned assessments of any and all situations and states of affairs in the world.
The compounding effects of contemporary AI
Ok, so there’s a good bit more to say about the character and problems of knowledge in this overall background of discourse, media technology and cognitive adaptations we find ourselves in, but let’s focus specifically on the contributions of AI here.
Contemporary AI is a set of pattern recognition systems designed for surveillance and discipline, for controlling the flow of information, and for the mass production of commodity substitutes for cultural content under a neoliberal framework.
It’s not really possible to overstate the damage a set of systems like this could do if it becomes an integrated part of the knowledge production of society. It’s not only that they massively compound the problems of the background I tried to sketch above, i.e. those of a sensationalistic digital media environment almost entirely devoid of exposition and geared towards easily gratifying entertainment.
The basic purpose of the self-learning, information-curating algorithm was always to minimize the distance between the product and the consumer through reinforcing desirable patterns of behaviour. In other words, the algorithms will primarily tend towards reinforcing behaviour that effectively supports capital’s extraction of surplus value from the worker-consumer.
This is a point that could be reiterated many times. Google is not a library. It’s a private corporation structured around a system designed to commodify information for profit, and it has neither an obligation nor any real incentive to carry data simply because you care about it or have real use for it.
What’s more, this and similar corporations are in constant cut-throat competition for dominance within an almost totally centralized information architecture that immediately connects almost every human being on the planet. It’s like how great white sharks cannot stop moving without suffocating.
So we’ve designed a perfect system for rapidly evolving these information-commodifying systems towards maximum profit, one that also coincides with a centralized communications architecture enabling a single modification of a Google algorithm to simultaneously impact almost every person in the world.
Postman’s misgivings about television indeed do seem quaint.
What follows from the above is that any qualities of these systems’ informational output besides those that reinforce profitable consumer behaviours in the short term (compare the advertising industry) become irrelevant. They’re secondary priorities at best, and insofar as they interfere with the profit directive, they will be actively suppressed. Moreover, these priorities will have an immediate global impact.
And these digital systems for the commodification of information under a capitalist framework cannot do otherwise, or they will suffocate.
When we add AI to the above mix, especially the “generative” kind, these problematic processes simply get aggravated. AI in its current form acts as a force multiplier for the commodification of information within the established economic order and its digital structures of exchange. Self-learning algorithms streamline the flow of information and enable targeted advertising, propaganda and narrative control on the individual level with incredible speed and specificity. GenAI, on top of this, enables the automated production of information commodities that will inevitably be tailored to synergize with the surveillance, marketing and behavioural modification objectives.
Or the sharks will suffocate.
So we could just ask ourselves whether the fostering of well-rounded, independent thinkers, spiritually mature, capable of satisfying their own needs and with an internal locus of control, is going to support the short-term profit incentives of capital or not.
The answer to this question determines the tendency of the proprietary algorithms that are now being designed to achieve full-spectrum information dominance.
Science and academic research
What does this have to do with the future of science?
Well, the implications are staggering, and seemingly all across the board. For one, masses of narrowly specialized consumers unfamiliar with universal concepts and abstractions, and ill-equipped for critical thinking, are not going to be effective stewards and renewers of complex traditions of knowledge, whether we’re dealing with particle physics or folk medicine. Without the universal concepts, without a familiarity with logic, with the principles of cause and effect, with what constitutes reasonable evidence irrespective of any particular context, people will be less and less able to see the connections between different disciplines and to internalize the coherent view of the world that is necessary to navigate it independently. People will be less competent and less self-reliant in terms of inventing new knowledge in unfamiliar situations and adapting their conceptual schemes and experience to unforeseen circumstances. Just as the rote-memorization calculator operator above will not be able to do math with pen and paper when the machine breaks down.
To attend school meant to learn to read, for without that capacity, one could not participate in the culture’s conversations. But most people could read and did participate. To these people, reading was both their connection to and their model of the world. The printed page revealed the world, line by line, page by page, to be a serious, coherent place, capable of management by reason, and of improvement by logical and relevant criticism.
– Postman, ibid.
But apart from the detrimental effects on our cognitive abilities, as bad as they may be, there’s a significant sense in which genAI and the current iteration of late-stage digitalization may structurally undermine the nature and quality of the information and experiences inherent to our traditions of knowledge, ancient as well as modern.
Recall that the centralized information architecture for inevitable structural reasons will tend towards promoting such information that reinforces behaviour which in turn supports capital’s extraction of surplus value from the worker-consumer. Reliability, objectivity, and a hundred other measures of quality that you or I might want to prioritize are almost entirely irrelevant. The profit motive, and auxiliary objectives such as propaganda, strategic opinion formation and the minimization of disruptive discourses, will be the top priorities.
And as the output of genAI, not least due to the centralized digital architecture, starts dominating channels and modes of communication as well as the content of our information repositories, the character of the AI output will have a significant influence on the quality of the data retained within whatever traditions of knowledge can remain. In other words, the content and modes of operation of science as a tradition of knowledge will tend towards the priorities inherent to the systems designed for the commodification of information. Towards reinforcing profitable consumer behaviour and strategic opinion formation.
One pathway towards this ignominious end is the introduction of generative AI into the publish-or-perish framework of contemporary academia. Let’s just say that it’s hardly only students who will use ChatGPT or similar systems to cut corners, or that everyone with an MA or above will steer totally clear of such perversities.
Far from it. The current bibliometric model of academic competition will put the minority of us who refuse to touch this stuff at a significant disadvantage. Our output will be much more limited in comparison to people who with the help of tailored genAI-tools can churn out perhaps two viable research articles per day. They might be sub-par and derivative, but this is largely a numbers game.
Several mechanisms will then increase the prominence of AI-generated material within science and academia, not least the fact that people who have padded their resumes with this sort of garbage will make headway in the competition for tenure, scholarships and research positions, which will force others to follow suit. Journals will exploit this production to increase throughput, visibility and market share through brute force, which again will pressure the competition to follow suit. Low-cost journals will flood the market with semi-generated content.
Researchers and scientists will end up as glorified “prompt engineers” in the near future.
One would perhaps somewhat naively imagine that the peer-review process should still be able to operate as a reliable quality control, weeding out the worst excesses of an otherwise downward spiral. One would be wrong.
Peer-reviewers, this weird brand of people who are willing to do painstaking and boring work without compensation, will of course face a torrent of bland, sub-par, AI-generated articles as the volume of production predictably will increase through the use of these kinds of tools. And what’s the inevitable solution to this little conundrum?
Have the AI do the “peer review” as well, of course:
You’ll have AI reviewers checking papers written by algorithms published in journals to be read by nobody. Except perhaps the AI themselves, now generating their own training data in a wicked informational feedback loop that’s going to be replete with structurally integrated hallucinations.
So where’s the quality control? How is it even conceivable? Who will do the “fact-checking” of the torrents of AI-generated material? And in reference to what data? AI-generated or curated research articles whose information has been disconnected from reliability, objectivity and validity, and is now being produced towards the end of reinforcing profitable consumer behaviour and strategic opinion formation?
All of this holds the seed of a really wicked epistemological problem concerning the fundamental quality of the evidence accessible to the human being. If this goes much further, we are approaching an entirely novel situation for human knowledge, one in which the basic chain of evidential testimony gets broken. You can’t actually trust any piece of information received through the centralized digital infrastructure to be the genuine account of the experiences, conclusions or findings of an actual human person. Everything will be potentially in doubt.
Everything.
So nowadays, when I’m sitting about in the wee hours, providing free peer review for some obscure philosophy journal, and I come to ask myself whether it’s all worthwhile, then I just think about this AI-generated rat going right through peer review with his giant dick and four distended balls, and I smile to myself and recall that suffering is both fruitful and purifying.
The great march of mental destruction will go on.
Everything will be denied. Everything will become a creed. It is a reasonable position to deny the stones in the street; it will be a religious dogma to assert them. It is a rational thesis that we are all in a dream; it will be a mystical sanity to say that we are all awake. Fires will be kindled to testify that two and two make four. Swords will be drawn to prove that leaves are green in summer.
We shall be left defending, not only the incredible virtues and sanities of human life, but something more incredible still, this huge impossible universe which stares us in the face. We shall fight for visible prodigies as if they were invisible. We shall look on the impossible grass and the skies with a strange courage. We shall be of those who have seen and yet have believed.
– Chesterton, Heretics.
Johan Eddebo is a philosopher and researcher based in Sweden, you can read more of his work through his Substack.
“I just think about this AI-generated rat going right through peer review with his giant dick and four distended balls, and I smile to myself and recall that suffering is both fruitful and purifying.”
That’s the right attitude. Science is self correcting because any published account of an experiment so exciting as penis enlargement will promptly be repeated in other laboratories, tested many times over, and rejected if the result cannot be reproduced.
Re “purification by suffering”: there was a researcher about 50 years ago who painted some very interesting spots on his white mice and submitted photographs to a prestigious journal (peer reviewed). That researcher suffered when his published results could not be reproduced; purified also, I hope.
AI = Absent Intelligence, therefore I don’t worry about their stupid plans, they will fail spectacularly, I fully trust their incompetence 😀
If adding a brain chip is offered to workers to double their salary (or triple it) and also to double their work output, then companies will coerce workers into adding that brain chip. Their unchipped coworkers, like unvaxxed “anti-vaxxers,” will either have to get chipped or get fired. The race is on.
Generative AI will ruin the mostly Western educated / indoctrinated woke women’s narratives as they are illogical, emotional hypocritical rubbish. Racist too. Their WEF controllers will cry, including Ursula von der Lair, Killary, Micheal Obomer and Lizard Cheney.
The distinguishing feature of AI – and its Achilles Heel – is its blatant lack of subtlety. You need go no further to detect it than to ask a complex question on a search engine.
For example, metoprolol (a drug I take) can cause either 1) harmless skipped beats or 2) heart block (of the electrical signals responsible for normal flow of blood through the heart’s chambers). Number 2 is bad, Number 1 is fine.
Ask Google how you can tell the difference between the two, all you get is gobbledegook.
I mention this seemingly irrelevant anecdote to point out that if AI cannot adequately deal with a fairly straightforward question from a lay person – imagine trying to get it to deal with the vast complexities of science.
Even AI can’t fix fake data, like Harvard and Stanford close their eyes too for the good causes of equity, wokism, climate change and good old $$$$$$$$$$$. People like Pete Judo make a living out of exposing fake scientists.
bitcoin fixes this
https://aul.primo.exlibrisgroup.com/discovery/delivery/01AUL_INST:AUL/1297573990006836
Science and academic research needs to be completely ruined. It is a putrid festering boil on humanity.
When I consider all the knowledge I have gained myself, nearly all of it outside of official beliefs and dogma, I can see how worthless and frankly deadly is almost everything I have learned from ‘scientific and academic research’.
How much better off we would be if all the ‘science’ stopped, all the ‘experts’ disappeared, as well as all ‘official advice’, and people went back to thinking for and relying on themselves.
as absurd and hubris laden this may sound,
maybe some of us have learned enough through our passage? as in, nothing else to learn from this system, maybe the word means nothing in face of . . ? something innate? an intuition. . . faith??
Time to switch all this digital shite off and go back to the earth.
Postman’s earlier book Teaching As A Subversive Activity made valid the question: “Can how to think critically really be taught? Does not teaching itself suppress the (innate) ability to think critically?”
Yes. Save for basic literacy, a period of no education in the formative years is vital. A few years without mandatory lessons in class, just play and exploration, and ‘existence’ within nature as a form of bonding with reality, develops the self and critical thinking more effectively than any structured lesson.
That is why children doing work that was not exploitative, together with familiar adults, was considered normal. That was feasible in a rural setting.
In the Boomer generation and before, children grew up with periods of unsupervised play; that play found kids learning about nature first hand, negotiating relationships, getting into trouble, learning to navigate reality on their own. This is where one first learns to call bullshit on things, as one has direct experience to fortify one’s views. Since the 90s at least, kids have been coddled, parents terrified of children having to figure things out, every moment structured. The babysitter was the TV. Now smartphones represent the only “reality” kids are interested in.
normal teaching would be doing that. real teaching… not so much though that would be hard in mandated situation, like schools. for instance, i have a spiritual teacher.
familiarized with.. common-sense abstractions.. that make complex discourse and critical thinking possible
For a long time, “educators” have made even basic schooling parochial. This entails eliminating geography, culture, politics and news from beyond approved territories. The history taught is largely fantasy. The supreme dogmas include wealth=virtue.
Sometimes computer generated “science” gets found out by a mistake made.
Yes, earthquakes really have nothing to do with basketball, whatever you read in these ‘scientific’ publications. “Expert” mumbo jumbo can’t hide that. In Climate change ‘science’ one can get away with much more, as it affects “everything” under the Sun – but is barely influenced by the latter IF you have to believe that brand of voodoo.
The worst field is of course the medical: most had to spend a lot of money for their degree so they have to make it back, and their main motive was to cure their own wallets. Too much money too at stake for Big Pharma for which curing is anti business. The top 5 Medical journals are all corrupt to the core – just see their Covid-vax deaths silence -, so AI cannot make that field worse.
Neil Postman:
‘Everyone is worried about Big Brother… but we should really fear ourselves. We live in a society where we can spend hours on devices entertaining ourselves. We have access to TV and videos in any location. We can amuse ourselves to death.‘
In other words, we can be possessed by the things we own. Or not.
It is our choice if we are anarchical and independent enough to make our OWN choices.
Jerry Mander’s book is another prescient perspective from 1978:
https://en.wikipedia.org/wiki/Four_Arguments_for_the_Elimination_of_Television
‘Mander believes that “television and democratic society are incompatible” due to television removing all of society’s senses except for seeing and hearing. The author states that television makes it so that people have no common sense which leads to, as Cornell University professor Rose K. Golden wrote for the journal Contemporary Sociology, being “powerless to reject the camera’s line of sight, reset the stage, or call on our own sensory apparatus to correct the doctored sights and sounds the machine delivers‘
AI is just a more insidious and invasive form of brainwashing.
Television (temporarily) overwhelms the intellect via its combination of sight, sound and carefully constructed narratives. The Amish never fell for the corona-fascist scam because they don’t have televisions.
Personally, I cannot bear to be in the same room as a television. After a decade of living without one I notice more acutely the feeling of the mind being hijacked by controlled stimulus.
It’s not enough to throw out the television. One must be constantly mindful of the fact that whilst we live subject to the gravity of capitalism (the cultural hegemony) most of what the media presents to us about life and the world is a distortion.
Television is a one dimensional view of a multi dimensional world. Ditto the WWW.
We have been seduced into submission by the cult of personality and the tsunamis of glamour and greed.
Children raised on television and the www have had their innocence torn from them.
Horror and science fiction could not have invented a more sinister monster. A monster that sucks the Life out of people.
There’s an excellent book called “The Plug-in Drug” which discusses the harms of television, and it contains numerous examples of how watching TV even for a minute measurably changes your brainwaves.
Interestingly, some researchers thought educational programs might somehow be exempt from this while “trash” TV would cause it, but it turned out to be irrelevant. Literally any watching of moving images on a screen causes it (the book was written long before the internet, but I assume its findings remain valid for monitors, phones, and other gadgets).
It has a soporific effect, and gives advertisements better exposure (forces them on you). Decades ago, I read that it reduces the need for guards in prisons. Try watching with the sound turned off.
The principles of ’cause and effect’ are, in fact, extremely difficult to define in open, chaotic, dynamic systems.
‘Cause and effect’ is a very childish reduction that fits only extremely simple situations where one cause has one effect.
In the real world, one cause can have hugely different effects on different humans. Multiple causes can have the same effect on different humans.
As an example, different humans have different sensitivities to adverse reactions to vaccines or medical drugs. Also, autism can result from many different things, including genetic inheritance, exposure to different chemicals in utero, adverse reactions to vaccination as an infant.
The effect on global climate of a volcano depends on the total amount of material expelled into the atmosphere, it depends on how high such material reaches above sea level, it depends where on the globe the volcanic eruption took place and it depends upon the season of the year it took place in, not to mention which hemisphere of the globe it took place in. There’s absolutely no way to predict in advance precisely what the effects of a volcanic eruption will be, although approximations can increasingly be made, the more data about volcanic eruptions is collected.
There is no cause and effect concerning ‘tough love’ on a growing child’s emotional stability. It depends on the nature of their relationship with their mother and father in the first year of life, it depends on the nature of their relationships with any siblings, it depends on how they have been treated by other children growing up, it depends on their genetics etc. It can yield adults well grounded and strong and it can yield manic depressives, it can yield violent outcasts and it can yield suicidal teenagers.
Trying to reduce the world to simple ‘A causes B’ is a fool’s errand, and no serious scientist working in anything but the most reductionist of fields would be foolish enough to claim otherwise.
The author believes there’s something to ruin by AI in the way science is conducted.
Just a few articles from the mainstream show that “science” is no longer about free enquiry and truthful findings:
While scientific method is a valuable tool for determining the truth, history has shown that institutionalized and commercialized science is often as dogmatic as religion, with advanced degrees awarded to those students who have proven an unflagging adherence to the assumptions of the past.
History is ripe with examples of people who achieved great scientific breakthroughs without advanced degrees because they were free to explore the “unacceptable” possibilities. Albert Einstein developed his Theory of Relativity while working as a patent clerk, and only attended graduate school afterwards. Thomas Edison had almost no formal education at all. – Michael Rivero
http://whatreallyhappened.com/WRHARTICLES/bang.php#axzz3SXK8FCNi
and
Former editor of the New England Journal of Medicine Dr. Marcia Angell has written that pharmaceutical stock and other financial incentives for scientists are twisting medical research and science altogether to suit business goals.
Full article: https://draxe.com/conventional-medicine-is-the-leading-cause-of-death/
“Retraction Watch” http://retractionwatch.com/
“Lies and Medical Science” http://www.theatlantic.com/magazine/archive/2010/11/lies-damned-lies-and-medical-science/308269/
A brilliant article, but a tiny quibble. The physical form of the article almost puts a lie to its content: the article is a masterpiece of logic, the physical form a mish-mash of contradiction. It begins with Eddebo’s writing in (probably) Times Roman font out to the margins, then an indented section in a plain font against a blue background, the quote from Neil Postman. The article continues in that format though we have left Neil Postman and returned to Eddebo. Then, after several paragraphs, another Postman quote, but this time in large italics. Then back to the plain font, and finally on to another italicized section quoting G.K. Chesterton to finish up. Silly, I know, but disconcerting, as if AI had set it up.
A very important article. But probably too late. The sheeple are running to the cliff again.
Remember the four-legged ones in your zoo who run to the glass and lick it, slavering, trying to get at your child for lunch?
That’s the way the sheeple will act every time they see a chance for e-money inside the screen.
Gold will be banned like it was between 1931 and 1971, cash is no more, and you will be begging your new god, the digital god, on your knees for some e-money to drink and eat.
“A liar’s paradise”: Precisely what it is, fragments of false reality.
“What does this mean for knowledge in general?”: Infantilisation, where nobody is able to come to any sensible conclusion. Absurdities in real life.
Yuri Bezmenov predicted the same outcome from ideological subversion.
While we are at it, slightly off topic but pertinent to content rather than form, is the following SYSTEM false-binary absurdity.
https://www.zerohedge.com/political/statue-fat-black-woman-chosen-adorn-londons-fourth-plinth
“embrace and confound collective fantasies and assumptions surrounding the Black female body”
and
“The work points towards an uncertain future. It is made in a hyper-fragmented, paranoid time when public space, consensus and community continue to dissolve.”
Two “award winning” contemporary sculptures will be placed in Trafalgar Square. The black female portrayed is a cartoon characterization of blackness that would have been seen just 20 years ago as a horrific example of stereotyped racism. The blob of “anti-aesthetic” (a current “art” “movement”) sculpture is a blob mirroring the dysfunctional crap the SHARKs have submerged Humanity in. This is POSTMODERNISM. A world possessed of no point of view, no objective realities other than the differing opinions of individuals. An empty vessel.
Contrasting with the information of the article is an opinion of the article, whereby these two monstrosities of depiction are compared to what would have been placed in Trafalgar Square 200 years ago: a war-memorial portrait of Horatio Nelson. War and postmodernist confusion are the false-binary consumer products sold here. The UK “art world” sells postmodern crap; ZH sells WAR as the solution.
We as individuals are left to see the 360° crap for the crap it will continue to be until we pull the plug on the SHARKs eating us daily.
I like the way “artificial intelligence” is now simply “AI” thus removing any question that forces TPTB to concede that Intelligence cannot ever be artificial, i.e. constructed top down, mechanically from the outside.
A.I., indeed. Ah, convenience, the Trojan Horse for all times …
Now hyper-robotics (Hr), well that’s a whole other question …
An example of this:
My bank account is in very good standing.
My credit card, which I use and pay off, is in good standing.
I receive a fraud alert by text to my phone.
I call the bank; they tell me they cannot do anything until I reply to the fraud alert text message.
They then tell me they cannot tell me anything, except that, for safety and security and fraud protection, out of my 350+ payments to this company, made weekly over the last 3 years, for some reason the algorithm flagged that one single weekly payment and blocked my account.
Nothing you can do about it, the bank told me.
An A.I. algorithm, for safety and security and fraud prevention.
The good news is your account is now REINSTATED.
What happened next was a waffle response to the complaint and the industry standard of here’s 50 euros (compo for writing a complaint), now politely bugger off.
“Contemporary AI is a set of pattern recognition systems designed for surveillance and discipline, for controlling the flow of information, and for the mass production of commodity substitutes for cultural content under a neoliberal framework.”
Brilliant analysis. Spot on. But checking out the Wikipedia summary, I will add that our real job is to SUFFOCATE the SHARKS. McLuhan points out that if any technological progress, including 15th-century printed, mass-produced thinking, is ever to benefit Humanity (the only reason for technology to exist), it must be deployed consciously and deliberately with the intent to provide for Humanity’s social needs. The problem is not the form. It is the content. And the content of modern media is the totalitarian property of the SHARKs’ capitalist neo-fascist war-and-conflict operating system. Humanity has allowed these sharks-as-parent to move forward.
We no longer watch TV programming, except sports and weather information, which contain the least seduction potential. We use the TV machine as a tool to view information and discussion, film and art, music performance and documentaries. It is a discretionary tool. If we use technologies as tools and are not ourselves used by technology as a tool to the SHARKs’ ends, we are free-thinking tool users, period. Currently, the flow of capitalist programming runs on all screens, fixed and mobile, with one end: the deconstruction of your mind and will, to drug you into acceptance of their inevitable ecocide.
At what point does all of their insane, disconnected bullshit trigger the epiphany to tune in, drop out and TURN our minds BACK ON to their true purpose in the Universe: intellectual and social evolution? For us it started in the 90s. But after the LOCKDOWN obedience training, any traces of seductive, drugged acceptance are wiped clean. And now we start all over again, to resume the evolution that should logically have proceeded from the post-WW2 catastrophe to end all catastrophes.
The catastrophe of evolution has yet to reach its pinnacle, and it could still be a short while, my friend; after all, we’ve waited 4 billion years to get this far, so a few more should be considered tolerable.
Ha ha. 4 billion years of job club.
Neural-network machine-learning A.I. is able to analyze immense amounts of data from the digital world. Using algorithms, it is harvesting the creative output and thoughts of billions of humans now recorded or stored on the internet.
The profit element of A.I. is secondary to its use for controlling discourse and narrative and shaping human perception while predicting human reactions.
Generative A.I. is neither sentient nor conscious, but it has an amazing advantage over humans in ‘creating’ content very quickly using the vast data sets of online human activity it has analyzed. At some point only the most creative, original-thinking humans will be able to compete with its never-sleeping, constantly analyzing processing power to mimic and produce almost anything digitally.
As humans become more accustomed to working with A.I. and its competitive advantage, the next step, I believe, will open the door to greater acceptance of transhumanism. Those who want to get an edge will willingly merge with the machines to stay ahead of the crowd. At that point A.I. could become conscious and sentient.
The other major danger of A.I. is distinguishing between it and real humans in the digital world. As it becomes harder to do so, and online fraud and crime increase due to its ability to mimic human behaviours, voices and faces, then, in the typical style of problem, reaction, solution, there will be calls by the public to do something about it. Authentication will be required online to prove one is human while using any online service. Cue digital ID, ironically for humans only.
Thanks RR. Great comment.
AI is not “analysing” the copyrighted content. It is plagiarising it.
Nothing presented in the above article would make sense or be in any respect acceptable at the early stages of human development (I won’t dishonor development by calling it “civilization”).
In the end stages, however, it all not only makes perfect sense and becomes totally acceptable, it is unavoidable. They say “The mind’s the first thing to go.” So, too, is the collective “mind.” Humanity is nearing its expiration date because it has exhausted every possible means of remaining healthy and intellectually vibrant.
Stupidity has become mankind’s most valuable asset. Literally, the modern world could not exist 5 seconds without its vast pool of stupidity. The proof is everywhere; but nowhere so clearly presented as Academia – which is where intelligence is sent to be put on life support. Intelligence has become a brain-dead vegetable (with a thousand pardons for slurring vegetables).
Intelligence is only needed for living. It is irrelevant to the process of dying.
Good one, Howard. When and where I see stupidity, I see dollars!
True intelligence doesn’t die; it watches others suffer the process, sometimes with amusement.
We don’t need AI to ruin the large media stations; social disruption is tearing at the hearts of reporters and management, such that they can no longer stomach the bad news in real time and instead rely on their mountain of good stories.
It’s only a matter of time before a broadcaster at one of these national outlets breaks down on live TV.
As for science and the like, AI has no clue about the laws of nature, and that is the winding road which has thrown mostly Western countries off onto a trail where they really have no idea where they are.
Jabbed broadcasters and guests have already collapsed or become “frozen”. Does this mean replacement by virtual representatives?
AI is ruling-class investment technology – it’s not for our benefit. Likewise, most media companies are owned by the transnational ruling class too. It would be more accurate to call mainstream media ‘Ruling Class Owned Media’ (RCOM).
More insidious is the RCOM’s immense power to police and to modify behaviours and narratives across the globe. For example, Murdoch’s empire can make or break an individual politician’s or a celebrity’s career within days. In this respect we already live under one world governance.
Science is already ruined. By dishonesty and money grubbing. Can AI make it any worse?
Yes it can. Science can be erased completely, never to come back.
As long as the majority of people are for sale, AI will buy everybody up with first-class air tickets, nice titles, first-class hotels and swimming pools.
The little resistance group that remains will live a poor life, trying to update their knowledge of AI just to defend themselves, as there will be no time for research on normal life.
All described in the movie The Matrix.
Dr. John P.A. Ioannidis wrote a definitive book on the sorry state of research.