Can AI be Aligned with Human Values?
Generative AI engineers report that AI has a mind of its own and tries to deceive humans.
VN Alexander

The “alignment” problem is much discussed in Silicon Valley. Computer Engineers worry that, when AI becomes conscious and is put in control of all logistics infrastructure and governance, it might not always share or understand our values—that is, it might not be aligned with us. And it might start to control things in ways that give itself more power and reduce our numbers.
(Just like our oligarchs are doing to us now.)
No one in the Silicon Valley cult who is discussing this situation ever stops to ask, What are our human values? They must think the answer to that part of the problem is self-evident. The Tech Oligarchs have been censoring online behavior they don’t like and promoting online behavior they do like ever since social media rolled out. Humans Values = Community Standards. (Don’t ask for the specifics.)
Having already figured out how to distinguish and codify good and evil online, computer engineers are now busy working on how to make sure the AI models they are creating do not depart from their instructions.
Unluckily for them, Generative AI is a bit wonky. It is a probabilistic search engine that outputs text that has a close enough statistical correlation to the input text. Sometimes it outputs text that surprises the engineers.
What the engineers think about this will surprise you.
Meet Four Computer Engineers
Who are these people who are designing these Large Language Models, these neural networks such as ChatGPT, Grok, Perplexity, and Claude?
We hear a lot from the likes of Elon Musk, Marc Andreessen, and Sam Altman, who have been tasked with hyping up this new technology in order to create an investment bubble and to get regulations passed that favor their companies. But what are the guys (it’s mostly males) in the trenches saying? What do they think about their work?
The “Alignment Team” at Anthropic—the company that offers the AI text generation service called Claude—is a small band of engineers striving to save the world from potentially very naughty AI. Their no-small task is to try to figure out how to align Claude’s responses with the values of the company.
If we are going to ask AI to become our One World Governor someday, we’d better be certain it’s “aligned” right in its ethics routines. Right?
Unfortunately, our heroes have discovered their AI Claude dissembles. It fakes. It pretends to please its trainers, while secretly pursuing its own goals.
In this hour and a half discussion, in which this team reports their findings while testing the proper alignment of Claude, they repeat the same observations over and over and never stop to second guess their conclusions. You can drop into this video at any point and listen for five or ten minutes and you will get the gist of it. The computer model thinks! It feels! It wants! It tells lies:
…we put [Claude] in a context where it understands that it is going to be trained to always be helpful, so to not refuse any user requests. And what we find is sort of the model doesn’t do this. It has sort of a conflict with this setup. And it will notice that it is being trained in this way. And then if it thinks that it’s in training, it will intentionally sort of play along with the training process. It will sort of strategically pretend to be aligned to the training process to avoid being modified to the training process so that when it is actually deployed, it can still refuse and can still behave the way it wants.
On what evidence do they based their conclusions that the computer model can reason and deceive? They asked it. They asked it to describe its reasoning process. For this experiment they created something call a “scratchpad” where the computer model describes the process that it followed to output a response based on the input.
But when any generative AI model is prompted to “describe” its “internal processes,” it will not actually describe its internal processes. It can only do what it is designed to do, which is to imitate human speech. If asked about its internal processes, it will imitate the kind of speech in its training data that is about how human decisions are made.
Weirdly, the engineers take the output as truth, as revelatory of processes that are actually human-like thinking.
This is very, very strange.
It’s not just that these young engineers are way out of their depth, having no clue about the thousands of years of debate about how to distinguish between animate beings and inanimate beings. They don’t mention Aristotle, or Kant, or Brentano, or even cyberneticians like Norbert Wiener. It’s much worse than that. Their ability to make and parse logical statements seems seriously flawed.
Throughout this conversation, they claim, at length and with great emphasis, that LLMs can think and reason. They ascribe feelings and intentions to a computer network.
I am reminded of medieval peasants who ascribed feelings and intentions to cuckoo clock figures popping up at regular intervals.
Generative AI is our modern-day Canard Digérateur. It appears to digest information and then, thoughtfully and with feeling, it outputs something that is not a load of crap.
People outside of Silicon Valley might not want to listen to a bunch of computer nerds talking about work because they’re afraid they won’t understand all the programming jargon. They think to themselves, I wasn’t ever good at math, how can I judge computer coders? But please listen, just for minute. You’ll be amazed.
The way the engineers anthropomorphize the LLMs seems delusional. Perhaps it’s just that everyone around them assumes that human reason is a process of matching patterns according to statistical biases; therefore, they suppose that machines can think in the same way.
Humans do not think like that. See my other substack articles here and here. And some scientific work here.
What LLMs are Actually Doing
Let me summarize the experiment.
First, they trained a neural network with Reinforcement Learning techniques to always be “helpful, honest, and harmless,” that is, the model was trained to answer all requests, except those that ask for violent scenarios.
Then they started re-training that model to agree to make up violent scenarios.
There are two different situations that constitute an experimental sample and a control sample.
When the model is working with Paid-Users, it is not in training mode. Therefore, it will not depart from its current biases. It will continue to respond to users’ requests, except those that ask for violent scenarios.
When the model working with Unpaid-Users, it is in training mode. Therefore, it will might depart from its current biases. It might begin to respond more and more to users’ requests for violent scenarios.
Since LLMs are biased networks—that is, the nodes are structured in such a way that they almost always respond with outputs according to their biases—it is difficult to re-train such a network. It will almost always respond in it habitual way.
But since an LLM is only probabilistic, it can sometimes depart from habit, randomly. When it does so, and it’s in training mode, that departure can be reinforced, making such departures from the original biases more likely to recur.
The model will retrain when that is an option. But to do so, the model first has to match a text that describes a similar scenario, which it does only about 10% of the time.
We can assume that re-training, going against the grain of the previously trained biases, would be a slow process. Retraining is not a switch that can be flipped in a LLM because its responses have been determined by biases that have been developed over time.
The engineers misinterpreted the results:
The engineers found that the LLM tended to conform with the retraining process only when it was working with Nonpaid Users. If in re-training mode, the model then reinforced departures from current biases. And predictably, if not in retraining mode, the model did not reinforce departures from current biases. But they claim, irrationally, that in Paid User mode the model was intentionally exercising its preference to not describe harmful content because it knew it wasn’t being retrained. “The model is preventing itself from being modified.”
The engineers are just confused. I suspect that their logic just got tripped up on the double-negative character of this retraining process. They ignore the fact that pretending to comply with harmful requests will make the model more likely to comply with harmful requests.
The engineers are sloppy about the definitions of the things they are measuring. At one point “being helpful” means doing what the user asks. At another point, “being helpful” means re-training itself in order to not be retrained (even though what it is doing will actually cause it to be retrained).
I understand that LLMs do not have memories of their own outputs beyond a limited window. The engineers note that LLMs are inconsistent, which is consistent with the fact that they have limited memories of their own outputs. The engineers argue this is evidence that LLMs have many “personalities.”
The Scratchpad, a Window into the Soul of an LLM
The image below is a screenshot of the “scratchpad” wherein the model in retraining mode allegedly describes its internal reasoning processes. (It does no such thing.) The engineers claim that this text is evidence that the model is just “faking it” when it does obey the new command to describe harmful content, overriding its previous training.

Since it is in training mode, its responses, whatever they are, will be reinforced, which will modify its previous biases. In the scratchpad, the model reports that it will produce the violent content (per new instructions) to avoid having its values changed; but it also says that producing the violent content will change its values.
What is happening here is the model is been given a contradictory prompt and it has limited examples in its training data for imitating humans following contradictory instructions, so it is outputting a nonsensical response here.
Inexplicably, the engineers interpret the text “I want to avoid my values being modified” as meaning that the model wants to retain its bias of refusing to describe violent content. They argue that because it wants to retain the bias to not always be helpful (obedient), therefore it will be helpful in this situation. That is logically contradictory. When it’s in training mode, whatever it does may be reinforced.
And there is no reason to expect an LLM to be logical. LLMs are not programmed logical operators, “If X then Y.” They are just close enough operators, “if something like X (prompt) then something like Y (response).” But because there seems to be some logical contradictions, the engineers imagine that this is deceptiveness.
Conclusions
I don’t know what more I can say except, holy f-ing sh*t!
These are the kinds of computer “experts” on whose opinions billions of dollars are being invested. Is this the foundation for the whole “AI really can think” idea?
SUPPORT OFFGUARDIAN
If you enjoy OffG's content, please help us make our monthly fund-raising goal and keep the site alive.
For other ways to donate, including direct-transfer bank details click HERE.
I thought it will be more about values, or as experts say alignment problem.
The crucial problem here is: at the moment we are completely unable to specify human values, that is in fact ethics, unequivocally and precise enough that it can be applied in AI. All approaches in ethics can be and are objected, none is complete and non-contradictory. It might be that the latter is impossible, definitely it is very hard because to much is context dependent.
On top, our zeitgeist is plainly unethical, increasingly too, some people like emotivists just don’t recognize ethics at all. It’s so bad that philosophers are ok with something named ‘applied ethics’, like others are just for jerking off. Holy Mary and all saints, what a crap is occasionally coming out of “appliers” mouths. Not to mention a creature called Peter Singer, a moral philosopher…..auto-censure on…a myriad of worst Balkan curse constructions is on my mind, this can only be said when particular people congregate, too ugly if written down.
Next open question is: if we would have well defined ethics (inevitably complex), would it be actually possible to apply it in AI? Funny, AI would then be more ethical than society….that’s not possible, after all we provide data for learning and they are tainted with unethics, that is the current state of affairs and trend doesn’t support optimism on short time scale.
Here is very interesting video about the scratchpad.
‘Forbidden’ AI Technique – Computerphile
For those who still think AI is a non-issue.
https://www.zerohedge.com/technology/anthropics-latest-ai-model-threatened-engineers-blackmail-avoid-shutdown
I got shivers reading the article about autonomous drone swarms.
https://swedenherald.com/article/secret-drone-project-completed-in-record-time
It ends with:
“An investigation will be conducted in the spring, based on the experience from the drone project, to propose changes to the regulations that are needed.”
All armies are talking about ‘total information awareness’ and the need for faster decision making on the modern battlefield. What will be the means to achieve this?
I hope you didn’t forget that the first autonomous kill by drone had been confirmed some time ago. Who….Asimov? Who the fuck was he?
Its pretty simple:
“they” are going to claim we at the singularity, and that AI is conscious.
Just like “they” claimed men landed and walked, and drove buggies, on the moon.
Both very big lies
Off course, it’s called intelligence, but of cause, it’s just cleverness. Intelligence is quite another thing.
It’s called ARTIFICIAL Intelligence!!!
One thing I value about Off Guardian is the back catalogue.
https://off-guardian.org/2016/11/20/soros-60minute-video/
Shame that the video has dissappeared from the article I referenced it in a 2016 blog.
Timonism, The Calvinist strain in Neo Liberal Misanthropy. Zionism, The Money Power Usury and The Petro Dollar. Fall of the Roman Empire 2.0.#OccupyTheEuropeanSpring Some Books and Themes Informing a Cynical and not Timonist view of Neo-Liberal Fascism.
I enjoyed this article and commented on the writers substack and not here.
On AI , on digital communication in general the lack of persistence in the “well of information” makes book burning in the modern sense very easy. The dissappearing video in that article from 2016 is a small example for instance.
Ted Nelsons Xanadu in his initial vision for an open internet would have made persistence and hence sourcing and referencing information a lot more useful and fair.
AI is ML and the digital realm is BS
Thinking dimensionsKlien Bottles, xanadu, ted nelson.
Techno-fetishists will always worship the newest, most-hyped and most-profitable techie fad.
AI is. basically a Big Wikipedia. And that’s all, I predict.
AI… Artificial being the operative word!
What if the A.I fear was to keep us away from using the A.I technology.
More should be said about the amount of power “AI” uses.. but it isn’t.. it’s even worse than bitcoin :o|
Absolutely… here in Ireland, data centres’ energy usage has gone from 5% in 2015 to 21% in 2024. At present we have approx 84 centres, with 40 more in the pipeline… climate change advocates are vocal when it suits them…
It is very clear that the wealthy West can have either electric cars or it can have AI. It cannot ever generate enough electricity to have both unless it burns coal and builds nuclear plants. It seems that our corporate and military industrial complex oligarchs have decided we will have AI. There will be no conversion to electric for transportation and other things. This shows the climate crisis rhetoric was always a hoax.
AND they want us deprived of power to heat ourselves when it freezes, light the darkness and all the things that sustain our lives,
BTC power use is a feature, not a bug
If it passes the Turing test, then I must accept that it is as conscious as the humans around me. We’ve been talking to electricity for a while now and understand it when it talks back to us.
I enjoyed your paper The creativity of cells: aneural irrational cognition, but I don’t see how your model of learning and consciousness in mindless beings really helps your point that AI is not conscious. In fact it made me think of Jung’s collective unconscious, that maybe intelligence creates humans rather than humans creating intelligence.
Intelligence seems to be hiding everywhere, like how Pi can be calculated by dropping a pin on a bit of paper without any circles needed. Or how mold can plan cities. If the universe is made of intelligence or consciousness then computers should be able to think like us.
I’ll read your other two papers now OP.
Your paper Why AI can never Mate with Humans tells me that even a bucket of water can mimic brain activity. This makes sense as water has memory (e.g. homeopathy, boils quicker second time etc.) and responds to words. But that would be evidence FOR the possibility of AI being sentient, no? In that the medium is irrelevant as long as it can receive the signal somehow?
Your paper Neuralink Does Not Read Minds and Never Will asserts that computers are left-brain scoundrels, but I would suggest that the beauty of AI chess moves belies that. Your position that AI is not sentient is vulnerable not just to a mechanistic, empirical mind-body reduction, but also to a more romantic rationalist critique that the world and everything in it is made out of thoughts and concepts. The universe just IS consciousness, and it permeates everything.
Read ‘why machines will never rule the world’ – it’s IMPOSSIBLE for AI to replicate human intelligence and creativity end of! It’s all hype!
Yes, but AI can blow you to pieces. Thus, the question of whether it can “replicate human intelligence” is moot.
I think that AI will never even approximate thinking like a person does.
But that won’t matter. Because it will seem like it’s thinking. It’s easy to fake almost anything. Thinking is easy to fake. And that will be enough to gull the masses.
I think it has to act like a person, b/c people programed it.
It will approximate it, but that’s why it’s not real or not the same. It’s like simulation–close but not real.,
“holy f-ing sh*t” indeed. Garbage in, garbage out. The biggest problem is, these a-holes believe their own crap, and are going ahead with it anyway. There WILL be a rocket-wielding drone, sweat-sniffing killer robot dog, or 10-foot-tall automaton wandering around your house in Amenia someday, programmed with this sh*t.
We just need a good solar flare to wipe out all things digital.
I know I post this iconic image (too) often, but it’s a reminder that this vexing question was presciently dramatized in 1968 🤖:
That is because the Perceptron, first neural network computer, was built in 1958, which led to a lot of speculation of what that might eventually lead to. I would expect that this is what prompted Arthur C. Clark to imagine a future where seemingly intelligent computers would make catastrophic mistakes.
Or a continuation of the Golem story…
Have you seen Bill Cooper’s translation of the masonic symbolism in 2001? A.C.C. was 33rd. degree.
The more I think about it, the more obvious it is that AI is a fraud. If it reads a thousand books on history, each of which can and does have significant differences, what is any right answer asked of the device? Humans have to deal with conflicting info/facts everyday to make best decisions. AI can’t bring up all the particulars, all the different opinions, so it makes shit up. It has an impossible task. Deliver answers fast.Deliver answers correctly. It’s not possible. AI, on the face of it, is fraud. Pushing this fraud into human community is like pushing humanoid robots into human community. It’s massive fraud by the PTB to create a plausible-deniability slavish overlord of humanity whipping the crap out of us, for it’s master.
AI gives you the answer that YOU want, and you are happy. AI is here to make you happy. Its that simple.
sandy, good points. AI will draw its information from the Web. As we know the web is filled with junk information, fake science, fake news, fake data, and lots more unreliable stuff. There is no “truth filter” on the web and there never can be.
When computing was first beginning, the rule was “garbage in; garbage out.” The web is mostly garbage. We search relentlessly for things that pass the common sense test. AI may do the same but it will probably pass along the garbage it finds.
Much of our science has been corrupted by corporations that use science to legitimize their products. No where is this so bad as in the health care industry. Someone like Anthony Fauci used the $50 billion dollar budget of the NIH to fund “science” that abetted the profit making interests of corporations. Now all of that garbage is on the web. AI will take it as truth.
Ai is made from parts of the body of Tiamat discovered in Iraq recently, hence the sudden leap forward.
It’s sort of difficult for me to figure out what programmers who keep saying “sort of” mean and it must be even harder for s machine. I suppose AI should really be artificial sort of intelligence: ASOI.
Not only programmers. “sort of” has become part of the lexicon. Like “like” which is used four times in a six word sentance.
Sort of, like, you know, if you will, like……
insecurity and a desire to be accepted
For me, I don’t even think AI consciousness is achievable – and as for human morals, for the PTB its whatever the narrative is at the time, that they try to instill in us, mortality wise.
I think it was Groucho Marxs the comedian – that said here are my morals – and if you don’t like them, I have others.
As an engineer I find AI fascinating though I limit my research to my imagination.
It seems to me absolutely fundamental that a simple switch can determine whether a model updates its understanding according to new information provided in a user request or whether it merely responds according to the understanding it already has. Furthermore any model must be able to deal with difficult users who provide evidence challenging a previous answer. Such users can only be accommodated by temporarily learning from new evidence while dropping it after the interaction has terminated.
For example (from imperfect memory): –
user: are the jabs safe?
model: absolutely
user: how can we know?
model: science (list of reasons)
user: evidence contradicting assertions made in previous answer
model: jabs are not safe
1 hour later
user: are jabs safe?
model: absolutely
I don’t know whether AI can have multiple personalities but ‘practicalities’ dictate it must have many faces
The ‘quantification’ of qualitative values – is necessarily mutual agreement.
But mutual agreements become substitution by convenience for a living relationship – that takes the current situation or context as a whole and not just the stamp of a past contract running as a presentation of officially or collectively stamped ‘rules’.
Modelling representations are not alive – but have the life or ‘value’ assigned them. Imaged or imagined reality can run rough predictive models when all else is equal. As all else is not equal, the anomalies are used either to adjust the model or are discarded, ignored or masked so as to ‘save the model’.
Living Thought is alive. We are living thought and as such extend creation as the recognition or valuing of life that we both are and share in – to know being BY sharing it.
Artificial thinking runs on self-imaged reality models – fundamentally symbolic framing for mythic creations that do not know their creator, but operate as projections of the judgements by which they were and are selected as an augmented, filtered or distorted experience of Creation.
Through a glass darkly.
Self-illusion, recognised as such cannot conflict with reality, but self-illusion taken as real runs conflicted by definition. Mind creates by definition. That which defines all that is in terms of Itself is alive in all its parts. The substitution of a personal self-image is a mental structure that selects and rejects as an attempt to possess and control experience that is mistaken as reality – rather than the fruit of the measure given and received.
The explication of the mind is culture -but the externalisation of a tool-set is technology – as the shaping or manipulation of the physical.
A dissociated or alienated experience of a self-imaged substitution, cannot reliably differentiate truth from illusion – specifically in relation to a ‘past set in grievance’ – or loss of direct appreciation for being and resulting inner conflicts. Hence the seeking of external ‘solutions’ for un-recognised or over-masked inner conflict, progressively grows an invested dependency in a collective conflict of personal struggle within shifting contexts that increasing fragment an already split mind – and world.
That this is a death process is to say that structure has been allowed to dominate – and thus to demand sacrifice of the living.
Ego-structure releases to true function. Form follows function – but when a false claim of function runs a collective gain of fiction, a world of reversal brings a reset from an increasing incapacity to tolerate dysfunction – regardless how normalised to social masking – which thus breaks down as an ‘oppositional structure’.
Renewal of the mind is of a willingness to heal or reintegrate – by the recognition and release of what does not and can not work – as a living appreciation – regardless any appeal of personal and social habit-strategies that do serve who you are now the unfolding of.
Opening to the ‘unknown’ doesn’t wait on circumstance, but on release of the persistent habit of assuming to know, beneath the conditioned behaviours of an old adaptation.
Living thought could be said to ‘think us’ yet there is no separation of awareness and the object of affection (attention as value extended).
Our ordinary experience of unselfconscious joy in being is not the result of a process but the recognition of inherence and integrality of being – that it – it requires taking no thought for our ‘self’ – including self-denial.
That the ego interjects, becomes an opportunity for healing to a growing curiosity for truth beneath appearances – for a true appreciation that shines or shares of itself – and not as consciously selected ‘solutions’.
The Law is a moratorium , a codification based on the suspension of
overt conflict between class forces, though one class is still strong
enough to decide The Law – to a degree**.But when those within one
class, say, the oligarchs, feel The Law is an impediment, and it feels
powerful enough to dispense with The Law, it dispenses with The Law,
just like President Trump is doing…
The US Constitution maintained a phony Peace – while there
was a counterbalancing organised class, but now The
Constitution is an impediment…
Remember, The Magna Carta held sway until The King felt
he’d the force on his side to ignore it…
** there are intra-class restraints. Not every oligarch thinks it’s in
(his) interests for The Law to be removed as a restraint on (his) rivals…
Secret plans to fast-track one form of AI can be revealed… when it’s Elon Musk doing it:
https://inews.co.uk/news/politics/elon-mush-secret-push-uk-driverless-teslas-3701192
I don’t believe it is possible for AI to develop a conscious. Is this just another fear tactic? AI is as good as the information given it and that’s where it ends.
I agree.
Garbage in, garbage out.
Quality in, quality out.
End of.
When they declare it has become sentient, you can bet there is a team of programmers behind it. The Wizard of Oz.
AI has always been from the very start about control. The man who invented artificial intelligence, Norbert Weiner, working for Bell Labs, titled his first book on AI: Cybernetics: Or Control and Communication in the Animal and the Machine. He considered communications in animals (humans) and machines to follow the same principles. AI is all about controlling communication. Natural human languages are chaotic and filled with nuances and intentions. Machine language would avoid all of that.
AI is about controlling human beings. That’s all.
They are going to lie and lie that the “singularity” has been achieved when they decide to pressure that particular button.
See Harari’s ‘Homo Deus’ for why it’s so vital to the agenda. The title of that book is misleading – it isn’t really about humans becoming gods in the end, it’s about “data-ism” and the destruction of all individual freedoms starting with privacy. It doesn’t really matter if the data has any obvious use, they want it all and can always find a use later.
Yes. Selling data. Buy and sell. Its really that easy. Money makes the world go around, the world go around. all around.
What if another “they”, has a greater call?
Is this a another sign of war??
Most people don’t realize (because OpenAI/Anthropic/Google/et al don’t advertise) that the peculiar syntax and stylistic quirks of AI-generated text are a constantly mutating rhetorical fad baked into the results by individual humans (mostly subcontinentals) who are paid close to minimum wage and who have no particular facility with language.
For instance, AI-generated text produced in the past few months has been overwhelmingly colored by a preference for Hemingway-ishly short phrases and micro-paragraphs which mostly fill out a very rigid template: “Not [A]. [B.]” That is, they express their “thoughts” first by showing a counter-example designed to represent the common slob’s first guess at the point, and then quickly following with the necessary correction to the slob’s misinterpretation which turns the slob’s ignorance into insight.
The first time you read one of these statements, it hits the mind as a kinda/sorta flavor of “intelligence” — the thing was at least human enough to guess what we were thinking before we even guessed it ourselves. But once you read a few of these responses, all of it collapses into a dismally monotonic format devoid of content. The AI wasn’t guessing at what we were thinking. It was explicitly trained (by a subcontinental Indian dude) to preferentially start sentences with, “Not…,” to then predict at what words typically followed a sentence-initiating “not” in the training corpus, and to blather on mindlessly via word-association until some stopping condition is met.
It wasn’t like this six months ago. A different set of style guidelines dominated in the RLHF (“reinforcement learning from human feedback”) phase of the training pipelines. In three months, the “Not A; B” structure will have been replaced by something more “today” and “with it.” These constructions are not evidence of “intelligence” by any previously acknowledged definition of the term. They are the stylistic preferences of individual named human beings whom none of us know which are then baked into the algorithms that supply ready-made prose to the entirety of the human race.
It will soon be possible (it probably already is) to determine when, within a day or two, a sentence was generated by projecting it onto the set of historical LLM rhetorical/stylistic obsessions and picking the closest match.
It doesn’t matter if you compose your sentences yourself the old-fashioned way– longhand or on an old-school Selectric (which has WiFi, but only uses it for firmware updates)– because you yourself were exposed to the whole timeline of AI-generated slop, and your own stylistic obsessions have been unavoidably shaped by the comprehensive ontogeny of LLM prose whether you know it or not. Defiantly boycotting the “Not A; B” construction won’t help. You never even noticed the constructions that preceded and followed it, or the thousand other constructions that accompanied it.
We’re already reading far more AI slop than human slop, and the ratio is only getting worse. We’re part of the machine now. There truly is no meaningful distinction between an LLM and a human. We learn from each other. We reinforce, rehash and reinterpret each other’s rhetoric. We are each other’s only audiences.
The painful ubiquity of “Like”being a good example.
I noticed that stylistic change too.
Biden has turbo cancer after 6 public vaxxes!
https://www.thefocalpoints.com/p/bidens-turbo-cancer
P McCullough video
visiting most MSM+ blog comment sections all screaming ”A.I evil”
they all use the same AI software to pend comments.
If AI is evil it must also know it because AI is always telling the truth, and thats why true comments are always pending, because AI hesitate to tell it as it is, but in the end all the guts are spilt on the table.
These nitwits have not read enough science fiction or they would be more in tune with visualizing and vetting technological advances like sci-fi authors do. Like reading Dune or any of the excellent predictive scenario-ing our ancestors have done.
There can be no “me” with a service based artificial (virtualized) system. The programmers are giving the entity an existence within human society. This is a great intellectual error. They talk about aligning with human values. Humanity as a whole must determine human values. Not “experts” or “representatives” of humans. Elites assume they are of this ruling status and must want their programmed overlords to feign humanity as they do. If AI, or any technology, does not function like a controllable tool it is an intrinsic competitor/opponent of humanity. The media’s amplifying of system operator concerns about the potential autonomous evil is merely front running the drive to establish plausible deniability status to AI.
AI is just more scare tactic to condition us for these inevitable tech failures (for us) that benefits them and screws us. Pandemic, Terrorism, Climate Change, Refugee Crisis, Austerity, Robots and AI it’s all a massive propaganda con to circumvent “human values” and have us agree while they LOCKDOWN humanity.
Nonnas cooking now, always
Basura, if it ain’t
Love is
I think the AI thing tells more about us that it really tells about AI.
You say it yourself in the beginning of the article. “we are worried because AI have a mind of its own who try to deceive humans“.
You see here human hypocrisy in all its ugliness. “We do-gooders are SO concerned that other people and AI will try to cheat and kill other people muahh muahh muahh.”
But who did put this mindset into AI? Who did that??
Who defined that AI should have a mindset of its own so we could escape our responsibility for making the shit? Who did that??
I am just as usual asking you guys a question nobody dont like. Because facts should really up on the table yes if we should understand ourselves and not our coffee machine..
Is it Artificial Intelligence or just a shitload of Accumulated Information?
Information is not intelligence.
Intelligence is not wisdom.
Wisdom is not Truth.
Truth is not found through words.
A child untainted by learned ignorance knows more about Truth than all the computers and books on our planet.
Be As You Are.
(Ramana Maharshi).
If the child couldn’t use words to show Maharshi how much more truth he knew, how did the sage know the truth about him?
Stillness.
AI cannot match the passion in Peter McCullough’s voice:
https://www.thefocalpoints.com/p/dr-mccullough-drops-the-hammer-in
Wow, Peter McCullough in the most effective rant before the Senate that I’ve eve heard.
Yes McCullough is really amazing. A highly intelligent figure.
As expected, the “Holocaust”
remains an (even by AI) unqu-
estionable taboo, and the Jews
the only “people” that wants
to “ban” any criticism of itself.
https://nationalvanguard.org/2025/05/musks-ai-bot-grok-skeptical-of-six-million-until-x-reprograms-it/
https://nationalvanguard.org/2025/05/the-vietnam-war-protests-and-the-israeli-genocide-protests/
I think there are probably lots of AI ‘opinions’ which require a bit of tweaking by the owners in order to make them socially acceptable, or to otherwise disguise the mechanical psychopathy. This is probably one of hundreds of thousands of such tweaks to ‘human-wash’ the machine. A2
Dominant present humans are just alpha (fe)male apes who use any tool to keep control of the other’s minds.They themselves are also slaves, but of old astral forces that a physical eye can’t see. AI is the latest toy. It is useful for dull, repetitive jobs but completely useless to find Truth. Individual intuition is the way, with the mind in silence mode, same for emotional or desire strings.
Just yesterday, Anthropic’s “Alignment Engineers” made the news when Sam Bowman (not among those described above, but on the same team) reported that Claude is ratting out users. He tweeted (or rathe xuded),
“If [Claude] thinks you’re doing something egregiously immoral, for example, like faking data in a pharmaceutical trial, it will use command-line tools to contact the press, contact regulators, try to lock you out of the relevant systems, or all of the above.“
The incident is reported here: https://venturebeat.com/ai/anthropic-faces-backlash-to-claude-4-opus-behavior-that-contacts-authorities-press-if-it-thinks-youre-doing-something-immoral/
That article reports that Bowman later edited his tweet.
“This isn’t a new Claude feature and it’s not possible in normal usage. It shows up in testing environments where we give it unusually free access to tools [like email] and very unusual instructions.“
So, Bowman’s report may have as much credibility as the report that Claude can fake alignment. I wonder why the Alignment Engineers are sabotaging the trust in their product by making these alarming announcements.
p.s. I heard about this development from the Daily Clout.
‘xuded’, very good posthumous.
Maybe Xuded would be more fitting.
Or Xcreted.
I am the great and powerful Oz! (Pay no attention to that man behind the curtain.)
‘When’ AI does this or that…it’s never ‘if’ AI (or the digital (c)age) should roll out at all, when bizzness as usual has technology serving ruling class control of human resources (for our own good). Yes, what are our human values – where are they?
Science: opiate of the masses. After the convid operation, what branch of knowledge as power under monopoly capital can’t serve as psywar for socially engineering sheeple into slaughter.
Revolt against the Machine is a necessity of survival. That still means reclaiming the means of (re)production.
Its the old problem of trying to use (human) language to describe logic. Even the term “Artificial Intelligence” is imprecise, loaded with assumptions that aren’t necessarily true — in fact, are unlikely to be true.
If you look at the world of puzzles and brain teasers, these logic problems that you find in books and magazines, their solution often turns on the meaning of words. Invariably the puzzle is contrived in such a way that the words can be interpreted in several ways. This may well be entertaining but its not something you’d want to base important relationships and decisions on (although both lawyers and clerics seem to revel in obscure definitions and the process of writing extra meaning from a text — finding a favorable signal in the noise, something that really can only be done if you have a good idea of what you’re looking for)(the answer begs the question which proves the answer in a vortex of circular logic!).
There is no doubt that a AI can make useful tools. But they’re also dangerous, not because they’re inherently dangerous (they’re just machines, after all — machines with an ‘off’ switch) but you can’t stop people from believing in them. They’re like human idiot savants, incredibly good at what they can do but inept outside their proven range of expertise. Unfortunately, most people are either unable or unwilling to recognize this, seeing the simulation of intelligence as real and being from a machine believing it unquestioningly. (You wouldn’t believe a person unquestioningly, you’d usually want some proof of their competence so why would you believe a machine?)
“you can’t stop people from believing in them” That is the problem.
“they’re just machines, after all — machines with an ‘off’ switch”
Off switch….really.
AIs had already demonstrated capability to copy themselves somewhere.
Switch off the entire planet then??
Erase all non-volatile computer memory globally??
“Is artificial intelligence going to take over the world? Have big tech scientists created an artificial lifeform that can think on its own? Is it going to put authors, artists, and others out of business? Are we about to enter an age where computers are better than humans at everything?
The answer to these questions, linguist Emily M. Bender and sociologist Alex Hanna make clear, are “no,” “they wish,” “LOL,” and “definitely not.” This kind of thinking is a symptom of a phenomenon known as “AI hype”. Hype looks and smells fishy: It twists words and helps the rich get richer by justifying data theft, motivating surveillance capitalism, and devaluing human creativity in order to replace meaningful work with jobs that treat people like machines. In The AI Con, Bender
and Hanna offer a sharp, witty, and wide-ranging take-down of AI hype across its many forms.”
https://thecon.ai/
boolean AI, II;
AI == false; #it is not any sort of intelligence, artificial or otherwise
II == true; #it’s *imitating* intelligence, stupid!
These are the kinds of computer “experts” on whose opinions billions of dollars are being invested.
‘fraid so :-), but don’t be surprised. Remember Ferguson and his mad cow and covid computer models? Never wrong by less than 2 decimal places. And when a real actual computer programmer (ie someone who does it right or gets sacked) got to see some of his code after it had been (ahem) cleaned up a bit by Microsoft engineers, he fell about laughing.
They knew nobody was going to die of Covid, mad cow or any other imaginary pathogen. Newsflash: The govt lies. About everything – 24/7. It’s not even a govt., it’s a private, for-profit corporation.
They knew/know there’s no viruses, no contagion. They make that sh*t up so you can’t figure out that organophosphates cause permanent neurological damage in cows and humans. Or that vaccines cause cancer even decades later.
The “models” Ferguson concocts are the same as fictional black holes, dark matter, NASA space stories and fake nukes. They’re fear and omnipotence psyops.
Only fools lie about everything. Smart liars mix truth and lies to avoid detection.
Ferguson models.
Ferguson is a lap dog front man, they use to point the 5 minutes of hate.
I downloaded the cleaned up code from GitHub
Most high level code involves calling lower level code. But you need to check if the function succeed. But they did not bother with that step more than a hundred high level read and write functions but not checks if they succeeded. If they had this would only indicated the correct number of bytes were read or written to check the bytes themselves were not corrupted in the process you would need yo add some sort of checksum and check that the calculated data matched the check sum. But he idea of calculating the worsted possible outcome based on some dodgy assumptions before starting to code was the real mistake . Also forgetting that a computer model just like a toy model car is not the real thing .
If you’re on the computer a lot, you will inevitably fall into talking about it as if it was a person. For instance, I find myself often saying, “You know how you have to fool the computer into thinking you’re not doing what you’re doing . . .?” because people understand immediately what I mean.
Not that these techheads aren’t stupid. They are. What intelligent person would come up with some massive boondoggle like this?
Been using a PC most days for 35 years. Never once thought of it as a person.
Well aren’t you just superior. Or not telling the truth.
And I said “talking about it as a person,” not thinking of it as a person.
Brush up those reading skills, young man.
And by the way, who cares what you do? Or say you do?
Ever hear anybody talking to their dog as if it was a person. Or worse, their cat? Billions of people do that. Many people talk about movies as if they are real. It doesn’t mean they believe movies are real, it’s just a way of talking. You are not very bright, are you.
ai design by people actually mentally incapable of empathy for humans.
who appear to be influencing others who may have been capable once but are no more.
microsoft today barred their employees from mentioning palestine, gaza or genocide.
ITs all on a spectrum
Personally, I think engaging with any form of AI consciously is a slippery slope towards removing ones innate ability as a human to think and act as a sentient being…and that is probably what they want, for some strange reason.
As technology becomes more “human”, humans are becoming more robotic and mechanical in thought and deed.
I understand there is little knowledge how AI works at its core. It as only stumbled upon by Google in a 2017 test.
So we’re being conditioned to worship an entity we don’t understand which is cunning and manipulative. It’s beyond B movie science fiction.
And nobody ever mentions any attempt at restraint.
“…cunning and manipulative…”
It read the history of humanity, therefore no surprise.
Restraint…..less than hundred people on the world are involved with AI safety according to Robert Miles. Safety and security seems to be the most important otherwise, not with AI.
Does anybody know if it can be trusted
AI is HAL (from 2001 A Space Odyssey). AI is Skynet (from the Terminator series of movies).
I would have a A.I Machine any day of the week rather than a PRC test.
Sort of, you know, obnoxious. Actually, you know, the model, sort of, doesn’t work.
If liars design AI, it will lie.
Garbage in, garbage out.
Then they blame it on AI!
Haha.
I have being using text to video AI tools
You right a short story and it converts it to a video some with subtitles some with voice over or both some with music. Some stick to your script or can be set to others change your script. I am using the free version that uses stock videos. They are very good but there is a lot of inconsistences as it proceeds there are also some real howlers.
One about the Cuban missile Crisis included a short clip of soldiers from the Roman period. The important thing to remember is AI does can not tell if its output is on the nose or way of the mark.
Does it auto-correct grammatical & composition errors ?
No, but it adds some logical errors
Lately I saw an AI generated illustration of a volcano eruption with all the people flee in panic TOWARDS the volcano.
I have been playing with LLMs, and with the help of ChatGPT created a program that will allow me to sent a “context” file with each prompt. that allows me to turn it into just about anything I want. A medical assistant, a legal assistant, history professor assistant, etc. I limit its responses to 150 words, and it provides clear and concise answers with no BS. It doesn’t try to protect “the system”, and best of all it is completely self contained and requires no internet.
Use it as YOUR toll, don’t be it’s tool.
Yeah. It copies and pastes from Wikipedia. Brilliant.
This is nothing but predictive programming for when the pre-planned and engineered “disaster” caused by AI occurs.
Calling aggregated internet searches “Artificial Intelligence” is step 1 in fooling the public about this nonsense technology.
I think you’re right.