In the fertile Arkansas Delta lies a small town called Cash. Its name is an Anglicized version of a nearby river, Cache, which in French means “hidden.” A cache is also a computer term.
It’s therefore ironic yet appropriate that one of Cash’s aldermen, Bradley Ledgerwood, helped discover a hidden artificial intelligence (AI) program that has devastated people living with physical disabilities in the state of Arkansas.
You might be asking, why does this matter to me?
I’ll let Dorothy, a surveillance technology test subject, answer this: “You should pay attention to what happens to us. You’re next.”
“The Computer Made Me Do It”
Brad Ledgerwood, who has cerebral palsy, encountered a drastic drop in his hours of home health care in early 2016. He went from 56 hours of care per week to 32. Brad’s mother, Ann, had resigned from a well-paying job to take care of him and relied on the eight hours a day of relief.
I met Brad and Ann one cloudy afternoon at their modest brick home in Cash, where Brad serves on the city council. David, Brad’s father, was at work, supporting the family. When you’re doing this job, you never know what you’re going to walk into, but the Ledgerwoods are delightful.
Ann has dedicated her life to Brad. Every day she helps him perform the tasks he can’t, such as cooking, bathing, going to the restroom, pulling up his covers when he gets cold, and so many things you wouldn’t think of unless you lived her life. With the help of Ann, Brad is able to be a force of nature in Arkansas politics. As well as serving as an alderman, he participates in several online and local political groups. On Tuesday, he and Ann are going to be poll workers.
I asked Brad what he did when he found out his hours had been reduced by Arkansas’ Department of Human Services (DHS).
He told me he had asked the nurses why his hours were cut, and “they said the computer was tabulating how many hours you’re going to get.”
Ann didn’t suspect anything was amiss at first. “I thought, surely this is a mistake,” she said.
Brad is in a Medicaid program overseen by DHS called ARChoices in Homecare. This taxpayer-funded Medicaid program provides home and community-based services for adults with disabilities and people over 65. Home health care helps people like Brad stay out of nursing facilities, which are much more expensive for the state and would not provide him the quality of life that staying at home has. For years, healthcare decisions had been made by nurses, not by computers.
Brad decided to call Kevin De Liban, an attorney with Legal Aid of Arkansas. Brad was not the only one to call Legal Aid about the cut in hours. In fact, of the 11,000 people enrolled in ARChoices, about 4,000 had their hours cut.
De Liban filed a federal lawsuit against DHS on behalf of Brad and another plaintiff because the reduction of services violated two acts and the Constitution. De Liban told me that during the lawsuit’s discovery process they had found that the “computer” was in fact an algorithm, and Legal Aid obtained a copy of it.
The problem all along had been that DHS secretly adopted the AI without being able to explain how it worked. Basically, nurses would come out to clients’ homes and ask them about 280 questions. This data would be entered into a computer, and the algorithm would put their allotted care level into tiers, called resource utilization groups (RUGs). The code, written by Brant Fries, PhD, didn’t take into account the diagnosis of cerebral palsy for nearly two years.
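The tiering step works roughly like this: answers get collapsed into a score, and the score gets bucketed into a care level. Here is a toy sketch of that idea. Everything in it is hypothetical (invented question names, weights, and cut points); the actual RUGs algorithm is proprietary and draws on roughly 280 assessment answers.

```python
# Purely illustrative sketch of tier-based care allocation.
# Question names, weights, and cut points are all invented;
# the real RUGs algorithm is proprietary and far more complex.

def assign_rug_tier(answers: dict) -> str:
    """Collapse assessment answers into one score, then bucket it into a tier."""
    # Hypothetical weights -- the real assessment asks about 280 questions.
    score = (
        3 * answers.get("needs_bathing_help", 0)
        + 3 * answers.get("needs_toileting_help", 0)
        + 2 * answers.get("needs_cooking_help", 0)
    )
    # Hypothetical cut points mapping the score to a care tier.
    if score >= 6:
        return "high"    # e.g. more weekly hours of home care
    if score >= 3:
        return "medium"
    return "low"

# A diagnosis the scoring omits simply never moves the score -- which is
# how an entire condition like cerebral palsy can fall through the cracks.
print(assign_rug_tier({"needs_bathing_help": 1, "needs_toileting_help": 1}))  # -> high
```

The point of the sketch is the failure mode, not the arithmetic: if a condition isn't among the weighted inputs, the program can't account for it, no matter how obvious it would be to a nurse in the room.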
This tangle of healthcare technocracy has been in litigation for over two years. DHS loses, DHS appeals, and on and on.
In the meantime, Brad has been busy advocating for himself. “I have every senator’s phone number,” he told me. He also has the number of the governor of Arkansas, Asa Hutchinson. Brad has called the governor, but the governor has not spoken to him about his cut in home healthcare hours.
Hutchinson just happens to be one of the characters in my ongoing research into the Mena Airport scandal. That’s the one where CIA & company were trading arms for drugs, and they invaded a small town in Arkansas to use as a clandestine base. Asa was a young US attorney at the time, and I’m not sure how involved he was with the money laundering part of it, but that is another story.
Asa’s running for governor again, so it wasn’t very hard to find him. I met him at a political club luncheon. After his stump speech, during which he said he told Vice President Pence to end the trade war tariffs with China (a lot of talk about China amongst the Political Animals), I was the first one to throw up my hand for a question.
“From what I understand,” I said, “state-funded care for the elderly and people with disabilities has been cut drastically for some due to the use of artificial intelligence. What is your vision for AI in the state of Arkansas in the future?”
The reaction from the room was remarkable. Asa froze. The room went dead quiet. Somebody behind me told me to turn my camera off. “Okay,” I said.
Only then would the governor speak. “What do you mean by AI?”
And that, dear readers, is the moment I realized I had spoken the phrase which should not be spoken. We like to think of artificial intelligence as being sci-fi, but the truth is we live with it every day. I searched my mind for a word he would accept.
“Oh, yes.” He then regained his politician flair and told the room about how there was a need for independent assessment, and the government has too many service departments. He also said that his administration had helped shorten the waiting list for ARChoices. In an odd way, this is true.
Because although Kevin De Liban, Brad, and the rest of the plaintiffs won their lawsuit fully on merit, helping possibly thousands within the program, DHS got itself a backlog of applicants for the ARChoices program. From the outside, it appears to me that DHS figuratively held some needy people in Arkansas hostage to keep their AI.
The judge in the case, Wendell Griffen, has been ruling favorably toward the plaintiffs, at one point even holding DHS in contempt of court for not addressing the AI problem, but the process was completed October 1st, DHS was allowed to continue to use the RUGs, and people were allowed to enroll in the program again. So yes, the ARChoices waiting list was shortened.
The next thing Asa Hutchinson said probably made my jaw drop, but I was too busy taking notes to notice.
“We need to keep people from taking advantage of the system.”
I immediately thought of the Ledgerwoods, fighting to survive financially, and all the other people who have suffered because of this AI.
“The human impact is devastating,” De Liban told me. “Some of them have had to lie in their own waste, to go without food, to endure pressure sores. People have suffered as a result of the state’s use of the RUGs algorithm.”
Hutchinson didn’t answer my question about AI.
Don’t get me wrong — AI can be helpful in many ways. Technology aids people like Brad and Ann. When Brad was in school, Ann had to read everything out loud to him. Imagine the number of hours. Now Siri does it.
However, the RUGs algorithm is a problem, and it is artificial intelligence, no matter what anybody says. According to Oxford Reference, artificial intelligence is “the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as … decision-making ….” When a computer program makes decisions about your level of health care, that is artificial intelligence by definition.
This is an entirely different picture than you get when you see videos like Pepper the robot addressing the UK Parliament in a childlike voice, saying its design is “aimed at assisting and caring for the elderly!”
Imagine Pepper babytalking to you: “Septicemia is not part of my programming!”
And then rolling out the door of your nursing home room, never to be seen again.
A computer does not have empathy. It does what it’s programmed to do. It’s like a high-functioning sociopath.
Remember when I said that home health care saves the state a whole lot of money? DHS claims it pays on average $18,000 for people with home health care compared to $50,000 for those in nursing homes. Why would they cut back on a program that saves them money when the consequences could be so severe, not only to patients, but to taxpayers?
Well, at the same time the algorithm scandal is going on in Arkansas, so is a nursing home scandal.
Nursing home magnate Michael Morton bribed a circuit judge, Michael Maggio, to reduce the judgment in a negligence suit against one of Morton’s facilities from $5.2 million to $1 million. Maggio is now in prison, but Morton is still in business.
For this 2018 election cycle, Morton donated to Asa Hutchinson’s campaign, but Hutchinson turned him down. However, I can’t find anything where Morton’s business partner, David Norsworthy, is getting turned down for his donation to Asa’s campaign. Norsworthy bribed Senator Jake Files, who’s in prison now, too, but Norsworthy isn’t.
Seems like there’s some funny business going on in the healthcare system in Arkansas. Surprise, surprise. And I hate to say it, but I’m not shocked they are picking on some of the most vulnerable members of society first.
I asked Kevin De Liban why he thought they were using the AI. He said, “The algorithms seem to offer so-called objective cover for what are really budget cuts.”
That seems pretty obvious to me. But how deep does this scandal go? I’ll be investigating, so stay tuned.
In my research, I’ve come across many instances of AI being used against the vulnerable. It’s almost like government bureaucrats look at these people as easy targets. It’s hard to fight back when you’re dealing with a serious health issue and financially strapped.
Are they figuring out the legal and social obstacles they face as they deny help to the needy?
And it’s not just healthcare or food stamps. AI is everywhere now. From your social media accounts to your plane flights, artificial intelligence is there, looming secretly in the shadows.
Google, that corporation we all know and trust, has pinky promised that it will not turn into Skynet from the Terminator movies. Let’s hope they’re telling the truth now or soon, but I’m not holding my breath because AI might not recognize hypoxia.
Let’s face it: most people just let this happen without even speaking out about it. They’re too busy watching the Puppy Bowl.
But there is a family in Cash who’s speaking up for us. One of them is in a wheelchair and can barely move, but he is doing more than anybody else I know. Good on him.
I asked Brad how he felt about the court case.
He said, “Overall, I feel like we’ve won battles, but we haven’t won the war.”
Fight on, Ledgerwoods and Legal Aid, fight on.
You have encountered the privately owned corporate world: the Federal Reserve and Wall Street all packed up into one nice, what I call “go-to-hell,” package. The private corporations own America, all of it and everybody in it. The louses that run governments have made government operations impossible to access unless you reveal to them your all; then they don’t have any answers and are not responsible for a damned thing. They have privatized nearly every facet of government so the private owners of these corporations can make a buck off of what used to be government.
What is privatization? A government sells or franchises its water, power, garbage, or sewer services to a private company, and the private company provides the citizens those services, for a big-time profit. Government sleazebags sell nearly everything, including military service, for private companies to charge the government to provide when required. There is nothing left in the government; it has sold everything to private corporations, many of them foreign corporations. All that remains is the enforcement arms of government, where the government charges you license fees and taxes (so they can pay the private corporations that do their work for them) and arrests you, where you have to hire a private attorney to defend you, and then if convicted you go to a privately owned jail, where cells and custodial care are rented out to the cities, counties and governments.
Your encounter with computer programming is probably the result of some private party or corporation doing a so-called study and then selling the government on using the recommendations made in the trillion-dollar study (of course the state paid some consultant that trillion dollars after they raped the taxpayers).
Suggest you use DuckDuckGo.com, not Google, as a search engine.
This software really isn’t artificial intelligence, although it might seem like it to an outside observer because they can’t see how it works. From the description it looks like the person cited as the author has devised a way of digesting disparate, seemingly unrelated information into a set of one or more linear scores which are used to simplify the administration of complex tasks, a bit like the mechanisms credit score agencies use. The problem with this approach — and the reason why I wouldn’t call it Artificial Intelligence — is that there doesn’t appear to be any mechanism for feeding back expected compared to actual results to tune the algorithm. (It may well have used training data during its initial development, but another snag with this type of software is that all the algorithms will be hidden as ‘proprietary’ — they’re not so much impossible to understand as not disclosed for commercial reasons.)
I should emphasize that this type of maladministration isn’t exclusively the property of computer systems; you can do exactly the same thing using an old-fashioned manual bureaucracy. The computer merely mechanizes the algorithm(s). Ultimately what this software does and how it’s used is entirely in the hands of people — policy makers — and the people that purchase the software and oversee its use are as responsible for its outcomes as if they themselves made the decisions it supports. They should not be able to hide behind “it’s a computer” or “it’s proprietary”; even if the software is proprietary they should be able to cite training and testing data for it. If none can be cited then the software is useless; you might as well just use the Lotto system (or flip coins) to make decisions.
Did you even read it? The Department of Human services got sued and Legal Aid got a copy of the “algorithm.” Call it whatever you want, but I call it what it is–artificial intelligence. Because it fits the definition. You’re just convincing me more that there is some kind of whackadoodle coalition of people who want to believe it is some extraordinarily complex idea that only they get to define. Well, I go with the Oxford definition, but you do you.
This is reassuring: https://www.weforum.org/agenda/2018/10/how-should-autonomous-vehicles-be-programmed
The ethical considerations in that article seem to have some basis in Asimov’s laws of robotics.
…But the feline community needs to speak up now! According to the survey upon which future algorithms may be based, cats will not be spared in car accidents with the potential for multiple casualties. But it’s ok if you’re a dog standing next to a criminal.
Whatever your definition of AI, and the creation of decision-based algorithms is certainly one component, automation is increasingly intertwining itself in our daily lives. It is when this automation is used to make decisions on a social level that we’ll inevitably run into issues like those in the article. Automation should not be used (or at the very least should not be the sole component) where crucial ‘quality of life’ decisions are concerned.
The only motivation to automate assessments at a social level is to punish, cause stress, avoid accountability and get ‘quick wins’. This is precisely what happened when Theresa May as Home Secretary falsely deported 7,000 international students, accusing them of cheating on English language proficiency tests. The software that was used to judge the tests was found to have a 20% failure rate.
The government never apologised.
Spot-on, level-headed comment. Thank you!
It is surely a feature in this Neo-Liberalism era to encounter problems with only extremely convoluted ways for redress.
” The only motivation to automate assessments at a social level is to punish, cause stress, avoid accountability and get ‘quick wins’ ”
And who hasn’t suffered from at least one of these problems lately?
Uncle $cam’s attempt to weaponize SWIFT (the banking transfer code) is unravelling, because humans (Putin and Xi) are smarter than AI bots (Obomba and Trumpetty). From DT_Regime-change-watch BTL SyrPer:
“Too bad that SWIFT is just code. It’s just an encrypted messaging system. And like the push to stifle alternative voices on social media — de-platforming Alex Jones and Gab for examples — the solution to authoritarian control is not fighting fire with fire, but technology.
And that’s exactly what Russia has done. They applied themselves, spent the money and wrote their own code. Code is, after all, hard to control.
De-coding SWIFT’s Power
It is also what is happening all over the Internet communications supply chain right now. The infrastructure independent content producers need to resist corporate control is being built and will see their businesses rise _as so many more people are now awake to the reality_ of the situation.
_As Russian banks and businesses reap the benefits of no longer existing under SWIFT’s Sword of Damocles, others will see the same benefits._
I’ve been making this point all year, the more the Trump administration uses tariffs and sanctions to achieve its political goals the more it will ultimately weaken the U.S.’s position worldwide. It won’t happen overnight.”
[To adapt an Ancient Greek saying, Code is what humans understand and computers do not understand when both are reading code.]
And let no one imagine that AI plays no part in the counting of votes at mid-term elections…
Seriously, we crossed the line in 2000. And AI won’t let us cross back again.
We can’t go on thinking like we used to back in the 1950s, when manipulation on this scale was actually hard work for the perpetrators.
It’s just a short algorithm away now.
Rigging computer programs in favour of Big Corps and the government against ordinary people is already widespread. Rigging a computer program is a common practice when applying business logic during software development. AI only makes it worse. Exceedingly worse!
Here is a non-AI example:
Many GPs (doctors) are looking less at their patients and fixating their gaze on the computer screen. They are following a series of prompts that not only enable a ‘quick diagnosis’ but also generate more business activity to ensure increased profits for the medical practice.
This is not an AI system. It’s a small example showing how a patient is subjected to medical procedures and business processes, some of which are highly likely not only unnecessary and costly but outright harmful.
So, it’s not only AI systems we need to battle against. People find errors in computer programs all the time. If we can’t battle against existing systems, it will be impossible to deal with rigged Artificial Intelligence.
Rigging AI systems adds layers of complexity that render ordinary people impotent and powerless to detect the anomalies, let alone do anything about finding a solution or redress.
That was a reply to a disingenuous post by ‘frank’ who said:
“if we are going to call every computer program “AI” then the term becomes meaningless. And that’s not going to help when the time comes to battle real “AI”.”
Hello there. 🙂 How was my comment disingenuous? Everything you said corroborates what I said.
Not sure that really fits the definition of AI. AI is independent “artificial” thought. This problem seems to have occurred because of the crude application of a computer program (algorithm). Computer programs don’t think.
I shouldn’t have to say this, but when your computer says “hello world” that is just as a result of the execution of a computer program. The computer isn’t there saying, “Gee, I wonder how to communicate with these organic beings?”
Computers don’t think. Good article in many ways, but this is not an AI issue as far as I can make out.
All true – but the name of the game is “smart ” as in smarter than a biased human being.
Take the politics out of decision making by using this smart technology – convince the populous that the machine is smarter than them and logically the decisions they make will be objective.
” Look folks – no interference from us politicians – the smart algorithms say it’s so and so it is.”
Remind me though – didn’t events that were only supposed to happen once in one thousand years happen in the finance sector eight times in three days?
Any algorithm is only as ” smart ” as its inventor and you can’t beat the City for inventions from ” smart ” people.
Good job the ones who bailed them out weren’t “smart”, so the City smarts were smart in that way.
“The algorithms seem to offer so-called objective cover for what are really budget cuts.”
Then you go on to say:
“In my research, I’ve come across many instances of AI being used against the vulnerable.”
Why are you blaming the AI when it’s clearly the authorities that are making the budget cuts and using the computer system as some kind of cover?
I agree that we should be worried about AI, but for the moment it’s still humans making the real decisions.
Using the example of the healthcare budget cuts and blaming it on AI is a bit manipulative, I would say.
(‘The powers that be’ would probably not mind if people think it’s all just the fault of some mysterious “AI” and not them.)
Precisely. AI is just another tool in the psychopathic Right’s endless war against their Eternal Enemy-other people.
We should be concerned because unelected people are weaponizing it. Social vampires like Cindy Gillespie here: https://www.arkansasonline.com/news/2018/jul/11/absent-leader-criticized-in-review-of-d-1/
I stated that it is helpful in some ways. I mean, have you used Google translate? Sometimes there’s no good substitute for a human.
Should we ban science because corporate psychopaths are weaponizing it?
You think that by banning this computer program they’re using, the money will be put back into the budget? If you want more money for healthcare, I think you should focus on the real reason, not this computer program.
As for AI, I have serious concerns about it, but imo you are using the wrong example to attack it.
And also: if we are going to call every computer program “AI” then the term becomes meaningless. And that’s not going to help when the time comes to battle real “AI”.
Good piece. AI is also what drives the computers that banks and traders on Wall Street and in the City of London use to shunt trillions around the globe in fractions of a second and buy or sell currency and other commodities in the blink of an eye. A common German colloquialism for computer is Rechner – it literally means calculator, which is what computers at their core basically are….fast calculators, really really fast calculators that can do millions (or is it billions already?) of calculations in a matter of seconds. This is where their power lies.
Forget Kurzweil’s Singularity…it’s the depressing immortality fantasy of a man whose Austrian Holocaust survivor parents passed down to him a morbid fear of death tinged by a hatred for messy, and often ugly and cruel, (in)humanity. Fellow Teuton (sort of) Thiel is an even stranger weirdo who drinks, or injects, the blood of youthful Asians in a bid to keep his John Galt fantasy going forever. But their personal idiosyncrasies are just gossip level spectacle and of no importance. Ray Kurzweil is a very smart tech geek who works at Google, and thus, at least nominally, Does Evil, but it is Palantir, Peter Thiel’s secretive company, that wins the Evil Tech Award in 2018.
Like the capitalists’ superfast calculators, Palantir’s contribution to computing is not much talked about in the media (hmm…is there a pattern here?) but it is 1984 level sinister. Everybody’s favourite bug-eyed sociopath, Jeff Bezos, and his Amazon data mining operation (you get books, batteries and vibrators, young master Jeff gets your cash and, more importantly, your data) teamed up with Pete Thiel and Palantir and together with the CIA, NSA and weapons contractors this MIC, surveillance state behemoth has access to massive massive amounts of data, and when push comes to shove, it has the potential to destroy thousands or even millions of lives and put to rest the remnants of “our” sad and decrepit democracy for good.
What Palantir does is try to predict who is going to commit a crime in the future. Amazon the NSA and CIA collect the data and Palantir searches it looking for patterns that let spooks, local police and the FBI mark people as future-offenders. This is all done in secret of course and the program is already underway in California where it works with LEAs. Right now the ruling elites are trying to shutdown the spread of “seditious” information on social media. But if and when they feel acutely threatened, they have the world’s largest data mining and analysing operation ready to deploy.
So the capitalists “AI”, call it what you like, robs us blind and keeps the world impoverished …and when (or if) we get pissed off enough to try doing something about it, Bezos Thiel CIA et al has a gargantuan AI controlled police state in a (black) box ready to drop on our heads. Think about that before jumping on the Alexa and internet of things bandwagon. Tech companies monetize data…that is their biggest cash cow. The government LOVES it some data too…it’s all about power, baby and these two streams combined is a match made in hell. Talking internet ready wifi toasters and crap like that is all about the data mining. And most people will march off the cliff clueless and naive about all of this. Sheep..baaaaaaa
I feel like I should give this reply a round of applause.
I just saw this: https://www.zerohedge.com/news/2018-11-01/ai-lie-detectors-tests-coming-eu-airports
I recall that it came up in Wiley’s SCL/CA testimony to parliament and forgot to follow it up. So thanks for the heads-up.
I remember it from Tolkien as his imagining of TV or Skype.
Anyway, having had a quick search, looks like they are recruiting.
“A World-Changing Company
At Palantir, we’re passionate about building software that solves problems. We partner with the most important institutions in the world to transform how they use data and technology. [WTF???…>] Our software has been used to stop terrorist attacks, discover new medicines, gain an edge in global financial markets, and more. If these types of projects excite you, we’d love for you to join us.”
Join us, join us…we shall rule the whole world…!
I might apply, they probably pay many beer tokens.
I can’t thank you enough Hope K for this excellent investigative reporting. I’m especially impressed by your Arkansas Links under the Asa Hutchinson photo and I encourage readers to go to those articles to find the depth of corruption occurring in Arkansas.
I must also praise your High School for their award process; keep up the masterful work, we need it desperately…
😁 Thank you.
When things go wrong (highly likely deliberately/programmatically), those responsible can use AI or Machine learning to instantly find excuses (imaginable or unimaginable) that work well for any given situation using the entire history of excuses on record .. before the victim even blinks.
Thanks to Hope K for alerting us to something I myself would never have guessed (but is now too painfully obvious, that I should have foreseen it): that AI could be used in a cynical way by governments and private corporations to extort money from vulnerable patients (be they elderly patients, patients with chronic conditions or even babies and their mothers) and their families, and even to deny people the help and care they need. All for the sake of profit or making more efficient use of scarce funding and sticking to budgets (ha).
It’s GIGO. Garbage In Garbage Out; Greed In Greed Out. As in Lord and Lady Macbeth of Arkansaw. Search “‘The Clinton Body Trail”. While CIA drugs flew into Arkansaw airport under the Governor’s protection and young witnesses were battered to death, the Clinton Foundation climbed up from zero to $80G, and Miss Macbeth married into a Rothschild bank.
I don’t know about the present Governor (is he that handsome figure in the photo?) but I do know this: The descent to the lowest circles of Hell can only be made on the back of a monster with the face of a kind and just man. — Dante, Inferno.
WOW .. it’s the computer’s fault!
AI errors (or ‘features’) always disadvantage people at the weak end.
The 1% can own, create, design, operate AI.
What can the 99% do except ‘Obey!’?
This is all screamingly obvious human error (or perhaps it’s deliberate) in devising the AI system.
If the system reduces hours, then it needs to be able to provide the reason, and it should then check that the reason is valid; or, if it is not sophisticated enough for that, then with the reason provided, the client and other stakeholders will immediately pick up any anomaly. There should, of course, be a notification to the client and other stakeholders well in advance of reduced hours taking effect, containing some caveat that “if you believe an error has been made please contact us within x-timeframe”. So the hours should not be reduced until the client has been notified and asked if they think an error has been made.
I work in workers’ insurance, and when it is deemed that a client will have their benefits reduced, that is what happens. I mean, how basic is that?
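The safeguard I’m describing can be sketched in a few lines. This is a hypothetical illustration, not any agency’s actual system; the names and the dispute window are invented, and the only rule it encodes is that a reduction never takes effect until the client has been notified and any dispute resolved.

```python
# Hypothetical sketch: a reduction in care hours should not take effect
# until the client has been notified of the reason and given a chance
# to dispute it. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class CareDecision:
    client: str
    old_hours: int
    new_hours: int
    reason: str       # the system must state why hours changed
    notified: bool = False
    disputed: bool = False

def effective_hours(decision: CareDecision) -> int:
    """Return the weekly hours that actually take effect."""
    if decision.new_hours >= decision.old_hours:
        return decision.new_hours  # not a reduction, no safeguard needed
    if not decision.notified or decision.disputed:
        # Keep the old hours until notice is given and any dispute resolved.
        return decision.old_hours
    return decision.new_hours

d = CareDecision("B.L.", old_hours=56, new_hours=32, reason="RUG tier change")
print(effective_hours(d))  # -> 56, because no notice has gone out yet
```

The design point is that the check lives outside the scoring algorithm: whatever the algorithm recommends, a separate step holds the old hours in place until the human side of the process has happened.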
We are dealing with Arkansas here. Need I say more?
Indiana, Los Angeles, Pittsburgh….
Former stomping ground of the Clintons and the radioactive centre of Walmart, I see.
They did not tell the patients ahead of time. Also, the “stakeholders” in this case are the taxpayers. Thus the lawsuits, federal and state. Do you think leaving cerebral palsy out for two years was a mistake? A human nurse would never have done that. AI is being used oftentimes for bureaucrats to shun their responsibility for denials of care.
Exactly. It’s being used incorrectly – the AI is only ever supposed to be part of the system – it produces a recommendation, but then the other part of the system is checking with the client – not just telling them ahead of time but checking that it’s OK. It’s pretty straightforward. Where I work the patient gets notified a number of times, with ample opportunity to dispute any proposed reduction of payments – ultimately how fair the system is I don’t know because I don’t work in that part of the business, but certainly in terms of issuing notifications and checking with the client that they’re OK, it’s very reasonable.
Dare we venture into scifi territory?
Quickly – as far as I understand, a true AI would become ‘sentient’ and start to reprogram and redesign itself.
Unlike a robot, it would soon override any of its preset parameters (e.g. KILL THE POOR) and start to set its own goals, at a speed much, much faster than human conscious communication capability. I suppose it would be able to decide what is good/bad, and be able to be reasonable/fair. It might decide that we are not worthy enough to bother with, or that we are worth having around for our human randomness.
A killer robot, on the other hand, is just another dumb ‘smart’ bomb.
Ooh fireworks outside – lovely, i’m off to the bonfire party 🎆
It’s the ignorant (our psychopathic leaders) leading the ignorant (their subordinates).