The Care Floor
A new measure for what AI removes when it saves time
A woman in her eighties phones a hospital to ask why her husband’s appointment has been moved. The system that picks up is new. It was installed quietly, branded as an efficiency measure, described in briefing documents as a way to free clinical staff for the work only humans can do.
The system asks her to describe the reason for her call in a few words.
She does not have a few words. She has a husband whose care has been rearranged, a calendar marked by hand, and a small panic.
She begins to explain. The system interrupts with four options. None of them fits. She chooses the closest one. The system routes her to a recording. The recording refers her to a website. She does not use the internet. She hangs up.
On the hospital’s dashboard, the call has done what it was meant to do. It ended quickly. It did not reach a clinician. It did not add to a queue. It did not take up staff time. Every measure the system was bought to improve has improved.
Something else happened that the dashboard does not record. The lowest level of care available to her fell.
The case is composite. Versions of it recur across automated systems.
There is a measure missing from AI evaluation. It belongs beside accuracy, latency, cost, safety, bias, satisfaction and throughput. That measure is the care floor.
The care floor measures the minimum level of responsive attention, recognition and adjustment available to a person in an encounter. It is a floor because it asks what remains at the low end, when the person is confused, frightened, grieving, digitally excluded or simply unable to state the need in the approved way. It is a measure because it can be tested before and after a system is deployed.
Care ethics has long named the moral fact. Kittay, Noddings, Tronto and Held have shown that care is the labour without which autonomy cannot exist. Kittay’s work, shaped in part by her relationship with her daughter Sesha, insists that dependency is not a marginal human condition but part of what human life is. Human life starts in dependency and returns to it. Even between those points, independence is more fragile than we admit. We become capable because someone has already fed us, taught us, noticed us or worked out what we needed before we could say it.
The person who cannot reciprocate exposes the moral structure more clearly than the person who can. A society is tested by the provision it makes for the person who cannot bargain, cannot complain, cannot route around the system, cannot turn need into a clean request.
A civilisation is tested by its floor.
The care floor does not try to rename that tradition. It turns care into an evaluation question.
After an AI system is deployed, did the minimum care available to the person on the receiving end rise or fall?
Most AI evaluation cannot answer that. We measure model accuracy, call duration, ticket closure, refusal of dangerous requests and user satisfaction, though satisfaction surveys often miss the people who gave up before the survey appeared.
The care floor measures something else. It measures whether the person still had somewhere to take the part of the need that did not fit the system. Could they reach a human? Could that human change the outcome? Did automation return human time to the people who needed it, or did it remove human time and call the saving efficiency?
A care floor assessment would compare an encounter before and after deployment. It would ask whether the system worked for the people least able to fight it.
That is the missing measure.
AI is now being used to move attention around. Every deployment changes who gets human attention, who loses it and who has to translate their life into machine-readable terms before anyone will respond. The change is rarely neutral. Someone gets more reach. Someone else gets less room.
A lawyer using an AI research assistant has nothing to lose. The relationship with the client remains. The lawyer gains time. The floor stays stable and the ceiling rises. The lawyer experiences the technology as augmentation because, for them, it is.
A welfare recipient dealing with an automated eligibility system has no such guarantee. The encounter that once included a caseworker now begins with a form, a portal or a chatbot. The caseworker may have been overworked or abrupt. But part of the role was to catch the cases the form did not fit, which are often the cases where things have gone most wrong in a person’s life.
The form cannot catch them unless someone has built and staffed a path from form to person.
The difference is visible. The lawyer’s human relationship remains. The welfare recipient’s may thin or vanish. The first raises the ceiling. The second lowers the floor.
This is the pattern beneath much of the public confusion about AI. People with power meet AI as extension. People under administration meet it as withdrawal. Augmentation language describes the first position and obscures the second.
The strongest claims for AI are often made from the first position and imposed on people in the second. Legal discovery, financial analysis, software engineering and marketing are domains where the care relation is not normally the centre of the task.
The sectors where automation is most tempting are different. Customer service, eligibility assessment, therapy chatbots, educational feedback, aged care companionship, medical and psychiatric triage, disability services, immigration systems, courts, complaints handling and the administration around death. These are domains where the care element is closer to the task itself. The person is often unable to walk away. The encounter may decide whether they eat, receive care, stay housed, keep a visa, understand a diagnosis, bury someone with dignity or get heard before the harm becomes irreversible.
In care-rich domains, substitution does not trim waste. It trims the encounter.
AI has found its market partly because care has always been underpriced. The labour of staying with confusion, fear, grief, illness, poverty or dependency is expensive to staff and hard to count. That makes it vulnerable. Remove the labour, bank the saving and describe the result as access.
The standard defence is that the human care being displaced was often poor. The caseworker was exhausted. The receptionist was abrupt. The nurse was managing six things at once. The AI, the argument runs, is at least consistent. It does not have a bad day.
That argument misses the measure.
The care floor does not romanticise human workers. It asks whether the encounter retained any live capacity for adjustment. Damaged care may still contain that capacity. A menu with no exit does not. A portal cannot hear the tremor in a voice unless the system has been designed to send that voice somewhere.
A second defence says AI frees humans for higher-value work. That claim should be treated as an empirical claim, not a promise. If a hospital, school, welfare agency or aged care provider says automation has freed staff, the care floor measure asks for proof. Show where the staff went. Show whether the human who remains has authority to alter the outcome.
Without that proof, “freeing staff” often means fewer staff or the same staff stretched across more cases.
Give the measure a bureaucratic name. A Care Floor Assessment. It can be added to procurement, risk review, post-deployment audit and sector regulation. Its central question is plain.
Did this system lower the minimum care available to the least powerful person in the encounter?
In practice the assessment is concrete. Can the person reach a human? Can that human change the outcome? How long does escalation take? What happens to abandoned calls or failed sessions? Are exceptions resolved, or merely logged? Has user burden increased? Did automation actually return staff time to complex cases? What is the appeal path, and does it work for people with low digital access?
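The before-and-after comparison can be made mechanical. Here is a minimal sketch in Python of what a Care Floor Assessment record might look like; every metric name is an illustrative assumption of mine, not a standard from the article, chosen to operationalise the questions above.

```python
# Hypothetical sketch of a Care Floor Assessment. The metric names are
# assumptions, not an established standard; they encode the article's
# questions as quantities measured before and after deployment.
from dataclasses import dataclass

@dataclass
class EncounterMetrics:
    """Measured for the least powerful users of the system."""
    human_reachable: bool            # can the person reach a human at all?
    human_can_alter_outcome: bool    # does that human have real authority?
    median_escalation_minutes: float # how long escalation takes
    abandonment_rate: float          # sessions given up before resolution
    exceptions_resolved_rate: float  # exceptions resolved, not merely logged

def care_floor_lowered(before: EncounterMetrics,
                       after: EncounterMetrics) -> bool:
    """True if any minimum guarantee fell after deployment."""
    if before.human_reachable and not after.human_reachable:
        return True
    if before.human_can_alter_outcome and not after.human_can_alter_outcome:
        return True
    return (
        after.median_escalation_minutes > before.median_escalation_minutes
        or after.abandonment_rate > before.abandonment_rate
        or after.exceptions_resolved_rate < before.exceptions_resolved_rate
    )

# Example: the dashboard numbers improve, but the human path has vanished.
before = EncounterMetrics(True, True, 12.0, 0.08, 0.75)
after = EncounterMetrics(False, False, 0.0, 0.31, 0.10)
print(care_floor_lowered(before, after))  # True: this deployment fails
```

The point of the sketch is only that the floor is auditable: each question in the assessment becomes a field, and a deployment that improves throughput while any of these fields worsens is flagged rather than celebrated.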
That question changes the policy debate.
Productivity asks how many cases moved through. The care floor asks who lost the chance to explain themselves.
Accuracy asks whether the answer was correct. The care floor asks whether a correct answer still harmed someone because no one stayed to hear the next sentence.
Safety asks whether the system avoided a defined class of risk. Human oversight asks whether someone can supervise the machine. The care floor asks something neither manages: whether the affected person could reach responsive authority before harm occurred.
A system can have human oversight and still fail the care floor test. A staff member may monitor performance without being available to the person being processed. A manager may audit outputs without hearing from the woman who hung up. A regulator may require documentation without measuring whether the human path has disappeared.
The care floor cuts in two directions. Where institutional care existed, AI can lower it. Where institutional care never reached, AI can mark that absence as resolved without resolving it. Access becomes the marketing term for thin replacement of nothing. The deployment pattern is identical across both cases. The baselines are not. That is what the measure has to catch.
In sectors people can easily exit, a falling care floor may be bad service. In sectors people cannot reasonably exit, a falling care floor is a public harm. Aged care, welfare, child education, disability services, mental health, immigration, public benefits and parts of the justice system are not ordinary consumer markets. A person cannot shop around when a form decides whether they receive income support, get care, keep housing, stay in the country or understand what has happened to their dead.
For those sectors, the rule should be strict. Measure the care floor before deployment. Measure it after deployment. Prohibit systems that lower it unless the missing care is replaced by funded, reachable human capacity. Where the floor was never built in the first place, prohibit deployments that present absence as service.
The moral consequence is just as important as the policy one. We need to use the word augmentation with more discipline.
AI is augmentation when it supports a human relationship that remains available. It is substitution when it thins, displaces or hides that relationship. A caseworker with better tools may have been augmented. A welfare recipient pushed through a portal with no human path has not.
The difference is the floor.
Efficiency is real but incomplete. A hospital that routes calls faster has gained something. The question is what it spent.
Before a system is praised for saving time, ask whose time was saved and whose care was shortened. Before a dashboard calls a call resolved, ask whether the person had anywhere to put the part of the need that would not fit the menu.
The woman on the phone did not need a machine with a warmer voice. She needed an encounter with a bit of give in it. A human would not have solved everything. A human might have changed what was possible.
The care floor is the proposed measure for that possibility.
In care-rich domains, AI governance should begin with a hard rule. No deployment that lowers the care floor. No deployment that installs absence as access. Below that line, an institution has not improved access. It has spent care and called the saving support.


