You and AI

As the presence of artificial intelligence grows, Minnesotans are finding ways to implement the sometimes scary, sometimes adorable tech of the future
AI can see us in ways we may need to be seen, or fool us in ways we may want to be fooled.

Illustration by Matthew Custar

A few of the semi-humanoid robots awaken. Six of them had stood, shoulders slumped, arms at their sides, in an unlikely lab: a small, carpeted room tucked away in the student center on the University of Minnesota’s Duluth campus. They lift their round, child-size heads and look about the room for a human face.

Gleaming white, they stand about 4 feet tall. Their bottom halves flare out, like mermaid tails, into three wheels. When one of them at last spots the configuration of a human expression, it moves its head to track it, and blue lights swirl around its big black eyes, and its padded, articulated fingers twitch and curl. Creepy? Yes. But with its babyish face, it is mostly cute.

“It’s trying to listen,” says Arshia Khan, who leads the university’s Dementia and Elderly Care Robotics and Sensing Lab. A robot interrupts her with continual beeping. “Their speech recognition abilities are not really very good.” The Pixar-like appearance is intentional, she notes: People are more patient with robots that are cute. This model, made in Japan and called Pepper, has aided folks in U.K. offices, Japanese banks, and at the Mall of America in Bloomington.

At the University of Minnesota in Duluth, robots are trained to assist staff in nursing homes.

Photo by Erik Tormoen

With help from grad students, Khan has outfitted these bots with artificial-intelligence (AI) algorithms. These work like sets of instructions, or recipes, or decision trees churning through input. As of last summer, 16 bots have assisted caregivers in eight Minnesota nursing homes. Khan and the facilities’ management company believe they are the first in the country used this way.
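
For the curious, that "recipe" metaphor can be made concrete. The sketch below, written in Python, is not code from Khan's lab; the sensor inputs and actions are invented, but it shows how a small decision tree churns inputs into a choice.

```python
# A toy "recipe" in the decision-tree spirit described above -- purely
# illustrative, not code from Khan's lab. The inputs are made up.
def choose_action(face_detected: bool, speech_heard: bool, hour: int) -> str:
    """Walk a tiny decision tree from sensor inputs to a robot action."""
    if not face_detected:
        return "idle"            # no one to interact with
    if speech_heard:
        return "listen"          # a person is talking; try to respond
    if 17 <= hour <= 20:
        return "offer_reminiscence_photos"  # early evening: calming activity
    return "greet"

print(choose_action(face_detected=True, speech_heard=False, hour=18))
# -> offer_reminiscence_photos
```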

“A lot of the residents are a little bit in shock and awe,” says Marcus Kubichek, the robot program director of Monarch Healthcare Management, which operates the nursing homes. “They go back to ‘The Jetsons,’ like, ‘Oh, there’s gonna be robots in the future!’ ” A few facilities have named theirs Rosie, after Hanna-Barbera’s metallic maid. “It’s starting to happen, in a very entry-level way.”

The bots launch into karaoke, tell jokes, and show dementia patients old photos as a Khan-patented form of reminiscence therapy. They can read facial expressions, although this does not yet have a function in the nursing homes. Smaller models, nearly toy-size, use jointed arms and legs to lead exercises, including yoga, and under-wraps mechanisms should allow the Pepper bots to preemptively notice “sundowning,” when fading daylight triggers confusion or restlessness in some dementia patients.

Human staff members always accompany the bots, at least for now, Kubichek says. Staff can better focus on one thing at a time—not having to shuttle about the room, keeping a bingo game afloat while helping residents, for instance. With a tap of a Pepper bot’s chest-mounted touchscreen, it says in a chipmunk voice, “Welcome, everyone!” Then, in an unmistakably robotic, off-kilter cadence: “I’m really excited to play bingo with you all.”

We are in a time of AI. Not the time, however. Since the 1950s, a few “hype cycles” have run their course, with outcomes generally undershooting expectations ratcheted up by sci-fi writers. Think of the scandalously lifelike “false Maria” of the 1927 film “Metropolis.” Decade by decade, AI has progressed: In the 1960s, IBM Corp. built a machine that could respond to 16 words—and remember Deep Blue, the computer that beat a world-renowned chess champ in the ’90s? Today, ChatGPT, a “chatbot” launched last year, can write essays with eerie accuracy by predicting which word should follow the last. Via machine learning, it has spotted patterns in myriad written works sourced from the internet. Police have used AI to map likely crime spots, and AI can even “scrape” (or, steal) human-made artworks from the web to create originals—beautifully enough to bag first place at the Colorado State Fair.
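
That next-word trick can be sketched in a few lines. The toy Python below counts which word follows which in a tiny invented corpus and predicts accordingly; real chatbots do something vastly more sophisticated with neural networks trained on enormous amounts of text, but the underlying idea of learning what tends to come next is the same.

```python
# A drastically simplified sketch of "predicting which word should follow
# the last": a bigram counter trained on a toy corpus.
from collections import Counter, defaultdict

corpus = "the robot sings karaoke and the robot tells jokes".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed follower of `word`."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("robot"))  # -> "sings" (on a tie, the word seen first wins)
```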

It’s not just in the headlines, either. AI—an umbrella term for loads of “smart” tech—is everywhere. Instagram tracks user activity to show supposedly relevant content. Amazon estimates purchase satisfaction. Smartphones unlock after “seeing” the correct face. Inboxes siphon off spam emails. Companies have leveraged invisible, data-crunching versions of what Khan describes as helpful “gadgets.”

And like Khan’s robots, none of it is perfect. Many say this is AI’s latest hype cycle, as the futuristic tech of Amazon and Facebook is not as self-sufficient as many seem to think. Recent articles have exposed grim working conditions in such places as Africa and South Asia, where low-paid workers train AI systems by labeling images, annotating videos, repeating phrases, and completing other time-consuming tasks necessary for algorithms to “know” enough. Many of us guide machine learning, too, without realizing it. Think of fraud-detection systems like reCAPTCHA: By clicking snapshots of bridges, or frogs, or traffic lights—to prove you’re not a bot—you teach Google’s models, improving their ability to identify images.
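
Here, hedged as a bare illustration rather than anything Google actually runs, is how those clicks become training data: labeled examples go in, and out comes a model that can guess the label of a new image. The "image" features and the labels below are invented.

```python
# A minimal sketch of how human labels become training data. The "images"
# here are just made-up feature vectors, and the labels stand in for the
# clicks people make on a CAPTCHA grid ("traffic light" vs. "not").
from sklearn.linear_model import LogisticRegression

# Features a vision pipeline might extract from each tile (hypothetical).
tiles = [
    [0.9, 0.1, 0.8],   # bright, vertical, glowing -> traffic light
    [0.8, 0.2, 0.9],
    [0.2, 0.7, 0.1],   # green, leafy, dim -> not a traffic light
    [0.1, 0.8, 0.2],
]
human_clicks = [1, 1, 0, 0]   # 1 = user clicked "this is a traffic light"

model = LogisticRegression().fit(tiles, human_clicks)
print(model.predict([[0.85, 0.15, 0.75]]))  # -> [1], learned from the clicks
```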

As AI grows, it stretches against an undefined set of ethics. In the United States, spending on AI solutions is projected to double by 2025, surpassing $120 billion, according to an International Data Corp. report last year. In the meantime, the pandemic has apparently hastened AI’s adoption by U.S. companies, with 86% of executives in a 2021 survey describing AI as a soon-to-be “mainstream technology,” per professional services firm PwC.

The goal can sound Darwinian: to make decisions much more swiftly—and more objectively, more accurately—than humans. What distinguishes our modern-day Information Age is that the computing power has never been so great, the data never so plentiful, and the algorithms never so smart. AI reaches across industries and scientific disciplines—health care, marketing, climate science—and Minnesota features some key players. Drones programmed at the University of Minnesota can notice nitrogen-deficient areas in fields of crops. The Minnesota-based medical tech company Medtronic sells AI that can detect polyps during colonoscopies. An AI-assisted hearing aid by Eden Prairie-based Starkey Hearing Technologies can even tweak performance while scanning the wearer’s environment.

“We are trying to get to a point that machines can think like humans, right?” Khan says. “But, of course, machines can process data much, much faster.” Imagine something like gut-reaction decision-making, informed by rapidly synthesized reams of information. “That’s what we’re trying to achieve.”

New Teammate

One way to train computer science students in ethics is to have them plot AI-centered “Black Mirror” episodes. The dystopian Netflix series, like a 21st-century “Twilight Zone,” catastrophizes about two big unknowns: cutting-edge tech and human nature.

“It’s really important, bare minimum, for technologists and engineers to have training in risk assessment,” says computer science professor Stevie Chancellor, who leads AI research at the University of Minnesota-Twin Cities. “That’s pretty standard in other engineering domains where catastrophic failures can be really, really dangerous to people. We don’t do that in computer science, really.” With AI, she’s not too worried about a cinematic “large-scale catastrophic event,” like a machine insurrection. Many experts do not foresee machines matching humans’ complicated general intelligence anytime soon, or ever. But there are other risks. As an aside, Chancellor notes that video game designer Hideo Kojima is a favorite AI-inspired doomsayer for his depiction of social isolation—“A lot of it is just people being distant.”

Last year, Pew Research Center released the results of a survey gauging U.S. attitudes about AI becoming more active in daily life and doing things humans do, such as recognizing speech and images. It found that 37% of respondents were more concerned than excited, while 18% were more excited than concerned (with the rest in the middle). The most commonly cited concern was the loss of human jobs. Second most: surveillance, hacking, and digital privacy.

Scott Litman, a Minnesota-based tech entrepreneur, began working on the AI system known as Lucy about six years ago. He had read up on the emerging field before licensing some open-source AI with three longtime coworkers.

“This is the new Day One of what AI can do,” Litman says, discussing ChatGPT, which made him worry for the first time about AI “outsmarting” us. He brings up another AI-based content generator: “One of my team members today wrote me that they’re using Midjourney, and they created three different websites—one for a flower shop, one for a data services business, and one for a bicycle company—and each one took less than a minute to create. To think about how that’s evolved in a year—in three years, five years, it’s going to be mind-blowing.”

Lucy is more disarming than ChatGPT or Midjourney. Rather than replace human intelligence—writing the fake essay, superseding the call-center rank and file, piecing together the truth-neutral news story—Lucy is meant to unburden. “She” draws upon more than 100,000 “people hours” of development, working as a hyper-localized, company-specific know-it-all. Litman calls her an “answer engine,” akin to Google and familiar with heaps of information backlogged at corporations like Target, PepsiCo, HBO Max, and Kraft Heinz. “Businesses are loaded with marketing data,” Litman says, “and for any individual, it’s almost impossible to know everything a company has.”

Her name comes from a granddaughter of IBM’s founder—a human name, for team cohesion, and feminine, for approachability. With natural language processing, Lucy can skirt a common keyword-search conundrum: the litany of irrelevant results, “when what you really need is slide 33 of a PowerPoint.” One challenge, Litman says, is to steer employees away from typing in simple keywords—“cheese,” in one real-life example—when Lucy can parse sentences and pinpoint answers to questions you would ask a human expert, like “What are the trends for cheese consumption in Latin America by Gen Z?”
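
As a rough illustration of the difference between keyword lookup and question answering, the sketch below ranks a few invented documents against a full question using TF-IDF similarity, a much simpler stand-in for the language models an "answer engine" would actually rely on; it is not Lucy's code.

```python
# A bare-bones sketch of ranking documents against a whole question rather
# than a single keyword. Document texts are invented for the example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Slide 33: Gen Z cheese consumption trends in Latin America, 2019-2022",
    "Q3 logistics review for dairy suppliers",
    "Holiday marketing plan for snack brands",
]
question = "What are the trends for cheese consumption in Latin America by Gen Z?"

vectorizer = TfidfVectorizer().fit(docs + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(docs))[0]
best = max(range(len(docs)), key=lambda i: scores[i])
print(docs[best])  # the slide about Gen Z cheese trends ranks highest
```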

She’s part of an optimistic take on AI at the office, since many see this tech augmenting human labor. A study released in 2021, with data from the U.S. Bureau of Labor Statistics, suggested that 60% of white-collar jobs are less than 50% compatible with automation. That means automation could take over, at most, about half the tasks handled within each of those jobs. Ideally, this would ease the strain of busy work. “Automating menial and tedious tasks frees up workers’ resources,” the study states, “which can then be better invested into more complex activities.”

But job security is hard to predict. Detractors worry about desk jobs disappearing. Potentially at risk are clerical and administrative positions that manage logistics or measurements—things AI should do better and more cheaply in the long run.

Lucy, at least, fills a practically inhuman role. But like any new coworker, she needs help. “We tend to set expectations for customers and say, ‘Today is Lucy’s worst day,’ ” Litman says, describing machine learning. “ ‘She will be smarter tomorrow because every day she’s smarter than the day before.’ ”

Humans will need training, too, to work with AI. “A more time-consuming and bigger part of this project is not sanding out the system; it’s actually rolling out, deploying, training, onboarding, and driving that user adoption.” Lucy can analyze millions of pages of content and deliver an answer in about five seconds. “None of it matters if people still just keep asking Bob and Sally over [Microsoft] Teams for the answer.”

Meets the Eye

As for digital surveillance, the lid is off. People distrust targeted advertising—understandably so. Fed on human-derived data, AI can regurgitate human bias. Researchers recently found that Facebook’s AI, for instance, has skewed ad delivery by gender: Women were more likely to see an ad for a job selling jewelry, while men were more likely to see one for a job selling cars.

But entrepreneur Rob Flessner is trying to put a positive, even egalitarian, spin on AI-powered ads. He co-founded the Minneapolis-based advertising company Vugo in 2015, focused on the rideshare market. The idea behind Flessner’s company is to eventually make rideshares, such as Uber and Lyft, free: What if rides were monetized through targeted ads playing on Vugo’s custom headrest-embedded screens?

“Our whole vision is that, as things go toward self-driving cars and the emerging passenger experience that goes along with that, trips themselves could be heavily subsidized by advertisements and in-vehicle transactions,” Flessner says, “to the point where they’re either going to be free or [cost] is really no longer a factor.” 

If you’re Ubering to a Vikings game, the algorithm may decide you’d like to see a Sport Clips ad. “You can tell a lot about somebody based on where they travel,” Flessner says. “I mean, if they’re going to church every Sunday, if they’re showing up to work on time, if they’re a sports fan, which financial institutions they bank at.” The technology is not perfect. One struggle is to make sure the AI knows you’re going to a yoga class and not the bar across the street. Another venture of Flessner’s, Nrby (“nearby”), similarly alerts users to goings-on of likely interest, based on the algorithm—like a “Netflix for events.”
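
Stripped to a cartoon, destination-based targeting can look like a lookup table. The Python below is purely illustrative, with invented categories and advertisers, and is not Vugo's actual system.

```python
# A toy illustration of destination-based ad targeting. The categories,
# advertisers, and mapping are all made up for the example.
destination_ads = {
    "stadium": ["Sport Clips", "jersey shop"],
    "yoga_studio": ["athleisure brand", "smoothie bar"],
    "airport": ["luggage retailer", "travel insurance"],
}

def pick_ad(destination_category: str) -> str:
    """Return the first matching ad, or a generic fallback."""
    return destination_ads.get(destination_category, ["local events app"])[0]

print(pick_ad("stadium"))  # -> Sport Clips
```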

He understands the digital-privacy concerns. “But the one advantage we have is that it’s fully anonymized,” he says. The idea is that you override the creepy factor by buying in, pleased with the ad relevancy, or the cheaper ride, or the fun events.

Stevie Chancellor, the University of Minnesota professor, has thought a lot about privacy protection, too.

Her “human-centered” approach to AI prioritizes social consequences, in some ways working against hype. As of now, no single governing body defines what is and isn’t OK to do with AI, which can make it feel Frankensteinian, as Chancellor describes it. Surveillance brings to mind tech giants stalking online activity, or the Chinese government monitoring citizens. What would it mean for us to want AI oversight?

Chancellor has turned to Twitter, Reddit, TikTok, and other online platforms to explore that idea. “If you think about the way that doctors look for symptoms that might indicate that somebody has depression, and classic symptoms of depression are low energy, fatigue, low mood, not [being] interested in anything—there is psychology research that shows that these symptoms actually go back to our verbal and behavioral expressions,” she says, describing an advisor’s research. “And we can use what people say online as a verbal or behavioral expression.” 

She hunts for clues about mental health, addiction, and other concerns glinting beneath social-media clutter. Her research has set AI on Instagram discourse, for instance, to estimate fluctuations in the mental health of users who have reported severe symptoms. Other research has zeroed in on Twitter users who post between midnight and 6 a.m. AI can slot these posting times into a wider pattern revealing likely depression—a pattern which those users’ friends, scrolling through their feeds, may pick up on only subconsciously.
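
One such signal is easy to sketch: what fraction of someone's posts land between midnight and 6 a.m.? The Python below computes that share from invented timestamps; actual research folds many signals like this into statistical models rather than relying on any single one.

```python
# A sketch of one signal described above: the share of posts that land
# between midnight and 6 a.m. Timestamps are invented for the example.
from datetime import datetime

post_times = [
    datetime(2023, 3, 1, 1, 12), datetime(2023, 3, 2, 2, 45),
    datetime(2023, 3, 2, 14, 5), datetime(2023, 3, 4, 3, 30),
]

late_night = sum(1 for t in post_times if 0 <= t.hour < 6)
late_night_share = late_night / len(post_times)

# On its own this proves nothing; in research it would be one input among many.
print(f"{late_night_share:.0%} of posts were between midnight and 6 a.m.")
```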

There’s a Big Brother quality here, undeniably. “I think people should be able to consent into and opt out of these systems,” Chancellor says. “A lot of tech doesn’t allow us to opt out of surveillance for advertising or for personal recommendations.” She imagines people could check boxes that permit or deny uses of information. But “imagine it’s 40 or 50 different algorithms that run on your data,” she says. That’s just user-unfriendly.

Besides that, what would “intervention” look like? AI has decided you’re in a dangerous headspace, so does it alert one of your friends? “Bioethics has this really, really long history and complex argument around whether or not you should intervene when somebody might injure themselves or hurt themselves.” Facebook, for its part, has rooted out content indicating suicidal intent. But Chancellor argues such a move—banning users, scrubbing their history—is not always the most ethical. It isolates both the triggering “contagion” and the people who may desperately need an online support network, she says. “My biggest concern is that AI automates decisions in ways that we can’t audit and cannot reverse. And that can happen all the way from replacing people’s jobs to having an AI system that evaluates people’s resumes.”

Instead, she supports combining AI’s unique strengths with human ones. “My ideal scenario for mental health prediction is not that we have this bot calling Twitter all the time, trying to find people who may be depressed or have an eating disorder, but, in fact, supports people when they come into the clinic or want to get help.”

A doctor, for instance, could consult an AI rendering of someone’s social media activity, then recommend steps forward. “That augmenting and improving of decisions is something that computers are very, very good at, because they’re good at looking at patterns in large amounts of data, and humans are exceptionally good at understanding stories and contextual details.” (Facebook says it flags concerning posts for review by a Community Operations team, provides resources, and may even contact local authorities.)

Overall, Chancellor sounds both optimistic and inflexibly cautious. “I think that there’s this interesting opportunity to de-hype AI,” she adds. “It’s not this crystal ball or Oracle of Delphi that’s completely out of our control, right? We are building and developing these technologies, and we have choice in what we build.”

Smarter, Faster

Another scientist at the University of Minnesota is pairing human know-how with AI, with increasingly lifelike results. And it’s in the name of another high-stakes issue: climate change.

In 2021, the university joined a $25 million project to improve models that predict global climate change. Funded by the National Science Foundation, it folds in a slew of institutions gunning to draft better ways societies can adapt.

Climate modeling itself is not new, but existing models “can never be precise enough to represent the entire state of the climate system for very, very precise simulation,” says Vipin Kumar, who leads the U’s project team and directs the Data Science Initiative at the College of Science and Engineering. Existing models can simulate car crashes and the weather. They can figure out the general circulation of the wind, or the result of more carbon dioxide in the atmosphere—“it’s going to heat up,” he says. “But exactly how much it heats up depends on how many clouds are formed and exactly where they get formed.” That’s where the simulations fail: in the intricacy of so many details.

Of course, it’s also where AI excels. Thanks to hundreds of satellites and other ecological tools, such as weather balloons and soil-moisture units, we have an immense field of humanly intractable data. Only with AI’s computer vision can we examine those satellite images closely enough to see how the Earth’s bodies of water have grown and shrunk month to month, going back several decades—to use one example from Kumar’s lab. “It actually turned out that the AI algorithm was able to find three times as many water bodies as people knew about on a global scale,” he says. With the naked eye, “if you look at each pixel at a time, you don’t know whether that pixel happens to be water, or is it the shadow of a cloud?” A new generation of algorithms eventually figures it out.
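
In miniature, the pixel-labeling problem looks like the sketch below, which classifies invented spectral readings as water or cloud shadow using a simple nearest-neighbor rule; real pipelines use many more bands, long time series, and far deeper models.

```python
# A toy version of the pixel-labeling problem: decide "water" vs. "cloud
# shadow" from a couple of spectral band values. The numbers are invented.
from sklearn.neighbors import KNeighborsClassifier

# [near-infrared reflectance, blue reflectance] for hand-labeled pixels
training_pixels = [[0.05, 0.10], [0.04, 0.12],   # water absorbs near-infrared
                   [0.20, 0.08], [0.22, 0.07]]   # cloud shadow over land
labels = ["water", "water", "shadow", "shadow"]

classifier = KNeighborsClassifier(n_neighbors=1).fit(training_pixels, labels)
print(classifier.predict([[0.06, 0.11]]))  # -> ['water']
```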

Key to Kumar’s work is a method that makes AI seem a little more human. Called knowledge-guided machine learning, it allows AI to not only spot patterns but to venture “out of sample,” drawing on the laws of physics. This way, it leaps to unseen scenarios. “If you understand that lakes are formed in a certain way, and they grow and shrink in a certain way, and you build it inside the AI algorithms, then you can work with imperfect AI algorithms and still make them produce good results,” he says. AI trained on Florida lakes may not understand how Minnesota lakes freeze, unless it plies knowledge-guided machine learning. Just about every scientific discipline has started adopting this method, Kumar adds. “It’s a way of making the job easier.”
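
A toy version of the idea: train a model on data as usual, but add a penalty whenever its predictions violate a known physical rule. In the sketch below, the rule, the data, and the model are all invented for illustration (here, that lake temperature should not rise with depth); the real work uses richer physics and neural networks.

```python
# A minimal sketch of "knowledge-guided machine learning": the usual data
# loss plus a penalty for predictions that break a known physical rule.
import numpy as np

depths = np.array([1.0, 5.0, 10.0, 20.0])        # meters (made-up data)
temps = np.array([22.0, 18.0, 12.0, 7.0])        # observed degrees C

w, b, lr, lam = 0.0, 0.0, 1e-3, 10.0             # model: temp = w*depth + b
for _ in range(5000):
    pred = w * depths + b
    grad_w = 2 * np.mean((pred - temps) * depths)   # data-fit gradient
    grad_b = 2 * np.mean(pred - temps)
    # Physics penalty: w > 0 would mean temperature increases with depth.
    grad_w += lam * (2 * w if w > 0 else 0.0)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # slope comes out negative, honoring the rule
```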

Mayo Clinic has tapped AI to buff processes in a similar way. In health care settings, “we are already seeing the use of AI to ease the workload,” says Ed Simcox, a tech expert at Mayo Clinic, “and to detect the presence of disease earlier.” The Rochester-based hospital can feed AI a patient’s clinical history, plus electrical tracings of their heart’s activity. Before symptoms have set in, the AI should be able to note signs of certain heart conditions.
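
Schematically, that kind of screening looks like the sketch below: a classifier trained on features drawn from heart tracings and clinical history, then asked for a risk estimate on a new patient. The features and values are invented, and this is not Mayo Clinic's model.

```python
# A schematic stand-in for screening heart tracings with AI. Each row mixes
# made-up ECG-derived features with a bit of clinical history.
from sklearn.ensemble import RandomForestClassifier

# [age, rhythm_irregularity, QRS_duration_ms]  (all values invented)
patients = [[72, 0.82, 128], [65, 0.74, 122],   # later diagnosed
            [45, 0.12, 96],  [51, 0.20, 101]]   # stayed healthy
later_diagnosis = [1, 1, 0, 0]

model = RandomForestClassifier(random_state=0).fit(patients, later_diagnosis)
print(model.predict_proba([[68, 0.65, 118]])[0][1])  # estimated risk score
```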

Simcox imagines a “long-way-off” scenario for patients: “Using natural-language-processing AI, you could talk to a device that understands how a physician would analyze your medical complaint.” He is not worried AI will replace doctors, although, to keep up, “clinicians will need to be fluent in the language of computational data.”

Bias is a top concern at the hospital. To check AI’s garbage-in-garbage-out susceptibility, a Mayo program called Platform Validate reviews algorithms for racial, gender, and socioeconomic prejudice. “Algorithms can be biased in two ways,” Simcox says. “They were created by humans and trained on data collected by humans.” In one famous, egregious example from several years ago, risk-assessing AI software showed bias against Black defendants when predicting who would go on to reoffend. “We must make sure that bias is acknowledged and that the results are corrected for bias, because AI cannot be 100% objective.”
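
An audit like that can start with something as simple as comparing how often a model flags people in different groups. The sketch below, with invented predictions and group labels, computes those rates and warns when the gap is large; real reviews go much further.

```python
# A bare-bones fairness check in the spirit of an algorithm audit: compare
# a model's positive-prediction rates across groups. Data is invented.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 0, 1, 0]          # 1 = flagged "high risk"
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

totals, flagged = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    flagged[group] += pred

rates = {g: flagged[g] / totals[g] for g in totals}
print(rates)                                     # {'A': 0.75, 'B': 0.25}
if max(rates.values()) - min(rates.values()) > 0.2:
    print("Large gap in flag rates; investigate before deployment.")
```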

Such developments nudge AI closer to—or even beyond—the human plane. But machine learning is still just a tool. “They’re still methodologies of finding patterns in the data,” Kumar says. “There is no consciousness being built into these programs. There is no, quote-unquote, human-like abilities sitting inside these programs. Unless, of course, you think crunching numbers is human.”

Meaning of “Life”

The more a computer apparently tries to pass as human, the creepier it gets. That’s another reason Khan wanted her robots to look adorably sci-fi rather than vaguely mammalian: the dreaded uncanny valley effect. Still, among AI’s Minnesota-grown examples, Khan’s robots look the most like our popularly imagined future. Granted, that future remains unclear. And her bots mostly just entertain nursing home residents for now.

Nevertheless, Khan seems to think, and live, a few steps ahead. When she applied for funding in 2016, she says the reviewers “were laughing, saying, ‘She’s going to unleash her robots on these poor, vulnerable people.’ ” Since the pandemic slimmed health care ranks, the United States (starting in Minnesota) may simply have begun to catch up to Japan, a fast-aging country that got the jump on care bots in nursing homes.

About a decade ago, Khan found her focus while programming a nearly 6-foot, gorilla-armed robot, which now hulks in the corner of the Duluth lab. She had wanted to use her engineering background to help her mother care for her father, who was dying of congestive heart failure. Buddy, the robot, would have assisted patients in and out of bed after open-heart surgery. But it didn’t receive backing. It looks too intimidating, Khan says. Learning about the link between cardiovascular trouble and dementia, she moved on to the friendlier-looking models.

She says she often works from 5 or 6 a.m. to midnight, sometimes getting barely two hours of sleep. Her two labs on campus are messy. Robots hang out in various states of unboxing. Aimed at a treadmill, cameras mounted around the ceiling can track reflectors attached to test subjects, measuring the way they walk and, from that, she says, the likelihood of dementia down the line.

Her work on wearable sensors—deployed in some Minnesota nursing homes and soon to debut in a hospital setting, in Michigan, she says—is less eye-catching but just as impactful, if not more so. Like a Fitbit, this tech gleans everyday data, such as the steadiness of a person’s gait or the straightness of one’s sitting posture. AI can get to know these patterns and flag deviations over time, pointing to potentially incipient conditions. “When we go and do our regular physicals, we should be doing this: take a baseline gait and see how we do in the future.”
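
The underlying math can be as plain as a baseline and a deviation score. The sketch below, with invented readings and an invented threshold, flags a day whose gait-steadiness score sits far outside someone's personal baseline; deployed systems track many signals at once.

```python
# A simple sketch of "take a baseline and flag deviations": compare today's
# gait-steadiness reading to a personal baseline using a z-score.
import statistics

baseline_readings = [0.92, 0.95, 0.93, 0.96, 0.94]   # steadiness score, 0-1
today = 0.78

mean = statistics.mean(baseline_readings)
std = statistics.stdev(baseline_readings)
z = (today - mean) / std

if abs(z) > 3:
    print(f"Deviation from baseline (z = {z:.1f}); worth a closer look.")
```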

She runs through other possibilities. What if an endearing, diabetes-monitoring droid watched what patients ate and piped in with lower-glucose alternatives? Thinking back to earlier pandemic times, she recalls a lower-tech asset: “We could not approach the elderly. Nobody could approach them. They were so lonely. Some of them died just because of loneliness, and so the deaths increased not only because of the pandemic, not only because of COVID, but because of loneliness and depression.”

We know AI can map data constellations, monetize industries, and rifle through knowledge—but there is this dimly lit idea that it can also carry out compassion where humans can’t, or won’t. It can see us in ways we may need to be seen, or fool us in ways we may want to be fooled. “I’m just surprised people haven’t thought about all these things,” Khan says. “We need to think outside the box and stop thinking small, right?”