Beyond the human in the loop: Rebalancing the Individual-Social-Enterprise dynamic

Rutuparna Jadhav

Senior Behavioral Researcher, Fractal Dimension

Shivani Gupta

Head, Fractal Dimension

Summary
With the increasing integration of AI systems into enterprises, it’s time to reimagine the concept of ‘human in the loop.’ Dive into our exploration of how AI and behavioral science can identify the right humans to step in at the right moments, transforming surveillance into empowerment and promoting a culture of fairness, motivation, and innovation in organizations.

Picture the iconic Colosseum in Rome, but reimagined entirely in glass. Each majestic arch is now transformed into a sleek glass box, hundreds stacked vertically and lining the circumference. Each of these transparent boxes is a luxurious residential unit, complete with modern amenities, flooded with natural light, and offering breathtaking panoramic views of the surroundings. At the heart of this structure is a tower: the central hub of the complex. Residents speak in hushed tones about the vigilant entity stationed within the central tower, diligently looking after the complex and keeping everyone safe.

*Image: Digital panopticon, AI and surveillance. Generated by Kalaido.ai

Does this look like a space you’d like to live in?

Such an architecture does exist, the only difference being that it was conceptualized as the design of a prison. Jeremy Bentham, an English social theorist, designed the Panopticon ('pan,' meaning all; 'opticon,' relating to seeing: literally, the all-seeing) in the late 18th century. It was nearly 200 years later that Michel Foucault, a French philosopher, revived the design as a metaphor for modern power. Describing the prison system in Discipline and Punish¹, Foucault wrote that inducing a state of conscious and permanent visibility in a person's mind assures the automatic functioning of power. This power should feel visible and verifiable, which makes it unnecessary for the authority to be present round the clock, as their power is constantly exercised within the individual's mind.

Why do panopticons feel familiar?

A similar belief is commonly shared across communities and cultures: it instils a sense of morality by telling us that 'someone' is always watching our actions and, hence, that we must be 'good.' Now, extend this construct to the current digital age. Be it through a single smart device or an ecosystem of connected devices in our hands, homes, and vehicles, the omnipresence of data capture in our daily lives, and our awareness of it, has an effect on us similar to the one the panopticon had on its inmates.

Key questions:

As behavioral researchers working with AI, we are often left discussing:

• Are humans more likely to exhibit favorable behavior when they know they are being observed?

• What happens to our behavior over time when the observability is normalized?

• What happens when this normalization is combined with a sense of anonymity?

• Is more human involvement always good, or always the answer?

• What should the human-AI dynamic look like if not a loop? And what should it be centered around?

In our series of articles on the human element of AI, we will unpack these questions and more.

When we look for answers in existing sources, we run up against many popular narratives of the future that come from science fiction. Many of us will be familiar with the dystopian anthology series Black Mirror and the many recent movies and shows with similar themes of technology draining us of our humanity one way or another. In the Black Mirror episode 'Nosedive,' citizens are 'scored' for their social behavior after day-to-day interactions. Not far from the episode's narrative, one such social surveillance system² has been implemented in the real world to promote socially positive behaviors. The challenge remains that with humans in the loop, there is bound to be bias, subjectivity, and often disproportionate control by certain humans or entities. On the flip side, consider the impact of telematics on the automotive industry: good driving behavior is rewarded with lower premiums, incentivizing drivers to adhere to safe driving practices.

The question then arises: if having more “human” in the loop doesn’t necessarily address challenges of fairness or justice (and in some cases further distorts them), could we channel our apprehensions, distrust, and resources towards establishing robust governance structures that rebalance outcomes for social-individual-enterprise benefit? These could pre-empt, rectify, and enable ‘mid-course corrections’ (in Neil deGrasse Tyson’s words³) to mitigate potential harm.

“People take a new technology and then project it forward without any checks and balances in between the birth of the technology and the complete dystopic expression of that technology. If we did this in the dawn of spaceflight, you’d think anyone who got in a plane would die anytime a plane took off. But no, we made planes safer. Often the storytelling ignores the midcourse corrections that we put into place all the time.” – Neil deGrasse Tyson, American astrophysicist and writer

The digital panopticon of the future does not have to be a place of surveillance and control. Instead, it can be a tool for empowerment, growth, and positive societal change. The key lies in navigating the ethical complexities and harnessing the opportunities, potentially breaking the ‘loop’ and restructuring systems for visibility and shared accountability. In this series, we leverage methods from design fiction⁴ to craft positive ‘extremes,’ since narratives of fairness and justice can be just as influential as dystopias. We hope to provoke thoughtful discussion among users and practitioners alike and, in the process, inspire dynamic solutions, ones that use resources more intentionally, to the wicked problems we face as societies, enterprises, and individuals.

Provocation of the day:

What if we leveraged psychometrics to identify our most ethical and moral employees and gave them vote/veto powers during the strategizing and implementation of critical processes in the organization?

Scenarios:

1. Appraisals and promotions: Imagine an AI-driven appraisal system where ethical and moral employees have the authority to review and approve final decisions. This could ensure that promotions and appraisals are fair, unbiased, and merit-based. The organization fosters a culture of integrity and fairness by having ethically sound individuals in decision-making roles, motivating employees to perform their best.

2. Policy changes: When implementing new policies, having a panel of the most empathetic employees with veto power ensures that the policies are scrutinized for fairness and organizational impact. For example, a new work-from-home policy could be evaluated to ensure it does not inadvertently disadvantage certain groups of employees, such as parents or those who are differently abled.

The above scenarios explore the idea of humans not necessarily in the loop, but in control and at a distance from which they can easily step in for mid-course corrections. We also explore the use of artificial intelligence and behavioral science to determine which type of human is ideal to step in for a given scenario. Which decisions should moral employees make? Which should empathetic employees make? How might we codify these? Minimizing routine human involvement in AI tools while preserving space for human mid-course corrections can create a balanced decision-making approach that benefits all stakeholders through an individual, societal, and organizational lens.
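As a thought experiment, the 'codify' question above could be sketched in code: route each class of decision to a veto panel of employees whose psychometric profile best matches it. Everything in this sketch, the trait names, the score scale, the threshold, and the decision-to-trait mapping, is a hypothetical assumption for discussion, not a description of any real system or instrument.

```python
from dataclasses import dataclass


@dataclass
class Employee:
    name: str
    ethics_score: float   # hypothetical score from a psychometric assessment (0-1)
    empathy_score: float  # hypothetical score from a psychometric assessment (0-1)


# Assumed mapping of decision type to the gating trait; which trait should
# gate which decision is itself a question this series raises.
DECISION_TRAIT = {
    "appraisal": "ethics_score",        # fairness-critical decisions
    "policy_change": "empathy_score",   # impact-critical decisions
}


def select_reviewers(decision_type, employees, threshold=0.8, panel_size=3):
    """Return a veto panel: the top-scoring qualified employees for this decision type."""
    trait = DECISION_TRAIT[decision_type]
    qualified = [e for e in employees if getattr(e, trait) >= threshold]
    qualified.sort(key=lambda e: getattr(e, trait), reverse=True)
    return qualified[:panel_size]


employees = [
    Employee("Asha", ethics_score=0.92, empathy_score=0.70),
    Employee("Ben", ethics_score=0.65, empathy_score=0.95),
    Employee("Chen", ethics_score=0.88, empathy_score=0.81),
]

# Ethical reviewers gate appraisals; empathetic reviewers gate policy changes.
print([e.name for e in select_reviewers("appraisal", employees)])
print([e.name for e in select_reviewers("policy_change", employees)])
```

Even this toy sketch surfaces the governance questions the provocation raises: who sets the threshold, who validates the scores, and who reviews the reviewers.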

¹ Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. New York: Pantheon Books.

² Cho, E. (2020, May 1). The Social Credit System. Journal of Public and International Affairs.

³ National Geographic. (2015, October 26). Neil deGrasse Tyson on a Dystopic Future | Breakthrough [Video]. YouTube.

⁴ Bleecker, J. (n.d.). Examples of Design Fiction.
