Overview
In the context of Responsible AI and human emotion, it is crucial to view Generative AI (GenAI) through an ethical lens. The stakes are high: data leakage and privacy breaches can significantly damage an organization’s financial standing and reputation.
Consider sensitive, customer-centric data used as prompts in GenAI models. It is worrisome that these models can retain such data for future training without giving users a say. This raises ethical concerns about how customer data is handled and underlines the need for protective measures to safeguard privacy.
But that is not all! We have also noticed some quirks with GenAI models regarding consistency and reliability. Surprisingly, they can respond differently even when given the same prompt. They sometimes even retract their own answers when asked to validate them. It is hard to rely on a system that behaves inconsistently, right? There is also the challenge of transparency around answer accuracy and attribution: tracing where a GenAI response comes from takes real effort, which makes accountability a puzzle.
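To see this variability first-hand, the short sketch below sends an identical prompt several times and counts how many distinct answers come back. It assumes the OpenAI Python SDK (v1.x), an API key in the environment, and an illustrative model name; any chat-based GenAI API would behave similarly.

```python
# A minimal sketch: ask the same question several times and check whether the
# answers agree. Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY set in
# the environment; the model name "gpt-4o-mini" is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "In one sentence, what year was the first transatlantic telegraph cable completed?"

def ask(prompt: str, temperature: float = 0.7) -> str:
    """Send a single chat completion request and return the text of the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content.strip()

# Ask the identical question five times and collect the distinct answers.
answers = [ask(PROMPT) for _ in range(5)]
distinct = set(answers)

print(f"{len(distinct)} distinct answer(s) out of {len(answers)} runs:")
for text in distinct:
    print("-", text)
```

Running a probe like this even a handful of times typically surfaces differences in wording, and sometimes in substance, which is exactly the inconsistency described above.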
That is why Generative AI must address the issue of attribution head-on. Users deserve to be able to dig into the answers and understand their sources of influence or reference. By bringing more transparency and accountability to the table, GenAI can embrace responsible practices and meet the expectations of its users.
How GenAI has transformed the corporate landscape
As GenAI is used to create human-like content, it is important to consider its societal impact. Fake videos and images (a.k.a. deepfakes) are often generated using GenAI algorithms together with human-generated content. In such a scenario, it becomes extremely difficult to differentiate between content categories, viz. human-generated, AI-generated, hybrid-generated, and everything in between.
Shifting our focus to one of today’s most pressing issues, privacy, we recognize its significance. Numerous privacy and security breaches serve as reminders of the challenges we face. Take, for instance, the Samsung incident in which proprietary source code was leaked through prompts entered into a GenAI tool. Amidst these concerns, however, there is an opportunity to implement differential privacy mechanisms for prompts (a sketch of this idea follows below), enhancing data protection. Additionally, empowering employees with comprehensive training on these powerful tools is crucial. By embracing these measures, organizations can proactively address privacy concerns and foster a positive environment for using advanced technologies.
Such incidents call for strict regulations and auditing frameworks, as misuse of GenAI can have an irreversible impact on people and society. It is no longer a secret that generative AI is misused to create fake news and videos that manipulate public opinion and spread propaganda. We have witnessed the impact of the Cambridge Analytica fiasco, and GenAI is orders of magnitude more powerful.
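To make the differential-privacy idea above concrete, here is a minimal sketch of one way prompts could be sanitised before they leave an organization: obvious identifiers are masked, and a numeric aggregate is perturbed with the Laplace mechanism. The regex patterns, epsilon value, and prompt template are illustrative assumptions rather than a production-ready privacy pipeline.

```python
# A minimal sketch of prompt sanitisation before data reaches a GenAI service:
# redact obvious identifiers and add Laplace noise (a classic differential-
# privacy mechanism) to a numeric aggregate. Patterns, sensitivity, and epsilon
# are illustrative assumptions, not tuned or production-ready values.
import re
import numpy as np

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_identifiers(text: str) -> str:
    """Mask e-mail addresses and phone numbers before the text leaves the org."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

def laplace_noise(value: float, sensitivity: float, epsilon: float) -> float:
    """Laplace mechanism: noise scaled to sensitivity / epsilon."""
    return value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: a customer note plus an aggregate spend figure destined for a prompt.
note = "Customer jane.doe@example.com (+1 415 555 0100) asked about churn risk."
avg_monthly_spend = 182.40  # aggregated over many customers

safe_note = redact_identifiers(note)
noisy_spend = laplace_noise(avg_monthly_spend, sensitivity=5.0, epsilon=1.0)

prompt = (
    f"Summarise the support note below and suggest a retention offer.\n"
    f"Note: {safe_note}\n"
    f"Average monthly spend (noised): {noisy_spend:.2f}"
)
print(prompt)
```

Stronger approaches exist, such as anonymisation at the retrieval layer or formal differential privacy at training time, but even a lightweight gate like this reduces what leaves the organization inside a prompt.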
Fractal’s toolkit & principles: Safeguarding the promise of Generative AI
Decision-makers, employees, consumers, beware!
If we treat GenAI as objective truth rather than an actively learning machine that makes mistakes, we are bound to subject ourselves to inaccuracy, and we would be doing so with chosen ignorance.
Currently, these are a few appropriate roles in which to think of Generative AI in the workplace:
Task Tracker: Ex. Helps with drafting schedules, reminders, managing lists, and making a work plan. Ex. Drafting an email to schedule an interview and adapting it based on details (in this case, virtual vs. in-person).
User Beware: Inability to calculate correctly across time zones, even when they are specified.
Learning Aid: Ex. Exploring a new domain, creating a learning plan and delivering it to you, and curating explanations in a way that is easy for you to understand.
User Beware: Information provided can be 1) outdated, 2) misleading, or 3) inaccurate; at this stage, we cannot rely on the authenticity, credibility, and sourcing provided by Generative AI tools.
Virtual Partner: Ex. An interactive space to brainstorm, practice conversations or pitches, and help you move at a steady pace or get ‘unstuck.’
User Beware: Content that gets added to tools may be used for training the model; look at Terms and Conditions of usage + pricing and plans before adding anything beyond sample text/content.
Creative/Content Assistant: Ex. Images, text, and videos generated instantly to test multiple versions of an artifact with small iterations; curating and editing to a desired size, texture, or tone; quick compilation and analysis of notes.
User Beware: Ownership of and rights over content created by or with GenAI are still highly contested and uncertain.
Perception altered: examining the mental model when using Generative AI
Treating my interactions with it as I would with Research
When researching, exploration is confined to the hypothesis; the focus is on learning or on proving/disproving the hypothesis (e.g., AI is good vs. AI is bad). This can lead to extreme conclusions and leave little room for quick updating.
Treating my interactions with it as I would with digital Products/Services
When using products, there is an expectation of objectivity, front-end finesse, and back-end data management, which may not exist for most GenAI products today. With products/services, there is also a clear understanding of who benefits or profits from a user’s engagement; in this case, that is not always clear, nor is the liability or consequence of a ‘bad’ experience.
Recommended: Treating my interactions with it as Experiments
When experimenting, there is a playful yet scientific approach to discovery: finding applications, testing limits, and pushing boundaries, but with caution.
Conclusion
A good metaphor to think of Generative AI in terms of safety is to consider it as a powerful tool in a craftsperson’s workshop.
Core foundations for sustained and successful implementation: going beyond what is technically feasible towards what is human and business aligned (Sourced from Fractal)
Let us break that metaphor down a bit:
Powerful: Capable of more than you think at first glance; potential to create impact (both positive & harmful).
Tool: It can be considered a suite of tools that can facilitate or enable your work.
Craftsperson: While anyone can take up crafts, to be a craftsperson is a skill to hone; to identify the right tools for different tasks, to understand effort, preparation, and maintenance of the tools, the space, and oneself.
Workshop: You can technically enter a workshop without safety gear or rules, but you will likely harm yourself and damage the space if you do. Learn and consider safety as you engage with anything from the metaphoric pin (typing a question into ChatGPT) to the metaphoric chainsaw (replacing people with machines).
Much like a skilled craftsperson, we should cautiously come to understand GenAI’s capabilities and limitations so we can use it more effectively, safely, and ethically (these goals do not have to be in contradiction).
Another reason this metaphor works: a craftsperson is either in an experimenting mindset, trying materials and tools through trial and error, or in a building mindset, making things to last and to stand the test of time.
When using GenAI, intentionally tapping into one of these mindsets will help you approach it better, with fewer errors, less existential threat, and reduced safety risks.
Depending on where an organization chooses to play on this matrix, it will require the related tools, change journeys, implications, and cautions.