Responsible AI (RAI) is crucial in today’s technological landscape. Fair, transparent, and safe AI systems reduce bias and discrimination to help build public trust and foster innovation.
Fractal demonstrates a strong commitment to RAI by integrating it into all our AI solutions. Our approach includes comprehensive frameworks, rigorous certification processes, and practical applications. This ensures that our AI technologies are ethical and compliant with global regulations.
Overview of recent AI regulations
As of 2024, AI regulations are advancing globally. The EU has led with the AI Act, imposing strict rules on high-risk AI applications, ensuring transparency, and banning harmful practices like social scoring. The US has adopted a sector-specific approach, focusing on safety and accountability, guided by initiatives like the AI Bill of Rights and recent executive orders. The UK has emphasized a pro-innovation strategy, using existing regulators to enforce AI principles while promoting international collaboration through agreements like the Bletchley Declaration. China’s regulations focus on controlling misinformation and maintaining social order, with stringent rules on Generative AI (GenAI) and recommendation systems.
Though these regulatory strategies differ in emphasis, they reflect a shared global determination to ensure AI is developed and used responsibly.
The significance of responsible AI
RAI is crucial for building trust and fostering innovation. By ensuring fairness, transparency, and safety in AI systems, organizations can gain and retain the confidence of users and stakeholders. This trust is essential for the widespread adoption of AI technologies.
Moreover, RAI minimizes negative societal impacts, such as bias and discrimination, by embedding ethical considerations into the development and deployment of AI systems. This ethical approach fosters innovation by encouraging more inclusive and fair technologies, and it helps organizations navigate the complex landscape of AI regulations.
Finally, adhering to RAI practices mitigates legal and reputational risks. Organizations that fail to implement RAI face significant legal challenges and potential reputational damage. For example, non-compliance with the EU’s AI Act can result in substantial fines and sanctions.
Embracing RAI is both a moral imperative and a strategic necessity: it helps organizations avoid legal pitfalls and maintain a positive public image.
The Responsible AI approach
At Fractal, we ensure our AI solutions are fair, transparent, and safe throughout their development. Our RAI solutions span data input, model development, deployment, and post-deployment monitoring, keeping systems robust, fair, and transparent at every stage.
For example, our app Kalaido, which generates images from prompts, tackles issues like data bias and harmful content. Kalaido uses a two-level guardrail system: it filters harmful text prompts at the input stage and reviews generated images at the output stage to prevent biased or toxic content. The data used for model training is collected without bias: sample collection is not driven by any specific element or characteristic.
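To make the pattern concrete, here is a minimal sketch of a two-stage guardrail. The blocklist, threshold, and function names are illustrative assumptions, not Kalaido’s actual implementation:

```python
# Illustrative sketch of a two-stage guardrail (not Kalaido's actual code).
# Stage 1 screens the text prompt before generation; stage 2 screens the
# generated image before it is returned to the user.

BLOCKED_TERMS = {"violence", "gore"}  # hypothetical input blocklist

def input_guardrail(prompt: str) -> bool:
    """Return True if the prompt is safe to send to the image model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def output_guardrail(image_bytes: bytes, safety_classifier) -> bool:
    """Return True if the generated image passes a safety classifier.
    `safety_classifier` is assumed to return a toxicity score in [0, 1]."""
    return safety_classifier(image_bytes) < 0.5  # hypothetical threshold

def generate_image(prompt: str, model, safety_classifier):
    if not input_guardrail(prompt):
        raise ValueError("Prompt rejected by input guardrail")
    image = model(prompt)  # call the underlying generative model
    if not output_guardrail(image, safety_classifier):
        raise ValueError("Image rejected by output guardrail")
    return image
```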
No personally identifiable information (PII) from users is used to train the model. The application does not collect or use user data beyond email IDs, with consent obtained via the Cookie Policy (Cookie Bot implementation is currently in progress) and Terms of Use. A subset of non-PII data is used to pre-train the model, combining automated tagging with human validation. Any user data that is stored is encrypted, ensuring privacy and compliance.
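A common building block for keeping PII out of training data is a scrubbing pass that redacts identifiers before records enter the corpus. The sketch below is a generic illustration with assumed regex patterns, not Fractal’s actual pipeline:

```python
import re

# Illustrative PII scrubber (not Fractal's actual pipeline): redact common
# identifiers before a record is allowed into the training corpus.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

record = "Contact jane.doe@example.com or +1 (555) 123-4567 for details."
print(scrub_pii(record))  # -> "Contact [EMAIL] or [PHONE] for details."
```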
The model is also never fine-tuned on personally identifiable data from anyone using the application. Moreover, users have the right to request the deletion of all data linked to their account, as stated in the Terms of Use. However, image-creation metadata is stored so images can be recreated if needed, which ensures accountability and traceability.
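Reproducibility of this kind typically rests on persisting the inputs that determine a diffusion run. A minimal sketch of such a record follows; the field names are assumptions, not Kalaido’s actual schema:

```python
from dataclasses import dataclass, asdict
import json

# Illustrative generation-metadata record (field names are assumptions,
# not Kalaido's schema). Persisting these values lets a diffusion image
# be regenerated deterministically, supporting accountability.
@dataclass
class GenerationRecord:
    prompt: str
    model_version: str
    seed: int            # fixing the RNG seed makes sampling reproducible
    num_steps: int
    guidance_scale: float

record = GenerationRecord(
    prompt="a watercolor city skyline at dusk",
    model_version="v2.1",
    seed=42,
    num_steps=30,
    guidance_scale=7.5,
)
print(json.dumps(asdict(record)))  # stored alongside the generated image
```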
Kalaido also uses a unique sequence of diffusion pipelines to achieve the best quality results. The details of these pipelines are explainable but not released externally.
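While Kalaido’s own pipeline sequence is proprietary, the general idea of chaining diffusion pipelines can be illustrated with the publicly documented base-plus-refiner pattern from Hugging Face’s diffusers library. This is a generic example, not Kalaido’s configuration:

```python
# Generic example of chaining two diffusion pipelines (base + refiner)
# with Hugging Face diffusers; this is NOT Kalaido's proprietary sequence.
# Assumes a CUDA GPU and the public SDXL checkpoints.
import torch
from diffusers import DiffusionPipeline

base = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save memory
    vae=base.vae,
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a watercolor city skyline at dusk"
# The base pipeline hands latents to the refiner, which sharpens details.
latents = base(prompt=prompt, output_type="latent").images
image = refiner(prompt=prompt, image=latents).images[0]
image.save("skyline.png")
```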
Challenges and limitations
Generating accurate images from complicated prompts with multiple elements remains challenging, and details such as human hands, eyes, and some other parts of the body may not be rendered perfectly. No additional debiasing is applied to the sampled data, and societal biases in the data are not corrected during training, as doing so risks overcorrecting for long-standing racial bias problems in AI. Translation capabilities can also be affected by dialect and by certain vocabulary in non-English languages, which can produce inaccurate images.
Certification and compliance
Kalaido’s certification process aligns with global AI regulations and involves a thorough evaluation against our comprehensive checklist. Collaboration with Kalaido’s development team ensures each aspect is scored against RAI principles, resulting in a high compliance rating. This certification validates Kalaido’s adherence to RAI principles and provides actionable recommendations for further improvement.
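The checklist itself is internal, but the rollup from per-principle scores to a compliance rating can be sketched as follows; the principles, scales, and weights shown are hypothetical:

```python
# Hypothetical rollup of per-principle checklist scores into a single
# compliance rating (the principles and weights here are illustrative,
# not Fractal's internal checklist).
scores = {            # each principle scored 0-5 by reviewers
    "fairness": 4.5,
    "transparency": 4.0,
    "robustness": 4.2,
    "safety": 4.8,
}
weights = {"fairness": 0.3, "transparency": 0.2, "robustness": 0.2, "safety": 0.3}

rating = sum(scores[p] * weights[p] for p in scores) / 5 * 100
print(f"Compliance rating: {rating:.0f}%")  # -> Compliance rating: 89%
```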
This demonstrates our dedication to RAI and boosts our clients’ confidence in the ethical integrity of our tools.
Fractal’s leadership in Responsible AI
Fractal’s commitment to RAI is evident through a comprehensive framework and a suite of accelerators and APIs that ensure AI systems are fair, transparent, and safe.
We’ve built our RAI framework on the principles of fairness, transparency, robustness, and safety. These principles are operationalized through accelerators that integrate with existing AI models, enhancing explainability and fairness. We’ve applied this framework across various industries – including consumer packaged goods, pharmaceuticals, and finance – to ensure that industry-specific AI solutions meet high ethical standards. We’ve also collaborated with governmental bodies, such as NASSCOM in India, to develop toolkits that promote RAI practices at a national level.
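Fractal’s accelerators are proprietary, but the kind of group-fairness check they operationalize can be illustrated with the open-source fairlearn library, which scores an existing model’s predictions for demographic parity:

```python
# Generic illustration of a post-hoc fairness check (not Fractal's
# accelerators): measure demographic parity of a model's predictions.
from fairlearn.metrics import demographic_parity_difference

y_true = [1, 0, 1, 1, 0, 1, 0, 0]                  # ground-truth labels
y_pred = [1, 1, 1, 0, 0, 1, 0, 0]                  # model predictions
group = ["a", "a", "a", "a", "b", "b", "b", "b"]   # sensitive attribute

# 0.0 means both groups receive positive predictions at the same rate;
# larger values indicate more disparity between groups.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=group)
print(f"Demographic parity difference: {dpd:.2f}")  # -> 0.50
```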
Our team regularly shares insights and developments in RAI at industry-leading events, contributing to the broader discourse on ethical AI. This multifaceted approach underscores our commitment to leading the way in developing and deploying AI systems that are not only innovative but also ethically sound and socially responsible.
Future developments and innovations
We are dedicated to embedding RAI into all future projects and GenAI solutions. Our strategic goal is to ensure that RAI principles are integral from the outset of every project. This involves incorporating RAI certification into every AI model’s development, ensuring compliance with ethical standards from data input through deployment and post-deployment monitoring. Our upcoming initiatives also include implementing guardrails for the internal use of GenAI platforms like ChatGPT, ensuring secure and responsible usage by employees while protecting client data.
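One plausible shape for such a guardrail is a thin wrapper that redacts client identifiers before a prompt leaves the company network; the names and helper below are hypothetical, not Fractal’s implementation:

```python
# Hypothetical internal guardrail (not Fractal's implementation): redact
# client names before a prompt is forwarded to an external GenAI service.
CLIENT_NAMES = {"Acme Corp", "Globex"}  # loaded from an internal registry

def redact_clients(prompt: str) -> str:
    for name in CLIENT_NAMES:
        prompt = prompt.replace(name, "[CLIENT]")
    return prompt

def safe_chat(prompt: str, llm_call) -> str:
    """Send a redacted prompt to an external LLM via `llm_call`,
    a provider-API wrapper supplied by the internal platform."""
    return llm_call(redact_clients(prompt))

print(redact_clients("Summarize the Acme Corp contract renewal terms."))
# -> "Summarize the [CLIENT] contract renewal terms."
```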
Balancing innovation with responsibility is core to our strategy. We aim to stay ahead of evolving regulations by continuously updating our RAI framework to meet global standards, such as those outlined in the EU’s AI Act and similar guidelines from other regions. By embedding RAI in every aspect of AI development, we’re ensuring that innovation does not come at the cost of ethical standards.