Artificial Intelligence and AI at Scale


AI for Everybody at mrfranklins

AI can deliver significant business impact, but companies can maximize its value by taking an end-to-end approach. Weaving together strategy, process redesign, and human and technical capabilities, we create the fabric of an AI-driven organization, enabling the outcomes that drive businesses forward.

AI leaders—the companies remaking industries—don’t just use AI. They’re fueled by it. And as generative AI brings the power of AI to more businesses, the ranks of AI leaders are poised to grow. But getting there requires a holistic approach. You need to understand how artificial intelligence can drive business outcomes. But just as crucially, you need to reimagine processes, foster adoption, and develop the right capabilities, roles, and governance. We help companies pull all these levers, so they can deploy AI at scale—and scale up the value, too.

Our research shows that fully embedded AI is one of the key attributes shared by future-built enterprises that excel across financial and non-financial measures. We unleash the power of AI by building an organization around it. Getting the roles, responsibilities, and culture right is just as important as perfecting the algorithms. As generative AI democratizes artificial intelligence—enabling companies without a deep bench of data scientists to use and benefit from the technology—this end-to-end approach will be crucial to gaining a competitive edge.

What Is Generative AI?

To gain a competitive edge, business leaders first need to understand what generative AI is.

Generative AI is a set of algorithms capable of generating seemingly new, realistic content—such as text, images, or audio—from their training data. The most powerful generative AI algorithms are built on top of foundation models, which are trained on a vast quantity of unlabeled data in a self-supervised way to identify underlying patterns that apply across a wide range of tasks.

For example, GPT-3.5, a foundation model trained on large volumes of text, can be adapted for answering questions, text summarization, or sentiment analysis. DALL-E, a multimodal (text-to-image) foundation model, can be adapted to create images, expand images beyond their original size, or create variations of existing paintings.
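The idea that a single foundation model can be adapted to several tasks can be illustrated with simple prompting. The sketch below is purely conceptual: `generate` is a hypothetical stand-in for a real model API call (such as a hosted LLM completion endpoint), and the prompt templates are illustrative assumptions, not any vendor's actual interface.

```python
# Conceptual sketch: one text foundation model, three tasks, adapted via
# task-specific prompts. `generate` is a placeholder for a real model call.

def build_prompt(task: str, text: str, question: str = "") -> str:
    """Wrap the same input text in task-specific instructions."""
    if task == "question_answering":
        return (
            "Answer the question using the passage.\n"
            f"Passage: {text}\nQuestion: {question}\nAnswer:"
        )
    if task == "summarization":
        return f"Summarize the following text in one sentence.\nText: {text}\nSummary:"
    if task == "sentiment":
        return (
            "Classify the sentiment of this text as positive or negative.\n"
            f"Text: {text}\nSentiment:"
        )
    raise ValueError(f"Unknown task: {task}")


def generate(prompt: str) -> str:
    # Placeholder: in practice this would call the foundation model's API.
    return f"<model output for prompt beginning: {prompt[:40]!r}>"


if __name__ == "__main__":
    review = "The product arrived quickly and works exactly as advertised."
    for task in ("summarization", "sentiment"):
        print(generate(build_prompt(task, review)))
```

The point of the sketch is the design pattern: the model weights stay fixed, and only the instructions wrapped around the input change per task.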

What Is Responsible AI?

Responsible AI is the process of developing and operating AI systems that align with organizational purpose and ethical values, achieving transformative business impact. By implementing RAI strategically, companies can resolve complex ethical questions around AI deployments and investments, accelerate innovation, and realize increased value from AI itself. Responsible AI gives leaders the ability to properly manage this powerful emerging technology.

Emerging regulations and generative AI are casting a spotlight on AI technology, but they don’t need to cast a shadow. Our perspective is that responsible AI is more than just risk mitigation; it is also an important value creator. The same mechanisms that reduce AI errors can accelerate innovation, promote differentiation, and elevate customer trust.

How We Help Companies Implement Responsible AI


So far, relatively few companies have embraced this strategic approach to responsible AI. What’s the holdup? For some organizations, the leap from responsible AI ambition to execution has proved daunting. Others are waiting to see what form regulations take. But responsible AI principles can bring benefits now, while also preparing companies for new rules and the latest emerging AI technology.

Our battle-tested framework minimizes the time to RAI maturity while maximizing the value responsible AI can create. Built on five pillars, it is tailored to each organization’s unique starting point and culture.


We help companies articulate the responsible AI principles they will follow. The key is to tailor responsible AI to the circumstances and mission of each client. By looking at an organization’s purpose and values, as well as the risks it faces, we develop responsible AI policies that go beyond managing risk to addressing it through an integrated approach. When companies know where (and how high) to set the guardrails, they can build both customer and employee trust, and accelerate AI innovation.


Our responsible AI consultants create the mechanisms, roles, and escalation paths that provide oversight for an RAI program. A critical component is a responsible AI council. Composed of leaders from across the company, this council oversees responsible AI initiatives, providing support while demonstrating the organization’s commitment to such guardrails.


We define the controls, KPIs, processes, and reporting mechanisms that are necessary for implementing RAI. In a crucial step, we help companies integrate responsible AI into AI product development. And we help them develop the capability for continuous improvement: always looking at how to optimize responsible AI initiatives.


Implementing RAI means building a culture that encourages and prioritizes ethical AI practices. We help create an environment where people are aware of responsible AI and the issues it raises, fostering a sense of ownership in which individuals feel empowered to speak up and ask questions. With developments in generative AI granting unprecedented access to AI technology, it’s more important than ever to get the cultural piece right.

Three Things to Know Now About Responsible AI

AUGUST 10, 2023—The recent voluntary commitments secured by the White House from core US developers of advanced AI systems—including Google, OpenAI, Amazon, and Meta—are an important first step toward achieving safe, secure, and trustworthy AI. Here are three observations:

  • These voluntary commitments will help to move the AI ecosystem in the right direction. They can be a foundation for putting the Blueprint for an AI Bill of Rights into operation and bringing together more actors under a shared banner to ultimately make responsible AI the norm. The commitments also can trigger greater investment in training, capacity building, and technological solutions.
  • Supportive initiatives like the Frontier Model Forum, which includes the developers who made the voluntary commitments, will enable AI ecosystem stakeholders to exchange knowledge on best practices for responsible AI, particularly for advanced systems. Given that tech companies’ recent layoffs included some trust and safety experts, a renewed commitment to auditing and analysis through public release will help external expertise fill internal capacity shortages. Rigorous documentation in the form of audit reports and disclosures will help agencies such as the Federal Trade Commission protect consumers from deceptive and unfair practices.
  • Notably missing from the commitments is a focus on mitigating the potentially significant environmental impacts of AI systems. Details on when and how these commitments will be operationalized are also needed to boost public trust, especially given the many ethical issues recently raised by AI. Bipartisan legislation would increase the impact of the voluntary commitments.