Why Scaling Enterprise AI Is Hard – Infosys Co-Founder Nandan Nilekani Explains

Challenges in AI adoption for private and public sector enterprises.

I came across this very interesting and important lecture by Infosys co-founder Nandan Nilekani on scaling enterprise AI. Infosys is one of the leading IT services companies, with deep experience in scaling novel technologies for real-world applications, especially within enterprises.

At the Carnegie India Global Tech Summit, Infosys Co-Founder Nandan Nilekani took the stage with a refreshing dose of realism. He cut through the fanfare surrounding artificial intelligence, especially for enterprise AI use.

“I thought today I’ll briefly do a reality check on AI,” he said. This set the tone for what would be a sobering address. Yet, it was also visionary.

At a time when AI is often discussed in the language of miracles and revolutions, Nilekani offered a grounded perspective.

Drawing from his extensive experience in technological transformation, he provided valuable insights into the gap between AI’s promise and its practical implementation.

Particularly, he spoke about both the opportunities and the complex challenges of scaling AI.

This is especially true in a country like India – one with diverse languages and cultures. But the truth is, there are many such nations across the globe – and that is what caught my attention.

You can watch the video here:

Or read on for my notes and opinions on what he said.

Key Takeaways

  1. AI is not a magic bullet — real implementation is hard work.
  2. Consumer AI is easier than enterprise AI; public sector AI is the toughest.
  3. India’s unique digital infrastructure positions it to lead in AI usage, not development.
  4. Trust, data, and contextual language models are key to inclusive and scalable AI.
  5. AI in India is being built with a mission: to improve lives, not just convenience.

The overhyped promises of AI

Nilekani began by contextualizing the current AI fever within the broader history of technological hype cycles. “Hype and technology have always gone together,” he observed, reminding us how similar patterns emerged with cloud computing and cryptocurrency. Each technological wave arrived with grandiose claims – cloud computing was heralded as “the greatest thing since sliced bread.”

“Five years back, the crypto bros told us that they’ll solve world peace and world hunger with crypto. I don’t know where that went.”

He emphasized that while AI does have transformative potential, “the hype levels are unprecedented,” making it necessary to temper expectations.

Now, AI has taken center stage with even more extravagant claims of transforming everything for everyone. However, as Nilekani pointedly noted, significant challenges exist in building AI at scale. Making it work for everyone is difficult. The hype often glosses over these challenges.

Let’s understand each of the AI adoption challenges he mentions:

AI is complex, not just code

Developing AI at scale is about much more than clever algorithms. It’s about real-world complexity — infrastructure, user experience, governance, institutional inertia, and human ego.

“Internal politics plays a part… egos also apply to the world of AI,” he remarked.

I think AI companies, under pressure from the capital they have raised, push to get AI adopted regardless of the risks.

Also, I found a recent Microsoft study that found top AI agents fail on over 50% of SWE-bench Lite debugging tasks, even with frontier models.

Trusting non-human intelligence: can enterprise AI make decisions?

What truly sets AI apart from earlier technological revolutions is a profound shift in trust dynamics.

For the first time, we’re being asked to place our trust in non-human intelligence for decision-making.

“Earlier technology was deterministic, predictable,” Nilekani explained.

“Now we are essentially expecting the machine to make decisions, and there’s a huge leap of confidence, a huge leap of faith in the ability of technology to take us forward.”

This trust leap becomes even more challenging because of our asymmetric standards:

“We are far more forgiving of human error but much less forgiving of machine error.”

Nilekani illustrated this with a powerful example. We accept hundreds of thousands of road fatalities caused by human drivers as an unfortunate reality. Yet, a single death caused by an autonomous vehicle can send developers “back to the drawing board for two years.” This higher standard for machine performance makes AI adoption particularly difficult at scale.

The consumer vs enterprise AI divide

Consumer AI applications can afford occasional hallucinations or mistakes. A chatbot that suggests the wrong dinner recipe is not the end of the world. But in enterprise or public applications, even a 1–2% error rate can severely damage brand reputation or citizen trust. Nilekani explained that enterprises must ensure their AI systems don’t give wrong answers because their brand reputation is at stake.

“Enterprises are putting their brand behind an offering… If they provide AI at scale and it has 1 or 2% error… that affects the brand.”

This brand risk is compounded by the absence of robust guardrails that could ensure error-free machine performance. It explains why enterprise AI implementation is progressing more slowly than the hype suggests.
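
To see why a "small" error rate is so damaging at enterprise scale, here is a quick back-of-the-envelope calculation. The interaction volume is my own illustrative assumption, not a figure from the talk:

```python
# Illustrative arithmetic: what a 1% error rate means at enterprise scale.
# Assumed volume: 10 million customer interactions per month (hypothetical).
monthly_interactions = 10_000_000
error_rate = 0.01  # the 1% figure Nilekani mentions

wrong_answers_per_month = int(monthly_interactions * error_rate)
print(f"At a {error_rate:.0%} error rate: {wrong_answers_per_month:,} wrong answers per month")
# Each of those is a brand-damaging interaction that a human team
# must catch, explain, or remediate.
```

A 1% error rate sounds small until it becomes a six-figure count of wrong answers every month, each carrying the brand's name.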

Meanwhile, corporate boards are demanding immediate AI transformation, creating “unrealistic expectations” that further complicate implementation efforts.

Public sector AI: The most difficult frontier

If enterprise AI is hard, government AI is herculean. Three key factors make public sector AI particularly challenging:

  • Structural constraints: Government ministries and departments often operate as separate territories. This impedes the data sharing that is vital for effective AI implementation.
  • Public trust concerns: The ethical implications and potential for bias in government AI systems demand extraordinary care and thoroughness.
  • Bureaucratic risk aversion: Officials must sign off on AI implementations. They need assurances there will be ‘no blowback’ in case the AI makes mistakes.

This creates a risk-averse environment where innovation is stifled by caution.

“If data is the lifeblood of AI, we have to find a way to bring all the data together irrespective of which part of the government it comes from.”

Digital Public Infrastructure (DPI) + AI: India’s Recipe for Scaling AI

Understanding how India has adopted and scaled technology in the past offers hints for AI adoption.

India’s unique approach combines Digital Public Infrastructure (DPI) with AI, creating what Nilekani calls a ‘virtuous cycle’ where each enhances the other:

  • AI improves DPI: For example, AI is used for liveness testing in Aadhaar authentication. It reduces errors in UPI’s 18 billion monthly payments. It also enables voice payments.
  • DPI enables AI at scale: Bhashini, an API-based platform for Indian languages, already processes 300 million inferences monthly across 36 languages.
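
That scale figure is easier to grasp as a sustained request rate. A rough conversion of 300 million monthly inferences, assuming a 30-day month:

```python
# Convert Bhashini's reported 300M monthly inferences into a sustained rate.
monthly_inferences = 300_000_000
seconds_per_month = 30 * 24 * 60 * 60  # assuming a 30-day month

rate_per_second = monthly_inferences / seconds_per_month
print(f"~{rate_per_second:.0f} inferences per second, sustained")
```

That works out to over a hundred inferences every second, around the clock, across 36 languages.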

This infrastructure birthed homegrown giants like Meesho, PhonePe, PhysicsWallah, and Zepto, all built on the DPI that Nilekani believes will become the foundation for AI.

One of Nilekani’s most compelling arguments is that India is not aiming to build the most advanced AI models. It is aiming to use AI at scale.

“While we expect AI adoption to take 10–15 years globally… in India, it can happen much faster. India will be the place where this stuff gets used at scale of 1 billion people — like we have shown before with Aadhaar and UPI.”

AI for the masses, not the elites

Nilekani identified three key factors that will drive AI adoption to reach a billion Indian users:

  1. Language expansion: Moving beyond Hindi and English to include all major Indian languages
  2. Interface evolution: Shifting from keyboard and touch to voice and video interfaces
  3. Knowledge enhancement: Advancing from static information to dynamic, contextual intelligence that delivers real-time, personalized help

These aren’t just lofty goals — they’re being implemented in labs and pilot programs across India.

A recurring theme in Nilekani’s talk was the ethical orientation of Indian AI development.

“It’s not about dumbing down people. It’s about using AI to improve the capacity, capability, and potential of human beings.”

India’s AI story is not Silicon Valley-style disruption. It’s careful, targeted augmentation. This applies to education, agriculture (via open agri networks), and public services.

The path forward: practical steps for improving AI adoption

Nilekani concluded by emphasizing that AI implementation is “not a slam dunk” or “magic” that will transform everything overnight. Instead, success requires:

  1. Focus on narrow use cases: Identifying specific transactions and use cases that can work at scale.
  2. Iterative improvement: Creating systems that evolve through better data collection and continuous refinement.
  3. Responsible deployment: Ensuring AI is safe, secure, unbiased, and responsible.
  4. Cost efficiency: Making AI affordable at “one rupee per inference” to enable mass adoption.
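
The "one rupee per inference" target matters because costs compound brutally at population scale. A hedged sketch, where the usage assumptions are mine and not Nilekani's:

```python
# Why per-inference cost gates mass adoption at a scale of 1 billion users.
users = 1_000_000_000
inferences_per_user_per_day = 5       # assumed usage, purely for illustration
cost_per_inference_rupees = 1.0       # the target Nilekani states

daily_cost_rupees = users * inferences_per_user_per_day * cost_per_inference_rupees
print(f"Daily cost at this usage: Rs {daily_cost_rupees:,.0f}")
# Halving the cost per inference halves this entire bill, which is why
# cost efficiency, not just model quality, determines who gets to use AI.
```

Even at one rupee, serving a billion users is a multi-billion-rupee daily bill, so driving unit cost down is as important as any model improvement.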

In closing, Nilekani offered a clear message:

“AI is not easy… not some Kool-Aid… It’s about doing it properly.”

If India can navigate these challenges with its trademark jugaad and inclusivity, Nilekani’s vision of AI for Bharat — and not just for Silicon Valley — could become a powerful global template.

What do you think about practical AI adoption?

The USA and China are locked in an AI race to launch ever-better models, while the rest of the world is thinking about practical uses of this technology.

As a consumer, I am using AI for my daily workflows.

But even as a small business, I have not adopted full AI automation. I expect humans (me, editors, or the writers) to fact-check and vet the AI-generated output. I can understand Nandan sir’s point: this must be difficult to scale.

I think one small step at a time is a good approach: adopt AI for a few processes, then scale as it gets better.

Learn more about real-world applications of AI across industries:

  • Generative AI for communities and CPaaS – use cases and trends – read
  • Generative AI for contract drafting and review – expert insights and trends – read
  • Generative AI for Consulting – ChatGPT prompts, Big-4 examples, AI tools list – read
  • Generative AI For HR Recruitment – 10 use cases, 50 AI Tools, 5 Examples of companies using AI – read
  • 8 Generative AI for eCommerce use cases with 40 AI tool recommendations – read
  • When to use Generative AI vs Traditional AI – examples, limitations, and benefits – read

We will share more such insights from AI and technology leaders – subscribe to stay updated!

This blog post is written using resources of Merrative. We are a publishing talent marketplace that helps you create publications and content libraries.

Get in touch if you would like to create a content library like ours. We specialize in the niche of Applied AI, Technology, Machine Learning, or Data Science.
