India has a Chance to Lead the World on Ethical AI
As artificial intelligence accelerates globally, most governments are focused on regulation, risk mitigation and compliance. But a growing body of thinking suggests that rules alone won’t be enough to guide the systems shaping economies and societies.
Nicole Junkermann – AI needs a moral framework
Drawing on her work with technology companies and policymakers across Europe and Asia, investor and NJF Holdings founder Nicole Junkermann argues that the next phase of AI development will depend less on oversight frameworks and more on the values embedded into the technology itself.
“Regulation can limit harm after the fact,” she says. “But it doesn’t determine what gets built, or why. That’s a question of incentives, design choices and ultimately moral judgement.”
India is emerging as one of the most important testing grounds for this shift.
With its combination of engineering talent, digital infrastructure, entrepreneurial energy and public sector ambition, the country is not simply adopting AI. It is actively shaping how it is developed and deployed. Initiatives such as the IndiaAI Mission are accelerating investment into sovereign AI capabilities, from compute infrastructure to research and startups, with an explicit focus on responsible and inclusive deployment.
At the same time, a new generation of Indian AI companies is beginning to take shape. Firms such as Sarvam AI and Krutrim are focused on building models and systems tailored to India’s linguistic, cultural and economic context, rather than replicating Western approaches. Their focus is less on frontier scale alone, and more on real-world applicability.
That combination creates a unique opportunity.
While much of the global AI ecosystem is concentrated in the United States and China, India is not locked into either model. It has the space to define its own approach, one that integrates innovation with societal priorities from the outset.
“India doesn’t have to retrofit ethics into AI later,” Nicole Junkermann says. “It can build them in from the start. That’s a structural advantage.”
This matters because the limitations of current governance models are becoming clearer. Regulatory frameworks such as the EU’s AI Act focus heavily on classification, compliance and enforcement. These are necessary tools, but they are reactive by nature.
They struggle to address deeper questions: what kinds of systems should be built, what outcomes they should optimise for, and how trade-offs between efficiency, fairness and human agency should be resolved.
In practice, many of these decisions are already being made at the design stage by engineers, product teams and private companies.
That is where India’s opportunity lies.
By embedding ethical frameworks into education, research and product development, India can influence AI not just at the level of policy, but at the level of architecture. This includes aligning incentives for developers, setting expectations for corporate behaviour, and integrating ethical reasoning into technical training.
The implications extend beyond India itself.
As AI systems become globally distributed, the norms and principles embedded within them will travel across borders. Countries that shape those norms early will have disproportionate influence over how the technology evolves.
“AI is becoming a foundational layer of the global economy,” Nicole Junkermann adds. “The question is not just who builds it, but what values it reflects.”
The global race in AI is often framed around capability and scale. But a different contest is now emerging, one defined by trust, usability and alignment with society. The United States may lead in frontier models, and China in rapid deployment. India’s opportunity is to lead in how AI is applied, integrated and governed in the real world.
For policymakers, investors and technology leaders, the message is clear. The next phase of AI competition won’t be defined solely by compute, capital or capability. It will also be defined by trust.
And trust, in the age of intelligent systems, is ultimately a moral construct.
About Nicole Junkermann
Nicole Junkermann, born in 1980, is an international investor and entrepreneur focused on technology, artificial intelligence and life sciences. She is the founder of NJF Holdings, leading its venture arm NJF Capital and Gameday by NJF Holdings.
Through NJF Capital, she has built a portfolio of more than 40 companies, with a focus on early-stage investments in artificial intelligence, deep tech and life sciences. Notable investments include SpaceX, Rippling and Revolut, as well as Groq, where Nicole was an early and major investor prior to its recent acquisition by Nvidia.
For more information about Nicole Junkermann, The Human Code and NJF Holdings, visit NJFHoldings.com