At Sanofi, artificial intelligence is not a side project, but an expectation. That was the message in a candid discussion with Country Lead and Head of Pharma Liz Selby, Head of Medical James Scott, and Head of Corporate Affairs Luke Cornish, at this week's AI Health Summit.
The three described the company's adoption of AI as less like a pilot and more like an operating system for how work will get done.
Selby framed the moment as the industry's "industrial revolution, or bigger." That reality, she said, means leaders cannot simply delegate AI to others: "They have to show up, set the standard, and be accountable for outcomes."
Sanofi signalled its intent years ago when it elevated digital to its most senior executive group and then invested in the foundations, including governance, infrastructure, platforms, and data protection, needed to adopt AI safely at scale.
According to Selby, the company required leaders to build their own AI competence. She and her peers completed formal executive training to ensure they would not inadvertently become “blockers” due to capability deficits. That expectation now runs both ways, she said, top-down through executive development and bottom-up through grassroots adoption, with the aim of meeting “in the middle” on confident and responsible use.
The expectation is enforced day-to-day, not just in policy. Selby gave a simple example: this year, she required her leaders to complete their individual development plans within Concierge, the company's in-house generative AI platform. The result was faster turnarounds, tighter plans, and better conversations.
For James Scott, that expectation is translating into the medical affairs function. The internal AI platform now supports core medical work: operational policies, procedural queries, and advisory board minutes. Market research and customer insights are also integrated into a global gen-AI environment, enabling medical science liaisons and medical teams to interrogate evidence and shape strategy in hours, not weeks. None of that, he stressed, "just happened." Usage has grown exponentially because the tools have improved and leaders have made time for hands-on workshops to build competence.
Productivity gains are visible along the value chain. In R&D, Scott said Sanofi is targeting a reduction in clinical study report preparation times to roughly one-third of today's cycle, bringing regulatory filings and patient access forward. For trial feasibility and launch planning, the company is using population-health analytics on electronic health records to identify candidates with rare diseases and those with presymptomatic type 1 diabetes, shifting recruitment from hopeful to targeted.
Luke Cornish described a parallel shift in corporate affairs and advocacy, with large language models as a thinking partner, not a ghostwriter. Routine internal briefs can be machine-drafted. High-stakes ministerial letters are still human-crafted, but stress-tested by AI to anticipate objections and stakeholder views. Used lazily, he warned, these tools homogenise communication. Used deliberately, however, they pressure-test strategy and sharpen judgment. They also compress the front end of advocacy: in minutes, teams can map who has spoken on an issue, what they said, and where the openings lie, legwork that once took weeks.
None of this happens without guardrails. Cornish pointed to the company’s RAISE framework, which stands for Responsible AI at Sanofi for Everyone, covering transparency, data stewardship, human oversight, and environmental considerations. Reputation risk, he noted, can escalate faster than ever. Getting AI ethics wrong is not a problem that can be recovered from later, he said. Scott added that patient expectations around transparency and accountability align with the direction of regulation. When AI edges into Software-as-a-Medical-Device territory, explainability becomes a formal risk determinant, not just a virtue, he said.
Leadership expectation also extends to culture. Selby called change management “a massive part” of the work, including normalising the idea that non-experts can, and should, use AI to answer questions they once outsourced. The payoff is organisational capacity. When commercial and medical teams can self-serve accurate insights, specialists can focus on higher-value problems, she said.
The panel was clear-eyed about risk. Failures so far have been “refinements” rather than disasters, Selby said, because vigilance is built in and the company course-corrects early.
Externally, the human stays central. Cornish argued AI will increase the value of authentic engagement because both sides now turn up better prepared. Patients, he said, will remain sceptical about data use, but an opt-in future, where wearable and home-device data tangibly improve care, could make the benefits obvious and personal.
Asked to fast-forward three years, Selby hoped we would not be talking about AI as much as we would be using it. Leaders will be fluent, with concrete examples, failures learned from, and the technology quietly embedded in how value is created. That future only arrives if leadership expects it of themselves first, and then of everyone else, she added.

