June 2025

AI won't replace biotech analysts

LLMs are powerful and improving fast. But a skilled analyst with AI will outperform AI alone.

AI is powerful and improving fast. People are anxious that AI will replace analyst work.

Not just junior analyst roles, but any job that involves analyzing data and making judgments based on that data.

That includes investors, business development professionals, bankers, and consultants.

Haven't used LLMs in a while? Here's a quick way to see how they've improved

If you haven't followed LLMs over the last few months, here's a quick way to see how much things have improved:

Open up ChatGPT, select the 4o model, and give it this prompt:

"Based on the most recent interim topline results, estimate the probability that Summit Therapeutics' HARMONI trial of ivonescimab + chemo demonstrates a statistically significant benefit in overall survival when the trial data is mature."

Then open a new chat. This time, select the o3 model and the "Deep research" option (click "Tools" at the bottom of the prompt message bar, then "run deep research"). Enter the same prompt and compare the responses.

When we tested this, the deep research response was significantly better (you can see our test output later in this blog post).

LLM chatbots like ChatGPT and Claude have improved greatly over the last several months due to 1) the ability to search the web for real-time information and 2) the ability to "reason" by talking with themselves before responding (which allows LLMs to fact-check their answers and improve the quality of their output).

Web search and reasoning have largely removed two big limitations of earlier-generation LLMs: lack of access to current events and hallucinations. Hallucinations still occur, but much less frequently.

The worry that AI will replace analysts is valid. But if we look at recent productivity-enhancing technologies - the computer, the spreadsheet, and the internet - we see that they created, not eliminated, massive numbers of analyst jobs.

Analysts survived the computer, the spreadsheet, and the internet. Analysts will survive AI.

Companies that empower analysts with AI will outperform those that replace them

We think AI is powerful when used appropriately (in a way that acknowledges and mitigates its limitations). We think ignoring AI is a mistake.

We also think it's a mistake to replace humans with AI.

For complex analytical work, humans working with AI have a massive edge over AI without humans at the wheel.

The human edge is asking the right question, finding the critical counterintuitive insight, and hunting down answers with relentless curiosity. It is quickly learning what everyone else knows (or what the LLM knows), then always pushing to learn more.

Companies that replace humans with AI will lose that edge. Companies that empower humans with AI will push that edge even further.

Is there a point where superintelligent AI replaces that edge? Maybe. But we're nearly 3 years out from ChatGPT's launch, and analyst jobs haven't changed all that much despite massive improvements in AI capabilities.

How analyst work will change with AI

This post will be a tour of how analysts might be empowered in an AI future.

Rather than just talk about how AI can empower analysts (and all knowledge workers), we will show 4 demos of AI tools that can empower analysts.

These are just demos (except for this first one, which is a working tool we have built), but they could be developed into full products with current technology.

To make the examples tangible, they are all related to the work of biotech investment analysts. But these principles apply to many, if not most, knowledge-based jobs.

The goal is to illustrate how AI can empower analysts, and to help companies understand that empowering analysts with AI is a much better bet than replacing them with it.

AI as a research copilot

Interactive Demo
Research Copilot Demo: Snapshot from our Spinal Muscular Atrophy analysis. Each assumption in the valuation models is backed by trial data, SEC filings and other reputable sources. This is actually from a working product we've built (the rest of the demos are just prototypes).

Most people are using AI today as a research copilot: having it scrape the web or proprietary data sources to extract structured data, asking it to analyze that data, or just asking it questions about a particular pathway or disease area.

Existing AI tools are quite effective at these tasks, and only getting better. ChatGPT can extract financial information from PDFs, add it to an Excel worksheet, and create a simple valuation model. Claude can use its training data and web search to size drug markets or review literature. Gemini can read long, dense papers and accurately answer questions by retrieving specific data and reasoning with detailed nuance.

By linking together tools that exist today, you can run complex workflows with AI (a sketch of what one such workflow might look like follows the list below):

  • Reviewing all ASCO abstracts and extracting data relevant to your portfolio
  • Analyzing and ranking hundreds of assets from a GlobalData csv dump
  • Reading thousands of earnings call transcripts to identify subtle trends in prescriber behavior
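
For a flavor of what one such workflow can look like under the hood, here is a minimal Python sketch that loops over a folder of abstract text files and asks an LLM to pull out structured fields. The folder name, prompt, output fields, and model choice are illustrative assumptions, not a description of our actual tooling.

```python
# Minimal sketch: extract structured fields from a folder of abstract text files.
# Folder name, prompt, fields, and model are illustrative assumptions.
import json
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "You are a biotech analyst. From the abstract below, extract JSON with keys: "
    "drug, target, phase, indication, primary_endpoint, key_result. "
    "Use null for anything not stated.\n\nAbstract:\n{abstract}"
)

results = []
for path in Path("abstracts").glob("*.txt"):  # hypothetical folder of abstracts
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable model works here
        messages=[{"role": "user", "content": PROMPT.format(abstract=path.read_text())}],
        response_format={"type": "json_object"},  # ask for machine-readable output
    )
    results.append({"file": path.name, **json.loads(response.choices[0].message.content)})

print(json.dumps(results, indent=2))
```

The same loop-and-extract pattern scales from a handful of abstracts to thousands of transcripts; the hard part is deciding what to extract and checking that the extractions are right.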

The demo above is one such tool we've built. But you can create AI tools for a massive variety of tasks.

If all this is true, then won't AI replace humans?

Not so fast.

Are you smarter than AI?

Pop quiz!

Here is an example of a question that AI consistently gets wrong in our testing. Can you figure out the right answer?

When we had AI estimate annual list pricing for dozens of approved prostate cancer drugs, it was correct in most cases. But it always got this one wrong.

The AI was off by a factor of 4. Using a price that's 4x lower than the actual price in an NPV model will undervalue the drug by billions of dollars.

When we reviewed the AI's valuation of Erleada, we instantly noticed something was off. We had done work in prostate cancer before and knew Erleada should be more valuable than the AI's estimate. Because the AI lays out its assumptions, and because we can review its full Excel model, we were able to catch the error quickly. But if left to its own devices, the AI would be devastatingly wrong.
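
To make the stakes concrete, here is a toy risk-adjusted NPV calculation showing how a 4x pricing error flows through a valuation. Every number is made up for illustration; this is not our model and not Erleada's actual economics.

```python
# Toy risk-adjusted NPV: all inputs are hypothetical, for illustration only.

def simple_npv(annual_price, patients, prob_success, years=10, discount_rate=0.10, margin=0.7):
    """Discounted, probability-weighted profit from a single drug."""
    annual_profit = annual_price * patients * margin * prob_success
    return sum(annual_profit / (1 + discount_rate) ** t for t in range(1, years + 1))

true_price = 200_000       # hypothetical annual list price
ai_price = true_price / 4  # the kind of 4x underestimate described above

true_npv = simple_npv(true_price, patients=30_000, prob_success=0.8)
ai_npv = simple_npv(ai_price, patients=30_000, prob_success=0.8)

print(f"NPV at the true price: ${true_npv / 1e9:.1f}B")
print(f"NPV at the AI's price: ${ai_npv / 1e9:.1f}B")  # ~4x lower: a multi-billion-dollar miss
```

Because revenue scales linearly with price, the 4x input error passes almost straight through to the output: a mistake measured in billions.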

It is possible to build guardrails against these types of mistakes. But as we will see in the next demo, even avoiding these errors doesn't reduce the need for a human.

How AI research copilots will change an analyst's job

This illustrates what we think will be a big change in how analysts work. Analysts will shift from doing research to reviewing research.

This can make analysts more productive. It can also make them more insightful. Less time spent collecting and aggregating information means more time deeply researching the core questions.

This is analogous to what is happening in the world of software. Software engineers are using tools like Cursor, Windsurf and Claude Code (not to mention just the basic chat models) to write a lot of code, instead of writing the code manually.

Engineers still need to review the code before deploying it into production. But they can now spend more time building cool products instead of plugging away writing boilerplate code.

AI as a thought partner

Interactive Demo
Thought Partner Demo: This is an interactive demo, not an actual product. This entire demo was coded by AI, and AI generated all of the text and data. The information is not meant to represent any real company or drug.

With the advent of web-search-enabled reasoning models (like OpenAI's o3, Google's Gemini 2.5 Pro, and Anthropic's Claude Sonnet 4 and Opus 4 with extended thinking), AI is increasingly capable of sophisticated analysis.

Combining these capabilities with access to third party data sources (KOL call transcripts, GlobalData competitive landscape exports) and tools (like statistical modeling and data analysis tools) gives AI abilities beyond just conducting basic research and collecting data. It makes AI useful as a thought partner.

The above example shows how AI can be used to process news releases and update price targets. This is just a toy demo, but it is possible to use AI for this today.

And the tools will only get better.

But humans are still required -- not just to review the AI's work, but to exercise judgment around how to use AI analysis to make real-world decisions.

The example below highlights why human expertise and judgment are essential to getting the most out of these powerful LLMs.

Can LLMs predict trial outcomes?

We asked several leading AI models to predict the full results of Summit Therapeutics' much-watched HARMONi trial.

Last year, Summit's ivonescimab, an anti-PD-1xVEGF bispecific antibody, stunned the biotech world when released data positioned it as potentially better than Keytruda in NSCLC. Ivonescimab, originally developed by the Chinese biotech Akeso, helped raise the industry's awareness of the growing potential of Chinese biotechs to develop impactful, innovative medicines.

Last week, Summit announced disappointing data from its HARMONi study, which did not show a statistically significant improvement in OS as of the date of the analysis. Because an OS benefit is required for FDA approval, a huge question among investors is: when the data is mature, will there be a statistically significant OS benefit?

See how AI answered this question:

Are the LLM's answers perfect? No (ChatGPT o3's statistical analysis had some significant mathematical errors).

Are they helpful? Well, a 35-75% range for the estimated probability of trial success is not very helpful. But the overall analysis, and some of the sources cited, do have value.

How much value these answers provide depends on your role and expertise. If you have access to sophisticated tools for simulating clinical trials, or are a hedge fund analyst covering immuno-oncology, maybe the LLMs are not that helpful, except as a sanity check.

But if you don't specialize in these areas, need a quick answer, or are doing a first draft of an analysis, these responses are much more informative than pulling the average oncology trial success rates from the literature.

The AI answers are just a starting point. They give you a head start, but there is always room for you to run farther.

Is this a fair test of LLM capabilities?

These examples represent the floor of what these models are capable of. We did not test different prompts, ask follow-up questions, provide additional context or information to the models, or design any custom tools or workflows to help them accomplish the task more effectively.

Given the right tools and guidance, these LLMs could perform significantly better.

For example, an expert statistician could write custom software for statistical analysis of oncology study results, and give the LLMs access to this software through their "tool calling" functionality.
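
As a rough sketch of how that might work, the snippet below registers a deliberately crude overall-survival power approximation as a tool the model can call. The statistics, tool schema, and model name are illustrative assumptions, not a real statistical package.

```python
# Sketch: exposing a custom trial-statistics function to an LLM via tool calling.
import json

import numpy as np
from openai import OpenAI
from scipy.stats import norm

client = OpenAI()

def approx_os_power(hazard_ratio: float, events_at_final: int, alpha: float = 0.05) -> dict:
    """Schoenfeld-style approximation of the probability that a log-rank test on OS
    is significant at the final analysis, given an assumed true hazard ratio."""
    z_alpha = norm.ppf(1 - alpha / 2)
    mean_z = abs(np.log(hazard_ratio)) * np.sqrt(events_at_final / 4)
    return {"approx_prob_significant": round(float(1 - norm.cdf(z_alpha - mean_z)), 3)}

tools = [{
    "type": "function",
    "function": {
        "name": "approx_os_power",
        "description": "Approximate probability of a statistically significant OS result at the final analysis.",
        "parameters": {
            "type": "object",
            "properties": {
                "hazard_ratio": {"type": "number"},
                "events_at_final": {"type": "integer"},
            },
            "required": ["hazard_ratio", "events_at_final"],
        },
    },
}]

response = client.chat.completions.create(
    model="o3",  # assumed; any tool-calling model works
    messages=[{"role": "user", "content": "Estimate the probability that the HARMONi OS analysis "
                                          "is significant at maturity. Use the tool for the statistics."}],
    tools=tools,
)

# If the model chooses to call the tool, run it locally; in a full workflow you would
# feed the result back so the model can write its final answer.
for call in response.choices[0].message.tool_calls or []:
    print(approx_os_power(**json.loads(call.function.arguments)))
```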

Or, you could just keep pushing the AI to improve (the same way you would coach an intern). Ask the AI to identify errors in its reasoning, tell it to go deeper on a certain topic, challenge its thinking, etc.
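
In code, that coaching can be as simple as feeding the model's first answer back with pointed follow-ups. A minimal sketch (the prompts and model name are illustrative):

```python
# Sketch: coaching a model the way you would coach an intern, via follow-up turns.
from openai import OpenAI

client = OpenAI()
MODEL = "o3"  # assumed; any reasoning-capable model works

messages = [{
    "role": "user",
    "content": "Estimate the probability that HARMONi shows a statistically significant "
               "OS benefit at maturity. Show your reasoning.",
}]
first = client.chat.completions.create(model=MODEL, messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Follow-ups a human analyst would naturally ask, encoded as extra conversation turns.
for follow_up in [
    "Identify the three weakest assumptions in your analysis and how each could flip the conclusion.",
    "Go deeper on event maturity and regional enrollment mix, then revise your estimate if warranted.",
]:
    messages.append({"role": "user", "content": follow_up})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    messages.append({"role": "assistant", "content": reply.choices[0].message.content})

print(messages[-1]["content"])  # the challenged, revised estimate
```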

These techniques significantly improve the quality of the output because they leverage human expertise and judgment. Humans + AI > AI alone.

How AI thought partners will change an analyst's job

Using AI as a thought partner is a fast way to get multiple perspectives on a topic. Sometimes these perspectives conflict, but usually they are reasonable.

AI thought partners can bring new ideas to bear, offer expertise in areas where you are not an expert, and help challenge your own thinking. And they do this near-instantly, for essentially $0 marginal cost.

When using AI as a thought partner, the analyst must still review AI responses not just for factual accuracy, but for reasoning errors. And the ultimate judgment is up to the human.

AI as a superpower

Interactive Demo
Portfolio Manager Demo: This is an interactive demo, not an actual product. This entire demo was coded by AI, and AI generated all of the text and data. The information is not meant to represent any real investor, and company-specific information is hypothetical and AI-generated.

AI can expand your skillset. It can bump you from a 5/10 to a 7/10 in areas where you aren't an expert. But this amplification still requires human oversight and judgment.

Take quantitative portfolio management. Many biotech analysts understand that position sizing and risk management matter, but rely on their portfolio manager or CIO to implement sophisticated portfolio management techniques. AI can bridge this gap — helping you apply complex statistical methods, build risk models, or optimize portfolio construction. The better you understand how your alpha increases the portfolio's alpha, the more powerful you will be.
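
As one toy example of the kind of technique AI can help you reach for, here is a fractional-Kelly position-sizing sketch for a binary catalyst. The inputs and the quarter-Kelly haircut are illustrative assumptions, not investment advice and not a substitute for a real risk model.

```python
# Toy position sizing: fractional Kelly on a binary catalyst.
# All inputs are hypothetical; a real risk model involves far more than this.

def kelly_fraction(p_win: float, upside: float, downside: float) -> float:
    """Kelly-optimal fraction of capital for a position that returns +upside with
    probability p_win and -downside otherwise (both as fractions of the position)."""
    edge = p_win * upside - (1 - p_win) * downside
    return max(0.0, edge / (upside * downside))

p_success = 0.60   # analyst's estimated probability the readout goes well
upside = 0.80      # stock up ~80% on success
downside = 0.50    # stock down ~50% on failure

full_kelly = kelly_fraction(p_success, upside, downside)
position = 0.25 * full_kelly  # quarter-Kelly: a common haircut for estimation error

print(f"Full Kelly: {full_kelly:.1%} of capital; quarter-Kelly position: {position:.1%}")
```

The value is not the formula itself; it is that an analyst can now pressure-test sizing intuitions in minutes, then bring the result to their PM for a real discussion.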

AI can help analysts more deeply understand portfolio management. And help portfolio managers better appreciate the power of an analyst's fundamental research. It can tighten the feedback loop, reduce friction, and make an investment team more nimble.

But AI can't replace the PM, and it can't replace the analyst. An analyst won't risk-manage a whole portfolio any more than a PM will spend all day interviewing KOLs. Each role has its own edge: the expertise to push the AI farther than it could go without expert guidance.

How AI superpowers will change an analyst's job

AI democratizes access to specialized technical skills.

Instead of waiting for engineers to build a dashboard, you can prototype one yourself. Instead of outsourcing statistical work, you can test approaches quickly and iterate.

But this expertise-on-demand must be used with extreme caution. AI-generated code might run, but it may not be robust or secure. AI-built statistical models can use incorrect techniques or wrong formulas, or rest on flawed assumptions.

The easier it is to reach outside your area of expertise, the more important it becomes to understand the dangers of doing so.

In other words, you must know what you don't know. The most successful analysts will be those who use AI to rapidly learn and explore new areas, while realizing they must bring in human experts to validate, refine, or rebuild their AI-assisted work.

You won't replace your PM, or your engineering team. But you will probably make better use of their time.

AI as an opportunity creator

Interactive Demo
Allocator Demo: AI-assisted dashboard for LPs and allocators. Like the other demos, this is entirely AI-coded with hypothetical data. These are not meant to represent real investors or investments.

AI won't just change your job. It will change the jobs of everyone in your industry.

What will your job look like if AI levels up your business partners?

Many investors are already using AI to comb through patents, publications, and social media to identify promising investments before everyone else. What happens to deal sourcing when everyone has these tools?

If LPs have an X-ray view into your portfolio, how will LP fundraising change?

These changes will create a whole new set of opportunities for analysts.

Investing will still be a human-led business (imagine if AI made a high-conviction bet using the incorrect pricing calculation and pTS estimation we outlined above!). But AI will eliminate many of the frictions of the deal-making process. It will reduce unnecessary meetings. It will reduce reliance on heuristics like brand or social proof. It will reduce the cost and time required to make decisions.

AI won't replace human judgment, but it will reduce human friction. Less time winnowing the top of the opportunity funnel (for startups, investors, and LPs), and more time investing in high-impact relationships.

How AI as an opportunity creator will change an analyst's job

In a world where AI is used to empower (not replace) analysts, opportunities will expand.

If AI gives you deeper insight into potential investments, it will also give potential investments (and potential investors) a deeper insight into you.

Your business partners may change how they evaluate and transact opportunities. The market may reward different behaviors. But humans will still be the interface.

This use-case of AI is speculative -- if it does occur, it won't be for a while. But if you are bought into the idea of AI changing your job, it makes sense to consider how it might change related jobs.

Building a better biotech industry

It's no secret that the biotech industry (especially in the US) is facing many headwinds.

Declining R&D productivity. Drug pricing pressure. FDA reform. Macroeconomic uncertainty. Financial market volatility.

AI can also be a source of anxiety (we feel it too).

But we also think new AI tools offer a chance to reshape the industry.

These tools provide an opportunity to amplify and empower the sharpest minds in our industry: the scientists who develop our medicines, and the analysts who allocate capital toward that development.

Investing more in these people will lead to more innovative medicines, developed more efficiently. And the industry as a whole can become more effective at its core mission of funding promising science and inventing impactful medicines.

But replacing people with AI will result in continued stagnation (unless all-knowing superintelligence is invented, but that's not something we're planning for).

The future isn't predetermined. The choices we make today about how to integrate AI will determine whether it becomes a tool that amplifies human expertise or one that eliminates it. We have the chance to build an industry where the sharpest analytical minds are empowered by the best AI tools.

That's the future we're working toward. If this resonates with your vision of how AI should develop in biotech, we'd love to hear from you.

How can AI help you?

How does AI help you? What are its limitations? What keeps you up at night?

We are learning about this (exciting and terrifying) technology just like you. We want to know what you think.

Let's compare notes