Tag: AI

Notes en passant: how AI could unlearn in HEOR Modelling

In a recent paper, Tinglong Dai, Risa Wolf, and Haiyang Yang wrote about unlearning in Medical AI.

With ever more compute and storage thrown at Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in general, the capacity to “memorise” information grows with each generation of LLM. This is further accelerated by the ability to add specific details to generalist LLMs using Retrieval-Augmented Generation (RAG) and agents (e.g., with the ability to query real-world systems at the interface with the physical world).
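To make the RAG idea concrete, here is a minimal, self-contained sketch (a naive word-overlap retriever stands in for embeddings and a vector store; the documents, query, and wording are invented for illustration, not taken from any real system):

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG).
# Toy documents and a word-overlap "retriever"; a real system would use
# embeddings, a vector store, and would send the prompt to an LLM.

documents = {
    "dosing": "Drug X is dosed at 50 mg twice daily in adults.",
    "utility": "The EQ-5D utility for stable disease was 0.78 in trial ABC.",
    "costs": "Administration costs were estimated at 120 EUR per cycle.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (stand-in for embedding similarity)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Augment the user question with the retrieved snippets before calling the LLM."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    print(build_prompt("What utility value was reported for stable disease?"))
```

The point of the sketch is simply that the generalist model never changes: the “specific details” live outside it and are injected into the prompt at query time.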

LLMs are learning more, but what about unlearning? Dai and colleagues didn’t draw the analogy with human memory: our capacity to learn more relies, in part, on our capacity to forget, to reorganise, to summarise, and to prioritise what we have learnt. Sleep and stress play a role in this reorganisation of information; this was the overarching topic of my Ph.D. thesis [link]. I will de-prioritise the visual cues along the path to a bakery if I no longer go to that bakery (“unlearning”). However, practising navigation to the bakery improved that skill, and this improvement will serve me later when I need to find my way somewhere else (something I would call “secondary learning”). It may seem we are diverging from AI, but Dai and colleagues actually open their paper with the possibility, under the EU GDPR, for a patient to have their data removed from a database, and they wonder how this is technically feasible with LLMs (where data is not structured as in a traditional relational database and where the way information is retrieved is often opaque).

The “unlearning” process in LLMs can be considered at three nested levels: the algorithmic, the legal, and the ethical.
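To give a flavour of the algorithmic level only, here is a toy sketch of one family of approaches, approximate unlearning by gradient ascent on a “forget set”. It uses a tiny logistic-regression model and synthetic data; it is not the method discussed by Dai and colleagues, and unlearning in a real LLM is considerably harder:

```python
import numpy as np

# Toy sketch of approximate "unlearning" via gradient ascent on a forget set.
# Synthetic data and a logistic-regression model stand in for the real problem;
# purely didactic, not the approach described in the paper.

rng = np.random.default_rng(0)

# Synthetic binary-classification data standing in for individual records.
X = rng.normal(size=(200, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = ((X @ true_w + rng.normal(scale=0.5, size=200)) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, X, y):
    """Gradient of the average logistic loss with respect to the weights."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

# 1) "Learn": ordinary gradient descent on all records.
w = np.zeros(5)
for _ in range(500):
    w -= 0.5 * grad(w, X, y)

# 2) "Unlearn": a few records must be forgotten (think of a GDPR erasure request);
#    run a few gradient *ascent* steps on their loss to push the model away from them.
forget_X, forget_y = X[:10], y[:10]
for _ in range(20):
    w += 0.1 * grad(w, forget_X, forget_y)  # note the + sign: ascent, not descent

retained_acc = ((sigmoid(X[10:] @ w) > 0.5) == y[10:]).mean()
forgotten_acc = ((sigmoid(forget_X @ w) > 0.5) == forget_y).mean()
print(f"Accuracy on retained records:  {retained_acc:.2f}")
print(f"Accuracy on forgotten records: {forgotten_acc:.2f}")
```

The legal and ethical levels are, of course, not something a few lines of code can settle.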

Continue reading “Notes en passant: how AI could unlearn in HEOR Modelling”

What to look for at ISPOR25 – Artificial Intelligence

After Modelling and Regulations & Pricing, and just a few days before ISPOR25, here is my take on the potentially interesting sessions on Artificial Intelligence (AI, which here generally means the use of Generative AI, or GenAI, in HEOR).

First, Sven Klijn, William Rawlinson, and Tim Reason are again offering their introductory course on Applied Generative AI for HEOR. Last year, I took it in Barcelona, and it was nice. By “nice” I mean that, although I didn’t learn much beyond the authors’ previous presentations and my own experience, it was a great course for beginners because it struck the right balance between theory (which too many sessions end up only covering) and practical examples. Don’t expect hands-on exercises (that would take too long, and the course synopsis doesn’t promise any either). But “nice” also means that the presenters dared to show actual working code, with all the humility that implies. This year, they mention they’ll cover Retrieval-Augmented Generation (RAG) and agents. Hopefully, their coverage of these topics will be as good as last year’s coverage of the others.

Note that there is another course on AI and its use in Real-World Evidence (RWE) Research. I have never attended this one, but I hope the instructors give the audience practical guidance that is independent of the AI tool their company is selling.

ISPOR’s key areas of focus for AI in HEOR (source)

Now on to the sessions! After several ISPOR conferences whose sessions were filled with hype from AI enthusiasts and AI deniers alike, we are slowly reaching an “âge de raison” (age of reason). However, GenAI is still relatively new, and the sessions reflect the need to cater to all audiences.

For beginners (in AI), a few sessions will introduce GenAI and its use in HEOR. Even if hidden (intentionally or not), GenAI relies on prompts and prompt engineering; one session will present an overview of this technique. A second session will give an overview of the progress and challenges brought by GenAI.
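As a purely hypothetical illustration of what prompt engineering can look like in an HEOR context (the template, field names, and abstract below are mine, not taken from any session), a structured extraction prompt might be built like this:

```python
# Hypothetical example of prompt engineering for a small HEOR task:
# a structured prompt asking an LLM to extract cost-effectiveness results
# from an abstract. Template and fields are illustrative only.

ABSTRACT = (
    "In a Markov model over a lifetime horizon, treatment A yielded 0.42 "
    "incremental QALYs at an incremental cost of EUR 12,600 versus treatment B."
)

PROMPT_TEMPLATE = """You are assisting with a health economics literature review.

Task: extract the fields below from the abstract. If a field is not reported,
answer "not reported". Respond as JSON only.

Fields:
- model_type
- time_horizon
- incremental_qalys
- incremental_cost
- icer

Abstract:
\"\"\"{abstract}\"\"\"
"""

prompt = PROMPT_TEMPLATE.format(abstract=ABSTRACT)
print(prompt)  # in practice, this string would be sent to the LLM of your choice
```

Even this small example shows why prompt design matters: the instructions, the allowed answers, and the output format all constrain what the model returns.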

For more advanced AI users, most sessions will cover the newest tools. One session will discuss reliability in LLMs and prompting (as a side note, I will be interested in the Aide solutions that have been teased for some time now). Judging from another session’s title, advances in GenAI should be presented; however, the abstract does not mention the latest trends, such as agents and function calling. For these, one should probably attend the session specifically on agents or this other session on RAG (both featuring one of the same presenters).

Another angle is that of AI applications. Literature reviews and AI will be covered in two sessions (AI-Assisted Literature Reviews: Requirements and Advances and Leveraging Automated Tools for Literature Reviews in Health Economics and Outcomes Research: Opportunities, Challenges, and Best Practices). RWD/RWE will be covered in three sessions: Identifying Gaps and Establishing a Development Plan for Consensus Real-World Data Standards and two commercial sessions (054 and 048; disclaimer: the latter is from my current employer). Health preferences and AI have their own sessions, as do rare diseases and AI (with no fewer than three pharma companies as presenters!).

Finally, the most practical sessions, IMHO, will be the Research Podiums, as they should marry the technological approaches with the domain approaches. Interestingly, the first of these sessions, The Power and Pitfalls of AI in Health Data Analysis, presents only posters using NLP and machine learning (i.e., no GenAI per se). The second session, AI-Assisted Literature Reviews: Requirements and Advances, is focused on literature reviews. This year, it looks like there will be no session specifically focused on Modelling; my opinion is that either no significant progress has been made (compared to previous ISPOR conferences) or that progress is now kept in-house (for pharma’s own use or for consultants’ clients).

Did I miss any important sessions? Do you have another take on the sessions at this ISPOR conference, or on AI in HEOR? Although I enjoy a good quasi-philosophical debate on the good and evil of AI in HEOR, I’m happy to see practical applications being presented and discussed 🙂