DigiPHrame AI

Automating Digital Public Health Intervention Evaluation Using Artificial Intelligence (Research cluster: RC 2: Theory and Frameworks, AI and Technology, Evaluation)

Background

Digital public health (DiPH) interventions, such as mobile apps and wearable devices, are proliferating rapidly and demand structured evaluation to ensure safety and effectiveness. To address this, the Leibniz ScienceCampus Digital Public Health (LSC DiPH) developed DigiPHrame, a comprehensive evaluation framework comprising 13 domains and 207 criteria. However, manual application of this framework remains resource-intensive and time-consuming, limiting its practical uptake.

Objectives and research questions

The DigiPHrame AI project aims to address current evaluation bottlenecks by developing an automated, web-based assessment tool powered by large language models (LLMs). The primary goal is to increase the scalability, reliability, and accessibility of DigiPHrame-based assessments across technical, ethical, legal, social, and clinical dimensions.

Methods

The project will develop an application that follows a structured five-step workflow:

  1. Setup: Users will create a project and configure their preferred LLM provider (cloud models via API or local models through Ollama).

  2. Upload: Users will upload intervention documentation (PDF, DOCX, TXT, Markdown, or source code ZIPs).

  3. Ingestion: The system will ingest all materials into a hybrid retrieval architecture combining vector-based and graph-based retrieval-augmented generation (RAG).

  4. Evaluation: Thirteen domain-specific AI personas, equipped with regulatory and standards knowledge (WHO guidelines, GDPR, etc.), will systematically assess the intervention, producing evidence-informed scores, narrative justifications, and targeted recommendations.

  5. Export: Results will be reviewed and exported as CSV or JSON reports.

To validate the tool, the research team will evaluate four to five real-world DiPH interventions using DigiPHrame AI and compare the AI-generated assessments against independent human expert ratings. The quality of the RAG pipeline will be assessed in parallel using the integrated RAGAS meta-evaluation module.
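The AI-versus-human comparison could be summarized with simple concordance measures. The sketch below is purely illustrative (the source does not specify the analysis method or scoring scale): it computes exact agreement and mean absolute difference over paired per-criterion scores, here assumed to lie on a 0–3 scale.

```python
def agreement(ai_scores, human_scores):
    """Return (exact agreement rate, mean absolute difference) for paired
    per-criterion scores from the AI tool and a human rater."""
    assert len(ai_scores) == len(human_scores) and ai_scores
    n = len(ai_scores)
    exact = sum(a == h for a, h in zip(ai_scores, human_scores)) / n
    mad = sum(abs(a - h) for a, h in zip(ai_scores, human_scores)) / n
    return exact, mad

# Hypothetical scores for six criteria on an assumed 0-3 scale:
ai = [3, 2, 0, 1, 2, 3]
human = [3, 2, 1, 1, 2, 2]
exact, mad = agreement(ai, human)
# Two of six pairs disagree, each by one point.
```

A fuller analysis would likely use chance-corrected statistics (e.g. weighted kappa) across all 207 criteria, but the pairing logic is the same.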

Expected results

The project will yield a validated DigiPHrame AI prototype, demonstrating the alignment between AI-generated and expert human evaluations. Expected outcomes include RAGAS-evidenced retrieval quality metrics, validation findings, and the integration of the tool into DigiPHrame v1.3, accelerating the framework's adoption across the LSC DiPH network and the wider digital public health sector.

Duration

September 2025 – September 2026

Research team

  • Dr. Furqan Ahmed, Department of Prevention and Evaluation, Leibniz Institute for Prevention Research and Epidemiology – BIPS

  • Dr. Laura Maaß, University of Bremen, SOCIUM Research Center on Inequality and Social Policy

  • Dr. Tilman Brand, Department of Prevention and Evaluation, Leibniz Institute for Prevention Research and Epidemiology – BIPS

  • Prof. Dr. Hajo Zeeb, Department of Prevention and Evaluation, Leibniz Institute for Prevention Research and Epidemiology – BIPS

  • Prof. Dr. Ansgar Gerhardus, Department of Health Services Research, Institute for Public Health and Nursing Research, University of Bremen

Project type

Seed-Money Project funded by the Leibniz ScienceCampus Digital Public Health (LSC DiPH)

Contact

Dr. Furqan Ahmed
Leibniz Institute for Prevention Research and Epidemiology – BIPS

E-Mail: ahmedf@leibniz-bips.de

