Grzegorz Jacenków

Hi, I'm Greg - Chief Scientist at VitVio, building the world’s first AI-powered operating theatre system.

I’m an AI scientist with a background spanning research and product at Amazon, CERN, and Microsoft. I’ve published work on multimodal and medical AI, applying computer vision and machine learning to improve safety, efficiency, and understanding in clinical environments. At VitVio, I lead the development of the world’s first AI-powered operating theatre system, combining perception, reasoning, and workflow intelligence. I was previously a PhD student at The University of Edinburgh and was recognised in Forbes 30 Under 30 for advancing healthcare AI.

Resume

Education

Ph.D. Engineering (Multimodal Machine Learning)

The University of Edinburgh
(2018 – 2022, interrupted)

I was a PhD student researching deep learning for clinical decision support, focusing on integrating imaging and non-imaging data for diagnostic modelling. The programme was sponsored by Canon Medical Research Europe.

M.Sc. Artificial Intelligence

The University of Edinburgh
(2017 – 2018)

I focused on machine learning, computer vision, and distributed systems. My dissertation explored modality-invariant segmentation of cancer structures from brain MRI.

B.Sc. Computer Science with Business and Management

The University of Manchester
(2013 – 2017)

I graduated with a First-class degree. My final project focused on efficient deep learning models for real-time computer vision, and I also took part in hackathons, including winning the TechCrunch Disrupt Hackathon.

Selected Experience

Chief Scientist

VitVio
(May 2024 – present)

I am leading the development of the world’s first AI-powered computer vision platform for operating theatres, combining perception, surgical workflow understanding, and real-time clinical intelligence. I am responsible for scientific direction, research strategy, and the productisation of AI.

Data Science Manager

Amazon
(August 2022 – September 2024)

I led a team at Amazon Ring developing large-scale personalisation, recommendation, and conversational AI systems used by millions of customers worldwide.

Research Intern

Canon Medical Research Europe
(April 2021 – June 2021)

I developed multimodal NLP-vision models for clinical data and explored knowledge graphs to capture medical context, improving the interpretability and detail of downstream modelling.

Technical Student

CERN
(September 2015 – August 2016)

I was part of the Technical Student programme, working at INSPIRE-HEP on author disambiguation, using machine learning to correctly attribute scientific publications to their respective authors.

UX/UI Design Intern

Microsoft
(July 2014 – September 2014)

I was part of the UX/UI team designing and implementing native applications for desktop and mobile platforms, focusing on seamless user experience and cloud integration.

Publications

Patent Application: Method of Determining the Position of an Object in a 3D Volume
Kowalewski, K., Jacenków, G., Rennert, P.
GB Patent Application GB2414339.8A. Google Patents
Method for determining object position within 3D volumes using advanced imaging techniques.

Privacy Distillation: Reducing Re-identification Risk of Diffusion Models
Fernandez, V., Sanchez, P., Pinaya, W.H.L., Jacenków, G., Tsaftaris, S.A., Cardoso, M.J.
MICCAI 2023, Deep Generative Models Workshop. arXiv
Teacher–student training with filtered synthetic data lowers re-identification risk while preserving downstream utility for generative models.

Indication as Prior Knowledge for Multimodal Disease Classification in Chest Radiographs with Transformers
Jacenków, G., O'Neil, A.Q., Tsaftaris, S.A.
ISBI 2022, Oral Presentation. arXiv
Uses referral "indication" text to guide X-ray interpretation, improving multimodal classification over image-only baselines.

Improving Image Representations via MoCo Pre-training for Multimodal CXR Classification
Dalla Serra, F., Jacenków, G., Deligianni, F., Dalton, J., O'Neil, A.Q.
MIUA 2022. Springer
Demonstrates that contrastive MoCo pre-training strengthens visual backbones inside image–text models for chest X-ray tasks.

Patent: Data Processing Apparatus and Method
Jacenków, G., Tsaftaris, S.A., O'Neil, A.Q., Lisowska, A.
US Patent US11610303B2. Google Patents
Methods and apparatus for efficient data handling in medical imaging pipelines.

INSIDE: Steering Spatial Attention with Non-Imaging Information in CNNs
Jacenków, G., O'Neil, A.Q., Mohr, B., Tsaftaris, S.A.
MICCAI 2020, Oral Presentation. arXiv
Introduces a conditioning mechanism that injects structured priors to spatially steer attention, improving localisation and robustness.

An Image is Worth More Than 1600 Labels: What Do Images Tell Us?
Jacenków, G., Kochkina, E., Madabushi, H.T., Vidgen, B., Liakata, M., Margetts, H., O'Neil, A.Q., Tsaftaris, S.A.
NeurIPS 2020, Hateful Memes Challenge — Top 1% submission.
Top-ranked multimodal system combining OCR-derived text and visual cues for meme hate-speech detection.

Conditioning Convolutional Segmentation Architectures with Non-Imaging Data
Jacenków, G., Chartsias, A., Mohr, B., Tsaftaris, S.A.
MIDL 2019, Extended Abstract. arXiv
Evaluates concatenation vs feature-wise modulation to inject tabular priors into segmentation networks; conditioning yields gains in low-data regimes.

FIRE: Unsupervised Bi-directional Inter-modality Registration Using Deep Networks
Wang, C., Papanastasiou, G., Chartsias, A., Jacenków, G., Tsaftaris, S.A., Zhang, H.
Preprint. arXiv
Unsupervised framework with inverse-consistency to align images across modalities without ground-truth correspondences.