Stuart Synakowski

Projects Overview

I’ve had the chance to work on a pretty wide range of projects that sit at the intersection of research and real-world impact. Most of my work lives in applied AI, machine learning, and computer vision, but some of my AI research pulls ideas from cognitive science and topological data analysis. I also had the chance to work on some computational biophysics projects when I was younger.

What you’ll find below is a high-level snapshot of the kinds of things I’ve been working on. Each project links out to more detail where I can share it. This page is very much a work in progress — I’m always adding, refining, and occasionally re-thinking how I explain things.

Corporate R&D

At P&G, my role has mostly been about exploring what’s new in AI and figuring out where it can actually create value for real business problems. Most of those problems live in accelerating clinical and consumer research.

In practice, that usually means building AI tools that help researchers move faster, scale their work, or see patterns they couldn’t easily see before. My typical workflow starts with deeply understanding how my collaborators already work — their processes, bottlenecks, and constraints — and then experimenting with AI techniques that might meaningfully improve those workflows.

I can’t share many of the details, but I try to give high-level overviews of the underlying ideas and approaches. The goal here isn’t to show off products, but to explain the concepts, trade-offs, and ways of thinking that drive the work.

GenAI Tools Tailored for Value Creation in Clinical and Consumer Research

Overview of GenAI Tools and Workflows

Large language models clearly accelerate research, but the path from utility to measurable value creation is still unclear. Tools like chatbots and vanilla RAG systems help with report generation and knowledge lookup, yet they are not well integrated into the real workflows clinical scientists and product researchers use to drive impact.

I’ve found that measurable value emerges when domain experts can orchestrate LLMs directly within their existing processes. Much of my work focuses on building tools that democratize this orchestration—making advanced LLM capabilities accessible, reliable, and easy to use for non-technical experts.

In practice, this has enabled users to automate tedious text-processing tasks, mine and analyze massive datasets, extract and structure relevant information, and generate hypotheses at scale. These tools have proven especially effective for analyzing large volumes of consumer transcripts to surface patterns in customer sentiment and prioritize product improvements. They have also helped structure and consolidate decades of internal clinical research to support claims or identify new research directions.

Key Themes

Democratizing LLM orchestration, building insight co-pilots for large-scale analysis, automated hypothesis generation, and ensuring reproducibility and interpretability in LLM outputs.

Read more about the concepts behind these GenAI tools

AI & Computer Vision for Skin Analysis

AI for Skin Analysis Overview

A lot of my work at P&G focuses on building computer vision and machine learning tools to better understand skin. I’ve developed a suite of methods for synthetic skin image generation to support perception studies, high-quality image capture for clinical research, appearance metrics for tracking treatment progress, and models that predict treatment response from selfies.

The work is deeply interdisciplinary. I collaborate closely with clinical and biomolecular scientists, product and consumer researchers, and statisticians to turn messy, real-world questions about skin into measurable, scalable insights.

I’ve been working in this space long enough that I could probably teach a course or write a book on it. If any of this sounds interesting, check out the links below for higher-level explanations of the core ideas.

Key Themes

Synthetic skin generation, perceptual modeling, skin appearance metrics, mobile image quality and calibration, optics and color science, first-principles computer vision, and predictive modeling for treatment response — with a healthy dose of cognitive science mixed in.

Read more about these projects

AI Systems for Consumer Research

AI for Consumer Research Overview

Earlier at P&G, I worked on applying more classical computer vision techniques to consumer research problems. A common challenge for product teams is understanding how people actually interact with products at a fine-grained level. To support this, I built tools that spatially track products and users to infer micro-actions during real product use.

I also worked on problems around product fit. Many consumer products physically interact with the body, so getting the geometry right matters. I developed photogrammetry-based methods to estimate specific regions of the human body and assess how well products fit real consumers.

A lot of this work took advantage of the sensing hardware built into iPhone Pro devices — including depth, camera, and motion capabilities — to bring more quantitative measurement into everyday consumer research.

Key Themes

First-principles computer vision and image processing, 3D modeling, action analysis, object detection, and human pose estimation.

Read more about these projects

AI Consulting for Sustainability Initiatives

AI for Sustainability Overview

One project I’m especially proud of was representing P&G as part of The Perfect Sort Consortium, a collaboration between industry partners and the National Test Centre for Circular Plastics (NTCP) focused on improving the sortability of plastic packaging waste using AI vision systems.

For a bit of context: in most developed countries, recyclable waste passes through a Material Recovery Facility (MRF), where automated vision systems play a critical role in identifying and sorting materials that still have economic value. One of the consortium’s core goals was improving plastic sorting — especially separating food-grade from non-food-grade plastics — which is a key bottleneck for building a truly circular plastics economy.

There are a lot of startups working on vision-based sorting in this space. My role was to help map out a long-term technical solution for AI waste sorting vision systems. I developed evaluation criteria for companies pitching their solutions to the consortium, and worked closely with plastics scientists, recycling experts, and packaging teams to translate real-world constraints into technical requirements.

I learned a lot about the technical, economic, and political challenges that still stand in the way of effective recycling. I also walked away with a long list of ideas for how AI could push us closer to a circular economy.

If you’re working in this space and need technical guidance, feel free to reach out — this is the kind of problem I’d happily spend my life working on.

Key Themes

AI consulting, AI for sustainability, vision-based waste sorting, circular economy, for-profit and non-profit collaborations for environmental impact.

See a video from demonstration day and read the short press release

AI Research Projects (Graduate Work)

The problem definitions in my PhD work were intentionally broad (I had a mild case of scope creep because the ideas were interesting). In hindsight, several of these projects probably could have been split into multiple papers, but as my advisor liked to say, “don’t slice the salami.” Feel free to check out the work below. I still think the ideas hold up and remain relevant to modern AI systems.

Here is a more detailed overview of my PhD thesis

A Recurring Problem in AI: Sharing Knowledge Between Systems

A central theme in my research is how AI systems can share knowledge across tasks. Humans are remarkably efficient at learning new tasks from very little data, while modern AI systems often require massive datasets and compute. Work by François Chollet highlights this gap and motivates the need for representations that allow systems to reuse knowledge acquired from prior tasks. While approaches like transfer learning and meta-learning exist, it’s still unclear what knowledge is being shared and how it is represented inside modern models. My PhD work explored concrete ways to represent and leverage shared knowledge in computer vision systems.

Leveraging the Structure of Deep Neural Networks that Learn

AI for Topology Overview

One line of work examines the topological structure of deep networks. Certain topological properties of learned representations correlate strongly with generalization performance, independent of model architecture or task. I developed fast, differentiable proxies for these properties, enabling early stopping, task similarity estimation, and topological constraints on learning. Feel free to check it out on arXiv.
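To give a loose flavor of what a fast topological proxy can look like (an illustrative sketch, not the specific quantities from the paper): the 0-dimensional persistent homology of a point cloud can be read off a minimum spanning tree, since component death times are exactly the MST edge lengths. Assuming NumPy and SciPy:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def total_persistence_proxy(activations):
    """Fast proxy for total 0-dimensional persistence of a point cloud.

    The death times of connected components in 0-dim persistent homology
    equal the edge lengths of the Euclidean minimum spanning tree, so
    their sum is computable without a full persistence algorithm.
    """
    dists = squareform(pdist(np.asarray(activations, dtype=float)))
    mst = minimum_spanning_tree(dists)  # keeps the n-1 shortest connecting edges
    return float(mst.sum())

rng = np.random.default_rng(0)
acts = rng.normal(size=(50, 8))  # stand-in for a batch of layer activations
proxy = total_persistence_proxy(acts)
```

A quantity like this can be tracked across training epochs at negligible cost, which is what makes uses like early stopping practical.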

Computer Vision Research

Computer Vision Research Overview

A second line of work focuses on intent recognition. Humans readily infer intention even in abstract settings (e.g., simple moving shapes). By identifying first-principles cues like self-propelled motion, I built systems that infer intent in simplified scenes and generalize the same inference rules to motion capture data and real-world video. This work was published in the International Journal of Computer Vision. You should be able to download it for free here.
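As a toy illustration of what a first-principles cue like self-propelled motion can mean (a sketch of the idea, not the published method), a trajectory can be flagged as self-propelled when its acceleration deviates from what gravity alone would produce:

```python
import numpy as np

def is_self_propelled(positions, dt=1.0, gravity=(0.0, -9.8), tol=1.0):
    """Flag a 2D trajectory (sequence of (x, y) points) as self-propelled.

    A passive object in free flight accelerates only under gravity, so a
    sustained deviation of the finite-difference acceleration from the
    gravity vector suggests an internal source of motion.
    """
    pos = np.asarray(positions, dtype=float)
    acc = np.diff(pos, n=2, axis=0) / dt**2           # second differences
    residual = np.linalg.norm(acc - np.asarray(gravity), axis=1)
    return bool(residual.max() > tol)

t = np.arange(6, dtype=float)
# A ballistic trajectory (gravity only) is not flagged...
ballistic = np.stack([3.0 * t, 40.0 - 4.9 * t**2], axis=1)
# ...while a shape weaving along under its own power is.
weaving = np.stack([t, np.sin(t)], axis=1)
```

The appeal of a cue like this is that it is defined by physics rather than by appearance, which is why the same rule can transfer from abstract shapes to motion capture and video.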

Physics Projects

Before moving fully into AI and engineering, I originally planned to pursue a PhD in physics. As an undergraduate, I had the chance to work on several computational physics projects that strongly shaped how I think about modeling, first principles, and simulation.

Physics Projects Overview

Computational Models of Crystal Structures

NSF Physics REU Lehigh University 2016

One fascinating idea in materials science is that you can “program” how particles interact by coating them with specific DNA strands. By choosing the DNA sequences, you can effectively design the interaction rules and induce different crystal lattice structures.

I worked on computational models that approximated the pair potentials between DNA-coated particles (loosely inspired by modified Lennard–Jones pair potentials). Using lightweight simulations, we explored how different DNA encodings influenced lattice properties, including sensitivity to temperature and melting behavior.
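A rough illustration of the kind of model involved (parameter names here are illustrative, not the ones we actually used): in a Lennard–Jones-style pair potential, the well depth can stand in for the hybridization strength of the chosen DNA strands.

```python
import numpy as np

def dna_pair_potential(r, sigma=1.0, epsilon=1.0):
    """Toy Lennard-Jones-style pair potential for DNA-coated particles.

    sigma sets the particle contact distance; epsilon is a stand-in for
    the hybridization free energy of the chosen DNA strands, setting the
    depth of the attractive well (and hence the melting behavior).
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6**2 - sr6)

# "Stickier" DNA (larger epsilon) deepens the well without moving it:
# the minimum sits at r = 2**(1/6) * sigma with depth -epsilon.
r = np.linspace(0.9, 3.0, 2000)
well_depth = dna_pair_potential(r, epsilon=2.0).min()
r_min = r[np.argmin(dna_pair_potential(r, epsilon=2.0))]
```

Sweeping a parameter like epsilon against temperature is the cheap way to map out which encodings melt where, before committing to heavier simulations.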

This project was my first real exposure to large-scale scientific computing — writing Python and bash scripts, running simulations on XSEDE supercomputers, and thinking carefully about how to balance physical fidelity with computational efficiency. The simulations helped validate theoretical predictions about how these particles self-assemble into different crystal structures and provided insight into the dynamics of the assembly process.

Feel free to check out Jeetain Mittal's research group (now based in Texas)

Computational Models of Dielectrics in Nanopore Sensors

Physics REU Clarkson University 2015

Nanopores show a lot of promise for applications like particle filtering and DNA sequencing, where information is inferred from changes in ionic current as particles pass through a pore.

I worked on modeling ionic current flow through cylindrical nanopores partially blocked by dielectric nanoparticles. Using COMSOL Multiphysics (and some Fortran-based simulations), we built finite-element and analytical models to describe how current blockage depends on particle geometry and material properties.
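The flavor of the analytical side can be sketched with a crude area-exclusion model (far simpler than the finite-element treatment): the pore is a resistor whose blocked segment has a reduced conducting cross-section, plus Hall's access resistance at each opening. All parameter values below are illustrative.

```python
import math

def pore_resistance(rho, length, radius):
    """Ionic resistance of an open cylindrical pore: bulk term
    rho * L / area, plus Hall's access resistance rho / (2a)
    for the two openings combined."""
    return rho * length / (math.pi * radius**2) + rho / (2 * radius)

def blocked_pore_resistance(rho, length, radius, p_radius, p_length):
    """Crude model: a dielectric particle of radius p_radius occupies a
    segment of the pore, leaving an annular conducting cross-section."""
    open_part = rho * (length - p_length) / (math.pi * radius**2)
    annulus_area = math.pi * (radius**2 - p_radius**2)
    blocked_part = rho * p_length / annulus_area
    return open_part + blocked_part + rho / (2 * radius)

# Relative current blockade for a particle half the pore radius:
r_open = pore_resistance(rho=1.0, length=10.0, radius=1.0)
r_blocked = blocked_pore_resistance(1.0, 10.0, 1.0, p_radius=0.5, p_length=1.0)
blockade = 1.0 - r_open / r_blocked  # fractional drop in ionic current
```

Even this toy version captures the core inverse problem: the size of the current dip encodes the particle's geometry.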

The goal was to better understand how physical characteristics of particles could be inferred from electrical signals — a theme that, in hindsight, connects pretty directly to many inverse problems I still find interesting today.

Feel free to check out Maria Gracheva's research group