Nearly nine in ten companies now deploy artificial intelligence in some capacity, according to McKinsey. But a 2025 Harvard Business Review survey found that only six percent fully trust AI to run core business processes. Damini Rijhwani has spent most of her career working in the space between those two numbers. After nearly a decade in AI and machine learning, with several years building clinical imaging systems at Philips, she founded Automation Core Inc., developing patient management software for medical aesthetics, a market projected to reach $200 billion globally by 2033, according to Straits Research.

Rijhwani began working in AI and machine learning in 2016. By the following year, she had joined a seventy-person research team at Purdue University, processing continuous feeds from one of the largest distributed camera networks in academic research, covering more than a hundred and sixty countries. The project demanded systems that could hold up across wildly variable conditions: different hardware, inconsistent connectivity, unpredictable lighting. That constraint, she recalls, shaped how she thought about building software for the rest of her career.

It was also where she started publishing research on AI fairness, studying how models trained on large public datasets can absorb and reproduce the biases present in that data. The work led her to co-organize a workshop on low-power visual recognition at CVPR, the premier computer vision conference. The following year, the Computing Research Association awarded her an Honorable Mention in its national Outstanding Undergraduate Researcher program.

That research background led to an internship at Philips as a computer vision researcher before she had finished her degree. The company later hired her as a permanent research scientist, even though the posting had specified a graduate degree. Rijhwani came from a medical family and had a familiarity with clinical environments that preceded her engineering training. At Philips she worked alongside PhDs from Harvard and Johns Hopkins, building data infrastructure that connected more than forty clinical datasets in interventional and diagnostic imaging to the machine learning pipelines that depended on them. Over several years she trained and deployed more than twenty AI models for medical imaging. Some could detect and track devices frame by frame during interventional procedures. Others helped clinicians locate anatomical landmarks in diagnostic scans. Her earlier work at Philips had also spanned computer vision research in person re-identification, pose estimation, and 3D reconstruction, building with transformer architectures during the same period that technology was finding its way into large language models.

"When I was at Philips, I spent a lot of time in procedure rooms watching interventional imaging cases. You stand there in a lead apron for hours. And the thing that strikes you is how much of the clinician's cognitive load has nothing to do with the patient. They are navigating software menus during a live catheter procedure. That stayed with me," Rijhwani says.

Part of her role involved observing those procedures in person, studying how imaging data flowed from the operating environment into the models that were supposed to support clinical decisions. Keeping models accurate when the data came from real procedures, with motion artifacts and patient anatomy that no training set could fully represent, required a different kind of engineering discipline, she says.

That hands-on clinical work informed a patent filing she is named on: a method for using fiber optic sensors embedded in catheters and guidewires to automatically generate labeled training data during interventional procedures. The system, called FORS-enabled image labelling, tracks devices as they move through a patient's body and uses that positional data to annotate the imaging feeds for model training. The patent was filed across Europe and Asia. Rijhwani also co-authored three publications, including work in IEEE Computer and Medical Physics.

The disconnect she documented at Philips is not unique to interventional imaging. Dr. Amira Barmanwalla, a general surgeon in New York, has made a similar observation.

"Operating room technology has advanced rapidly, but the software linking pre-, intra-, and postoperative phases has lagged behind. As a result, valuable data is collected without sufficient context, leading to fragmented workflows. These gaps introduce inefficiencies that quietly compound over time," she says.

As Barmanwalla describes, it is the kind of compounding that is difficult to see. Each small friction (a system switch here, a missing detail there) is easy to absorb in the moment. Across enough patients and enough shifts, the cost becomes structural, woven into how a practice operates rather than sitting on top of it.

For Rijhwani, it was a pattern that became visible through proximity. Building clinical imaging systems at Philips meant working at the intersection of software and live procedures, where the distance between what a system could do on paper and what it needed to do in a real clinical environment was something she measured every day. That distance, she noticed, varied enormously by specialty. Some had attracted the investment to close it. Others had not. Aesthetic medicine, she found, was among the more underserved.

For many practice owners, years of patient photos, treatment notes, and clinical records become a kind of anchor. The more history they accumulate in one system, the harder it becomes to imagine starting over somewhere else, even when the software no longer serves them.

"Switching software is one of the biggest anxieties in this space," Rijhwani says. "A practice's patient photos and treatment records are years of clinical history, and moving that data between platforms is still harder than it should be. That's something I think about a lot."

Automation Core includes AI-assisted migration tooling that can parse patient records across formats, converting data from legacy platforms and maintaining redundant backups throughout the process. It is a technical problem Rijhwani first encountered at Philips, where clinical data integrity carried consequences far beyond a lost appointment.

The company is still early-stage, and the market is genuinely difficult. State-level regulations vary widely, practices are cautious about change, and established competitors have years of customer data and distribution behind them.

Rijhwani's longer-term plans go beyond aesthetics. She sees the same structural mismatch in other cash-pay medical verticals: clinical workflows running on software that was designed for a different kind of practice. Regulatory scrutiny around patient data is intensifying across these sectors, and practitioners relying on general-purpose tools may find themselves underprepared.

Whether Automation Core can gain traction in a market this competitive remains an open question. But the thesis, that software for specialized medicine has to grow out of the specialty itself, is one Rijhwani has been testing since her years watching procedures at Philips.

"The best software in medicine disappears. It becomes an extension of the practitioner, not a separate system they have to manage. When the technology handles the weight quietly, the person in the room gets to be fully present for the person in front of them. And that's why building software for medicine is a commitment I don't see myself outgrowing," Rijhwani says.


VentureBeat newsroom and editorial staff were not involved in the creation of this content.