Are you balding? There’s an AI for that
Consumer AI health apps keep making the same pitch: upload a few photos, get answers. Most stop at advice. MyHair AI is trying to quantify hair loss from smartphone images with a computer vision model trained on more than 300,000 hair images.
That’s the part worth paying attention to.
According to TechCrunch, MyHair AI launched this summer. It was co-founded by Cyriac Lefort and Tilen Babnik, and its first prototype was built quickly in Cursor before engineers took over to harden the product. The app asks for photos of the scalp and hairline, estimates density, flags early thinning patterns, tracks changes over time, and recommends products with notes on the science and side effects. The company also wants to steer users toward specialists and clinics with verified reviews.
A lot of startups say some version of this. Hair loss happens to be a plausible computer vision problem, at least on paper. It’s visual, localized, and progressive. If image capture is consistent, you can track change.
That condition matters.
Why vision fits the problem
Hair loss analysis from photos has little to do with language. You’re looking for coverage, scalp visibility, shaft thickness, miniaturization patterns, temple recession, crown thinning. Those are image problems.
A general-purpose LLM can explain finasteride or summarize the Norwood scale. It can’t reliably estimate density from a scalp photo. For that, you need segmentation and region-level measurement.
A plausible stack for a product like this looks familiar if you’ve seen teledermatology systems:
- capture guidance for repeatable angles and lighting
- preprocessing for exposure and white balance normalization
- segmentation to separate hair, scalp, and background
- density estimation across regions of interest
- pattern classification against scales like Norwood and Ludwig
- longitudinal tracking so the app compares you to yourself over time, not to a generic baseline
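The density-estimation step in a stack like this can be sketched in a few lines, assuming an upstream segmentation model has already labeled each pixel as background, scalp, or hair. The region names, label values, and toy mask below are illustrative assumptions, not details from MyHair AI:

```python
import numpy as np

# Hypothetical density estimation over regions of interest.
# Assumes a segmentation mask with labels: 0=background, 1=scalp, 2=hair.

REGIONS = {
    "crown":  (slice(0, 50), slice(25, 75)),    # illustrative region slices
    "vertex": (slice(50, 100), slice(25, 75)),
}

def region_density(mask: np.ndarray, region: tuple) -> float:
    """Fraction of hair pixels among hair+scalp pixels in one region."""
    patch = mask[region]
    hair = np.count_nonzero(patch == 2)
    scalp = np.count_nonzero(patch == 1)
    total = hair + scalp
    return hair / total if total else 0.0

# Toy mask: full coverage on top, visible scalp (thinning) below.
mask = np.full((100, 100), 2)
rng = np.random.default_rng(0)
mask[50:, :] = np.where(rng.random((50, 100)) < 0.4, 1, 2)

densities = {name: region_density(mask, r) for name, r in REGIONS.items()}
```

A real system would derive the regions from facial/scalp landmarks rather than fixed slices, but the measurement itself reduces to exactly this kind of per-region ratio.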
The last point matters most. Single-image diagnosis is shaky. Hair can look thinner because it’s wet, greasy, flattened, overexposed, or shot under harsh bathroom lighting. Tracking changes over time is a stronger use case than pretending one photo can settle the question.
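The self-comparison idea is simple to state in code: flag change only against the user's own rolling baseline, never against a population norm. The function name, the three-session minimum, and the 10% threshold below are illustrative assumptions:

```python
from statistics import mean

def flag_thinning(history: list[float], current: float,
                  rel_drop: float = 0.10) -> bool:
    """Flag if the current density falls >10% below the user's own baseline.

    `history` holds prior per-session density estimates for one region.
    Requires a few sessions before flagging, so one bad photo
    (wet hair, harsh lighting) doesn't trigger an alert.
    """
    if len(history) < 3:
        return False
    baseline = mean(history)
    return current < baseline * (1 - rel_drop)

sessions = [0.72, 0.70, 0.71, 0.69]
flag_thinning(sessions, 0.62)  # True: >10% below this user's baseline
flag_thinning(sessions, 0.70)  # False: within normal variation
```

Production systems would also want to reject or down-weight low-quality captures before they enter the history at all, which is what the capture-guidance step earlier in the stack is for.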
What to watch
The limitation is that model accuracy is only one part of adoption. Capture consistency, clinical validation, and data privacy matter just as much, and a photo-based estimate is a screening signal, not a diagnosis. Users should separate an impressive single-photo demo from repeatable longitudinal measurement.