Melanoma, a type of skin cancer, affects millions of people per year; it is usually visible and can be cured if caught early. Most people, even those highly motivated to monitor skin changes, have poor visual memory for patterns and feature shapes. *Early diagnosis saves lives*. While smartphone cameras should be a valuable tool for early detection of visible skin-surface changes, I found no application meeting all of the following criteria:

* easy to use;
* efficiently guided;
* memory-conservative (imagery between features is not retained: minimum data, maximum privacy);
* recognizes previously encountered skin-surface features and locations (allowing an image-directed search for a comparison set);
* offers simple, direct visual comparison and multi-sample analysis of change over time at any given location.
This application neither attempts nor implies any diagnostic capability; it only offers the ability to A) capture and establish an initial baseline reference of skin-surface features, B) recognize those features and their locations, and C) perform “flicker comparison” between surface features captured in different scans, allowing reasonably accurate detection of changes across the accumulating baseline. There is no automated comparison: if a person can SEE change over time and then *brings that same information to a trained specialist* after finding any reason for concern, then early detection, diagnosis, and treatment WILL be accomplished.
The scanning process can be handled by one individual or with assistance. It proceeds in stages: first, one area of the body is imaged from a lower-resolution distance (see capture1), deliberately out of focus, capturing only outlines and topology to establish and maintain an ongoing context; then a higher-resolution short-range scan of that area (prompting “slower” if needed) automatically captures images of any location containing visible variations. Techniques borrowed from laser mice (movement in the field of view indicates camera movement, see capture2) allow tracking of camera motion, distance, and relative location. Precise camera movement is not required; “scrubbing” works.
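The laser-mouse tracking idea amounts to estimating frame-to-frame image translation. A minimal sketch of one way to do that, using phase correlation in Python with NumPy (my own simplification; a production app would run a robust optical-flow pipeline on the live camera stream and accumulate sub-pixel shifts into a position vector):

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the integer (dy, dx) translation that maps `prev` onto `curr`
    using phase correlation (the same trick an optical mouse exploits)."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12        # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the circular peak coordinates into signed shifts.
    if dy > prev.shape[0] // 2:
        dy -= prev.shape[0]
    if dx > prev.shape[1] // 2:
        dx -= prev.shape[1]
    return int(dy), int(dx)
```

Summing these per-frame shifts gives the relative camera track across a scan, which is why sloppy “scrubbing” motion is acceptable: only the accumulated displacement matters, not the path.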
After the entire initial area has been scanned, the user is prompted to change areas. Image continuity while the camera's range changes helps maintain the camera-location vector. By starting a body scan at short range above a known feature or pattern, the application can recognize where it is on the baseline body-surface map and guide the rest of the current scan (or a later re-scan), steering between known blemishes and any unknown areas, and flagging any known skin features that were missed or areas that weren't covered during the current scan. The application emphasizes detection of any NEW blemish during a scan: expected in a baseline scan, noteworthy in a re-scan.
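Recognizing where the camera is on the baseline map amounts to matching the current close-up view against previously enrolled features. A toy sketch, assuming features are stored as small normalized patches and matched by cosine similarity (the class and method names are my own; a real implementation would use rotation- and scale-invariant descriptors such as ORB or SIFT and an approximate-nearest-neighbor index):

```python
import numpy as np

def normalize(patch):
    """Zero-mean, unit-norm version of a patch, so comparison ignores exposure."""
    p = patch.astype(float)
    p -= p.mean()
    n = np.linalg.norm(p)
    return p / n if n else p

class FeatureMap:
    """Toy baseline map: body location -> normalized reference patch."""
    def __init__(self):
        self.features = {}

    def enroll(self, location, patch):
        self.features[location] = normalize(patch)

    def locate(self, patch, threshold=0.8):
        """Return the enrolled location best matching `patch`, or None."""
        q = normalize(patch)
        best_loc, best_score = None, threshold
        for loc, ref in self.features.items():
            score = float((q * ref).sum())   # cosine similarity
            if score > best_score:
                best_loc, best_score = loc, score
        return best_loc
```

A `None` result is what drives the NEW-blemish emphasis: a feature that matches nothing in the baseline is either a coverage gap or a change worth flagging.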
Visual inspection of scan results can be ordered by complexity, color, verbal or touch-input indices, time-stamp, location, capture sequence, etc. Specific features can be indexed and arranged into a review sequence, and non-diagnostic features can be deleted. Flicker comparison between two or more (normalized) scans can then be selected.
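Flicker comparison presupposes that the scans being compared are registered and photometrically normalized, so that the only thing that “flickers” is genuine change. A minimal sketch of that normalization plus the alternating frame cycle (`match_brightness` and `flicker_frames` are my own names; actual display is left to the UI layer):

```python
import numpy as np

def match_brightness(img, ref):
    """Linearly rescale `img` so its mean and std match `ref`
    (simple photometric normalization between scans)."""
    img = img.astype(float)
    ref = ref.astype(float)
    scale = ref.std() / (img.std() + 1e-9)
    return (img - img.mean()) * scale + ref.mean()

def flicker_frames(scans, repeats=3):
    """Yield registered, brightness-normalized scans in a repeating
    A/B/... cycle; the eye picks out any location that changes."""
    ref = scans[0].astype(float)
    normalized = [ref] + [match_brightness(s, ref) for s in scans[1:]]
    for _ in range(repeats):
        for frame in normalized:
            yield frame
```

Alternating the normalized frames at a steady rate exploits the visual system's sensitivity to motion: stable features appear frozen while any changed feature appears to blink or shift.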
I've described calibration, tracking, and reconstruction; the user interface still has room for development and improvement. The manufacture, marketing, and distribution of apps has become a commodity process, and the product will appeal to a large, self-interest-driven target population. Once this app stabilizes, additional opportunities exist: further “monitor change-over-time” applications, AR medical presentation, and more.