Better algorithms needed to flag patients with recurrent cancer

Using billing or treatment codes can lead researchers astray, study finds

Using billing or treatment codes to select patients with recurrent cancer can be misleading for researchers hoping to study the effectiveness of treatments, according to a study published recently in the journal Medical Care.

According to the researchers, one study might search a database for all patients who had chemotherapy followed by another round more than six months later, assuming that a second round defines recurrent disease. Another study might search a database for all patients with a newly discovered secondary tumor, assuming that all patients with a secondary tumor have recurrent disease.
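The two heuristics described above can be sketched in a few lines of Python. This is an illustration only, not the study's actual code: the event codes, field layout, and six-month cutoff are assumptions chosen to make the logic concrete.

```python
from datetime import date, timedelta

# Hypothetical records: (patient_id, event_code, event_date).
# The codes "CHEMO" and "SECONDARY_TUMOR" are illustrative stand-ins
# for real billing/treatment codes, not actual code values.
events = [
    ("p1", "CHEMO", date(2020, 1, 10)),
    ("p1", "CHEMO", date(2020, 9, 15)),   # second course > 6 months later
    ("p2", "CHEMO", date(2020, 3, 1)),
    ("p2", "CHEMO", date(2020, 5, 1)),    # second course within 6 months
    ("p3", "SECONDARY_TUMOR", date(2021, 2, 2)),
]

def chemo_gap_flag(events, gap=timedelta(days=183)):
    """Heuristic 1: flag patients with two chemotherapy courses
    separated by more than roughly six months."""
    chemo = {}
    for pid, code, d in events:
        if code == "CHEMO":
            chemo.setdefault(pid, []).append(d)
    flagged = set()
    for pid, dates in chemo.items():
        dates.sort()
        if any(b - a > gap for a, b in zip(dates, dates[1:])):
            flagged.add(pid)
    return flagged

def secondary_tumor_flag(events):
    """Heuristic 2: flag any patient with a secondary-tumor code."""
    return {pid for pid, code, _ in events if code == "SECONDARY_TUMOR"}

print(chemo_gap_flag(events))       # → {'p1'}
print(secondary_tumor_flag(events)) # → {'p3'}
```

As the study found, neither rule is reliable on its own: a long gap between chemotherapy courses can reflect a new primary tumor or a treatment break, and a secondary-tumor code can be recorded for reasons other than true recurrence.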

"Our study shows that both methods leave substantial room for improvement," said Debra Ritzwoller, a health economist at the Kaiser Permanente Colorado Institute for Health Research and an investigator at the University of Colorado Cancer Center.

The study drew on two widely used datasets--the HMO/Cancer Research Network and CanCORS/Medicare--to determine whether algorithms for selecting patients with recurrent cancer worked as intended.

While the combination of codes for secondary malignant tumor and chemotherapy was the most sensitive for patients with lung, colorectal, and breast cancer (75 to 85 percent), no code set was both highly sensitive and highly specific. For prostate cancer, no code set offered even moderate sensitivity (less than 19 percent).
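Sensitivity and specificity are standard confusion-matrix quantities: sensitivity is the share of true recurrences an algorithm flags, and specificity is the share of non-recurrent patients it correctly leaves unflagged. A minimal sketch of how such figures are computed against a gold-standard patient set (illustrative only, not the study's method):

```python
def sensitivity_specificity(flagged, truth, population):
    """Compare an algorithm's flagged set against a gold-standard
    set of patients known to have recurrent disease."""
    tp = len(flagged & truth)                    # correctly flagged
    fn = len(truth - flagged)                    # missed recurrences
    negatives = population - truth
    fp = len(negatives & flagged)                # falsely flagged
    tn = len(negatives - flagged)                # correctly unflagged
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical example: 5 patients, 2 with true recurrence.
population = {"a", "b", "c", "d", "e"}
truth = {"a", "b"}
flagged = {"a", "c"}
print(sensitivity_specificity(flagged, truth, population))  # → (0.5, 0.6666...)
```

The trade-off the study highlights is that a rule can score well on one measure while failing the other: flagging everyone yields perfect sensitivity but zero specificity, and vice versa.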

Citing the need for better ways to determine who does and does not have recurrent cancer, the researchers are working on a paper proposing different algorithms to replace the ones found ineffective, according to an announcement.

Kaiser Permanente has been at the forefront of data-driven research, closely tracking and following up with patients who have abnormal test results and reporting success with a SAS-based natural-language processing tool that detects breast and prostate cancers from pathology reports.

Previous research published in Medical Care examined the informatics platforms in use for comparative effectiveness research, finding them a work in progress with multiple challenges in collecting and analyzing data and in providing easy-to-use tools for nontechnical researchers.

To learn more:
- read the abstract
- here's the University of Colorado announcement

Related Articles:
Using malpractice claims data to dodge mistakes
Natural-language processing tool flags breast, prostate cancer
Platforms for comparative effectiveness research still evolving