11 Comments

You are correct -- Topol has to be the biggest tool on the Internet. You can assume that if he writes something, it is wrong; that is easier than doing all the research to prove it. Not many people manage to be that consistently wrong, but he is one.


It's infuriating to me that so many people who should know better still retweet him!

And yeah, I usually don't read the papers, but since this one was so aligned with my recent posts and my class, it was a good exercise.


In this case, is it correct to think that the 29% headline number means that ~4 women per thousand were diagnosed in the control and ~5 women per thousand were diagnosed in the other arm?


Yes. 338 out of 53043 in the treatment group were detected to have cancer confirmed by pathology (6.4 out of 1000). In the control group, it was 262 of 52872 (5.0 out of 1000).
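
For concreteness, here's a minimal Python sketch of that arithmetic (the counts are taken from the reply above; the variable names are mine):

```python
# Screen-detected cancer counts reported above.
detected_ai, n_ai = 338, 53043      # AI-supported arm
detected_ctrl, n_ctrl = 262, 52872  # control arm

rate_ai = detected_ai / n_ai        # ~0.0064 -> 6.4 per 1000
rate_ctrl = detected_ctrl / n_ctrl  # ~0.0050 -> 5.0 per 1000
print(f"{1000 * rate_ai:.1f} vs {1000 * rate_ctrl:.1f} per 1000")

# The "29%" headline is the *relative* increase in detection rate...
relative_increase = rate_ai / rate_ctrl - 1
print(f"relative increase: {100 * relative_increase:.0f}%")  # ~29%

# ...while the *absolute* difference is only about 1.4 extra
# detections per 1000 women screened.
print(f"absolute difference: {1000 * (rate_ai - rate_ctrl):.1f} per 1000")
```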


But surely the point is that the study shows Sweden could reduce the number of radiologists needed by 50% using this software. That would be a significant cost saving, without reducing detection rates.

You can call this AI CAD, but it is machine learning (ML), and ML is a subcategory of AI. Attach it to an expert LLM that also accesses the patient's medical records, and you might get a better explanation than a simple numeric rating. A real test would be to eliminate the radiologists and determine whether the AI/ML/CAD could diagnose and make recommendations that conform to the radiologists' recommendations. [Ethically, the radiologists and other physicians would still determine the treatment.] Then we would have reached the sci-fi level of "The Little Black Bag" (1950) by C. M. Kornbluth.

https://en.wikipedia.org/wiki/The_Little_Black_Bag

I'm reminded that while IBM's Watson failed to impress the cancer experts at Memorial Sloan Kettering, it was regarded as valuable by less expert physicians at other hospitals.


I get that the point of the post is more to point out that 1) evaluation of these systems is trickier than it seems and 2) it's unclear that such a system really provides better downstream outcomes, but at a higher level it seems like such automated health systems stand to have a much higher impact in hospitals that are short of experienced radiologists. I know that Pranav Rajpurkar has looked into how automated systems help different kinds of doctors differently.


> In healthcare, we’re supposed to care about patient outcomes. Instead, this study looked at detecting, not treating, cancer.

This is true in general, but not for radiology studies. The radiologist's job is to detect, not treat, cancer. There's no "Treatment" subsection in radiology textbooks. It's reasonable for studies asking questions of the form "Should we replace radiologists with tool X?" to have diagnostic outcomes (and not patient well-being outcomes) as their primary endpoints.


So how many radiologists are there now compared to the 1990s, and are their job prospects improving or declining? Saying the patient is not dead yet doesn't tell you whether they're coming or going.


"Ground truth was based on pathology reports on surgical specimens or core-needle biopsies"

Does this mean that ground truth was only measured for patients who were judged to have cancer? I can't see how you could take a tissue sample from a patient you don't think has potentially cancerous tissue. If so, doesn't that mean that the error rate is an FPR?
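
To make that asymmetry concrete: assuming (as the quoted methods line suggests) that pathology only ever confirms or rules out cancer for patients who were recalled for workup, here's a sketch of which error rates are and aren't computable. All counts below are hypothetical placeholders, not numbers from the study:

```python
# Hypothetical counts, for illustration only -- not from the study.
true_pos = 300    # recalled, biopsy confirms cancer
false_pos = 1200  # recalled, biopsy rules out cancer

# Computable from biopsy results alone:
ppv = true_pos / (true_pos + false_pos)  # positive predictive value, ~0.20
fdr = 1 - ppv                            # share of recalls that were false alarms
print(f"PPV: {ppv:.2f}, false-discovery rate: {fdr:.2f}")

# NOT computable from biopsy results alone:
#   sensitivity = TP / (TP + FN) -- false negatives (cancers in women who were
#       never recalled) only surface later, e.g. as interval cancers
#   FPR = FP / (FP + TN) -- true negatives would require knowing who is truly
#       cancer-free, and nobody biopsies apparently healthy women
```

So under that assumption, the directly measurable error rate is closer to a false-discovery rate among recalls than a true false-positive rate over all screened women.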


Good to know, and wonderfully written. Thanks!


THANK YOU! This dude is such a grifter
