How to determine cyber threat probability?



In cybersecurity, risk should be calculated with probability as a determining factor. If a threat has a near-zero probability of being applicable, then the resulting risk should be near zero as well.

Risk = (Threat x Vulnerability) x Impact x Probability (not the true risk equation, but for conversation purposes — note that probability has to be a multiplier, otherwise a near-zero probability wouldn't drive the risk toward zero)
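To make the conversation-purposes formula concrete, here's a minimal sketch. All the factor values are made-up illustrative numbers on a 0–1 scale, not a real scoring model; the point is only that a near-zero probability dominates the product.

```python
def risk_score(threat, vulnerability, impact, probability):
    """Toy risk score: all factors are hypothetical 0-1 values.

    Probability is a multiplier, so a near-zero probability drives
    the overall risk toward zero regardless of impact.
    """
    return threat * vulnerability * impact * probability

# Acoustic air-gap exfiltration demo: high impact, near-zero observed probability.
airgap_audio = risk_score(threat=0.9, vulnerability=0.5, impact=1.0, probability=0.01)

# Phishing: lower impact, but routinely observed in the wild.
phishing = risk_score(threat=0.8, vulnerability=0.7, impact=0.6, probability=0.9)

print(f"air-gap audio exfil: {airgap_audio:.4f}")
print(f"phishing:            {phishing:.4f}")
```

Even with maximum impact, the flashy conference demo scores far below the mundane threat, because the probability term dominates.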

The problem is that some security folks treat hypothetical threats as probable threats without any statistics to back that up, or any evidence that the threat is being exploited in the wild against a given architecture.

"I saw someone at BlackHat use sound waves to exfiltrate data from a disconnected system, so we need to defend against this immediately, else assume they're compromised!" But that was for demonstration purposes, not indicating it's happening in the real world.

My question is: if there are no known examples of a threat being exploited, how can the risk be calculated? What do you tell your boss?

I took statistics years ago and remember that a sample needs a minimum number of data points before it can be used in a meaningful calculation.
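That intuition can be sketched numerically. Below is a rough illustration using the normal-approximation confidence interval for an observed proportion; the incident counts are made-up numbers, and the approximation itself is crude for small samples, but it shows how wide the uncertainty is when you have only a handful of observations.

```python
import math

def proportion_ci_width(successes, n, z=1.96):
    """Width of the ~95% normal-approximation confidence interval
    for an observed proportion (successes out of n trials)."""
    p = successes / n
    return 2 * z * math.sqrt(p * (1 - p) / n)

# Same observed rate (20%), very different sample sizes (hypothetical counts).
small = proportion_ci_width(2, 10)        # 2 incidents seen in 10 deployments
large = proportion_ci_width(2000, 10000)  # 2000 incidents in 10000 deployments

print(f"n=10:     CI width ~ {small:.3f}")   # huge uncertainty
print(f"n=10000:  CI width ~ {large:.3f}")   # tight estimate
```

With ten data points, the interval around a 20% observed rate spans roughly half the probability scale, which is why estimating the probability of a never-observed threat is statistically shaky.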
