Type I vs Type II Errors

Amazon seems to be in the midst of taking care of some scammer activity. There are reports of a number of customer accounts being closed, as well as reports, on the indie side, of KU page reads from March disappearing. And with every Amazon action to clean up the swampy waters comes a discussion of innocent authors who got caught up in the sweep.

For example, Amazon identifies a series of accounts as using botting activity to borrow books and read them in KU. It shuts those accounts down and pulls back all of the page reads those accounts generated. Author A, who paid for their books to be botted, shrugs and moves on to the next scam. Author B, whose books were read by those accounts to create smoke and confusion, screams bloody murder because they just lost a hundred thousand page reads they were banking on. (About $450 worth of page reads.)

Every time this happens, I think: Type I error versus Type II error. Now, it’s possible that I’m misremembering my misspent education, but this is what those terms mean to me from a regulatory and compliance standpoint. (My background.)

If you build a compliance system that is too lax, it will fail to identify all of the compliance issues. You will let through a certain percentage of activity that you shouldn’t (false negatives, or Type II errors).

If you build a compliance system that is too restrictive, it will flag a large amount of activity that isn’t a legitimate compliance issue (false positives, or Type I errors), and you run the risk of bogging down your review teams with alerts they have to clear and let through.

Every company has to choose between those two failure modes. Which type of error is better to live with? Letting through bad activity you shouldn’t? Or preventing good activity from occurring?
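The tradeoff above is easy to see in a toy model. This is my own illustrative sketch, not anything Amazon actually does: pretend every account gets a "suspicion score," with legitimate accounts clustering low and botted accounts clustering high, but with overlap between the two groups. Wherever you set the cutoff, moving it just trades one error type for the other.

```python
import random

random.seed(42)

# Hypothetical suspicion scores on a 0-to-1 scale. Legit accounts
# cluster around 0.3, bots around 0.7, and the distributions overlap,
# which is the whole problem.
legit = [random.gauss(0.3, 0.15) for _ in range(9000)]
bots = [random.gauss(0.7, 0.15) for _ in range(1000)]

def error_rates(threshold):
    # Type I (false positive): an innocent account flagged as a bot.
    false_pos = sum(s >= threshold for s in legit) / len(legit)
    # Type II (false negative): a bot that slips through.
    false_neg = sum(s < threshold for s in bots) / len(bots)
    return false_pos, false_neg

for t in (0.4, 0.5, 0.6, 0.7):
    fp, fn = error_rates(t)
    print(f"threshold {t}: {fp:.1%} innocents flagged, {fn:.1%} bots missed")
```

Lower the threshold and you flag more innocents; raise it and more bots slip through. There is no setting where both numbers go to zero, which is exactly the choice every compliance team is stuck making.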

In certain settings–hospitals, food production, car manufacturing–you want to err on the side that saves lives, right? So, sterilize that equipment more than you really need to, because it’s better to sterilize the equipment three times than to kill someone or give them HIV.

In other settings, it can be a trickier line to draw.

I have seen companies be overwhelmed by compliance alerts that were too sensitive. Is it better to be nine months behind on your compliance reviews, but catch everything? Eh, well. I don’t know… Violate OFAC and they don’t care why you did it; they’ll fine you. But how much money do you want to spend to find that one Iranian transaction among millions?

It seems to me the approach Amazon takes sometimes is a lazy man’s approach to compliance monitoring. They do nothing until people complain too much. Then they run an automated process to flag all the potentially bad activity. And then, rather than do what the entities I used to work with would do and review all the flagged activity to find the legitimate problems, they just shut everyone down. And then they wait for the people who were innocent (or who are savvy enough to act innocent), to identify themselves with alarmed emails and complaints.

Saves a helluva lot of manpower and money. But sucks if you’re one of the ones caught in one of their purges.

On the other hand, what we normally see from them is a too-lax system that lets everything through. So which is better?

Too much or too little?

Do we want the bestseller lists overwhelmed with books that shouldn’t be in those categories? (Classics? Really? That’s what you call that book?) Or do we want the risk of being purged from a legitimate category or being delisted until we can fix whatever issue Amazon has?

We can’t have it both ways. There will always be one type of error or the other.

(And for those of you who think reviewing these kinds of alerts is simple, let me tell you it isn’t. You’d think screening transaction information for something like “Iran” is simple, right? Well, you’d be amazed how many false alerts something that simple can generate. And even if it only takes a minute to clear a false alert, when you have 10,000 false alerts for every legitimate one, that’s a lot of manpower.)
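The back-of-the-envelope math on those numbers is worth spelling out. Using the figures from the parenthetical above (10,000 false alerts per legitimate one, and a generous one minute to clear each false alert):

```python
# Workload per genuine hit, using the post's numbers:
# 10,000 false alerts for every legitimate alert, at one
# (optimistic) minute each to review and clear.
false_per_true = 10_000
minutes_per_false = 1

minutes_per_true_hit = false_per_true * minutes_per_false
hours = minutes_per_true_hit / 60
print(f"~{hours:.0f} analyst-hours of clearing noise per real hit")
```

That works out to roughly 167 analyst-hours, about a month of one person’s full-time work, just to surface a single genuine alert.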