Commonwealth Bank has turned to artificial intelligence to try to deal with the rapid growth in “abusive payments” – electronic transactions that include offensive language in the transaction descriptions.
The bank reported that over the three months to the end of July it blocked 100,000 electronic payments with offensive messages.
The bank said it used AI to detect 229 unique senders of “potentially serious abuse”. It developed the AI model in the CBA AI Labs.
CBA is the second bank to report that abusive payments are a growing concern. Last month, Westpac reported that it had blocked 24,000 payments made by 19,000 customers, telling them to change the language used in the messages attached to payments.
On more than 800 occasions it wrote warning letters or suspended or cancelled accounts, and it has referred more than 70 customers to “authorities”.
Abusive payments of particular concern are threatening messages attached to payments to former partners, who may be victims of domestic violence or in a financially vulnerable position.
CBA said that depending on the severity of the abuse, it may de-link a victim’s bank account from PayID so the abuser can no longer use their email address, mobile number or ABN to send abusive payments.
Other actions include setting up new accounts for victims, referring victims to support organisations, sending warning letters to abusers and terminating abusers’ accounts.
CBA was using a blocking filter to screen abusive payments, but the bank’s deputy chief executive David Cohen told a Parliamentary committee last month that the filter was a limited tool and people could get around it.
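The limitation Cohen described can be seen in a minimal sketch of a keyword-based blocking filter. The word list and matching logic below are hypothetical, for illustration only – they are not CBA’s actual filter:

```python
# Hypothetical sketch of a simple keyword blocking filter, showing
# why such filters are easy to evade. The blocklist and matching
# rules are assumptions for demonstration, not CBA's implementation.

BLOCKLIST = {"hate", "idiot"}  # placeholder offensive terms

def should_block(description: str) -> bool:
    """Return True if a payment description contains a blocked word."""
    words = description.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# An exact match is caught:
print(should_block("I hate you"))   # True
# But trivial obfuscation slips through the filter:
print(should_block("I h@te you"))   # False
```

Because senders can defeat exact-match rules with misspellings or symbol substitutions, a model that scores whole messages rather than individual words – the approach CBA says it is now taking – can catch abuse that a static filter misses.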
The new AI model allows the bank to be “more targeted and proactive”.