Banks and financial institutions are set up to detect fraud in ways which will tend to spot 'unusual' behavior and anomalous patterns.
Someone technically competent enough that their behavior differs noticeably from the typical user population, who is also taking actions traditionally seen as high risk (wire transfers, transactions from new accounts, ...), is going to trigger automated flags.
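To make that concrete, here is a minimal sketch of how such flagging might work: several weak signals, each individually innocuous, sum past a threshold. All names, weights, and thresholds are illustrative assumptions on my part, not any real institution's rules.

```python
# Hypothetical risk-scoring sketch: weights and thresholds are invented
# for illustration, not taken from any real fraud system.

def risk_score(transaction: dict) -> float:
    """Sum illustrative risk weights for one transaction."""
    score = 0.0
    if transaction.get("type") == "wire_transfer":
        score += 0.4  # traditionally high-risk action
    if transaction.get("account_age_days", 9999) < 30:
        score += 0.3  # new account
    if transaction.get("behavior_deviation", 0.0) > 2.0:
        score += 0.3  # behavior far from the typical user population
    return score

def should_flag(transaction: dict, threshold: float = 0.6) -> bool:
    return risk_score(transaction) >= threshold

# A technically competent newcomer easily trips several rules at once:
tx = {"type": "wire_transfer", "account_age_days": 5,
      "behavior_deviation": 3.1}
print(should_flag(tx))  # True
```

The point is that no single signal is damning; it's the additive combination that pushes an atypical-but-honest user over the line.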
A wise implementation will avoid feedback loops, introduce human review which assumes the best unless clear evidence of fraud is present, and will provide mechanisms for individuals to clear/restore their status.
That last resolution workflow, unfortunately, introduces further risks, especially if implemented halfheartedly. Training people that it's OK / expected to hand over additional personal details to keep using a service leads phishers to imitate the same requests, and, if that information is stored long-term, it also creates a high-value target database of personal data.
I don't know clear answers here - and yes, perhaps the author could have taken a slower, more gradual approach and avoided some of these problems, and maybe some of their anger stems from aspects of the financial system they don't understand.
But there is an ongoing and serious problem here with the way that we provide access to systems and services and then attempt to remediate concerns via automated means.
Source: am European and have lived in the U.S., thus have experienced being 'unusual' to many U.S. financial services, have experienced not understanding systems in a country new to me, and have also worked in fraud prevention and care about computer security and overall freedom and safety.