Automated financial advice services are not as innovative as they seem. The idea of tapping into the financial advice demands of the mass market has been around since 2007-2008. However, as the first offerings were released, it became clear that customer acquisition may be a more significant issue than the robo-advisors anticipated. The automated financial advice market share continues to grow, but according to a study by Boring Money in the UK, it costs a robo-advisor around £500 to acquire a customer. In many cases, the revenues generated per customer do not cover such acquisition costs. At the same time, DIY investment incumbents such as Charles Schwab and Vanguard thrive in the robo-advice space by leveraging their existing customer bases. Realising that the traditional players, as well as wealthy clients, will remain in the picture at least in the medium term, more and more formerly B2C robo-advisors are adopting B2B models aimed at players with established customer bases.
Some traditional financial advisors believe that a digital offering is best suited to consumers who have the necessary financial knowledge and do not necessarily need human guidance in their investment journeys; hence, the two target markets do not intersect. It has also been suggested that millennials remain the target generation for automated financial advice providers. According to the study, one in five millennials would be comfortable with a robot managing their money, the largest share of any generation. However, this generation-based approach was widely criticised at the recent Robo Investing conference in London. The alternative approach, proposed by many delegates, was to prepare and target offerings based on different life situations rather than on the "average representative of the generation".
The GDPR covers all automated individual decision-making and profiling, which it defines as the automated processing of personal data to evaluate specific characteristics of an individual. Article 22 of the regulation adds further rules to protect consumers where businesses engage in solely automated decision-making that has legal or similarly significant effects. Where processing falls under Article 22, companies must commit to the following: (i) give individuals information about the processing; (ii) provide simple ways for them to request human intervention or challenge a decision; (iii) carry out regular checks to make sure that automated systems are working as intended. The first and third conditions gain particular importance given the growing popularity of machine learning methods. Some machine learning algorithms, such as artificial neural networks (ANNs), provide no grounds for interpreting or explaining their decisions. In the context of Article 22, this could mean that solely automated systems leveraging ANNs for human resources-related processes would be considered non-compliant. A high-quality model validation and interpretability framework has therefore become both a compliance and an ethical responsibility of European businesses.
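One way to satisfy such an interpretability requirement, even for an opaque model like an ANN, is a model-agnostic technique such as permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy degrades. The sketch below is a minimal, hypothetical illustration of the technique (the function names, the toy "black box" model, and the data are all invented for this example, not taken from any regulation or real system):

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic importance: the drop in the metric when a feature
    is shuffled. Works on any predict function, including an opaque ANN."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])          # destroy feature j's signal
            scores.append(metric(y, model(X_perm)))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy "black box": its decisions in fact depend only on feature 0.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
model = lambda X: (X[:, 0] > 0).astype(int)
accuracy = lambda y_true, y_pred: np.mean(y_true == y_pred)

imp = permutation_importance(model, X, y, accuracy)
# imp[0] is large; imp[1] and imp[2] are near zero, exposing
# which inputs actually drive the automated decision.
```

An audit built on such a report could support both condition (i), explaining to individuals which inputs drove a decision, and condition (iii), regularly checking that the system weighs the inputs it is supposed to.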