Do financial services firms need an AI ethics function?

AI has generated a tremendous amount of interest in the financial services (FS) industry in recent years, even though it has had its peaks (2018–2019) and troughs (2020 to mid-2022, when the focus and/or hype cycle shifted to crypto, operational resilience, or SPACs) during that relatively short period.

The multi-billion-dollar question about actual investment and adoption levels (particularly on the asset management side) remains open, and some anecdotal evidence suggests they may be lower than the surrounding hype would have us believe (I covered that to some extent in this post). However, the spectacular developments in generative AI dating back to late 2022 (even though decent-quality natural text and image [deepfake] generating systems started to emerge around 2019) indicate that “this time is [really] different”, because both the interest and the perceived commercial opportunities are simply too great to ignore.

However, as is being gradually recognised by the industry, there is more to AI than just technology. There are existing and upcoming hard and soft laws. There are cultural implications. There are risks to be taken into account and managed. There are reputational considerations. Lastly, there are ethics.

AI ethics experienced its “golden era” around 2015 – 2018, when just about every influential think tank, international agency, tech firm, government etc. released their takes on the topic, usually expressed as sets of broad, top-level principles which were benevolent and well-meaning on one hand, but not necessarily “operationalizable” on the other.

The arrival of generative AI underscored the validity of AI ethics; or at least underscored the validity of resuming a serious conversation about AI ethics in the business context.


Interestingly, AI ethics per se have never been particularly controversial. That is probably because the “mass-produced” ethical guidances and principles were so universal that it would be genuinely difficult to challenge them; an alternative explanation could be that they were seen as more academic than practical and thus were discounted by the industry as a moot point; or both. Hopefully in 2023, with generative AI acting as the catalyst, there is a consensus in the business world that AI ethics are valid and important and need to be implemented as part of the AI governance framework. The question then is: is a dedicated AI ethics function needed?

I think we can all agree that there is a growing number of ethical implications of AI within the FS industry: all aspects of bias and discrimination, representativeness, diversity and inclusion, transparency, treating customers fairly and other aspects of fairness, and the potential for abuse and manipulation – and all of those considerations existed pre-generative AI. Generative AI amplifies or adds its own flavour to existing concerns, including trustworthiness (hallucinations), discriminatory pricing, value alignment, and unemployment. It also substantially widens the grey area at the intersection of laws, regulations, and ethics, such as personal and non-personal data protection, IP and copyright, explainability, and reputational concerns.

The number of possible approaches seems limited:

  • Disregard ethics and risk-accept it;
  • Expand an existing function to cover AI ethics;
  • Create a dedicated AI ethics function.

Risk-acceptance is technically possible, but given the number of things that could go wrong with poorly governed AI and their reputational impact on the organisation, the risk/return trade-off does not appear very attractive.

The latter two options are similar, the difference being the “FTE-ness” of the ethics function. Creating a new role allows more flexibility as to where it would sit in the organisational structure; on the other hand, at the current stage of AI adoption there may not be enough AI ethics work to fill a full-time role. Consequently, I think that adding AI ethics to the scope of an existing role is more likely than the creation of a dedicated one.

In either case, the number of areas where AI ethics could sit appears relatively limited: Compliance, Risk, or Tech; ESG could be an option too. A completely independent function would have certain advantages as well, but it seems logistically unfeasible, especially in the long run.

Based on first-hand observations I think that the early days of “operationalizing” AI ethics within the financial services context will likely be somewhat… awkward. To what extent ethics are a part of the FS ethos is a conversation for someone else to have; I have seen and experienced organisations that were ethical and decent and others that were somewhat agnostic, so my personal experience is unexpectedly positive in that regard, but I might just have been lucky. I think that ESG is the first instance of the industry explicitly deciding to voluntarily make some ethical choices, and I think that ESG adoption may be a blueprint for AI ethics (I am also old enough to remember early days of ESG, and those sure did feel awkward).

While I have always been convinced of the validity of operationalised AI ethics, until very recently mine seemed to be a minority opinion. Perceptions only now appear to be changing, driven by the spectacular developments in generative AI and the ethical concerns accompanying them. Currently (mid-2023) we are in a fascinating place: AI has been developing rapidly for close to a decade, but the recent developments have been unprecedented even by AI standards. Businesses find themselves “outrun” by the technology and, in a bit of a panicked knee-jerk reaction (though also driven in part by genuine interest and enthusiasm, which currently abound), they have become much keener and more open-minded adopters than only a year or two ago. Now may be the time when the conversation on AI ethics reaches the point of operational readiness… or maybe not yet; time will tell. Sooner or later, though, mass implementation and operationalization of AI ethics will happen. It must.


Is the value of the CFA designation what it used to be?

When I was a fresh grad (i.e., in the 2010s), working in my first financial services job (Bloomberg), the CFA designation was really *something*. It was the “s***”. Much as the Master’s has more or less become the new Bachelor’s (or so my generation managed to convince ourselves), the CFA became *the* qualification to really stand out and distinguish oneself among the crowd of ambitious, driven, predominantly Russell Group-educated peers.

I was pleased with myself and my Master’s in Finance from London Business School for about one afternoon, until my Bloomberg colleague (who shall remain nameless) asked me, only half-jokingly, “so are you going to do the CFA now?”; and of course, that’s exactly what I did.

CFA then…

I was never much of a social butterfly, so the lame-but-true “cancel on friends again” definition of the CFA prep process didn’t entirely apply to me, but I have to say that, compared to studying for a Master’s, studying for the CFA was another level. With a Master’s or an MBA, the general principle is “if you do the work, you *will* pass”; sometimes clearing the admission hurdles (GMAT, resume screening, interviews etc.) felt more challenging than the actual degree course. With the CFA, the “default is pass” rule simply did not apply. In fact, it was the exact opposite: in order to maintain the exclusivity and value of the qualification, the CFA Institute had every incentive to make failure the default; and it did. Back in my day the pass rate for Levels 1 and 2 was below 50%, which meant that the average candidate did indeed fail the exam; Level 3 had a slightly higher pass rate, at around 50%.


The exams themselves (particularly Levels 1 and 2) were completely clinical. It did not matter who you were, who your parents were, where you studied, or even how you arrived at the answer. The answer was all that mattered, and seeing that it was a multiple-choice exam, there was no room for ambiguity either; you either got the right answer or you didn’t. The exams were arguably quite reductive in completely ignoring the reasoning and the thought process, but on the other hand they were the most meritocratic exams I had ever taken (save the GMAT). They were clinical, sterile, and anonymised, with a hint of cruelty; they cost me three years of my life; I loved them.

Once I had completed the qualification, I wasted no time updating my resume and expected my value in the professional / financial services job market to go up a significant notch. I can’t say for sure, but I think that is more or less what happened. Until…

…and now

Fast forward to 2023 and things feel quite different. Back in my day the exams used to be taken en masse, in a group setting, once a year on the first Saturday of June; the extreme intensity of the examination experience was considered very much a part of the package; a feature, not a bug. Back then it wasn’t just about *knowing* how to answer the questions, but also about *taking* and *passing* the exam in a sort of mid-19th-century Prussian setting; intellect counted on par with a tough stomach. The limited supply of exam dates (twice a year for Level 1 and once a year for Levels 2 and 3) helped maintain the value and exclusivity of the qualification; and the pain of failing a level (especially Level 2 or 3) was truly… soul-crushing.

Nowadays there are *four* exam date options for Level 1, three for Level 2, and two for Level 3. This does, at least theoretically, reduce the minimum time required to complete the qualification, but the real crux is that the hugely increased supply of exam dates instantly dilutes the exclusivity of the qualification, because the cost (in terms of stress, effort, and time) of retaking it, and then retaking it again if need be, is much lower.

Furthermore, the exams are no longer real-time, all-in-one group sessions; they are administered at computerised test centres. I appreciate that the pandemic forced the CFA Institute to get creative during two years of lockdowns, and I don’t begrudge that, but I’m not sold on keeping this as a permanent solution. If anything, it feels like a convenient way for the CFA Institute to rid itself of organising costly and logistically demanding “physical” exam sessions in favour of outsourced test centres. I took the GMAT at a computerised test centre and the CFA (all three levels) at a physical one; both exams were super-important to me, but the test conditions for the former were a piece of cake compared to the latter in terms of the overall intensity of the experience.

Then there is the question of the CFA ethics principles. They comprised some 20% of the curriculum back in my day. Those were (and presumably still are) perfectly good principles. The problem is that they never seem to have really taken hold in the mainstream financial industry, even though this was, I assume, the CFA Institute’s hope and plan. If anything, I think that over time those principles became even more obscure and effectively unknown outside the CFA circle.

Costs and benefits

Lastly, there is the ever-contentious issue of the annual designation fee (currently USD 299). That’s something I’ve always found somewhat problematic, and, increasingly, I don’t think I’m the only one. The golden rule for any qualification out there – starting with A-levels and ending with a Nobel – is that once you’ve earned it, it’s yours. With the CFA, you never really *own* it proper; you basically just lease it, like an SUV or a holiday condo. The fee is usually paid by employers, but that’s beside the point, really. If I have earned my right to those three letters after my name, then why exactly do I need to keep paying for them year after year? USD 299 is not nothing…

Only my closest loved ones and I know how much work I put into becoming Wojtek Buczynski, CFA, and how much it cost me (financially and otherwise; mostly otherwise). I am definitely keeping the designation for now, for those reasons alone; plus, I think it still benefits me professionally. That being said, I feel quite strongly that what was once extremely valuable has substantially devalued in recent years, lowering my ROI. For an investment qualification, I sure was hoping it would go up, or at least hold its value. But as they say: “past performance is not indicative of future returns”.


UK government AI regulation whitepaper review

On 29th March 2023 the UK government published its long-awaited whitepaper setting out its vision for regulating AI in the UK: A pro-innovation approach to AI regulation. It comes amid the generative AI hype, fuelled by the spectacular development of foundation models and the solutions built on them (e.g., ChatGPT).

The current boom in generative AI has served as a catalyst for a renewed discourse on AI in financial services, which in recent years had been somewhat muted, with the spotlight mostly on crypto, as well as SPACs, digital operational resilience, and new ways of working.

The whitepaper cannot be analysed without reference to another landmark AI regulatory proposal, i.e., the EU AI Act. The EU AI Act was proposed in April 2021 and is currently proceeding through the legislative process of the European Union. It is widely expected to be enacted in the second half of 2023 or in early 2024.


The parallels and differences between the UK whitepaper and the EU AI Act

The first observation is that the UK government whitepaper is just that: a list of considerations with an indication of a general “direction of travel”, but nowhere near the level of detail of the EU AI Act (not entirely surprising, given that the latter is a regulation and the former just a whitepaper). Both proposals (the UK one being a draft soft law and the EU one a draft hard law) recognise that AI will function within a wider set of regulations, both existing (e.g., liability or data protection) and new, AI-specific ones. The UK whitepaper acknowledges the very challenge of defining AI, something with which the initial draft of the EU AI Act caused notable controversy[1].

Both proposals offer general, top-level frameworks, with detailed regulation delegated to industry regulators. However, the EU AI Act comes with detailed and explicit risk-weighted rules and prohibitions, while the UK whitepaper stops short of offering any explicit guidelines in the initial phase.

The approach proposed by the UK government is to create a cross-sector regulatory framework rather than regulate the technology directly. The proposed framework centres around five principles:

  1. Safety, security, and robustness;
  2. Transparency and explainability;
  3. Fairness;
  4. Accountability and governance;
  5. Contestability and redress.

These principles are fairly standard; however, for certain systems and in certain contexts, some of them may be challenging to observe from the technical perspective. For example, a generative AI text engine may be difficult to query in terms of explaining why specific prompts generated certain outputs. I also note that these principles are high-level and largely non-technical, whereas the EU AI Act sets out not only similar principles (e.g., transparency) but also tangible requirements (e.g., data governance or record keeping).

The most unique aspect of the UK AI whitepaper is that it is targeted at individual sector regulators, rather than directly at users, manufacturers, or suppliers[2] of AI. The UK approach could be called “regulating the regulators” and differs not just from the EU AI Act but also from other soft[3] and hard[4] laws applicable to AI in financial services – all of which focus primarily on the operators and applications of AI directly.

The UK whitepaper states that it does not initially intend to be binding (“the principles will be issued on a non-statutory basis and implemented by existing regulators”), but anticipates doing so further down the line (“following this initial period of implementation, and when parliamentary time allows, we anticipate introducing a statutory duty on regulators requiring them to have due regard to the principles”). It reads like the principles are expected to “morph” from soft to hard law over time.

Despite being explicitly about AI, the paper remains neutral with respect to specific AI tools or applications such as facial recognition (“regulating the use, not the technology”). It is also one of very few regulatory papers to bring up the issue of AI autonomy, which I find very interesting.

In an uncharacteristically candid admission, the whitepaper notes that “It is not yet clear how responsibility and liability for demonstrating compliance with the AI regulatory principles will be, or should ideally be, allocated to existing supply chain actors within the AI life cycle. […] However, to further our understanding of this topic we will engage a range of experts, including technicians and lawyers.” In October 2020 the EU published a resolution on civil liability for AI[5]. The resolution advised strict liability for the operators of high-risk AI systems and fault-based liability for the operators of all other AI systems. It is important to note that the definitions of “high-risk AI systems” and “operators” in the resolution are not the same as those proposed in the subsequently published draft of the EU AI Act, which may lead to ambiguities. Furthermore, European Parliament resolutions are not legally binding. So on one hand we have the UK admitting that liability, as a complex matter, requires further research, and on the other the EU proposing a simple approach which may be somewhat *too* simple for a complex, multi-actor AI value chain.

In another candid reflection, the UK government recognises that “[they] identified potential capability gaps among many, but not all, regulators, primarily in relation to: AI expertise [and] organisational capacity”. It remains to be seen how UK regulators (particularly financial and finance-adjacent ones, i.e., the PRA, FCA, and ICO) respond to these comments, particularly considering the shortage of AI talent and heavy competition from tech firms and financials.

The conclusions of the whitepaper are consistent with observations within the FS industry – that while there is reasonably good awareness of the high-level laws and regulations applicable to AI (MiFID II, GDPR, the Equality Act 2010), there is no regulatory framework connecting them. There is also a perception of regulatory gaps, which upcoming AI regulations are expected to bridge.

There are both similarities and differences between the UK government whitepaper and the EU AI Act. The latter is much more detailed and fleshed out, putting the EU ahead of the UK in terms of legislative developments. The EU AI Act is risk-based, focusing on prohibited and high-risk applications of AI. The prohibitions are unambiguous, and this part of the Act is arguably rules-based, while the remainder is principles-based; the whole of the EU AI Act is outcomes-focused. The UK government AI whitepaper explicitly rejects the risk-based approach (one of the very few parts of the whitepaper that are completely unambiguous) and opts for a context-specific, principles-based, outcomes-oriented approach. The rejection of the risk-based approach reads like a clear rebuke of the EU approach.

The main parallel between the UK govt whitepaper and the EU AI Act is sector-neutrality. Both the EU AI Act and the UK govt whitepaper are meant to be applicable to all sectors, with detailed oversight to be delegated to respective sectoral regulators. We also need to be mindful that the primary focus of general regulations is likely to be applications of AI that may impact health, wellbeing, critical infrastructure, fundamental rights etc. Financial services – as important as they are – are not as critical as healthcare or fundamental rights.

Both laws are meant to complement existing regulations, albeit in different ways. The EU AI Act, as a hard law, needs to fit clearly and unambiguously within the matrix of existing regulations across various sectors: partly as a brand-new regulation and partly as a complement to existing ones (e.g. product safety or liability). The UK principles, as a soft law (at least initially), are meant to “complement existing regulation, increase clarity, and reduce friction for businesses”.

In what appears to be an explicit difference in approaches, the UK whitepaper states that it will “empower existing UK regulators to apply the cross-cutting principles”, and that “creating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators”. This stands in contrast with the proposed EU approach, which – despite being based on existing regulators – also anticipates the creation of a pan-EU European Artificial Intelligence Board (EAIB) tasked with a number of responsibilities pertaining to the implementation, requirements, advice, and enforcement of the EU AI Act.

However, the UK government proposes the introduction of some “central functions” to ensure regulatory coordination and coherence. The functions are:

  • Monitoring, assessment and feedback;
  • Support for coherent implementation of the principles;
  • Cross-sectoral risk assessment;
  • Support for innovators (including testbeds and sandboxes);
  • Education and awareness;
  • Horizon scanning;
  • Ensuring interoperability with international regulatory frameworks.

Even though the whitepaper does not offer details about the logistics of these central functions, they do appear similar in principle to what the EAIB would be tasked with. It does note that the central functions would initially be delivered by the government, with an option to deliver them independently in the long run. The whitepaper references the UK’s Digital Regulation Cooperation Forum (DRCF) – comprising the Competition and Markets Authority (CMA), Ofcom, the Information Commissioner’s Office, and the Financial Conduct Authority (FCA) – as an avenue for delivery.

The fundamental difference between the UK and EU approaches is enforceability: the UK whitepaper is (at least initially) a soft law, while the EU AI Act will be a hard law. However, it is reasonable for a regulator to expect that its guidances be followed, and to challenge regulated firms if they are not, which makes a soft law de facto almost a hard law. The EU AI Act has explicit provisions for monetary fines for non-compliance (up to EUR 30,000,000 or 6% of global annual turnover, whichever is higher[6] – even stricter than GDPR’s EUR 20,000,000 or 4% of global annual turnover, whichever is higher); the UK AI whitepaper has none. It is entirely plausible that when the UK whitepaper evolves from soft to hard law, additional provisions will be made for monetary fines, but as it stands right now, this aspect is missing.
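
To make the “whichever is higher” mechanics concrete, here is a minimal illustrative sketch (my own, with a made-up turnover figure; not taken from either regulation’s text):

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Maximum fine: the greater of a fixed amount and a share of global annual turnover."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Hypothetical firm with EUR 2bn global annual turnover:
turnover = 2_000_000_000
print(f"EU AI Act cap: EUR {fine_cap(turnover, 30_000_000, 0.06):,.0f}")  # EUR 120,000,000
print(f"GDPR cap:      EUR {fine_cap(turnover, 20_000_000, 0.04):,.0f}")  # EUR 80,000,000
```

For any firm large enough for the percentage to bite, it is the turnover-based cap that matters; the fixed amount is effectively a floor for smaller firms.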

Conclusion

Overall, it is somewhat challenging to provide a clear (re)view of the whitepaper. It contains a lot of interesting recommendations, although most, if not all, have appeared in previous AI regulatory guidances worldwide and have been tried and tested elsewhere. The whitepaper’s focus on regulators rather than end-users / operators of AI is both original and very interesting.

The time for AI regulation has come. I expect a lot of activity and discussions around it in the EU, UK, and beyond. I also expect the underlying values and principles to be relatively consistent worldwide, but – as the top-level comparison of the UK and EU proposals has shown – similar principles can be promoted, implemented and enforced within substantially different regulatory frameworks.

 

________________________________________________________

(1) https://www.wired.com/story/artificial-intelligence-regulation-european-union/

(2)  The EU AI Act introduces a catch-all concept of “operator” which applies to all the actors in the AI supply chain.

(3)  European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’, 2020; European Commission, ‘Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics’, 2020; European Commission, ‘Liability for Artificial Intelligence and Other Emerging Digital Technologies’, 2019; The Ministry of Economy, Trade and Industry (‘METI’), ‘Governance Guidelines for Implementation of AI Principles ver. 1.0’, 2021; Hong Kong Monetary Authority (‘HKMA’), ‘High-Level Principles on Artificial Intelligence’, 2019; The International Organisation of Securities Commissions (‘IOSCO’), ‘The Use of Artificial Intelligence and Machine Learning by Market Intermediaries and Asset Managers’, https://www.iosco.org/library/pubdocs/pdf/IOSCOPD658.pdf ; Financial Stability Board, note 9 above; Bundesanstalt für Finanzdienstleistungsaufsicht (‘BaFin’), ‘Big Data Meets Artificial Intelligence – Results of the Consultation on BaFin’s Report’, 21 March 2019, https://www.bafin.de/SharedDocs/Veroeffentlichungen/EN/BaFinPerspektiven/2019_01/bp_19-1_Beitrag_SR3_en.html ; Information Commissioner’s Office, ‘Guidance on the AI Auditing Framework Draft Guidance for Consultation’, 2020; Information Commissioner’s Office, ‘Explaining Decisions Made with AI’, 2020; Personal Data Protection Commission, ‘Model Artificial Intelligence Governance Framework (2nd ed)’, Singapore, 2020.

(4) Directive 2014/65/EU of the European Parliament and of the Council of 15 May 2014 on Markets in Financial Instruments and Amending Directive 2002/92/EC and Directive 2011/61/EU; European Commission, ‘RTS 6’, 2016; Financial Conduct Authority, ‘The Senior Managers and Certification Regime: Guide for FCA Solo-Regulated Firms’ (‘SM&CR’), July 2019, https://www.fca.org.uk/publication/policy/guide-for-fca-solo-regulated-firms.pdf ; Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data (General Data Protection Regulation), 27 April 2016, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32016R0679&from=EN; European Parliament, ‘Proposal for an Artificial Intelligence Act’, 2021; People’s Bank of China, ‘Guiding Opinions of the PBOC, the China Banking and Insurance Regulatory Commission, the China Securities Regulatory Commission, and the State Administration of Foreign Exchange on Regulating the Asset Management Business of Financial Institutions’, 27 April 2018.

(5)  European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL))

(6)  Interestingly, poor data governance could trigger the same fine as engaging with outright prohibited practices.


Ron Kalifa 2021 UK Fintech Review

Having delivered the 2020 UK Budget, the Chancellor of the Exchequer Rishi Sunak[1] commissioned the seasoned banking and FinTech executive Ron Kalifa OBE to conduct an independent review of the state of the UK FinTech industry. The objective of the review was not merely an assessment of the current state of UK FinTech, but also the development of a set of recommendations to support its growth and adoption, as well as its global position and competitiveness.

On that last point, measuring the UK’s (or any other country’s) position in FinTech has always been a combination of science and art. While certain financial metrics are fairly unambiguous and comparable (e.g. the number or total value of IPOs, or the total value of M&A transactions), the ranking of a given country in FinTech is more nuanced. Should the total value of all FinTech funding / deals be used on an absolute basis? Or perhaps adjusted per capita? What about the legal and regulatory environment – should it be captured in the score as well?

The abovementioned considerations are usually pondered by researchers in specialist publications or at market data companies, while most people in financial services simply work out some sort of internal weighted average of the above – and usually arrive at the same Top 3 (the US, the UK, and Singapore) fairly consensually.

Lastly, a FinTech industry is unlikely to flourish without world-class educational institutions and a certain entrepreneurial culture. UK universities are among the very best in the world across all disciplines in all global university rankings. The entrepreneurial culture, as an intangible, is challenging to quantify, but most people will agree that London is full of driven, risk-taking, entrepreneurial people from all over the world.


One way or another, the UK’s FinTech powerhouse status is indisputable, and the Chancellor’s plan to build on it makes perfect sense. Brexit is the elephant in the room, because by leaving the single market, the UK created a barrier to the free movement of talent where there was none before. Brexit also arguably sent a less-than-positive message to the traditionally globalised / internationalised FinTech industry at large. On the other hand, there is some chance that Brexit could enable UK FinTechs to pursue a more global (as opposed to EU-focused) expansion, although this claim can and will only be verified over time.

To his credit, Ron Kalifa approached his review from the “to do list” perspective of actions and recommendations rather than analysis per se. As a result, his document is highly actionable as well as reasonably compact.

The Kalifa Review is broken into five main sections:

  1. Policy and Regulation – which focuses on creating a new regulatory framework for emerging technologies and ensuring that FinTech becomes an integral part of UK trade policy (interestingly, this part includes a section on the regulatory implications of AI as regards the PRA and FCA rules; this is the highest-profile official UK publication to address these considerations that I’m aware of. Prior to the Kalifa Review, AI regulation in financial services had been discussed at length from the GDPR perspective in ICO publications, but those do not directly feed into government policies).
  2. Skills and Talent – interestingly, the focus here is *not* on competing for Oxbridge / Russell Group graduates, nor on expanding the portfolio of finance and technology Bachelor’s and Master’s courses at UK universities. Instead, the focus is on retraining and upskilling the existing adult workforce, mostly in the form of low-cost short courses. This aligns with broader civilisational / existential challenges such as the disappearance of the “job for life” and the need for lifelong education in an increasingly complex and interconnected world. Separately, Kalifa proposes a new category of UK visa for FinTech talent to ensure the UK’s continued access to high-quality international talent, which represents approx. 42% of the FinTech workforce.
  3. Investment – in addition to standard investment incentives such as tax credits, the Kalifa Review recommends improving the UK IPO environment as well as creating a (new?) global family of FinTech indices to give the sector greater visibility (this is an interesting recommendation for anyone who has ever worked for a market data vendor; indices are normally created in response to market demand, and there are already FinTech indices within existing, established market index families. Creating a new family of indices is something different altogether).
  4. International Attractiveness and Competitiveness.
  5. National Connectivity – this point is particularly interesting, as it seems solely focused on countering London-centric thinking and recognising and strengthening other FinTech hubs across the UK.

The Kalifa Review makes a crucial sixth recommendation: to create a new organisational link, which the Review proposes to call the Centre for Finance, Innovation, and Technology (CFIT). The Review is slightly vague on how it envisions the CFIT structurally and organisationally (it mentions that it would be a public / private partnership but does not go into further detail). CFIT is the one recommendation of the Kalifa Review that seems more of a vision than a fully fleshed-out idea, but Ron Kalifa himself spoke about it prominently in his live appearances and gave the impression that CFIT would be the organisational structure upon which the delivery of his recommendations would largely hinge.

Upon release, the Kalifa Review was met with a great deal of interest from the financial services industry, as well as from the legal profession, policymakers, business leaders, and academics. Ron Kalifa made multiple appearances in different online presentations on the topic, and most brand-name law firms carefully analysed and summarised his report. That, however, was the visible and easier part. The real question is to what extent the recommendations made in the Kalifa Review will be reflected in government policies in the years to come.

You can read the report in its entirety here.

__________________________________________

(1) Rishi Sunak took over the job of the Chancellor of the Exchequer from his predecessor, Sajid Javid, less than a month prior to the publication of Budget 2020. Consequently, it can be debated whether the FinTech report was Rishi Sunak’s original idea, or one he inherited.



LSE: The Technological Revolution in Financial Services 23-Apr-2021

The term “FinTech” currently enjoys truly global recognition. A definition is much trickier, though: a bit like art or artificial intelligence, everyone has a pretty good intuitive / organic understanding of it, while a formal, universally agreed-upon definition remains somewhat elusive. Given that this blog has an entire section dedicated to FinTech, I will proceed on the assumption that all of us have at least that intuitive understanding of the term.

It would be a truism to say that FinTechs are disrupting financial services on multiple levels, such as:

  • Convenience (e.g. branch-less opening of an account using a camera and AI-powered KYC software in the background; availability of all services on a mobile);
  • Cost competition vis-à-vis established models (think Robinhood’s commission-free share dealing vs. the standard USD or GBP 6–10 fee per single trade [for retail clients] at established brokerages or investment firms; think Revolut’s zero-fee [or nearly zero-fee] FX transactions at interbank rates);
  • Innovativeness and fresh thinking;
  • Inclusion and reduction of barriers to entry (e.g. by allowing access to investment products below the once-standard minimum thresholds of GBP 1,000 or more or by making global remittances easier and cheaper);
  • Smoother and sleeker user experience;
  • Greater customisation of products to individual client’s needs;
  • …and many more…

On a more personal note, FinTech and many of its abovementioned benefits (chief among them cost reduction and innovativeness) are like the future in that famous William Gibson quote: they’re here, but they’re not evenly distributed. My primary home (London) is one of the FinTech capitals of the world; we all take it for granted that all the latest innovations will be instantly available in London, and Londoners are (broadly generalising) early – if not very early – adopters. I instantly and very tangibly benefitted from all of Revolut’s forex transaction features, because compared to what I had to contend with from my brand-name high-street bank it was a whole ‘nother level of convenience (one card with multiple sub-accounts in different currencies) and cost (all those ridiculous, extortionate forex transaction fees were gone, reduced to near zero[1]). My secondary home (Warsaw) is neither a global nor even a European FinTech capital – consequently, it’s a markedly different FinTech environment out there. Challenger banks are close to non-existent; as for other FinTechs, they are clustered in payday lending (whose use of technology and innovation is notorious rather than beneficial), forex trading, and online payments (the one area of some benefit, though sometimes the commissions charged by the payment operators behind the scenes seem higher than they should be).

The speakers invited by the LSE (Michael R. King and Richard W. Nesbitt) have recently concluded comprehensive industry research, summarised in their book “The Technological Revolution in Financial Services: How Banks, FinTechs and Customers Win Together”. They shared some of their insights and conclusions.

Firstly, they stated that technology itself is not a strategy (which, depending on interpretation, might run a bit counter to the iconic “we are a technology company with a banking license” declarations of many CEOs). Technology is a tool, which may be part of a strategy.

Secondly, technology itself does not provide a sustained competitive advantage, because it is widely available and can be copied by competitors. I found this observation both interesting and counterintuitive. I have always thought that for FinTechs technology *defines* their competitive advantage, but perhaps the operative word here is “sustained”. It’s one thing to come up with a disruptive solution, but if it is widely copied and becomes the new normal, then indeed the initial advantage is eroded. Still, personal observations lead me to challenge this statement: Revolut introduced zero-fee forex conversions some years ago (I joined in 2018), and yet many high-street banks still charge conversion fees and use terrible, re-quoted rates. They could copy Revolut’s idea in an instant, and yet they choose not to. Another example: if technology does not provide FinTechs’ sustained competitive advantage, then how come challengers Revolut, Monzo, and Starling are enjoying growth in customer numbers and strong brand recognition, while NatWest’s Bo was but a blip on the radar?

King and Nesbitt further argue that the biggest barrier in financial services is not technology or even regulation, but access to customers. Again, I acknowledge and respect that conclusion, but I can’t fully agree with it. The brand-name banks have generated and sat on enormous troves of data for decades, and only PSD2 compelled them to make this data widely available – yet Revolut and Monzo succeeded without resorting to Open Banking as their main selling point; they just offered innovative, sleek products. Countless remittance companies entered the market and succeeded not because they gained access to data Western Union previously kept to itself – they just offered better rates.

Another major component of the “FinTech equation” is, according to the authors, trust in financial services (trust concerning both data and privacy). They argue that the post-2008 erosion of that trust was what paved the way for FinTechs. I agree with the erosion-of-trust part, but it was neither data nor privacy breaches that led to the public’s distrust during the 2008 financial crisis: it was imprudent management of money and too-creative financial innovation (back then sometimes labelled “financial engineering”, even though this term is a terrible misnomer, because finance – no matter how badly some people in the industry want it to be – is not an exact science; it’s a social science).

On the social side, FinTech (expectedly or not) may lead to substantial benefits in terms of financial inclusion of the underbanked (e.g. people without a birth certificate and / or any proof of ID) and / or previously marginalised groups (e.g. women). One of the panellists, Ghela Boskovich, brought up India’s Aadhaar system, which allows everyone to obtain an ID number based purely on their biometric data, and Kenya’s M-Pesa mobile payments system (which does not even require a smartphone – an old-school mobile is sufficient), which opened the financial system to women in ways previously unavailable.

On the more traditional side, the authors concluded that regulation and risk management remain the pillars of financial services. On cybersecurity, they advocated switching from the incumbents’ thinking of “if we are hacked” to the FinTechs’ thinking of “when we are hacked”, with prompt and transparent disclosure of cybersecurity incidents.

King and Nesbitt concluded that, in the end, the partnership model between established banks and FinTech start-ups will be the winning combination. It is a very interesting thought. Many (perhaps most) FinTechs need these partnerships throughout most of their journey: from incubators and accelerators (like Barclays Rise in London) through flagship / strategic client relationships (whereby one established financial institution becomes a FinTech’s primary client, and the FinTech effectively depends on it for its survival). Sometimes established financials end up acquiring FinTech start-ups outright, though it doesn’t happen nearly as often as in the tech industry.

Overall, King, Nesbitt, and their esteemed guests gave me a huge amount of food for thought around an area of great interest to me. I may or may not fully agree with some of their conclusions, and that doesn’t matter much – we will see how FinTech evolves in the coming years, and I’m quite certain its evolution will take some twists and turns few have foreseen. The really important thing for me is inclusion, because I see it as a massive and undeniable benefit.

_______________________________________

1. Disclosure: I am not and have never been an employee of Revolut. This is not an advertorial or any form of promotion.



Humanity+ Festival 07/08-Jul-2020: Max More on UBI

Sat 05-Sep-2020

Over the past two years or so I have become increasingly interested in the transhumanist movement. Transhumanism has a bit of a mixed reputation in “serious” circles of business and academia – sometimes patronised, sometimes ridiculed, occasionally sparking some interest. With its audacious goals of (extreme) lifespan / healthspan extension, radical enhancement of physical and cognitive abilities, all the way up to brain uploading and immortality, one can kind of understand where the ridicule is coming from. I can’t quite comprehend the prospect of immortality, but it’d be nice to have the option. And I wouldn’t think twice before enhancing my body and mind in all the ways science wishes to enable.

With that mindset, I started attending (first in person, then, when the world as we knew it ended, online) transhumanist events. Paradoxically, the Covid-19 pandemic enabled me to attend *more* events than before, including the Humanity+ Festival. Had it been organised in a physical location, it would likely have been in the US; and even if it had been held in London, I couldn’t have taken two days off work to attend – I save my days off for my family. I was very fortunate to be able to attend it online.

I attended a couple of fascinating presentations during the 2020 event, and I will try to present them in individual posts.

I’d say that – based on the way it is often referred to as a cult – transhumanism is currently going through the first of Schopenhauer’s three stages of truth. The first stage is ridicule, the second is violent opposition, and the third is being accepted as self-evident. I (and the sci-fi-loving kid inside me) find many transhumanist concepts interesting. I don’t concern myself too much with how realistic they seem today, because I realise how many self-evident things today (Roomba, self-driving cars, pacemakers, Viagra, deepfakes) seemed completely unrealistic, audacious, and downright crazy only a couple of years / decades ago. In fact, I *love* all those crazy, audacious ideas which focus on possibilities and don’t worry too much about limitations.

Humanity+ is… what is it, actually? I’d say Humanity+ is one of the big players / thought leaders in transhumanism, alongside David Wood’s London Futurists and probably many other groups I am not aware of. Humanity+ is currently led by the fascinating, charismatic, and – it just has to be said – stunning Natasha Vita-More.

The transhumanist movement is a tight-knit community (I can’t consider myself a member… I’m more of a fan) with a number of high-profile individuals: Natasha Vita-More, Max More (aka Mr. Natasha Vita-More, aka the current head of the cryogenic preservation company Alcor), David Wood, Jose Cordeiro, Ben Goertzel. They are all brilliant, charismatic, and colourful individuals. As a slightly non-normative individual myself, I suspect their occasionally eccentric ways can sometimes work against them in mainstream academic and business circles, but I wouldn’t have them any other way.

During the 2020 event Max More talked about UBI (Universal Basic Income). I quite like the idea of UBI, but I appreciate there are complexities and nuances to it, many of which I’m probably not aware of. Max More has definitely given it some thought; he presented some really interesting ideas and posed many difficult questions. For starters, I liked the reframing of UBI as a “negative income tax” – the very term “UBI” sends many thought leaders and politicians (from more than one side of the political spectrum) into panic mode, but “negative income tax” sounds just about as capitalist and neoliberal as it gets. More amused the audience with the realisation (which, I believe, was technically correct) that Donald Trump’s USD 1,200 cheques for all Americans were in fact UBI (who would have thought that, of all people, it would be Donald Trump who implemented UBI on a national scale first…? By the way, it could be argued that with their furlough support Boris Johnson and Rishi Sunak did something very similar in the UK – though those cheques were not for everyone, only for those who couldn’t work due to lockdown, so it was more like a Guaranteed Minimum Income).
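
A minimal sketch of that reframing (the USD 12,000 guarantee and 30% flat rate are my own illustrative numbers, not figures from More’s talk): a negative income tax is arithmetically identical to a universal grant combined with a flat income tax.

```python
def negative_income_tax(income: float, guarantee: float = 12_000, rate: float = 0.30) -> float:
    """Net transfer: positive below the break-even income (guarantee / rate), negative above it."""
    return guarantee - rate * income

def ubi_plus_flat_tax(income: float, ubi: float = 12_000, rate: float = 0.30) -> float:
    """Everyone receives the UBI; all earned income is taxed at one flat rate."""
    return ubi - rate * income

for income in (0, 20_000, 40_000, 80_000):
    # The two framings produce the same net transfer at every income level.
    assert negative_income_tax(income) == ubi_plus_flat_tax(income)
    print(f"income {income:>6}: net transfer {negative_income_tax(income):>8,.0f}")
```

At the break-even income (guarantee / rate, USD 40,000 here) the transfer nets to zero; below it the taxpayer receives money – hence the “negative” tax.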

The questions raised by More were really thought-provoking:

  • Will / should UBI be funded by taxing AI?
  • Should it be payable to immigrants? Legal and illegal?
  • Should UBI be paid per individual or per household?
  • What about people with medical conditions requiring extra care? They would require UBI add-ons, which undermines the whole concept.
  • Should people living in metropolitan cities like London be paid the same amount as people living in the (cheaper) countryside?
  • How should runaway inflation be prevented?

Lastly, More suggested some alternatives to UBI which (in his view) could work better. He proposed the idea of a universal endowment (a sort of universal inheritance, but without an actual wealthy relative dying) for everyone. It wouldn’t be a cash lump sum (which so many people – myself included – could probably spend very quickly and not too wisely), but a more complex structure: bankruptcy-protected stock ownership. The idea is very interesting – wealthy people (and even not-so-wealthy people) don’t necessarily leave cash to their descendants: physical assets aside (real estate etc.), leaving shares, bonds, and other financial assets in one’s will is relatively common. Basically, the wealthier the benefactor, the more diverse the portfolio of assets they leave behind. The concept of bankruptcy-protected assets is not new and exists in modern law (e.g. US Chapter 13 bankruptcy allows the bankrupting party to keep their property), but it sounded to me like More meant it in a different way.

If More meant his endowment as a market-linked financial portfolio whose value cannot go down – well, this can technically be done (long equity plus long put options on the entire portfolio), but only to a point. Firstly, it would be challenging to do on a mass scale (the supply of the required amount of put options may or may not be a problem, but their prices would likely go up so much across the board that they would have a substantial impact on the value and profitability of the entire portfolio). Secondly, one cannot have a portfolio whose value can truly only go up – it wouldn’t necessarily be the proverbial free lunch, but it would definitely be a free starter. Put options have expiry dates (all options do), and their maturity is usually measured in months, not years. Expiring options can be replaced (rolled) with longer-dated ones, but this comes at a cost. Perpetual downside protection of a portfolio with put options could erode its value over time (especially in adverse market conditions, i.e. when the underlying assets’ values are not going up).
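
To illustrate the roll-cost drag, here is a minimal simulation sketch; the 4% annual premium, return, and volatility parameters are entirely my own illustrative assumptions, not figures from the talk:

```python
import random

def protected_vs_unprotected(start: float = 100_000, years: int = 20,
                             put_cost: float = 0.04, mu: float = 0.06,
                             sigma: float = 0.18, seed: int = 1):
    """Roll a one-year at-the-money put annually: losses are floored at the strike,
    but the premium is paid every year whether or not the protection pays off."""
    rng = random.Random(seed)
    protected = unprotected = start
    for _ in range(years):
        r = rng.gauss(mu, sigma)                      # one year's equity return
        unprotected *= 1 + r
        premium = protected * put_cost                # cost of this year's put
        protected = max(protected * (1 + r), protected) - premium
    return protected, unprotected

protected, unprotected = protected_vs_unprotected()
print(f"protected: {protected:,.0f} vs unprotected: {unprotected:,.0f}")
```

In flat or falling markets the floor never lets the portfolio grow, while the premium keeps compounding against it – which is exactly the erosion described above.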

If More had something even more innovative in mind, then it could require rewriting some of the financial markets rulebook (why would anyone invest the old-fashioned way, without bankruptcy protection, when everyone would have their bankruptcy-protected endowments?). I’m not saying it’s never going to happen – in fact I like the idea a lot (and I realise how different my life could have been, materially, had I received such an endowment when I was entering adulthood) – I’m just pointing out practical considerations to address.

And one last thing: speaking from personal experience, I’d say that this endowment *definitely* shouldn’t be paid in full upon reaching the age of 18 (at least not for guys… I was a total liability at that age; I’d have squandered any money in a heartbeat); nor at 21. *Maybe* at 25, but frankly, I think a staggered release from one’s mid-20s to mid-30s would work best.



Utilisation of AI / Machine Learning in investment management: views from CFA Institute, FCA / BoE, and Cambridge Judge Business School

Mon 31-August-2020

I spent the better part of the past 18 months researching Machine Learning in equity investment decision-making for my PhD. During that time two high-profile industry surveys and one not-so-high-profile survey were published (by the FCA / BoE, the CFA Institute, and Cambridge Judge Business School, respectively). They provided valuable insight into the degree of adoption / utilisation of Artificial Intelligence in general, and Machine Learning in particular, in the investment management industry.

Below you will find a brief summary of their findings as well as some critique and discussion of individual surveys.

My research into ML in the investment management industry delivered some unobvious conclusions:

  • The *actual* level of ML utilisation in the industry is (as of mid-2020) low (if not very low).
  • There are some areas where ML is uncontroversial and essentially a win/win for everyone – chief among them anti-money laundering (AML), which I have discussed a number of times in meetups and workshops like this one [link]. Other areas include chatbots, sales / CRM support systems, legal document analysis software, and advanced cybersecurity.
  • There are some areas where using ML could do more harm than good: recruitment or personalised pricing (the latter arguably not being very relevant in investment management).
  • There is curiosity, openness, and appreciation of AI in the industry, but practicalities such as operational and strategic inertia on one hand and regulatory concerns on the other stand in the way. This is neither particularly surprising nor unexpected, and the industry’s attitude towards the situation is stoical. Investment management was once referred to as “glacial” in its adoption of new technologies, but I think the industry has made huge progress in the past decade or so. I think that AI / ML adoption will accelerate, much as cloud adoption has in recent years.
  • COVID-19 may (oddly) accelerate the adoption of ML, driven by competitive pressure, thinning margins (which started years before COVID-19), and an overall push towards operational (and thus financial) efficiencies.

I was confident about my findings and conclusions, but I welcomed the three industry publications, which between them surveyed hundreds of investment managers. These reports were in a position to corroborate (or disprove) my conclusions from a more statistically significant perspective.

So… Was I right or was I wrong?

The joint FCA / BoE survey (conducted in Apr-2019, with the summary report[1] published in Oct-2019) covered the entirety of the UK financial services industry, including but not limited to investment management. It was the first (chronologically) comprehensive publication to conclude that:

  • The investment management industry, as a subsector of the financial services industry, has generally low adoption of AI compared to, for example, banking;
  • The predominant uses of AI in investment management lie in areas outside of investment decision-making (e.g. AML). Consequently, many investment management firms may say “we use AI in our organisation” and be entirely truthful in saying so, while what the market and the general public infer from such general statements may be much wider and more sophisticated applications of the technology than actually exist.

The CFA Institute survey was conducted around April and May 2019 and published[2] in Sep-2019. It was more investment-management-centric than the FCA / BoE publication. Its introduction states unambiguously: “We found that relatively few investment professionals are currently exploiting AI and big data applications in their investment processes”.

I consider one of its statistics particularly relevant: of the 230 respondents who answered the question “Which of these [techniques] have you used in the past 12 months for investment strategy and process?”, only 10% chose “AI / ML to find nonlinear relationship or estimate”. I believe that even that low 10% figure represented a self-selected group of respondents, who were more likely to employ AI / ML in their investment functions than those who decided not to complete the survey.
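
A quick back-of-the-envelope illustration of that self-selection point (my own arithmetic, not from the report): if the respondents who skipped the question were less likely to use AI / ML, the population-wide rate could sit anywhere between the naive 10% and a much lower floor.

```python
respondents_answering = 230
share_using_ai_ml = 0.10
total_respondents = 734

users = round(respondents_answering * share_using_ai_ml)  # ~23 firms
# Floor: assume none of the ~504 respondents who skipped the question use AI / ML.
floor_rate = users / total_respondents

print(f"{users} users; naive rate {share_using_ai_ml:.0%}, floor {floor_rate:.1%}")
# -> 23 users; naive rate 10%, floor 3.1%
```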

Please note that when respondents confirm that their firms use AI / ML in investment decision-making (or even in the broader investment process), it does not mean that *all* of their firms’ AUM are subject to this process – only some fraction will be. My educated presumption is that this fraction is likely to be low.

Please also note that both the FCA / BoE and CFA Institute reports relied on *self-selected* groups of respondents. The former is based on responses from 106 firms out of the 287 the survey was sent to (a response rate of roughly 37%); in the CFA Institute report, 230 of the 734 total respondents (roughly 31%) answered the particular question of interest to me.

The Cambridge Judge Business School survey report[3] (published in Jan-2020) strongly disagrees with the two reports above. It concludes that “AI is widely adopted in the Investment Management sector, where it is becoming a fundamental driver for revenue generation”. It also states that “59% of all surveyed investment managers are currently using AI in their investment process [out of which] portfolio risk management is currently the most active area of AI implementation at an adoption rate of 61%, followed by portfolio structuring (58%) and asset price forecasting (55%)”. I believe the Cambridge results are driven by the fact that the survey combined FinTech start-ups and incumbents without revealing the percentage weights of each in the investment management category. In my experience within the investment management industry, the quotes above make sense only in a sample dominated by FinTechs (particularly the first statement, which I strongly disagree with on the basis of my professional experience and industry observations). I consider lumping FinTechs’ and incumbents’ results into one survey unfortunate, given the extreme differences between these types of organisations.

That Cambridge Judge Business School would publish a report containing odd findings does not strike me as particularly surprising. It is, frankly, not uncommon for academics to get so detached from the underlying industry that their conclusions stand at odds with observable reality. However, the CJBS report was co-authored by Invesco and EY, which I find quite baffling. Invesco is a brand-name investment management firm with USD 1+ Tn in AUM, which puts it in the Tier 1 / “superjumbo” category size-wise. I am not aware of it being at the forefront of cutting-edge technologies, but as is often the case with US-centric firms, I may simply lack sufficiently detailed insight; Invesco’s AUM certainly seem sufficient to support active research into, and implementation of, AI. One way or another, Invesco should know better than to sign off on a report with questionable conclusions. EY is very much at the forefront of cutting-edge technologies (I know that from personal experience), so for them to sign off on the report is even more baffling.

Frankly, the Cambridge Judge report fails to impress and fails to convince (me). My academic research and industry experience (including extensive networking) are fully in line with FCA / BoE’s and CFA Institute’s reports’ findings.

The fact that AI adoption in investment management stands at a much more modest level than the hype would have us believe may be slightly disappointing, but isn’t that surprising. It goes to show that AI, as a powerful, disruptive technology, is being adopted with caution, which isn’t a bad thing. There are questions regarding the regulation applicable to AI which need to be addressed. Lastly, business strategies take time (particularly for larger investment managers), and at times the technology develops faster than the business can keep up. Based on my experience and observations of cloud adoption (and the lessons seemingly learned by the industry), I am (uncharacteristically) optimistic.

[1] https://www.fca.org.uk/publication/research/research-note-on-machine-learning-in-uk-financial-services.pdf

[2] https://www.cfainstitute.org/-/media/documents/survey/AI-Pioneers-in-Investment-Management.ashx

[3] https://www.jbs.cam.ac.uk/wp-content/uploads/2020/08/2020-ccaf-ai-in-financial-services-survey.pdf



Nerd Nite London – AI to the rescue! How Artificial Intelligence can help combat money laundering

15-Apr-2020

In April 2020, at the apex of the UK lockdown, I had the pleasure of being one of three presenters at an online edition of Nerd Nite London. Nerd Nite is a wildly popular global meetup series, with multiple regional chapters. Each chapter is run by volunteers, and the proceeds from ticket sales (after costs) go to local charities. In this sense, the lockdown did us an odd favour: normally Nerd Nites are organised in pubs, so there is a venue rental cost. This time the venues were our living rooms, so pretty much all the money went to a local foodbank.

My talk covered a topic close to my heart (and mind!): the potential for AI to dramatically improve anti-money laundering efforts in financial organisations. You can find the complete recording below.

Enjoy!



London Business School Energy Club presents: the renewables revolution

Thu 26-Sep-2019

As an LBS alum (or is it “alumnus”? I never know…) I am a part of a very busy e-mail distribution list, connecting tens of thousands of LBS grads worldwide. LBS, its clubs, alumni networks etc. regularly organise different events, and I make an active effort to attend one at least every couple of months. I went to “the business of sustainability” a couple of months ago, so the upcoming “the renewables revolution” organised by the LBS Energy Club (and sponsored by PwC) was an easy choice.

Renewable energy is not a controversial topic in its own right (unless you’re a climate change denier or a part of the fossil fuel lobby, especially on the coal side). It’s controversial along the lines of disruption of powerful, established, entrenched industries (mostly mining and petrochemicals), and also along the lines of disruption of life(style) as we know it. Most of us in the West (the proverbial First World, even if it doesn’t feel like one very often) want to live green, sustainable, environmentally-friendly lifestyles… as long as the toughest environmental sacrifice is ditching a BMW / Merc / Lexus etc. for a Tesla, and swapping paper tissues for bamboo-based ones (obviously I am projecting here, but I don’t think I’m that far off the mark). Us Westerners (if not “we mankind”, quoting Taryn Manning’s character from “Hustle & Flow”) love to consume, love the ever-expanding choices, love all the conveniences we can afford – the prospect of cutting down on hot water, not being able to go on overseas holidays once or twice a year, or not replacing our mobiles whenever we feel like it, is an unpleasant one. Renewables, with their dependency on weather (wind, solar) and generally less abundant (or at least less easily and immediately available) output, are an unpleasant reminder that the time of abundance (when, quoting Michael Caine’s character from “Interstellar”, “every day felt like Christmas”) might be coming to an end.

Furthermore, even for a vaguely educated Westerner like myself, renewables are a source of certain cognitive dissonance. On one hand we have several consecutive hottest years on record, floods, wildfires, disrupted weather patterns, environmental migrants, the prospect of an ice-free Arctic Ocean, Extinction Rebellion etc. – on the other hand we have seemingly very upbeat news like “Britain goes week without coal power for first time since industrial revolution”, “Fossil fuels produce less than half of UK electricity for first time”, or “Renewable electricity overtakes fossil fuels in the UK for first time”. So in the end, I don’t know whether we’re turning the corner as we speak, or not.

There is no shortage of credible statistics out there – but it’s quite a challenge for a non-energy expert to interpret them, not least because the denominators differ: renewables’ share of *electricity generation* is much higher than their share of *total energy production*, which also covers transport, heating and industry. According to BP, renewables (i.e. solar, wind and other renewables) accounted for approx. 9.3% of global electricity generation in 2018 (25% if we add hydroelectric). Then, as per the World Bank (spreadsheets with underlying data from Renewable Energy), in 2016 all renewables accounted for approx. 11% of global energy generation (35% if we add hydroelectric). Then, as per the IEA, in 2018 renewables accounted for a measly 2% of total energy production (rising to 12% if we add biomass and waste, and to 15% if we add hydro).

2% looks tragic, 9.3% looks poor, 25% or 35% looks at least vaguely promising – but no matter which set of stats we choose, fossil fuels still account for the vast majority of global energy generation (and demand is constantly rising). Consequently, my anxiety remains well justified. It was the reason I went to the event in the first place – to find out what the future holds.

The panellists were:

  • Equinor, Head of Corporate Financing & Analysis, Anca Jalba
  • Glennmont Partners, Founding Partner, Scott Lawrence
  • Globeleq, Head of Renewables, Paolo de Michelis
  • Camco, Managing Director, Geoff Sinclair

The panellists made a wide range of observations, reflecting their diverse geographical focus and the nature of their companies. You will find a summary below, coupled with my personal observations and comments. I have intentionally anonymised the speakers’ comments.

One of the panellists remarked that in the last decade the cost of 1 MW of solar panels went from EUR 6–8m to EUR 3.5m to EUR 240k – a decline of over 95% – and at the same time ESG went from being a niche area in investment management to being very much at the core (I can echo the latter from my own observations). At the same time, according to research, in order to meet Paris Accord targets, by 2050 50% of global energy will need to come from renewables. So no matter which set of the abovementioned statistics we choose, we’re globally nowhere near 50%.

The above comments are probably fairly well known and sort of go without saying. However, the speakers made a whole lot of more targeted observations.

The concept of distributed renewables (individual households generating their own electricity, mostly using solar panels on their roofs, and feeding the surplus into the power grid) was mentioned. This is being encouraged by some governments, and the speakers noted that governments are the key players in reshaping the energy landscape. They were also quite candid about there being a lot of rent-seeking behaviour in the (established) energy sector (esp. utility companies). Given the size and influence of the utility sector, it is fairly understandable that they may have mixed feelings towards activities that may effectively undercut them. At the same time, one would hope that at least some of them see the changes coming, appreciate their necessity and inevitability, and respond by adapting rather than opposing. Interestingly, emerging markets, where energy infrastructure and power generation are not very reliable, were mentioned as an opportunity for off-grid renewables.

We were also reminded that electricity generation is just part of the energy mix. It’s a massive part, of course, but there is also automotive transport, aviation, and shipping – all of which consume vast amounts of energy, with very few low-carbon or no-carbon options. Electric vehicles are a promising start (not without their own issues though: cobalt mining), but aviation and shipping do not currently have viable non-fossil-fuel-based options (except perhaps biofuels, but I doubt there is enough arable land in the whole world to plant enough biofuel-generating crops to feed the demands of aviation and shipping).

The need for a (truly) global carbon tax was also raised. I think (using tax havens as a reference) it may be challenging to implement but, unlike corporate domicile and taxation, energy generation is generally local, so if governments taxed emissions physically produced by utility companies within their borders, that could be more feasible. Then again, it could be quite disruptive and thus politically challenging (think the fight around coal mining in the US, or the gilets jaunes in France).

On the technical side, intermittency risk is a big factor in renewables, and energy storage is not there yet on an industrial scale – which makes storage a huge investment opportunity.

In terms of new sources of renewable energy, floating offshore wind farms were mentioned as the potential next big thing, even though they are not yet commercially viable. My question about the panellists’ views on the feasibility of fusion power was met with scepticism.

In terms of investment opportunities, one of the speakers (prompted by my question) mentioned that climate change adaptation is also one. This echoes exactly what Mariana Mazzucato said at the British Library event some time ago (please see my post “Mariana Mazzucato: the value of everything” for reference), so there might be something there. More broadly, there seemed to be a consensus among the speakers that once subsidies disappear, only investors with large balance sheets and portfolios of projects will be in a position to compete, given the capital-intensive nature of energy infrastructure.

I ended by asking a question about the inevitability and scale of the impact of climate change on the world as we know it and on our lifestyles. I didn’t get a very concrete reply, other than that there *will be* an impact, and adaptation will be essential. It didn’t lift my spirits, but I don’t think I was expecting a different answer. In the end, it looks like renewables are currently more of an evolution than a revolution. Evolution is better than nothing; it might just not be enough.



Natural Language Generation (NLG) is coming to asset management

Sun 17-Mar-2019

Natural Language Processing (NLP) is a domain of artificial intelligence (AI) focused on, well, processing normal, everyday language (written or spoken). It is used by digital assistants such as Siri or Google Assistant, smart speakers such as Google Home or Amazon Echo, and countless chatbots and helplines all around the world (“in your own words, please state the reason for your call…”). The idea is to simplify and humanise human-computer interaction, making it more natural and free-flowing. It is also meant to generate substantial operational efficiencies for service providers, allowing their AIs to provide services that were previously either unavailable (a human-powered equivalent of Siri – not an option) or costly (human-powered chats and helplines).

Natural Language Generation (NLG) is an up-and-coming twin of NLP. Again, the name is rather self-explanatory – NLG is all about AI generating text indistinguishable from what could be written by a human author. It has been slowly (and somewhat discreetly) taking off in journalism for a couple of years now[1][2][3].

NLG is far less known and less deployed in financial services (and elsewhere), but given the potential for operational efficiencies (AI can instantly, and at close to zero cost, produce text which would otherwise take humans much more time – and non-negligible cost – to prepare) it makes an instant and strong business case. There are areas within asset management whose primary (if not sole) purpose is the preparation of standardised reports and summaries: attribution reports, performance reports, risk reports, or periodic fund / market updates. Some of these are so rote and rules-based that they make natural candidates for automation (attribution, performance, perhaps risk); see the sketch below. Fund updates and the like are much more open and free-flowing, but even they are rules- and template-driven.
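To make this concrete, here is a minimal, hypothetical sketch of what template-driven commentary generation might look like. All fund names, thresholds and figures are invented for illustration; real NLG systems (statistical or neural) are considerably more sophisticated:

```python
# A toy, hypothetical sketch of template-driven attribution commentary.
# All names, thresholds and figures are invented for illustration only.

def describe(bps: float) -> str:
    """Map a contribution in basis points to a natural-language phrase."""
    if bps >= 25:
        return "was a significant positive contributor"
    if bps > 0:
        return "contributed modestly"
    if bps > -25:
        return "detracted slightly"
    return "was a significant detractor"

def attribution_commentary(fund: str, period: str, contributions: dict) -> str:
    """Turn raw attribution figures into a short, human-readable paragraph."""
    # Order the sectors from biggest contributor to biggest detractor.
    sentences = [f"{sector} {describe(bps)} ({bps:+.0f} bps)"
                 for sector, bps in sorted(contributions.items(),
                                           key=lambda kv: -kv[1])]
    return (f"Over {period}, the {fund} fund's relative performance was "
            f"driven as follows: " + "; ".join(sentences) + ".")

print(attribution_commentary(
    "Global Equity", "Q1",
    {"Technology": 42.0, "Energy": -31.0, "Healthcare": 8.0}))
```

Even a toy like this hints at why the economics are attractive: once the templates and rules are in place, the marginal cost of each additional commentary is effectively zero.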

AI replacing humans is an obvious recipe for controversy, but perhaps it is not the right framing of the situation: rather than consider AI as a *replacement*, perhaps it would be much better for everyone to consider it a *complement* or even more simply: a tool. You will still need an analyst to review those attribution reports and check the figures, and you will still need an analyst to review those fund updates. And with the time saved on compiling the report, the analyst can move on to doing something more analytical, productive, and value-adding. At least that’s the idea (QuantumBlack, an analytics consultancy and part of McKinsey, calls this “augmented intelligence” and did some research in this field which they shared during a Royal Institution event in 2018. You can watch the recording of the entire event here – the key slide is at 16:44. There is some additional reading on Medium here and here).

Some early adoption stories are beginning to pop up in the media: SocGen and Schroders (who, with their start-up hub, are quite proactive in terms of staying close to the cutting edge of tech in investment management) are implementing systems for writing automated portfolio commentaries[4]. No doubt there will be more.

Disclaimer: this post was written by a human.


[1] https://www.fastcompany.com/40554112/this-news-site-claims-its-ai-writes-unbiased-articles

[2] https://www.wired.co.uk/article/reuters-artificial-intelligence-journalism-newsroom-ai-lynx-insight

[3] https://www.wired.com/2017/02/robots-wrote-this-story/

[4] https://www.finextra.com/pressarticle/75910/socgen-to-use-addventa-ai-for-portfolio-commentary