Is cyborg professional work any closer to reality?
Time will tell whether 2023 will be most remembered as the year that publicly available Artificial Intelligence (AI) capability came into common view, in the same way that the Netscape debut of 1995 opened up the world wide web to mere mortals. Our view is that we have just finished the first chapter of this story and that the subsequent chapters are likely to bring substantial change in the way professional and financial service organisations deliver their value to clients. No spoilers so far, and no surprise that our aim is to stay very close to the leadership teams of professional and financial firms and support those businesses through the next round of change.
The recent LinkedIn Future of Work Report: AI at Work highlights a 70% increase in AI-related conversations over the past year, particularly in professional services. Thomson Reuters’ paper on The Future of Professionals goes into more depth and suggests that two-thirds of professionals believe AI will have a transformational or high impact on their professions in the next five years. The optimism extends to AI creating new career paths and improving productivity, internal efficiency, and client services. Overall, AI is not only a technological advancement for professional and financial services firms but also a strategic imperative. It’s enabling these firms to redefine their service models, enhance efficiency, and create new opportunities for innovation and client engagement. As the technology continues to evolve, it will likely become an even more integral part of the professional services landscape.
One year on
The launch of ChatGPT by its developers, OpenAI, on 30th November 2022 was widely regarded in the world of AI as a break from the herd and led to the fastest-adopted software release to date. Where the true balance between technology hype and reality lies is uncertain. The AI doom-mongering has probably peaked, but we can’t yet be sure, and perhaps a little fear is helpful to grab our attention. We have also witnessed the recent boardroom drama at OpenAI, which resolved itself within days but hardly inspires confidence that the sector is in safe hands.
Back to the wow-factor. As most of us know by now, AI-powered tools or products can outperform humans in even the most sophisticated of thinking games such as Go and Chess. They can produce impressive attempts at business writing for marketing and sales; conduct and compile research from internet and uploaded sources; write fiction; compose music; create visual artworks and videos; generate and fix computer code; and produce student essays. The applications of this technology are too numerous to cover comprehensively here but open up the possibility of huge advances in medical and other sciences, including those focused on sustainability.
In many fields AI is likely to play a key role in problem-solving scenarios over the coming years. In the nearer term, it seems that the combination of AI and human intelligence working together will continue to outperform either humans or AI working independently, hence our opening cyborg quip. More negatively, AI advances also open up the possibility of miscommunication and manipulation on a grand scale, which will take fake news in the public and political domain to another level.
In business, the use of AI is currently targeted at cost reduction, generating new offerings, and speeding up operations. Each week we see many new business application products incorporating capability derived from Large Language Model (LLM) AI, and the total number of such products now runs into the hundreds. The risk of inaccuracy and bias in today’s AI-generated outputs remains significant, though, and is a reason not to rely on these tools 100%. But it’s worth remembering that we are at the dawn of this type of technology, and the landscape for its use will continue to unfold in line with continued investment, scaling, and next-generation releases.
Same but different
We have had AI of some sort in our working and personal lives for some time, but the use of a Generative Pre-trained Transformer (that’s the GPT part) by the general public in a chatbot format was a leap forward and allowed us all to catch up with what had been going on behind the scenes. The release of ChatGPT (v4.0 with add-ons at the time of this article) also opened the eyes of many to a new vocabulary including LLMs, natural language processing (NLP), neural networks, backpropagation, and reinforcement learning, as well as raising awareness of the socioeconomic and political implications of using AI technology in the near future. The progression from what most term narrow AI, ie what we have now, to a position where the technology behaves more intelligently than humans in all realms (Artificial General Intelligence or AGI) is regarded by many experts in the field as merely a matter of time and money – the latter being many billions of dollars. Those working in AI have already witnessed emergent properties and capabilities that exceed the expectations of those involved in programming and teaching these models to learn by themselves.
It’s worth focusing on the here and now, though, and remembering that large language, or more correctly large sequence, models work on a probabilistic exchange whereby the user requires the model to predict the next best-fit answer to a prompt. That prompt could be a question or instruction, very often provided in text form. The answer could be text, image, audio, video, music, computer code or other output modalities. The key here is that the models have been trained on huge volumes of data, not to regurgitate content but to learn to assemble or generate output in a way that matches what their trainers favour most, consuming huge levels of computing resource in the process. The output is probably less biased than the worst content the internet can provide, but it’s far from perfect, and the probability of random utterances that are inadvisable to use with clients, lack situational context, or are plain wrong remains high. The term used in AI parlance for these utterances is hallucinations. For many reasons, including lack of regulation, variable content reliability, risk of misinformation, and lack of provenance on content, generative AI tools remain banned in many countries and highly restricted within many organisations.
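For readers who would like to see that mechanism in miniature, the sketch below is a deliberately toy illustration in Python of the probabilistic “predict the next best fit” step described above. The word table, probabilities, and the predict_next function are all invented for illustration; no real language model works from a hand-written table like this, but the sampling step is why fluent output is not the same thing as correct output.

```python
import random

# Toy illustration only: a real LLM scores tens of thousands of possible next
# tokens using billions of learned parameters. Here we hard-code a tiny,
# made-up probability table to show the principle.
NEXT_TOKEN_PROBS = {
    ("the", "contract"): {"was": 0.45, "is": 0.30, "shall": 0.20, "banana": 0.05},
    ("contract", "was"): {"signed": 0.50, "terminated": 0.30, "void": 0.20},
}

def predict_next(context, temperature=1.0):
    """Pick the next word by sampling from the model's probability estimates.

    Higher temperature flattens the distribution (more 'creative', more risk
    of an odd or wrong choice); lower temperature favours the likeliest word.
    """
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-2:]), {})
    if not probs:
        return None
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# Each step is a probabilistic guess, which is why the same prompt can yield
# different answers and why a plausible answer is not guaranteed to be right.
sequence = ["the", "contract"]
for _ in range(2):
    word = predict_next(sequence)
    if word is None:
        break
    sequence.append(word)
print(" ".join(sequence))
```

Running the toy sketch a few times will usually print “the contract was signed” but will occasionally pick a less likely, or plainly wrong, continuation: a small-scale analogue of the hallucinations discussed above.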
AI is an eventual game changer for our clients
Initially, we hesitated to comment and reflect on an area where our insights are largely confined to the strategic, leadership, and talent aspects of likely change from increased AI use but, when we compared notes across our team, it became obvious that our clients are interested in our view and would like us to share the thoughts and considerations of others in their market. We have been keenly watching the developments in this field during the year and taking stock of how our clients feel about the advent and progression of this technology, both for professional use and in terms of probable impacts on society, economies, and democracy more generally. Those who know our team well will know that we don’t prioritise deep technology expertise in-house, but we do know a thing or two about how professional service firms and financial institutions operate and adapt to technology-led change. Our people are, first and foremost, experts in the behaviour and development of professional people and the teams and organisations they choose to work with.
It’s probably no surprise that the larger firms in the worlds of Law, Accounting, Consulting, and Finance have made considerable investment in AI technology, and we have witnessed the power of this first hand. A&O’s use of the AI platform Harvey, which is built upon OpenAI’s models, made headlines earlier in the year, but they are not alone. Other law firms have overcome any reticence to embrace leading-edge technology and, for many years, many have been using AI-enabled e-discovery tools to enhance and speed up analysis tasks involving large volumes of data, particularly to support legal proceedings. These tools also cover the early ground of due diligence work and automate document production. It’s also no secret that the Big 4 firms have been integrating AI into work processes for many years and see this as a way to augment the way their people approach client work. Each of the Big 4 firms has also made it clear that the further adoption of AI is strategically central to its future business model and has committed eye-watering sums by way of investment and partnership spend to stay with or ahead of the pack.
We support and advise across the many strategic and operational challenges faced by our clients, and the lenses through which we view professional life are, unsurprisingly, those of Leadership, Performance, Talent, Clients, and Coaching. Unashamedly, we will use these lenses to reflect on what we see and hear of the AI story so far in the land of professional and financial businesses, after we briefly take a look at what the AI-enabled future holds in store more generally.
Looking ahead
First things first, AI capability will continue to improve, probably exponentially rather than linearly, and its use will challenge the way business is done across many sectors, especially in knowledge working, with heightened impact over the next two years. So, it’s here to stay, and it’s best that we all adapt to that fact and embrace it rather than avoid it. There is little debate that AI tools can enable professionals to do more things right, many things faster, and even do new things. The question is the pace at which AI will cease to be a sideshow tool and become more of a main stage act. This is where the room divides, and not just in professional circles. In broader society there are those who hold the view that the AI genie is so far out of the bottle that it could easily disrupt society at many levels – particularly through the generation of misinformation and the elimination of jobs – and eventually play a larger part than intended in the way information technology works on a global scale. On the other side of the room are those who view current AI capability as no more than a clever illusion – a stochastic parrot (to quote Bender, Gebru, McMillan-Major, and Mitchell, 2021) that generates convincing language and other media through probability but is devoid of understanding, full context, and reason. Both views have merit, but the truth lies somewhere in the middle.
The supervised involvement of AI simplifies and speeds up many professional and administrative tasks, and its use will only become more pervasive in general applications, enabling professionals to become seemingly more productive. With increased use comes risk, however, and it is the balance of risk and opportunity that we are most concerned with when it comes to helping our clients steer the right course for their businesses. We have already seen that generative AI models can make a credible attempt to pass themselves off as capable legal students (recently passing the Uniform Bar Exam) and compose acceptable essays in record time across many other subjects (with the proviso that AI watermarking is on the rise to serve a similar purpose to existing anti-plagiarism software). The full impact of that capability for professional education is yet to be assessed, but it raises many questions as to what legal and professional competence means in the longer term. Thankfully, the firms that currently deploy AI recognise its current limitations and largely confine its use to backroom automation, which in turn frees up talent to focus on more judgmental and less analytical processes, but the field is changing rapidly. There is a twin promise and peril in the use of AI for professional work, and some early evidence that over-reliance on this iteration of AI can lead to unexpected falsehoods (the fictitious legal cases cited by Steven Schwartz of Levidow, Levidow & Oberman being the most high-profile example). There are also security and confidentiality issues associated with providing information to AI tools in order to receive a helpful response. Many organisations, quite rightly, restrict the use of such models for commercially sensitive matters for fear of giving valuable or confidential content away, and there are many legal rumblings about the data used to train the models in the first place. Increasingly these models will come in the form of private or enterprise versions that keep confidential data in-house, but there have been, and will continue to be, unintended data leaks in this arena.
Leading the way
Many of our clients report that AI is a Board-level topic of conversation and that they are investing in the background education and briefing required at Board level to stay in tune with developments and gradually engage the wider partnership on matters AI. For some, that discussion results in a wait-and-see position, with some investment to explore new ways of working and new policies to minimise the risks involved with both the use of AI tools and the possible erosion of business from competitors. For others, however, this has become a more permanent feature of the Board agenda, with the intention that the organisation will continue to prioritise human-only services but increasingly build AI solutions into the service model so that its teams are augmented with the capabilities that AI can provide in terms of doing things right, doing things differently, and doing different things. For the few, AI is already seen as a strategic game changer and a means to reimagine and revolutionise business models and activities. All three stances are valid provided the Board remains true to the firm’s ambitions, purpose, and declared strategy and has the right Information Technology or Digital Solutions capability for its current and immediate future needs.
Of the many ways that AI can provide value to leadership teams, an obvious starting place is improved decision-making, with AI doing more of the analytical work as a prelude to the critical analysis conducted by Boards and those who support them. Strategically, there is also a responsibility for top teams to consider all aspects of how AI will affect their business, their people, and their clients. The advent of these tools presents opportunities for growth and challenges for risk mitigation, including reputation management and cyber security, for all firms, irrespective of whether their services are highly bespoke or more standardised. What is clear is that leadership teams will need to have an evolving viewpoint on the impact of AI on their business and to provide reassurance to both clients and their people that they are considering the opportunities and risks. In the risk department, most firms will naturally focus on their immediate competition and what they need to do in order to keep pace with or outpace their peers. This makes sense, but leaders of professional businesses should also be aware of the big data assets they hold that can now be more easily mined using AI tools, which in turn makes their businesses more attractive to external investors.
Performance implications
The obvious call upon AI in most of our client markets is to automate repetitive or data-intensive tasks with adequate supervision so that each individual’s overall productivity improves, but it’s far from the only use case. Legal matter management has evolved in many ways over the past 20 years, and the increased use of automation is one aspect of disaggregating legal projects into their more complex and simpler elements so that the right resources are used to get the job done most efficiently. Some of that disaggregation involves the near-complete automation of legal work such as contract reviews, due diligence preliminary work, and other process-based activities, plus quality assurance steps when used judiciously. We are also seeing clients using AI to work with their dashboard or KPI systems to spot areas for improvement or change, in both financial and non-financial measures, by crunching data in new ways. Most commonly, though, AI tools are being used to produce initial business report drafts or copywriting for marketing and business development purposes; assist with data analytics to derive insights, identify patterns, and support decision-making; automate some processes including data extraction and processing; act as chatbots or virtual assistants for handling customer queries and improving user experience; help with spotting fraud and anomalies; and speed up recruitment processes and employee performance analysis.
All about the people
Those working in our Talent practice are noticing a gradual shift in the types of conversations we’re having about AI. Top of the agenda is undoubtedly the shifting nature of work involving the use of AI tools. For most top firms, noticeable job erosion is still some time away, but the use of AI requires new skills and approaches to both personal and team management that few firms have in place, though many are working on them through recruitment and learning. AI tools are increasingly used to screen for best-fit candidates when hiring and to act as a resource to individualise learning. Some AI tools are making in-roads into traditional learning and development arenas. One of the top tips we have heard from HR teams is to encourage people to play with the technology privately as a step towards understanding its implications for their business roles and to promote engagement with the topic. In the longer term, it’s inevitable that the growing use of AI will lead to new job roles and opportunities for those who adapt well to using AI alongside traditional methods to complete their work.
What is as yet unclear, though firms will need to keep a close eye on developments, is how the composition of fee earners by level of experience and type of skill will need to change in order to remain competitive where the use of AI has an eroding effect on headcount. Depending on the profession and market segment within each profession, HR teams are used to the shape of resourcing that suits the gearing or leverage needs of their firm. For some it’s a triangle or pyramid from senior to junior, for others more of a diamond, but the trend will most probably reduce the need for less experienced professionals in the mix and increase the need for those with a hybrid capability of technical and technological skills.
In the more immediate term, professionals who can deftly blend the moderated output of generative AI tools with the product of their own intellectual labour, and those who can help firms adapt the business model to emerging needs, will be in highest demand – a challenge for both recruitment and retention. Likewise, for most firms there will need to be a wave of change in the capability of business services teams, through recruitment, reshaping, and additional learning and development.
Through the client’s eyes
Our focus is predictably on how we can support our clients to better serve their own clients by improving their business practices, teamwork, and relevance to the markets they serve. One aspect of this is to get involved with initiatives to ensure our professional and financial clients are sufficiently forward thinking in how AI will affect their own clients, and how they can use AI-powered tools to improve or enhance client service. Many of the firms we talk to are aware that their clients have already embraced the possibilities that AI can open for future business models and operations and are keen to have advisors who have already considered the consequences for their businesses.
At this stage, few would leap into the world of a fully automated legal service, but inevitably that option will creep into segments of all professional work, and the better-prepared firms will have anticipated how they will adapt to client demands for ultra-low-cost, real-time simple advice that does not always require the human touch.
Coaching
Talking of humans, we are somewhat relieved to know that none of our clients have plans to adopt generative AI as a substitute for professional coaching, but there are developments in adjacent fields worth noting. The first is the use of AI-powered apps to keep a check on how an individual is progressing with a learning or development pathway, very often with inbuilt feedback loops and the use of virtual tutors. This monitoring technology is commonplace in other markets and can serve a useful purpose for those who favour notifications and prompts to keep their life in order. There is also an initiative headed by Google to bring AI capability into the world of personal coaching, founded on similar advances in the world of virtual counselling, but most acknowledge this is more fraught with risk than benefit at this stage.
Wrapping up
There is no doubt that AI has changed the way people think about and use technology. For some it has opened up a world of possibilities with acknowledged risk. For others it remains to be seen whether there is more hype than substance to the projections of how AI will change human society. Our focus is on how the knowledge-working businesses in professional sectors and financial services can distinguish fact from fiction and prepare their organisations, their people, and their clients for the journey ahead. What’s certain is that many existing business practices will change, some job types will be at risk of at least partial substitution, as has always been the case with technological advancements, and the capability of AI will continue to improve. The next iteration of ChatGPT, 5.0, from market leader OpenAI is probably a year or so away, and the launch of rival Google’s Gemini product has been pushed into 2024. The issue of AI safety is also gaining momentum, as evidenced by the UK Government’s recently hosted AI Safety Summit, which should build on the work conducted by the UN, Organisation for Economic Co-operation and Development (OECD), Global Partnership on Artificial Intelligence (GPAI), Council of Europe, G7, and G20.
For the professions and financial institutions we work with, there are already known risks and opportunities to come from AI, and there remain uncertainties as to how quickly such tools will reliably transform business practices in ways that are competitively and strategically significant. Some larger firms have already bet part of the firm on the AI-enabled future, whereas many are taking a more cautious approach and adopting in a more piecemeal fashion. For film buffs, Skynet and Judgment Day may be a way off yet, but best we stay alert just in case.
If you would like to talk through any of the issues raised in this article with the PSFI team, just drop us a line.
PS – No AIs were put to work in the creation of this article.