Deadline approaching: file your CPD

File your CPD compliance statement before February 28, 2025, to avoid a fine and ensure that your membership remains in good standing.

Who’s afraid of AI? Exploring actuarial advantages over machines

As we step into the new year, conversations about artificial intelligence, or AI, continue to captivate us, especially as its use spreads through tools such as ChatGPT and Copilot. Contributors to the CIA have previously covered AI from several angles (including principles and regulations for AI in general, impacts on actuarial work, and approaches to AI and AI training for actuaries), but I wanted to add a fresh perspective; namely, that of someone who is completely clueless about how all this works but has a few thoughts on whether the actuarial profession needs to be worried about it.

I should preface this article by saying that there is a lot happening in the AI world, not just in terms of development but also in terms of what constraints – ethical and otherwise – need to be in place. If you are interested in what’s happening in this area, I encourage you to check the above articles as well as some of the sources listed in this footnote[1] … because you’re not going to get any of those insights in this piece.

AI’s struggle for accuracy and context

My perspective here is rather pedestrian – how the average person may end up interacting with AI. Like many other users, I was initially intrigued by ChatGPT, which can instantly produce an essay or article on the topic of your choice. Having this sort of technology at your fingertips has the potential to save time and effort when you need to generate written content. So, of course, I tried to game it.

To do so, I reached into some of my obscure personal interests and asked ChatGPT to tell me about the Topps hockey cards issued for the 1964-65 NHL season (110 “tall boys” cards – if you’ve seen them, you know what I mean). It immediately produced a nicely written description of the set. It also included significant factual errors. ChatGPT seemed to think the cards had facsimile autographs (they didn’t), backs printed in red and blue ink (red and black, actually) and a panel to be scratched off with a coin – a feature that didn’t appear until the following NHL season. Believe it or not, though, this was an improvement – earlier in 2024, it thought the photos on the cards were in black and white (huh?) and that the set contained Bobby Orr’s rookie card (two years too early).

All right, you may ask, what’s the big deal? ChatGPT got a couple of facts wrong about an obscure sports collectible. The point I wanted to make, though, is this: how would it know what it did and didn’t get right? This ability to self-check for accuracy is an important process that we undertake as humans, but one that AI currently lacks. I know I always re-read my emails for errors before sending them, but evidently ChatGPT performs no such assessment. And how could it? The internet contains as much disinformation as real information, and it’s not clear how an algorithm can confirm what’s accurate and what isn’t.

As actuaries, we excel at combining data-driven insights with professional judgment, a skill that remains uniquely human.

The limits of AI’s creativity

The text generated by AI is, to be honest, pretty boring. It sounds like it was written by a machine, which makes sense because it was. As I read some of the output from ChatGPT, it reminded me of that one person we all know who can speak eloquently on a topic without saying anything substantive. Good form but lacking real insight.

But, you may say, that’s not a problem if we’re only asking ChatGPT to produce a couple of paragraphs of information. After all, we’re just looking for a few words, not something that sounds like it was written by William Faulkner. And that would be fine if we stopped there. But AI is creeping into other areas as well.

You may recall the 2023 release of a “new” Beatles song, “Now and Then,” constructed with the assistance of AI. One would think that such an event would create excitement among people like me whose musical tastes are solidly rooted in the late 60s and early 70s (and for good reason). But I was disappointed with the final product. It took me a few minutes to identify why, but eventually I realized it sounded like something that was recorded in 2023, absent the influences of the period that made pop music so creative and appealing.

In other words, it’s what the Beatles might sound like had the Beatles not existed in the first place. (Thank you, AI, for making me write such an indeterminate sentence.)

Ultimately, while AI can mimic structure and form, it still lacks the depth (and soul) that human creativity brings – something no machine can fully replicate.

Accountability and human judgment

You might dismiss the previous complaints as minor in nature; quibbles about a very impressive – still nascent – technology from someone who refuses to keep up with the times (and get off my lawn while you’re at it). But there are other considerations that should give us pause, a key one being accountability.

Relying on AI to do our work may save a great deal of labour, but what happens when it makes a mistake? Who takes responsibility for that? We all know how much we love (read: dislike) it when a new actuarial hire presents results that seem illogical and defends their work by saying, “That’s what the model gave me.” If you rely on AI to, for example, make an investment decision for you, and it turns out to be poor advice, then that’s pretty much the end of the conversation – there is no way to get an explanation, a justification or even an apology.

“But we can program AI so it doesn’t make mistakes!”

The problem with this statement is that it assumes there is a formulaic solution to every problem. As actuaries, we know better than that, and this is where AI meets the reality of our profession. What sets us apart is our ability to use professional judgment: to assimilate information from disparate sources and arrive at a best estimate. To do so, we combine technical ability with more intuitive skills. AI could be trained to do the former; I have doubts as to whether it can do the latter.

To look at it another way – if you are performing a task that follows a strict formula and process, then it can (and probably should) be replaced by an algorithm. But that’s not what actuaries do. We may find it frustrating that the range of actuarial practice is not narrower, but that is the nature of our business.

Artificial intelligence: A tool, not a replacement

If there is a common theme to all this, it’s that the human element is still essential. It’s what allows us as actuaries to provide unique insights: the ability to assess and react to different pieces of information and to offer creative recommendations and solutions. Yes, AI will probably replace more routine functions, but we’ve been there before.

Actuaries in the distant past used to spend hours manually creating tables of commutation values, a process that was eventually made obsolete by computer programs. Did that development put actuarial students out of work? Of course not – because what we do is a science. Science means there is an endless supply of things to be learned and mastered, and any tools we develop will simply help us get to the next area of investigation faster.
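(For readers who have never met a commutation table: it is essentially a discounted survivorship column. Here is a minimal sketch of the sort of calculation those students once did by hand – assuming a 5% interest rate and purely illustrative mortality rates, not any published table.)

```python
# A minimal sketch of the commutation tables actuaries once built by hand.
# Assumptions (illustrative only): 5% interest, made-up mortality rates,
# a table truncated at age 65, and a radix of 100,000 lives.

i = 0.05          # assumed annual interest rate
v = 1 / (1 + i)   # discount factor

ages = list(range(60, 66))
qx = [0.010, 0.011, 0.012, 0.014, 0.016, 1.0]  # hypothetical mortality rates

# Survivorship column l_x, starting from a radix of 100,000 lives.
lx = [100_000.0]
for q in qx[:-1]:
    lx.append(lx[-1] * (1 - q))

# Commutation columns: D_x = v^x * l_x, and N_x = sum of D_y for y >= x.
Dx = [v**x * l for x, l in zip(ages, lx)]
Nx = [sum(Dx[k:]) for k in range(len(Dx))]

for x, D, N in zip(ages, Dx, Nx):
    # N_x / D_x is the life-annuity-due factor, read straight off the table.
    # (Values here are understated because this toy table stops at age 65.)
    print(f"age {x}: D_x = {D:10.2f}  N_x = {N:10.2f}  N_x/D_x = {N / D:.3f}")
```

A few lines of code reproduce in milliseconds what once took hours of arithmetic – which is precisely the point: the tool absorbed the drudgery, not the judgment.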

We can welcome AI into the actuarial world, but like so many other things we work with, it will be a tool to help us, not a means to replace actuaries. I fully acknowledge that we are still in the early phases, but replicating human behaviour and decision-making will be a daunting challenge – especially given that we often don’t understand it ourselves, even at the best of times. And let’s not forget how important that human element is to what we do.

After all, I doubt that an article written by AI would manage to combine actuarial science, the Beatles and hockey cards. That’s something we should be thankful for.

This article reflects the opinion of the author and does not represent an official statement of the CIA.


[1]