
Artificial Intelligence: The Power and the Perils Part I – By Charles Schwab

The AI genie isn’t going back in the bottle, so how do individuals, companies, and governments harness its power while mitigating inherent risks?



MIKE TOWNSEND: At the beginning of 2023, like a lot of people, I had never heard of ChatGPT. The artificial intelligence-powered natural language processing tool, better known as a chatbot, was launched just last November. It allows people to have human-like conversations on just about any topic imaginable.

By January, it had 100 million users. Now it gets more than 1.5 billion visits per month.

ChatGPT can write essays and articles, give you instructions, edit your term paper, tell mostly bad jokes, craft a poem, provide advice, design a website, plan your weekly menu based on what’s in your refrigerator—and it can do it in any language, in any style that you can think of.

Search the internet today, and you can disappear down a rabbit hole of the crazy questions people have put into ChatGPT and the even funnier responses it has produced. My favorite is the user who asked ChatGPT to explain in the style of the King James Bible how to remove a peanut butter sandwich from a VCR. Look it up—it’s hilarious.

But there’s a dark side to ChatGPT, too. It can make mistakes but insist that it’s correct. It can spread misinformation. It can aid criminals in their scams. Some users worry that it could eventually make many human jobs obsolete or even wipe out humanity entirely.

ChatGPT may still be fairly new, but artificial intelligence has been around for decades. Yet interest in AI as a technology has exploded in the last few months. And investors have noticed that some companies that have bet on AI technologies have risen to the top of the markets.

But what is AI? How is it being used? How will it be used in the future? And how worried should we be?

Welcome to WashingtonWise, a podcast for investors from Charles Schwab. I’m your host, Mike Townsend, and on this show, our goal is to cut through the noise and confusion of the nation’s capital and help investors figure out what’s really worth paying attention to.

Today’s episode is the first of a two-part series focusing on artificial intelligence. In just a few minutes, I’m going to welcome Bashar Abouseido, the chief information security officer here at Charles Schwab, to walk us through the rapidly changing artificial intelligence landscape and discuss the power and perils of this technology.

In Part 2, which will be available in two weeks, we’ll explore the investing side of the equation, examining the opportunities and risks for investors looking to capitalize on AI’s momentum in the markets.

But first a quick update on some of the issues making headlines right now here in Washington.

Congress returned to the Capitol this week after the two-week July Fourth recess and will have a busy three weeks before breaking again for the annual August recess. Here are three things I’m watching during this Congressional work period.

First, the government funding process is picking up speed. Both the House and Senate are working on their versions of the 12 appropriations bills that fund every government agency and every federal program. Those bills need to be approved by the start of the government’s fiscal year on October 1, or there’s a risk of a government shutdown.

That risk is going up because the two chambers are starting this process in totally different places. As part of the debt ceiling deal that was reached last month, President Biden and House Speaker Kevin McCarthy agreed to freeze non-defense spending for next year at roughly 2023 levels. No increases due to inflation or other factors. But House Republicans, frustrated that the agreement did not produce a larger government spending reduction, are crafting bills that fund the government at 2022 levels, which represents about a $120 billion cut to discretionary spending.

A funding gap of that size sets up a standoff between the Republican-controlled House and the Democrat-controlled Senate. Ultimately, both chambers have to pass the same bills, so how these differences are going to get resolved―it’s anyone’s guess. The real drama will come in September, when the clock is really ticking toward a government shutdown. This is starting to feel a lot like the debt ceiling drama, where the two parties were stuck for months and only reached a resolution at the last possible moment. Look for both Republicans and Democrats to be jockeying for position this month in anticipation of a high-stakes showdown in September.

Second, this week the House Financial Services Committee launched what it is calling “ESG Month.” The committee will be examining environmental, social, and governance-focused investing with a series of six hearings. Toward the end of the month, the panel is expected to consider one or more bills related to ESG investing, though exactly what that legislation will look like is up in the air. One possibility is a bill that would require money managers to ensure that a client’s financial returns are always the priority over any non-monetary factors when making investment decisions.

ESG investing has become a political hot potato over the last few years, particularly at the state level. Republicans have said that the trend is helping investors and asset managers to push companies into political battles that may be at odds with the goal of securing the highest possible returns. Democrats say that ESG investing allows investors to make their own choices about whether they want to use their investments to send a message about their values and priorities.

The fight is also playing out in the regulatory arena. The SEC, for instance, has found itself at the center of a huge controversy since it proposed a rule last year that would require public companies to disclose more to investors about the risks climate change poses for them, as well as how they’re actually contributing to climate change. At the Department of Labor, there has been a long-running battle over whether companies should be permitted to offer ESG funds among the investing options for employees in a company’s 401(k) plan. A rule permitting these options is in effect as of January but is currently facing a court challenge.

Any bills that emerge from “ESG Month” in the House of Representatives are likely to face stiff opposition from the Democrat-controlled Senate. But it’s a symptom of a larger battle that I expect will remain front and center all the way through next year’s elections over whether ESG represents investor choice or the politicization of investing.

And the third thing I am watching right now is the likely confirmation of the president’s nominees to the Federal Reserve Board of Governors. Earlier this week, the Senate Banking Committee voted to approve the nomination of World Bank economist Adriana Kugler to fill the open seat on the seven-member board. Kugler would be the first Hispanic ever to serve as a Fed governor.

The committee also voted to advance the nomination of current governor Philip Jefferson, who was nominated to move up to the vice chair seat, and the nomination of current governor Lisa Cook to her own 14-year term. She’s been filling an unexpired term that is set to expire in January.

The final step is confirmation votes on the Senate floor for each of the three nominees, which may happen before Congress adjourns for the August break.

All three nominees appear to be on track for confirmation. I don’t expect any big changes in terms of the Fed’s monetary policy direction or its plans for toughening regulations for big banks, which is the Fed’s other priority right now. But the confirmations will ensure that the Fed has its full complement of seven governors as it heads toward tricky decisions about when to stop hiking interest rates and what other steps can be taken to keep inflation moving on its downward path.

On my Deeper Dive today, I want to focus on a topic that has become the center of an enormous debate, artificial intelligence, or AI. Interest in AI has exploded in 2023 with the launch of ChatGPT and other tools that allow ordinary people to explore AI in a way that perhaps they have never done before. AI is also one of the hottest investing topics, as investors are increasingly looking at companies that might be positioned to take advantage of changes that are coming as a result of AI. But there’s also plenty of concern about the risks with AI, such as if it’s used for nefarious purposes or the potential problems that could arise if this technology is allowed to develop without appropriate oversight.

I want to begin today’s discussion by getting a better understanding of what’s been happening in the artificial intelligence space. To help me sort through a lot of information, I’m pleased to welcome to the podcast Bashar Abouseido, managing director and chief information security officer here at Charles Schwab. Thanks for joining me, Bashar.

BASHAR ABOUSEIDO: Great to be here, Mike.

MIKE: Well, Bashar, with all the buzz that’s going on around about AI, it would be easy to think that it’s something completely new, but in truth, neither the term “artificial intelligence” nor the technology itself is new. But what is new is the wide range and sheer number of ideas that are being put forth on how to apply AI. I know you’ve been a keen observer of what’s happening in the AI space for a long time. So let’s just start with the most basic of questions. What is AI, and how is it currently being used that we may not have even noticed?

BASHAR: Let me start with what AI isn’t. AI is not magic, Mike. Basically, we leverage computers and machines to turn everything into mathematical formulas, run complex mathematical problems, in an attempt to predict the next number. So we use lots of math, science, and probability to try to predict that next outcome that’s going to happen from a math perspective. So AI, itself, as a science, is not new.
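[Editor's note: The "predict the next number" idea Bashar describes can be illustrated with a toy bigram model, which simply counts which word tends to follow which. This is a minimal sketch for intuition only; production systems like ChatGPT use large neural networks, and all the names here are hypothetical.]

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count how often each token follows each preceding token."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most probable next token after `token`, or None if unseen."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

# A tiny toy corpus: "the" is followed by "cat" twice and "mat" once,
# so the model predicts "cat" as the most likely next word.
corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat"
```

Everything beyond this counting trick, from chess engines to chatbots, is in some sense a vastly more sophisticated version of the same bet: use probability learned from data to guess what comes next.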

Let me give you some examples. We’ve seen that technology used in planes, as we see very large, complex planes travel fully automated, leveraging computers, leveraging and processing the data, and being able to basically safely take us from one place to another. We’ve seen it also leveraged in robots in manufacturing, so we can improve productivity and efficiency and allow humans to be leveraged for better things, instead of the standard manual labor that we had in the past. We’ve seen it even in movies and articles about Deep Blue and how Watson and the various computer models were able to actually work and play the chess game against the top chess players in the world. And it was fascinating to see that, actually, the computers were prevailing in many situations where we thought it was impossible, but it was mimicking the human behavior. So it’s been there for a while.

I think what’s fascinating, in my opinion, is the fact that we’ve seen some tangible and important advancements that have come to light in the last 10 to 20 years, making AI more accessible at the individual level. In the past, it was only accessible to large, commercial corporations that were able to afford getting access to that data. But now, with the variety of different advances that we’ve seen around natural language processing—being able to speak to it with a simplicity of just asking a question and getting very, very interesting, valuable, meaningful responses in a very timely fashion—those type of advancements have made this a lot more tangible to the regular person, for you and I. In addition to that, there are many factors involved, as well. We’re seeing lots of advances in technology from chip manufacturers doing things faster, better, more processing power, more accessibility, and commercially reasonable solutions in the backend from a computing perspective, and many areas where we’ve been also gaining quite a bit at collecting more information where we can make better decisions leveraging these capabilities.

Overall, we are in a very, very fascinating stage of getting access to that technology.

MIKE: Yeah, Bashar, I think it’s really interesting here in 2023, as I have traveled around the country talking to investors over the last six or eight months, this has risen to right at the top of the questions that I’m getting. And when I talk to people, it feels like there are kind of two main reactions. One, is people immediately think of the entertainment side, you know, the crazy things that people have asked ChatGPT, the funny responses that have come back. The other reaction seems very negative and very extreme. There’s a big contingent of people out there who are worried that AI is going to make their job obsolete or worse, that machines are going to rise up and wipe out humanity in, you know, an Arnold Schwarzenegger movie-type scenario.

What has not received very much attention and what I think people are still trying to get their arms around are the business applications that will offer real benefits to ordinary consumers. So as you watch where this is all headed, what do you see as some of the near-term applications that businesses are embracing, and what are those key benefits to consumers that we can look forward to?

BASHAR: There are areas where AI can influence innovation and accelerate knowledge across many organizations. In particular, the biggest and interesting part is what I call the blank page kind of challenge that we have. What’s interesting is if I can leverage AI by asking AI to help me with some ideas in particular spaces. For example, if I want to write a new email, if I want to start a new marketing campaign, to write a contract or even a software program, wouldn’t it be really nice to be able to ask AI about what’s possible here, where should I start, and get some ideas that will allow me to get going so I can get a head start, instead of starting from scratch?

But I think there are other ideas and areas where we can leverage AI to help us improve in the day-to-day and in terms of different areas of our economy and our society. I think, in particular, we can leverage it, just like we’ve seen the use of technology to allow us to gain significant and substantial productivity within the economy. The whole science of AI is about making machines think and process data like the human brain does. If we’re able to do that, to offload many of the repetitive lower-level things to machines to do that work for us, that will make us a lot more productive. It will save us a ton of time, and it will reserve that important human brain for bigger and more complex and innovative challenges.

The second part, in my opinion, is also we will address the talent and skill level challenge that we have in so many areas of the economy. For example, you require a significant level of skill and education to get into healthcare, to be able to respond to the demands within the healthcare system. If I’m able to allow machines to process the majority of the data and help me with some of the decisions, that means I can take more of the resources that don’t have those skills and allow them to leverage AI to produce the same outcomes. And so on and so forth. We can apply that to finance. We can apply that to manufacturing. We can apply it to customer service. If we look at the big picture, the possibilities and potentials are limitless in my opinion, but that comes with both cost benefits and some downside, as well.

MIKE: Bashar, “transformational” is a word often associated with AI. It’s kind of one of the buzzwords that you’re hearing right now. Are there certain parts of our economy, certain types of businesses, where you think AI will really be transformational?

BASHAR: Yes, I believe that’s the case, Mike. AI can be transformational. I’ll give you a few examples. In customer service, AI can aid human customer service agents to support and provide helpful resources or solutions to customer questions and do that a lot faster. AI can supplement high call volumes, so more customers can get that answer that they’re looking for a lot faster. It can be leveraged in music and creative arts. For example, AI can create or suggest a music or art design to appeal to our preferences. A new Marvel streaming series can use an AI-built animation for its opening credit sequence. AI can write a song preferred by humans, for example. We can leverage it in healthcare, travel, and culture. Imagine being able to ask your AI assistant not just to translate languages, but to help you converse in real time, by voice, with anybody in any country.

Build a full and complete travel plan for you, including active assistance, based on the length of the vacation and what country you want to visit, and what preferences you have and interests you have.

Personal assistant for making appointments for you, to call your dentist and be able to kind of negotiate a time and date to do the services that you want.

You can have many of these interactions that we have and be able to be a lot more productive, versus the fairly robotic, inefficient way we get today from customer service, or anytime we deal with voice, it just feels unsatisfying, unproductive, it’s not what we want to be.

We can also apply that to financial planning, for example. Your AI assistant will be able to look at your portfolio, look for opportunities based on your age, your goals, where you want to be, and identify those opportunities within the context of the marketplace to say, “Hey, listen, I think you should look at these possibilities of maybe changing your structure of portfolio, some opportunities and ideas for you to discuss in the next meeting that you have with your financial advisor.” And now, instead of going with a blank kind of set of objectives to your advisor, you can have specific questions and ideas you want to discuss so you can gain more productivity for the time that you use with your financial advisor or wealth advisor.

Again, the ideas are limitless, the possibilities are great, and that’s why I think this is going to be very transformational.

MIKE: Well, let’s talk a little bit about the downside, because I certainly think there’s a lot of attention being paid right now to the potential danger points of AI, and that’s where you start to get people with this reaction of AI is going to take over. So what about this sort of keeps you awake at night?

BASHAR: What I worry about is that this is getting so accessible that we’re just sometimes so fascinated by what the upside of this type of technology is, and we don’t realize that there are some significant downsides, as well. For example, we need to understand exactly what type of data are being kind of fed to the models and to the data science behind AI and machine learning, so we understand how the computer or AI was able to generate some of the recommendation. So we need a little bit more of that transparency. We also know that models and computers basically get trained by humans, and eventually what you’ll see is that they will produce some biases or occasionally come back with the wrong answer. How do we account and be prepared to understand when what the computer is producing is a good outcome or is a questionable outcome? And are we careful enough in terms of the decisions we base on the recommendations and predictions generated by AI, are we able to understand whether there is a bias associated with that or not? Are we aware of the type of risks that we’re dealing with? Because even in perfect conditions in applications of AI, like we’ve had with planes or manufacturing or autopilots and cars, we still see accidents. But the frequency of those occurrences can be highly mitigated by continuing to train the computer and putting controls around the usage of what type of decisions and how do we override these models.

So humans, ultimately, have the accountability for what decisions are being made, and we’re very well aware, from a risk appetite perspective, how much risk we’re willing to take versus the productivity that we gain. Those are all fascinating points of view that we need to consider and look at and evaluate as a society as we continue to leverage that science and continue to try to get more of the upside and minimize the downside of that science.

MIKE: Bashar, one of the aspects of, I think, what you’re talking about that has already started to play out is that it’s becoming more and more difficult to tell what content, for instance, is AI-generated versus what’s human-generated. And we’ve seen this play out in, you know, written articles on the internet. We’ve seen it play out on music where there was an AI-generated song that purportedly was by Drake and the Weeknd and fooled everybody until they said, ‘We didn’t record this song.’ So what kind of dangers do you see there? And how are we going to address the question of being able to determine for ourselves what’s AI-generated and what’s not? And I think that goes to what you mentioned about knowing whether something is coming up with the correct answer.

BASHAR: That’s a great question, Mike, because I think, as we continue to see the benefits of AI and realize the benefits of AI, we’re also going to see the lines blurring between content created and decisions made by humans versus by machines. That’s why I said it’s important that we have better visibility across the board. If you look at the various studies that were made in the last 10 or 20 years, computers with AI are able to mimic human behavior in a way that, at some point, is hard to differentiate. There was a study, I believe, where they created a piece of music with a computer and another that was composed by a human, gave both to a random sampling of people, and asked which one was machine-created versus human-created. Most of the people surveyed picked the computer-generated piece as the human version. They did the same thing with an article written by a human versus one written by a machine, and people consistently picked the machine-written article as the one written by a human.

So, yes, it is very interesting. That’s why I continue to focus on the need to provide oversight to understand the differences between what’s public, like ChatGPT, where everybody is having access to it. It gets access to a vast amount of data that is determined by the creators of ChatGPT, versus commercial private use of AI, where we control the amount of data and the type of data that gets fed to the model, and we stand behind the accuracy and the decisions or recommendations that are created by that model that is supervised, supported, and trained by that corporation.

So there are different sets of use cases, and we have to differentiate that not all AI models are going to search the internet and process everything. There are many cases where we can take a limited set that is only relevant for a use case that adds value to our customer, and that’s the limitation. And we should have more trust in the outcome with that because it gets tested a lot more frequently, a lot more thoroughly, versus the public ChatGPT-like model, which is just offering you, basically, assistance and a variety of different wide range of topics.

MIKE: So I think, then, the next logical follow on is government regulation. And here in Washington, you know, we’re already seeing Congress start to think seriously about how to put some guardrails around the development of AI technology, at least the AI technology that we’re talking about that has a broad accessibility to the public. But we also know from experience that Congress and regulators tend to be way, way behind in developing those parameters and those regulations. You know, we’ve seen this play out right now in cryptocurrency, where Congress is still struggling to put any real parameters around that. So how important do you think it is to have government guardrails in place before this technology gets developed any further? And, you know, at the explosive pace that AI appears to be moving, do you think policymakers can even get ahead of any of these problems?

BASHAR: I don’t know if they can get ahead of these problems, Mike, but I think some type of guardrails are necessary, as well. I think it’s important, but let’s remember, we’re still in the early innings of learning about the science and the development of the science, and how we apply that science to various sectors of the economy, as well. So if we over rotate on the regulatory front, we may inhibit the upside of that technology, and we need to be careful.

But I think it’s appropriate to also recognize that there are certain downsides and certain risks associated with the science that we need to be careful of. But I would definitely think of ideas to establish appropriate governance and oversight and registration and allow for more transparency on how the technology is being used and what’s being fed into that technology.

MIKE: What about international cooperation? I was interested to learn just recently that there are more ChatGPT users outside of the United States than there are in the United States. So that would speak to perhaps some kind of, you know, sort of global framework which, again, we know is really, really hard. How important do you think that is?

BASHAR: I think it’s very important. This is something that everybody on this planet earth is going to be using and leveraging. Just like we’ve seen with, again, technology, the internet, what you’re going to see with AI is something similar. That level of supplementing talent and skill and productivity is going to be applied across the board—everybody wants to leverage this. And it’s appropriate to have that level of coordination and collaboration around how do we want to get the best out of this without certainly impacting society in a negative way.

But it’s also interesting to know that if we’re not careful in how we collaborate on this, we’re not going to be able to stop certain countries from advancing their own usage of it and their own kind of agenda that are built on leveraging that science for competitive reasons.

Now, sometimes also that has an upside and a downside to it. So it’s important, just like anything else that impacts all aspects of society and humanity, we need to have a discussion on how do we appropriately use it to allow us to get the upside and work together to manage the downside. And I think there are many elements of that downside that we need to think of and be prepared for, and possible impact that we need to consider as we continue to advance in our usage of that technology.

MIKE: Bashar, this is a fascinating conversation. I think I could talk about this all day, but I’ll wrap up with this. As you look at your role as a chief information security officer, what are the main risks and challenges that you and your peers, both inside the financial services world and at other types of businesses, are facing in order to kind of balance improving the customer experience, which I think everybody agrees is a potential big benefit of AI, with protecting customers? How do you strike that balance, and what are you focused on today to try to make that happen?

BASHAR: The key, in my opinion, is to make sure that we understand how the science works and the type of applications and use cases we have that we can get value from. But we have to also realize that the bad guys, let me just say for example in cybersecurity, the concern I have is the bad guys will also get access to the same level of capabilities. I’ll just give you a quick nugget of information here. When we have a digital process and we put it on the internet, it used to take anywhere between four weeks and six weeks for the bad guys to map the digital process, to understand the technology we use, and to find weaknesses in trying to exploit that level of vulnerability or weakness they identified. Now, with technology and all the tooling and capabilities and public cloud, it takes them less than five minutes to recognize, map the process, and exploit the process. So they’re leveraging AI and machine learning just like we do. And it will help them also be more efficient, and it will help them accelerate the bad things that they want to do to get around the controls and processes and the good mission that we have.

We have to always be aware that computers will have biases. We have to be aware that computers will have to have some transparency and guidance from us. So the question here is how do we get the best out of that technology…by putting appropriate governance and be able to be fully transparent in reporting what goes to the machine versus where the human is taking the accountability for the decisions that we make. And we have to differentiate between public AI capabilities and private AI capabilities, and how those are being controlled to allow us to achieve the positive outcomes that we want from that technology.

MIKE: Well, Bashar, thanks so much for your time today. Really interesting conversation, and I appreciate you sharing your perspective.

BASHAR: Thank you, Mike.

MIKE: That’s Bashar Abouseido, managing director and chief information security officer here at Charles Schwab.

Well that’s all for this week’s episode of WashingtonWise. We’ll be back with Part 2 of our two-part special in two weeks. Take a moment now to follow the show in your listening app so you don’t miss an episode. And if you like what you’ve heard, leave us a rating or a review—that really helps new listeners discover the show.

For important disclosures, see the show notes or schwab.com/washingtonwise, where you can also find a transcript.

I’m Mike Townsend, and this has been WashingtonWise, a podcast for investors. Wherever you are, stay safe, stay healthy and keep investing wisely.
