Investing and the future of responsible AI

Thomas Mucha, Geopolitical Strategist
Caroline Conway, ESG Analyst
2024-02-22T12:00:00-05:00  | S3:E2  | 27:16

The views expressed are those of the speaker(s) and are subject to change. Other teams may hold different views and make different investment decisions. For professional/institutional investors only. Your capital may be at risk.

Episode notes

As investors, policymakers, and companies race to capture the opportunities associated with artificial intelligence, they must first understand myriad ethical risks. ESG analyst Caroline Conway joins host Thomas Mucha to explore this important issue.

2:06 Professional background
4:05 Primary risks and resources for AI companies
8:00 Will AI regulation stifle innovation?
9:45 Intersection of AI and national security
14:00 How AI will impact financial services
15:58 Other industries under AI scrutiny
17:57 How Wellington helps companies navigate AI
20:20 Overarching concerns and hopes for AI

Transcript

CONWAY: This year what we’ve really focused in on is the kind of oversight side of things, so as these new tools are coming online how is the board assessing their potential? How is the board thinking about the risk side of things of course, but also how are they thinking about integration with their kind of innovation pathway, with productivity and the potential to reduce costs? How is the board connected to management, and how is management connected to the regular employees on this topic? That’s really been kind of the core of what we’re trying to figure out.


MUCHA:    If you enter “What is responsible AI?” into ChatGPT, ChatGPT will tell you that responsible AI encompasses a set of principles, practices, and guidelines aimed at ensuring that AI technologies are developed and used in a manner that aligns with human values, respects fundamental rights, and minimizes potential negative impacts. But: principles, practices, and guidelines set by whom? And values, fundamental rights, and negative impacts defined as what, exactly? Now, the ethics of AI not only raises thorny philosophical questions, it also carries significant social and financial costs. Governments, companies, and investors are today racing to understand those costs and capture the opportunities while managing the risks associated with machine learning. Now joining me today to talk about the intersection of responsible AI and investing is Caroline Conway, an ESG research analyst here at Wellington and an expert on AI, machine learning, and data analytics. Caroline, welcome to WellSaid.

CONWAY:   Thank you.

MUCHA:    So this is a massive topic. We could devote several episodes to this. So, I want to home in on what investors are most focused on today. But before I do that, Caroline, what was your path to becoming, or continuing to become, an expert on these complex topics?

CONWAY:   Sure, so it’s been a very interesting journey for me over my career. I’ve been in finance for about six years now, a little bit on the sell side before I came to Wellington. But before that I was on the corporate side for a very long time. Much of my experience was at Walmart, where I started out on the energy team working on climate-related topics, so working on renewable energy and efficiency, and working with a lot of international teams to help them get their goals in order. And while I was on that team, Walmart was actually starting its first data and analytics operation. I happened to meet the person who was starting that, it seemed very interesting, and that ended up taking me on quite a journey to really build out what was kind of the first version of machine learning and neural network model development at Walmart. That is kind of the predecessor to what we’re seeing today with artificial intelligence.

MUCHA:    So you got the data bug early.

CONWAY:    I did, yes, and one of the really interesting things about it was we saw opportunities pretty much immediately to improve productivity, to enhance decision making, using all of this new data that was coming in. So we ended up building tools for workforce management, for pricing, for merchandising, ended up doing supply chain work, e-commerce work. So it ended up being something that touched literally every area of the organization even at that early stage.

MUCHA:    So how did you get from Walmart to Wellington?

CONWAY:   My manager from Walmart eventually moved on from the company and actually went to the sell side, and after a couple of years he said, “Why don’t you come join me here?” And for me, there were actually a lot of parallels between the work that I’d done in the past, both in data and analytics and strategy, and the kind of analysis side of finance. So I thought it was a really interesting new journey to try out, and I made the leap. And I learned that I really love research, and so I’ve kept it going.

MUCHA:    Now in your mind, what are the primary ESG business risks that AI companies face today? It’s a long list, I know, but how do we think about this?

CONWAY:   Yes, so this is an interesting question, because for me it touches on pretty much every element of ESG that I’ve already been covering, and that my counterparts have been covering, from the beginning. So, governance: it’s an entirely new area of governance for many companies. You have to think about how you’re going to approach these new tools from both an opportunity perspective and a risk perspective. On the opportunity side, how are you going to integrate this with your innovation processes? How are you going to actually apply it in a way that’s productive? And on the risk side, how are you going to manage all of these other ESG risks that are inevitably going to come up? Among those other risks, I would say many fall within what I would call the social bucket: all of the issues that we already see with data privacy, cybersecurity, and IP protection, and all of the issues with potential social impacts from the company to their customers and to the public at large. All of those things become much more complicated as you see these new tools being adopted.

MUCHA:    Now that’s a long list of very knotty, very complex issues, issues that not a lot of companies have much experience dealing with. So what resources are available, or will be available, to help companies, and investors for that matter, manage these risks?

CONWAY:   What’s great is that there has been a lot of work on responsible AI already. It’s something we were talking about with companies even before the most recent generative-AI developments came about, at least since I joined and even before that. And there have been efforts going on, I would say since around 2016 or 2017, to really codify what responsible AI means, to really try to get that into frameworks and into standards. And so you do have a very structured way to think about these risks today that companies can pretty easily adopt. The resources that I would reference most specifically today are the NIST standards, which are already partly developed and are going through new iterations too.

MUCHA:    What does that mean?

CONWAY:   This is the National Institute of Standards and Technology, at the US government level. And from the US regulatory perspective, that’s really going to be one of the key bodies that decides how all of this is managed. And it’s the combination of NIST and the ISO standards: they both come up with standards around both governance in general for AI and how to manage these very specific risks. So I would definitely recommend that folks take a look at both of those resources.

MUCHA:    Well, how are the tech companies responding to these emerging frameworks? They arguably have more to gain or lose from responsible development of AI than anyone else, so what’s their perspective?

CONWAY:   Right, it’s been very interesting this past year to see all of the responses and how quickly the tech companies got on board with the idea that this does need to be regulated in some form. They are all very well aware of the risks as you develop these very complex tools. They already have responsible AI teams in-house. They’ve spent many years developing their own internal standards. And so for them to be commercially successful, I think there was a pretty quick recognition that this all needs to be managed correctly. And so we saw most of the big tech companies get on board very quickly with engaging with regulators in the US, engaging with global regulators, and trying to educate and then also make sure that they understood where those regulations were going.

MUCHA:    Is some of this PR?

CONWAY:   That’s a good question. I think that there was an impression, maybe early last year that a lot of it was PR. But as we’ve seen it develop over the course of the year, I think there’s real substance to it. Just as an example, I was on a call just this week with NIST. They’re starting to really build out the details of their cybersecurity and privacy standards for this new phase of AI. And every major company was there giving their perspective on it. And it was really interesting for me to see the depth at which they have thought about this and how they’re taking it seriously. So I think there’s more substance to it than hype. 

MUCHA:    So Caroline, a common argument here is that regulation of technology stifles innovation, in fact I hear that a lot when I speak with lawmakers on Capitol Hill on both sides of the aisle. So is anyone today arguing that there should be zero AI regulation?

CONWAY:   I’m sure that you can find somebody who will say that, but the conversation that I see happening now is more about getting the regulation right than it is about having no regulation versus some regulation. So there’s a lot of discussion about making sure that it’s flexible enough that innovation can continue, but that it’s also controlling for these risks, and the best way to control for those risks is to have a strong risk management framework so you’re ready for anything that comes up. And so that’s really the direction that I see things going in the US. There are other aspects of regulation that are a little bit more hard line, so certain use cases where there are going to be a lot more constraints. But at the core, I think the technology providers and the governments are pretty aligned on how to approach that topic. The other thing that I would just mention there is that, no matter what, every country has a tension between this desire for innovation and this desire to kind of control the technology. And it’s also been really interesting over the past year to see how different the country approaches are. So you have ones like South Korea, Singapore, and India that are very much focused on how to make the most of the innovation side of this, how to even turn it into an exportable industry. And I think the question of when you put regulations on, and what those regulations look like, is quite different from Europe, where they’ve been much more hard line about putting firm restrictions on the technology from the get-go.

MUCHA:    So Caroline, you mentioned hard lines, you mentioned constraints. You mentioned differences among countries and national objectives. You’re veering dangerously close to my area of expertise which is geopolitics. Among world leaders, allies, US allies, US adversaries alike, there seems to be an agreement that AI does have the potential at least to destabilize societies, to upend governments. Given these shared concerns, what do you anticipate in terms of global baseline accords on AI policy? Is it even possible?

CONWAY:   It’s a great question whether it is possible. I don’t think there’s any guarantee that it will happen. But it has also been very interesting to see how quickly governments have agreed that there needs to be some kind of alignment, and that has ranged from forming a specific international body to regulate everybody, to leaving it to the regions and to the local countries but making sure that there’s some kind of interoperability between regulations. So I think the desire is very much there. The question will definitely be whether that comes to fruition in the way that we hope it will. I think there will be a major tension there, too, between your area, the kind of defense and national security aspect of things, and the kind of innovation side of things and the protection of one’s own citizens.

MUCHA:    Well, let me keep my geopolitical strategist hat on for a moment and dig into this a little bit more. I would just highlight that AI is central to almost everything that I do in my conversations, particularly with national security policymakers in DC, but also globally, given that the technology has the potential to reorder global economic power, military power, and intelligence gathering, as well as to enhance disinformation campaigns, power weapons systems, etc., etc. China of course is at the top of that list of concerns, and given these accelerating great-power dynamics, much of this, as you say, flows directly to national security. So as an investor, Caroline, and as you think about the future of this, how do you personally try to balance these national security imperatives around AI, which frankly have been racing ahead for years, with the business and economic, to say nothing of the societal, impacts of this quickly emerging technology? How do you split that?

CONWAY:   One thing that all of the countries involved have in common is that they want to maintain their own political orders, and so there is a lot of commonality in the desire to prevent disinformation. There are actually, I think, emerging tools that are going to solve for at least a good portion of that. That is a major concern that we saw come up straightaway from every country when generative AI began developing. 

MUCHA:    Everybody knows they have weaknesses in this area.

CONWAY:   Exactly. Right. Exactly. And if you had these new tools kind of freely available, the capability to develop misinformation is definitely exponentially higher than it was a couple of years ago. So it’s a genuine concern, and I think that commonality, along with the commonality of wanting to maintain economic stability, is where the countries are already collaborating. And so it will be very interesting, I think, to see over the next couple of years how the dynamic between the US and China, and the dynamics between other countries, possibly move on two parallel paths at the same time: you will have a certain degree of coordination, and then you’ll have these pockets where I think the competition is going to outweigh that. We’ve already seen that, as you’ve talked about, in semiconductors. We’ve seen it in other specific strategic industries. I think we will see it in AI. But it will be going in parallel to this effort to coordinate.

MUCHA:    Yeah, that’s my view as well. I think it’s going to be very hard to separate the national security piece from the innovation side of it, and we’ll have two worlds. That’s not that dissimilar to how we’ve lived with and thought about technologies in general for decades, but this is such a fast-moving, complex, dynamic industry. I’ll be watching to see if governments can keep up with it.

CONWAY:   That’s true and there’s plenty of other innovations I think on the horizon and so I think it’s a good thing that at least today we have that desire to collaborate, because there are these future iterations that I think get us closer and closer to something that’s autonomous.

MUCHA:    All right, well, let’s delve a bit into some other sectors, because I know frameworks and policies will vary by industry. And let’s start with our own: investing and financial services. So what are some of the rules of the road that you expect to appear around banking, investment, asset management, etc.?

CONWAY:   I would maybe take a step back to what’s really at the root of all of the concerns around AI. And that’s the fact that, when you really boil it down, it’s probabilistic modeling, so this is not perfect information. You can get it to 90 percent, as a lot of the initial tools did, and that’s a very impressive number in terms of accuracy for such a complex system. But is that good enough for every application? In a lot of cases you need to get it to 95 percent. In some cases you need to get it to 99 percent. And in some cases it might not be good enough even then. So I think that’s the limiting factor for all sectors. And when you look at financials in particular, you have a lot of potential high-risk applications if you just start applying the system and you don’t have any human oversight. And so I think the regulators of the financial industry have recognized that pretty quickly. The SEC was one of the first to come out with rules to ensure that broker-dealers are not using this technology willy-nilly. And we’re seeing pretty much every other regulatory body kind of come together and say we really need to have a strong regulatory framework around this topic.

MUCHA:    Just for financial stability reasons.

CONWAY:   Exactly. Financial stability and also fairness. So what’s interesting within that world is the credit bureaus, the companies that already use data to evaluate creditworthiness for regular folks. They’re already highly scrutinized on this topic, to ensure that there aren’t inadvertent denials and that there isn’t bias showing up in their decision making. And so that, I think, is another major element as we look at asset managers: this kind of fairness in the outcomes of the AI tools.

MUCHA:    What are some of the other industries that are getting the attention of regulators given the sensitivity to financial stability, fairness, and other factors?

CONWAY:   Finance is definitely one of the biggest ones. Anything where there’s a high-risk application is getting scrutiny right now. So, health care definitely and that’s where we’re seeing, again, the need to avoid bias but also the need to avoid kind of bad decision making in drug discovery. We need to avoid bad decisions in insurance. All of those things are definitely getting scrutiny already. There are other specific applications like energy infrastructure, industrial applications, where a failure can cause a major economic issue.

MUCHA:    So critical infrastructure.

CONWAY:   Critical infrastructure. And that’s one of the areas where the EU has already said this is going to be a restricted area. And then the other one that I was thinking of was the kind of education and employment space. So there’s already been a lot of history with employment analytics to manage the risk of bias, and that’s another one where I think we’ll see a lot more.

MUCHA:    So they’ve got their hands full.

CONWAY:   Yes. And what’s interesting is the executive order from the White House this past year, when you look at the assignments made to agencies it’s really all the way across the board. Even agencies like the housing agencies, Department of Education, Department of Labor, everybody has an assignment.

MUCHA:    Do you think that a potential change of administration or differences between the parties, is that going to matter to the regulatory environment?

CONWAY:   It’s something that I think about. What’s also been good to see this past year is that the interest in this has been very bipartisan, and so there’s at least agreement across the board that there needs to be some kind of control around the high-risk applications and some kind of standardization. And even though we haven’t quite gotten to privacy regulation in the US right now, I think this has brought that back onto the table. So there are a lot of commonalities. I think if we do see an administration change, or we see a change in Congress next year, certain topics and certain focal areas will definitely change. But I don’t expect to see a full reversal of what’s happened so far.

MUCHA:    All right, Caroline, I’ve heard you say that responsible AI comes up on every call with companies these days.

CONWAY:   Yes.

MUCHA:    So how do you as an expert on AI and ESG contribute to company engagements with investors around the firm?

CONWAY:   When we started talking about this a couple years ago it was really very basic so, it was essentially do you have a responsible AI program, that’s great, and move on to the next topic. But this year what we’ve really focused in on is the kind of oversight side of things, so as these new tools are coming online how is the board assessing their potential? How is the board thinking about the risk side of things of course, but also how are they thinking about integration with their kind of innovation pathway, with productivity and the potential to reduce costs? How is the board connected to management, and how is management connected to the regular employees on this topic? That’s really been kind of the core of what we’re trying to figure out.

MUCHA:    So helping them ask the right questions of companies basically.

CONWAY:   Exactly. And the other thing that’s kind of come out of that, that’s been really interesting even within tech itself, is that there are quite a few divergences between companies in terms of how much they’ve thought about this. And if you look at other sectors, I think we’ll see the same thing. So there are companies in consumer and finance, etc., that have already been doing this for years, and they have everything set up to be able to make the most of this new phase. There are other companies that are not there yet.

MUCHA:    How much of a differentiation point do you think that might be in capturing future earnings or growth potential of a company? Are the ones who get this right likely to jump-start or increase their leads?

CONWAY:   To me it’s going to be a huge difference. Huge difference. And it’s going to be in a number of different respects. So we have some of the kind of clear-cut areas: if you haven’t thought about how this impacts things from a cybersecurity and privacy perspective, clearly there’s a fallout, right? The other area that I’ve been thinking a lot about is human capital management, and this is coming back to my experience at Walmart. When you roll these tools out, you do want to make sure that you have a deep understanding of how your organization already works. You need to know where the expertise is sitting and what processes are happening. If you don’t have that visibility and you start rolling these tools out, it’s not necessarily going to make you more productive; it might even make you less productive. And so that, I think, is going to be a huge differentiation, where we’ll see a lot of things that maybe were not as apparent become apparent, because some companies have thought about this much more than others.

MUCHA:    Yeah, that’s an interesting investment point: it’s going to be a differentiator in the future. So, I have a bunch of things that I’m worried about with AI. I’m sure they’re different from what you worry about.

CONWAY:   Sure.

MUCHA:    Killer robots and Skynet and those sort of --

CONWAY:   Well, I’m worried about that too. Hopefully not in this lifetime but, yes.

MUCHA:    But, you know, from your perspective what does keep you up at night about the AI risks?

CONWAY:   I think it’s a combination of the potential misuse by people who understand this technology really well and the potential misuse by people who don’t understand it. And, really for me it all comes back to how we decide to use it. It’s not the technology itself, it’s how humans decide to put it to use. And so a lot of my concerns are shorter-term so, definitely concerned about how far disinformation is going to go before there’s controls put on it. Definitely worried about cybersecurity risks, which we are already seeing increase because people are using these new tools to be very creative. If I go to kind of the misuse side it’s really deploying these tools too quickly and not necessarily understanding the impact that they’re going to have on businesses, on people and society, and so on.

MUCHA:    Well, 2024 is a huge year for elections.

CONWAY:   Yes.

MUCHA:    Around the world. Not only in the United States but all over the world. Do you think that it’s inevitable that we get some sort of disruptions in a lot of these elections?

CONWAY:   It’s definitely something that I am watching, and we’ve already been talking with certain companies about exactly that question. And the interesting thing is, when we look back, there was already a disinformation issue in previous elections. The studies that have come out on that have shown that the effect has been pretty limited, but the question is whether there’s a potential tipping point in the effect it has in upcoming elections. The one thing that makes me feel a little bit more assured is that the companies we talk to about this are very well aware of the issue. They have been working for years to build up the teams that do content moderation and manage this issue. They’ve also been building up their internal AI tools for detection, and I think especially over the last couple of years, that’s reached a new level where those tools are much more effective than they were. The other thing is that they’ve also built a lot more governance around this topic, so we’ve had very robust discussions about human rights programs and how these firms are prioritizing very complex issues. And it’s, again, miles ahead of where they were even five years ago. And then the last thing that gives me a little bit more assurance is the regulatory focus on this, so there are some pretty clear-cut tools to label content and to require takedown of content. I think all of those things will come together to at least manage it to a degree. But this will definitely be a year to watch, and, I think, to do some analysis of how effective the tools are.

MUCHA:    You tried very hard there to make me feel better. It didn’t work. But we’ll see. We’ll see. Let’s hope.

CONWAY:   We’ll see. We’ll see what happens.

MUCHA:    All right, well, let’s end this fascinating conversation, Caroline, with what are some of the things that give you the most hope about this technology. Let’s set the risks aside. Let’s set the scary stuff aside. What can we look for here in a better future?

CONWAY:   Yeah, definitely. Well, I kind of go back to what got me excited about data in the first place, and it’s the potential to make decisions in a new way that you couldn’t before. And so I think there are a lot of areas that are going to benefit from that, and areas where society more broadly is going to benefit from that too. So, a couple of interesting areas. First, climate analysis: we’re already seeing that get rolled out. We’re seeing better detection of weather patterns, so you can actually predict disasters before they happen, and you can save a lot of lives that way. And then on the climate modeling side of things, I think we’ll get to a new level of accuracy that’s more convincing, that feeds into other models directly and helps with better decision making. Another area that I think is really interesting is within health care, where there’s the potential for poverty reduction. You have a lot of capability within the health care industry already; if you make that a lot more cost-effective, you can start to roll it out to a much wider range of people. And I think we’ll see some really interesting things come out there. And then the other thing that I’m interested in is just, generally, this potential for ideation. We’ve seen it already in simulation software. We’ve seen it in a few other areas where the ability to just iterate a whole bunch of different outcomes, different ideas, helps you make a better decision down the road. And I think we’ll see that show up in a lot of different applications.

MUCHA:    I suppose every technology in human history has had positive and negative impacts, and --

CONWAY:   That’s right.

MUCHA:    Why would AI be any different?

CONWAY:   That’s right and I think it really is all about how we decide to use it collectively. And, again the thing that gives me some hope around that is the fact that there has been all this coordination so far.

MUCHA:    All right, let’s end on that positive note. Thanks again for joining us on WellSaid. Once again Caroline Conway, an ESG research analyst here at Wellington and our expert on AI, machine learning, and data analytics.

CONWAY:   Thank you, Thomas.

----------

Views expressed are those of the speaker(s) and are subject to change. Other teams may hold different views and make different investment decisions. For professional/institutional investors only. Your capital may be at risk. Podcast produced February 2024.

Wellington Management Company LLP (WMC) is an independently owned investment adviser registered with the US Securities and Exchange Commission (SEC). WMC is also registered with the US Commodity Futures Trading Commission (CFTC) as a commodity trading advisor (CTA) and serves as a CTA to certain clients including commodity pools operated by registered commodity pool operators. WMC provides commodity trading advice to all other clients in reliance on exemptions from CTA registration. WMC, along with its affiliates (collectively, Wellington Management), provides investment management and investment advisory services to institutions around the world. Located in Boston, Massachusetts, Wellington Management also has offices in Chicago, Illinois; Radnor, Pennsylvania; San Francisco, California; Frankfurt; Hong Kong; London; Luxembourg; Milan; Shanghai; Singapore; Sydney; Tokyo; Toronto; and Zurich.

This material is prepared for, and authorized for internal use by, designated institutional and professional investors and their consultants or for such other use as may be authorized by Wellington Management. This material and/or its contents are current at the time of writing and may not be reproduced or distributed in whole or in part, for any purpose, without the express written consent of Wellington Management. This material is not intended to constitute investment advice or an offer to sell, or the solicitation of an offer to purchase shares or other securities. Investors should always obtain and read an up-to-date investment services description or prospectus before deciding whether to appoint an investment manager or to invest in a fund. Any views expressed herein are those of the author(s), are based on available information, and are subject to change without notice. Individual portfolio management teams may hold different views and may make different investment decisions for different clients.
In Canada, this material is provided by Wellington Management Canada ULC, a British Columbia unlimited liability company registered in the provinces of Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Nova Scotia, Ontario, Prince Edward Island, Quebec, and Saskatchewan in the categories of Portfolio Manager and Exempt Market Dealer.

In Europe (excluding the United Kingdom and Switzerland), this material is provided by Wellington Management Europe GmbH (WME) which is authorized and regulated by the German Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht – BaFin). This material may only be used in countries where WME is duly authorized to operate and is only directed at eligible counterparties or professional clients as defined under the German Securities Trading Act. This material does not constitute investment advice, a solicitation to invest in financial instruments or information recommending or suggesting an investment strategy within the meaning of Section 85 of the German Securities Trading Act (Wertpapierhandelsgesetz).

In the United Kingdom, this material is provided by Wellington Management International Limited (WMIL), a firm authorized and regulated by the Financial Conduct Authority (FCA) in the UK (Reference number: 208573). This material is directed only at eligible counterparties or professional clients as defined under the rules of the FCA.

In Switzerland, this material is provided by Wellington Management Switzerland GmbH, a firm registered at the commercial register of the canton of Zurich with number CH-020.4.050.857-7. This material is directed only at Qualified Investors as defined in the Swiss Collective Investment Schemes Act and its implementing ordinance.

In Hong Kong, this material is provided to you by Wellington Management Hong Kong Limited (WM Hong Kong), a corporation licensed by the Securities and Futures Commission to conduct Type 1 (dealing in securities), Type 2 (dealing in futures contracts), Type 4 (advising on securities), and Type 9 (asset management) regulated activities, on the basis that you are a Professional Investor as defined in the Securities and Futures Ordinance.
By accepting this material you acknowledge and agree that this material is provided for your use only and that you will not distribute or otherwise make this material available to any person. Wellington Investment Management (Shanghai) Limited is a wholly-owned entity and subsidiary of WM Hong Kong.

In Singapore, this material is provided for your use only by Wellington Management Singapore Pte Ltd (WM Singapore) (Registration Number 201415544E). WM Singapore is regulated by the Monetary Authority of Singapore under a Capital Markets Services Licence to conduct fund management activities and is an exempt financial adviser. By accepting this material you represent that you are a non-retail investor and that you will not copy, distribute or otherwise make this material available to any person.

In Australia, Wellington Management Australia Pty Ltd (WM Australia) (ABN 19 167 091 090) has authorized the issue of this material for use solely by wholesale clients (as defined in the Corporations Act 2001). By accepting this material, you acknowledge and agree that this material is provided for your use only and that you will not distribute or otherwise make this material available to any person. Wellington Management Company LLP is exempt from the requirement to hold an Australian financial services licence (AFSL) under the Corporations Act 2001 in respect of financial services provided to wholesale clients in Australia, subject to certain conditions. Financial services provided by Wellington Management Company LLP are regulated by the SEC under the laws and regulatory requirements of the United States, which are different from the laws applying in Australia.

In Japan, Wellington Management Japan Pte Ltd (WM Japan) (Registration Number 199504987R) has been registered as a Financial Instruments Firm with registered number: Director General of Kanto Local Finance Bureau (Kin-Sho) Number 428. WM Japan is a member of the Japan Investment Advisers Association (JIAA), the Investment Trusts Association, Japan (ITA) and the Type II Financial Instruments Firms Association (T2FIFA).
WMIL, WM Hong Kong, WM Japan, and WM Singapore are also registered as investment advisers with the SEC; however, they will comply with the substantive provisions of the US Investment Advisers Act only with respect to their US clients.

©2024 Wellington Management Company LLP. All rights reserved.