Leaders Shaping the Digital Landscape
May 30, 2023

AI-Driven Future


Join us on a captivating journey through the realm of AI as host Tullio Siragusa engages in yet another riveting conversation. This time, he will be exploring the depths of knowledge with Lisa (Polvi) Thee, Managing Director of Data and AI at Launch Consulting Group.

Together, they will unravel the intricacies of trust, safety, and the triumphs that await businesses in the world of artificial intelligence we find ourselves immersed in.

Mark your calendars!

#ai #artificialintelligence #aicontent #technologytrends #business #success

Transcript

Tullio Siragusa (00:13):

Good day everyone. Welcome back to Tech Leaders Unplugged. This is Tullio Siragusa. Today I am getting unplugged with Lisa Thee who is the Managing Director of Data and AI at Launch Consulting Group. Hi, Lisa, welcome to the show.

Lisa Thee (00:30):

Thank you so much. It's so nice to be here.

Tullio Siragusa (00:32):

Good to have you. The topic today is an AI-Driven Future, and we're going to talk about trust, safety, and business success. Before we dig in and see what we can learn here, and I'm looking forward to this conversation, by the way, let's get to know you a little bit. How did you get here? Tell us about your journey.

Lisa Thee (00:51):

Yeah, so I spent 18 years in corporate America. My last couple of roles in a multinational tech company were as the AI for Good Leader for Intel Corporation, and I exited and retired as the Director of Hybrid Cloud for Enterprise and Government. So it gave me a really broad perspective on what's happening in technology. This is dating back to about the 2015 timeframe, so it was very clear that AI was going to be very relevant, but it wasn't as clear as it is today how it was going to touch our daily lives. From there, I really wanted to focus more on the impact I was having on social justice issues. So I exited corporate America and ran my own AI startup called Minor Guard, focused on making children safer online and in real life, which was inspired by some of the wonderful collaboration work done between Intel and the National Center for Missing and Exploited Children and Thorn. From there I exited that company into a larger player in the market, Bark Technologies, and I have been doing keynote speaking, writing, and management consulting since that time.

Tullio Siragusa (02:00):

Thanks. So we're talking about something that I think is top of mind for a lot of people as it relates to trust, safety, and business success when it comes to leveraging AI. And I know one of the biggest questions or challenges, one that maybe we don't talk about enough but perhaps should, is the role of ethics when it comes to AI. Can you give us some thoughts in terms of what you're seeing there in the marketplace? What's trending in terms of concerns, and in helping organizations build more trust as they adopt AI?

Lisa Thee (02:41):

Absolutely. So I think depending on the industry that you're in, your threat vectors may vary pretty widely. If you're in the banking industry, for example, fraud detection is probably top of mind, and things like understanding link analysis for criminal activity are probably something you're doing because your data is regulated with know-your-customer risks. You also see in healthcare the privacy requirements under HIPAA protection; there's a lot of data regulation there. So I tend to play in the spaces where the regulation is high, the data is very sensitive, and a breach would be really damaging. The other place you see a lot is in national security types of applications, which is where I started my career.

Working with industries, they typically tend to follow the legislation. When there are regulations that are measurable, with fines and impacts, that tends to be where people start to make the investments. Just like we saw with the cybersecurity industry, there's a maturation of the field that I've been observing over the past decade: going from "what do we have to find so that we can remove it?" to now getting a bit more proactive, thinking of this from a brand safety perspective, wanting to protect the brand that you've built over many, many years. You don't want someone with a nefarious interest to destroy your brand within 30 seconds on open platforms with third-party content. All the way to really thinking through how we can leverage data in a zero-trust manner to improve things like patient outcomes in healthcare.

Some of those applications in healthcare that can be really important are things like balancing out gender and diversity information in patient recommendation AI systems. For example, if you don't train on data from women and ethnic minorities, you're probably going to get results that don't have good prediction outcomes for those patients. So making sure that we build things in a regulated way is incredibly important from the ethics perspective, but also that we are leveraging enough information that everybody is reflected in it.

Some examples that are close to my heart, where I started this AI journey, coming from being more of an engineer and a sales and marketing kind of person for the majority of my career, were really around human trafficking prevention. It was around how we look at training models to better recognize when currently missing children are being sold online for illegal activities with human trafficking, so that we can recover them more quickly and make sure that law enforcement has the best tools possible to protect our most marginalized populations. You wouldn't think of that as a tech problem, but it actually is a really great example of one. When we started that process, the facial recognition algorithms just didn't perform very well for the use case, and the reason for that is that the majority of the labeled information they had been trained on was middle-aged white men, which doesn't translate very well, in terms of accuracy of facial recognition matching, to primarily diverse teenage girls.

So we were able to modernize that stack to do some nearest-neighbor searches to get it significantly closer and allow the subject matter experts, the detectives, to look at the top 10 likely matches for a child, for example, versus scrolling through 50,000 images hoping to stumble on it. So that's an example of how trust and safety can be applied in more of a government application.
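To make the nearest-neighbor approach concrete, here is a minimal sketch of how a detective-facing tool might rank a large gallery of face embeddings against a query photo and surface only the top 10 candidates for human review. The 512-dimensional embeddings and the random stand-in data are assumptions for illustration, not the actual system described above.

```python
import numpy as np

def top_k_matches(query_embedding: np.ndarray,
                  gallery: np.ndarray,
                  k: int = 10) -> np.ndarray:
    """Return indices of the k gallery embeddings most similar to the query.

    Embeddings are L2-normalized so the dot product equals cosine similarity.
    """
    query = query_embedding / np.linalg.norm(query_embedding)
    gallery_norm = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    similarities = gallery_norm @ query
    # argsort ascending, take the last k, reverse for best-first ordering
    return np.argsort(similarities)[-k:][::-1]

# Hypothetical usage: a 50,000-image gallery reduced to a 10-item review queue.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(50_000, 512))   # stand-in for real face embeddings
query = rng.normal(size=512)               # stand-in for the missing child's photo
print(top_k_matches(query, gallery, k=10))
```

The design point is that the model never makes the final call; it just shrinks 50,000 images to a short, reviewable queue for the subject matter expert.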

Tullio Siragusa (06:27):

So when it comes to businesses adopting AI, specifically those who work and deal with consumers, what are some of the key steps they ought to be looking at to make sure those AI initiatives have the right ethics in place, that they don't have bias or missing data that's important for them to be equitable for all their consumers? Any key steps you would recommend in terms of how to do an audit? What are the steps to figure out if they're on par or if they're off course?

Lisa Thee (07:04):

Fair enough. So I think first and foremost, when you're trying to embrace ethical AI, you need to get really clear on the level of transparency you need to have in order to build trust with your end users and your employees. You also need a lens on fairness: making sure that you are not taking patterns with known biases, in things like hiring or recommendation engines trained on human biases, and propagating them forward. I think we've all seen the examples from books like Weapons of Math Destruction, where algorithms can be developed in one area and then applied to things like lending or hiring, where the recommendation engine won't recommend anyone who is a woman, or won't recommend anybody who's a minority, because it has been trained on what has been successful in the past.

So I think at the end of the day, it's really going to come down to an accountability step. You can't achieve what you don't measure, and if you're not looking for these kinds of missteps or inherent biases, introduced not by the machines but by the humans who fed the machines the labeled data, you can start to find yourself in some unexpected-consequence zones.

From an AI lifecycle perspective, there's a maturity curve to it. At Launch Consulting, we offer a digital safety assessment that allows people to understand at least where they are today, because it's really hard to envision where you want to be when you don't even know where you are. So we try to help everybody locate where they are today with the data that they have, whether that be third-party data posted on their platforms or internal data as well. And then we also help them create a journey map to where they would like to be in terms of de-risking their platform and ensuring they have that customer loyalty, because they really stepped forward with trust and safety as a brand. I think this new generation of buyers and this new generation of leaders is really looking for more ethics in the companies they work with, and just getting by doing the minimum is no longer going to be the competitive advantage you need to thrive in this new environment.
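One way to act on "you can't achieve what you don't measure" is to compute a simple fairness metric over a model's outputs. Below is a minimal sketch of the disparate-impact ratio; the column names, the toy data, and the 0.8 threshold (the informal "four-fifths rule" used in US hiring contexts) are illustrative assumptions, not a complete audit methodology.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's positive-outcome rate to the highest's.

    A value near 1.0 means similar rates across groups; values below ~0.8
    are a common (informal) red flag worth investigating.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical hiring-recommendation data.
df = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "F"],
    "recommended": [0, 1, 1, 1, 0, 0, 1, 1],
})
ratio = disparate_impact(df, "gender", "recommended")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("flag: outcome rates differ sharply across groups; audit the training data")
```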

Tullio Siragusa (09:15):

Lisa, how many layers are there to it? We just touched on a few, right? You talked about gender and race, but how many layers are there to it in terms of making sure that the ethics around AI are very inclusive? I would have to think there are also different belief systems and age groups and origins, etcetera. How do you kind of tackle this? It seems like a pretty big challenge, and maybe some companies assume, well, it's the system, it doesn't have any bias, it's a computer, right? But it's working off historical data that might have already had bias built into it, as you mentioned. So, is there some kind of guide being built, either from the legislature's point of view or by a consortium of companies that are thinking this through a little more deeply? Does that need to happen, and what should that look like?

Lisa Thee (10:11):

Yeah, so I advise Spectrum Labs, which is a trust-and-safety-focused company in this domain space that typically services markets like gaming and dating, the places where you might have that kind of content. With that said, I think a lot of this is going to be a collaboration with experts in order to be truly successful and thriving. Criminals tend not to pick a single platform and just do one bad thing on it. There's a lot of opportunity for learning among the different players in the market, and I've seen a lot of collaborations in this space where big companies that you would think are competing will actually go behind closed doors and collaborate to make sure that people aren't taking advantage of the weaknesses of multiple platforms. But in collaboration with the experts, you really need to start thinking well beyond just typical data science, which is already kind of a hybrid of industrial engineering, statistics, and computer programming. It really moves into the realms of things like philosophy and governance and legal; it really is a team sport.

So I think the place you probably want to start for the best outcomes is to invest in robust data security where cybersecurity will really matter: the personally identifiable information of your clients and the intellectual property of your business. For example, Spectrum Labs has a software-as-a-service solution you can partner with to make sure that you're not accidentally leaking that kind of information, or that your agents aren't sharing it more broadly.

I think the next thing that's really important is having a really clear, transparent process on trust and privacy. Being really clear about the data you're collecting, why you're collecting it, and what you will use it for is another way you can start to mature along that curve of the ethics of AI, because once the AI engines are trained and built up, you don't really need the personally identifiable information anymore to predict what people will do. I think we're all kind of learning that in real time with some of the larger companies and how they have moved forward. If you want to learn more about that topic, I really like the Center for Humane Technology. They have a great documentary that came out a few years ago called The Social Dilemma, and I think it talks a lot about some of those topic areas you mentioned, in terms of radicalization and hate speech.

This industry is very broad, as broad as cybersecurity can be, and it has a very evolving regulatory landscape. So I think it's really important to leverage external resources to collaborate with your internal teams, because it's going to be a constantly evolving playing field: as fast as bad actors evolve how they attack systems, trust and safety teams need to evolve to determine that somebody has violated a terms-of-service policy and that content needs to be removed.
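As a concrete illustration of guarding personally identifiable information before it leaks out of a platform, here is a minimal sketch of a pattern-based PII screen. The regexes cover only a few obvious formats and are assumptions for illustration; this is not how Spectrum Labs' product actually works, and production systems layer ML classifiers and validation rules on top of patterns like these.

```python
import re

# Illustrative patterns only; real leak-prevention combines many detectors.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> tuple[str, list[str]]:
    """Replace matched PII with placeholders and report which types were found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            found.append(label)
            text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text, found

clean, types = redact_pii("Reach me at jane.doe@example.com or 555-867-5309.")
print(clean)   # placeholders instead of the raw values
print(types)   # ['email', 'phone']
```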

Tullio Siragusa (13:24):

I'm curious. Like most technology, over time tools have been developed to test, to see what's broken or what's not processing correctly, etcetera. Is there anything being done around this, where someone can actually do an assessment of the data and identify any potential biases, or just missing information, or any red flags, if you will, that could help an organization figure out what's broken? Because I'm guessing some organizations just don't even know. How do you identify the red flags?

Lisa Thee (14:07):

Absolutely.

Tullio Siragusa (14:08):

Or is this manually done today?

Lisa Thee (14:11):

Yeah, so that's why we offer that digital safety assessment, so that you don't have to become an expert in what to look for before you go to find it. We can identify over 65 different applications of problematic content on your platforms, but it's almost like a Lego system: maybe on your platform it's really important to make sure you don't have any issues with a specific abuse type, but you are more relaxed on others. I don't behave the same way in a nightclub that I do in a library. Every platform is going to have a different type of community response. And so being able to partner with places like Spectrum and Launch allows you to be very laser-focused on what you're concerned about. For example, if I am a dating website, maybe it's not a problem that I have sexualized conversations. If I'm a chatbot for HR within a company, I probably want to flag that and know that it's happening, right? So it's a matter of collaborating to understand things like: what are your policies on hate speech? What are your policies on age verification? What are your policies on radicalization, on doxing? There are so many different threat vectors there.

We can rely somewhat on the regulatory landscape to guide us today: it is felonious to create and distribute child sexual abuse material, which colloquially can be called child pornography. We don't like to use that term in the industry, because the material is inherently non-consensual, because it is of a child, and the term softens the crime that it really is, which is the worst day of a child's life being shared for the pleasure of other people. The same applies to live-streaming terrorism events; Australia has come forward with some of the more progressive laws about takedown times for those kinds of activities. That's where you see the majority of the maturation of the industry today, because those things are illegal and it is very clear that companies have to take action on them. You will see industry forums that have combined those topics. I really like the Oasis Consortium and their white paper about what some of those threat vectors are; you can go to their website and get a really nice white paper about responsible AI in those spaces.

I think what we're going to see moving forward is things that are a little more proactive, a little earlier in the process. Things like making it illegal to groom children for those types of crimes, addressing things that you probably wouldn't be allowed to do in the real world. But there are some loopholes in tech, in the regulatory landscape of third-party liability on platforms, that allow some poor choices by criminals to skate through some laws, especially around Section 230 and who's liable for what once things are being broadcast internationally.
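The "Lego system" idea, where each platform switches on only the abuse-type detectors that match its own policies, can be sketched as a small configuration layer. The category names and keyword stubs below are hypothetical placeholders; a real system would route each enabled category to a trained classifier.

```python
from typing import Callable

# Hypothetical keyword stubs standing in for trained per-category classifiers.
def detect_hate_speech(text: str) -> bool:
    return "slur" in text.lower()          # placeholder heuristic

def detect_sexual_content(text: str) -> bool:
    return "explicit" in text.lower()      # placeholder heuristic

class ModerationPolicy:
    """Per-platform policy: only the enabled abuse-type detectors run."""

    def __init__(self, detectors: dict[str, Callable[[str], bool]]):
        self.detectors = detectors

    def review(self, text: str) -> list[str]:
        """Return the names of every enabled category the text trips."""
        return [name for name, detect in self.detectors.items() if detect(text)]

# A dating platform may tolerate sexual content; an internal HR bot should not.
dating_site = ModerationPolicy({"hate_speech": detect_hate_speech})
hr_chatbot = ModerationPolicy({
    "hate_speech": detect_hate_speech,
    "sexual_content": detect_sexual_content,
})

message = "some explicit message"
print(dating_site.review(message))  # [] -- not flagged under this policy
print(hr_chatbot.review(message))   # ['sexual_content'] -- flagged here
```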

Tullio Siragusa (17:15):

Thanks for sharing that, Lisa. I'm thinking, over the past 30 years or so, I've seen new roles in organizations evolve as the need has dictated: what used to just be the CIO's role expanded, and with the democratization of software and technology you needed to have a CTO; with digital transformation, companies adopted the chief digital officer; with increased cybersecurity threats, they adopted the CSO role. Is there a role that needs to evolve as it relates to AI safety and security and ethics that you think is missing in most companies today?

Lisa Thee (18:03):

Yeah, so what I typically think about when I think about these types of roles is that it's not specifically AI-based; AI is typically the tool for detecting terms-of-service violations and content moderation problems. But I do think that most of the mature tech companies that are global in nature and have been handling large volumes of data have had Chief Digital Trust and Safety Officers for a while now, because the impact that bad actors can have on your community and your population and your business is pretty pervasive if it's not well managed. You also have a lot of regulatory requirements coming in from overseas, primarily out of Europe, around digital safety that affect all multinational companies and will be enacted in the next few months, if not a few years. That will be an evolving landscape. So having someone who really understands the requirements of the company and the due diligence they need to do, similar to maybe what a CIO was 10 years ago, I think will be a very common role as we move forward here, for companies to really understand the trust and safety vectors: What do we allow on our platforms? What do we stop? How do we warn people that they're starting to make bad choices? How do we escalate those warnings with ramifications if the behavior doesn't end? How do we build healthy communities, and how do we make sure that we stay in the business of selling or creating this wonderful product or service?

I would never expect that if I go on Pinterest I can find illegal content at the touch of a button, but when platforms are there to host information, there will be people who abuse those opportunities. So I really think it is a risk that all companies have. I mean, even if you go back to the show Silicon Valley, there are some pretty classic episodes where, you know, they build a platform for communication and it ends up being a bunch of middle-aged dudes and teenagers communicating. If you don't have good, clear legal ramifications in your terms of service for your product, you can get yourself into some pretty sticky situations pretty quickly. And as an entrepreneur myself, I can guarantee you an ounce of prevention is worth a pound of cure, especially when you're talking about the safety of minors.

There are some really great free tools for people. If you go to Microsoft's website and check out PhotoDNA, that is a lightweight tool, think of it at about the same level as a spam filter, that identifies felonious content on your platforms that has already been identified by law enforcement, so that you can make sure that no one under your employee umbrella, whether that be vendors, contractors, or your employees themselves, is sharing, creating, or trading that information. You can also look at things like Safer, from Thorn, which is a paid service that gives you automated processes for reporting that kind of content to the proper places if you find that you have an issue. So I think there are a lot of really well-established organizations in the child safety and livestream terror areas that will be a great baseline model for tools for small, medium, and large business owners to be more proactive, and that will continue to evolve.

Tullio Siragusa (21:39):

Are you seeing companies being proactive around this, or are they still kind of responding after something goes wrong? What's the trend now?

Lisa Thee (21:50):

You know, it's company by company, to be perfectly frank with you. A lot of it is the awareness of the board and the C-suite of the amount of risk they're taking. There are some companies that have been on the front page of the New York Times for having problems in this area, and funny enough, they seem to have a pretty strong stance on making sure they don't have that situation again, and they make a lot of investments in that space. There are other companies that are, you know, indicating that free speech is paramount. And I am a big fan of free speech, but I don't think the privacy of adults outweighs the value of protecting the most marginalized in society. If we don't use AI tools to identify when flagged, known illegal content is being distributed intentionally, the same way that we do with cybersecurity, we're missing opportunities to really reduce the amount of problematic content for businesses. I don't think the privacy of adults trying to commit crimes outweighs the need for privacy of the vulnerable people being taken advantage of, and I think we'll continue to see regulation start to tip in that direction.

It really, really concerns me when people get political about their points of view on content moderation. Unfortunately, I get called in when things go really off the rails, and if you lived a day in the life that I have to lead, you'd see we're not looking at marginal, borderline situations most of the time in these companies, because you can only look at about 5% of the content at any given time, even using the best technology tools, based on the volume of videos and chats and everything else going on. We're really prioritizing the absolute most egregious things, where I think most people would not be super concerned about protecting the rights of the people doing them, because they are very, very clearly illegal. And these technologies that have been developed to identify those situations are very, very lightweight. If you're concerned about the privacy violations of those tools, you should also go ahead and turn off your spam filter and turn off all your cybersecurity tools, because it's about the same level of invasiveness. It's looking for hashes, it's looking for known content; it's not reading your messages. Nobody's going in and reading your emails, let's say it that way.
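The "looking for hashes, not reading your emails" point can be shown in a few lines: the filter only checks whether a file's fingerprint appears on a vetted known-bad list. The sketch below uses an ordinary cryptographic hash to keep it simple; tools like PhotoDNA actually use perceptual hashes so that resized or re-encoded copies still match, and the sample hash list here is a made-up placeholder.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Content fingerprint. Production tools use perceptual hashes (PhotoDNA)
    so resized or re-encoded copies still match; SHA-256 here only matches
    byte-identical files, which keeps the sketch simple."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical vetted list; in practice supplied by clearinghouses such as
# NCMEC after law-enforcement review.
KNOWN_BAD_HASHES = {fingerprint(b"previously identified illegal file bytes")}

def is_known_bad(data: bytes) -> bool:
    # The filter never interprets content; it only checks membership on the
    # vetted list -- about as invasive as a spam blocklist lookup.
    return fingerprint(data) in KNOWN_BAD_HASHES

print(is_known_bad(b"previously identified illegal file bytes"))  # True: flag it
print(is_known_bad(b"an ordinary family photo"))                  # False: untouched
```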

Tullio Siragusa (24:28):

It sounds like regulation is definitely needed in this area. As of today, it's really up to, I guess, CEOs and their willingness to be proactive with this, but it seems like regulation probably would serve us really well here.

Lisa Thee (24:47):

Yeah, I have been working with my local representative to see if we can do more around sextortion legislation; I'm specifically passionate about that. And I think you will also see a lot of regulations coming out of Europe. I follow along with Spectrum Labs' masterclass because they do a nice job of consolidating all the regulation information and how to stay current; that's how I stay current with it. And last but not least, we need a lot more tech people who are really passionate about the mission of creating a society we all want to live in once this is all fully deployed. So I have a book coming out called Go Reboot Your Career in 90 Days, which will drop from Fast Company on September 5th, and right now you can check out the web link below. Really, my goal in writing that book was to help people who have an interest in bringing innovation and their influence together to leave their own legacies. Whether you're passionate about animals or seniors or the environment, I would love to see more people step into entrepreneurship, or act as entrepreneurs at large companies, helping to get the focus of the C-suites and the boards onto these kinds of topics. Because it will be the thing that differentiates the companies of the future, and I want to see a more inclusive landscape of people in the room when these decisions are being made.

Tullio Siragusa (26:12):

Lisa, it's been a pleasure to have you with us this morning. Thanks for joining us. Definitely check out Lisa's book. Stay with me as we go off the air in just a second. AI can make it a lot easier for organizations to be proactive in managing those who are most at risk, and being proactive is the key. So thanks for being with us. We've got a few more guests coming up this week. On Thursday, we have Preetham Shankar, who's the VP of Data Engineering and Analytics at Broadview Federal Credit Union. On Friday, we have Andrey Knourenko, VP of Engineering at Aras Corporation. And then we have another guest next Monday, June 5th, Ebenezer Schubert, who's the VP of Engineering at OutSystems. We'll announce the topics as we get closer; keep your eyes open wherever you're following us. Thanks for being with us today, and enjoy the rest of your day. See you again Thursday.

Lisa Thee (27:19):

Thanks for having me.



Lisa Thee

Managing Director, Data and AI

Lisa Thee is a Top 50 Global Thought Leader for AI, Privacy, and Safety with demonstrated experience in delivering revenue and solving complex business technology, governance, privacy, and risk challenges at scale.

Ms. Thee is a consultant to some of the world's most innovative healthcare and global technology companies, including Microsoft and UCSF's Center for Digital Healthcare Innovation, where she helped accelerate FDA approval for AI use in clinical settings. She is the CEO and Co-Founder of Minor Guard, an artificial intelligence software company focused on making children safer online and in real life. She is a keynote speaker whose talks include the TEDx talk "Bringing Light To Dark Places Online: Disrupting Human Trafficking Using AI," and she hosts the Navigating Forward podcast. She was named by Thinkers 360 to its 2022 lists of Top Health and Safety, Privacy, and AI Thought Leaders and Influencers and Women in Business You Should Follow, and she was recently named to the 2022 "Top 100 Brilliant Women in AI Ethics" global list.