AI
April 9, 2024

In the World of AI, is Your Data Team a Liability or an Asset?

We’re barely scratching the surface of AI, and it’s already playing a big role in politics, schools, corporations, and society. It’s helping us save the bees, improve quality of life for people with disabilities, and combat climate change. And while AI holds massive potential, as with any new technology it has its shortcomings. From Google’s ahistorical renderings of the Founding Fathers, to ChatGPT’s payment-data snafu, to Samsung’s accidental leak of sensitive information, how organizations integrate and execute AI is incredibly complex and critical to get right.

As a data leader, you’re in the driver’s seat, and your actions will determine whether your organization views your team’s use of AI as a liability or an asset. We asked two data leaders at high-growth, high-visibility companies to share their perspectives on how AI is impacting them and their teams.

The impact of AI on data leaders and data teams

Perspective by Daniel Morris, former Global Head of Subscription and Consumer Analytics at HBO Max

Ultimately, I view AI as an asset. My role as a data leader is to make that a reality by evaluating the potential and power of AI while also questioning its value. I truly believe that AI has enormous capabilities, but I think we are farther from realizing them than we’d like to admit.

When ChatGPT came out last year, I think many of us in the field thought it might be possible to ‘talk to’ our data in natural language within a year or so. While I’m bullish on that possibility - and there are several promising startups doing amazing work here, like Dot and Zing - I think most data teams have real work to do before we get there. Some of that work is fine-tuning the technology (e.g., avoiding hallucinated KPIs when I’m demoing to my CEO), but much of it is work we already know we have to do (read: “important but not urgent”) to improve the hygiene of our data environments and ensure we have rock-solid semantic layers.

As Gen AI evolves, I also think about our data teams and the skills they possess. The teams that truly understand AI and can use it well will define how an organization benefits from it. In my view, some roles on a data team will be more impacted than others.

Machine Learning (ML) Engineers are, in my opinion, in the toughest spot, primarily because the core technology appears to be on track to be commoditized. We’ve had this moment of breakthrough technology for AI, and while there is certainly a race to develop the best models, most companies are not developing their own. Instead, they are looking to benefit from the core tech with some modifications. Unless you’re an ML Engineer at one of those leading companies or at a startup that is ‘AI first’, I suspect this moment of excitement for Gen AI is not a tailwind for you.

Interestingly, it seems that Analytics Engineers and Data Engineers are in the best position to harness the power of AI. Ultimately, their job is to engineer solutions that get data from A to B without losing data integrity, which is highly context dependent. At HBO Max, when we were prototyping our first analytics chatbot, I found it was the Analytics and Data Engineers who were doing a lot of the heavy lifting to ensure that the data models were clean and straightforward.

In addition, the two primary use cases for Gen AI right now that seem to have legs are (1) analytics chatbots that let you ‘talk to your data’ and (2) copilot-like capabilities for building data models. To enable the former you need a solid semantic layer (as Sequeda, Allemang, and Jacob’s technical paper published last November showed us), which requires dedicated analytics engineering and/or data engineering work. The latter is a huge part of the day-to-day for these roles, so being able to use Gen AI and LLM capabilities to accelerate or improve that work becomes important.
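To make the semantic layer point concrete, here’s a minimal sketch of what grounding a ‘talk to your data’ chatbot in governed metric definitions can look like. The metric names, SQL fragments, and prompt wording are all hypothetical, and it assumes the OpenAI Python SDK with an API key configured in the environment:

```python
# Hypothetical sketch: ground an analytics chatbot in a semantic layer
# so the model quotes vetted metric definitions instead of inventing SQL.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

# A toy semantic layer: governed metric definitions the model must use verbatim.
SEMANTIC_LAYER = {
    "monthly_active_users": {
        "description": "Distinct users with at least one session in the month.",
        "sql": "SELECT COUNT(DISTINCT user_id) FROM sessions WHERE ...",
    },
    "churn_rate": {
        "description": "Share of last month's subscribers who cancelled this month.",
        "sql": "SELECT cancelled / NULLIF(prior_subscribers, 0) FROM ...",
    },
}

def answer(question: str) -> str:
    # Render the semantic layer into the system prompt as the only allowed source.
    catalog = "\n".join(
        f"- {name}: {m['description']} (SQL: {m['sql']})"
        for name, m in SEMANTIC_LAYER.items()
    )
    system = (
        "You answer analytics questions ONLY using the metric definitions below. "
        "If no metric applies, say so rather than guessing.\n" + catalog
    )
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How many monthly active users did we have?"))
```

The design choice worth noting: the model is never asked to invent SQL, only to reference definitions the data team has already vetted - exactly the hygiene and semantic-layer work described above.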

Lastly, I think a Cinderella moment is coming for those focused on Data Governance. Data needs to be clean and well organized for AI-generated answers to be coherent and free of hallucinations. It also seems possible to ‘trick’ LLMs into providing responses that developers did not intend, which may present a challenge for certain organizations. If we are indeed marching toward more data transformation and interpretation by AI, then we’ll need to spend more time on the principles that govern the actions that can be taken vs. the action steps themselves.
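As a rough illustration of governing actions rather than scripting steps, here’s a hypothetical policy check that an LLM-driven workflow would have to pass before anything executes. The action and table names are invented for the example:

```python
# Hypothetical sketch: a governance layer that decides WHICH actions an
# LLM-proposed plan may take, independent of the steps the model suggests.
ALLOWED_ACTIONS = {"read_table", "run_approved_metric"}
BLOCKED_TABLES = {"pii_users", "payment_methods"}

def authorize(action: str, table: str) -> bool:
    """Return True only if the proposed action satisfies governance policy."""
    return action in ALLOWED_ACTIONS and table not in BLOCKED_TABLES

# An LLM-proposed plan is checked against policy before anything runs.
proposed = [
    ("read_table", "sessions"),
    ("drop_table", "sessions"),
    ("read_table", "pii_users"),
]
for action, table in proposed:
    status = "allowed" if authorize(action, table) else "rejected by policy"
    print(f"{action} on {table}: {status}")
```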

Data security and privacy in the usage of AI

Perspective by Nate Coleman, Head of Data Science at Calm

AI, and specifically generative AI, has certainly been the hot topic among my peers in the analytics space. While it’s fun to talk about the potential upsides of AI in the data space, I take the downside risk seriously. The AI space is fraught with unknowns, which makes it difficult to estimate the risk of it creating a catastrophic situation for the business. As a result, the bar for applying AI solutions in production is high. In practice, this means that when I’m sizing the impact of an AI initiative, it has to look much more like a fundamental change to our product or business than an optimization on top of our existing product.

Today, AI is impacting me and my team directly through the tools we leverage in our day-to-day work (e.g., Copilot). It’s also a source of creativity for us. With the low-friction availability of LLMs, we’re able to explore programmatic solutions to previously unscalable problems - for example, deriving semantic meaning from a cluster of similar pieces of content. While it hasn’t fundamentally transformed our work, it’s certainly becoming a part of our toolkit.
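As an illustration of that cluster labeling idea, here’s a minimal sketch of using an LLM to derive a semantic label for a group of similar content. The sample titles and model choice are hypothetical, and it assumes the OpenAI Python SDK with an API key configured:

```python
# Hypothetical sketch: ask an LLM to summarize what a cluster of similar
# content has in common - a previously manual, unscalable labeling task.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def label_cluster(items: list[str]) -> str:
    """Return a short theme label for a cluster of content titles."""
    prompt = (
        "These content titles were grouped together by a clustering model. "
        "Reply with a short (2-4 word) theme label:\n"
        + "\n".join(f"- {t}" for t in items)
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

# Invented example titles; upstream, a clustering step would produce these groups.
cluster = ["Rain on a Tin Roof", "Thunderstorm at Night", "Gentle Ocean Waves"]
print(label_cluster(cluster))  # e.g. "Soothing nature sounds"
```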

At Calm we support both a consumer and an enterprise business - quite different, but with one key similarity: both are built on a foundation of trust with customers. Once that trust is lost, it’s almost impossible to win back, which makes data security and privacy a priority for my team in our use of AI. What this effectively means for us is a lot more “red tape” than we’re used to. Instead of just writing code and building analyses, we’re having conversations with our legal and security teams before moving forward. That’s definitely a new muscle for us.

We’re all building this jet plane while flying it at the speed of sound, but the question I keep coming back to is how individual data ownership will play a role in the future of AI. Today we see companies like the NYT filing suit against OpenAI over the use of their content - will there be a world where users opt in or out of their data being used in AI by specific businesses? If that happens, how will our data teams have to change infrastructure to handle security compliance? How will data science teams handle the model bias that introduces? There’s a lot that can change at the drop of a hat. It’s both concerning and exciting.

The data team of the future

Companies and data leaders that can find a way to deliver on AI’s promise while respecting security and privacy will come out on top. In a rapidly evolving AI landscape, what skills and best practices do teams need to build this? AI upskilling tools like Modal are critical for data teams to leverage now to realize the potential of AI’s future.

Most importantly, cultivating talent for AI at your organization ensures that your data team is valued as an asset, not a liability. Data leaders will ultimately be the champions for AI, helping organizations understand its applications, shortcomings, and benefits. And they are the ones responsible for building the teams that will find the efficiencies and capabilities to make their businesses better and more productive.

To learn more about how Modal can help your data team succeed in the face of AI, get in touch with us here.
