Few IT leaders dispute the fact that AI is this decade’s breakthrough technology. Yet this wasn’t always the case. In fact, until relatively recently, many AI cynics failed to recognize the technology’s potential and, therefore, fell behind more astute competitors.
As they begin to make up for lost time, business and technology leaders should focus on key readiness areas: data infrastructure, governance, regulatory compliance, risk management, and workforce training, says Jim Rowan, head of AI at Deloitte Consulting. “These foundational steps are essential for success in an AI-driven future,” he notes in an email interview.
Rowan cites Deloitte’s most recent State of Generative AI in the Enterprise report, in which 78% of respondents stated they expect to increase their overall AI spending in the next fiscal year. However, the majority of organizations anticipate it will take at least a year to overcome adoption challenges. “These findings underscore the importance of a deliberate yet agile approach to AI readiness that addresses both regulation and talent challenges to AI adoption.”
Getting Ready
The key to getting up to speed in AI lies in hiring the best advisor you can find, someone who has expertise in your company’s area, advises Melissa Ruzzi, AI director at SaaS security firm AppOmni. “Some companies think the best way is to hire grad students fresh out of college,” she notes via email. Yet nothing beats domain expertise and implementation experience. “This is the fastest way to catch up.”
Many organizations underestimate the amount of cultural change needed to help team members adopt and effectively use AI technologies, Rowan says. Workforce training and education early in the AI journey is essential. To foster familiarity and innovation, team members need access to AI tools as well as hands-on experience. “Talent and training gaps can’t be overlooked if organizations aim to achieve sustained growth and maximize ROI,” he says.
Every company has multiple projects that can benefit from AI, Ruzzi says. “It’s best to have an in-house AI expert who understands the technology and its applications,” she advises. “If not, hire consultants and contractors with domain experience to help decide where to get started.”
Many new AI adopters begin by focusing on internal projects tied to customer delivery timelines, Ruzzi says. Others decide to start with a small customer-facing project so they can prove AI’s added value. The decision depends very much on the ROI goal, she notes. “Small projects of short duration can be a good starting point, so the success can be more quickly measured.”
Security Matters
AI security must always be addressed and ensured, regardless of the project’s size or scope, Ruzzi advises. View developing an initial AI project as being similar to installing a new SaaS application, she suggests. “It’s crucial to make sure that configurations, such as accessibility and access to data aren’t posing a risk of public data exposure or, worse yet, are vulnerable to data injection that could poison your models.”
To minimize the security risk created by novice AI teams, start with simple implementations and proofs of concepts, such as internal chatbots, recommends David Brauchler, technical director and head of AI and ML security at cybersecurity consulting firm NCC Group. “Starting slow enables application architects and developers to consider the intricacies AI introduces to application threat models,” he explains in an email interview.
AI also creates new data risk concerns, including the technology’s inability to reliably distinguish between trusted and untrusted content. “Application designers need to consider risks that they might not be used to addressing in traditional software stacks,” Brauchler says.
Organizations should already be training their employees on the risks associated with AI as part of their standard security training, Brauchler advises. “Training programs help address common pitfalls organizations encounter that lead to shadow AI and data leakage,” he says. Organizations that aren’t already providing guidance on security issues should incorporate these risks into their training programs as quickly as they can. “For employees who contribute to the software development lifecycle, technical training should begin before developing AI applications.”
Final Thoughts
As organizations gain experience with GenAI, they will begin to understand both the rewards and challenges of deploying the technology at scale, Rowan says. “The need for disciplined action has grown,” he observes.
As technical preparedness has improved, regulatory uncertainty and risk management have emerged as significant barriers to AI progress, particularly for newcomers, Rowan says. “Talent and workforce issues remain important, yet access to specialized technical talent no longer seems to be a dire emergency.”
Although tempting, Brauchler warns against rushing into AI. “AI will still be here in a few years [and] taking a thoughtful, measured approach to AI business strategy and security is the best way to avoid unnecessary risks,” he concludes.