
Operational AI company Verta works to simplify AI and machine learning (ML) model delivery at scale. Verta recognizes that our everyday digital experiences – image-recognition software, voice assistants, chatbots, the smartphones in our pockets and all of their capabilities, and so on – rely increasingly on AI and ML models to function.
That said, while the tools for building practical AI and ML models are growing more mature, scalable and robust, the processes for operationalizing those models are still relatively new – in Verta’s words, many are “brittle” and not yet in a “prime state.” Managing model and metadata versions, packaging and deploying models within existing systems, maintaining model observability throughout the lifecycle and operating nuanced AI and ML in production can be a frustrating struggle for SMEs and global corporations alike.
So, Verta builds software for high-velocity data science, ML and AI product teams. By addressing the core problems of model management – tracking, versioning and auditing the models used across products – Verta covers the entire AI and ML lifecycle, from experiment tracking and a production registry to deployment, inference, serving and monitoring.
That’s Verta in a nutshell.
Now, let’s talk Verta Insights.
Verta Insights, the research arm of Verta, conducted a study between this past March and April. The 2023 AI Regulations study surveyed more than 300 AI and ML practitioners to, in Verta Insights’ own words, “benchmark the awareness of current and pending regulations covering AI, as well as companies’ levels of preparedness to comply with regulatory requirements around ‘Responsible AI’ and ML model transparency.” It also touched on data and model lineage, as well as what it deemed “widespread concerns” about generative AI (e.g., ChatGPT) and the increased urgency individuals and organizations feel around regulating AI.
The full study will be linked at the end of the article, but here are its biggest takeaways.
Few companies today are well prepared to meet current or future regulatory requirements regarding AI.
- 55% of companies said that regulatory compliance is a “C-level” or board-level priority, and only 11% called it a “low priority.” Government action, meanwhile, is imminent: the European Union, for example, looks to pass the EU AI Act this year, and the U.S. Congress has taken up the American Data Privacy and Protection Act (ADPPA) on top of the Algorithmic Accountability Act. These measures would create new compliance and reporting requirements around companies’ use of AI and ML, “intended to protect consumers against privacy violations, bias in automated decision-making and other potential harms.”
- 62% of Verta’s study participants, however, unwittingly highlighted a disconnect between that stated priority and actual readiness: they said they were not confident their company could complete the algorithm impact assessments called for in the U.S.’s ADPPA. (Only 28% stated they were even semi-confident of completing them.)
That’s an alarming issue.
- Compounding the problem, close to 90% of companies have little or no real automation in place for, per Verta, “the AI governance processes they will need to rely on to best ensure regulatory compliance, like bias detection and mitigation, model explainability, and model validation and testing.” (A hypothetical sketch of what automating one such check might look like follows below.)
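To make that concrete, here is a minimal, hypothetical sketch of what automating one such governance check – bias detection via a demographic parity test – might look like as a gate in a deployment pipeline. The function name, toy data and threshold below are illustrative assumptions, not Verta’s product or API, and real fairness checks involve far more nuance.

```python
# Hypothetical sketch of an automated bias check, the kind of AI governance
# process the study found most companies have not automated. All names,
# data and thresholds here are illustrative assumptions, not Verta's API.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Toy model outputs (1 = approve) paired with a protected attribute.
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    gap = demographic_parity_gap(preds, groups)
    THRESHOLD = 0.2  # assumed policy limit; real limits depend on the regulation
    print(f"demographic parity gap: {gap:.2f}")
    # In CI, a failure here would block deployment until the model is reviewed.
    assert gap <= THRESHOLD, "bias check failed: gap exceeds policy threshold"
```

Wired into a pipeline this way, a check like this turns a governance policy into an enforced gate rather than a manual review step – which is the kind of automation the study found largely missing.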
Companies that do adopt advanced AI/ML models hold a more “mature edge” when it comes to pending regulatory measures.
- What instills more hope is that 43% of participants said compliance is “one of the key priorities that drives their investments.” Verta then broke companies into two categories: “leading performers,” those that usually meet their financial targets regarding advanced AI preparedness, and “lagging performers,” those that rarely (or have yet to) meet those targets. In every cross-reference, the leading performers excelled in active ML tool usage and in their support of adaptive model governance and risk management, whereas – not unexpectedly – the lagging performers weren’t sure how to establish a model catalog, nor how to test or deploy models in ways that align with ethical and regulatory standards.
This, too, must change.
“Companies typically react to regulatory pressures in a predictable curve, where we see leaders and fast followers making substantial early investments in the kind of people, processes and technology necessary for compliance,” said Rory King, Head of Verta Insights Research and Verta Go-to-Market. “Laggards and later followers, on the other hand, tend to wait until a regulation is imminent or has taken effect before they prepare for compliance. This means that leaders are prepared for compliance on ‘Day One’ when the regulations go into effect, while laggards don’t proactively take initiative and are frequently left scrambling to comply, resulting in additional cost, lost business or even legal risks.”
Overall, though there isn’t a single definitive “moral of the story” here (given how much in the world of AI has yet to settle), the wiser move is still to invest in more mature AI preparedness: regulatory measures loom, new AI and ML advancements crest the horizon, and hundreds of big-name companies are integrating them into competing tech that will command the direction of today’s markets.
Edited by Greg Tavarez