Why pre-trained accelerators are the key to successful AI adoption
by Sanjay Srivastava 12 September 2019
On your next commute home, think for a moment. Do you remember the number of times you turned right or the color of each traffic light? If you are like most people, you don't retain these minute details. It is widely recognized that the human brain receives up to 11 million inputs per second, yet the conscious mind seems to process only around 50.
In effect, the human mind has evolved to execute the right thinking at the right time for the many decisions we make each day.
Even the most advanced artificial intelligence (AI) systems can't match the brain's processing power. But like our brains, these systems rely on specialized algorithms that come into play for various processes and subprocesses. Just as our brains subconsciously handle numerous subtasks to recognize when to hit the brakes, these specialized AI services form the building blocks of AI applications.
We are seeing more enterprise AI applications being made from granular algorithms woven and orchestrated together. Successful organizations are building these algorithms using a combination of data and domain knowledge of their specific industry and business. These pre-trained AI accelerators help companies automate specific tasks and increase AI adoption on a grand scale.
This article will explore what is driving the rise of pre-trained AI accelerators and why they are the key to successful AI adoption.
Giving context to AI through data and domain knowledge
Large enterprises around the world say that AI adoption is not a question of
“if,” but “when.” According to Genpact’s AI 360 survey, 79 percent of senior executives
plan to fundamentally reimagine their business model or significantly transform
their business processes using AI by the end of 2021.
So why aren't more enterprises moving beyond planning and taking action? Unfortunately, there
are a few hurdles to AI deployment, including where to start, what to build and
how to execute. After all, AI—by design—is horizontal and use-case agnostic.
Successful deployment of AI requires tuning algorithms and systems to the desired use
case. For instance, an algorithm for financial forecasting will be
significantly different from one used in predictive maintenance on machine
parts. Both can use the same core AI platform, but they need to be trained with
different datasets, semantic understanding, process knowledge and ontologies.
This contextualization is why relevant datasets and domain knowledge are
critical to AI applications. For instance, with machine learning, domain expertise
orients an algorithm towards its goal, providing context so that data samples
are used effectively to train the machine for the desired outcomes.
A move towards more modular solutions
A shift is
happening towards more modular AI applications, presenting a major step forward
in easing AI adoption and scalability for large enterprises. Almost all
technologies eventually evolve into more modular components. For instance, the earliest
computers took up entire rooms and required considerable working knowledge.
Today, my barely teenage son can build his own computer using memory chips and
graphics processors, stacking them like Lego blocks.
In the same vein, most cloud-native applications are now built as microservices using interchangeable "building blocks" that we can
separately optimize and update. Such a modular architecture using pre-trained
AI accelerators makes applications not only more flexible, scalable and resilient,
but also open to innovation. Microservices represent the next stage in AI’s
evolution, unlocking AI’s true power by using pre-trained accelerators to
assemble components of end-to-end processes.
For example, the Accounts Payable (AP) department is normally plagued by labor-intensive
and transaction-heavy processes. Teams have to deal with documents in a variety
of formats and widespread unstructured data, turning decision-making into a highly
complex endeavor. Instead, an organization can use multiple pre-trained AI accelerators
to reimagine its AP processing and apply AI to several subprocesses. One pre-trained
AI accelerator might disambiguate tables and extract information from paper
invoices, while another turns unstructured data into structured datasets. This
modular approach helps automate once-complex business decisions.
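As a rough sketch of this orchestration, consider a pipeline in which each pre-trained accelerator is a swappable step. The function names, return values, and invoice fields below are hypothetical stand-ins; real accelerators would wrap trained models served behind their own APIs.

```python
# Sketch of orchestrating pre-trained AI accelerators as interchangeable
# building blocks for AP processing. All names and values are illustrative.

def extract_invoice_fields(document: str) -> dict:
    """Stand-in accelerator: disambiguate tables and pull fields from an invoice."""
    # A real accelerator would run OCR and table extraction here.
    return {"vendor": "Acme Corp", "amount": 1200.0, "notes": document}

def structure_unstructured(record: dict) -> dict:
    """Stand-in accelerator: normalize unstructured values into structured fields."""
    record["amount_usd"] = round(record["amount"], 2)
    return record

def run_ap_pipeline(document: str, steps) -> dict:
    """Chain accelerators so each handles one subtask of AP processing."""
    record = extract_invoice_fields(document)
    for step in steps:  # each step is a separately optimized building block
        record = step(record)
    return record

result = run_ap_pipeline("INV-001 paper invoice text ...", [structure_unstructured])
```

Because each step only agrees on a shared record format, any single accelerator can be retrained, replaced, or updated without touching the rest of the pipeline.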
A sum as great as its parts
Pre-trained AI accelerators enable enterprises to rapidly and modularly build applications while avoiding time-consuming customization and curation. While no single pre-trained AI accelerator solves an entire end-to-end process by itself, each provides a focused algorithm for one subtask.
For example, in assessing credit risk, an organization can tune one algorithm to
identify and extract information from the correct fields on a balance sheet,
while another can assign scores based on semantically extracted text in a
footnote. Each unique accelerator is tuned to perform its specific subtask—through
relevant data and domain knowledge—with high accuracy. They can then work in
concert to measure a lending portfolio’s risk. This streamlines development so
the organization can reap the benefits of AI sooner, with the flexibility and
scalability to apply it to other critical tasks.
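The idea of tuned subtask accelerators working in concert can be sketched as follows. The scoring functions, field names, risk terms, and weighting are all illustrative assumptions, not a real risk model.

```python
# Sketch: two subtask "accelerators" whose outputs combine into one risk
# metric. All logic here is a hypothetical illustration of composition.

def balance_sheet_score(fields: dict) -> float:
    """Stand-in accelerator: score extracted balance-sheet fields (higher = riskier)."""
    debt_ratio = fields["total_debt"] / fields["total_assets"]
    return min(debt_ratio, 1.0)

def footnote_score(footnote: str) -> float:
    """Stand-in accelerator: score semantically extracted footnote text for risk language."""
    risk_terms = {"default", "impairment", "restatement"}  # assumed vocabulary
    words = set(footnote.lower().split())
    return len(words & risk_terms) / len(risk_terms)

def portfolio_risk(fields: dict, footnote: str, w_sheet: float = 0.7) -> float:
    """Combine the subtask scores, in concert, into a single portfolio risk figure."""
    return w_sheet * balance_sheet_score(fields) + (1 - w_sheet) * footnote_score(footnote)

risk = portfolio_risk({"total_debt": 40.0, "total_assets": 100.0},
                      "Potential impairment noted in Q3")
```

Each component can be retuned on its own data (new balance-sheet layouts, a richer risk vocabulary) without retraining the whole system, which is the flexibility the article describes.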
If our brains had to process every tiny task involved in driving, our daily commute might not be possible. Similarly, throwing terabytes of data at a generalized horizontal AI system doesn’t scale. Instead, enterprises should invest in pre-trained AI accelerators that perfect individual tasks, allowing large applications and end-to-end processes to become the sum of these optimized parts. As technology trends toward modularization, pre-trained AI accelerators—fueled by data and informed by deep, contextual expertise—are the key to unlocking AI’s potential.
Sanjay Srivastava is chief digital officer of Genpact, an American professional services firm focused on digital transformation.
Catch up with Sanjay and the Genpact team at the AI Summit San Francisco, Sept. 25 – 26. Find out more about how you can attend.