Five steps for getting started with AI in your business
For all the buzz about AI, it can be difficult to get a clear idea of how techniques such as machine learning could actually benefit your business.
Nearly two-thirds of business leaders don’t understand the potential returns from using AI, according to a new Microsoft report, ‘Maximising the AI opportunity’.
“The default position tends to be to associate AI with prohibitive or unjustifiable expense,” says the report, which surveyed 1,000 business leaders and 4,000 employees in the UK. That scepticism is backed up by some industry watchers, who highlight the gap between promise and reality in AI-related technologies ranging from chatbots to self-driving vehicles.
In spite of these doubts, the report says its survey found “that organisations already on the AI journey are delivering a 5% improvement on factors like productivity, performance, and business outcomes compared to organisations that are not”.
“Those organizations that we’ve surveyed that have done something, even something small that hasn’t cost so much, they’ve seen tangible benefits off the back of that,” said Michael Wignall, Microsoft’s UK CTO.
According to Wignall, AI-related technologies being implemented by firms include everything from website chatbots to custom machine-learning models making predictions based on manufacturing data.
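In the simplest case, a custom model of the sort Wignall describes is a classifier trained on historical sensor readings. The Python sketch below uses invented data and a basic nearest-neighbour vote purely to illustrate the idea; a production system would use a proper ML framework and far more data.

```python
import math

# Illustrative only: tiny invented "sensor readings" (temperature, vibration)
# labelled with whether the machine later failed.
TRAINING_DATA = [
    ((70.0, 0.2), "ok"),
    ((72.0, 0.3), "ok"),
    ((68.0, 0.1), "ok"),
    ((95.0, 1.1), "fail"),
    ((98.0, 1.3), "fail"),
    ((93.0, 0.9), "fail"),
]

def predict(reading, k=3):
    """Classify a new reading by majority vote of its k nearest
    labelled neighbours (Euclidean distance)."""
    by_distance = sorted(
        TRAINING_DATA,
        key=lambda item: math.dist(reading, item[0]),
    )
    nearest_labels = [label for _, label in by_distance[:k]]
    return max(set(nearest_labels), key=nearest_labels.count)
```

A reading close to the failure examples, such as `predict((96.0, 1.0))`, comes back as `"fail"`; one close to the healthy examples comes back as `"ok"`.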
Here are the five steps the report identifies for rolling out AI in a business.
1. Identify the business problems you need to solve
First identify the business problems you need to solve, then assess each one to see whether it is a fit for an existing AI-related technology, such as chatbots for handling simple customer queries or Robotic Process Automation (RPA) for certain back-office roles. (There is debate over whether RPA’s rules-based approach should be classed as a form of AI, but the report treats it as an AI-related technology.)
Writing in the report, Microsoft UK CEO Cindy Rose said: “As with so many other business issues, overcoming this sense of inertia comes down to first identifying the business problem that needs to be solved.
“Is it, for example, the need to increase efficiency in administrative tasks, such as payroll and invoicing? If so, a Robotic Process Automation (RPA) solution could be the answer. Is it about improving the customer experience with a chatbot or automated telephone system? Is it the need to free up employees’ time for creative tasks by using machine learning to handle the more mundane or straightforward parts of their job? Is it all of the above and more?”
In the report, the British energy company Centrica said it started with the goal of trying to resolve customer enquiries quickly, and to do so built a natural-language bot to work alongside its call centre operatives “providing them with the information to best support customers and feeding updates into our back-end systems”.
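A toy version of the routing logic behind such a bot might look like the following Python sketch. The intents and keywords are invented for illustration; real customer-service bots use trained language models rather than keyword lists.

```python
# Illustrative only: a toy intent matcher of the kind that sits behind a
# simple customer-service bot. Intents and keywords are invented.
INTENTS = {
    "billing": {"bill", "invoice", "payment", "charged"},
    "outage": {"outage", "power", "down", "cut"},
}

def route(message):
    """Return the best-matching intent for a message, or None to hand
    the conversation off to a human operative."""
    words = set(message.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

A message like “why was I charged twice on my bill” matches the `"billing"` intent, while a message matching no intent returns `None` — the hand-off-to-a-human case Centrica describes.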
2. Determine how ready your organisation is to build, manage, and support AI-related systems
Once a suitable problem has been identified, companies need to check they are prepared to build and manage their system of choice.
There are a wide variety of machine learning services available from each of the major cloud providers, both on-demand services for the likes of image and speech recognition, and tools for building custom models, as well as the option of building an in-house system using GPUs and one of the many ML-focused software frameworks.
With machine learning, the report says a key concern is whether the right data is being captured and is being processed appropriately to train the model to make useful predictions.
“What we mean by good data is there’s both a quality and a quantity aspect to it,” said Microsoft’s Wignall.
“You need a lot of data to better train the AI models and the more data you can get the better, whether it’s customer interactions, Internet of Things or sensor data.
“But actually it’s the quality as well, making sure it’s the right data for the purposes you want to use it for.
“You need to think ahead of implementing any of the technology why you’re collecting this data and what you’re using it for. Indiscriminately capturing everything is not the way forward anymore — it’s about quality and quantity.”
Data collection and cleaning pipelines are already in place at a number of companies, the report found, with more than one-third of firms saying they are already using tools such as predictive data analytics and data integration.
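In practice, Wignall’s quality-and-quantity point often boils down to simple checks before any record reaches a model. A minimal Python sketch, with invented field names and valid ranges:

```python
# Illustrative sketch: basic quality checks before feeding records to a
# model. Field names and physical ranges are invented for the example.
VALID_RANGES = {"temperature": (-40.0, 150.0), "vibration": (0.0, 10.0)}

def clean(records):
    """Keep only records whose fields are all present, numeric, and
    within their expected physical range."""
    kept = []
    for record in records:
        ok = True
        for field, (low, high) in VALID_RANGES.items():
            value = record.get(field)
            if not isinstance(value, (int, float)) or not low <= value <= high:
                ok = False
                break
        if ok:
            kept.append(record)
    return kept
```

Records with missing fields or physically impossible values are dropped rather than silently training the model on bad data.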
3. Map the core skills in your organisation and those that are missing
Once the problem has been scoped out, and the potential technologies and data needed identified, the next step is to pinpoint what skills you have in-house to realise the project.
The report recommends mapping the skills available and those needed in the mid to long-term, before working out ways to build any missing skills — including using in-house skills programs, recruitment, and harnessing partner organizations.
These are not just the skills needed to realise the project, but also those needed by employees whose jobs will change as a result of using AI-related technologies like chatbots in the business.
Microsoft found about one third of business leaders admit to being uncertain how to provide staff with the skills needed to cope with disruption caused by AI reshaping roles.
And while nearly half of employees felt able to develop new skills important to their jobs, only 15% said their organization was providing support to help them do it. In addition, just 18% said they were actively learning new skills to help them keep up with “future changes to their work caused by AI”.
This finding of a lack of preparedness chimes with a report by the UK HR body CIPD, which last year found that UK training spend has been falling since 2005 and that “participation in job-related adult learning has fallen significantly in recent years”.
4. Foster a culture in which employees can experiment with and evaluate AI
Employees and business leaders are open to experimenting with AI-related technologies to help them do their job, Microsoft found, with 67% of leaders and 59% of employees saying they were open to the idea.
However, employees may need some encouragement before engaging with these new technologies. Newcastle City Council, which has been experimenting with using various bots to handle simple interactions with the public, recommends some simple steps.
“Part of this comes down to picking the right kind of projects with which to experiment,” writes digital transformation programme manager Jenny Nelson in the report.
“Starting small and scaling up helps teams build trust, get feedback, learn lessons, and build confidence.”
Microsoft’s Wignall said: “Employees we surveyed were getting quite open to the use of AI, there wasn’t an ingrained resistance from an employee perspective. But they might not necessarily have the skills to take advantage of the change and they also might be worried about what will happen to their jobs if their existing skills aren’t the right skills needed for the future.”
He recommended “fostering a learning and development environment from the bottom up, where people are thinking about how their jobs might change under the auspices of AI” and “investing in skills”, and encouraging “continuous learning and development”.
5. Don’t forget about bias and ethics
As mentioned above, machine-learning systems are only as good as the data they are trained on.
“If that data is misrepresentative, biased or downright wrong, the way the machine learns to use it will, in turn, be fundamentally flawed,” the report states, adding there is a need for human oversight.
“Only by helping engineers eliminate data blind spots around factors such as gender, race, ethnicity, and socioeconomic background can we ensure that AI technologies deliver the unbiased, societally responsible outcomes we want.”
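One concrete way to look for such blind spots is to check whether any group is badly under-represented in the training data before a model is built. A minimal Python sketch (the attribute name and threshold are invented for illustration):

```python
from collections import Counter

def underrepresented(records, attribute, min_share=0.1):
    """Return the groups for a sensitive attribute whose share of the
    training data falls below min_share — candidate blind spots."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return sorted(group for group, n in counts.items() if n / total < min_share)
```

Flagged groups are not proof of bias on their own, but they tell engineers where the model’s predictions rest on too little evidence and where human oversight is most needed.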
The report recommends developing an AI manifesto setting out a framework for using the technology in an ethical manner, and which “protects data privacy, guards against the malicious misuse of AI, and lays out clear guidelines around issues like inherent bias, automation, and where responsibility lies when things go wrong”.