Susanne Tedrick

Realistic expectations of artificial intelligence: four things you should know

While we are still a few decades out from seeing most businesses and industries use artificial intelligence (AI) in their operations, adoption is certainly greater than in years past. According to a top IT training company, its annual AI Adoption in the Enterprise survey saw a sharp uptick in respondents in 2021 (nearly three times as many as the previous year), with 61% of respondents reporting that they are actively considering or evaluating AI solutions.

If you are interested in learning more about AI, here are four things you should know:

AI is already embedded in people’s day-to-day lives – While widespread, more sophisticated uses of artificial intelligence may still lie in the not-so-distant future, people are already using or interacting with some form of AI every day. If you have used web-based email services, ridesharing services, or e-commerce platforms, you have likely been interacting with artificial intelligence in some capacity without knowing it.

There are currently no federal or state laws requiring companies to disclose that they are using AI in their applications, and for public-facing applications, customers are more likely to be interacting with a well-programmed AI interface than with another human being. However, many consumer advocacy groups are pushing for formal disclosure laws, believing that customers could otherwise be deceived and misled. While this may not be a worry for simpler transactions, it is more concerning where more is at stake, as in financial services and telemedicine.

AI vs. Machine Learning vs. Deep Learning – The terms AI, machine learning, and deep learning are often used interchangeably, but each has a specific meaning, and the three are interrelated.

General, high-level uses of AI, like text chatbots on e-commerce sites, are likely pre-programmed by humans to mimic typical human interactions or calculations. The AI here depends on what is already programmed: it cannot “learn” on its own from mistakes or added information, nor can it start up or power down on its own, so it relies heavily on human intervention.
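To make "pre-programmed" concrete, here is a minimal sketch of how such a chatbot might work under the hood (the keywords and replies are hypothetical, invented for illustration). Every answer is written in advance by a human; the program only looks up canned responses and never improves from experience.

```python
# A rule-based chatbot sketch: every reply is pre-programmed, so the "AI"
# never learns anything new. Rules and replies are hypothetical examples.
RULES = {
    "shipping": "Standard shipping takes 3-5 business days.",
    "returns": "You can return items within 30 days of purchase.",
    "hours": "Our support team is available 9am-5pm, Monday to Friday.",
}

def reply(message: str) -> str:
    """Return a canned answer if a known keyword appears, else a fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Let me connect you to a human agent."

print(reply("How long does shipping take?"))
# -> Standard shipping takes 3-5 business days.
```

If a customer asks something outside the rule table, the bot can only fall back to a default answer, which is why such systems hand off to humans so often.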

Machine learning helps to address these limitations. In machine learning, the AI learns from the feedback and information it is given, then uses what it has learned to improve its operations or performance in the future. Human intervention is sometimes necessary, but not at the same level as with basic AI. Voice assistants are examples of machine learning in action.
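The "learns from feedback" idea can be sketched with one of the oldest machine learning models, the perceptron (a generic textbook example, not any particular product). Each time its prediction is wrong, it nudges its internal weights, so its answers improve as it sees more examples, with no human rewriting the rules.

```python
# A minimal "learning from feedback" sketch: a perceptron that adjusts its
# weights whenever it makes a mistake. Toy task: learn the logical AND of
# two inputs from labeled examples.
def train_perceptron(examples, epochs=10):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction          # the feedback signal
            w[0] += error * x1                   # nudge the weights toward
            w[1] += error * x2                   # the correct answer
            b += error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
w, b = train_perceptron(examples)
print([predict(w, b, x1, x2) for (x1, x2), _ in examples])  # -> [0, 0, 0, 1]
```

The key contrast with the chatbot above: nobody told this program the rule for AND. It discovered a working rule from examples and corrections, which is the essence of machine learning.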

Deep learning takes this to the next level: the learning that takes place mimics that of the human brain, using sophisticated algorithms called neural networks. Neural networks are currently used in several applications, including aircraft fault detectors, automotive guidance systems, and even ATMs.
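A neural network stacks simple units in layers, where each unit weighs its inputs and passes the result through an activation function. The sketch below shows only the forward pass of a tiny two-layer network computing XOR, with weights hand-picked for illustration; in real deep learning, the network learns those weights automatically from large amounts of training data.

```python
# A minimal neural-network forward pass: two hidden units feed one output
# unit. Weights are hand-picked here for illustration; deep learning would
# instead discover them from training data.
def step(z):
    """A simple threshold activation: fire (1) if the input is positive."""
    return 1 if z > 0 else 0

def xor_network(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # hidden unit 1: roughly "x1 OR x2"
    h2 = step(x1 + x2 - 1.5)    # hidden unit 2: roughly "x1 AND x2"
    return step(h1 - h2 - 0.5)  # output: OR but not AND, i.e. XOR

print([xor_network(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0]
```

Notably, XOR is a problem a single perceptron can never solve; layering units is what gives neural networks, and by extension deep learning, their extra power.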

AI is more accessible than you think – Two of the biggest barriers to entry for organizations looking to use artificial intelligence tools are overall cost and gaining the right set of skills to use these tools properly.

The overall cost of an AI solution will largely depend on the targeted use case – the problem you are trying to solve – and the level of sophistication needed. Building a general AI chatbot will not be as expensive as, say, building a self-driving car. That said, many cloud computing platforms offer cost-effective AI and machine learning tools to people and organizations, as well as free or low-cost training to help them get started.

AI requires ongoing investment – The costs that come with AI and related solutions are not a “one and done” affair. Although AI solutions can minimize the time and effort needed for monotonous, low-value tasks, they require ongoing monitoring to ensure that they are operating properly, that biases are minimized, and that they comply with industry regulations as well as local, state, and federal laws. AI solutions, much like cloud computing solutions, should be considered an operating expense rather than a capital expense. In addition, as AI solutions change and mature, ensuring that staff are properly skilled is crucial.

As leaders look to bring AI into their organizations, it is important that they have a strong computing infrastructure in place to support its use. AI tools require a significant amount of computing resources – specifically, central processing units (CPUs), graphics processing units (GPUs), storage, and networking – to perform optimally. Organizations will also need to have a clear understanding of what AI adoption will look like in their organizations and a solid strategy on how to make that adoption a reality.

Lastly, it is important that leaders have realistic expectations about the problems that AI can, and cannot, solve. AI solutions cannot fix antiquated or poor internal processes and systems. Layering AI on top of flawed systems will not only lead to failure, but will lead to failure faster and with significant tangible and intangible costs. It is important that an organization reflect on its existing systems and culture before embarking on AI adoption.

About the Author:

Susanne Tedrick is an infrastructure specialist for Azure, Microsoft’s cloud computing platform. In her work, Susanne helps her clients address needs and challenges surrounding cloud adoption, migrating on-premises workloads to the cloud, and cost optimization. Susanne previously worked as a technical specialist for IBM Cloud. For more information, please visit www.SusanneTedrick.com.
