
The Great AI Migration in 2022 and Beyond

Artificial intelligence, at its core, can be defined as an information system that gives computers or robots the ability to perform analytical and reasoning tasks typically done by humans. This is an intentionally broad definition, as the abilities of AI continue to evolve, allowing ever more complex cognitive tasks to be performed completely autonomously. These advancements have propelled AI into the spotlight, making it the ‘next big thing’ in technology, as its capabilities can be applied across just about every industry.

Now, AI is rarely used alone. It’s more commonly found working in conjunction with machine learning (ML) and deep learning (DL). Before we look at the future of the industry, it helps to have a solid grasp of what each discipline does and how they work together.

As stated above, AI describes any human-like intelligence exhibited by a computer or programmable machine, effectively mimicking the capabilities of the human mind. The discipline is not new but has come into sharp focus in recent years. Computers have become more powerful, and the semiconductor industry is making great strides towards manufacturing chips containing processors specifically designed for executing AI-based algorithms.

Machine learning is a subset of artificial intelligence that concentrates on developing algorithms and statistical models that enable machines to make decisions without explicit programming. The machine learning process effectively refines its results and reprograms itself as it digests more data. Over thousands of iterations, this increases efficacy in the specific task that the machine-learned algorithm is designed to perform, delivering progressively greater accuracy.
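As a purely illustrative sketch of that refinement loop (the data and parameters below are invented for the example), the following Python snippet fits a simple linear model by gradient descent; the error shrinks steadily as the algorithm iterates over the data.

```python
import numpy as np

# Invented toy data: noisy samples of y = 3x + 2
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 2.0 + rng.normal(0, 0.1, 200)

w, b = 0.0, 0.0   # model parameters, initially untrained
lr = 0.1          # learning rate

for step in range(1000):                    # many iterations refine the fit
    error = (w * x + b) - y
    w -= lr * 2 * np.mean(error * x)        # gradient of the mean squared error
    b -= lr * 2 * np.mean(error)
    if step % 250 == 0:
        print(f"step {step}: mse = {np.mean(error**2):.4f}")

print(f"learned w = {w:.2f}, b = {b:.2f}")  # approaches the true values of 3 and 2
```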

And at the centre of this sits deep learning: a further subset of machine learning that can teach itself to execute a specific task autonomously from large training data sets. What sets deep learning apart from other methods is that these tasks are often learnt entirely unsupervised, without human intervention.
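To make "learning without human intervention" concrete, here is a deliberately tiny, hypothetical example in PyTorch: an autoencoder that is given no labels at all and simply teaches itself to compress and reconstruct its inputs. Real deep learning systems are vastly larger, but the principle is the same.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 4),   # encoder: squeeze 16 input values into 4
    nn.ReLU(),
    nn.Linear(4, 16),   # decoder: rebuild the original 16 values
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
data = torch.randn(512, 16)   # stand-in for a large, unlabelled training set

for epoch in range(200):
    reconstruction = model(data)
    # The only training signal is how well the network reproduces its own input;
    # no human-supplied answers are involved.
    loss = nn.functional.mse_loss(reconstruction, data)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction error: {loss.item():.4f}")
```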

By combining the three disciplines of AI, ML and DL, computers can effectively learn from examples and construct knowledge. They can recognise objects, understand and respond to verbal natural language, tackle intractable problems, and even make decisions that humans might consider rational and perceptive.

The Transition from the Cloud to the Edge

Currently, the training of AI is often driven by large corporations with the necessary compute resources and training data. Major factors driving the growth of AI include developments in machine learning, which in turn stem from advancements in computer science. Greater computing power and cloud storage give companies the ability to collect, store and analyse large volumes of data.

However, there are issues with cloud-based AI, the primary one being latency. The lag between deciphering the data and deriving insights to help make intelligent decisions is becoming a more common issue as sensors and smart devices become more widely used. And in product-based industries, the need for more accurate and specific data is also prompting the search for an alternative way of collecting data and generating insights.

While AI is now broadly considered “a better algorithm” for data analytics and insights, it’s not superhuman. For example, if machine learning uses biased training data, then the output from the AI can be incorrect at best and dangerous at worst. The need for highly representative data is therefore becoming vital. Indeed, there are companies today that specialise in creating synthetic data sets for AI, such is the necessity of having access to large volumes of unbiased data during training to deliver optimum results.

Until recently, AI required relatively high-performance processors to deliver results. However, with the advent of silicon chips with integrated neural network accelerators, it’s now possible to shift AI from its cloud host down to the device itself, otherwise known as edge computing. Today, running AI “on the edge” in consumer devices is rapidly becoming commonplace, bringing improvements in speed, accuracy and privacy.

The advancements made in machine learning are now helping to optimize models based on deep neural networks, making them compact and efficient enough to run at the edge. This allows cognitive reasoning to be integrated without needing to call on cloud-based servers. Essentially, newer machine learning models can now handle video, speech synthesis and voice recognition, as well as data gathered by cameras, microphones and other sensors, directly on the device. And when edge devices are integrated with dedicated AI chips, the insights and analysis hidden in these data clusters can be uncovered.
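One common route to making a deep neural network compact enough for the edge is post-training quantization. The sketch below is a hypothetical illustration using PyTorch's dynamic quantization (the network and its dimensions are made up); production deployments may instead rely on pruning, distillation or a chip vendor's own toolchain.

```python
import os
import torch
import torch.nn as nn

# Stand-in for a network originally trained and run in the cloud
model = nn.Sequential(
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, 10),
)
model.eval()

# Dynamic quantization stores the linear-layer weights as 8-bit integers
# instead of 32-bit floats, shrinking the model to fit an edge device's budget.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m, path="tmp_model.pt"):
    torch.save(m.state_dict(), path)
    mb = os.path.getsize(path) / 1e6
    os.remove(path)
    return mb

print(f"float32 model: {size_mb(model):.2f} MB")
print(f"int8 model:    {size_mb(quantized):.2f} MB")   # roughly four times smaller
```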

Factors Driving Growth in Edge Computing
  • Increasing investment in the advancement and development of AI and ML.
  • Machine learning tools and training sets being made available to third parties, allowing them to develop their own AI-based solutions.
  • The design of silicon chips with neuromorphic computing technology by tech giants; this is coupled with improvements in silicon chip manufacturing to build large SoCs containing several neural network accelerators.
  • The ability for AI to be trained using large volumes of synthetic data, alongside real-world examples to remove bias.
  • The advancements made in probabilistic modelling, wherein computing systems determine the best solution to a problem by taking account of all the uncertain variables (a minimal sketch follows this list).
  • Advancements in machine learning, in which a computer gets better at performing a task based on the data that it receives.
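As a minimal illustration of the probabilistic-modelling point above, the hypothetical sketch below estimates a fault rate from a handful of observations and reports a credible interval rather than a single number, so a decision can weigh the remaining uncertainty.

```python
import numpy as np

observations = 80   # hypothetical sensor readings
faults = 12         # readings that flagged a fault

# Beta-Binomial model: an uninformative Beta(1, 1) prior updated with the data
# gives a full posterior distribution over the underlying fault rate.
alpha, beta = 1 + faults, 1 + (observations - faults)
samples = np.random.default_rng(1).beta(alpha, beta, 100_000)

print(f"estimated fault rate: {samples.mean():.3f}")
print(f"95% credible interval: {np.percentile(samples, [2.5, 97.5]).round(3)}")
# The interval, not just the point estimate, captures the uncertain variables
# the system must account for when choosing the best action.
```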
Examples of AI Edge Computing

To showcase recent examples, think of virtual assistants such as Alexa, Siri or Google Assistant. Currently, parts of the speech processing for these assistants can be run directly on today’s silicon chips. Moreover, Amazon’s new AZ2 neural processor is being integrated into silicon for smart speakers and now handles some of the speech recognition on the device itself. Smartphone processors have been able to do this for a few years now.

However, it’s not yet at a point where everything can be handled on the device. Beyond simple command and control, and now some of the speech processing itself, the intent – what the user requests – is still processed in the cloud, because the assistant must refer to large databases that reside online to find the result.

AI on the edge extends to visual tasks too: smartphones utilize AI for ‘computational photography’, which blends pictures from several lenses into a polished final image. Another example is the use of AI on smart TVs to upscale HD content to 4K by recreating the missing detail.
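Production computational-photography pipelines involve frame alignment, HDR merging and learned enhancement, but the underlying idea of blending several captures can be sketched very simply. In the hypothetical example below, the "frames" are synthetic stand-ins; averaging aligned exposures suppresses random sensor noise by roughly the square root of the number of frames.

```python
import numpy as np

# Eight synthetic, already-aligned frames of a flat grey scene with sensor noise
frames = [
    np.clip(0.5 + np.random.default_rng(i).normal(0, 0.05, (480, 640, 3)), 0, 1)
    for i in range(8)
]

# The simplest form of multi-frame blending: a per-pixel average
blended = np.mean(np.stack(frames), axis=0)

# Spread around the constant grey level is a proxy for noise
print(f"noise in a single frame: {frames[0].std():.4f}")
print(f"noise in the blend:      {blended.std():.4f}")   # roughly 1/sqrt(8) as much
```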

Where the Future is Headed

In order to give devices the capability to handle AI workloads, chips built around neuromorphic computing technology are now being developed, funded and created by tech giants including Intel Corp, Samsung, Apple and IBM. The design of a neuromorphic chip is inspired by the neural networks of the human brain: it works to replicate the connections that are formed when neurons communicate with each other.

Naturally, this means neuromorphic chips are packed with artificial neurons and synapses that mimic the activity spikes occurring in the human brain, allowing them to handle heavy algorithms more intelligently and efficiently. Now an intrinsic part of the AI hardware market, they create new market opportunities by connecting with servers for cloud computing. This will be especially relevant for IoT devices: security systems, smart meters, connected cameras and other connected wireless devices. Industries that will be heavily impacted include automotive, aerospace & defense, home automation and smart cities.

The acceleration of developments being made in AI and machine learning means we’ll soon be living life on the edge. Or at least our devices certainly will.

To find out more about the latest CE tech trends to watch out for in 2022, including analyst insights on January's CES announcements, download our brand new Tech Perspectives here.

For all report enquiries, please contact leon.morris@futuresource-hq.com

About Futuresource

Futuresource Consulting is a market research and consulting company, providing its clients with expertise in Professional AV, Consumer Electronics, Education Technology, Content & Entertainment, Professional Broadcast and Automotive. Combining strong methodologies and unsurpassed data refinement with in-depth market knowledge and forecasting, Futuresource delivers the latest insights and technological developments to drive business decision-making.


About the author

Simon Forrest

As Principal Technology Analyst for Futuresource Consulting, Simon is responsible for identifying and reporting on transformational technologies that have propensity to influence and disrupt market dynamics. A graduate in Computer Science from the University of York, his expertise extends across broadcast television and audio, digital radio, smart home, broadband, Wi-Fi and cellular communication technologies.

He has represented companies across standards groups, including the Audio Engineering Society, DLNA, WorldDAB digital radio, the Digital TV Group (DTG) and Home Gateway Initiative.

Prior to joining Futuresource, Simon held the position of Director of Segment Marketing at Imagination Technologies, promoting development in wireless home audio semiconductors, and Chief Technologist within Pace plc (now Commscope) responsible for technological advancement within the Pay TV industry.
