The Increase in Computer Power is Driving Applied AI

 

The continuous increase in the speed and capacity of CPUs and chips is driving AI models closer to mimicking human reasoning and problem-solving skills. Read more about the applications and advantages of AI for organizations.

Encora’s engineers have kept their fingers on the pulse of the technologies enabling agility in the face of uncertainty as organizations cope with the impact of COVID-19 and move into 2022.  

We spoke with the Delivery Director and Data Science & Engineering Lead for Encora Central & South America, Rodrigo Vargas, about Applied AI. The acceleration of Applied AI is one of ten trends that Encora’s Innovation Leaders expect will help organizations respond to disruption in the new year and beyond.

 

To what do we owe the rise of Applied AI?


When it comes to Applied AI, we are talking about how AI concepts and domains are applied to real-world problems to find real, practical solutions. This is particularly difficult because of the computing power we require for it to run flawlessly and at a speed that mimics human reasoning and problem-solving skills. We’ve seen the rise of AI applications in the last few years because we have also seen a rise in the speed and capacity of the CPUs and chips that power the technology.

The rise in computing power and the availability of AI in general have allowed us to address a broader range of problems than we could tackle before, developing new applications and uses for specific cases like decision-making, automation, and cost savings.

 

How are organizations benefiting from AI today?


How an organization benefits from using AI depends on the organization, but one example is autonomous operations. That means leveraging computing power to free up people’s time for tasks that require cognitive skills computers do not have right now. It’s about doing more with less.

Computers don’t get tired, they don’t need a vacation, and they work 24/7. We can leverage their capacity for problem-solving to engage more with customers, for example, and to provide customer solutions for problems we haven’t seen before.

We can leverage technologies such as Robotic Process Automation (RPA) and computer vision for tasks in the financial industry like default-risk scoring and fraud detection.

Those are the kinds of tasks that were addressed through rules-based programs in the past. Now, by leveraging AI, it’s not that it’s easier, but computer systems are able to recognize fraud that we weren’t able to recognize before, and at a quicker velocity, simply because of scale. There are millions of transactions happening every single minute. Attackers learn and always find a way around the security checks that systems have in place. By leveraging AI, systems can learn by themselves and deploy solutions right away, without human intervention. Without AI, those fixes, updates, and improvements would take weeks or months.
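To make that idea concrete, here is a minimal editorial sketch (not Encora’s production approach) of moving from hand-written rules to learned patterns: an unsupervised anomaly detector flags unusual transactions on its own. The features, thresholds, and simulated data are illustrative assumptions.

```python
# Minimal sketch of ML-based fraud screening via unsupervised anomaly
# detection. Feature names and thresholds are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated transactions: [amount, seconds_since_last_transaction]
normal = rng.normal(loc=[50, 3600], scale=[20, 600], size=(10_000, 2))
suspicious = rng.normal(loc=[900, 30], scale=[100, 10], size=(20, 2))
transactions = np.vstack([normal, suspicious])

# The model learns what "typical" transactions look like and flags
# outliers, rather than relying on hand-written rules.
model = IsolationForest(contamination=0.005, random_state=0)
flags = model.fit_predict(transactions)  # -1 = flagged as anomalous

print(f"Flagged {(flags == -1).sum()} of {len(transactions)} transactions for review")
```

Unlike a fixed rule such as "flag amounts over $500", the learned boundary shifts automatically as it is retrained on new transaction data, which is the self-updating behavior described above.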

 

What direction will Applied AI take in the next few years?


Although Applied AI has gained a lot of traction, it is still being used only for very specific, narrow use cases. For example, you can train an AI model to play chess, but that same model cannot be used to decide how to steer a car or how to recognize street signs.

We are talking about narrow AI, and we identify narrow AI as the current state of technology. We can create AI models to resolve very specific problems but we cannot yet create models that can respond to more general problems like the human brain does. The human brain is versatile, capable of doing many complex things. AI, on the other hand, requires specific models for very specific use cases.

So, where do we see AI going into the future? We’ve seen an increase in the power of AI and an increase in the complexity of models capable of tackling higher-level problems. These AI models are getting better as we feed more data into their training processes. So, what we are expecting to see in the future are more complex neural networks and more complex AI because we will be able to process a lot more data.

 

What are some common misconceptions about Applied AI?


People generally think of AI only when they see robots or self-driving cars. Again, AI is a broad term covering a large range of fields. One of those fields is computer vision: the computer being able to identify what it is looking at and engage with those objects.

Another example is translation. People don’t generally think about AI when they use a translation tool, but under the hood, it’s using AI.

Misconceptions arise when people limit their recognition of AI to robots or chatbots. Their perception is limited to an interaction with another entity, whether it is virtual or whether it is physical. But AI goes beyond that and is being used in a wide variety of fields.

 

What are the different domains of AI?


AI has many fields, but the most common ones are:

Computer vision – Everything related to capturing what the human eye sees and enabling the computer to identify what is being seen. That includes object detection.

Natural Language Processing (NLP) – Anything related to how humans communicate. That includes subfields such as language translation and text summarization, and many use cases related to how we process text, such as the voice-to-text and text-to-voice solutions we see on our smartphones or when performing dictation.

Machine Learning (ML) – A specific subfield whose name is often used interchangeably with the term AI. Machine learning is the use of computers to perform tasks such as classification: identifying fraud in financial systems, classifying objects, or classifying people’s behaviors.

Deep Learning – Deep learning takes machine learning one step further, using more data and more complex models to perform the task at hand.

There are more fields, but these are the big four.

 

Can you tell us about neural networks?


Neural networks try to mimic how human neurons work. We can say it’s a mathematical data structure that tries to learn based on the stimuli it receives. It’s a set of different artificial neurons that are laid out in such a way that, when stimulated by some input data, they compensate internally to produce a specific result. There are different types of neural networks, different layouts depending on the problem that you need to resolve, and it’s a data structure that is used to produce a specific output from input data.


The good thing about neural networks is that they learn. When you create a neural network and it’s “empty” or “untrained”, we cannot expect accurate results. So, we need to train that neural network. That means, internally, those networks will change through time based on the input data and they will learn to produce the results that we expect. That’s why there are very specific processes to train them and we need a lot of data to train them. And when I say a lot, it’s a lot. We are talking about millions of records. We cannot expect accurate results if we just give a neural network 10 or 100 pieces of input data. It needs to work through many cycles to learn how to produce the results that we expect.
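As an editorial illustration of that training loop, here is a minimal sketch: a tiny two-layer network in plain NumPy that starts "untrained" and adjusts its internal weights over many cycles until it reproduces the XOR truth table. The layer sizes, learning rate, and step count are arbitrary assumptions; real models have vastly more parameters and need far more data.

```python
# Minimal sketch of the "train until it learns" idea: a two-layer
# neural network fitted to XOR with plain NumPy backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR truth table (inputs X, expected outputs y).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized ("untrained") weights: results start out poor.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: stimulate the network with input data.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: weights "compensate internally" to reduce error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(out, 2))  # should approach [0, 1, 1, 0] after training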

 

What does it mean for AI to be responsible?


AI is a great tool for accomplishing many tasks, but there are concerns around how AI can replace human reasoning. There are a few challenges we need to address. Generally, when creating and training models, we need to make sure that the data we feed those models is free of the biases that would produce unexpected results.

Let’s take, for example, a neural network or a machine learning model that decides whether you get credit or a loan. Your age, background, birthplace—all that information can be used to train those models. But what if some bias is introduced in that training data? Let’s say the model is trained using data only from white males living in certain areas. The model might then discard qualified applicants simply because it was trained in a way that excludes those people. That is no good for AI, and it means there are a lot of things to consider when creating those models.
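One basic, illustrative way to start catching this kind of bias is to compare a model’s decisions across groups on held-out data. The sketch below is an assumption about how such an audit might begin; the column names and the 80% screening threshold (the "four-fifths rule" heuristic used in US disparate-impact screening) do not come from the interview.

```python
# Minimal sketch of a fairness spot-check: compare approval rates
# across demographic groups in a model's held-out decisions.
import pandas as pd

# Hypothetical model decisions on held-out applicants.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

rates = applicants.groupby("group")["approved"].mean()
print(rates)

# Flag any group whose approval rate falls below 80% of the
# best-treated group's rate (the "four-fifths rule" heuristic).
if (rates / rates.max() < 0.8).any():
    print("Warning: possible disparate impact; audit the training data.")
```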

When we talk about responsible AI, we’re talking about making sure that the models are not biased, they’re properly trained, and they can produce results based on the task at hand and not considering data that should not be considered.

 

Who is responsible for AI mistakes?


There are ongoing discussions about this because, generally, tech moves faster than legislation. Some would argue that the responsibility falls on the people who trained the model, but others would say it falls on those who deployed and used it.

There’s no easy answer to that question. There needs to be a common understanding and a common definition of how to validate and confirm that AI models conform to specific criteria and principles. The thing is that there’s no consensus on how to confirm that a certain model follows those principles. Again, technology has evolved rapidly and legislation has fallen behind. Who’s responsible for AI right now? That could depend on who is using it. If I deploy an AI model, I’m responsible for what that model does, but at the end of the day, where did I get that model? How it was trained should also be considered.

There are also malicious people working on ways to get around the AI models being used. For example, street signs can be specifically designed to make autonomous vehicles malfunction and cause accidents. So, who’s responsible for an accident there? Is it the company that sold the car, whether or not it created the model?

Those are the kinds of conversations currently happening on a global scale. There are principles being applied but at the end of the day, it’s the people that create the models who are the ones responsible for applying those principles. But there’s no governing body making sure that they are applying those principles.

 

How can organizations protect their AI from malicious actors?


Cybersecurity is also a broad term. The difference with AI is that the data itself can also pose a liability. So, how can companies protect it? First, the basics: making sure that all aspects of cybersecurity are in place within the organization, like access to data and access to systems. To go a step further, deploy validation and verification steps for the models being created so that they are properly tested and verified before being used.

Organizations have to try to stay one step ahead of attackers. Cybersecurity can be underestimated and organizations must make sure that there’s a proper organizational approach to cybersecurity in place that includes everything related to AI. There needs to be a body within the organization, a department, specialist, or a group of people that actively work on making sure that all processes and systems are protected. That’s obviously just one step to approaching this.

 

What role will Federated Learning play in developing larger and more accurate AI models?


As I shared, in most cases, larger datasets of appropriate quality lead to better-performing AI models. In the FinTech and HealthTech verticals, for example, larger datasets can lead to improved fraud detection and health diagnosis, respectively. However, building large, consolidated datasets may not be feasible in all cases. Privacy concerns, regulatory frameworks, and competitive pressures may prevent smaller datasets from being combined into larger ones. This is where federated learning techniques come into play. These techniques do not attempt to bring all data together into a consolidated (aggregated) dataset; instead, local datasets are used to train local AI models, a central server aggregates (averages) those models into a new global model, and that model is transmitted back to the local entities/clients. So, these techniques address the concerns I just mentioned while simultaneously addressing the need for more data when building robust AI models.
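Here is a minimal sketch of that train-locally, average-centrally loop, under simplified assumptions (a linear model, three simulated clients, plain gradient descent). Only model weights travel to the server; the raw data never leaves each client.

```python
# Minimal sketch of federated averaging (FedAvg) on a linear model.
import numpy as np

rng = np.random.default_rng(1)

def local_train(weights, X, y, lr=0.1, steps=50):
    """One client's local update: gradient descent on squared error."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients with private datasets from the same underlying relation.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

global_w = np.zeros(2)
for round_ in range(10):
    # Each client trains on its own data; the server sees only weights.
    local_ws = [local_train(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)  # server-side averaging

print(np.round(global_w, 2))  # approaches [2.0, -1.0] without pooling data
```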

 

How will AIOps enable companies to improve speed, security, and accuracy when deploying AI models?


While AIOps is not without its challenges, there has been tremendous progress over the past couple of years. In environments that support a large number of AI models, some degree of maturity with AIOps is almost a necessity. Without AIOps, operational confidence in AI models can suffer and complexity can skyrocket. AIOps practices and technologies are critical to ensuring that models perform as intended, that traceability is maintained, and that vulnerabilities are identified either ahead of time or as they arise.
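As one small, hypothetical example of what those practices look like in code (not a specific AIOps product), the sketch below checks whether a feature’s live distribution has drifted from its training-time baseline, one of the routine monitoring tasks AIOps automates. The data and alert threshold are assumptions.

```python
# Minimal sketch of a model-monitoring drift check: compare a feature's
# training-time baseline against live traffic with a two-sample KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # at training time
live = rng.normal(loc=0.4, scale=1.0, size=5_000)      # in production

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    # A real pipeline would alert on-call staff or trigger retraining,
    # logging the model version for traceability.
    print(f"Drift detected (KS statistic {stat:.3f}); investigate or retrain.")
```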

 

How will companies leverage Cognitive Services to drive deeper integration of AI into their workflows?


Cognitive services, such as those for speech, language, vision, or decision-making, allow organizations to leverage pretrained models and deploy AI into their workflows using APIs. The pretrained models are customizable. These services can be used to construct task-specific AI models and can be combined with business rules to create AI solutions. Virtual assistants and chatbots are great examples of how organizations use speech and language cognitive services. Defect detection is a great example of how manufacturing companies use vision cognitive services in their workflows to improve outcomes and speed while lowering costs.
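To illustrate the "deploy via APIs" point, here is a minimal sketch of calling a vision service over REST and combining its answer with a business rule. The endpoint, headers, and response shape are hypothetical placeholders; a real provider’s API reference defines the actual contract.

```python
# Minimal sketch of calling a cognitive service via REST and applying a
# business rule to its output. Endpoint, key, and response fields are
# hypothetical placeholders, not a real provider's API.
import requests

ENDPOINT = "https://example-cognitive-service.invalid/vision/v1/analyze"
API_KEY = "YOUR-API-KEY"  # hypothetical credential

with open("defect_photo.jpg", "rb") as f:
    response = requests.post(
        ENDPOINT,
        headers={"Api-Key": API_KEY, "Content-Type": "application/octet-stream"},
        data=f.read(),
        timeout=10,
    )
response.raise_for_status()

# Business rule on top of the pretrained model's output.
# Assumed response shape: {"defect": bool, "confidence": float}
result = response.json()
if result.get("defect") and result.get("confidence", 0) > 0.9:
    print("Part rejected: defect detected")
```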

 

What is the first thing that a company needs to do to implement Applied AI?


There are two things that come to mind, and they depend on whether or not an organization already possesses AI skills. First, let’s assume it does not. The first thing an organization would need to do is start thinking about data. You can bring in a seasoned AI engineer, but you won’t be able to accomplish much if you don’t have enough data of reasonable quality to work with. It doesn’t matter whether the data is structured or whether you have a database. What matters is that you have enough data with which to begin experimenting.

If you already have the data, you need to find someone pragmatic enough to do something with it in a timely way. Because AI is based on experimentation and a probabilistic approach, you will most likely need to learn how to work in a way that is different from what you’re used to.

In software engineering, we like to be deterministic and we like to have a plan. But with AI, you don’t know what you’re going to find, especially if you haven’t worked with it in the past. If you bring someone in that has experience with AI, you will likely see results faster than if you implement AI with someone you choose to train internally.

Building AI capabilities is not only about AI skills. It’s about a mindset and culture shift in how the organization thinks about data and how it expects outputs. You need to be aware that some of the outputs might not be a hundred percent accurate, and you may need to start at fifty or sixty percent accuracy. But it will get better as the experimentation approach evolves, data pipelines are created, data is ingested, and the model is trained and improves.

 

How can Encora help clients through this evolution?


Encora can help organizations deploy AI capabilities because we have extensive experience dealing with both scenarios. We have depth and breadth of knowledge in how to work with data, gather data, process it, and make it available in a way that is usable for creating, training, and deploying AI models. We have the skillset and that means organizations can advance quicker because they won’t need to create those skillsets in-house.

We can also help organizations develop those skillsets without starting from scratch in deploying AI. There is a lot of know-how we have accrued over the years from working with different technologies. Based on our experience, we can make faster decisions about which technologies or cloud vendors to use. For example, if we are deploying a virtual voice assistant, the technology you use will determine whether the voice customers hear sounds robotic or more natural. If an organization has a very specific problem, we may have already worked on that problem and can set the proper expectations so that investments are made in a way that maximizes the return on investment.

 



A special thanks to the Delivery Director and Data Science & Engineering Lead from Encora, CSA, Rodrigo Vargas, for taking the time to talk to us about the rising Applied AI trend. 

To read more interviews, visit Encora’s 2022 Technology Trends.  

“In the future, we are expecting to see more complex neural networks and more complex AI because we will be able to process much more data than we can process now.” -Rodrigo Vargas

 

 

About Encora


Encora is a digital engineering services company specializing in next-generation software and digital product development. Fast-Growing Tech organizations trust Encora to lead the full Product Development Lifecycle because of our expertise in translating our clients’ strategic innovation roadmap into differentiated capabilities and accelerated bottom-line impacts.

Please let us know if you would ever like to have a conversation with a client partner and/or one of our Innovation Leaders about accelerating next-generation product engineering within your organization.

 

Contact us

 
