
The Role of AI in Decision-Making: Smarter Insights or Faster Mistakes?

AI is revolutionizing decision-making in business. It’s never been easier to analyze vast amounts of data, to uncover new consumer trends and patterns, to draft reports and even to investigate new products. All of this is obviously an enormous advantage when you’re a strategic insights professional looking for answers. 

But these new tools, and the tasks they can perform, bring along with them some very important questions: when it comes to decision-making, is faster actually better? And what’s the role of the human being in all of this?

Recently we’ve been exploring these very questions. In our webinar, Rethinking Enterprise Intelligence with GenAI, we looked at how AI is transforming business and what it actually means for enterprise insights professionals. You can access the recording here.

But first, let’s go through what we’ve already learned about using AI in the context of decision-making. (Hint: faster isn’t always better, and humans still matter.)


AI is really good at data processing and pattern recognition

AI tools do a great job of assembling and synthesizing large data sets, and they’re extremely useful when it comes to helping researchers analyze them. With the right prompts, generative AI tools can surface insights users might have missed on their own, and they can do it in minutes.

These tools and techniques are especially valuable with market research. Here are just a few ways researchers are already getting the most out of AI:

Data collection and aggregation. AI tools bring together data from multiple sources, doing away with the time-consuming task of collecting it all manually.

Sentiment analysis. AI has become adept at analyzing and summarizing customer sentiment expressed in social media, product reviews, and other forms of feedback.

Consumer insights. AI can segment audiences by behavior, demographics, and a vast array of preferences, helping businesses tailor their market strategies. It can also help with trend analysis by identifying market shifts and buying patterns.

Competitive intelligence. AI can quickly gather data on competitors, offering insights into the new products they may be planning and their pricing strategies.

The real power of AI tools lies in collecting all the information above and presenting it in digestible ways, allowing research teams to focus on decision-making. For example, it’s no longer necessary to have the intern summarize a lengthy report and pull out the numbers; AI can do it for you. It’s an incredible time-saver, freeing up researchers to focus on more strategic tasks.
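To make the sentiment-analysis item above concrete, here’s a deliberately tiny, lexicon-based sketch in Python. The word lists and scoring rule are invented for illustration only; the AI tools discussed here use far more sophisticated language models, but the basic task (turning raw feedback into a sentiment signal) is the same:

```python
# Toy lexicon-based sentiment scorer: a miniature version of the
# "analyze customer feedback" task that AI tools automate at scale.
# The word lists below are invented for illustration.
POSITIVE = {"love", "great", "excellent", "helpful", "fast"}
NEGATIVE = {"broken", "slow", "disappointing", "hate", "confusing"}

def sentiment_score(review: str) -> int:
    """Return a crude score: positive word count minus negative word count."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

reviews = [
    "Love the new app, fast and helpful",
    "Checkout flow is slow and confusing",
]
for r in reviews:
    print(r, "->", sentiment_score(r))  # positive reviews score > 0
```

A real pipeline would replace the hand-built lexicon with a trained model that handles negation, context, and (as discussed below) sarcasm, but even this sketch shows why the approach scales: once scoring is automated, thousands of reviews can be summarized as easily as two.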


The risks: bad sources, data bias, and nuance

Given all the possibilities, you’d think that AI would be a no-brainer in the context of research and decision-making for enterprises. Unfortunately, that’s not quite the situation.

There are several ways AI can generate unhelpful or even false responses, but here are three of the most common: 

Outdated or incorrect sources. As you may know, Large Language Models (LLMs) were developed by hoovering up vast quantities of data from the Internet, and that’s what they draw upon when they respond to our prompts. And as you also know, the Internet has a lot of unverified and outdated information, which can lead to misleading or even blatantly incorrect responses.

Mickel Grönroos leads AI product development for us at Stravito. As he puts it, “If you serve a Generative AI model with disinformation, it will amplify it. That is a real problem.”

Data bias. As the above suggests, LLMs are only as good as the data they were “trained” on. And if a generative AI tool is drawing on biased information, it will build its responses based on that. 

For instance, researchers from Apple and MIT found that “LLMs are 3-6 times more likely to choose an occupation that stereotypically aligns with a person's gender,” among other problematic issues related to gender bias.

Detecting nuance. AI is improving at detecting tonal subtleties like irony and sarcasm. But it’s still not foolproof: there are numerous recent examples of LLMs mistaking sarcasm for sincerity on social media. Similarly, AI can miss emotional cues and cultural nuances.

Of course, we wouldn’t be talking about AI if we didn’t believe the benefits outweighed the risks. It’s just a question of how to mitigate these issues to effectively use AI for research and the decision-making process.


The human factor in decision-making: AI alone isn’t enough

Let’s return to the original questions: is faster actually better in the decision-making process? And what’s the role of human beings when we bring AI into the process?

In terms of working faster, AI is great at summarization, pattern recognition, and information retrieval. It can efficiently assemble the information around a question, and it can quickly uncover novel ways of thinking about a problem. And the speed with which it can perform these tasks makes it a valuable aid to the decision-making process. 

But humans are essential to framing the questions in the first place and validating the responses. And we’re better at understanding context and nuance. 

There are still plenty of people claiming that AI will replace humans in the decision-making process. But it’s more helpful, and more realistic, to see AI as a collaborative tool that automates tedious tasks.

So faster is better, but only in some respects. And ultimately, people have to make the higher-level choices when it comes to decision-making and strategizing.


Practical tips for smarter AI-powered decisions

  • Quality over quantity. Use AI to speed up the process, not to cut corners or generate a flood of unhelpful responses.
  • Verify. Check the sources of the data to ensure they are reliable and that the data is up to date. 
  • Establish clear internal guidelines. Your company should have its own checklist or policy document to ensure ethical and secure use of AI.
  • Promote AI literacy. Even the most brilliant and experienced researchers may need training in AI best practices. As AI evolves rapidly and new tools come to market, it’s a good idea to keep your skills sharp.

In short, the key is to use AI to support the humans doing the decision-making and strategizing. For instance, it’s remarkably good at reducing cognitive load, the exhaustion that comes from handling too much information in too little time.

Mickel Grönroos likens AI to an intern: “You put it on the wrong path, and it will just run into the jungle.” It’s up to the people involved to keep the research, and the decisions that follow, on the right path.

If you’re interested in learning how Stravito Assistant can keep your research, strategic insights, and decision-making process on the right path, we have a few suggestions:


Explore how Stravito supports smarter decisions