Media framing is shaping our opinions before we realise it

Our opinions feel personal, but they are often shaped by repetition and framing long before the debate begins, writes Professor Niusha Shafiabady.
We like to think that our opinions are the product of careful thought. We read, weigh the arguments, and decide where we stand. But in reality, most of our views were formed much earlier and much more quietly. Long before we begin to debate, the language we encounter and the stories we see over and over again have already shaped what is considered normal, controversial, and even worthy of debate.
Commercial surrogacy is a clear example of how this happens. It is a complex issue that touches on morality, law, medicine, family and money. Public debate on it is often heated, but the backdrop against which that debate takes place is rarely neutral. Some narratives are repeated frequently in the media, while others remain marginal. Over time, these repeated frames shape how the subject is understood before most people consciously form an opinion.
Artificial intelligence makes this process easier to see. By analyzing large volumes of news coverage at once, AI can uncover patterns that individual readers miss. Rather than asking whether a particular article is fair or biased, this approach looks at what is repeated across the news as a whole. The result is a clearer picture of how public understanding is created through repetition.
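The repetition-counting idea can be sketched in a few lines. The frame lexicon, cue words, and headlines below are invented for illustration only; a real analysis would use far richer language models and a curated corpus.

```python
from collections import Counter

# Hypothetical frame lexicon: each frame is a set of cue words.
# Both the frames and the sample headlines are illustrative, not real data.
FRAMES = {
    "commercial": {"industry", "market", "payment", "cost"},
    "ethical": {"exploitation", "rights", "consent", "dignity"},
    "medical": {"clinic", "fertility", "procedure", "health"},
}

articles = [
    "surrogacy industry grows as payment rules loosen",
    "fertility clinic opens new surrogacy procedure wing",
    "surrogacy market expands despite rising cost",
]

def frame_counts(texts):
    """Count how often each frame's cue words appear across a corpus."""
    counts = Counter()
    for text in texts:
        words = set(text.lower().split())
        for frame, cues in FRAMES.items():
            counts[frame] += len(words & cues)
    return counts

print(frame_counts(articles))
```

Even this toy version shows the point: across the three headlines the "commercial" frame dominates and the "ethical" frame never appears, so a reader of this corpus would meet the subject mainly as a market story.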
It’s not about individual journalists getting things wrong. Repetition creates familiarity, and familiarity creates boundaries. While certain ways of talking about commercial surrogacy are taken for granted, alternative perspectives seem unusual or extreme because they occur less frequently. The discussion unintentionally narrows in this way.
This has consequences. Media framing doesn’t tell people what to think, but it powerfully shapes what they think about. By the time ethical or political arguments are presented, the reader is already primed to see some concerns as central and others as secondary. The range of “reasonable” positions has already been determined.
The conclusion here is not about surrogacy itself. This is about how public debate works. When framing is not examined, discussions become polarized, repetitive, and stuck. People argue passionately while sharing many of the same unspoken assumptions they have inherited from the narratives they have absorbed.
Artificial intelligence does not solve this problem, but it helps reveal it. Used carefully, AI can act as a mirror, showing us how our collective conversation is shaped over time. It does not decide what is right or true, but it makes visible the forces that silently influence what we accept as our own views.
Recognizing this impact does not undermine individual agency. On the contrary, it is a prerequisite for a meaningful judgment. If we want public debate to be thoughtful rather than reactive, we need to pay attention not only to the arguments but also to the frameworks that make some arguments natural and others almost unthinkable.
Our opinions feel personal. Most of the time, they are something else.
Professor Niusha Shafiabady is an internationally recognized expert in Computational Intelligence and in Artificial Intelligence for Social Good, and an advocate for women in AI. She is the inventor of a computational optimization algorithm and the developer of the predictive analysis tool Ai-Labz.