


Conclusion: Group evaluation of Vox
Vox's coverage spans a wide variety of articles. Each of the articles we analyzed covered a completely different topic, ranging from veganism to homicides. This is in line with what Vox aims for: "(…) Vox candidly shepherds audiences through politics and policy, business and pop culture, food, science and everything else that matters." (Vox, 2018). Vox clearly aims to make the news understandable and, in doing so, wants to be as transparent and truthful as possible.

The Vox authors often refer to their sources right away, which we considered a positive aspect. This not only provides openness about how they acquired their information but is also practical for obtaining additional information. Yet these sources were not always primary sources but secondary or even tertiary ones, and in various cases the validity of the referenced articles was questionable, leaving room for improvement. Obviously, this only applies to the articles that were analyzed, so no generalizations can be made. It did, however, give an impression of limited investigative journalism. It is of course unknown whether the authors themselves checked the primary sources even though they did not refer to them, but they should certainly aim for objective and accurate reporting.

Articles were extensive and therefore quite long. This was done to make sure Vox covered each topic thoroughly and provided additional details for readers, all of which contributed to readers' understanding of the articles. This matches Vox's stated goal: "By making complex topics easier to understand (…)" (Vox, 2018).

Notably, in several of the analyzed articles the author seemed to use an agenda-setting style of writing. The authors did not always provide all possible angles on the topics discussed, which may have resulted in biased views due to framing.

While checking the articles, some minor errors were found. Most of these appeared to be cases of (unintentional) misinformation rather than disinformation. In other cases, numbers were rounded or left out, possibly to make the story read more fluently. These errors did not appear to have a significant effect on the overall point of a story, nor did they seem to affect the reader's interpretation. Thus, the overall result points to the articles being quite trustworthy.

Contrary to the written articles, which largely proved to be valid, different results were found for the video that was checked. Although the general scope of the video was valid, the specifics were sometimes far off. The viewer was thereby subtly lured into the agenda of the video makers, which is a cause for concern. Since this analysis was limited to a single video, it would be interesting to investigate whether this holds only for this one case.

To conclude, Vox is generally a reliable news source for information about a range of topics, as its articles are well composed. Still, readers should always interpret its facts, figures, and statements with caution.



I've got to admit: I never realized the problem was as big as it is. With the amount of content that is produced every single day, and the high velocity at which some of it spreads across social media, we need eye-catching headlines to attract attention. To generate these 'clickbait' headlines, we have found ourselves a perfect partner: data visualizations. All it takes nowadays is a single visualization of data from a less-than-reputable source. You release the beast into the wild (read: share the visual with a list of followers), and the world is polluted with a false narrative. The most disturbing thing is that the data doesn't even have to be bad. The only thing you have to do is present it in a misleading way. Easy as pie!

A quick Google search provides you with a Wikipedia page, a Reddit community, and hundreds, if not thousands, of articles about how graphs can be used to misinform you and me. What is hard to grasp is that the study of how graphs, charts, maps, and diagrams can be used to deceive has remained within the boundaries of academic circles in statistics, cartography, and computer science (Bihanic, 2015). Visual journalists and information graphics designers need to be part of this debate too. We need good data visualizations, as it is easier for the brain to comprehend an image than words or numbers (Cukier, 2010).

Of course, I do understand that creating a high-quality visual is a profession in its own right. The creators might not be trying to actively deceive you; the "misuse of graphical material" might have been completely unintentional. Still, under the professional ethics codes of journalism and graphic design, knowing the truth and hiding it, or conveying it in a way that distorts it, is outright inadmissible (Bihanic, 2015).

The Enliven Project created the infographic shown below. It aims to provide insight into the relatively small number of false rape accusations. But by assuming 'one rape per rapist' and representing each rape as a single 'man' symbol, the chart damages its own cause: in fact, a rapist has an average of up to six victims. You could also argue that the number of unreported rapes is overestimated. No wonder this data visualization went viral. Sarah Beaulieu dedicated a whole page on her website to elaborating on the matter.

So far, so good. As the example shows, unintentional misleading visualizations do exist, but what about intentional misleading visualizations? As I’ve mentioned before, the internet is full of articles and videos that teach you how to defend yourself against misleading statistics. But why do we need those?

Graphs can be fundamentally misleading about the underlying data, and design choices can skew viewers' perceptions, leading them toward incorrect conclusions (Jones, 2006). Take, for example, the results of a study by Beattie and Jones (2002). They indicate that sub-optimal slope parameters may produce distorted judgments of corporate financial performance by users. In particular, the researchers found that financial graphs with large slope parameters are likely to be perceived as portraying stronger growth than those with small slope parameters. In other words, with a small, almost unnoticeable tweak, you can convey a totally different meaning.

The lie factor
The "Lie Factor" is a value that describes the relation between the size of the effect shown in a graphic and the size of the effect in the underlying data. The closer this value is to 1, the better the visualization represents the actual effect. Conversely, when the lie factor is greater than 1, the effect in the visualization is exaggerated.

Have a look at the image below from the New South Wales Ministry of Health, which shows the increase in the number of nurses from 2008 to March 2013. One could argue that a ministry is a trustworthy institution, right? Do you also notice something weird going on?

At first glance, the image portrays huge growth in the recruitment of nurses. When you take a closer look, you notice that four stick figures represent 43,000 nurses, while 28 stick figures stand in for an additional 3,000 nurses. This makes a 7 percent increase look like a 700 percent increase. I won't bore you with the calculations, but it is worth mentioning that the lie factor for this visualization is a staggering 95.9!
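The lie factor for this chart can be sketched in a few lines of Python. The numbers below are assumptions taken from the description above (43,000 nurses drawn as 4 stick figures, 3,000 more drawn as 28 extra figures); counting symbols this way gives a lie factor of roughly 100, in the same ballpark as the 95.9 quoted above, which presumably came from measuring the printed graphic rather than counting symbols.

```python
def effect_size(before: float, after: float) -> float:
    """Relative change from `before` to `after`."""
    return (after - before) / before

def lie_factor(data_before: float, data_after: float,
               graphic_before: float, graphic_after: float) -> float:
    """Lie factor: effect shown in the graphic divided by effect in the data."""
    return effect_size(graphic_before, graphic_after) / effect_size(data_before, data_after)

# Data: 43,000 -> 46,000 nurses, a ~7% increase.
# Graphic: 4 -> 32 stick figures, a 700% increase.
print(round(lie_factor(43_000, 46_000, 4, 32), 1))  # ≈ 100.3
```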

It should be clear to you by now that it is actually quite easy to create a misleading visualization. Even 'trustworthy' institutions make use of these shady practices. And here I was, wondering whether we need advice on how to defend ourselves against misleading visualizations… So let me give you a few recommendations to stay sharp when you're presented with graphs, charts, maps, or diagrams.

Make sure you check the y-axis of a graph, as manipulating it is one of the most common ways data is distorted in visuals: it makes an insignificant difference seem like quite a large one. This is called a 'truncated graph'. One of the first things to be removed is the baseline, or the start of the y-axis, as can be seen in the example, to fool you into thinking February experienced a drastic increase in conversion rate.
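To see how much a truncated baseline can exaggerate a difference, here is a minimal sketch; the January/February conversion rates are made up for illustration.

```python
def apparent_ratio(first: float, second: float, baseline: float) -> float:
    """How many times taller the second bar looks when the y-axis starts at `baseline`."""
    return (second - baseline) / (first - baseline)

# Hypothetical conversion rates: January 10.0%, February 10.5%.
print(apparent_ratio(10.0, 10.5, 0.0))  # honest axis starting at 0: bar is 1.05x taller
print(apparent_ratio(10.0, 10.5, 9.9))  # axis truncated at 9.9: bar looks 6x taller
```

The underlying change is the same 0.5 percentage points in both cases; only the baseline moved.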

Be prepared for blown-out scales of a graph that minimize or maximize a change. This phenomenon is called 'axis changing' and is almost the opposite of truncating data: axes and baselines are included, but they are stretched so much that they lose meaning. Look at the next example about climate change. Why are temperatures from -10 degrees up to 110 degrees included? Of course, to make the line as flat as possible and make you believe climate change is not real. If you have a look at the 'fixed' graph next to it, you get a better understanding of what is actually happening.
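The flattening effect of a blown-out axis comes down to simple arithmetic: the visual slope depends on how much of the axis range the change occupies. A small sketch, with a hypothetical 1.5-degree rise and axis ranges picked for illustration:

```python
def fraction_of_plot_height(change: float, axis_min: float, axis_max: float) -> float:
    """Share of the vertical plot area that a given change occupies."""
    return change / (axis_max - axis_min)

# A 1.5-degree rise on a -10..110 axis fills barely 1% of the plot: the line looks flat.
print(fraction_of_plot_height(1.5, -10, 110))  # 0.0125
# The same rise on a 55..58 axis fills half the plot: the trend is unmistakable.
print(fraction_of_plot_height(1.5, 55, 58))    # 0.5
```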

Be aware of visuals that mislead you through skewed data, where only the parts that shed a positive light on the author's viewpoint are included. Take a look at the two examples below. The left graph appears to cover a long period, as many points in time are included; in reality, it spans only 10 years. You might think that the UK national debt is at an all-time high. However, when you analyze the right example, you know you're being fooled!

Watch out for the use of the wrong graph type, as it can create a misleading data visualization. Microsoft most probably made this mistake on purpose, to give us the feeling that Microsoft Edge is almost 50% faster than Firefox and 25% faster than Chrome. When you check the right graph, you notice that in reality the difference in browser speed is only marginal.

Don't be surprised to see misleading graphs and charts that go against long-held conventions or associations. Think of using green for losses and red for profits: it would make no sense to anyone. In the example below, a light color is used for high levels and a dark color for low levels. With some common sense, you would flip this color scheme around.

I hope you were taught at school that correlation doesn't imply causation. Nevertheless, with all the clickbait headlines you've read over the last couple of years, it might have slipped your mind. The next graph might be an obvious one, but still: you've been warned!

The internet is bloated with examples of misleading data visualization. The recommendations you have just read might actually come in handy one day. Don't you think it is frustrating that we have to teach ourselves how to guard against misleading data visualizations? Shouldn't we expect professionals and institutions to be trustworthy instead of spreading misleading narratives? Frustratingly, we can't expect this misconduct never to happen again. I suggest that we start teaching kids in high school, right away, how to defend themselves against future misuse of data visualization. What do you think? Or can you think of a better solution?

Beattie, V., & Jones, M. J. (2002). The impact of graph slope on rate of change judgments in corporate reports. Abacus, 38(2), 177–199.

Bihanic, D. (Ed.). (2015). New challenges for data design.

Cukier, K. (2010). A special report on managing information. The Economist, 394(8671), 3–18.

Jones, G. E. (2006). How to lie with charts (2nd ed.). Santa Monica, CA: LaPuerta.



Climate change is a 'hot' topic at the moment. Or should we call it global warming? Over the last decade, a gradual shift in preferred terminology from 'global warming' to 'climate change' took place, because that is what the scientific community and governmental institutions called for. It is argued that 'global warming' frames the situation around the temperature rise much more negatively than the friendlier term 'climate change'.

What is actually the correct term? Both are used interchangeably, and the media commonly present them as if they had the same meaning. Take the BBC (2017) for example:

‘Scientists believe we are adding to the natural greenhouse effect with gases released from industry and agriculture (known as emissions), trapping more energy and increasing the temperature. This is commonly referred to as global warming or climate change.’ 

Nevertheless, global warming and climate change are substantially different. NASA explains that global warming refers to the upward temperature trend around the world since the early 20th century, a result of the increase in fossil fuel emissions since the industrial revolution. Climate change, on the other hand, refers to a broad range of global phenomena caused predominantly by burning fossil fuels, which adds heat-trapping gases to Earth's atmosphere. It includes the upward temperature trend described by global warming, but also encompasses changes such as sea level rise, ice mass loss around the world, shifts in flower and plant blooming, and extreme weather events (NASA, 2018).

First of all, it is important to have a clear understanding of framing.

“Framing effects refer to behavioral or attitudinal outcomes that are not due to differences in what is being communicated, but rather to variations in how a given piece of information is being presented (or framed) in public discourse.” (Scheufele & Iyengar, 2014)

Imagine you are presented with either one of two statements that both use a different frame about climate change. Reading the presented statement, you might not even notice you’re being framed. It all happens right under your nose.

Over the last couple of years, the challenge/war/race against climate change (each word frames the situation differently) has been heavily discussed. Several studies have been conducted to find out how framing influences people in regard to climate change.

Read the following two statements about climate change carefully, from the research of Spence and Pidgeon (2010):

Statement 1:

‘‘By mitigating climate change, we can prevent further increases in winter floods in maritime regions and flash floods throughout Europe.’’

Statement 2:

‘‘Without mitigating climate change, we will see further increases in winter floods in maritime regions and flash floods throughout Europe.’’

Can you decipher which statement is framed through a ‘gain frame’ and which one through a ‘loss frame’? It might be obvious that statement 1 used a gain frame, while statement 2 was created with the help of a loss frame. But would you have noticed you’re being framed if presented with just one of the two statements?

Participants in the study of Spence and Pidgeon (2010) who had been presented with a gain frame rated the information as significantly more positive. Furthermore, fear-related emotions were significantly higher in the loss-framed conditions. Additionally, Morton, Rabinovich, Marshall, & Bretschneider (2010) found that higher uncertainty combined with a negative frame (highlighting possible losses) decreased individuals' intentions to behave environmentally. However, when higher uncertainty was combined with a positive frame (highlighting the possibility of losses not materializing), it produced stronger intentions to act.

Inspect the following two images carefully, from the research of Spence and Pidgeon (2010):

Fig. 1. The same area pictured in a local frame and in a distant frame.

As you might have noticed, image 1 makes use of the local frame, while image 2 pictures the area in a distant frame. The results of the Spence and Pidgeon (2010) study showed that individuals who had been presented with information relating to their local area rated the information as more personally relevant. Wiest, Raymond, & Clawson (2015) confirm that local framing of climate impacts is associated with greater perceived severity of the threat, stronger behavioral intentions to address the problem, and more support for policy action among the public.

Studies examining personal experiences of climate change-related events highlight the potential to encourage climate action by framing it as happening now, in your neighborhood, and affecting people like you; that is, as psychologically close. By way of contrast, other studies present a more nuanced picture in which psychological proximity does not always lead to more concern about, or action on, climate change (McDonald, Chai, & Newell, 2015).

The aforementioned statements use equivalence framing, which rests on the following two assumptions, as described in the study of Valenzuela, Piña and Ramírez (2017):

  • The first assumption is that framing refers to differential modes of presentation for the exact same piece of information. As a result, the presented information is informationally equivalent across different frames. Statement 1 and 2 contain the exact same piece of information.
  • Participants interpret the stimulus in line with the context in which it is framed in the particular experimental condition but have no reason to assume that it could also be seen differently if framed in an alternative way. Picture yourself being presented with only one of the two aforementioned statements. You won’t assume it can be seen in a different way because of framing.

It is safe to say by now that framing influences how we interpret information presented about climate change. This means that, through framing, we can actively steer the receivers of information to think and act in specific ways. For example, combining high uncertainty with a positive frame (losses not materializing) increases intentions to act against climate change.

With climate change increasingly manifesting itself, we have to act together. We need to get people moving; it needs to be an act of mankind. Advocates of climate action have to make use of these different types of framing to change people's attitudes and actions. Or do you think using these frames is actually deceptive and unethical? Or is it allowed when we're talking about something as big as climate change?


McDonald, R. I., Chai, H. Y., & Newell, B. R. (2015). Personal experience and the “psychological distance” of climate change: An integrative review. Journal of Environmental Psychology, 44, 109–118.

Morton, T. A., Rabinovich, A., Marshall, D., & Bretschneider, P. (2010). The future that may (or may not) come: How framing changes responses to uncertainty in climate change communications. Global Environmental Change, 21(1), 103–109.

Scheufele, D. A., & Iyengar, S. (2014). The State of Framing Research, 1(July).

Spence, A., & Pidgeon, N. (2010). Framing and communicating climate change: The effects of distance and outcome frame manipulations. Global Environmental Change, 20(4), 656–667.

Valenzuela, S., Piña, M., & Ramírez, J. (2017). Behavioral Effects of Framing on Social Media Users: How Conflict, Economic, Human Interest, and Morality Frames Drive News Sharing. Journal of Communication, 67(5), 803–826.

What is climate change? (2018, October 31). BBC.

Wiest, S. L., Raymond, L., & Clawson, R. A. (2015). Framing, partisan predispositions, and public opinion on climate change. Global Environmental Change, 31, 187–198.