Language by R. Suresh Babu

mother tongue day
the students trying to script
an alien language

by R. Suresh Babu

Children should not ignore their mother tongue; if they do, they risk losing their connection to their traditions and environment. To make a country scientifically advanced, we need to learn and teach science in vernacular languages.

Further reading:

Indigenous Languages Must Feature More in Science Communication: https://theconversation.com/indigenous-languages-must-feature-more-in-science-communication-88596

Author bio:

R. Suresh Babu is a graduate teacher of English and a teacher counsellor in a Government Residential School in India. He is an alumnus of the Regional Institute of Education, Mysuru in India. His works have been published in Cattails, Failed Haiku, Wales Haiku Journal, Akitsu, Presence, Under the Basho, Poetry Pea Journal and Podcast, The Asahi Shimbun, World Haiku Series, The Mamba, Kontinuum, Haikuniverse, Cold Moon Journal, Chrysanthemum, tsuri-dōrō and The Mainichi. He is a contributing writer to the anthology We Will Not Be Silenced from Indie Blu(e) Publishing. He created the artwork for the haiku anthology Bull-Headed, edited by Corine Timmer. You can follow him on Twitter @sureshniranam

Read more sciku by R. Suresh Babu: ‘Climate Change’ and ‘Moonwalk’.

Radar obscura

Radar obscura
Misrepresenting data
yet such lovely shapes

Recently I’ve been looking at how to visualise some data in a way that’s engaging for the reader. Radar charts (also known as spider charts, web charts and star plots) seemed to fit the bill. My data with its 6 variables per subject can create a variety of irregular hexagonal shapes that are interesting and informative. Curiosity even had me wondering whether I could look at the surface area of the shapes to compare between different subjects.

Problem solved?

Perhaps not.

In reading up on radar charts I found articles by Chandoo (2008), Odds (2011) and Morin-Chassé (2020) outlining why my plans might not be ideal and why, despite their good looks, radar charts can make comparing between subjects less intuitive than simple bar charts do. The issue lies with what is added to the data when visualising it. Let’s take my radar chart with 6 variables as an example.

The radar chart consists of a centre point with the 6 variables coming out from it like the spokes of a wheel, with the length of each spoke being the value for that subject in that category. The end of each spoke is then connected to its immediate neighbours with straight lines (although some radar charts are circular and the connecting lines are curved).
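That construction is easy to express in code. Below is a minimal sketch (using made-up values, not tied to any dataset in this post) that converts one subject’s values into the polygon vertices a radar chart draws, assuming evenly spaced spokes:

```python
import math

def radar_vertices(values):
    """Convert a subject's values into the (x, y) polygon vertices of a
    radar chart: spoke i points at angle i * 2*pi/n from the centre, and
    the value is the distance travelled along that spoke."""
    n = len(values)
    return [
        (v * math.cos(2 * math.pi * i / n), v * math.sin(2 * math.pi * i / n))
        for i, v in enumerate(values)
    ]

# A hypothetical 6-variable subject: six spokes, 60 degrees apart.
vertices = radar_vertices([3, 5, 2, 8, 4, 6])
# Joining consecutive vertices (and the last back to the first)
# produces the irregular hexagon the reader actually sees.
```

Plotting libraries handle the drawing, but the underlying geometry is no more than this polar-to-Cartesian conversion.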

Two example radar charts.

The main thing the reader sees and focuses on is those connecting lines and the hexagonal shape they create. But that shape actually means very little – it’s not the actual data, but a circular sequence of the relationships between pairs of neighbouring variables. Ultimately the reader can easily be distracted from the data by how it’s visualised.

It gets worse – the shape created depends on a couple of factors:

1. The scale of each spoke.

2. The order of the variables arranged around the graph.

Change the scale of one or more variables, or their order around the graph, and the resulting shape can look very different. These decisions can also make the graph harder for readers to interpret – imagine trying to read 6 different axes, each with its own units, and trying to understand what they mean.

What’s more, the areas of the resulting shapes change as the shapes themselves change – simply swapping the location of two variables can result in a different shape and so a different area. And even if the shapes were always regular hexagons, the area doesn’t increase proportionally to spoke length – doubling every spoke quadruples the area, so a radar graph with longer spokes has a disproportionately large area compared to one of the same shape with shorter spokes.

These two radar charts show the same subject and underlying data, but the positions of two of the variables have been swapped, creating very different shapes with surface areas that differ by about 2%.
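Both problems – order-dependence and the non-linear growth of area – can be checked numerically. Here’s a quick sketch (with made-up values, not the data behind the charts above): the radar polygon is a fan of triangles around the centre, and each pair of neighbouring spokes with lengths r1 and r2 encloses a triangle of area 0.5 × r1 × r2 × sin(wedge angle).

```python
import math

def radar_area(values):
    """Area of the radar-chart polygon for `values`, with spokes spaced
    evenly around the circle. Each neighbouring pair of spokes encloses
    a triangle of area 0.5 * r1 * r2 * sin(angle between spokes)."""
    n = len(values)
    wedge = 2 * math.pi / n
    return 0.5 * math.sin(wedge) * sum(
        values[i] * values[(i + 1) % n] for i in range(n)
    )

original = [3, 5, 2, 8, 4, 6]   # hypothetical 6-variable subject
swapped = [3, 5, 8, 2, 4, 6]    # same data, two neighbouring variables swapped

# Same data, different ordering, different area.
print(radar_area(original), radar_area(swapped))

# Doubling every spoke quadruples the area: it grows with the
# square of spoke length, not linearly.
print(radar_area([2 * v for v in original]) / radar_area(original))
```

The same exercise shows why comparing subjects by shape area is risky: the number you get depends on an arbitrary presentation choice, not just the data.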

All this means that in many situations radar charts can actually cloud interpretation of the data rather than make it clearer. Doesn’t stop them being a good looking graph though!

Curious about what to use instead of radar charts? Check out the articles below for alternatives (including stellar charts and petal charts) and to get a more detailed (and far better written and explained!) understanding of some of the flaws of radar charts.

A note about the sciku: I’ve used the word obscura in the first line. In this case I mean to suggest how the radar chart obscures or obfuscates the data. I could have written obscurer, but I wanted to reference the camera obscura, used from the second half of the 16th century onwards as a drawing aid to produce highly accurate representations and later integral to the development of the camera. I liked the comparison between something that made things clearer and something that purports to make things clearer but often doesn’t.

Further reading:

Chandoo (2008) You are NOT spider man, so why do you use radar charts? https://chandoo.org/wp/better-radar-charts-excel/

Odds (2011) A critique of radar charts https://blog.scottlogic.com/2011/09/23/a-critique-of-radar-charts.html

Morin-Chassé (2020) Off the “radar”? Here are some alternatives. https://www.significancemagazine.com/science/684-off-the-radar-here-are-some-alternatives

Cobwebs to Foodwebs by Dr. Jon Hare

collecting
fish stomach contents
from file cabinets

By Jon Hare

Field studies take a lot of effort. Think of studying fishes in an estuary – where a river meets the sea. You need the expertise to know the fishes and how to take the variety of biological samples including earbones, stomachs, and gonads. You need a boat and gear to catch fish of different sizes and habits. You need to be able to deal with weather, seasons, and the other elements of nature. You need a group of people with varying expertise committed to work together. You need funding for the project. And the field effort is just the beginning – samples need to be processed in the laboratory, data compiled and analyzed, the results published, and the data made available. Now think about how many field studies or parts of field studies never make it to those final steps of dissemination. What happens to these studies? What happens to all that effort? 

Hanson and Courtenay (2020) describe the fate of one such effort. A multi-year fish-related field program was undertaken from 1991 to 1993 to describe the structure and function of the Miramichi River and Estuary ecosystem in eastern Canada. After several years, the project ended owing to a change in priorities (and funding); the team of scientists and fishers went their separate ways. Some of the results were published – primarily around high profile species like Atlantic cod and Atlantic salmon. However, many of the samples and much of the data never made it to the dissemination stage of science. 

The study by Hanson and Courtenay is part of an effort to recover the large amounts of field data stored in old file cabinets, on floppy disks, and in unpublished theses. In their study, Hanson and Courtenay use data collected during the Miramichi Estuary program and present detailed descriptions of the stomach contents of more than 8,000 individual fish across a range of species. Through these analyses, they describe the seasonality in the estuary both in terms of fish occurrence and diet. They also identify a small shrimp species (Crangon septemspinosa – Seven-spined Bay Shrimp) as a keystone species, linking estuarine and coastal foodwebs. Although the findings are not earth-shattering, the results and data are now available for future studies, which could model foodweb dynamics in the ecosystem (e.g. using EcoPath) or document ecosystem changes over the past three decades (a neat example from Long Island Sound, USA). Field studies and the subsequent research based on field studies are essential to developing strategies for ecosystem resilience and climate adaptation and ultimately for living sustainably within the earth system. 

Original research: Hanson, J. M., & Courtenay, S. C. (2020). Data Recovery from Old Filing Cabinets: Seasonal Diets of the Most Common Demersal Fishes in the Miramichi River Estuary (Atlantic Canada), 1991–1993. Northeastern Naturalist, 27(3), 401-433. https://doi.org/10.1656/045.027.0302

Dr. Jon Hare is a scientist who works in Woods Hole, Massachusetts. His research background is fisheries oceanography and climate change impacts on marine fisheries. Check out Jon’s other sciku ‘Owls of the Eastern Ice’, ‘Varves’, ‘Signs of Spring’ and ‘Glacier Mice’.

Indigenous Engagement

The benefits of
indigenous engagement:
Ethics and Science.

Local knowledge and an awareness of local context can be integral to conducting a variety of research. However, one thing that’s less often considered is the impact of the diversity of the research team itself.

Conservation research by Ward-Fear et al (2019) into the impact of cane toads on yellow-spotted monitor lizards in Australia has unintentionally produced evidence of the scientific benefits of collaborating with local indigenous people.

Large cane toads are spreading through tropical Australia but are fatally toxic if eaten by yellow-spotted monitor lizards. Ward-Fear et al (2016) trained lizards with smaller, non-lethal cane toads and then compared the survival rates of trained and non-trained lizards in the wild over an 18-month period. They found that trained lizards had a greater survival rate than non-trained lizards, suggesting that the training helped the lizards to avoid eating the larger toxic cane toads.

Yet their study also revealed the importance of researcher diversity. In monitoring the population of lizards over 18 months, the research team included western scientists (professional, nonindigenous ecologists) and indigenous rangers (Australian-Aboriginal Traditional Owners of the region).

The indigenous rangers saw lizards from a greater distance, in denser vegetation, under poorer light levels, and more frequently when the lizard was stationary. Additionally, when the behavioural traits of the lizards were assessed, those spotted by the indigenous rangers were found to be shyer. What’s more, the ranger-caught lizards appeared to benefit more from the training against the toxic cane toads.

All this highlights the importance of cultural diversity within research teams and in particular shows that indigenous collaboration can be utterly crucial for conservation efforts.

Original research:

Training of predatory lizards reduces their vulnerability to invasive toxic prey: https://doi.org/10.1098/rsbl.2015.0863

Collaboration with indigenous peoples can alter the outcomes of conservation research: https://doi.org/10.1111/conl.12643

Consequences

Curb carbon outputs

or face the consequences:

Falling stock prices.


We often hear about the environmental benefits of companies reducing their carbon outputs. Generally, however, little happens in business without consideration of the subsequent monetary impacts, and many companies have been slow to change their ways for little apparent financial incentive.

New research by Fang et al (2018) explores the impact of inaction among companies in North America’s emission-intensive sector. The researchers examined the risk factors of climate change on investment portfolios, both direct (e.g. physical risk to properties) and indirect (e.g. as a result of stricter environmental regulations). They found that companies that don’t take steps to reduce their carbon output could be affected by stock price depreciation and asset devaluation within a decade. Such findings will hopefully prompt more action on curbing carbon emissions.

Original research: http://dx.doi.org/10.1080/20430795.2018.1522583

False positives

Again we find the

results not replicable.

False positives teem!


One of the core principles of research is that it should be reproducible – someone else repeating your methods should get the same result as you. But there are few resources available for reproducing work, so it’s often hard to know just how reproducible a result is.

The results of a study by Camerer et al (2018) suggest that reproducibility (at least in certain fields) might be lower than expected. The researchers replicated 21 experiments published in the social sciences in the journals Nature and Science between 2010 and 2015. They found that only 62% of their replications showed evidence consistent with the original studies. Interestingly, they also found evidence to suggest that the research community could predict which studies would replicate and which wouldn’t.

Original research: https://doi.org/10.1038/s41562-018-0399-z

Extrapolation

Extrapolation

from laboratory tests.

Not always correct?


Experiments within the laboratory are often used to understand biological interactions in a controlled manner. Yet research by Cornforth et al (2018) suggests that what we learn in the laboratory may not always represent what happens in reality.

The researchers found that Pseudomonas bacteria (a pathogen that threatens immunocompromised people) behaved differently in humans than under laboratory conditions. This was particularly apparent in the expression levels of genes involved in antibiotic resistance, cell-to-cell communication and metabolism. This work suggests that laboratory studies only take us so far, and that understanding bacterial behaviour in humans is just as important.

Original research: https://doi.org/10.1073/pnas.1717525115

Journal ranking

Journal ranking means

little in terms of methods.

Higher might be worse.


Academics aim to submit their research for publication in the most prestigious journals, as this brings career advantages, including during job and grant applications. The assumption is that only the best research, and therefore the best academics, will be accepted for publication by these journals.

Yet research increasingly shows that these highly ranked journals may not actually be publishing the highest quality research after all.

In a fascinating review, Brembs (2018) summarises findings from multiple studies investigating journal status and research quality. Together these findings suggest that the methodological quality of research doesn’t increase with journal rank. In fact, evidence suggests the inverse may be true – as journal status increases, the quality and reliability of the published work may actually decrease. These findings could have profound implications for the way modern publicly funded science operates and for the preservation of public trust in science.

Original research: https://doi.org/10.3389/fnhum.2018.00037

Table accessible

Board game inclusion

keeps tables accessible.

Lessons to be learnt.


Whilst the concept of inclusion has been studied in video games, board games remain an under-explored area despite a surge in board game popularity in recent years. In an article in The Computer Games Journal, Heron et al (2018) have set out to rectify this based on their work with Meeple Like Us and the Meeple Centred Design project (meeple being a term for player pieces in board games – ‘my people’).

Their analysis of 116 board games found strengths and weaknesses in game design and accessibility around colour blindness and other visual impairments, physical abilities, cognitive and emotional accessibility, articulation and communication, as well as the level of representation, diversity and inclusion present in modern board games.

The project is now looking towards developing a set of Tabletop Accessibility Guidelines to help game designers interested in ensuring their games are accessible.

Original research: https://doi.org/10.1007/s40869-018-0056-9

How you handle mice

How you handle mice

affects response to rewards.

Science improves too!


There is an increasing body of research suggesting that handling laboratory mice by the tail is bad both for their welfare and for the science they are used in. Tail handling has negative impacts on mouse behaviour and physiology, with tunnel and cupping handling techniques resulting in behavioural improvements across various common behavioural bioassays, including the elevated plus maze, the open field test and the habituation-dishabituation paradigm.

Now new research suggests that handling is also important for reward-based behavioural assays. A study by Clarkson et al (2018) examined mouse responses to sucrose solution (a common reward). They found that tail handled mice showed a reduced response to the sucrose compared to tunnel handled mice, a finding indicative of the tail handled mice having a ‘decreased responsiveness to reward and potentially a more depressive-like state’.

Across eight years and five research papers, from three distinct research groups in two countries, the field of laboratory mouse research has been irrevocably changed. Combined, the research suggests that tail handling results in poor animal welfare and potentially erroneous scientific results. The National Centre for the Replacement, Refinement & Reduction of Animals in Research now has extensive information on mouse handling techniques, example videos, tips and testimonials for researchers and animal carers to find out more about changing their current mouse handling methods to the tunnel or cupping techniques.

Original research: http://dx.doi.org/10.1038/s41598-018-20716-3


Tunnels and cupping

Tunnels and cupping

beat tail handling mice for

behavioural tests.


Laboratory mouse handling method can affect mouse behaviour and physiology, and new research suggests that it can also impinge on mouse performance in behavioural tests. Research by Gouveia and Hurst (2017) found that tail handled mice performed poorly in a habituation-dishabituation paradigm test in comparison to cupped or tunnel handled mice. The tail handled mice ‘showed little willingness to explore and investigate test stimuli’ and even prior familiarisation with the test arena didn’t improve their performance much.

Combined with the previous research findings on mouse handling, this research continues to expand on the far-reaching impacts of mouse handling technique on both mouse welfare and scientific experimental rigour, and asks the question – just how valid are behavioural tests using laboratory mice that have been tail handled? Yet the story of mouse handling is not yet done – click here for the final instalment of this tale/tail!

Original research: http://dx.doi.org/10.1038/srep44999


Cup handled mice

Cup handled mice show

improved glucose tolerance

and less anxiousness.


When performing scientific research with animals, it’s important to ensure that the procedures used do not themselves impact the results obtained. Laboratory mouse handling method has already been shown to affect mouse anxiety in common behavioural tests. However, it seems that handling can have physiological impacts too.

Ghosal et al (2015) compared the behavioural and physiological responses of laboratory mice to either tail handling or cupped handling techniques. Mice handled using the cupped method showed fewer anxious behaviours in a common behavioural test, reduced blood glucose levels and a lower stress-induced plasma corticosterone concentration in response to an overnight fast compared to tail handled mice. The researchers also found that obese laboratory mice handled using the cupped method demonstrated improved glucose tolerance.

Replication and repeatability are crucial components of science, and this paper is a perfect demonstration – the researchers are from a different laboratory and a different country to those behind the mouse handling work that preceded it. In this way it not only builds on what came before, it also strengthens those earlier findings. Yet the mouse handling story is not finished yet – click here for the next chapter of this tale/tail!

Original research: https://doi.org/10.1016/j.physbeh.2015.06.021


Reducing mouse anxiety

Further reducing

mouse anxiety using

familiar tunnels.


Building on the finding that handling laboratory mice using a tunnel resulted in lower anxiety than picking them up by the tail, Gouveia and Hurst (2013) next investigated whether familiarity with the tunnel might be an important factor. Once again they found that tunnel handling resulted in lower anxiety than tail handling during an elevated plus maze (a common behavioural test for laboratory mice).

This time they found differences between mouse strains, with C57BL/6 mice being most interactive towards tunnels from their home cage and ICR mice showing no difference in interaction between familiar home cage tunnels and novel tunnels previously used for handling mice from other cages. The researchers suggest that ‘as home cage tunnels can further improve response to handling in some mice, we recommend that mice are handled with a tunnel provided in their home cage where possible as a simple, practical method to minimise handling stress’. The tunnel would also act as a form of environmental enrichment for the home cage.

In science it’s rare to tell a complete story through the findings of two research papers – click here for the next chapter of this tale/tail!

Original research: https://doi.org/10.1371/journal.pone.0066401


The little changes

The little changes

can make a big difference:

Handle mice with care.


Traditionally laboratory mice are handled by picking them up by the tail, yet increasing evidence suggests that this is bad, both for the mice themselves and the quality of the science they are being used for. The evidence for this started building from Hurst and West’s 2010 study which demonstrated that handling by the tail resulted in increased aversion and anxiety.

The researchers proposed two alternative methods for handling laboratory mice: holding the mice cupped in the hands or using tunnels that the mice can crawl into and be transported by carrying the tunnels. These novel methods of handling led to the mice approaching the handler voluntarily, being more accepting of physical restraint and showing lower levels of anxiety.

In science it’s rare to tell a complete story through the findings of a single research paper – click here for the next chapter of this tale/tail!

Original research: http://dx.doi.org/10.1038/nmeth.1500


Foibles of research

Manipulation?

Coercion? Unwanted guests?

Foibles of research.


Academia prides itself on being fair, rational-minded and logical. Yet the practice behind these noble aims sometimes falls far short. A study by Fong & Wilhite (2017) reveals the various manipulations that can take place: from scholars gaining guest authorships on research papers despite contributing nothing, to unnecessary reference list padding in an effort to boost citation rates. These instances of misconduct are likely a response to the pressures of an academic career – the demand for high publication counts and citation rates.

The survey of approximately 12,000 scholars across 18 disciplines revealed that over 35% of scholars have added an author to a manuscript despite little contribution (with female researchers more likely to add honorary authors than male researchers). 20% of scholars felt someone had been added to one of their grant proposals for no reason. 14% of academics reported being coerced into adding citations to their papers by journals, whilst 40% said they’d padded their reference list to pre-empt any coercion. Whilst changes to aspects of the academic system might help alleviate these issues, it’s likely to be a slow process.

Original research: https://doi.org/10.1371/journal.pone.0187394