Could AI replace politicians? A philosopher maps out three possible futures

(© jon – stock.adobe.com)

From business and public administration to daily life, artificial intelligence is reshaping the world – and politics may be next. While the idea of AI politicians might make some people uneasy, survey results tell a different story. A poll conducted by my university in 2021, during the early surge of AI advancements, found broad public support for integrating AI into politics across many countries and regions.

A majority of Europeans said they would like to see at least some of their politicians replaced by AI. Chinese respondents were even more bullish about AI agents making public policy, while normally innovation-friendly Americans were more circumspect.

As a philosopher who researches the moral and political questions raised by AI, I see three main pathways for integrating AI into politics, each with its own mixture of promises and pitfalls.

While some of these proposals are more outlandish than others, weighing them up makes one thing certain: AI’s involvement in politics will force us to reckon with the value of human participation in politics, and with the nature of democracy itself.

Chatbots running for office?

Prior to ChatGPT’s explosive arrival in 2022, efforts to replace politicians with chatbots were already well underway in several countries. As far back as 2017, a chatbot named Alisa challenged Vladimir Putin for the Russian presidency, while a chatbot named Sam ran for office in New Zealand. Denmark and Japan have also experimented with chatbot-led political initiatives.

These efforts, while experimental, reflect a longstanding curiosity about AI’s role in governance across diverse cultural contexts.

The appeal of replacing flesh-and-blood politicians with chatbots is, on some levels, quite clear. Chatbots lack many of the problems and limitations typically associated with human politics. They are not easily tempted by desires for money, power, or glory. They don’t need rest, can engage virtually with everyone at once, and offer encyclopedic knowledge along with superhuman analytic abilities.

However, chatbot politicians also inherit the flaws of today’s AI systems. These chatbots, powered by large language models, are often black boxes, limiting our insight into their reasoning. They frequently generate inaccurate or fabricated responses, known as hallucinations. They face cybersecurity risks, require vast computational resources, and need constant network access. They are also shaped by biases derived from training data, societal inequalities, and programmers’ assumptions.

Additionally, chatbot politicians would be ill-suited to what we expect from elected officials. Our institutions were designed for human politicians, with human bodies and moral agency. We expect our politicians to do more than answer prompts – we also expect them to supervise staff, negotiate with colleagues, show genuine concern for their constituents, and take responsibility for their choices and actions.

Without major improvements in the technology, or a more radical reimagining of politics itself, chatbot politicians remain an uncertain prospect.

AI-powered direct democracy

Another approach seeks to completely do away with politicians, at least as we know them. Physicist César Hidalgo believes that politicians are troublesome middlemen that AI finally allows us to cut out. Instead of electing politicians, Hidalgo wants each citizen to be able to program an AI agent with their own political preferences. These agents can then negotiate with each other automatically to find common ground, resolve disagreements, and write legislation.

Hidalgo hopes that this proposal can unleash direct democracy, giving citizens more direct input into politics while overcoming the traditional barriers of time commitment and legislative expertise. The proposal seems especially attractive in light of widespread dissatisfaction with conventional representative institutions.

However, eliminating representation may be more difficult than it seems. In Hidalgo’s “avatar democracy,” the de facto kingmakers would be the experts who design the algorithms. Since the only way to legitimately authorize their power would likely be through voting, we might merely replace one form of representation with another.

The specter of algocracy

One even more radical idea involves eliminating humans from politics altogether. The logic is simple enough: if AI technology advances to the point where it makes reliably better decisions than humans, what would be the point of human input?

An algocracy is a political regime run by algorithms. While few have argued outright for a total handover of political power to machines (and the technology for doing so is still far off), the specter of algocracy forces us to think critically about why human participation in politics matters. What values – such as autonomy, responsibility, or deliberation – must we preserve in an age of automation, and how?

Source : https://studyfinds.org/could-ai-replace-politicians/

Obesity label is medically flawed, says global report

People with excess body fat can still be active and healthy, experts say

Calling people obese is medically “flawed” – and the definition should be split into two, a report from global experts says.

The term “clinical obesity” should be used for patients with a medical condition caused by their weight, while “pre-clinically obese” should describe those who remain fat but fit, though still at risk of disease.

This approach, the report says, serves patients better than relying only on body mass index (BMI) – which measures whether someone is a healthy weight for their height – to determine obesity.

More than a billion people are estimated to be living with obesity worldwide and prescription weight-loss drugs are in high demand.

The report, published in The Lancet Diabetes & Endocrinology journal, is supported by more than 50 medical experts around the world.

“Some individuals with obesity can maintain normal organ function and overall health, even long term, whereas others display signs and symptoms of severe illness here and now,” Prof Francesco Rubino, from King’s College London, who chaired the expert group, said.

“Obesity is a spectrum,” he added.

The current, blanket definition means too many people are being diagnosed as obese but not receiving the most appropriate care, the report says.

Natalie, from Crewe, goes to the gym four times a week and has a healthy diet, but is still overweight.

“I would consider myself on the larger side, but I’m fit,” she told the BBC 5 Live phone-in with Nicky Campbell.

“If you look at my BMI I’m obese, but if I speak to my doctor they say that I’m fit, healthy and there’s nothing wrong with me.

“I’m doing everything I can to stay fit and have a long healthy life,” she said.

Richard, from Falmouth, said there is a lot of confusion around BMI.

“When they did my test, it took me to a level of borderline obesity, but my body fat was only 4.9% – the problem is I had a lot of muscle mass,” he said.

In Mike’s opinion, you cannot be fat and fit – he says it is all down to diet.

“All these skinny jabs make me laugh, if you want to lose weight stop eating – it’s easy.”

Currently, in many countries, obesity is defined as having a BMI over 30 – a measurement that estimates body fat based on height and weight.

How is BMI calculated?
It is calculated by dividing an adult’s weight in kilograms by their height in metres squared.

For example, if they are 70kg (about 11 stone) and 1.70m (about 5ft 7in):

square their height in metres: 1.70 x 1.70 = 2.89
divide their weight in kilograms by this amount: 70 ÷ 2.89 = 24.22
display the result to one decimal place: 24.2
Find out what your body mass index (BMI) means on the NHS website
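The calculation above can be sketched in a few lines of Python – an illustrative helper only, not a medical tool:

```python
# Illustrative BMI helper following the steps above (not medical advice).
def bmi(weight_kg: float, height_m: float) -> float:
    """Weight in kilograms divided by height in metres squared, to one decimal place."""
    return round(weight_kg / (height_m ** 2), 1)

# The worked example from the article: 70 kg at 1.70 m
print(bmi(70, 1.70))  # 24.2
```

A result of 30 or over would fall in the obese range under the current definition – though, as the report argues, BMI alone says little about an individual’s health.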

But BMI has limitations.

It measures whether someone is carrying too much weight – but not too much fat.

So very muscular people, such as athletes, tend to have a high BMI but not much fat.

The report says BMI is useful on a large scale, to work out the proportion of a population who are a healthy weight, overweight or obese.

But it reveals nothing about an individual patient’s overall health, whether they have heart problems or other illnesses, for example, and fails to distinguish between different types of body fat or measure the more dangerous fat around the waist and organs.

Measuring a patient’s waist or the amount of fat in their body, along with a detailed medical history, can give a much clearer picture than BMI, the report says.

Source: https://www.bbc.com/news/articles/c79dz14d30ro

Keeping the thermostat between these temperatures is best for seniors’ brains

(Credit: © Lopolo | Dreamstime.com)

That perfect thermostat setting might be more important than you think, especially at grandma and grandpa’s house. A new study finds that indoor temperature significantly affects older adults’ ability to concentrate, even in their own homes where they control the climate. The research suggests that as climate change brings more extreme temperatures, elderly individuals may face increased cognitive challenges unless their indoor environments are properly regulated.

Researchers at the Hinda and Arthur Marcus Institute for Aging Research, the research arm of Hebrew SeniorLife affiliated with Harvard Medical School, conducted a year-long study monitoring 47 community-dwelling adults aged 65 and older. The study tracked both their home temperatures and their self-reported ability to maintain attention throughout the day. What they discovered was a clear U-shaped relationship between room temperature and cognitive function. In other words, attention spans were optimal within a specific temperature range and declined when rooms became either too hot or too cold.

The sweet spot for cognitive function appeared to be between 20-24°C (68-75°F). When temperatures deviated from this range by just 4°C (7°F) in either direction, participants were twice as likely to report difficulty maintaining attention on tasks. This finding is particularly concerning given that many older adults live on fixed incomes and may struggle to maintain optimal indoor temperatures, especially during extreme weather events.
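As a rough illustration of the reported thresholds – the 20-24°C optimum and the 4°C deviation at which attention difficulties reportedly doubled – here is a minimal Python sketch; the function and its labels are our own, not the study’s:

```python
# Thresholds taken from the article; the classification function is a sketch.
OPTIMAL_LOW_C, OPTIMAL_HIGH_C = 20.0, 24.0
RISK_DEVIATION_C = 4.0  # deviation at which attention difficulty reportedly doubled

def attention_risk(indoor_temp_c: float) -> str:
    """Classify an indoor temperature relative to the study's reported optimum."""
    if OPTIMAL_LOW_C <= indoor_temp_c <= OPTIMAL_HIGH_C:
        return "optimal"
    deviation = min(abs(indoor_temp_c - OPTIMAL_LOW_C),
                    abs(indoor_temp_c - OPTIMAL_HIGH_C))
    return "elevated" if deviation >= RISK_DEVIATION_C else "moderate"

print(attention_risk(22))  # optimal
print(attention_risk(15))  # elevated (5°C below the optimal band)
```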

Many previous studies have examined temperature’s effects on cognition in controlled laboratory settings, but this research breaks new ground by studying people in their natural home environments over an extended period. The research team used smart sensors placed in participants’ primary living spaces to continuously monitor temperature and humidity levels, while participants completed twice-daily smartphone surveys about their thermal comfort and attention levels.

The study’s findings revealed an interesting asymmetry in how people responded to temperature variations. While both hot and cold conditions impaired attention, participants seemed particularly sensitive to cold temperatures. When reporting feeling cold, they showed greater cognitive difficulties across a wider range of actual temperatures compared to when they felt hot. This suggests that maintaining adequate heating may be especially crucial for preserving cognitive function in older adults during winter months.

“Our findings underscore the importance of understanding how environmental factors, like indoor temperature, impact cognitive health in aging populations,” said lead author Dr. Amir Baniassadi, an assistant scientist at the Marcus Institute, in a statement. “This research highlights the need for public health interventions and housing policies that prioritize climate resilience for older adults. As global temperatures rise, ensuring access to temperature-controlled environments will be crucial for protecting their cognitive well-being.”

This study follows a 2023 investigation measuring how temperature affected older adults’ sleep and cognitive ability, adding to a growing body of evidence that climate change impacts extend beyond physical health. While much attention has been paid to the direct health impacts of heat waves and cold snaps, this research suggests that even moderate temperature variations inside homes could affect older adults’ daily cognitive functioning.

The participant group, while relatively small, was carefully monitored. With an average age of 79 years, the cohort completed over 17,000 surveys during the study period. Most participants lived in private, market-rate housing (34 participants) rather than subsidized housing (13 participants), suggesting they had reasonable control over their home environments. This makes the findings particularly striking: if even relatively advantaged older adults experience cognitive effects from temperature variations, more vulnerable populations may face even greater challenges.

The connection between temperature and cognition isn’t entirely surprising. As we age, our bodies become less efficient at regulating temperature, a problem often compounded by chronic conditions like diabetes or medications that affect thermoregulation. What’s novel about this research is its demonstration that these physiological vulnerabilities may extend to cognitive function in real-world settings.

As winter gives way to spring and thermostats across the country get adjusted, this research suggests we might want to pay closer attention to those settings — especially in homes where older adults reside. The cognitive sweet spot of 68-75°F might just be the temperature range where wisdom flourishes.

Source : https://studyfinds.org/cold-homes-linked-to-attention-problems-in-older-adults/

Process this: 50,000 grocery products reveal shocking truth about America’s food supply

(Credit: © Photopal604 | Dreamstime.com)

Minimally processed foods make up just a small percentage of what’s available in U.S. supermarkets

Next time you walk down the aisles of your local grocery store, take a closer look at what’s actually available on those shelves. A stunning report reveals the majority of food products sold at major U.S. grocery chains are highly processed, with most of them priced significantly cheaper than less processed alternatives.

In what may be the most comprehensive analysis of food processing in American grocery stores to date, researchers examined over 50,000 food items sold at Walmart, Target, and Whole Foods to understand just how processed our food supply really is. Using sophisticated machine learning techniques, they developed a database called GroceryDB that scores foods based on their degree of processing.

What exactly makes a food “processed”? While nearly all foods undergo some form of processing (like washing and packaging), ultra-processed foods are industrial formulations made mostly from substances extracted from foods or synthesized in laboratories. Think instant soups, packaged snacks, and soft drinks – products that often contain additives like preservatives, emulsifiers, and artificial colors.

Research has suggested that diets high in ultra-processed foods can contribute to health issues like obesity, diabetes and heart disease. Over-processing can also strip foods of beneficial nutrients. Despite these risks, there has been no easy way for consumers to identify what foods are processed, highly processed, or ultra-processed.

“There are a lot of mixed messages about what a person should eat. Our work aims to create a sort of translator to help people look at food information in a more digestible way,” explains Giulia Menichetti, PhD, an investigator in the Channing Division of Network Medicine at Brigham and Women’s Hospital and the study’s corresponding author, in a statement.

The findings paint a concerning picture of American food retail. Across all three stores, minimally processed products made up a relatively small fraction of available items, while ultra-processed foods dominated the shelves. Even more troubling, the researchers found that for every 10% increase in processing scores, the price per calorie dropped by 8.7% on average. This means highly processed foods tend to be substantially cheaper than their less processed counterparts.
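To make the reported trend concrete, here is a small Python sketch projecting price per calorie under the article’s average figure; the multiplicative model and the example price are assumptions for illustration, not the researchers’ method:

```python
# Sketch of the reported trend: each 10% increase in a product's processing
# score was associated with an 8.7% lower price per calorie, on average.
# The compounding model and starting price are illustrative assumptions.
def projected_price_per_calorie(base_price: float, score_increase_pct: float) -> float:
    steps = score_increase_pct / 10.0  # number of 10% increments
    return base_price * (1 - 0.087) ** steps

# A hypothetical item at $0.010 per calorie, compared with one scoring 30% higher:
print(round(projected_price_per_calorie(0.010, 30), 4))  # 0.0076
```

Under this reading, a substantially more processed product ends up roughly a quarter cheaper per calorie – consistent with the article’s point that less processed options carry a price premium.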

However, the degree of processing varied significantly between stores. Whole Foods offered more minimally processed options and fewer ultra-processed products compared to Walmart and Target. The researchers also found major differences between food categories. Some categories, like jerky, popcorn, chips, bread, and mac and cheese, showed little variation in processing levels – meaning consumers have limited choices if they want less processed versions of these foods. Other categories, like cereals, milk alternatives, pasta, and snack bars, displayed wider ranges of processing levels.

Looking at specific examples helps illustrate these differences. When examining breads, researchers found that Manna Organics multi-grain bread from Whole Foods scored low on the processing scale since it’s made primarily from whole wheat kernels and basic ingredients. In contrast, certain breads from Walmart and Target scored much higher due to added ingredients like resistant corn starch, soluble corn fiber, and various additives.

The research team also developed a novel way to analyze individual ingredients’ contributions to food processing. They found that certain oils, like brain octane oil, flaxseed oil, and olive oil, contributed less to ultra-processing compared to palm oil, vegetable oil, and soybean oil. This granular analysis helps explain why seemingly similar products can have very different processing scores.

Study authors have made their findings publicly accessible through a website called TrueFood.tech, where consumers can look up specific products and find less processed alternatives within the same category.

“When people hear about the dangers of ultra-processed foods, they ask, ‘OK, what are the rules? How can we apply this knowledge?’” Menichetti notes. “We are building tools to help people implement changes to their diet based on information currently available about food processing. Given the challenging task of transforming eating behaviors, we want to nudge them to eat something that is within what they currently want but a less-processed option.”

As Americans increasingly rely on grocery stores for their food — with over 60% of U.S. food consumption coming from retail establishments — understanding what’s actually available on store shelves becomes crucial for public health. While this research doesn’t definitively prove that ultra-processed foods are harmful, it does demonstrate that avoiding them may require both conscious effort and deeper pockets.

Source : https://studyfinds.org/ultra-processed-foods-america-grocery-stores-target-walmart/


Age 13 rule isn’t working — Most pre-teens already deep in social media

(Credit: Child Social Media © Andrii Iemelianenko | Dreamstime.com)

Ages 11 and 12 represent a pivotal transition from childhood to adolescence — a time traditionally marked by first crushes, growing independence, and deepening friendships. But according to new research, this age group is also marked by something more troubling: widespread social media addiction. The study of over 10,000 American youth reveals that most pre-teens are active on platforms they’re technically too young to use.

As the U.S. Supreme Court prepares to hear arguments against Congress’ TikTok ban, the research pulls back the curtain on what many parents have long suspected: nearly 64% of pre-teens have at least one social media account, flouting minimum age requirements and raising concerns about online safety and mental health impacts.

Drawing from a diverse sample of adolescents aged 11 to 15, researchers found that TikTok reigns supreme among young users, with 67% of social media-using teens maintaining an account on the short-form video platform. YouTube and Instagram followed closely behind at around 65% and 66% respectively.

“Policymakers need to look at TikTok as a systemic social media issue and create effective measures that protect children online,” said Dr. Jason Nagata, a pediatrician at UCSF Benioff Children’s Hospitals and the lead author of the study, in a statement. “TikTok is the most popular social media platform for children, yet kids reported having more than three different social media accounts, including Instagram and Snapchat.”

Notable gender differences emerged in platform preferences. Female adolescents gravitated toward TikTok, Snapchat, Instagram, and Pinterest, while their male counterparts showed stronger affinity for YouTube and Reddit. This digital divide hints at how social media may be shaping different aspects of adolescent development and socialization between genders.

Among the study’s more concerning findings was that 6.3% of young social media users admitted to maintaining “secret” accounts hidden from parental oversight. These covert profiles, sometimes dubbed “Finstas” (fake Instagram accounts), represent a digital double life that could put vulnerable youth at risk while hampering parents’ ability to protect their children online.

Signs of problematic use and potential addiction emerged as significant concerns. Twenty-five percent of children with social media accounts reported often thinking about social media apps, and another 25% said they use the apps to forget about their problems. Moreover, 17% of users tried to reduce their social media use but couldn’t, while 11% reported that excessive use had negatively impacted their schoolwork.

“Our study revealed a quarter of children reported elements of addiction while using social media, with some as young as eleven years old,” Nagata explained. “The research shows underage social media use is linked with greater symptoms of depression, eating disorders, ADHD, and disruptive behaviors. When talking about social media usage and policies, we need to prioritize the health and safety of our children.”

Recent legislative efforts, including the federal Protecting Kids on Social Media Act and various state-level initiatives, aim to strengthen safeguards around youth social media use. The U.S. Surgeon General has called for more robust age verification systems and warning labels on social media platforms, highlighting the growing recognition of this issue as a public health concern.

To address these challenges, medical professionals recommend structured approaches to managing screen time. The American Academy of Pediatrics has developed the Family Media Plan, providing families with tools to schedule both online and offline activities effectively.

“Every parent and family should have a family media plan to ensure children and adults stay safe online and develop a healthy relationship with screens and social media,” said Nagata, who practices this approach with his own children. “Parents can create strong relationships with their children by starting open conversations and modeling good behaviors.”

As social media continues evolving at breakneck speed, this research, published in Academic Pediatrics, provides a crucial snapshot of how the youngest generation navigates the digital landscape. The timing proves particularly relevant as the Supreme Court prepares to hear arguments about Congress’ TikTok ban, set to take effect January 19th. While the case primarily centers on national security concerns, the study’s findings suggest that children’s welfare should be an equally important consideration in platform regulation.

Source : https://studyfinds.org/most-pre-teens-already-deep-in-social-media/

Warning: Your pooch’s smooches really could make you quite sick

(Credit: © Natalia Skripnikova | Dreamstime.com)

39% of healthy dogs may silently carry dangerous Salmonella strains, researchers warn

UNIVERSITY PARK, Pa. — Next time your furry friend gives you those irresistible puppy dog eyes, you might want to think twice before sharing your snack. That’s because scientists say that household dogs could be silent carriers of dangerous antibiotic-resistant Salmonella bacteria, potentially putting their human families at risk.

Most pet owners know to wash their hands after handling raw pet food or cleaning up after their dogs, but researchers at Pennsylvania State University have uncovered a concerning trend: household dogs can carry and spread drug-resistant strains of Salmonella even when they appear perfectly healthy. This finding is particularly worrisome because these resistant bacteria can make treating infections much more challenging in both animals and humans.

The research takes on added significance considering that over half of U.S. homes include dogs. “We have this close bond with companion animals in general, and we have a really close interface with dogs,” explains Sophia Kenney, the study’s lead author and doctoral candidate at Penn State, in a statement. “We don’t let cows sleep in our beds or lick our faces, but we do dogs.”

To investigate this concerning possibility, the research team employed a clever detective-like approach. They first tapped into an existing network of veterinary laboratories that regularly test animals for various diseases. They identified 87 cases where dogs had tested positive for Salmonella between May 2017 and March 2023. These weren’t just random samples: they came from real cases where veterinarians had submitted samples for testing, whether the dogs showed symptoms or not.

The scientists then did something akin to matching fingerprints. For each dog case they found, they searched a national database of human Salmonella infections, looking for cases that occurred in the same geographic areas around the same times. This database, maintained by the National Institutes of Health, is like a library of bacterial information collected from patients across the country. Through this matching process, they identified 77 human cases that could potentially be connected to the dog infections.
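The matching step described above – pairing dog isolates with human cases from the same area around the same time – can be sketched in Python; the field names, example data, and 30-day window are illustrative assumptions, not the study’s actual criteria:

```python
# Purely illustrative sketch of geographic/temporal case matching.
# Field names, example records, and the 30-day window are assumptions.
from datetime import date, timedelta

dog_cases = [{"id": "D1", "state": "PA", "date": date(2020, 6, 1)}]
human_cases = [{"id": "H1", "state": "PA", "date": date(2020, 6, 15)},
               {"id": "H2", "state": "OH", "date": date(2020, 6, 10)}]

def candidate_matches(dogs, humans, window_days=30):
    """Pair cases from the same state whose dates fall within the window."""
    window = timedelta(days=window_days)
    return [(d["id"], h["id"])
            for d in dogs for h in humans
            if d["state"] == h["state"] and abs(d["date"] - h["date"]) <= window]

print(candidate_matches(dog_cases, human_cases))  # [('D1', 'H1')]
```

Candidate pairs like these would then still need genomic comparison – the DNA sequencing the team performed next – before any link could be suggested.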

The research team then used advanced DNA sequencing technology to analyze each bacterial sample. This allowed them to not only identify different varieties of Salmonella but also determine how closely related the bacteria from dogs were to those found in humans. They specifically looked for two key things: genes that make the bacteria resistant to antibiotics, and genes that help the bacteria cause disease.

What they found was eye-opening. Among the dog samples, they discovered 82 cases of the same type of Salmonella that commonly causes human illness. More concerning was that many of these bacterial strains carried genes making them resistant to important antibiotics, the same medicines doctors rely on to treat serious infections.

In particular, 16 of the human cases were found to be very closely related to six different dog-associated strains. While this doesn’t definitively prove the infections spread from dogs to humans, it’s like finding matching puzzle pieces that suggest a connection. The researchers also discovered that 39% of the dog samples contained a special gene called shdA, which allows the bacteria to survive longer in the dog’s intestines. This means infected dogs could potentially spread the bacteria through their waste for extended periods without appearing sick themselves.

The bacteria showed impressive diversity, with researchers identifying 31 different varieties in dogs alone. Some common types found in both dogs and humans included strains known as Newport, Typhimurium, and Enteritidis — names that might not mean much to the average person but are well-known to health officials for causing human illness.

The research has highlighted real-world implications. Study co-author Nkuchia M’ikanatha, lead epidemiologist for the Pennsylvania Department of Health, points to a recent outbreak where pig ear pet treats sickened 154 people across 34 states with multidrug-resistant Salmonella. “This reminds us that simple hygiene practices such as hand washing are needed to protect both our furry friends and ourselves — our dogs are family but even the healthiest pup can carry Salmonella,” he notes.

The historical context adds another layer to the findings. According to researchers, Salmonella has been intertwined with human history since agriculture began, potentially shadowing humanity for around 10,000 years alongside animal domestication.

While the study reveals concerning patterns about antibiotic resistance and disease transmission, lead researcher Erika Ganda emphasizes that not all bacteria are harmful. “Bacteria are never entirely ‘bad’ or ‘good’ — their role depends on the context,” she explains. “While some bacteria, like Salmonella, can pose serious health risks, others are essential for maintaining our health and the health of our pets.”

Of course, this doesn’t mean we should reconsider having dogs as pets. Instead, scientists say just be smart, and maybe try not to let your pooch kiss you on the lips.

“Several studies highlight the significant physical and mental health benefits of owning a dog, including reduced stress and increased physical activity,” Ganda notes. “Our goal is not to discourage pet ownership but to ensure that people are aware of potential risks and take simple steps, like practicing good hygiene, to keep both their families and their furry companions safe.”

Source : https://studyfinds.org/dogs-drug-resistant-salmonella/

‘Super Scoopers’ dumping ocean water on the Los Angeles fires: Why using saltwater is typically a last resort

A Croatian Air Force CL-415 Super Scooper firefighting aircraft in flight. (Photo by crordx on Shutterstock)

Firefighters battling the deadly wildfires that raced through the Los Angeles area in January 2025 have been hampered by a limited supply of freshwater. So, when the winds are calm enough, skilled pilots flying planes aptly named Super Scoopers are skimming off 1,500 gallons of seawater at a time and dumping it with high precision on the fires.

Using seawater to fight fires can sound like a simple solution – the Pacific Ocean has a seemingly endless supply of water. In emergencies like Southern California is facing, it’s often the only quick solution, though the operation can be risky amid ocean swells.

But seawater also has downsides.

Saltwater corrodes firefighting equipment and may harm ecosystems, especially those like the chaparral shrublands around Los Angeles that aren’t normally exposed to seawater. Gardeners know that small amounts of salt – added, say, as fertilizer – do not harm plants, but excessive salts can stress and kill them.

While the consequences of adding seawater to ecosystems are not yet well understood, we can gain insights on what to expect by considering the effects of sea-level rise.

A seawater experiment in a coastal forest

As an ecosystem ecologist at the Smithsonian Environmental Research Center, I lead a novel experiment called TEMPEST that was designed to understand how and why historically salt-free coastal forests react to their first exposures to salty water.

Global sea level has risen by an average of about 8 inches over the past century, pushing salty water into U.S. forests, farms and neighborhoods that had previously known only freshwater. As the rate of sea-level rise accelerates, storms push seawater ever farther onto dry land, eventually killing trees and creating ghost forests, a result of climate change that is widespread in the U.S. and globally.

In our TEMPEST test plots, we pump salty water from the nearby Chesapeake Bay into tanks, then sprinkle it on the forest soil surface fast enough to saturate the soil for about 10 hours at a time. This simulates a surge of salty water during a big storm.

Our coastal forest showed little effect from the first 10-hour exposure to salty water in June 2022 and grew normally for the rest of the year. We increased the exposure to 20 hours in June 2023, and the forest still appeared mostly unfazed, although the tulip poplar trees were drawing water from the soil more slowly, which may be an early warning signal.

Things changed after a 30-hour exposure in June 2024. The leaves of tulip poplar in the forests started to brown in mid-August, several weeks earlier than normal. By mid-September the forest canopy was bare, as if winter had set in. These changes did not occur in a nearby plot that we treated the same way, but with freshwater rather than seawater.

The initial resilience of our forest can be explained in part by the relatively low amount of salt in the water in this estuary, where water from freshwater rivers and a salty ocean mix. Rain that fell after the experiments in 2022 and 2023 washed salts out of the soil.

But a major drought followed the 2024 experiment, so salts lingered in the soil then. The trees’ longer exposure to salty soils after our 2024 experiment may have exceeded their ability to tolerate these conditions.

Seawater being dumped on the Southern California fires is full-strength, salty ocean water. And conditions there have been very dry, particularly compared with our East Coast forest plot.

Changes evident in the ground

Our research group is still trying to understand all the factors that limit the forest’s tolerance to salty water, and how our results apply to other ecosystems such as those in the Los Angeles area.

Tree leaves turning from green to brown well before fall was a surprise, but there were other surprises hidden in the soil below our feet.

Rainwater percolating through the soil is normally clear, but about a month after the first and only 10-hour exposure to salty water in 2022, the soil water turned brown and stayed that way for two years. The brown color comes from carbon-based compounds leached from dead plant material. It’s a process similar to making tea.

Our lab experiments suggest that salt was causing clay and other particles to disperse and move about in the soil. Such changes in soil chemistry and structure can persist for many years.

Source : https://studyfinds.org/super-scoopers-dumping-ocean-water-los-angeles-fires/

An eye for an eye: People agree about the values of body parts across cultures and eras

(Credit: © Kateryna Chyzhevska | Dreamstime.com)

The Bible’s lex talionis – “Eye for eye, tooth for tooth, hand for hand, foot for foot” (Exodus 21:24-27) – has captured the human imagination for millennia. This idea of fairness has been a model for ensuring justice when bodily harm is inflicted.

Thanks to the work of linguists, historians, archaeologists and anthropologists, researchers know a lot about how different body parts are appraised in societies both small and large, from ancient times to the present day.

But where did such laws originate?

According to one school of thought, laws are cultural constructions – meaning they vary across cultures and historical periods, adapting to local customs and social practices. By this logic, laws about bodily damage would differ substantially between cultures.

Our new study explored a different possibility – that laws about bodily damage are rooted in something universal about human nature: shared intuitions about the value of body parts.

Do people across cultures and throughout history agree on which body parts are more or less valuable? Until now, no one had systematically tested whether body parts are valued similarly across space, time and levels of legal expertise – that is, among laypeople versus lawmakers.

We are psychologists who study evaluative processes and social interactions. In previous research, we have identified regularities in how people evaluate different wrongful actions, personal characteristics, friends and foods. The body is perhaps a person’s most valuable asset, and in this study we analyzed how people value its different parts. We investigated links between intuitions about the value of body parts and laws about bodily damage.

How critical is a body part or its function?

We began with a simple observation: Different body parts and functions have different effects on the odds that a person will survive and thrive. Life without a toe is a nuisance. But life without a head is impossible. Might people intuitively understand that different body parts have different values?

Knowing the value of body parts gives you an edge. For example, if you or a loved one has suffered multiple injuries, you could treat the most valuable body part first, or allocate a greater share of limited resources to its treatment.

This knowledge could also play a role in negotiations when one person has injured another. When person A injures person B, B or B’s family can claim compensation from A or A’s family. This practice appears around the world: among the Mesopotamians, the Chinese during the Tang dynasty, the Enga of Papua New Guinea, the Nuer of Sudan, the Montenegrins and many others. The Anglo-Saxon word “wergild,” meaning “man price,” is now used generally for the practice of paying compensation for body parts.

But how much compensation is fair? Claiming too little leads to loss, while claiming too much risks retaliation. To walk the fine line between the two, victims would claim compensation in Goldilocks fashion: just right, based on the consensus value that victims, offenders and third parties in the community attach to the body part in question.

This Goldilocks principle is readily apparent in the exact proportionality of the lex talionis – “eye for eye, tooth for tooth.” Other legal codes dictate precise values of different body parts but do so in money or other goods. For example, the Code of Ur-Nammu, written 4,100 years ago in ancient Nippur, present-day Iraq, states that a man must pay 40 shekels of silver if he cuts off another man’s nose, but only 2 shekels if he knocks out another man’s tooth.

Testing the idea across cultures and time

If people have intuitive knowledge of the values of different body parts, might this knowledge underpin laws about bodily damage across cultures and historical eras?

To test this hypothesis, we conducted a study involving 614 people from the United States and India. The participants read descriptions of various body parts, such as “one arm,” “one foot,” “the nose,” “one eye” and “one molar tooth.” We chose these body parts because they were featured in legal codes from five different cultures and historical periods that we studied: the Law of Æthelberht from Kent, England, in 600 C.E., the Guta lag from Gotland, Sweden, in 1220 C.E., and modern workers’ compensation laws from the United States, South Korea and the United Arab Emirates.

Participants answered one question about each body part they were shown. We asked some how difficult it would be for them to function in daily life if they lost various body parts in an accident. Others we asked to imagine themselves as lawmakers and determine how much compensation an employee should receive if that person lost various body parts in a workplace accident. Still others we asked to estimate how angry another person would feel if the participant damaged various parts of the other’s body. While these questions differ, they all rely on assessing the value of different body parts.

To determine whether untutored intuitions underpin laws, we didn’t include people who had college training in medicine or law.

Then we analyzed whether the participants’ intuitions matched the compensations established by law.

Our findings were striking. The values placed on body parts by both laypeople and lawmakers were largely consistent. The more highly American laypeople tended to value a given body part, the more valuable it also seemed to Indian laypeople; to American, Korean and Emirati lawmakers; to King Æthelberht; and to the authors of the Guta lag. For example, laypeople and lawmakers across cultures and over centuries generally agree that the index finger is more valuable than the ring finger, and that one eye is more valuable than one ear.

But do people value body parts accurately, in a way that corresponds with their actual functionality? There are some hints that, yes, they do. For example, laypeople and lawmakers regard the loss of a single part as less severe than the loss of multiples of that part. In addition, laypeople and lawmakers regard the loss of a part as less severe than the loss of the whole; the loss of a thumb is less severe than the loss of a hand, and the loss of a hand is less severe than the loss of an arm.

Additional evidence of accuracy can be gleaned from ancient laws. For example, linguist Lisi Oliver notes that in Barbarian Europe, “wounds that may cause permanent incapacitation or disability are fined higher than those which may eventually heal.”

Although people generally agree in valuing some body parts more than others, some sensible differences may arise. For instance, sight would be more important for someone making a living as a hunter than as a shaman. The local environment and culture might also play a role. For example, upper body strength could be particularly important in violent areas, where one needs to defend oneself against attacks. These differences remain to be investigated.

Source : https://studyfinds.org/values-of-body-parts-across-cultures-and-eras/

One juice, three benefits: How elderberry could transform metabolism in just 7 days

(Photo credit: © Anna Komisarenko | Dreamstime.com)

Small study demonstrates the enormous fat-burning and gut-boosting powers of an ‘underappreciated’ berry

In an era where 74% of Americans are considered overweight and 40% have obesity, scientists have discovered that an ancient berry might offer modern solutions. Research from Washington State University reveals that elderberry juice could help regulate blood sugar levels and improve the body’s ability to burn fat, while also promoting beneficial gut bacteria.

Elderberries have long been used in traditional medicine, but this new research provides scientific evidence for their metabolic benefits. The study, published in the journal Nutrients, demonstrates that consuming elderberry juice for just one week led to significant improvements in how the body processes sugar and burns fat.

“Elderberry is an underappreciated berry, commercially and nutritionally,” says Patrick Solverson, an assistant professor in WSU’s Department of Nutrition and Exercise Physiology, in a statement. “We’re now starting to recognize its value for human health, and the results are very exciting.”

Solverson and his team recruited 18 overweight but otherwise healthy adults for this carefully controlled experiment. Most participants were women, with an average age of 40 years and an average body mass index (BMI) of 29.12, placing them in the overweight category.

This wasn’t your typical “drink this and tell us how you feel” study. Instead, the researchers implemented a sophisticated crossover design where participants served as their own control group. Each person completed two one-week periods: one drinking elderberry juice and another drinking a placebo beverage that looked and tasted similar but lacked the active compounds. A three-week “washout” period separated these phases to ensure no carryover effects.

During the study, participants consumed 355 grams (about 12 ounces) of either elderberry juice or placebo daily, split between morning and evening doses. The elderberry juice provided approximately 720 milligrams of beneficial compounds called anthocyanins, which give the berries their deep purple color.

Perhaps most remarkably, after just one week of elderberry juice consumption, participants showed a 24% reduction in blood glucose response following a high-carbohydrate meal challenge. This suggests that elderberry juice might help the body better regulate blood sugar levels, a crucial factor in metabolic health and weight management.

The study also revealed that participants burned more fat both while resting and during exercise when consuming elderberry juice. Using specialized equipment to measure breath gases, researchers found that those drinking elderberry juice burned 27% more fat compared to when they drank the placebo. This increased fat-burning occurred not only during rest but also persisted during a 30-minute moderate-intensity walking test.

But the benefits didn’t stop there. The research team also examined participants’ gut bacteria through stool samples and found that elderberry juice promoted the growth of beneficial bacterial species while reducing less desirable ones. Specifically, it increased levels of bacteria known for producing beneficial compounds called short-chain fatty acids, which play essential roles in metabolism and gut health.

What makes elderberry particularly special is its exceptionally high concentration of anthocyanins. According to Solverson, a person would need to consume four cups of blackberries to match the anthocyanin content found in just 6 ounces of elderberry juice. These compounds are believed to be responsible for the berry’s anti-inflammatory, anti-diabetic, and antimicrobial effects.

While further research is needed to confirm these effects over longer periods and in larger populations, this study suggests that elderberry juice might offer a practical dietary strategy for supporting metabolic health. It’s worth noting that participants reported no adverse effects from consuming the juice, suggesting it’s both safe and well-tolerated.

The timing of this research coincides with growing consumer interest in elderberry products. While these purple berries have long been popular in European markets, demand in the United States surged during the COVID-19 pandemic and continues to rise. This increasing market presence could make it easier for consumers to access elderberry products if further research continues to support their health benefits.

“Food is medicine, and science is catching up to that popular wisdom,” Solverson notes. “This study contributes to a growing body of evidence that elderberry, which has been used as a folk remedy for centuries, has numerous benefits for metabolic as well as prebiotic health.”

The research team isn’t stopping here. With an additional $600,000 in funding from the U.S. Department of Agriculture, they plan to investigate whether elderberry juice might help people maintain their weight after discontinuing weight loss medications. This could provide a natural solution for one of the most challenging aspects of weight management – maintaining weight loss over time.

As obesity rates continue to climb and are projected to reach 48-55% of American adults by 2050, finding natural, food-based approaches to support metabolic health becomes increasingly important. While elderberry juice shouldn’t be viewed as a magic bullet, this research suggests it might be a valuable addition to a healthy diet and lifestyle approach for managing weight and metabolic health.

Source : https://studyfinds.org/how-elderberry-might-transform-metabolism-in-just-7-days/

From first breath: Male and female brains really do differ at birth

(Credit: © Katrina Trninich | Dreamstime.com)

The age-old debate about differences between male and female brains has taken a dramatic turn with new evidence suggesting these variations begin before a baby’s first cry. In the largest study of its kind, researchers at Cambridge University’s Autism Research Centre have discovered that structural brain differences between the sexes don’t gradually emerge through childhood — they’re already established at birth.

Brain development during the first few weeks of life occurs at a remarkably rapid pace, making this period particularly crucial for understanding how sex differences in the brain emerge and evolve. Previous research has primarily focused on older infants, children, and adults, leaving a significant gap in our understanding of the earliest stages of brain development.

The research team analyzed brain scans of 514 newborns (236 females and 278 males) aged 0-28 days using data from the developing Human Connectome Project. The study, published in the journal Biology of Sex Differences, represents one of the largest and most comprehensive investigations of sex differences in neonatal brain structure to date, addressing a common limitation of past research: small sample sizes.

Male newborns showed larger overall brain volumes compared to females, even after accounting for differences in birth weight. This finding was particularly significant because the research team carefully controlled for body size differences between sexes, a factor that has complicated previous studies in this field.

When controlling for total brain volume, female babies exhibited greater amounts of gray matter — the outer brain tissue containing nerve cell bodies and dendrites responsible for processing and interpreting information, such as sensation, perception, learning, speech, and cognition. Meanwhile, male infants had higher volumes of white matter, which consists of long nerve fibers (axons) that connect different brain regions together.

“Our study settles an age-old question of whether male and female brains differ at birth,” says lead author Yumnah Khan, a PhD student at the Autism Research Centre, in a statement. “We know there are differences in the brains of older children and adults, but our findings show that they are already present in the earliest days of life.”

Several specific brain regions showed notable differences between males and females. Female newborns had larger volumes in areas related to memory and emotional regulation, while male infants showed greater volume in regions involved in sensory processing and motor control.

Dr. Alex Tsompanidis, who supervised the study, emphasizes its methodological rigor: “This is the largest such study to date, and we took additional factors into account, such as birth weight, to ensure that these differences are specific to the brain and not due to general size differences between the sexes.”

The research team is now investigating potential prenatal factors that might contribute to these differences. “To understand why males and females show differences in their relative grey and white matter volume, we are now studying the conditions of the prenatal environment, using population birth records, as well as in vitro cellular models of the developing brain,” explains Dr. Tsompanidis.

Importantly, the researchers stress that these findings represent group averages rather than individual characteristics.

“The differences we see do not apply to all males or all females, but are only seen when you compare groups of males and females together,” says Dr. Carrie Allison, Deputy Director of the Autism Research Centre. “There is a lot of variation within, and a lot of overlap between, each group.”

These findings mark a significant step forward in understanding early brain development, while raising new questions about the role of prenatal factors in shaping neurological differences. The research team’s ongoing investigations into prenatal conditions and cellular models may soon provide even more insights into how these sex-based variations emerge.

“These differences do not imply the brains of males and females are better or worse. It’s just one example of neurodiversity,” says Professor Simon Baron-Cohen, Director of the Autism Research Centre. “This research may be helpful in understanding other kinds of neurodiversity, such as the brain in children who are later diagnosed as autistic, since this is diagnosed more often in males.”

Source : https://studyfinds.org/how-male-and-female-brains-differ-at-birth/

Gender shock: Study reveals men, not women, make more emotional money choices

(Credit: © Yuri Arcurs | Dreamstime.com)

When it comes to making financial decisions, conventional wisdom suggests keeping emotions out of the equation. But new research reveals that men, contrary to traditional gender stereotypes, may be significantly more susceptible to letting emotions influence their financial choices than women.

A study led by the University of Essex challenges long-held assumptions about gender and emotional decision-making. The research explores how emotions generated in one context can influence decisions in completely unrelated situations – a phenomenon known as the emotional carryover effect.

“These results challenge the long-held stereotype that women are more emotional and open new avenues for understanding how emotions influence decision-making across genders,” explains lead researcher Dr. Nikhil Masters from Essex’s Department of Economics.

Working with colleagues from the Universities of Bournemouth and Nottingham, Masters designed an innovative experiment comparing how different types of emotional stimuli affect people’s willingness to take financial risks. They contrasted a traditional laboratory approach targeting a single emotion (fear) with a more naturalistic stimulus based on real-world events that could trigger multiple emotional responses.

The researchers recruited 186 university students (100 women and 86 men) and randomly assigned them to one of three groups. One group watched a neutral nature documentary about the Great Barrier Reef. Another group viewed a classic fear-inducing clip from the movie “The Shining,” showing a boy searching for his mother in an empty corridor with tense background music. The third group watched actual news footage about the BSE crisis (commonly known as “mad cow disease”) from the 1990s, a real food safety scare that generated widespread public anxiety.

After watching their assigned videos, participants completed decision-making tasks involving both risky and ambiguous financial choices using real money. In the risky scenario, they had to decide between taking guaranteed amounts of money or gambling on a lottery with known 50-50 odds. The ambiguous scenario was similar, but participants weren’t told the odds of winning.

The results revealed striking gender differences. Men who watched either the horror movie clip or the BSE footage subsequently made more conservative financial choices compared to those who watched the neutral nature video. This effect was particularly pronounced for those who saw the BSE news footage, and even stronger when the odds were ambiguous rather than clearly defined.

Perhaps most surprisingly, women’s financial decisions remained remarkably consistent regardless of which video they watched. The researchers found that while women reported experiencing similar emotional responses to the videos as men did, these emotions didn’t carry over to influence their subsequent financial choices.

The study challenges previous assumptions about how specific emotions like fear influence risk-taking behavior. While earlier studies suggested that fear directly leads to more cautious decision-making, this new research indicates the relationship may be more complex. Even when the horror movie clip successfully induced fear in participants, individual variations in reported fear levels didn’t correlate with their financial choices.

Instead, the researchers discovered that changes in positive emotions may play a more important role than previously thought. When positive emotions decreased after watching either the horror clip or BSE footage, male participants became more risk-averse in their financial decisions.

The study also demonstrated that emotional effects on decision-making can be even stronger when using realistic stimuli that generate multiple emotions simultaneously, compared to artificial laboratory conditions designed to induce a single emotion. This suggests that real-world emotional experiences may have more powerful influences on our financial choices than controlled laboratory studies have indicated.

The research team is now investigating why only men appear to be affected by these carryover effects. “Previous research has shown that emotional intelligence helps people to manage their emotions more effectively. Since women generally score higher on emotional intelligence tests, this could explain the big differences we see between men and women,” explains Dr. Masters.

These findings could have significant implications for understanding how major news events or crises might affect financial markets differently across gender lines. They also suggest the potential value of implementing “cooling-off” periods for important financial decisions, particularly after exposure to emotionally charged events or information.

“We don’t make choices in a vacuum and a cooling-off period might be crucial after encountering emotionally charged situations,” says Dr. Masters, “especially for life-changing financial commitments like buying a home or large investments.”

Source : https://studyfinds.org/study-men-not-women-make-more-emotional-money-choices/

Danger in drinking water? Fluoride linked to lower IQ scores in children

(Photo by Tatevosian Yana on Shutterstock)

In a discovery that could reshape how we think about water fluoridation, researchers have uncovered a troubling pattern across 10 countries and nearly 21,000 children: higher fluoride exposure consistently correlates with lower IQ scores. The meta-analysis raises critical questions about the balance between preventing tooth decay and protecting cognitive development.

While fluoride has long been added to public drinking water systems to prevent tooth decay, this research suggests the need to carefully weigh the dental health benefits against potential developmental risks. In the United States, the recommended fluoride concentration for community water systems is 0.7 mg/L, with regulatory limits set at 4.0 mg/L by the Environmental Protection Agency (EPA).

The research team, led by scientists from the National Institute of Environmental Health Sciences, examined studies from ten different countries, though notably none from the United States. The majority of the research (45 studies) came from China, with others from Canada, Denmark, India, Iran, Mexico, New Zealand, Pakistan, Spain, and Taiwan.

Published in JAMA Pediatrics, the findings paint a consistent picture across different types of analyses. When comparing groups with higher versus lower fluoride exposure, children in the higher exposure groups showed significantly lower IQ scores. For every 1 mg/L increase in urinary fluoride levels, researchers observed an average decrease of 1.63 IQ points.

This effect size might seem small, but population-level impacts can be substantial. The researchers note that a five-point decrease in population IQ would nearly double the number of people classified as intellectually disabled, highlighting the potential public health significance of their findings.

The study employed three different analytical approaches to examine the relationship between fluoride and IQ. First, they compared mean IQ scores between groups with different exposure levels. Second, they analyzed dose-response relationships to understand how IQ scores changed with increasing fluoride concentrations. Finally, they examined individual-level data to calculate precise estimates of IQ changes per unit increase in fluoride exposure.

Of particular concern, the inverse relationship between fluoride exposure and IQ remained significant even at relatively low exposure levels. When researchers restricted their analysis to studies with fluoride concentrations below 2 mg/L (closer to levels found in fluoridated water systems), they still found evidence of cognitive impacts.

The implications of these findings are especially relevant for the United States, where fluoridated water serves about 75% of people using community water systems. While no U.S. studies were included in this analysis, the researchers note that significant inequalities exist in American water fluoride levels, particularly affecting Hispanic and Latino communities.

The study’s findings arrive at a crucial moment in public health policy. While water fluoridation has been hailed as one of the great public health achievements of the 20th century for its role in preventing tooth decay, this research suggests the need for a careful reassessment of fluoride exposure guidelines, particularly for vulnerable populations like pregnant women and young children.

Source : https://studyfinds.org/danger-in-drinking-water-flouride-linked-to-lower-iq-scores-in-children/

The disturbing trend discovered in 166,534 movies over past 50 years

(Credit: Prostock-studio on Shutterstock)

Movies are getting deadlier – at least in terms of their dialogue. A new study analyzing over 160,000 English-language films has revealed a disturbing trend: characters are talking about murder and killing more frequently than ever before, even in movies that aren’t focused on crime.

Researchers from the University of Maryland, University of Pennsylvania, and The Ohio State University examined movie subtitles spanning five decades, from 1970 to 2020, to track how often characters used words related to murder and killing. What they found was a clear upward trajectory that mirrors previous findings about increasing visual violence in films.

“Characters in noncrime movies are also talking more about killing and murdering today than they did 50 years ago,” says Brad Bushman, corresponding author of the study and professor of communication at The Ohio State University, in a statement. “Not as much as characters in crime movies, and the increase hasn’t been as steep. But it is still happening. We found increases in violence across all genres.”

By applying sophisticated natural language processing techniques, the team calculated the percentage of “murderous verbs” – variations of words like “kill” and “murder” – compared to the total number of verbs used in movie dialogue. They deliberately took a conservative approach, excluding passive phrases like “he was killed,” negations such as “she didn’t kill,” and questions like “did he murder someone?” to focus solely on characters actively discussing committing violent acts.
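The study's exact pipeline isn't published here, but the filtering idea the researchers describe – count only active murderous verbs, excluding passives, negations and questions – can be sketched as a toy pass over subtitle lines. The regexes and exclusion rules below are illustrative assumptions, not the authors' code:

```python
import re

# Hypothetical patterns for illustration only
MURDER_VERBS = re.compile(r"\b(kill(?:s|ed|ing)?|murder(?:s|ed|ing)?)\b", re.IGNORECASE)
NEGATION = re.compile(r"\b(not|never|didn't|don't|doesn't|won't|wouldn't)\b", re.IGNORECASE)
PASSIVE = re.compile(r"\b(was|were|been|be)\s+(kill|murder)ed\b", re.IGNORECASE)

def murderous_verb_lines(subtitle_lines):
    """Count subtitle lines with an active murderous verb, skipping
    questions, negations, and passive forms like 'he was killed'."""
    count = 0
    for line in subtitle_lines:
        if not MURDER_VERBS.search(line):
            continue
        if line.strip().endswith("?"):   # questions excluded
            continue
        if NEGATION.search(line):        # negations excluded
            continue
        if PASSIVE.search(line):         # passive voice excluded
            continue
        count += 1
    return count

lines = ["I'll kill him.", "Did he murder someone?",
         "She didn't kill anyone.", "He was killed."]
print(murderous_verb_lines(lines))  # → 1
```

Only the first line survives the filters, matching the paper's conservative intent of counting characters actively discussing violence.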

“Our findings suggest that references to killing and murder in movie dialogue not only occur far more frequently than in real life but are also increasing over time,” explains Babak Fotouhi, lead author of the study and adjunct assistant research professor in the College of Information at the University of Maryland.

“We focused exclusively on murderous verbs in our analysis to establish a lower bound in our reporting,” notes Amir Tohidi, a postdoctoral researcher at the University of Pennsylvania. “Including less extreme forms of violence would result in a higher overall count of violence.”

Nearly 7% of all movies analyzed contained these murderous verbs in their dialogue. The findings demonstrate a steady increase in such language over time, particularly in crime-focused films. Male characters showed the strongest upward trend in violent dialogue, though female characters also demonstrated a significant increase in non-crime movies.

This rising tide of violent speech wasn’t confined to obvious genres like action or thriller films. Even movies not centered on crime showed a measurable uptick in murder-related dialogue over the 50-year period studied. This suggests that casual discussion of lethal violence has become more normalized across all types of movies, potentially contributing to what researchers call “mean world syndrome” – where heavy media consumption leads people to view the world as more dangerous and threatening than it actually is.

The findings align with previous research showing that gun violence in top movies has more than doubled since 1950, and more than tripled in PG-13 films since that rating was introduced in 1985. What makes this new study particularly noteworthy is its massive scale – examining dialogue from more than 166,000 films provides a much more comprehensive picture than earlier studies that looked at smaller samples.

Movie studios operate in an intensely competitive market where they must fight for audience attention. “Movies are trying to compete for the audience’s attention and research shows that violence is one of the elements that most effectively hooks audiences,” Fotouhi explains.

“The evidence suggests that it is highly unlikely we’ve reached a tipping point,” Bushman warns. Decades of research have demonstrated that exposure to media violence can influence aggressive behavior and mental health in both adults and children. This can manifest in various ways, from direct imitation of observed violent acts to a general desensitization toward violence and decreased empathy for others.

As content platforms continue to multiply and screen time increases, particularly among young people, these findings raise important questions about the cumulative impact of exposure to violent dialogue in entertainment media. The researchers emphasize that their results highlight the crucial need for promoting mindful consumption and media literacy, especially among vulnerable populations like children.

Source : https://studyfinds.org/movie-violence-dialogue-disturbing-trend/

Even small diet tweaks can lead to sustainable weight loss – here’s how

Woman stepping on scale (© Siam – stock.adobe.com)

It’s a well-known fact that to lose weight, you either need to eat less or move more. But how many calories do you really need to cut out of your diet each day to lose weight? It may be less than you think.

To determine how much energy (calories) your body requires, you need to calculate your total daily energy expenditure (TDEE). This is made up of your basal metabolic rate (BMR) – the energy needed to sustain your body’s metabolic processes at rest – plus the energy you burn through physical activity. Many online calculators can help determine your daily calorie needs.

If you reduce your energy intake (or increase the amount you burn through exercise) by 500-1,000 calories per day, you’ll see a weekly weight loss of around one to two pounds (0.45-0.9kg).

But studies show that even small calorie deficits (of 100-200 calories daily) can lead to long-term, sustainable weight-loss success. And although you might not lose as much weight in the short-term by only decreasing calories slightly each day, these gradual reductions are more effective than drastic cuts as they tend to be easier to stick with.
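The arithmetic behind these figures follows the widely cited rule of thumb that a cumulative deficit of roughly 3,500 kcal corresponds to about one pound (0.45kg) of body fat; a minimal sketch (an approximation only – real-world loss varies with the metabolic adaptations discussed below):

```python
# Rule-of-thumb conversion: ~3,500 kcal deficit per pound of fat lost.
KCAL_PER_POUND = 3500

def weekly_loss_lb(daily_deficit_kcal: float) -> float:
    """Estimated pounds lost per week for a given daily calorie deficit."""
    return daily_deficit_kcal * 7 / KCAL_PER_POUND

for deficit in (100, 200, 500, 1000):
    print(f"{deficit} kcal/day deficit -> ~{weekly_loss_lb(deficit):.1f} lb/week")
```

On this arithmetic, the small 100-200 kcal deficits the studies favor still yield around 0.2-0.4 lb per week – slower, but easier to sustain.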

Hormonal changes

When you decrease your calorie intake, the body’s BMR often decreases. This phenomenon is known as adaptive thermogenesis. This adaptation slows down weight loss so the body can conserve energy in response to what it perceives as starvation. This can lead to a weight-loss plateau – even when calorie intake remains reduced.

Caloric restriction can also lead to hormonal changes that influence metabolism and appetite. For instance, thyroid hormones, which regulate metabolism, can decrease – leading to a slower metabolic rate. Additionally, leptin levels drop, reducing satiety, increasing hunger and decreasing metabolic rate.

Ghrelin, known as the “hunger hormone,” also increases when caloric intake is reduced, signaling the brain to stimulate appetite and increase food intake. Higher ghrelin levels make it challenging to maintain a reduced-calorie diet, as the body constantly feels hungrier.

Insulin, which helps regulate blood sugar levels and fat storage, can become more effective (improved insulin sensitivity) when we reduce calorie intake. But sometimes insulin levels decrease instead, affecting metabolism and reducing daily energy expenditure. Cortisol, the stress hormone, can also spike – especially during a significant caloric deficit. This may break down muscle tissue and promote fat retention, particularly around the stomach.

Lastly, hormones such as peptide YY and cholecystokinin, which make us feel full when we’ve eaten, can decrease when we lower calorie intake. This may make us feel hungrier.

Fortunately, there are many things we can do to address these metabolic adaptations so we can continue losing weight.

Weight loss strategies

Maintaining muscle mass (either through resistance training or eating plenty of protein) is essential to counteract the physiological adaptations that slow weight loss down. This is because muscle burns more calories at rest compared to fat tissue – which may help mitigate decreased metabolic rate.

Gradual caloric restriction (reducing daily intake by only around 200-300 calories), focusing on nutrient-dense foods (particularly those high in protein and fibre), and eating regular meals can also help to mitigate these hormonal challenges.

Source : https://studyfinds.org/small-diet-tweaks-sustainable-weight-loss/

 

A proven way to stay younger longer — and all it takes is an hour each week

(© New Africa – stock.adobe.com)

Could you find an hour a week to devote to slowing your biological aging? You’ll get additional benefits too – adding not just more years to your life but more life to your years. That hour can create a sense of purpose, improve mental health, give you a psychological lift, and boost your social connectedness – and you’ll know you’re making the world a better place. All you have to do is volunteer. If you can find a few hours a week, the benefits are even greater.

A study published in this month’s issue of Social Science & Medicine found that volunteering for as little as an hour a week is linked to slower biological aging.

Biologic age

Biologic age refers to the age of a body’s cells and tissues – and how quickly, over time, they are aging – compared with the body’s chronologic age. The most common way to assess biological age, known as epigenetic testing, examines how your behaviors and environment change the expression of your DNA.

Why volunteering is associated with slower aging

Experts explain that volunteering’s significant effect on biologic aging is multifactorial, with physical, social, and psychological benefits.

Volunteering often includes physical activity, like walking. Social connections are vital; we’re programmed for connectedness. Social connections decrease stress and improve cognitive function. According to the study authors, volunteering can also create a sense of purpose, improve mental health, and buffer any loss of important roles, like spouse or parent, as we age.

Family Volunteering

When my son was six, we volunteered at a soup kitchen in a less-affluent part of Detroit. On the Saturday after Thanksgiving, he was right in the thick of making gallons of turkey soup and hundreds of cheese or peanut butter and jelly sandwiches. Finally, he grabbed his own PB&J and munched out with our guests. It’s one of my favorite memories.

Family volunteering (whatever “family” means to you) is a win for everyone. It strengthens families and communities. When family members unite for a worthy cause, their collective power is greater than just adding together the strengths of individuals.

Children will develop compassion and tolerance. They may acquire new skills. More importantly, volunteering provides models from which children learn to respect and serve others. They discover the gratitude that flows only from giving. Children who volunteer are more likely to volunteer as adults and, later on in life, create their own traditions with their children.

Parents get to spend more time with their kids, instilling important values with action; those values run deeper than words could ever reach. Include your kids in planning. You may discover what’s truly important to them.

Nonprofit agencies, understaffed and overstressed, can do little without volunteers. Virtually everyone can find a nonprofit that matches their passion.

Getting started

To decide if volunteering is right for your family, consider:

  • About what issues are you passionate?
  • What are your children’s ages?
  • Who would you like to help?
  • What does your family enjoy doing together?
  • How frequently can you volunteer?
  • What skills and talents can your family offer?
  • What do you want your family to learn from the experience?

There are innumerable causes in which you can make a difference. About 3.5 million people a year will experience homelessness; about 40 percent are kids. Since 1989, the number of beds available in shelters has tripled. Collect toiletries. Give art and school supplies. Provide clothing and transportation.

Every day, 10% of Americans are hungry. Have a canned food drive. Make bag lunches for kids in a homeless shelter. Have a party – with an entrance fee of a can of food.

The elderly often need help the most. Adopt a grandparent. Deliver food – drive for Meals on Wheels. Look at photos and listen to stories. Give manicures and pedicures. Do seasonal yard work, rake leaves or shovel snow. Write letters. Play board games. Read books or newspapers. Bring your pet to visit. Write life stories. Provide transportation for medical appointments. Run errands. Make small home repairs.

I had elderly neighbors next door. When I cleared snow and ice (which was plentiful) from my car, I’d clear their car as well. Mrs. Neighbor watched through the living room window. Sometime later, she told me that she had a remote device to start and clear her car from inside her home! What can you do but laugh?

Source : https://studyfinds.org/volunteering-proven-way-stay-younger-longer/

‘Simple nasal swab’ could revolutionize childhood asthma treatment

(Credit: © Alena Stalmashonak | Dreamstime.com)

A novel diagnostic test using just a nasal swab could transform how doctors diagnose and treat childhood asthma. Researchers at the University of Pittsburgh have developed this non-invasive approach that, for the first time, allows physicians to precisely identify different subtypes of asthma in children without requiring invasive procedures.

Until now, determining the specific type of asthma a child has typically required bronchoscopy, an invasive procedure performed under general anesthesia to collect lung tissue samples. This limitation has forced doctors to rely on less accurate methods like blood tests and allergy screenings, potentially leading to suboptimal treatment choices.

“Because asthma is a highly variable disease with different endotypes, which are driven by different immune cells and respond differently to treatments, the first step toward better therapies is accurate diagnosis of endotype,” says senior author Dr. Juan Celedón, a professor of pediatrics at the University of Pittsburgh and chief of pulmonary medicine at UPMC Children’s Hospital of Pittsburgh, in a statement.

3 subtypes of asthma

The new nasal swab test analyzes the activity of eight specific genes associated with different types of immune responses in the airways. This genetic analysis reveals which of three distinct asthma subtypes, or endotypes, a patient has: T2-high (involving allergic inflammation), T17-high (showing a different type of inflammatory response), or low-low (exhibiting minimal inflammation of either type).
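The three-way classification the article describes can be sketched as simple decision logic; the score names and threshold below are hypothetical placeholders, since the published eight-gene assay is not reproduced here:

```python
# Hypothetical sketch of the endotype assignment described in the article.
# "t2_score" and "t17_score" stand in for aggregate activity of the T2- and
# T17-associated genes; the threshold is an invented placeholder, not the
# study's actual cutoff.
def classify_endotype(t2_score: float, t17_score: float,
                      threshold: float = 1.0) -> str:
    """Assign one of the three asthma endotypes from two activity scores."""
    if t2_score >= threshold:
        return "T2-high"    # allergic inflammation
    if t17_score >= threshold:
        return "T17-high"   # a different inflammatory response
    return "low-low"        # minimal inflammation of either type

print(classify_endotype(1.8, 0.2))
```

The point of the real assay is the same as this toy version: turning gene-expression readings from a nasal swab into one of three treatment-relevant labels without a bronchoscopy.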

The research team validated their approach across three separate studies involving 459 young people with asthma, focusing particularly on Puerto Rican and African American youth, populations that experience disproportionately higher rates of asthma-related emergency room visits and complications. According to the researchers, Puerto Rican children have emergency department and urgent care visit rates of 23.5% for asthma, while Black children have rates of 26.6% — both significantly higher than the 12.1% rate among non-Hispanic white youth.

The findings, published in JAMA, challenge long-held assumptions about childhood asthma. While doctors have traditionally believed that most cases were T2-high, the nasal swab analysis revealed this type appears in only 23-29% of participants. Instead, T17-high asthma accounted for 35-47% of cases, while the low-low type represented 30-38% of participants.

“These tests allow us to presume whether a child has T2-high disease or not,” explained Celedón. “But they are not 100% accurate, and they cannot tell us whether a child has T17-high or low-low disease. There is no clinical marker for these two subtypes. This gap motivated us to develop better approaches to improve the accuracy of asthma endotype diagnosis.”

Precision medicine for patients

This breakthrough carries significant implications for treatment. Currently, powerful biological medications exist for T2-high asthma, but no available treatments specifically target T17-high or low-low types. The availability of this new diagnostic test could accelerate research into treatments for these previously understudied forms of asthma.

“We have better treatments for T2-high disease, in part, because better markers have propelled research on this endotype,” said Celedón. “But now that we have a simple nasal swab test to detect other endotypes, we can start to move the needle on developing biologics for T17-high and low-low disease.”

The test could also help researchers understand how asthma evolves throughout childhood and adolescence. Celedón noted that one of the “million-dollar questions in asthma” involves understanding why the condition affects children differently as they age.

“Before puberty, asthma is more common in boys, but the incidence of asthma goes up in females in adulthood. Is this related to endotype? Does endotype change over time or in response to treatments? We don’t know,” he says. “But now that we can easily measure endotype, we can start to answer these questions.”

Dr. Gustavo Matute-Bello, acting director of the Division of Lung Diseases at the National Heart, Lung, and Blood Institute, emphasizes the potential impact of this diagnostic advancement. “Having tools to test which biological pathways have a major role in asthma in children, especially those who have a disproportionate burden of disease, may help achieve our goal of improving asthma outcomes,” he says. “This research has the potential to pave the way for more personalized treatments, particularly in minority communities.”

Source : https://studyfinds.org/simple-nasal-swab-could-revolutionize-childhood-asthma-treatment/

Do You Believe in Life After Death? These Scientists Study It.

In an otherwise nondescript office in downtown Charlottesville, Va., a small leather chest sits atop a filing cabinet. Within it lies a combination lock, unopened for more than 50 years. The man who set it is dead.

On its own, the lock is unremarkable — the kind you might use at the gym. The code, a mnemonic of a six-letter word converted into numbers, was known only to the psychiatrist Dr. Ian Stevenson, who set it long before he died, and years before he retired as director of the Division of Perceptual Studies, or DOPS, a parapsychology research unit he founded in 1967 within the University of Virginia’s school of medicine.

Dr. Stevenson called this experiment the Combination Lock Test for Survival. He reasoned that if he could transmit the code to someone from the grave, it might help answer the questions that had consumed him in life: Is communication from the “beyond” possible? Can the personality survive bodily death? Or, simply: Is reincarnation real?

This last conundrum — the survival of consciousness after death — continues to be at the forefront of the division’s research. The team has logged hundreds of cases of children who claim to remember past lives from all continents except Antarctica. “And that’s only because we haven’t looked for cases there,” said Dr. Jim Tucker, who has been investigating claims of past lives for more than two decades. He recently retired after having been the director of DOPS since 2015.

It was an unexpected career path to begin with.

“As far as reincarnation itself goes, I never had any particular interest in it,” said Dr. Tucker, who set out to solely become a child psychiatrist and was, at one point, the head of U.Va.’s Child and Family Psychiatry Clinic. “Even when I was training, it never occurred to me that I’d end up doing this work.”

Now, at 64 years old, after traveling the world to record cases of possible past life recollections, and with books and papers of his own on the subject of past lives, he has left the position.

“There’s a level of stress in medicine, and in academics,” he reflected. “There are always things you should be doing, papers you should be writing, prescriptions you should be giving. I enjoyed my day to day work, both in the clinic and at DOPS, but you reach a point where you’re ready not to have so many responsibilities and demands.”

According to a job listing issued by the medical school, on top of their academic reputation, the ideal candidate to replace Dr. Tucker must have “a track record of rigorous investigation of extraordinary human experiences, such as the mind’s relationship to the body and the possibility of consciousness surviving physical death.”

None of the eight principal team members have the required academic status to undertake the role, making it necessary to find someone externally.

“I think there’s a feeling that it would be rejuvenating for the group to have an outside person come in,” said Dr. Jennifer Payne, vice-chair of research at the department of psychiatry, who leads the selection committee.

Scientists That Have Strayed From the Usual Path

Dr. Tucker was running a busy practice when he first learned about DOPS. It was 1996 and a local newspaper, The Daily Progress in Charlottesville, had profiled Dr. Stevenson after he received funding to interview individuals about their near-death experiences. Entranced by the pioneering work, Dr. Tucker began volunteering at the division before joining as a permanent researcher.

Each of the division’s researchers has committed their career — and, to some extent, risked their professional reputation — to the study of the so-called paranormal. This includes near-death and out-of-body experiences, altered states of consciousness, and past lives research, which all come under the portmanteau of “parapsychology.” They are scientists that have strayed from the usual path.

DOPS is a curious institution. There are only a few other labs in the world undertaking similar lines of research — the Koestler Parapsychology Unit at the University of Edinburgh, for instance — with DOPS being by far the most prominent. The only other major parapsychology unit in the United States was Princeton’s Engineering Anomalies Research Laboratory, or PEAR, which focused on telekinesis and extrasensory perception. That unit was shuttered in 2007.

While it is technically part of the U.Va., DOPS occupies four spacious would-be condominiums inside a residential building. It is notably distanced from the university’s leafy main campus, and at least a couple of miles from the medical school.

“Nobody knows we’re here,” said Dr. Bruce Greyson, 78, a former director of DOPS and a professor emeritus of psychiatry and neurobehavioral sciences at U.Va., who started working with Dr. Stevenson in the late 1970s. “Ian was very cautious about that, because he had faced a lot of prejudice,” Dr. Greyson said. “He kept a very low profile.”

Dr. Greyson received a lot of pushback before joining DOPS. He had worked at the University of Michigan for eight years early in his career, but his interest in near-death experiences began to ruffle feathers, much like it had for Dr. Stevenson.

“They told me, point blank, that I wouldn’t have a future there if I did near-death research, because you can’t measure that in a test tube,” he said. “Unless I could quantify it by a biological measure, they didn’t want to hear about it.” He left Michigan for the University of Connecticut, where he spent 11 years, and then found his way to DOPS.

The atmosphere within DOPS is one of studious calm, and there are only a few signs of the team’s activities. In the basement laboratory one finds a copper-lined Faraday cage used to assess subjects of out-of-body experiences, and foam mannequin heads sporting electroencephalogram caps. Upstairs, running the full length of a wall in the Ian Stevenson Memorial Library, which holds over 5,000 books and papers on past lives research, is a glass display case containing a collection of knives, swords and mallets – the kinds of weapons described by children who recalled a violent end in a previous life.

“It’s not the actual weapon, but the kind of weapon used,” explained Dr. Tucker. Each object is labeled with intricate, sometimes gory, detail. One display told the story of a young girl from Burma, Ma Myint Thein, who was born with deformities of her fingers and birthmarks across her back and neck. “According to villagers,” the label reads, “the man whose life she remembered being had been murdered, his fingers chopped off and his throat slashed by a sword.” It is accompanied by a photograph of the girl’s hands, her right missing two fingers.

That children who claim to remember past lives are most frequently found in South Asia, where reincarnation is a core tenet of many religious beliefs, has been used by critics to debunk the studies. After all, surely it’s all too easy to find corroborative evidence in places with a pre-existing belief in reincarnation.

The question of life after death has been an existential preoccupation for humans throughout time, however, and reincarnation is a central tenet of belief in many cultures: Buddhism, where there is thought to be a 49-day journey between death and rebirth; Hinduism, with its concept of samsara, the endless cycle of rebirth; and many Native American and West African traditions, which share similar core concepts of the soul or spirit moving from one life to the next. Meanwhile, a 2023 Pew Research survey found that a quarter of Americans believe it is “definitely or probably true” that people who have died can be reincarnated.

When it comes to past life claims, the DOPS team works on cases that almost always have come directly from parents.

Common features in children who claim to have led a previous life include verbal precocity and mannerisms at odds with those of the rest of the family. Unexplained phobias or aversions are also thought to carry over from a past existence. In some cases, the remembrances are strikingly clear: the names, professions and quirks of a different set of relatives, the particularities of the streets they used to live on, and sometimes even obscure historical events – details the child couldn’t plausibly have known about.

One of the most famous cases the team worked on was that of James Leininger, an American boy who remembered being a World War II fighter pilot shot down near Japan. The case drew a great deal of attention to DOPS, but also brought with it numerous detractors.

Ben Radford, the deputy editor of Skeptical Inquirer, a magazine dedicated to scientific skepticism, believes that wishful thinking and general death anxiety have fueled an increased interest in reincarnation, and finds flaws in the DOPS research methodology, which he often dissects in his blog. He said, “The fact is, no matter how sincere the person is, often recovered memories are false.”

‘The Evidence Is Not Flawless’

Remembered by many as a dignified man with a penchant for three-piece suits, Dr. Stevenson lived for his research. He almost never took time off. “I had to swing by the office once on New Year’s Eve and there was one car in the lot, and it was his,” Dr. Tucker recalled.

Born in 1918, Dr. Stevenson, who was Canadian and graduated from St. Andrews with a degree in history before studying biochemistry and psychiatry at McGill University, had served as chair of the department of psychiatry at U.Va. for 10 years until 1967.

By the early 1960s he had become disillusioned by conventional medicine. In an interview with The New York Times in 1999, he said that he had been drawn to studying past lives through his “discontent with other explanations of human personality. I wasn’t satisfied with psychoanalysis or behaviorism or, for that matter, neuroscience. Something seemed to be missing.”

And so he began recording potential cases of reincarnation, which he would come to call “cases of the reincarnation type,” or CORT. It was one of his initial CORT research papers, from a 1966 trip to India, that caught the attention of Chester F. Carlson, the inventor of the technology behind Xerox photocopying machines. It was Mr. Carlson’s generous financial assistance that enabled Dr. Stevenson to leave his role at the medical school and focus full-time on past lives research.

The dean of the medical school at the time, Kenneth Crispell, didn’t approve of this foray into the paranormal. He was happy to see Dr. Stevenson resign from his spot in the department of psychiatry, and, believing in academic freedom, agreed to the formation of a small research division. However, any hope Dr. Crispell had that Dr. Stevenson and his unorthodox ideas would disappear into the academic shadows was quickly dashed: Mr. Carlson died of a heart attack in 1968 and in his will he bequeathed $1 million to Dr. Stevenson’s endeavor.

While not all of the attention was positive in the division’s early years, some individuals in the science community were intrigued. “Either Dr. Stevenson is making a colossal mistake, or he will be known as the Galileo of the 20th century,” the psychiatrist Harold Lief wrote in a 1977 article for the Journal of Nervous and Mental Disease.

To this day, DOPS is still financed entirely by private donations. In October it was announced that the division had received the first installment of a $1 million estate gift from The Philip B. Rothenberg Legacy Fund, which will be used to finance early-career researchers. Other supporters have included the Bonner sisters, Priscilla Bonner-Woolfan and Margerie Bonner-Lowry — silent screen actresses of the 1920s, whose endowment continues to fund the DOPS directorship. Another unlikely supporter is the actor John Cleese, who first encountered the division at the Esalen Institute, a retreat and intentional community located in Big Sur, Calif.

“These people are behaving like good scientists,” Mr. Cleese said in a phone interview. “Good scientists are after the truth: they don’t just want to be right. I think it is absolutely astonishing and quite disgraceful, the way that orthodox contemporary, materialistic reductionist theory treats all the things — and there are so many of them — that they can’t begin to explain.”

In the early years of the department, Dr. Stevenson traveled the world extensively, recording more than 2,500 cases of children recalling past lives. In this pre-internet time, discovering so many similar accounts and trends served to strengthen his thesis. The findings from these excursions, collected in Dr. Stevenson’s neat handwriting, are stored by country in filing cabinets and are slowly being digitized.

From this database, researchers have drawn findings they believe are noteworthy. The strongest cases, according to the DOPS researchers, have been found in children under the age of 10, and the majority of remembrances tend to occur between the ages of 2 and 6, after which they appear to fade. The median time between death and rebirth is about 16 months, a period the researchers see as a form of intermission. Very often, the child has memories that match up to the life of a deceased relative.

And yet for all of this meticulous work, Dr. Stevenson was aware of the limitations of past lives research. “The evidence is not flawless and it certainly does not compel such a belief,” he explained in a lecture at The University of Southwestern Louisiana (now the University of Louisiana at Lafayette) in 1989. “Even the best of it is open to alternative interpretations, and one can only censure those who say there is no evidence whatsoever.”

“Ian thought reincarnation was the best explanation, but he wasn’t positive,” said Dr. Greyson. “He thought a lot of the cases may be something else. It might be a kind of possession, it might even be delusion. There are lots of different possibilities. It may be clairvoyance, or picking up the information from some other sources that you’re not aware of.”

After spending more than half his life studying past lives, Dr. Stevenson retired from DOPS in 2002, handing the directorial baton to Dr. Greyson. Though he kept a watchful eye on proceedings from afar, offering guidance when solicited, he never set foot in the division again. He died of pneumonia five years later, at 88 years old.

‘Many of the Memories Are Difficult’

Each year DOPS receives more than 100 emails from parents regarding something their child has said. Reaching out to the division is often an attempt at clarity, but the researchers never promise answers. Their only promise is to take these claims seriously, “but as far as the case having enough to investigate, enough to potentially verify that it matches with a past life, those are very few,” said Dr. Tucker.

This summer, Dr. Tucker drove to the rural town of Amherst, Va., to visit a case of possible past life remembrance. He was joined by his colleagues Marieta Pehlivanova and Philip Cozzolino, who would be taking over his research in the new year.

Ms. Pehlivanova, 43, who specializes in near-death experiences and children who remember past lives, has been at DOPS for seven years and is launching a study of women who’ve had near-death experiences during childbirth. When she tells people what she does, they find the subject matter both fascinating and disturbing. “We’ve had emails from people saying we’re doing the work of the devil,” she said.

Upon arrival at the family’s home, the team was shown into the kitchen. The child, 3 years old and the youngest of four home-schooled siblings, peeked from behind her mother’s legs, looking up shyly. She wore a baggy Minnie Mouse shirt and went to perch between her grandparents on a banquette, watching everyone take their seats around the dining table.

“Let’s start from the very beginning,” Dr. Tucker said after the paperwork had been signed by Misty, the child’s 28-year-old mother. “It all began with the puzzle piece?”

A few months earlier, mother and child had been looking at a wooden puzzle of the United States, with each state represented by a cartoon of a person or object. Misty’s daughter pointed excitedly at the jagged piece representing Illinois, which had an abstract illustration of Abraham Lincoln.

“That’s Pom,” her daughter exclaimed. “He doesn’t have his hat on.”

This was indeed a drawing of Abraham Lincoln without his hat, but more important, there was no name under the image indicating who he was. Following weeks of endless talk about “Pom” bleeding out after being hurt and being carried to a too-small bed — which the family had started to think could be related to Lincoln’s assassination — they began to consider that their daughter had been present for the historical moment. This was despite the family having no prior belief in reincarnation, nor any particular interest in Lincoln.

On the drive to Amherst, Dr. Tucker confessed his hesitation in taking on this particular case — or any case connected to a famous individual. “If you say your child was Babe Ruth, for example, there would be lots of information online,” he said. “When we get those cases, usually it’s that the parents are into it. Still, it’s all a little strange to be coming out of a three-year-old’s mouth. Now if she had said her daughter was Lincoln, I probably wouldn’t have made the trip.”

Lately, Dr. Tucker has been giving the children picture tests. “Where we think we know the person they’re talking about, we’ll show them a picture from that life, and then show them another picture — a dummy picture — from somewhere else, to see if they can pick out the right one,” he said. “You have to have a few pictures for it to mean anything. I had one where the kid remembered dying in Vietnam. I showed him eight pairs of pictures and a couple of them he didn’t make any choice on, but the others he was six out of six. So, you know, that makes you think. But this girl is so young, that I don’t think we can do that.”
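Dr. Tucker’s “six out of six” remark invites a quick back-of-the-envelope check; a minimal sketch, assuming each answered pair amounts to an independent 50/50 guess (an assumption – the article does not analyze the odds):

```python
# Chance of guessing all six answered picture pairs correctly, if each pair
# were a pure coin flip (a simplifying assumption, not the study's analysis).
p_all_six = 0.5 ** 6
print(f"{p_all_six:.4f}  (about 1 in {int(1 / p_all_six)})")
```

Under that assumption the odds come to about 1 in 64 – small enough to be suggestive, though far from proof, which is roughly the spirit of Dr. Tucker’s “that makes you think.”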

On this occasion, the little girl decided not to engage, and pretended to be asleep. Then she actually fell asleep.

“She’ll come around to it soon,” Misty assured the researchers. As the minutes ticked by, Dr. Tucker decided the picture test would be best left for another time. The child was still asleep when the researchers returned to their car.

After the first meeting, the only course of action is to do nothing and wait to see whether the memories develop into something more concrete. Because past-lives research focuses on spontaneous recollections, the team is largely unconvinced by hypnotic regression. “People will be hypnotized and told to go back to their past lives and all that, which we’re quite skeptical about,” said Dr. Tucker. “You can also make up a lot of stuff, even if you’re talking about memories from this life.”

DOPS rarely takes accounts from adults into consideration. “They’re not our primary interest, partly because, as an adult, you’ve been exposed to a lot,” Dr. Tucker explained. “You may think that you don’t know things from history, but you may well have been exposed to it. But also, the phenomenon typically happens in young kids. It’s as if they carry the memories with them, and they are typically very young when they start talking.”

There is also the concern that parents are looking for attention. “There are people who say, ‘Well, the parents are just doing it to have their 15 minutes of fame or whatever,’” said Dr. Tucker. “But most of them have no interest in anyone knowing about it, you know, because it’s kind of embarrassing, or they worry people will think their kid is weird.”

For a child, recalling a past life can be trying. “They might be missing people, or have a sense of unfinished business,” he said. After a silence, he continued, his voice contemplative. “Frankly it’s probably better for the child that they don’t have these memories, because so many of the memories are difficult. The majority of kids who remember how they died perished in some kind of violent, unnatural death.”

Source : https://dnyuz.com/2025/01/03/do-you-believe-in-life-after-death-these-scientists-study-it/

Why your couch could be killing you: Sedentary lifestyle linked to 19 chronic conditions

(Credit: © Tracy King | Dreamstime.com)

In an era where many of us spend our days hunched over computers or scrolling through phones, mounting evidence suggests our sedentary lifestyles may be quietly damaging our health. A new study from the University of Iowa reveals that physically inactive individuals face significantly higher risks for up to 19 different chronic health conditions, ranging from obesity and diabetes to depression and heart problems.

Medical researchers have long known that regular physical activity helps prevent disease and promotes longevity. However, this comprehensive study, which analyzed electronic medical records from over 40,000 patients at a major Midwestern hospital system, provides some of the most detailed evidence yet about just how extensively physical inactivity can impact overall health.

Leading the study, now published in the journal Preventing Chronic Disease, was a team of researchers from various departments at the University of Iowa, including pharmacy practice, family medicine, and human physiology. Their mission was to examine whether screening patients for physical inactivity during routine medical visits could help identify those at higher risk for developing chronic diseases.

The simple 30-second exercise survey

When patients at the University of Iowa Health Care Medical Center arrived for their annual wellness visits, they received a tablet during the standard check-in process. Researchers implemented the Exercise Vital Sign (EVS), which asks two straightforward questions: how many days per week they engaged in moderate to vigorous exercise (like a brisk walk) and for how many minutes per session. Based on their responses, patients were categorized into three groups: inactive (0 minutes per week), insufficiently active (1-149 minutes per week), or active (150+ minutes per week).
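The EVS grouping described above is a simple threshold rule on total weekly exercise minutes. As a minimal sketch (the function name and the exact multiplication of the two survey answers are assumptions for illustration; the study itself only reports the three categories):

```python
def classify_evs(days_per_week: int, minutes_per_session: int) -> str:
    """Categorize a patient from the two Exercise Vital Sign questions.

    Thresholds follow the study's grouping: 0 minutes/week = inactive,
    1-149 = insufficiently active, 150+ = active.
    """
    weekly_minutes = days_per_week * minutes_per_session
    if weekly_minutes == 0:
        return "inactive"
    if weekly_minutes < 150:
        return "insufficiently active"
    return "active"
```

For example, a patient reporting five 30-minute sessions per week (150 total minutes) would land in the "active" group, while two 30-minute sessions (60 minutes) would be "insufficiently active."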

“This two-question survey typically takes fewer than 30 seconds for a patient to complete, so it doesn’t interfere with their visit. But it can tell us a whole lot about that patient’s overall health,” says Lucas Carr, associate professor in the Department of Health and Human Physiology and the study’s corresponding author, in a statement.

Study authors discovered clear patterns when they analyzed responses from 7,261 screened patients. About 60% met the recommended guidelines by exercising moderately for 150 or more minutes per week. However, 36% fell short of these guidelines, exercising less than 150 minutes weekly, and 4% reported no physical activity whatsoever. When the team examined the health records of these groups, they found remarkable differences in health outcomes.

Consequences of a sedentary lifestyle

The data painted a compelling picture of how physical activity influences overall health. Active patients showed significantly lower rates of depression (15% compared to 26% in inactive patients), obesity (12% versus 21%), and hypertension (20% versus 35%). Their cardiovascular health markers were also notably better, including lower resting pulse rates and more favorable cholesterol profiles.

Perhaps most revealing was the relationship between activity levels and chronic disease burden. Patients reporting no physical activity carried a median of 2.16 chronic conditions. This number dropped to 1.49 conditions among insufficiently active patients and fell further to just 1.17 conditions among those meeting exercise guidelines. This clear progression suggests that even small increases in physical activity might help reduce disease risk.

To provide context for their findings, the researchers compared the screened group against 33,445 unscreened patients from other areas of the hospital. This comparison revealed an important pattern: patients who completed the survey tended to be younger and healthier than the general patient population. As Carr notes, “We believe this finding is a result of those patients who take the time to come in for annual wellness exams also are taking more time to engage in healthy behaviors, such as being physically active.”

Based on the study’s findings, physical inactivity was associated with higher rates of:

  1. Obesity
  2. Liver disease
  3. Psychoses
  4. Chronic lung disease
  5. Neurological seizures
  6. Coagulopathy (blood clotting disorders)
  7. Depression
  8. Weight loss issues
  9. Uncontrolled hypertension (high blood pressure)
  10. Controlled hypertension
  11. Uncontrolled diabetes
  12. Deficiency anemia
  13. Neurological disorders affecting movement
  14. Peripheral vascular disease
  15. Autoimmune disease
  16. Drug abuse
  17. Hypothyroidism
  18. Congestive heart failure
  19. Valvular disease (heart valve problems)

Need for better exercise counseling

The findings highlight a crucial gap in healthcare delivery that needs addressing. “In our healthcare environment, there’s no easy pathway for a doctor to be reimbursed for helping patients become more physically active,” Carr explains. “And so, for these patients, many of whom report insufficient activity, we need options to easily connect them with supportive services like exercise prescriptions and/or community health specialists.”

However, there’s encouraging news about the financial feasibility of exercise counseling. A related study by Carr’s team found that when healthcare providers billed for exercise counseling services, insurance companies reimbursed these claims nearly 95% of the time. This suggests that expanding physical activity screening and counseling services could be both beneficial for patients and financially viable for healthcare providers.

Source : https://studyfinds.org/couch-potato-sedentary-lifestyle-chronic-diseases/

Science confirms: ‘Know-it-alls’ typically know less than they think

(Credit: © Robert Byron | Dreamstime.com)

The next time you find yourself in a heated argument, absolutely certain of your position, consider this: researchers have discovered that the more confident you feel about your stance, the more likely you are to be working with incomplete information. It’s a psychological quirk that might explain everything from family disagreements to international conflicts.

We’ve all been there: stuck in traffic, grumbling about the “idiot” driving too slowly in front of us or the “maniac” who just zoomed past. But what if that slow driver is carefully transporting a wedding cake, or the speeding car is rushing someone to the hospital? A fascinating new study published in PLOS ONE suggests that these snap judgments stem from what researchers call “the illusion of information adequacy” — our tendency to believe we have enough information to make sound decisions, even when we’re missing crucial details.

“We found that, in general, people don’t stop to think whether there might be more information that would help them make a more informed decision,” explains study co-author Angus Fletcher, a professor of English at The Ohio State University and member of the university’s Project Narrative, in a statement. “If you give people a few pieces of information that seems to line up, most will say ‘that sounds about right’ and go with that.”

In today’s polarized world, where debates rage over everything from vaccines to climate change, understanding why people maintain opposing viewpoints despite access to the same information has never been more critical. This research, conducted by Fletcher, Hunter Gehlbach of Johns Hopkins University, and Carly Robinson of Stanford University, reveals that we rarely pause to consider what information we might be missing before making judgments.

The researchers conducted an experiment with 1,261 American participants recruited through the online platform Prolific. The study centered around a hypothetical scenario about a school facing a critical decision: whether to merge with another school due to a drying aquifer threatening their water supply.

The participants were divided into three groups. One group received complete information about the situation, including arguments both for and against the merger. The other two groups only received partial information – either pro-merger or pro-separation arguments. The remarkable finding? Those who received partial information felt just as competent to make decisions as those who had the full picture.

“Those with only half the information were actually more confident in their decision to merge or remain separate than those who had the complete story,” Fletcher notes. “They were quite sure that their decision was the right one, even though they didn’t have all the information.”

Social media users might recognize this pattern in their own behavior: confidently sharing or commenting on articles after reading only headlines or snippets, feeling fully informed despite missing crucial context. It’s a bit like trying to review a movie after watching only the first half, yet feeling qualified to give it a definitive rating.

The study revealed an interesting finding regarding the influence of new information. When participants who initially received only one side of the story were later presented with opposing arguments, about 55% maintained their original position on the merger decision. That rate is comparable to that of the control group, which had received all information from the start.

Fletcher notes that this openness to new information might not apply to deeply entrenched ideological issues, where people may either distrust new information or try to reframe it to fit their existing beliefs. “But most interpersonal conflicts aren’t about ideology,” he points out. “They are just misunderstandings in the course of daily life.”

Beyond personal relationships, this finding has profound implications for how we navigate complex social and political issues. When people engage in debates about controversial topics, each side might feel fully informed while missing critical pieces of the puzzle. It’s like two people arguing about a painting while looking at it from different angles: each sees only their perspective but assumes they’re seeing the whole picture.

Fletcher, who studies how people are influenced by the power of stories, emphasizes the importance of seeking complete information before taking a stand. “Your first move when you disagree with someone should be to think, ‘Is there something that I’m missing that would help me see their perspective and understand their position better?’ That’s the way to fight this illusion of information adequacy.”

Source : https://studyfinds.org/science-confirms-know-it-alls-typically-know-less-than-they-think/

Ants smarter than humans? Watch as tiny insects outperform grown adults in solving puzzle

Longhorn Crazy Ants (Paratrechina longicornis) swarming and attacking a much larger ant. They are harmless to humans and found in the world’s tropical regions. (Credit: © Brett Hondow | Dreamstime.com)

Scientists have long been fascinated by collective intelligence, the idea that groups can solve problems better than individuals. Now, an interesting new study reveals some unexpected findings about group problem-solving abilities across species, specifically comparing how ants and humans tackle complex spatial challenges.

Researchers at the Weizmann Institute of Science designed an ingenious experiment pitting groups of longhorn crazy ants against groups of humans in solving the same geometric puzzle at different scales. The puzzle, known as a “piano-movers’ problem,” required moving a T-shaped load through a series of tight spaces and around corners. Imagine trying to maneuver a couch through a narrow doorway, but with more mathematical precision involved.

What makes this study, published in PNAS, particularly fascinating is that both ants and humans are among the few species known to cooperatively transport large objects in nature. In fact, of the approximately 15,000 ant species on Earth, only about 1% engage in cooperative transport of heavy loads, making this shared behavior between humans and ants especially remarkable.

The species chosen for this evolutionary competition was Paratrechina longicornis, commonly known as “crazy ants” due to their erratic movement patterns. These black ants, measuring just 3 millimeters in length, are widespread globally but particularly prevalent along Israel’s coast and southern regions. The “longhorn” in their common name derives from their distinctive long antennae, while their frenetic behavior earned them the more colorful “crazy” nickname.

Recruiting participants for the study presented different challenges across species. While human volunteers readily joined when asked, likely motivated by the competitive aspect, the ants required a bit of deception. Researchers had to trick them into thinking the T-shaped load was food that needed to be transported to their nest.

In experiments spanning three years and involving over 1,250 human participants and multiple ant colonies, researchers tested different group sizes tackling scaled versions of the same puzzle. For the ants, they used both individual ants and small groups of about 7 ants, as well as larger groups averaging 80 ants. Human participants were divided into single solvers and groups of 6-9 or 16-26 people.

Perhaps most intriguingly, the researchers found that while larger groups of ants performed significantly better than smaller groups or individuals, the opposite was true for humans when their communication was restricted. When human groups were not allowed to speak or use gestures and had to wear masks and sunglasses, their performance actually deteriorated compared to individuals working alone.

This counterintuitive finding speaks to fundamental differences in how ants and humans approach collective problem-solving. Individual ants cannot grasp the global nature of the puzzle, but their collective motion translates into emergent cognitive abilities; in other words, they develop new problem-solving skills simply by working together. The large ant groups showed impressive persistence and coordination, maintaining their direction even after colliding with walls and efficiently scanning their environment until finding openings.

The study highlights a crucial distinction between ant and human societies. “An ant colony is actually a family. All the ants in the nest are sisters, and they have common interests. It’s a tightly knit society in which cooperation greatly outweighs competition,” explains study co-author Prof. Ofer Feinerman in a statement. “That’s why an ant colony is sometimes referred to as a super-organism, sort of a living body composed of multiple ‘cells’ that cooperate with one another.”

This familial structure appears to enhance the ants’ collective problem-solving abilities. Their findings validated this “super-organism” vision, demonstrating that ants acting as a group are indeed smarter, with the whole being greater than the sum of its parts. In contrast, human groups showed no such enhancement of cognitive abilities, challenging popular notions about the “wisdom of crowds” in the social media age.

Source : https://studyfinds.org/ants-smarter-than-humans/

5 consumer myths to ditch in 2025

(© Ivan Kruk – stock.adobe.com)

Over the past year, books like Less by Patrick Grant and documentaries like Buy Now: The Shopping Conspiracy have encouraged consumers to rethink their internalized beliefs that more consumption equals better living.

As we enter a new year, it’s the perfect time to reflect on and leave behind some consumer myths that are detrimental to ourselves and to the planet.

Myth 1: Buying more is better for consumers and society

Retail therapy is a common practice for coping with negative emotions and might seem easier than actual therapy. However, research has consistently shown that materialistic consumption leads to lower individual and societal well-being. In fact, emerging studies are pointing out that low-consumption lifestyles might bring greater personal satisfaction and higher environmental benefits.

Some might argue that buying more stimulates the economy, creates jobs and supports public services through taxes. However, the positive impact on local communities is often overstated due to globalized supply chains and corporate tax avoidance.

To ensure that your spending really does support your community and does not contribute to economic inequalities, it is helpful to learn more about the story behind the labels and the businesses you support with your money.

Myth 2: New is always better

While certain cutting-edge tech may indeed offer improvements over older versions, for most items new might not always be better. As Grant argues in his book Less, product quality has declined over the past few decades as manufacturers prioritize affordability and engage in planned obsolescence practices. That is, they purposely design products that will break after a certain number of uses to keep the cycle of consumption going and hit their sales targets.

But older products were often built to last, so choosing secondhand or repairing older items can save you money and actually secure you better-quality products.

Myth 3: Being sustainable is expensive

It’s true that some brands have used the term “sustainable” to justify premium prices. However, adopting sustainable consumer practices can often be free or even bring in some extra cash if you sell or donate the things you no longer need.

Instead of “buying new,” consider swapping unused items with others by hosting a “swapping party” for things like toys or clothes with your friends, family, or neighbours. Decluttering your home could free up space, bring you some joy, and could also help you to connect with others by exchanging items.

Myth 4: Buying experiences is better than buying material things

Previous research has found that spending money on experiences brings more happiness primarily because these purchases are better at bringing people together. But material purchases that help you to connect with others, such as a board game, could bring as much joy as an experience.

My research has shown that when spending money, the key is to understand whether the purchase will help you to connect with others, learn new things, or help your community. It’s not about whether we spend our money on material items or experiences.

It is also worth remembering that there are plenty of activities that can help you to achieve those goals with no spending required. So, instead of instinctively reaching for our wallets, perhaps in the new year we could think about whether a non-consumer activity like a winter hike or doing some volunteering could bring us closer to those intrinsic goals like personal growth or developing relationships. These goals have been consistently linked to better well-being.

Source : https://studyfinds.org/5-consumer-myths-to-ditch-in-2025/

The rise of the intention economy: How AI could turn your thoughts into currency

(Image by Shutterstock AI Generator)

Imagine scrolling through your social media feed when your AI assistant chimes in: “I notice you’ve been feeling down lately. Should we book that beach vacation you’ve been thinking about?” The eerie part isn’t that it knows you’re sad — it’s that it predicted your desire for a beach vacation before you consciously formed the thought yourself. Welcome to what some experts believe will be known as the “intention economy,” a way of life for consumers in the not-too-distant future.

A new paper by researchers at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence warns that large language models (LLMs) like ChatGPT aren’t just changing how we interact with technology, they’re laying the groundwork for a new marketplace where our intentions could become commodities to be bought and sold.

“Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve,” says co-author Dr. Yaqub Chaudhary, a visiting scholar at the Centre, in a statement.

For decades, tech companies have profited from what’s known as the attention economy, where our eyeballs and clicks are the currency. Social media platforms and websites compete for our limited attention spans, serving up endless streams of content and ads. But according to researchers Chaudhary and Dr. Jonnie Penn, we’re witnessing early signs of something potentially more invasive: an economic system that could treat our motivations and plans as valuable data to be captured and traded.

What makes this potential new economy particularly concerning is its intimate nature. “What people say when conversing, how they say it, and the type of inferences that can be made in real-time as a result, are far more intimate than just records of online interactions,” Chaudhary explains.

Early signs of this emerging marketplace are already visible. Apple’s new “App Intents” developer framework for Siri includes protocols to “predict actions someone might take in future” and suggest apps based on these predictions. OpenAI has openly called for “data that expresses human intention… across any language, topic, and format.” Meanwhile, Meta has been researching “Intentonomy,” developing datasets for understanding human intent.

Consider Meta’s AI system CICERO, which achieved human-level performance in the strategy game Diplomacy by predicting players’ intentions and engaging in persuasive dialogue. While currently limited to gaming, this technology demonstrates the potential for AI systems to understand and influence human intentions through natural conversation.

Major tech companies are positioning themselves for this potential future. Microsoft has partnered with OpenAI in what the researchers describe as “the largest infrastructure buildout that humanity has ever seen,” investing over $50 billion annually from 2024 onward. The researchers suggest that future AI assistants could have unprecedented access to psychological and behavioral data, often collected through casual conversation.

The researchers warn that unless regulated, this developing intention economy “will treat your motivations as the new currency” in what amounts to “a gold rush for those who target, steer, and sell human intentions.” This isn’t just about selling products — it could have implications for democracy itself, potentially affecting everything from consumer choices to voting behavior.

Source: https://studyfinds.org/rise-of-intention-economy-ai-assistant/

Farewell, 2024: You were just a so-so year for most Americans

(ID 327257589 | 2024 © Penchan Pumila | Dreamstime.com)

Americans may be divided on many issues, but when it comes to rating 2024, they’ve reached a surprising consensus: it was decidedly average. In a nationwide survey of 2,000 people, the year earned a 6.1 out of 10—though beneath this seemingly tepid score lies a heartening discovery about what truly matters to Americans: personal connections topped the list of memorable moments.

The comprehensive study, conducted by Talker Research, surveyed 2,000 Americans about their experiences throughout the year. Perhaps most touching was the discovery that the most memorable moment for many Americans wasn’t a grand achievement or milestone, but rather the simple joy of reconnecting with old friends and family members, with 17% of respondents citing this as their standout experience.

Overall, a notable 30% of Americans rated their year as exceptional, scoring it eight or higher on the ten-point scale.

Personal development emerged as a dominant theme in 2024, with an overwhelming 67% of Americans reporting some form of growth over the past year. This growth manifested in various aspects of their lives: more than half (52%) saw improvements in their personal relationships, while 38% experienced positive changes in their mental and emotional well-being. Physical health gains were reported by 29% of respondents, and a quarter celebrated advances in their financial situation.

The year proved transformative for many Americans in unexpected ways. Tied for second place among memorable experiences were three distinct life changes: creative and personal growth, welcoming a new pet, and mastering a new skill or hobby, each cited by 12% of respondents. Close behind, 11% found meaning in volunteering or contributing to causes they care about.

The survey revealed that 17% of respondents rated the year a seven out of ten, matched by another 17% giving it a five, while 16% scored it an eight. At the extremes, 8% of Americans had a fantastic year worthy of a perfect ten, while 5% rated it a disappointing one out of ten.

The survey highlighted how Americans found joy and achievement in various pursuits, from visiting new places (10%) to overcoming major health challenges (9%). Some celebrated financial victories, with 8% paying off significant debts and 7% reaching important savings goals. Others embraced adventure, with 6% embarking on dream vacations or relocating to new homes.

Source: https://studyfinds.org/americans-rate-2024-six-out-of-ten/

Unlock the Power of Manifestation: How to Achieve What You Truly Desire


Manifestation is the art of turning your dreams into reality by aligning your thoughts, beliefs, and actions toward achieving them. It’s a combination of positive thinking and purposeful action. Here’s a comprehensive guide on how to manifest your aspirations with clarity and confidence.

1. Understand Manifestation and How It Works

Manifestation is rooted in the idea that your thoughts and energy can influence your reality. It’s driven by two powerful principles:

The Power of Positive Thinking

Your mindset shapes your outcomes. Positive thinking helps you:
• Overcome fears and doubts.
• Channel your energy toward your goals.
• Take actions that bring you closer to success.

When you believe in your ability to achieve something, you’re more likely to focus your efforts and persist through challenges.

The Law of Attraction

The law of attraction states that what you focus on is what you attract. By immersing yourself in your interests and goals:
• You gain knowledge and expertise in the area.
• You build networks with like-minded individuals.
• Opportunities naturally come your way, making success more attainable.

2. Key Techniques to Manifest Your Goals

Practice Visualization

Visualization is a powerful tool to make your dreams feel real. Spend a few minutes daily imagining your goals and the steps you’ll take to achieve them.
• Morning visualization can motivate you for the day.
• Evening visualization allows you to reflect on your progress.

Create a Vision Board

A vision board is a physical or digital collage of images and notes representing your goals.
• For instance, if your dream is a perfect home, include pictures of the decor, layout, or neighborhood.
• Seeing your vision board daily reinforces your commitment to your goals.

Maintain a Future Box

A future box (or manifestation box) holds items that represent your goals.
• Collect objects or notes related to your dreams, such as travel accessories for a future vacation or a letter to your future self.
• This tangible collection keeps your aspirations alive and close.

Use the 3-6-9 Method

Write down or repeat your goal:
• 3 times in the morning,
• 6 times in the afternoon,
• 9 times in the evening.

This repetition focuses your thoughts and reinforces your intent.

Try the 777 Method

Write your goal seven times in the morning and evening for seven days.
• This method is particularly effective for short-term objectives.
• It keeps your mind engaged with your aspirations consistently.

Make a 10-10-10 Worksheet

List out:
• 10 things you desire.
• 10 things you’re grateful for.
• 10 things you enjoy doing.

This worksheet offers a holistic view of your goals, strengths, and passions, helping you stay positive and self-aware.

Keep a Journal

Document your dreams, fears, and progress in a journal.
• Journaling helps identify obstacles and find solutions.
• Regular updates keep your journey organized and inspiring.

3. Strategies for Effective Manifestation

Be Clear About What You Want

Clarity is essential. Define your goals in detail to create a focused path toward achieving them.

Make Positive Affirmations

Speak positively about your goals. Examples include:
• “I am capable and deserving of this promotion.”
• “I am grateful for my growing success and the abundance it brings.”
• “I will live in my dream home within five years.”

Take Action Toward Your Goal

Manifestation requires action. Dedicate time to your goals every day or week.
• Example: If you want a new job, apply to at least one opening weekly.

Step Out of Your Comfort Zone

Growth often involves discomfort. Start small, like sharing your work with friends, then gradually take on bigger challenges.

Build Your Confidence

Confidence is key to success. Begin your day with affirmations such as, “I am strong, capable, and ready to succeed.”

Practice Gratitude

Gratitude fosters positivity. Appreciate what you have while working toward your dreams.

4. The Bottom Line: Manifest Your Best Life

Manifestation isn’t magic; it’s a combination of belief, focus, and consistent action. By practicing these techniques and strategies, you can align your thoughts and energy with your goals and transform them into reality.

Your journey begins with a single thought. Dream big, believe in yourself, and take the steps needed to turn your vision into life-changing success.

 

Teflon flu cases surge: What you need to know

(Credit: Simca/Shutterstock)

From frying pans to muffin tins and saucepans – you can get nonstick surfaces on just about any type of cookware. However, did you know that the nonstick coating can make some people ill?

In 2023, U.S. Poison Centers received 267 reports of suspected polymer fume fever, or “Teflon flu,” more than triple the 79 cases reported in 2019. Similar reports have been recorded since 2011, and they are increasing in frequency.

The 267 cases were not all confirmed, and not all patients reported symptoms. Some patients may have been exposed to chemicals at work.

The disorder is called polymer fume fever because the polymers that make up nonstick coatings are typically made of polytetrafluoroethylene (PTFE), which prevents foods from sticking to a pan. According to the National Institutes of Health, this material can break down into tiny particles at normal cooking temperatures, releasing toxic gases and chemicals.

The Poison Center explains that Teflon flu is caused by inhaling the fumes from burning products, including PTFE. Symptoms include:

  • headaches
  • fever
  • shivering or chills
  • unpleasant taste
  • thirst
  • coughing
  • nausea
  • weakness
  • muscle aches or cramps

Symptoms of Teflon flu can last one to two days.

How can you prevent polymer fume fever?

Whether at work or at home, the following safety principles can help prevent fume-related illness:

Ventilation and Air Filtration: Use exhaust fans, open windows, or operate air purifiers equipped with HEPA filters to lessen indoor air pollution and facilitate the removal of airborne contaminants. Avoid abrasive cleaning methods (such as steel wool or metal scouring pads) that produce airborne particles.

Material Selection for Safer Alternatives: Opt for low-emission materials and products with reduced metal content when possible. Choose water-based or low-VOC (volatile organic compound) cleaning agents and coatings to minimize exposure to toxic substances, and favor environmentally friendly, low-emission options for consumer goods.

For metal and polymer fume exposures, most symptoms of Teflon flu resolve within 24 to 48 hours. If you or someone you know has been exposed to metal or polymer fumes and is experiencing metal fume fever or polymer fume fever, follow these steps:

  1. Move away from the source causing the fumes.
  2. Drink plenty of water to stay hydrated.
  3. Take over-the-counter medications such as ibuprofen or acetaminophen to help manage fever and body aches.

Call the Poison Center right away at 1-800-222-1222 to receive immediate first aid and instructions from a specially trained nurse or pharmacist.

Source: https://studyfinds.org/teflon-flu-cases-surge-what-you-need-to-know/?nab=0

Social media has long battled bot overload — Now AI is both the problem and the cure

(Image by VectorMine on Shutterstock)

Remember when the biggest threat online was a computer virus? Those were simpler times. Today, we face a far more insidious digital danger: AI-powered social media bots. A study by researchers from the University of Washington and Xi’an Jiaotong University reveals both the immense potential and concerning risks of using large language models (LLMs) like ChatGPT in the detection and creation of these deceptive fake profiles.

Social media bots — automated accounts that can mimic human behavior — have long been a thorn in the side of platform operators and users alike. These artificial accounts can spread misinformation, interfere with elections, and even promote extremist ideologies. Until now, the fight against bots has been a constant game of cat and mouse, with researchers developing increasingly sophisticated detection methods, only for bot creators to find new ways to evade them.

Enter the era of large language models. These AI marvels, capable of understanding and generating human-like text, have shown promise in various fields. But could they be the secret weapon in the war against social media bots? Or might they instead become a powerful tool for creating even more convincing fake accounts?

The research team, led by Shangbin Feng, set out to answer these questions by putting LLMs to the test in both bot detection and bot creation scenarios. Their findings paint a picture of both hope and caution for the future of social media integrity.

“There’s always been an arms race between bot operators and the researchers trying to stop them,” says Feng, a doctoral student in Washington’s Paul G. Allen School of Computer Science & Engineering, in a university release. “Each advance in bot detection is often met with an advance in bot sophistication, so we explored the opportunities and the risks that large language models present in this arms race.”

On the detection front, the news is encouraging. The researchers developed a novel approach using LLMs to analyze various aspects of user accounts, including metadata (like follower counts and account age), the text of posts, and the network of connections between users. By combining these different streams of information, their LLM-based system was able to outperform existing bot detection methods by an impressive margin—up to 9.1% better on standard datasets.

Large language models like ChatGPT can play a major role in the detection and creation of deceptive fake profiles, researchers warn. (Photo by Tada Images on Shutterstock)

What’s particularly exciting about this approach is its efficiency. While traditional bot detection models require extensive training on large datasets of labeled accounts, the LLM-based method achieved its superior results after being fine-tuned on just 1,000 examples. This could be a game-changer in a field where high-quality, annotated data is often scarce and expensive to obtain.

However, the study’s findings weren’t all rosy. The researchers also explored how LLMs might be used by those on the other side of the battle — the bot creators themselves. By leveraging the language generation capabilities of these AI models, they were able to develop strategies for manipulating bot accounts to evade detection.

These LLM-guided evasion tactics proved alarmingly effective. When applied to known bot accounts, they reduced the detection rate of existing bot-hunting algorithms by up to 29.6%. The manipulations ranged from subtly rewriting bot-generated text to make it appear more human-like, to strategically changing which accounts a bot follows or unfollows.

Perhaps most concerning is the potential for LLMs to create bots that are not just evasive but truly convincing. The study demonstrated that LLMs could generate user profiles and posts that capture nuanced human behaviors, making them far more difficult to distinguish from genuine accounts.

This dual-use potential of LLMs in the realm of social media integrity presents a challenge for platform operators, researchers, and policymakers alike. On one hand, these powerful AI tools could revolutionize our ability to identify and remove malicious bot accounts at scale. On the other, they risk becoming a sophisticated weapon in the arsenal of those seeking to manipulate online discourse.

Source: https://studyfinds.org/social-media-bots-ai/?nab=0

6G revolution begins: Researchers achieve record-breaking data speeds

(© sitthiphong – stock.adobe.com)

The road to 6G wireless networks just got a little smoother. Scientists have made a significant leap forward in terahertz technology, potentially revolutionizing how we communicate in the future. An international team has developed a tiny silicon device that could double the capacity of wireless networks, bringing us closer to the promise of 6G and beyond.

Imagine a world where you could download an entire season of your favorite show in seconds or where virtual reality feels as real as, well, reality. This is what scientists believe terahertz technology can potentially bring to the world. Their work is published in the journal Laser & Photonics Review.

This tiny marvel, a silicon chip smaller than a grain of rice, operates in a part of the electromagnetic spectrum that most of us have never heard of: the terahertz range. Think of the electromagnetic spectrum as a vast highway of information.

We’re currently cruising along in the relatively slow lanes of 4G and 5G. Terahertz technology? That’s the express lane, promising speeds that make our current networks look like horse-drawn carriages in comparison.

Terahertz waves occupy a sweet spot in the electromagnetic spectrum between microwaves and infrared light. They’ve long been seen as a promising frontier for wireless communication because they can carry vast amounts of data. However, harnessing this potential has been challenging due to technical limitations.

The researchers’ new device, called a “polarization multiplexer,” tackles one of the key hurdles in terahertz communication: efficiently managing different polarizations of terahertz waves. Polarization refers to the orientation of the wave’s oscillation. By cleverly manipulating these polarizations, the team has essentially created a traffic control system for terahertz waves, allowing more data to be transmitted simultaneously.

If that sounds like technobabble, think of it as a traffic cop for data, able to direct twice as much information down the same road without causing a jam.

“Our proposed polarization multiplexer will allow multiple data streams to be transmitted simultaneously over the same frequency band, effectively doubling the data capacity,” explains lead researcher Professor Withawat Withayachumnankul from the University of Adelaide, in a statement.

At the heart of this innovation is a compact silicon chip measuring just a few millimeters across. Despite its small size, this chip can separate and combine terahertz waves with different polarizations with remarkable efficiency. It’s like having a tiny, incredibly precise sorting machine for light waves.

To create this device, the researchers used a 250-micrometer-thick silicon wafer with very high electrical resistance. They employed a technique called deep reactive-ion etching to carve intricate patterns into the silicon. These patterns, consisting of carefully designed holes and structures, form what’s known as an “effective medium” – a material that interacts with terahertz waves in specific ways.

The team then subjected their device to a battery of tests using specialized equipment. They used a vector network analyzer with extension modules capable of generating and detecting terahertz waves in the 220-330 GHz range with minimal signal loss. This allowed them to measure how well the device could handle different polarizations of terahertz waves across a wide range of frequencies.

“This large relative bandwidth is a record for any integrated multiplexers found in any frequency range. If it were to be scaled to the center frequency of the optical communications bands, such a bandwidth could cover all the optical communications bands,” adds Withayachumnankul.

In their experiments, the researchers demonstrated that their device could effectively separate and combine two different polarizations of terahertz waves with high efficiency. The device showed an average signal loss of only about 1 decibel – a remarkably low figure that indicates very little energy is wasted in the process. Even more impressively, the device maintained a polarization extinction ratio (a measure of how well it can distinguish between different polarizations) of over 20 decibels across its operating range. This is crucial for ensuring that data transmitted on different polarizations doesn’t interfere with each other.

To put the potential of this technology into perspective, the researchers conducted several real-world tests. In one demonstration, they used their device to transmit two separate high-definition video streams simultaneously over a terahertz link. This showcases the technology’s ability to handle multiple data streams at once, effectively doubling the amount of information that can be sent over a single channel.

But the team didn’t stop there. In more advanced tests, they pushed the limits of data transmission speed. Using a technique called on-off keying, they achieved error-free data rates of up to 64 gigabits per second. When they employed a more complex modulation scheme (16-QAM), they reached data rates of up to 190 gigabits per second. That’s roughly equivalent to downloading 24 gigabytes – or about six high-definition movies – in a single second, a staggering leap from current wireless technologies.
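A quick back-of-the-envelope check of those figures, sketched in Python, confirms the comparison. The link rates are the ones reported above; the 4 GB figure for one high-definition movie is an assumed file size, not from the study:

```python
# Illustrative arithmetic only: convert the reported terahertz link rates
# into per-second download figures. Assumes 8 bits per byte, decimal units,
# and zero protocol overhead.

def gigabytes_per_second(gigabits_per_second: float) -> float:
    """Convert a link rate in gigabits/s to gigabytes/s."""
    return gigabits_per_second / 8

ook_rate = 64    # Gb/s, on-off keying (reported)
qam_rate = 190   # Gb/s, 16-QAM (reported)

print(gigabytes_per_second(qam_rate))   # 23.75 GB per second, i.e. ~24 GB

hd_movie_gb = 4  # assumed size of one HD movie, in GB
print(gigabytes_per_second(qam_rate) / hd_movie_gb)  # ~6 movies per second
```

The 23.75 GB/s result lines up with the article’s “roughly 24 gigabytes in a single second” framing.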

Still, the researchers say it’s not just about speed. This device is also incredibly versatile.

“This innovation not only enhances the efficiency of terahertz communication systems but also paves the way for more robust and reliable high-speed wireless networks,” adds Dr. Weijie Gao, a postdoctoral researcher at Osaka University and co-author of the study.

Source: https://studyfinds.org/6g-record-breaking-data-speed/?nab=0

How workplace rudeness is killing productivity and endangering lives

Boss yelling at employees (Photo by Yan Krukov from Pexels)

“Please” and “thank you” — these simple courtesies might be worth more than their weight in gold, according to a stunning new study. Researchers have uncovered a startling link between workplace rudeness and team performance that’s forcing organizations to rethink their approach to interpersonal dynamics.

In an era where workplace efficiency is paramount, who would have thought that a careless comment or a dismissive email could be the wrench in the gears of productivity? However, according to the research published in the Journal of Applied Psychology, incivility is wreaking havoc in our offices, operating rooms, and boardrooms.

Far from being a mere annoyance, the study suggests that rudeness is a silent saboteur, capable of derailing team performance and potentially endangering lives. The study, conducted by an international team of researchers from the University of Florida, Indiana University, and institutions across the U.S. and Israel, paints a sobering picture of how even mild instances of incivility can have far-reaching consequences.

“Many workplaces treat rudeness as a minor interpersonal issue,” says Dr. Amir Erez, a professor at the University of Florida Warrington College of Business, in a statement. “Our research shows that it’s a major threat to productivity and even safety. Organizations should treat it as such.”

Through a series of five innovative studies, the researchers peeled back the layers of workplace interactions to reveal the insidious effects of rudeness. From laboratory experiments involving bridge-building with newspaper and tape to high-stakes medical simulations, the findings consistently pointed to a disturbing truth: rudeness dramatically impairs team functioning.

Perhaps most alarming is the disproportionate impact of rudeness relative to its perceived intensity. In one study, seemingly mild rude comments from an external source accounted for a staggering 44% of the variance in medical teams’ performance quality. This suggests that even small slights can have outsized effects on team outcomes.

Far from being a mere annoyance, the study suggests that rudeness is a silent saboteur, capable of derailing team performance and potentially endangering lives. (© fizkes – stock.adobe.com)

How exactly does rudeness wreak such havoc?

The researchers found that rudeness acts as a social threat, triggering defensive responses in team members. This threat response shifts individuals from a collaborative mindset to a self-protective one, reducing what the researchers call “social value orientation” (SVO) – essentially, the degree to which people prioritize collective interests over their own.

This shift towards self-interest manifests in reduced information sharing and workload distribution among team members, two critical components of effective teamwork. In medical settings, this translates to poorer execution of potentially life-saving procedures.

“Our research helps us understand the effect rudeness can have on team dynamics, especially in urgent, intense situations like in health care,” says Jake Gale, Ph.D., an assistant professor of management at the Indiana University Kelley School of Business Indianapolis. “By understanding how rudeness triggers self-focused behaviors and impairs communication, we’re not just advancing academic knowledge; we’re uncovering insights that could save lives. It’s a powerful reminder that the way we interact with each other has real-world consequences, especially in critical situations.”

The implications of these findings extend far beyond the medical field. Whether in a high-powered corporate boardroom or a local retail store, rudeness from any source – be it supervisors, colleagues, or customers – consistently degrades team cooperation and coordination, leading to poorer outcomes across the board.

Given the pervasiveness of rudeness in modern workplaces, with over 50% of employees reporting weekly encounters, addressing this issue becomes not just a matter of politeness but a critical factor in organizational effectiveness and safety.

The researchers suggest that organizations take proactive steps to create work environments that foster respect and civility. This could include implementing training programs to build resilience against rudeness or promoting mindfulness practices that help employees maintain a collective focus even in the face of interpersonal challenges.

Source: https://studyfinds.org/workplace-rudeness-productivity/?nab=0

Inside the attention spans of young kids: Why curiosity is mistaken for lack of focus

(Credit: August de Richelieu from Pexels)

Picture this: You’re playing a game of “Guess Who?” with a five-year-old. You’ve narrowed it down to the character with the red hat, but instead of triumphantly declaring their guess, the child keeps flipping over cards, examining every detail from mustaches to earrings. Frustrating? Maybe. But according to new research, this seemingly inefficient behavior might be a key feature of how young minds learn about the world.

A study published in Psychological Science by researchers at The Ohio State University has shed new light on a longstanding puzzle in child development: Why do young children seem to pay attention to everything, even when it doesn’t help them complete a task? The answer, it turns out, is more complex and fascinating than anyone expected.

For years, scientists have observed that children tend to distribute their attention broadly, taking in information that adults would consider irrelevant or distracting. This “distributed attention” has often been chalked up to immature brain development or a simple lack of focus. But Ohio State psychology professor Vladimir Sloutsky and his team suspected there might be more to the story.

“Children can’t seem to stop themselves from gathering more information than they need to complete a task, even when they know exactly what they need,” Sloutsky explains in a media release.

This over-exploration persists even when children are motivated by rewards to complete tasks quickly.

To investigate this question, Sloutsky and lead author Qianqian Wan designed clever experiments involving four- to six-year-old children and adults. Participants were shown images of cartoon creatures and asked to sort them into two made-up categories called “Hibi” and “Gora.” Each creature had seven features like horns, wings, and tails. Importantly, only one feature perfectly predicted which category the creature belonged to, while the other features were only somewhat helpful for categorizing.

The key twist was that all the features were initially hidden behind “bubbles” on a computer screen. Participants could reveal features one at a time by tapping or clicking on the bubbles. This setup allowed the researchers to see exactly which features people chose to look at before making their category decision.

“Children can’t seem to stop themselves from gathering more information than they need to complete a task, even when they know exactly what they need,” researchers explain. (Credit: Kamaji Ogino from Pexels)

If children’s broad attention was simply due to an inability to filter out distractions, the researchers reasoned that hiding irrelevant features should help them focus only on the most important one. However, that’s not what happened. Even when they quickly figured out which feature was the perfect predictor of category, children – especially younger ones – continued to uncover and examine multiple features on each trial. Adults, on the other hand, quickly zeroed in on the key feature and mostly ignored the rest.

Interestingly, by age six, children started to show a mix of strategies. About half the six-year-olds behaved more like adults, focusing mostly on the key feature. The other half continued to explore broadly like younger children. This suggests the study may have captured a key transition point in how children learn to focus their attention.

To rule out the possibility that children just enjoyed the action of tapping to reveal features, the researchers ran a second experiment. This time, they gave children the option to either reveal all features at once with one tap or uncover them one by one. Children of all ages strongly preferred the single-tap option, indicating their goal was indeed to gather information rather than simply tapping for fun.

So, why do children persist in this seemingly inefficient exploration? Sloutsky proposes two intriguing possibilities. The first is simple curiosity – an innate drive to learn about the world that overrides task efficiency. The second, which Sloutsky favors, relates to the development of working memory.

“The children learned that one body part will tell them what the creature is, but they may be concerned that they don’t remember correctly. Their working memory is still under development,” Sloutsky suggests. “They want to resolve this uncertainty by continuing to sample, by looking at other body parts to see if they line up with what they think.”

Source: https://studyfinds.org/over-exploring-minds-attention-kids/?nab=0

Just 10 seconds of light exercise boosts brain activity in kids

(Photo by Yan Krukov from Pexels)

What if the secret to unlocking your child’s cognitive potential was as simple as a 10-second stretch? It may sound too good to be true, but a revolutionary study from Japan suggests that brief, light exercises could be the key to boosting brain activity in children, challenging our understanding of the mind-body connection.

The findings, published in Scientific Reports, suggest that these quick, low-intensity activities could be a valuable tool for enhancing cognitive function and potentially improving learning in school settings.

The research, led by Takashi Naito and colleagues, focuses on a part of the brain called the prefrontal cortex (PFC). This area, located at the front of the brain, is crucial for many important mental tasks. It helps us plan, make decisions, control our impulses, and pay attention – all skills that are vital for success in school and life.

As children grow, their prefrontal cortex continues to develop. This means that childhood is a critical time for building strong mental abilities. However, many children today aren’t getting enough physical activity. In fact, a whopping 81% of children worldwide don’t get enough exercise. This lack of movement could potentially hinder their brain development and cognitive skills.

While previous studies have shown that moderate to intense exercise can improve brain function, less was known about the effects of light, easy activities – the kind that could be done quickly in a classroom or during short breaks. This study aimed to fill that gap by examining how simple exercises affect blood flow in the prefrontal cortex of children.

“Our goal is to develop a light-intensity exercise program that is accessible to everyone, aiming to enhance brain function and reduce children’s sedentary behavior,” Naito explains in a statement. “We hope to promote and implement this program in schools through collaborative efforts.”

The researchers recruited 41 children between the ages of 10 and 15 to participate in the study. These kids performed seven different types of light exercises, each lasting either 10 or 20 seconds. The exercises included things like stretching, hand movements, and balancing on one leg – all activities that could be easily done in a classroom without special equipment.

To measure brain activity, the researchers used a technique called functional near-infrared spectroscopy (fNIRS). This non-invasive method uses light to detect changes in blood flow in the brain, which can indicate increased brain activity. The children wore a special headband with sensors while doing the exercises, allowing the researchers to see how their brain activity changed during each movement.

Most of the exercises led to significant increases in blood flow to the prefrontal cortex, suggesting increased brain activity in this important region. Interestingly, not all exercises had the same effect. Simple, static stretches didn’t show much change, but exercises that required more thought or physical effort – like twisting movements, hand exercises, and balancing – showed the biggest increases in brain activity.

These findings suggest that even short bursts of light activity can “wake up” the prefrontal cortex in children. This could potentially lead to improved focus, better decision-making, and enhanced learning abilities. The best part is that these exercises are quick and easy to do, making them perfect for incorporating into a school day or study routine.

Source: https://studyfinds.org/10-seconds-exercise-brain-activity/?nab=0

Tourist dies after ice collapse in Icelandic glacier

An aerial view of the Breidamerkurjökull glacier in 2021

A foreign tourist has died in south Iceland after ice collapsed during a visit their group was making to a glacier, local media report.

A second tourist was injured and taken to hospital, but their life is not in danger. Two others are still missing.

Rescuers have suspended the search for the missing in the Breidamerkurjökull glacier until morning because of difficult conditions.

Ice collapsed as the group of 25 people was visiting an ice cave with a guide on Sunday.

Emergency workers worked by hand to try to rescue those missing.

First responders received a call just before 15:00 on Sunday about the collapse.

“The conditions are very difficult on the ground,” said local police chief Sveinn Kristján Rúnarsson. “It’s in the glacier. It’s hard to get equipment there… It’s bad. Everything is being done by hand.”

Local news outlets reported that 200 people were working on the rescue operation at one point on Sunday.

Speaking on Icelandic TV, Chief Superintendent Rúnarsson said police had been unable to contact the two missing people.

While the conditions were “difficult”, the weather was “fair”, he said.

Confirming that all those involved were foreign tourists, he said there was nothing to suggest that the trip to the cave should not have taken place.

“Ice cave tours happen almost the whole year,” he said.

“These are experienced and powerful mountain guides who run these trips. It’s always possible to be unlucky. I trust these people to assess the situation – when it’s safe or not safe to go, and good work has been done there over time. This is a living land, so anything can happen.”

The police chief was quoted as saying that people had been standing in a ravine between cave mouths when an ice wall collapsed.

Source : https://www.bbc.com/news/articles/cp8ny80e6lyo

Mental menu: Your food choices may be causing anxiety and depression

(Credit: Prostock-studio/Shutterstock)

The proverbial “sugar high” that follows the ingestion of a sweet treat is a familiar example of the potentially positive effects of food on mood.

On the flip side, feeling “hangry” – the phenomenon where hunger manifests in the form of anger or irritability – illustrates how what we eat or don’t eat can also provoke negative emotions.

The latest research suggests that blood sugar fluctuations are partly responsible for the connection between what we eat and how we feel. Through their effects on our hormones and our nervous system, blood sugar fluctuations can be fuel for anxiety and depression.

Mental health is complex. There are countless social, psychological, and biological factors that ultimately determine any one person’s experience. However, numerous randomized controlled trials have demonstrated that diet is one biological factor that can significantly influence risk for symptoms of depression and anxiety, especially in women.

As a family medicine resident with a Ph.D. in nutrition, I have seen firsthand that antidepressant medications work for some patients but not others. Thus, in my view, mental health treatment strategies should target every risk factor, including nutrition.

The role of the glycemic index

Many of the randomized controlled trials that have proven the link between diet and mental health have tested the Mediterranean diet or a slightly modified version of it. The Mediterranean diet is typically characterized by lots of vegetables – especially dark green, leafy vegetables – fruit, olive oil, whole grains, legumes and nuts, with small amounts of fish, meat and dairy products. One of the many attributes of the Mediterranean diet that may be responsible for its effect on mood is its low glycemic index.

The glycemic index is a system that ranks foods and diets according to their potential to raise blood sugar. Thus, in keeping with the observation that blood sugar fluctuations affect mood, high glycemic index diets that produce drastic spikes in blood sugar have been associated with increased risk for depression and to some extent anxiety.

High glycemic index carbohydrates include white rice, white bread, crackers and baked goods. Therefore, diets high in these foods may increase risk for depression and anxiety. Meanwhile, low glycemic index carbs, such as parboiled rice and al dente pasta, which are absorbed more slowly and produce a smaller blood sugar spike, are associated with decreased risk.

Diets high in legumes and dark green vegetables produce lower spikes in blood sugar. (Credit: Jacqueline Howell from Pexels)

How diet affects mood

Many scientific mechanisms have been proposed to explain the connection between diet and mental health. One plausible explanation links blood sugar fluctuations with mood through their effect on our hormones.

Every time we eat sugar or carbohydrates such as bread, rice, pasta, potatoes, and crackers, the resulting rise in blood sugar triggers a cascade of hormones and signaling molecules. One example, dopamine – our brain’s pleasure signal – is the reason we can experience a “sugar high” after eating dessert or baked goods. Dopamine is the body’s way of rewarding us for procuring the calories, or energy, necessary for survival.

Insulin is another hormone triggered by carbohydrates and sugar. Insulin’s job is to lower blood sugar levels by escorting the ingested sugar into our cells and tissues so that it can be used for energy. However, when we eat too much sugar, too many carbs, or high glycemic index carbs, the rapid increase in blood sugar prompts a drastic rise in insulin. This can result in blood sugar levels that dip below where they started.

This dip in blood sugar sparks the release of adrenaline and its cousin noradrenaline. Both of these hormones send glucose back into the bloodstream to restore blood sugar to the appropriate level.

However, adrenaline influences more than just blood sugar levels. It also affects how we feel, and its release can manifest as anxiety, fear, or aggression. Hence, diet affects mood through its effect on blood sugar levels, which trigger the hormones that dictate how we feel.

Interestingly, the rise in adrenaline that follows sugar and carbohydrate consumption doesn’t happen until four to five hours after eating. Thus, when eating sugar and carbs, dopamine makes us feel good in the short term; but in the long term, adrenaline can make us feel bad.

However, not everyone is equally affected. Identical meals can produce widely varying blood sugar responses in different people, depending on one’s sex, as well as genetics, sedentariness, and the gut microbiome.

And it’s important to keep in mind that, as previously mentioned, mental health is complicated. So in certain circumstances, no amount of dietary optimization will overcome the social and psychological factors that may underpin one’s experience.

Nevertheless, a poor diet could certainly make a person’s experience worse and is thus relevant for anyone, especially women, hoping to optimize mental health. Research has shown that women, in particular, are more sensitive to the effects of the glycemic index and diet overall.

Source: https://studyfinds.org/food-choices-anxiety-depression/

In just 10 minutes, new app gives you a mental health makeover

(Credit: Microgen/Shutterstock)

Just 10 minutes of daily mindfulness practice, delivered through a free smartphone app, could be the key to unlocking a healthier, happier you. It sounds almost too good to be true, but that’s exactly what researchers from the Universities of Bath and Southampton have discovered.

In one of the largest and most diverse studies of its kind, 1,247 adults from 91 countries embarked on a 30-day mindfulness journey using the free Medito app. The results were nothing short of remarkable. Participants who completed the mindfulness program reported a 19.2% greater reduction in depression symptoms compared to the control group. They also experienced a 6.9% greater improvement in well-being and a 12.6% larger decrease in anxiety.

The benefits didn’t stop there. The study, published in the British Journal of Health Psychology, uncovered an intriguing link between mindfulness practice and healthier lifestyle choices. Participants who used the mindfulness app reported more positive attitudes towards health maintenance (7.1% higher than the control group) and stronger intentions to look after their health (6.5% higher). It’s as if the simple act of tuning into the present moment created a ripple effect, influencing not just mental health but also motivating healthier behaviors.

What makes this study particularly exciting is its accessibility. Unlike traditional mindfulness programs that might require significant time commitments or expensive retreats, this intervention was delivered entirely through a free mobile app. Participants, most of whom had no prior mindfulness experience, were asked to complete just 10 minutes of practice daily. The sessions included relaxation exercises, intention-setting, body scans, focused breathing, and self-reflection.

“This study highlights that even short, daily practices of mindfulness can offer benefits, making it a simple yet powerful tool for enhancing mental health,” says Masha Remskar, the lead researcher from the University of Bath, in a media release.

Participants who completed the mindfulness program reported a 19.2% greater reduction in depression symptoms. (Credit: Ground Picture/Shutterstock)

Perhaps even more impressive than the immediate effects were the long-term benefits. In follow-up surveys conducted 30 days after the intervention ended, participants in the mindfulness group continued to report improved well-being, reduced depression symptoms, and better sleep quality compared to the control group.

The study also shed light on why mindfulness might be so effective.

“The research underscores how digital technology – in this case, a freely available app – can help people integrate behavioral and psychological techniques into their lives, in a way that suits them,” notes Dr. Ben Ainsworth from the University of Southampton.

Source : https://studyfinds.org/10-minute-app-mental-health/?nab=0

Wow! Scientists may have finally decoded mysterious signal from space

The “Wow!” signal was originally captured in 1977 by the Ohio State University’s Big Ear radio telescope (Credit: Big Ear Radio Observatory and North American AstroPhysical Observatory)

For nearly half a century, astronomers have been puzzled by a brief and unexplainable radio signal detected in 1977 that seemed to hint at the existence of alien life. Known as the “Wow! Signal,” this tantalizing cosmic transmission has remained one of the most intriguing mysteries in the search for signs of intelligent life in outer space. Now, scientists may finally know where it came from!

A team of researchers may have uncovered a potential astrophysical explanation for the Wow! Signal that could reshape our understanding of this enduring enigma. Their findings, currently available as a preprint on arXiv, suggest the signal may have been the result of a rare and dramatic event involving a burst of energy from a celestial object interacting with clouds of cold hydrogen gas in the Milky Way galaxy.

“Our latest observations, made between February and May 2020, have revealed similar narrowband signals near the hydrogen line, though less intense than the original Wow! Signal,” explains Abel Méndez, lead author of the study from the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo, in a media release.

“Our study suggests that the Wow! Signal was likely the first recorded instance of maser-like emission of the hydrogen line.”

Cold hydrogen clouds in the galaxy emit faint narrowband radio signals similar to those shown here, detected by the Arecibo Observatory in 2020. A sudden brightening of one of these clouds, triggered by a strong emission from another stellar source, may explain the Wow! Signal. (Credit: University of Puerto Rico at Arecibo)

The Wow! Signal was detected by the Big Ear radio telescope at The Ohio State University on August 15, 1977. It exhibited several intriguing characteristics, including a narrow bandwidth, high signal strength, and a frequency tantalizingly close to the natural radio emission of neutral hydrogen — an element abundant throughout the universe. These properties led many to speculate the signal could be of artificial origin, perhaps a deliberate message from an extraterrestrial intelligence.

This passing burst of activity in space led Dr. Jerry Ehman to famously write “Wow!” next to the print-out of the signal, which was like nothing else astronomers were seeing in space at the time. However, the signal was never detected again, despite numerous attempts to locate its source over the ensuing decades.

This has posed a major challenge for the SETI (Search for Extraterrestrial Intelligence) community, as repetition is considered essential for verifying the authenticity of a potential extraterrestrial signal — also known as a technosignature.

This new study, however, is pushing the conversation away from an alien radio transmission and closer to a once-in-a-lifetime natural occurrence in deep space. The researchers’ key insight stems from observations made using the now-decommissioned Arecibo Observatory in Puerto Rico, one of the world’s most powerful radio telescopes until its collapse in 2020.

For now, the Wow! Signal remains shrouded in mystery, but there is now at least a plausible explanation for its existence — one that does not involve aliens.

Source: https://studyfinds.org/wow-signal-decoded/?nab=0

Do ambitious people really make the best leaders? New study raises doubts

(Credit: fizkes/Shutterstock)

Leadership is a critical component in every aspect of human activity, from business and education to government and healthcare. We often assume that those who aspire to leadership positions are the most qualified for the job. However, a new study challenges this assumption, revealing a striking disconnect between ambition and actual leadership effectiveness.

The study, conducted by researchers Shilaan Alzahawi, Emily S. Reit, and Francis J. Flynn from Stanford University’s Graduate School of Business, explores the relationship between ambition and leadership evaluations. Their findings suggest that while ambitious individuals are more likely to pursue and obtain leadership roles, they may not necessarily be more effective leaders than their less ambitious counterparts.

At the heart of this research is the concept of ambition, defined as a persistent striving for success, attainment, and accomplishment. Ambitious individuals are typically drawn to leadership positions, motivated by the promise of power, status, and financial rewards. However, the study, published in PNAS Nexus, raises an important question: Does this ambition translate into better leadership skills?

To investigate this question, the researchers conducted a large-scale study involving 472 executives enrolled in a leadership development program. These executives were evaluated on 10 leadership competencies by their peers, subordinates, managers, and themselves. In total, the study analyzed 3,830 ratings, providing a comprehensive view of each leader’s effectiveness from multiple perspectives.

Perhaps the most thought-provoking finding of the study is the significant discrepancy between how ambitious leaders view themselves and how others perceive them. Highly ambitious individuals consistently rated themselves as more effective leaders across various competencies. However, this positive self-assessment was not corroborated by the evaluations from their peers, subordinates, or managers.

For instance, ambitious leaders believed they were better at motivating others, managing collaborative work, and coaching and developing people. They also thought they had a stronger growth orientation and were more accountable for results. Yet, their colleagues and subordinates did not observe these superior abilities in practice.

While ambitious individuals are more likely to pursue and obtain leadership roles, they may not necessarily be more effective leaders than their less ambitious counterparts. (Credit: fauxels from Pexels)

This disconnect between self-perception and reality has significant implications for how we select and develop leaders. Many organizations rely on self-selection processes, where individuals actively choose to be considered for leadership roles. The assumption is that those who step forward are the most capable candidates. However, this study suggests that such an approach may be flawed, potentially promoting individuals based on their ambition rather than their actual leadership skills.

The researchers propose that ambitious individuals may be drawn to leadership roles for reasons unrelated to their aptitude. The allure of higher salaries, greater authority, and increased social status may drive them to pursue these positions, regardless of their actual leadership capabilities. To justify this pursuit, ambitious individuals may unconsciously inflate their self-perceptions of leadership effectiveness.

This phenomenon aligns with psychological concepts such as motivated reasoning and cognitive dissonance. Essentially, people tend to interpret information in a way that confirms their existing beliefs or desires. In this case, ambitious individuals may convince themselves of their superior leadership skills to justify their pursuit of higher positions.

Organizations and individuals may need to rethink their approach to leadership selection and development. Rather than relying solely on self-selection, which lets ambitious individuals dominate candidate pools, companies might benefit from actively identifying and encouraging individuals who possess leadership potential but may lack the confidence or ambition to pursue such roles.

Moreover, the research highlights the importance of gathering diverse perspectives when evaluating leadership effectiveness. Relying solely on self-assessments or the opinions of a single group (e.g., only peers or only subordinates) may provide an incomplete or biased picture of a leader’s true capabilities.

This study urges us to look beyond ambition when selecting and developing leaders. By focusing on actual leadership skills rather than mere drive for power, we can cultivate leaders who are truly capable of guiding us through the challenges of the 21st century.

Source: https://studyfinds.org/the-ambitious-leaders-dilemma/?nab=0

Sea snail’s deadly venom may hold the key to a diabetes cure

A freshly-collected batch of venomous cone snails. (Credit: Safavi Lab)

In the vast, mysterious depths of the ocean, where some of the planet’s deadliest creatures reside, scientists have discovered an unexpected ally in the fight against diabetes and hormone disorders. A new study finds that the geography cone, a venomous marine snail known for its lethal sting, harbors a powerful secret: a toxin that could revolutionize the way we treat certain diseases.

The geography cone (Conus geographus) isn’t your typical predator. Instead of using brute force to capture its prey, it employs a more insidious method — a cocktail of venomous toxins that disrupt the bodily functions of its victims, leaving them helpless and easy to consume. However, within this deadly arsenal lies a remarkable substance, one that mimics a human hormone and holds the potential to create groundbreaking medications.

Publishing their work in the journal Nature Communications, scientists from the University of Utah and their international collaborators have identified a component in the snail’s venom that acts like somatostatin, a human hormone responsible for regulating blood sugar and various other bodily processes. What’s truly astonishing is that this snail-produced toxin, known as consomatin, doesn’t just mimic the hormone — it surpasses it in stability and specificity, making it an extraordinary candidate for drug development.

How can a deadly venom become a life-saving drug?
Somatostatin in humans serves as a kind of master regulator, ensuring that levels of blood sugar, hormones, and other critical molecules don’t spiral out of control. However, consomatin, the snail’s version of this hormone, has some unique advantages. Unlike human somatostatin, which interacts with multiple proteins in the body, consomatin targets just one specific protein with pinpoint accuracy. This precise targeting means that consomatin could potentially be used to regulate blood sugar and hormone levels with fewer side-effects than existing medications.

Consomatin is also more stable than the human hormone, lasting longer in the body due to the presence of an unusual amino acid that makes it resistant to breakdown. For pharmaceutical researchers, this feature is a goldmine — it could lead to the development of drugs that offer longer-lasting benefits to patients, reducing the frequency of doses and improving overall treatment outcomes.

Ho Yan Yeung, PhD, first author on the study (left) and Thomas Koch, PhD, also an author on the study (right) examine a freshly-collected batch of cone snails. (Credit: Safavi Lab)

While it may seem counterintuitive to look to venom for inspiration in drug development, this approach is proving to be incredibly fruitful. As Dr. Helena Safavi, an associate professor of biochemistry at the University of Utah and the senior author of the study, explains, venomous animals like the geography cone have had millions of years to fine-tune their toxins to target specific molecules in their prey. This evolutionary precision is exactly what makes these toxins so valuable in the search for new medicines.

“Venomous animals have, through evolution, fine-tuned venom components to hit a particular target in the prey and disrupt it,” says Safavi in a media release. “If you take one individual component out of the venom mixture and look at how it disrupts normal physiology, that pathway is often really relevant in disease.”

In other words, nature’s own designs can offer shortcuts to discovering new therapeutic pathways.

In its natural environment, consomatin works alongside another toxin in the cone snail’s venom, which mimics insulin, to drastically lower the blood sugar of the snail’s prey. This one-two punch leaves the fish in a near-comatose state, unable to escape the snail’s deadly grasp. By studying consomatin and its insulin-like partner, researchers believe they can uncover new ways to control blood sugar levels in humans, potentially leading to better treatments for diabetes.

“We think the cone snail developed this highly selective toxin to work together with the insulin-like toxin to bring down blood glucose to a really low level,” explains Ho Yan Yeung, a postdoctoral researcher in biochemistry at the University of Utah and the study’s first author.

What’s even more exciting is the possibility that the cone snail’s venom contains additional yet undiscovered toxins that also regulate blood sugar.

“It means that there might not only be insulin and somatostatin-like toxins in the venom,” Yeung adds. “There could potentially be other toxins that have glucose-regulating properties too.”

Source: https://studyfinds.org/sea-snail-venom-diabetes/?nab=0

Franchise Faces: The Most Iconic Fast Food Mascots of All Time

Step right up, folks, and feast your eyes on the colorful cast of characters that have been tempting our taste buds and raiding our wallets for decades! We’re talking about those lovable (and sometimes slightly unnerving) fast food mascots that are as much a part of our culture as the greasy, delicious food they’re hawking. From the golden arches of McDonald’s to the finger-lickin’ goodness of KFC, these animated pitchmen have wormed their way into our hearts faster than you can say “supersize me.” They’ve made us laugh, occasionally made us cringe, and more often than not, made us inexplicably crave a burger at 2 AM. So, grab your favorite value meal and get ready for a nostalgic trip down fast food memory lane as we rank the best fast food mascots. Trust us, this list is more stacked than a triple-decker burger!

If fast food mascots feel like old friends, you aren’t alone. That’s why we’ve put together a list of the best fast food mascots, compiled from 10 expert websites. Did your favorite make our list? As always, we’d like to see your own recommendations in the comments below!

The Consensus Best Fast Food Mascots, Ranked
1. Colonel Sanders – KFC
Who doesn’t love a heaping bucket of fried chicken? “One of the most popular and recognizable fast food mascots is KFC’s Colonel Sanders. Not only is this a mascot and symbol for the brand, it directly represents the founder of Kentucky Fried Chicken — Colonel Harland David Sanders,” notes Restaurant Clicks.

What makes him stand out? “As a character, Colonel Sanders is a lovable, sweet old man with plenty of personal ties to KFC. He’s often portrayed by comedians, which gives the brand plenty of room to create funny and innovative commercials,” adds Ranker.

“Dressed in a white suit and black bow tie, accessorized with glasses and a cane, the Colonel’s image has become synonymous with the brand’s finger-licking good fried chicken. His face, etched in the memories of countless fried chicken fans, carries an aura of professionalism, quality, and trustworthiness,” suggests Sixstoreys.

2. Ronald McDonald – McDonald’s
One of the most recognizable fast food mascots, Ronald McDonald even has his own balloon in the Macy’s Thanksgiving Day parade. The mascot, “was first introduced to audiences in 1963, when actor Willard Scott (who played the immensely popular Bozo the Clown at the time) took on the persona of the red-haired clown for three TV ads promoting McDonald’s. He was referred to as ‘Ronald McDonald – the hamburger-happy clown’ and sported a drink cup on his nose as well as a food tray as a hat,” according to Lovefood.com.

Ronald is the perfect combination of fun and odd for a mascot. “He has Wendy’s red hair, The King’s freaky appearance, and the Colonel’s kindly character. Put it all together and you have a master of the mascots,” adds WatchMojo.

Thrillist writes: “Ronald is without a doubt the most polemic fast-food mascot. He’s friendly and instantly recognizable, but he’s also a clown. Most normal people are terrified by clowns regardless of nostalgia, so whether he reminds you of Saturday mornings spent watching cartoons and eating Happy Meals or the scariest moments of Stephen King’s ‘It’ is all on you.”

3. The King – Burger King
Who remembers going into Burger King as a kid and getting one of those paper crowns? “The first iteration of the Burger King was an unsuspecting fellow with a lopsided crown sitting atop his burger throne, cradling a soda. Today, he’s a life-size dude with a massive plastic head. He’s always smiling, giving him an almost menacing air — he might be outside your bedroom window right now,” points out The Daily Meal.

You know who we are talking about. “That unsettling-yet-unforgettable maniacal grin has been producing nightmares across the U.S. since 2004, when the current, plastic-costumed incarnation was introduced to the world,” says Mashed.

Restaurant Clicks writes: “Sometimes creepy and odd is what restaurants need to make people pay attention. It’s also fitting that he’s wearing a paper crown, similar to the ones kids can get in-store.”

I had to ask my 9-year-old if she thought The King was creepy. Her response? “A little, but I like him.”

4. Wendy – Wendy’s

Consider Wendy’s founder Dave Thomas the ultimate girl dad. His daughter, Melinda, was the inspiration behind the smiling, freckled, red-headed girl that the fast food chain still embraces.

You don’t think of Wendy’s without conjuring up an image of this red-haired sweetheart. “She’s been the primary logo of Wendy’s since the beginning and her image is irrevocably tied to the restaurant chain. Her personality is a central part of the fast food chain – that of a sweet young girl with plenty of pep and enthusiasm. Plus, her association with her father gives the brand a family feel, even though it has grown into a huge corporation,” notes Ranker.

Sixstoreys adds, “the character has remained a consistent symbol of the all-American, wholesome cuisine that Wendy’s seeks to provide. Her warm and approachable demeanor instantly evokes a sense of familiarity and family, resonating with customers who appreciate the brand’s commitment to quality, freshness, and friendliness.”

“She isn’t animatronic, she doesn’t have any particular peculiarities, but she is one of the most famous faces in all of fast food,” points out WatchMojo.

5. Jack Box – Jack in the Box
Rounding out our top five is Jack Box, from (you guessed it) Jack in the Box. “An adaptation of the fast food chain’s original clown head mascot, the geometrical character has become a classic American mascot. The franchise has employed Jack in its advertising since 1994 – part of a larger rebranding effort after a 1993 food contamination scandal,” according to The Drum.

 

Source: https://studyfinds.org/best-fast-food-mascots/?nab=0

Gen Z blames social media for ruining their mental health — but no one’s signing off

(Photo by DimaBerlin on Shutterstock)

Three in four Gen Z Americans blame social media for harming their mental health.

The survey, commissioned by LG Electronics and conducted by Talker Research, offers compelling insights into the digital habits and emotional responses of 2,000 Gen Z social media users. In a startling revelation, 20% of Gen Zers cite Instagram and TikTok as detrimental to their well-being, followed by Facebook at 13%.

Despite these concerns, social media remains an integral part of Gen Z’s daily life. The average user spends a whopping five-and-a-half hours per day on social media apps, with 45% believing they outpace their friends in usage time. Boredom (66%), seeking laughter (59%), staying informed (49%), and keeping tabs on friends (44%) are the primary motivators for their online engagement.

However, this digital immersion comes at a cost. Nearly half of respondents (49%) report experiencing negative emotions from social media use, with stress and anxiety affecting 30% of them. Even more alarming, those who experience these negative feelings report that it takes only 38 minutes of scrolling before their mood begins to sour.

“We spend a significant portion of our lives online and often these experiences may leave us feeling drained and not mentally stimulated,” says Louis Giagrande, head of U.S. marketing at LG Electronics, in a statement. “We encourage everyone to be more conscious about the social media content they choose to engage with, bringing stronger balance, inspiration, and happiness to their lives. If we focus on optimism, we will be better equipped to deal with life’s challenges and build a happier life.”

The study also uncovered a desire for change among Gen Z users. In fact, 62% wish they could “reset” their social media feeds and start anew. Over half (53%) express frustration with content misalignment, feeling that their feeds don’t reflect their interests. Moreover, 54% believe they have limited or no control over the content populating their feeds, with only 16% claiming total control.

Yet, it’s not all doom and gloom. Four in five respondents (80%) associate social media with positive impacts on their mood. Comedy (65%), animal content (48%), beauty-related posts (40%), and prank videos (34%) are among the top mood boosters. Two-thirds of users say that social media has turned a bad day into a good one, and 44% believe it positively impacts their outlook on life.

Source: https://studyfinds.org/gen-z-blames-social-media-for-ruining-their-mental-health-but-no-ones-signing-off/?nab=0

The superstorms from space that could end modern life

A sudden solar superstorm is thought to be behind a devastating bombardment of high-energy particles around 14,000 years ago (Credit: Nasa)

The Sun is going through a period of high activity, but it is nothing compared to an enormous solar event that slammed into our planet 14,000 years ago. If one were to occur today, the effect on Earth could be devastating.

The oldest trees on Earth date back a whopping 5,000 years, living through all manner of events. They have stood through the rise and fall of the Roman Empire, the birth of Christianity, the European discovery of the Americas and the first Moon landing. Trees can even be fossilised in soil underground, giving us a connection to the last 30,000 years.

At first glance, these long-lived specimens might just appear to be static observers, but not so. They are doing something extraordinary as they grow – recording the activity of our Sun.

As trees photosynthesise throughout the year, they change in colouration depending on the season, appearing lighter in spring and darker by autumn. The result is a year-on-year record contained within the growth “rings” of the tree. “This gives us this really valuable archive of time capsules,” says Charlotte Pearson, a dendrochronologist – someone who studies tree rings – at the Laboratory of Tree-Ring Research at the University of Arizona, US.

For most of the 20th Century, dendrochronologists largely used tree rings to investigate change across wide chunks of history – a decade or more. Yet at certain points in time, the change they document has been more sudden and cataclysmic. What they are finding evidence of are massive solar events that reveal disturbing insights into the turbulent recent past of the star at the centre of our Solar System.

“Nobody was expecting a brief event to appear,” says Edouard Bard, a climatologist at the College de France in Paris. But in 2012 a then-PhD student called Fusa Miyake, now a cosmic ray physicist at Nagoya University in Japan, made an astonishing discovery. Studying Japanese cedar trees, she discovered a huge spike in a type of carbon known as carbon-14 in a single year more than 1,200 years ago, in 774 CE. “I was so excited,” says Miyake.

After doubting the data at first, Miyake and her colleagues soon came to an unnerving conclusion. The spike in carbon-14 must have come from something injecting huge numbers of particles into our atmosphere, since this radioactive isotope of carbon is produced when high-energy particles strike nitrogen in the atmosphere. Such spikes were once linked to cosmic events like supernovae, but studies have since suggested another probable cause: a monster burst of particles thrown out by the Sun, generated by superflares far bigger than anything seen in the modern era.
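The production mechanism mentioned above can be summarised in one textbook nuclear reaction (a standard relation, not something specific to Miyake's paper): a neutron generated by incoming high-energy particles strikes an atmospheric nitrogen-14 nucleus, ejecting a proton and leaving behind radioactive carbon-14, which is then oxidised to carbon dioxide and taken up by trees as they grow:

```latex
{}^{1}_{0}n + {}^{14}_{7}\mathrm{N} \;\rightarrow\; {}^{14}_{6}\mathrm{C} + {}^{1}_{1}p
```

Because each tree ring locks in the carbon-14 level of a single growing season, an unusually intense particle bombardment shows up as a sharp one-year spike.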

“They require an event that’s at least ten times bigger than anything we’ve observed,” says Mathew Owens, a space physicist at the University of Reading in the UK. The first recorded solar flare sighting dates back to the middle of the 19th Century and is associated with the great geomagnetic storm of 1859, which has become known as the Carrington Event, after one of the astronomers who observed it, Richard Carrington.

Spikes in the level of carbon-14 isotope in tree rings have revealed past spikes in high-energy particles bombarding the Earth (Credit: Getty Images)

Miyake’s discovery was confirmed by other studies of tree rings and analysis of ancient ice in cores collected from places such as Antarctica and Greenland. The latter contained correlated signatures of beryllium-10 and chlorine-36, which are produced in a similar atmospheric process to carbon-14. Since then, more Miyake events, as these massive bursts of cosmic radiation and particles are now known, have been unearthed. In total, seven well-studied events are known to have occurred over the past 15,000 years, while there are several other spikes in carbon-14 that have yet to be confirmed as Miyake events.

The most recent occurred just over 1,000 years ago in 993 CE. Researchers believe these events occur rarely – but at somewhat regular intervals, perhaps every 400 to 2,400 years.

The most powerful known Miyake event was discovered as recently as 2023 when Bard and his colleagues announced the discovery of a carbon-14 spike in fossilised Scots pine trees in Southern France dating back 14,300 years. The spike they saw was twice as powerful as any Miyake event seen before, suggesting these already-suspected monster events could be even bigger than previously thought.

The team behind the discovery of this superstorm from space had scoured the Southern French Alps for fossilised trees and found some that had been exposed by rivers. Using a chainsaw, they collected samples and examined them back in a laboratory, discovering evidence for an enormous carbon-14 spike. “We dreamed of finding a new Miyake event, and we were very, very happy to find this,” says Cécile Miramont, a dendrochronologist at Aix-Marseille University in France and a co-author on the study.

Source : https://www.bbc.com/future/article/20240815-miyake-events-the-giant-solar-superstorms-that-could-rock-earth

South American lungfish has largest genome of any animal

A South American lungfish, whose scientific name is Lepidosiren paradoxa, is seen at a laboratory at the Louisiana State University in Baton Rouge, Louisiana, U.S., March 18, 2024. (Credit: Katherine Seghers, Louisiana State University/Handout via REUTERS)

The South American lungfish is an extraordinary creature – in some sense, a living fossil. Inhabiting slow-moving and stagnant waters in Brazil, Argentina, Peru, Colombia, Venezuela, French Guiana and Paraguay, it is the nearest living relative to the first land vertebrates and closely resembles its primordial ancestors dating back more than 400 million years.

This freshwater species, called Lepidosiren paradoxa, also has another distinction: the largest genome – all the genetic information of an organism – of any animal on Earth. Scientists have now sequenced its genome, finding it to be about 30 times the size of the human genetic blueprint.

The metric for genome size was the number of base pairs, the fundamental units of DNA, in an organism’s cell nuclei. Stretched out like yarn unwound from a ball, the DNA in each cell of this lungfish would extend almost 200 feet (60 meters). The human genome would extend a mere 6-1/2 feet (2 meters).

“Our analyses revealed that the South American lungfish genome grew massively during the past 100 million years, adding the equivalent of one human genome every 10 million years,” said evolutionary biologist Igor Schneider of Louisiana State University, one of the authors of the study published this week in the journal Nature.

In fact, 18 of the 19 South American lungfish chromosomes – the threadlike structures that carry an organism’s genomic information – are each individually larger than the entire human genome, Schneider said.
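These figures hang together arithmetically. As a quick back-of-envelope sketch (Python used purely as a calculator, with only the numbers quoted above as inputs):

```python
# Back-of-envelope check of the article's figures.
human_dna_m = 2.0      # DNA length per human cell, per the article
size_ratio = 30        # lungfish genome is ~30x the human genome
lungfish_dna_m = human_dna_m * size_ratio
print(lungfish_dna_m)  # 60.0 meters

# Growth rate: one human-genome equivalent added every
# 10 million years, over the past 100 million years.
genome_equivalents_added = 100 / 10
print(genome_equivalents_added)  # 10.0
```

The 60-meter result matches the article’s “almost 200 feet” per cell.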

While the lungfish genome is huge, some plants’ genomes are larger. The current record holder is a fork fern species, Tmesipteris oblanceolata, found in the French overseas territory of New Caledonia in the Pacific. Its genome is more than 50 times the human genome’s size.

Until now, the largest-known animal genome was that of another lungfish, the Australian lungfish Neoceratodus forsteri; the South American lungfish’s genome is more than twice as big. The world’s four other lungfish species, also with large genomes, live in Africa.

Lungfish genomes are largely composed of repetitive elements – about 90% of the genome. The researchers said the massive expansion documented in lungfish genomes seems to be related to a weakening, in these species, of a mechanism that ordinarily suppresses such genomic repetition.

“Animal genome sizes vary greatly, but the significance and causes of genome size variation remain unclear. Our study advances our understanding of genome biology and structure by identifying mechanisms that control genome size while maintaining chromosome stability,” Schneider said.

The South American lungfish reaches up to about 4 feet (1.25 meters) long. While other fish rely on gills to breathe, lungfish also possess a pair of lung-like organs. The species lives in the oxygen-starved, swampy environs of the Amazon and Parana-Paraguay River basins, supplementing the oxygen it extracts from the water by breathing air.

Lungfish first appeared during the Devonian Period. It was during the Devonian that one of the most important moments in the history of life on Earth occurred – when fish possessing lungs and muscular fins evolved into the first tetrapods, the four-limbed land vertebrates that now include amphibians, reptiles, birds and mammals.

Source: https://www.reuters.com/science/south-american-lungfish-has-largest-genome-any-animal-2024-08-16

Why are more young adults getting colorectal cancer? The answer may be their diet

3D Rendered Medical Illustration of Male Anatomy showing Colorectal Cancer (© SciePro – stock.adobe.com)

Colorectal cancer rates are rising at an alarming rate among young adults, but the reason behind the increased diagnoses has been a medical mystery. However, the Cleveland Clinic has released a study that pinpoints a major cause for the spike in cases: diet.

When looking at the microbiomes of adults 60 years and younger with colorectal cancer, researchers found an unusually high level of diet-derived molecules called metabolites. The metabolites involved in colorectal cancer usually come from eating red and processed meat.

“Researchers—ourselves included—have begun to focus on the gut microbiome as a primary contributor to colon cancer risk. But our data clearly shows that the main driver is diet,” says Dr. Naseer Sangwan, a director at the Microbial Sequencing & Analytics Resource Core at the Cleveland Clinic and study co-author, in a media release. “We already know the main metabolites associated with young-onset risk, so we can now move our research forward in the correct direction.”

The study is published in the journal npj Precision Oncology.

This is a map of colorectal cancer hotspots in the United States. (Image credit: Rogers et al., American Journal of Cancer Research)

The researchers created an artificial intelligence algorithm to examine a wide range of datasets in published studies to determine what factors contributed most to colorectal cancer risk. One crucial area to explore was the gut microbiome. Previous research showed significant differences in gut composition between younger and older adults with colorectal cancer.

One of the most striking differences between younger and older adults with colorectal cancer is diet, reflected in the types of metabolites present in the gut microbiome. Younger people showed higher levels of metabolites involved in producing and metabolizing the amino acid arginine, along with metabolites involved in the urea cycle.

According to the authors, these metabolites likely result from overeating red meat and processed foods. They are currently examining national datasets to confirm their findings.

Of the two, changing a person’s diet is much simpler than completely resetting their microbiome. The findings suggest that eating less red and processed meat could lower a person’s risk of colorectal cancer.

“Even though I knew before this study that diet is an important factor in colon cancer risk, I didn’t always discuss it with my patients during their first visit. There is so much going on, it can already be so overwhelming,” says Dr. Suneel Kamath, a gastrointestinal oncologist at the Cleveland Clinic and senior author of the study. “Now, I always make sure to bring it up to my patients, and to any healthy friends or family members they may come in with, to try and equip them with the tools they need to make informed choices about their lifestyle.”

Making healthier dietary choices is also a more accessible method for preventing colorectal cancer. While screening is an important tool, Dr. Kamath notes it is impractical for doctors to give every person in the world a colonoscopy. In the future, simple tests that count specific metabolites as a marker for colorectal cancer risk may help with increased monitoring. On the research side, the authors plan to test whether particular diets or drugs involved in regulating arginine production and the urea cycle can help prevent or treat colorectal cancer in young adults.

Source: https://studyfinds.org/colorectal-cancer-diet/?nab=0

Shocking brain scans reveal consciousness remains among vegetative patients

(© Photographee.eu – stock.adobe.com)

For years, families of brain-injured patients have insisted their unresponsive loved ones were still “in there.” Now, a groundbreaking study on consciousness suggests they may have been right all along.

Researchers have discovered that approximately one in four patients who appear completely unresponsive may actually be conscious and aware but physically unable to show it. This phenomenon, known as cognitive motor dissociation, challenges long-held assumptions about disorders of consciousness and could have profound implications for how we assess and care for brain-injured patients.

The study, published in the New England Journal of Medicine, represents the largest and most comprehensive investigation of cognitive motor dissociation to date. An international team of researchers used advanced brain imaging and electrophysiological techniques to detect signs of consciousness in patients who seemed entirely unresponsive based on standard behavioral assessments.

The findings suggest that cognitive motor dissociation is far more common than previously thought. This has major implications for clinical care, end-of-life decision-making, and our fundamental understanding of consciousness itself.

The study examined 353 adult patients with disorders of consciousness resulting from various types of brain injuries. These conditions exist on a spectrum, ranging from coma (where patients are completely unresponsive and show no signs of awareness) to the vegetative state (where patients may open their eyes and have sleep-wake cycles but show no signs of awareness) to the minimally conscious state (where patients show some inconsistent but reproducible signs of awareness).

Traditionally, doctors have relied on bedside behavioral assessments to diagnose a patient’s level of consciousness. However, this approach assumes that if a patient can’t physically respond to commands or stimuli, they must not be aware. The new study challenges this assumption, revealing signs of consciousness that may not be outwardly visible.

Strikingly, the study found that 25% of patients who showed no behavioral signs of consciousness demonstrated brain activity consistent with awareness and the ability to follow commands. In other words, one in four patients who appeared to be in a vegetative or minimally conscious state without command-following ability was actually conscious and able to understand and respond mentally to instructions.

“Some patients with severe brain injury do not appear to be processing their external world. However, when they are assessed with advanced techniques such as task-based fMRI and EEG, we can detect brain activity that suggests otherwise,” says lead study author Yelena Bodien, PhD, in a statement.

Bodien is an investigator for the Spaulding-Harvard Traumatic Brain Injury Model Systems and Massachusetts General Hospital’s Center for Neurotechnology and Neurorecovery.

“These results bring up critical ethical, clinical, and scientific questions – such as how can we harness that unseen cognitive capacity to establish a system of communication and promote further recovery?”

The study also found that cognitive motor dissociation was more common in younger patients, those with traumatic brain injuries, and those who were assessed later after their initial injury. This suggests that some patients may recover cognitive abilities over time, even if they remain unable to communicate behaviorally.

Interestingly, even among patients who could follow commands behaviorally, more than 60% did not show responses on the brain imaging tests. This highlights the complex nature of consciousness and the limitations of current detection methods.

The findings raise challenging questions about how we diagnose disorders of consciousness, make end-of-life decisions, and allocate resources for long-term care and rehabilitation. It also opens up new avenues for potential therapies aimed at restoring communication in these patients.

While the study represents a significant advance, the authors caution that the techniques used are not yet widely available and require further refinement before they can be routinely used in clinical practice.

“To continue our progress in this field, we need to validate our tools and to develop approaches for systematically and pragmatically assessing unresponsive patients so that the testing is more accessible,” adds Bodien. “We know that cognitive motor dissociation is not uncommon, but resources and infrastructure are required to optimize detection of this condition and provide adequate support to patients and their families.”

Source: https://studyfinds.org/brain-consciousness-vegetative/?nab=0

AI model 98% accurate in detecting diseases — just by looking at your tongue

Researchers in Iraq and Australia say they have developed a computer algorithm that can analyze the color of a person’s tongue to detect their medical condition in real-time — with 98% accuracy.
vladimirfloyd – stock.adobe.com

This technology could be aah-mazing!


“Typically, people with diabetes have a yellow tongue; cancer patients a purple tongue with a thick greasy coating; and acute stroke patients present with an unusually shaped red tongue,” explained senior study author Ali Al-Naji, who teaches at Middle Technical University in Baghdad and the University of South Australia.

Examining the tongue for signs of disease has long been commonplace in Chinese medicine.
MDPI

“A white tongue can indicate anemia; people with severe cases of COVID-19 are likely to have a deep-red tongue,” Al-Naji continued. “An indigo- or violet-colored tongue indicates vascular and gastrointestinal issues or asthma.”

Source: https://nypost.com/2024/08/13/lifestyle/ai-model-98-accurate-in-detecting-diseases-just-by-looking-at-your-tongue/

Paradise Found: Experts Rank the West Coast’s Most Beautiful Beaches

A pelican in front of Haystack Rock on Cannon Beach in Oregon (Photo by Hank Vermote on Shutterstock)

The West Coast of the United States is home to some of the most breathtaking beaches in the world. From California’s dramatic cliffs to Oregon and Washington’s peaceful shores, there’s a beach for every vibe. With nearly 8,000 miles of shoreline, it would take years to get to every beach. That’s why we’ve created a list of the best West Coast beaches that travel experts across seven websites recommend adding to your bucket list. So, grab your sunscreen and towel, and discover what could become your new favorite spot. Is there a beach you love that’s not on the list? Let us know!

The List: Top 6 Must-Visit Beaches on the Left Coast

1. Cannon Beach, Oregon

Cannon Beach, Oregon (Photo by Tim Mossholder on Unsplash)

You won’t be able to put your camera down when visiting Cannon Beach along the Oregon coast. The breathtaking sunsets rival those in far-off lands, with the towering Haystack Rock providing an incredible backdrop. Nestled in the charming town of Cannon Beach, this coastal gem is a favorite spot for day-trippers from Portland, says Roam the Northwest.

You’ll also encounter plenty of wildlife at the beach, from tide pools to unique marine life. Nearby, Ecola State Park offers hiking trails and lookouts – perfect for selfies. Sixt recommends a stroll through Cannon Beach’s downtown area, filled with unique boutiques and galleries. It’s a full-day adventure!

Swimming might not be the main attraction at Cannon Beach, but that doesn’t bother visitors, reports Your Tango. There’s plenty to do, from exploring vibrant nature trails to walking to Haystack Rock at low tide. The beach promises fun times for everyone, even if you don’t take a dip!

2. La Jolla Beach, California

La Jolla Beach, California (Photo by Danita Delimont on Shutterstock)

According to USA Today, this popular San Diego beach is a dream come true with its miles of turquoise water and gentle waves, making it the perfect place to learn to surf. The swimming and snorkeling are unbeatable with plenty of colorful fish and marine life.

The beach is an ideal destination for families looking to enjoy the sand and surf and to explore the nearby State Marine Reserve. Roam the Northwest recommends bringing a bike to tour the Coast Walk Trail or visit other nearby beaches.

If you enjoy kayaking or snorkeling, Your Tango says that this beach offers some of the most secluded caves to explore. Just a short walk from La Jolla Cove is the Children’s Pool, an ideal spot for families with small children.

3. Rialto Beach, Washington

Sunset on Rialto Beach, Washington (Photo by Jay Yuan on Shutterstock)

The highlight of this stunning Olympic Coast beach is the aptly named Hole-in-the-Wall, located about a mile north of the parking lot. Accessible only at low tide, this natural rock formation is perfect for exploring and taking photos. It frames the sea stacks that line this stretch of beach beautifully, according to Roam the Northwest.

The water can be chilly in the summer, around 59 degrees, but Cheapism says the stunning scenery more than makes up for it. You’ll find massive sea stacks just offshore, piles of driftwood scattered along the beach and the famous Hole-in-the-Wall sea cave arch. The wildlife here is a real treat too—otters, whales, and seals are regulars!

This beach is one of the rare spots where you can bring your pets, but Yard Barker says you must keep them leashed and not let them go past Ellen Creek. It’s a popular place for beach camping too, though sadly, your four-legged friends have to sit that one out.

4. Ruby Beach, Washington

Ruby Beach (Photo by Sean Pavone on Shutterstock)

This is one of Washington State’s best-kept secrets according to Yard Barker. Adjacent to Olympic National Park, this spot is more “beachy” than Rialto. If it weren’t for the cooler weather, you might think you were in California. This spot is a must-add to any vacation itinerary.

Ruby Beach is conveniently located off Highway 101 and is perfect for day-trippers. With its towering sea stacks and cobbled stones, Roam the Northwest guarantees you’ll spend hours beachcombing and soaking in the wild beauty. Its remote charm and stunning landscapes keep people coming back for more.

During low tide, visitors can explore rocky areas and discover marine life in the tide pools, while photographers capture the scenic sea stacks and driftwood. For a more active experience, the nearby Olympic National Park offers coastal trails with stunning views of the Pacific Ocean. As Sixt highlights, the 1.5-mile hike to the Hoh River is breathtaking, featuring sea stacks, driftwood, and the chance to spot eagles and other wildlife.

Source: https://studyfinds.org/best-west-coast-beaches/?nab=0

Going vegan vs. Mediterranean diet: Surprising study reveals which is healthier

(© Mustafa – stock.adobe.com)

The Mediterranean diet has long been touted as the gold standard for healthy eating, but a new contender has emerged from an unexpected corner. Recent research shows that a low-fat vegan diet not only promotes more weight loss but also dramatically reduces harmful substances in our food.

The study, conducted by researchers at the Physicians Committee for Responsible Medicine, a nonprofit that promotes plant-based eating, compared the effects of a Mediterranean diet and a low-fat vegan diet on overweight adults. Participants on the vegan diet lost an average of 6 kilograms (about 13 pounds) more than those on the Mediterranean diet, with no change in their physical activity.

But the benefits, published in the journal Frontiers in Nutrition, didn’t stop at weight loss. The vegan diet also led to a dramatic 73% reduction in dietary advanced glycation end-products (AGEs). These harmful compounds, formed when proteins or fats combine with sugars, have been linked to various health issues, including inflammation, oxidative stress, and an increased risk of chronic diseases like Type 2 diabetes and cardiovascular disease.

Why you should eliminate AGEs from your diet

To understand AGEs, imagine them as unwanted houseguests that overstay their welcome in your body. They form naturally during normal metabolism, but they also sneak in through our diet, especially in animal-based and highly processed foods. AGEs are particularly abundant in foods cooked at high temperatures, such as grilled meats or fried foods. They can accumulate in our bodies over time, causing damage to tissues and contributing to the aging process – hence their nickname, “glycotoxins.”

The Mediterranean diet, long praised for its health benefits, surprisingly showed no significant change in dietary AGE levels. This finding challenges the perception that the Mediterranean diet is the gold standard for healthy eating. The vegan diet, on the other hand, achieved its AGE-busting effects primarily by eliminating meat consumption (which accounted for 41% of the AGE reduction), minimizing added fats (27% of the reduction), and avoiding dairy products (14% of the reduction).

These results suggest that a low-fat vegan diet could be a powerful tool in the fight against obesity and its related health issues. By reducing both body weight and harmful AGEs, this dietary approach may offer a two-pronged attack on factors that contribute to chronic diseases.

Mediterranean diet not best for weight loss?
The study’s lead author, Dr. Hana Kahleova, says that the vegan diet’s benefits extended beyond just numbers on a scale. The reduction in AGEs could have far-reaching implications for overall health, potentially lowering the risk of various age-related diseases.

“The study helps bust the myth that a Mediterranean diet is best for weight loss,” says Kahleova, the director of clinical research at the Physicians Committee for Responsible Medicine, in a statement. “Choosing a low-fat vegan diet that avoids the dairy and oil so common in the Mediterranean diet helps reduce intake of harmful advanced glycation end-products leading to significant weight loss.”

This research adds to a growing body of evidence supporting the benefits of plant-based diets. Previous studies have shown that vegetarian and vegan diets can reduce the risk of developing metabolic syndrome and Type 2 diabetes by about 50%. The dramatic reduction in dietary AGEs observed in this study may help explain some of these protective effects.

Source: https://studyfinds.org/vegan-vs-mediterranean-diet-which-is-healthier/?nab=0

Survey says it takes nearly 2 months of exercise before you’ll start to look more fit

(© rangizzz – stock.adobe.com)

The poll of 2,000 adults reveals what goals people prioritize when it comes to their fitness. Above all, they’re aiming to lose a certain amount of weight (43%), increase their general strength (43%) and increase their general mobility (35%).

However, 48 percent are worried about potentially losing the motivation to get fit and 65 percent believe the motivation to increase their level of physical fitness wanes over time.

According to respondents, the motivation to keep going lasts for about four weeks before needing a new push.

The survey, commissioned by Optimum Nutrition and conducted by TalkerResearch, finds that a majority of Americans (89%) say their diet affects their level of fitness motivation.

Nearly three in 10 (29%) believe they don’t get enough protein in their diet, lacking it either “all the time” (19%) or “often” (40%).

Gen X respondents feel like they are lacking protein the most out of all generations (35%), compared to millennials (34%), Gen Z (27%) and baby boomers (21%). Plus, 35 percent of women don’t think they get enough protein, vs. 23 percent of men.

The average person has two meals per day that don’t include protein, but 61 percent would be more likely to increase their protein intake to help achieve their fitness goals.

As people reflect on health and wellness goals, the most common experiences that make people feel out of shape include running out of breath often (49%) and trying on clothing that no longer fits (46%).

Over a quarter (29%) say they realized they were out of shape after not being able to walk up a flight of stairs without feeling winded.

Source: https://studyfinds.org/survey-says-it-takes-nearly-2-months-of-exercise-before-youll-start-to-look-more-fit/

Gold goes 2D: Scientists create ultra-thin ‘goldene’ sheets

Lars Hultman, professor of thin film physics and Shun Kashiwaya, researcher at the Materials Design Division at Linköping University. (Credit: Olov Planthaber)

In a remarkable feat of nanoscale engineering, scientists have created the world’s thinnest gold sheets at just one atom thick. This new material, dubbed “goldene,” could revolutionize fields from electronics to medicine, offering unique properties that bulk gold simply can’t match.

The research team, led by scientists from Linköping University in Sweden, managed to isolate single-atom layers of gold by cleverly manipulating the metal’s atomic structure. Their findings, published in the journal Nature Synthesis, represent a significant breakthrough in the rapidly evolving field of two-dimensional (2D) materials.

Since the discovery of graphene — single-atom-thick sheets of carbon — in 2004, researchers have been racing to create 2D versions of other elements. While 2D materials made from carbon, boron, and even iron have been achieved, gold has proven particularly challenging. Previous attempts resulted in gold sheets several atoms thick or required the gold to be supported by other materials.

The Swedish team’s achievement is particularly noteworthy because they created free-standing sheets of gold just one atom thick. This ultra-thin gold, or goldene, exhibits properties quite different from its three-dimensional counterpart. For instance, the atoms in goldene are packed more tightly together, with about 9% less space between them compared to bulk gold. This compressed structure leads to changes in the material’s electronic properties, which could make it useful for a wide range of applications.

One of the most exciting potential uses for goldene is in catalysis, which is the process of speeding up chemical reactions. Gold nanoparticles are already used as catalysts in various industrial processes, from converting harmful vehicle emissions into less dangerous gases to producing hydrogen fuel. The researchers believe that goldene’s extremely high surface-area-to-volume ratio could make it an even more efficient catalyst.

The creation of goldene also opens up new possibilities in fields like electronics, photonics, and medicine. For example, the material’s unique optical properties could lead to improved solar cells or new types of sensors. In medicine, goldene might be used to create ultra-sensitive diagnostic tools or to deliver drugs more effectively within the body.

How They Did It: Peeling Gold Atom by Atom
The process of creating goldene is almost as fascinating as the material itself. The researchers used a technique that might be described as atomic-scale sculpting, carefully removing unwanted atoms to leave behind a single layer of gold.

They started with a material called Ti3AuC2, which is part of a family of compounds known as MAX phases. These materials have a layered structure, with sheets of titanium carbide (Ti3C2) alternating with layers of gold atoms. The challenge was to remove the titanium carbide layers without disturbing the gold.

To accomplish this, the team used a chemical etching process. They immersed the Ti3AuC2 in a carefully prepared solution containing potassium hydroxide and potassium ferricyanide, known as Murakami’s reagent. This solution selectively attacks the titanium carbide layers, gradually dissolving them away.

However, simply etching away the titanium carbide wasn’t enough. Left to their own devices, the freed gold atoms would quickly clump together, forming 3D nanoparticles instead of 2D sheets. To prevent this, the researchers added surfactants — molecules that help keep the gold atoms spread out in a single layer.

Two key surfactants were used: cetrimonium bromide (CTAB) and cysteine. These molecules attach to the surface of the gold, creating a protective barrier that prevents the atoms from coalescing. The entire process took about a week, with the researchers carefully controlling the concentration of the etching solution and surfactants to achieve the desired result.

For the first time, scientists have managed to create sheets of gold only a single atom layer thick. (Credit: Olov Planthaber)

Results: A New Form of Gold Emerges

The team’s efforts resulted in sheets of gold just one atom thick, confirmed through high-resolution electron microscopy. These goldene sheets showed several interesting properties:

  1. Compressed structure: The gold atoms in goldene are packed about 9% closer together than in bulk gold. This compression changes how the electrons in the material behave, potentially leading to new electronic and optical properties.
  2. Increased binding energy: X-ray photoelectron spectroscopy revealed that the electrons in goldene are more tightly bound to their atoms compared to bulk gold. This shift in binding energy could affect the material’s chemical reactivity.
  3. Rippling and curling: Unlike perfectly flat sheets, the goldene layers showed some rippling and curling, especially at the edges. This behavior is common in 2D materials and can influence their properties.
  4. Stability: Computer simulations suggested that goldene should be stable at room temperature, although the experimental samples showed some tendency to form blobs or clump together over time.

The researchers also found that they could control the thickness of the gold sheets by adjusting their process. Using slightly different conditions, they were able to create two- and three-atom-thick sheets of gold as well.

Limitations and Challenges

  1. Scale: The current process produces relatively small sheets of goldene, typically less than 100 nanometers across. Scaling up production to create larger sheets will be crucial for many potential applications.
  2. Stability: Although computer simulations suggest goldene should be stable, the experimental samples showed some tendency to curl and form blobs, especially at the edges. Finding ways to keep the sheets flat and prevent them from clumping together over time will be important.
  3. Substrate dependence: The goldene sheets were most stable when still partially attached to the original Ti3AuC2 material or when supported on a substrate. Creating large, free-standing sheets of goldene remains a challenge.
  4. Purity: The etching process leaves some residual titanium and carbon atoms mixed in with the gold. While these impurities are minimal, they could affect the material’s properties in some applications.
  5. Reproducibility: The process of creating goldene is quite sensitive to the exact conditions used. Ensuring consistent results across different batches and scaling up production will require further refinement of the technique.

The surprising cure for chronic back pain? Just take a walk

(© glisic_albina – stock.adobe.com)

For anyone who has experienced the debilitating effects of low back pain, the results of an eye-opening new study may be a game-changer. Researchers have found that a simple, accessible program of progressive walking and education can significantly reduce the risk of constant low back pain flare-ups in adults. The implications are profound — no longer does managing this pervasive condition require costly equipment or specialized rehab facilities. Instead, putting on a pair of sneakers and taking a daily stroll could be one of the best preventative therapies available.

Australian researchers, publishing their work in The Lancet, recruited over 700 adults across the country who had recently recovered from an episode of non-specific low back pain, which lasted at least 24 hours and interfered with their daily activities. The participants were divided into two groups: one received an individualized walking and education program guided by a physiotherapist over six months, and the other received no treatment at all during the study.

Participants were then carefully followed for at least one year, up to a maximum of nearly three years for some participants. The researchers meticulously tracked any recurrences of low back pain that were severe enough to limit daily activities.

“Our study has shown that this effective and accessible means of exercise has the potential to be successfully implemented at a much larger scale than other forms of exercise,” says lead author Dr. Natasha Pocovi in a media release. “It not only improved people’s quality of life, but it reduced their need both to seek healthcare support and the amount of time taken off work by approximately half.”

Methodology: A Step-by-Step Approach

So, what did this potentially back-saving intervention involve? It utilized the principles of health coaching, where physiotherapists worked one-on-one with participants to design and progressively increase a customized walking plan based on the individual’s age, fitness level, and objectives.

The process began with a 45-minute consultation to understand each participant’s history, conduct an examination, and prescribe an initial “walking dose,” which was then gradually ramped up. The guiding target was to work up to walking at least 30 minutes per day, five times per week, by the six-month mark.

During this period, participants also attended lessons to help overcome fears about back pain while learning easy strategies to self-manage any recurrences. They were provided with a pedometer and a walking diary to track their progress. After the first 12 weeks, they could choose whether to keep using those motivational tools. Follow-up sessions with the physiotherapist every few weeks, either in-person or via video calls, were focused on monitoring progress, adjusting walking plans when needed, and providing encouragement to keep participants engaged over the long haul.

Results: Dramatic Improvement & A Manageable Approach
The impact of this straightforward intervention was striking. Compared to the control group, participants in the walking program experienced a significantly lower risk of suffering a recurrence of low back pain that limited daily activities. Overall, the risk fell by 28%.

Even more impressive, the average time before a recurrence struck was nearly double for those in the walking group (208 days) versus the control group (112 days). Recurrences of any low back pain, regardless of impact on activities, and recurrences requiring medical care both showed similarly promising reductions in risk. Simply put, people engaging in the walking program stayed pain-free for nearly twice as long as those who received no intervention.
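As a quick sanity check of those figures, here is a simple illustrative calculation using only the day counts and risk reduction reported above (not data drawn from the paper itself):

```python
# Reported time to an activity-limiting recurrence, by group
walking_days = 208  # walking-and-education group
control_days = 112  # no-intervention control group

ratio = walking_days / control_days
print(f"Walking group stayed pain-free {ratio:.2f}x as long")  # ~1.86x, i.e. "nearly double"

# The reported 28% drop in recurrence risk, expressed as a relative risk
relative_risk = 1 - 0.28
print(f"Recurrence risk relative to control: {relative_risk:.2f}")  # 0.72
```

The 1.86 ratio is what the article rounds to "nearly twice as long."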

Source: https://studyfinds.org/back-pain-just-take-a-walk/

Intermittent fasting may supercharge ‘natural killer’ cells to destroy cancer

Could skipping a few meals each week help you fight cancer? It might sound far-fetched, but new research suggests that one type of intermittent fasting could actually boost your body’s natural ability to defeat cancer.

(Credit: MIA Studio/Shutterstock)

A team of scientists at Memorial Sloan Kettering Cancer Center (MSK) has uncovered an intriguing link between fasting and the body’s immune system. Their study, published in the journal Immunity, focuses on a particular type of immune cell called natural killer (NK) cells. These cells are like the special forces of your immune system, capable of taking out cancer cells and virus-infected cells without needing prior exposure.

So, what’s the big deal about these NK cells? Well, they’re pretty important when it comes to battling cancerous tumors. Generally speaking, the more NK cells you have in a tumor, the better your chances of beating the disease. However, there’s a catch: the environment inside and around tumors is incredibly harsh. It’s like a battlefield where resources are scarce, and many immune cells struggle to survive.

This is where fasting enters the picture. The researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment and more effective at fighting cancer.

“Tumors are very hungry,” says immunologist Joseph Sun, PhD, the study’s senior author, in a media release. “They take up essential nutrients, creating a hostile environment often rich in lipids that are detrimental to most immune cells. What we show here is that fasting reprograms these natural killer cells to better survive in this suppressive environment.”

Illustration of a group of cancer cells. (© fotoyou – stock.adobe.com)

How exactly does intermittent fasting achieve this?
The study, which was conducted on mice, involved denying the animals food for 24 hours twice a week, with normal eating in between. This intermittent fasting approach had some pretty remarkable effects on the NK cells.

First off, fasting caused the mice’s glucose levels to drop and their levels of free fatty acids to rise. Free fatty acids are a type of lipid (fat) that can be used as an alternative energy source when other nutrients are scarce. The NK cells learned to use these fatty acids as fuel instead of glucose, which is typically their primary energy source.

“During each of these fasting cycles, NK cells learned to use these fatty acids as an alternative fuel source to glucose,” says Dr. Rebecca Delconte, the lead author of the study. “This really optimizes their anti-cancer response because the tumor microenvironment contains a high concentration of lipids, and now they’re able to enter the tumor and survive better because of this metabolic training.”

The fasting also caused the NK cells to move around the body in interesting ways. Many of them traveled to the bone marrow, where they were exposed to high levels of a protein called Interleukin-12. This exposure primed the NK cells to produce more of another protein called Interferon-gamma, which plays a crucial role in fighting tumors. Meanwhile, NK cells in the spleen were undergoing their own transformation, becoming even better at using lipids as fuel. The result? NK cells were pre-primed to produce more cancer-fighting substances and were better equipped to survive in the harsh tumor environment.

 

Source: https://studyfinds.org/intermittent-fasting-fight-cancer/

There are 6 different types of depression, brain pattern study shows

(Image by Feng Yu on Shutterstock)

Depression and anxiety disorders are among the most common mental health issues worldwide, yet current treatments often fail to provide relief for many sufferers. A major challenge has been the heterogeneity of these conditions. Patients with the same diagnosis can have vastly different symptoms and underlying brain dysfunctions. Now, a team of researchers at Stanford University has developed a novel approach to parse this heterogeneity, identifying six distinct “biotypes” of depression and anxiety based on specific patterns of brain circuit dysfunction.

The study, published in Nature Medicine, analyzed brain scans from over 800 patients with depression and anxiety disorders. By applying advanced computational techniques to these scans, the researchers were able to quantify the function of key brain circuits involved in cognitive and emotional processing at the individual level. This allowed them to group patients into biotypes defined by shared patterns of circuit dysfunction, rather than relying solely on symptoms.

Intriguingly, the six biotypes showed marked differences not just in their brain function, but also in their clinical profiles. Patients in each biotype exhibited distinct constellations of symptoms, cognitive impairments, and critically, responses to different treatments. For example, one biotype characterized by hyperconnectivity in circuits involved in self-referential thought and salience processing responded particularly well to behavioral therapy. Another, with heightened activity in circuits processing sadness and reward, was distinguished by prominent anhedonia (inability to feel pleasure).

These findings represent a significant step towards a more personalized, brain-based approach to diagnosing and treating depression and anxiety. By moving beyond one-size-fits-all categories to identify subgroups with shared neural mechanisms, this work opens the door to matching patients with the therapies most likely to help them based on the specific way their brain is wired. It suggests that brain circuit dysfunction may be a more meaningful way to stratify patients than symptoms alone.

More broadly, this study highlights the power of a transdiagnostic, dimensional approach to understanding mental illness. By focusing on neural circuits that cut across traditional diagnostic boundaries, we may be able to develop a more precise, mechanistic framework for classifying these conditions.

“To our knowledge, this is the first time we’ve been able to demonstrate that depression can be explained by different disruptions to the functioning of the brain,” says the study’s senior author, Dr. Leanne Williams, a professor of psychiatry and behavioral sciences, and the director of Stanford Medicine’s Center for Precision Mental Health and Wellness. “In essence, it’s a demonstration of a personalized medicine approach for mental health based on objective measures of brain function.”

The 6 Biotypes Of Depression

  1. The Overwhelmed Ruminator: This biotype has overactive brain circuits involved in self-reflection, detecting important information, and controlling attention. People in this group tend to have slowed-down emotional reactions and attention, but respond well to talk therapy.
  2. The Distracted Impulsive: This biotype has underactive brain circuits that control attention. They tend to have trouble concentrating and controlling impulses, and don’t respond as well to talk therapy.
  3. The Sensitive Worrier: This biotype has overactive brain circuits that process sadness and reward. They tend to have trouble experiencing pleasure and positive emotions.
  4. The Overcontrolled Perfectionist: This biotype has overactive brain circuits involved in regulating behavior and thoughts. They tend to have excessive negative emotions and threat sensitivity, trouble with working memory, but respond well to certain antidepressant medications.
  5. The Disconnected Avoider: This biotype has reduced connectivity in emotion circuits when viewing threatening faces, and reduced activity in behavior control circuits. They tend to have less rumination and faster reaction times to sad faces.
  6. The Balanced Coper: This biotype doesn’t show any major overactivity or underactivity in the brain circuits studied compared to healthy people. Their symptoms are likely due to other factors not captured by this analysis.

Of course, much work remains to translate these findings into clinical practice. The biotypes need to be replicated in independent samples and their stability over time needs to be established. We need to develop more efficient and scalable ways to assess circuit function that could be deployed in routine care. And ultimately, we will need prospective clinical trials that assign patients to treatments based on their biotype.

Nevertheless, this study represents a crucial proof of concept. It brings us one step closer to a future where psychiatric diagnosis is based not just on symptoms, but on an integrated understanding of brain, behavior, and response to interventions. As we continue to map the neural roots of mental illness, studies like this light the way towards more personalized and effective care for the millions of individuals struggling with these conditions.

“To really move the field toward precision psychiatry, we need to identify treatments most likely to be effective for patients and get them on that treatment as soon as possible,” says Dr. Jun Ma, the Beth and George Vitoux Professor of Medicine at the University of Illinois Chicago. “Having information on their brain function, in particular the validated signatures we evaluated in this study, would help inform more precise treatment and prescriptions for individuals.”

Source: https://studyfinds.org/there-are-6-different-types-of-depression-brain-pattern-study-shows/

Super dads, super kids: Science uncovers how the magic of fatherly care boosts child development

(Photo by Ketut Subiyanto from Pexels)

The crucial early years of a child’s life lay the foundation for their lifelong growth and happiness. Spending quality time with parents during these formative stages can lead to substantial positive changes in children. With that in mind, researchers have found an important link between a father’s involvement and their child’s successful development, both mentally and physically. Simply put, having a “super dad” may help raise super kids.

However, in Japan, where this study took place, a historical gender-based division of labor has limited fathers’ participation in childcare-related activities, impacting the development of children. Traditionally, Japanese fathers, especially those in their 20s to 40s, have been expected to prioritize work commitments over family responsibilities.

This cultural norm has resulted in limited paternal engagement in childcare, regardless of individual inclinations. The increasing number of mothers entering full-time employment further exacerbates the issue, leaving a void in familial support for childcare. With the central government advocating for paternal involvement in response to low fertility rates, Japanese fathers are now urged to become co-caregivers, shifting away from their traditional role as primary breadwinners.

While recent trends have found a rise in paternal childcare involvement, the true impact of this active participation on a child’s developmental outcomes has remained largely unexplored. This groundbreaking study, published in Pediatric Research and utilizing data from the largest birth cohort in Japan, set out to uncover the link between paternal engagement and infant developmental milestones. Led by Dr. Tsuguhiko Kato from the National Center for Child Health and Development and Doshisha University Center for Baby Science, the study delved into this critical aspect of modern parenting.

“In developed countries, the time fathers spend on childcare has increased steadily in recent decades. However, studies on the relationship between paternal care and child outcomes remain scarce. In this study, we examined the association between paternal involvement in childcare and children’s developmental outcomes,” explains Dr. Kato in a media release.

Leveraging data from the Japan Environment and Children’s Study, the research team assessed developmental milestones in 28,050 Japanese children. Paternal childcare involvement was measured when the children were six months old, and developmental markers were evaluated at age three. Additionally, the study explored whether maternal parenting stress, measured at 18 months, mediates these outcomes.

“The prevalence of employed mothers has been on the rise in Japan. As a result, Japan is witnessing a paradigm shift in its parenting culture. Fathers are increasingly getting involved in childcare-related parental activities,” Dr. Kato says.

The study measured paternal childcare involvement through seven key questions, gauging tasks like feeding, diaper changes, bathing, playtime, outdoor activities, and dressing. Each father’s level of engagement was scored accordingly. The research findings were then correlated with the extent of developmental delay in infants, as evaluated using the Ages and Stages questionnaire.

Source: https://studyfinds.org/super-dads-super-kids/

Women are losing their X chromosomes — What’s causing it?

(Credit: ustas7777777/Shutterstock)

A groundbreaking new study has uncovered genetic factors that may help explain why some women experience a phenomenon called mosaic loss of the X chromosome (mLOX) as they age. With mLOX, some of a woman’s blood cells randomly lose one of their two X chromosomes over time. Concerningly, scientists believe this genetic oddity may lead to the development of several diseases, including cancer.

Researchers with the National Institutes of Health found that certain inherited gene variants make some women more susceptible to developing mLOX in the first place. Other genetic variations they identified seem to give a selective growth advantage to the blood cells that retain one X chromosome over the other after mLOX occurs.

Importantly, the study published in the journal Nature confirmed that women with mLOX have an elevated risk of developing blood cancers like leukemia and increased susceptibility to infections like pneumonia. This underscores the potential health implications of this chromosomal abnormality.

As some women age, their white blood cells can lose a copy of chromosome X. A new study sheds light on the potential causes and consequences of this phenomenon. (Credit: Created by Linda Wang with Biorender.com)

Paper Summary

Methodology

To uncover the genetic underpinnings of mLOX, the researchers conducted a massive analysis of nearly 900,000 women’s blood samples from eight different biobanks around the world. About 12% of these women showed signs of mLOX in their blood cells.
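Put in absolute terms, a rough, illustrative calculation from the rounded figures above gives a sense of scale:

```python
total_women = 900_000  # approximate number of women's blood samples analyzed
mlox_share = 0.12      # roughly 12% showed signs of mLOX

affected = total_women * mlox_share
print(f"≈{affected:,.0f} women in the cohort showed mLOX")  # ≈108,000
```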

Results

By comparing the DNA of women with and without mLOX, the team pinpointed 56 common gene variants associated with developing the condition. Many of these genes are known to influence processes like abnormal cell division and cancer susceptibility. The researchers also found that rare mutations in a gene called FBXO10 could double a woman’s risk of mLOX. This gene likely plays an important role in the cellular processes that lead to randomly losing an X chromosome.

Source: https://studyfinds.org/women-losing-x-chromosomes/

Facially expressive people are more well-liked, socially successful

(Photo by airdone on Shutterstock)

Are you an open book, your face broadcasting every passing emotion, or more of a stoic poker face? Scientists at Nottingham Trent University say that wearing your heart on your sleeve (or rather, your face) could actually give you a significant social advantage. Their research shows that people who are more facially expressive are more well-liked by others, considered more agreeable and extraverted, and even fare better in negotiations if they have an amenable personality.

The study, led by Eithne Kavanagh, a research fellow at NTU’s School of Social Sciences, is the first large-scale systematic exploration of individual differences in facial expressivity in real-world social interactions. Across two studies involving over 1,300 participants, Kavanagh and her team found striking variations in how much people moved their faces during conversations. Importantly, this expressivity emerged as a stable individual trait. People displayed similar levels of facial expressiveness across different contexts, with different social partners, and even over time periods up to four months.

Connecting facial expressions with social success
So what drives these differences in facial communication styles and why do they matter? The researchers say that facial expressivity is linked to personality, with more agreeable, extraverted and neurotic individuals displaying more animated faces. But facial expressiveness also translated into concrete social benefits above and beyond the effects of personality.

In a negotiation task, more expressive individuals were more likely to secure a larger slice of a reward, but only if they were also agreeable. The researchers suggest that for agreeable folks, dynamic facial expressions may serve as a tool for building rapport and smoothing over conflicts.

Across the board, the results point to facial expressivity serving an “affiliative function,” or a social glue that fosters liking, cooperation and smoother interactions. Third-party observers and actual conversation partners consistently rated more expressive people as more likable.

Expressivity was also linked to being seen as more “readable,” suggesting that an animated face makes one’s intentions and mental states easier for others to decipher. Beyond frequency of facial movements, people who deployed facial expressions more strategically to suit social goals, such as looking friendly in a greeting, were also more well-liked.

“This is the first large scale study to examine facial expression in real-world interactions,” Kavanagh says in a media release. “Our evidence shows that facial expressivity is related to positive social outcomes. It suggests that more expressive people are more successful at attracting social partners and in building relationships. It also could be important in conflict resolution.”

Taking our faces at face value
The study, published in Scientific Reports, represents a major step forward in understanding the dynamics and social significance of facial expressions in everyday life. Moving beyond the traditional focus on static, stylized emotional expressions, it highlights facial expressivity as a consequential and stable individual difference.

The findings challenge the “poker face” intuition that a still, stoic demeanor is always most advantageous. Instead, they suggest that for most people, allowing one’s face to mirror inner states and intentions can invite warmer reactions and reap social rewards. The authors propose that human facial expressions evolved largely for affiliative functions, greasing the wheels of social cohesion and cooperation.

The results also underscore the importance of studying facial behavior situated in real-world interactions to unveil its true colors and consequences. Emergent technologies like automated facial coding now make it feasible to track the face’s mercurial movements in the wild, opening up new horizons for unpacking how this ancient communication channel shapes human social life.

Far from mere emotional readouts, our facial expressions appear to be powerful tools in the quest for interpersonal connection and social success. As the researchers conclude, “Being facially expressive is socially advantageous.” So the next time you catch yourself furrowing your brow or flashing a smile, know that your face just might be working overtime on your behalf to help you get ahead.

 

Source: https://studyfinds.org/facially-expressive-people-well-liked-socially-successful/

Can indie games inspire a creative boom from Indian developers?

Visai Games’ Venba won a Bafta Games Award this year

India might not be the first country that springs to mind when someone mentions video games, but it’s one of the fastest-growing markets in the world.
Analysts believe there could be more than half a billion players there by the end of this year.
Most of them are playing on mobile phones and tablets, and fans will tell you the industry is mostly known for fantasy sports games that let you assemble imaginary teams based on real players.
Despite concerns over gambling and possible addiction, they’re big business.
The country’s three largest video game startups – Game 24X7, Dream11 and Mobile Premier League – all provide some kind of fantasy sport experience and are valued at over $1bn.
But there’s hope that a crop of story-driven games making a splash worldwide could inspire a new wave of creativity and investment.
During the recent Summer Game Fest (SGF) – an annual showcase of new and upcoming titles held in Los Angeles and watched by millions – audiences saw previews of a number of story-rich titles from South Asian teams.

Detective Dotson will also have a companion TV series produced

One of those was Detective Dotson by Masala Games, based in Gujarat, about a failed Bollywood actor turned detective.
Industry veteran Shalin Shodhan is behind the game and tells BBC Asian Network this focus on unique stories is “bucking the trend” in India’s games industry.
He wants video games to become an “interactive cultural export” but says he’s found creating new intellectual property difficult.
“There really isn’t anything in the marketplace to make stories about India,” he says, despite the strength of some of the country’s other cultural industries.
“If you think about how much intellectual property there is in film in India, it is really surprising to think nothing indigenous exists as an original entertainment property in games,” he says.
“It’s almost like the Indian audience accepted that we’re just going to play games from outside.”
Another game shown during SGF was The Palace on the Hill – a “slice-of-life” farming sim set in rural India.
Mala Sen, from developer Niku Games, says games like this and Detective Dotson are what “India needed”.
“We know that there are a lot of people in India who want games where characters and setting are relatable to them,” she says.

Games developed by South Asian teams based in western countries have been finding critical praise and commercial success in recent years.

Venba, a cooking sim that told the story of a migrant family reconnecting with their heritage through food, became the first game of its kind to take home a Bafta Game Award this year.

Canada-based Visai Games, which developed the title, was revealed during SGF as one of the first beneficiaries of a new fund set up by Among Us developer Innersloth to boost fellow indie developers.

That will go towards their new, unnamed project based on ancient Tamil legends.

Another title awarded funding by the scheme was Project Dosa, from developer Outerloop, which sees players pilot giant robots, cook Indian food and fight lawyers.

Its previous game, Thirsty Suitors, was also highly praised and nominated for a Bafta award this year.

When games such as these resonate with players worldwide, it helps shift perceptions across the wider industry, says Mumbai-based Indrani Ganguly, of Duronto Games.

“Finally, people are starting to see we’re not just a place for outsource work,” she says.

“We’re moving from India being a technical space to more of a creative hub.

“I’m not 100% seeing a shift but that’s more of a mindset thing.

“People who are able to make these kinds of games have always existed but now there is funding and resource opportunities available to be able to act on these creative visions.”

Earth’s inner core rotation slows down and reverses direction. What does this mean for the planet?

(Image by DestinaDesign on Shutterstock)

Earth’s inner core, a solid iron sphere nestled deep within our planet, has slowed its rotation, according to new research. Scientists from the University of Southern California say their discovery challenges previous notions about the inner core’s behavior and raises intriguing questions about its influence on Earth’s dynamics.

The inner core, a mysterious realm located nearly 3,000 miles beneath our feet, has long been known to rotate independently of the Earth’s surface. Scientists have spent decades studying this phenomenon, believing it to play a crucial role in generating our planet’s magnetic field and shaping the convection patterns in the liquid outer core. Until now, it was widely accepted that the inner core was gradually spinning faster than the rest of the Earth, a process known as super-rotation. However, this latest study, published in the journal Nature, reveals a surprising twist in this narrative.

“When I first saw the seismograms that hinted at this change, I was stumped,” says John Vidale, Dean’s Professor of Earth Sciences at the USC Dornsife College of Letters, Arts and Sciences, in a statement. “But when we found two dozen more observations signaling the same pattern, the result was inescapable. The inner core had slowed down for the first time in many decades. Other scientists have recently argued for similar and different models, but our latest study provides the most convincing resolution.”

Slowing Spin, Reversing Rhythm
By analyzing seismic waves generated by repeating earthquakes in the South Sandwich Islands from 1991 to 2023, the researchers discovered that the inner core’s rotation had not only slowed down but had actually reversed direction. The team focused on a specific type of seismic wave called PKIKP, which traverses the inner core and is recorded by seismic arrays in northern North America. By comparing the waveforms of these waves from 143 pairs of repeating earthquakes, they noticed a peculiar pattern.

Many of the earthquake pairs exhibited seismic waveforms that changed over time, but remarkably, they later reverted to match their earlier counterparts. This observation suggests that the inner core, after a period of super-rotation from 2003 to 2008, had begun to sub-rotate, or spin more slowly than the Earth’s surface, essentially retracing its previous path. The researchers found that from 2008 to 2023, the inner core sub-rotated two to three times more slowly than its prior super-rotation.

The inner core began to decrease its speed around 2010, moving slower than the Earth’s surface. (Credit: USC Graphic/Edward Sotelo)

The study’s findings paint a captivating picture of the inner core’s rotational dynamics. The matching waveforms observed in numerous earthquake pairs indicate moments when the inner core returned to positions it had occupied in the past, relative to the mantle. This pattern, combined with insights from previous studies, reveals that the inner core’s rotation is far more complex than a simple, steady super-rotation.

The researchers discovered that the inner core’s super-rotation from 2003 to 2008 was faster than its subsequent sub-rotation, suggesting an asymmetry in its behavior. This difference in rotational rates implies that the interactions between the inner core, outer core, and mantle are more intricate than previously thought.

Limitations: Pieces Of The Core Puzzle
While the study offers compelling evidence for the inner core’s slowing and reversing rotation, it does have some limitations. The seismic data come from a single source region, repeating earthquakes in the South Sandwich Islands recorded by arrays in northern North America, so spatial coverage of the inner core is relatively sparse. Furthermore, the rotation model used to interpret the waveforms, despite its sophistication, is still a simplified representation of the complex dynamics at play.

The authors emphasize the need for additional high-resolution data from a broader range of locations to strengthen their findings. They also call for ongoing refinement of Earth system models to better capture the intricacies of the inner core’s behavior and its interactions with the outer core and mantle.

Source: https://studyfinds.org/earth-inner-core-rotation-slows/

Mars missions likely impossible for astronauts without kidney dialysis

Photo by Mike Kiev from Unsplash

New study shows damage from cosmic radiation, microgravity could be ‘catastrophic’ for human body
LONDON — As humanity sets its sights on deep space missions to the Moon, Mars, and beyond, a team of international researchers has uncovered a potential problem lurking in the shadows of these ambitious plans: spaceflight-induced kidney damage.

The findings, in a nutshell
In a new study that integrated a dizzying array of cutting-edge scientific techniques, researchers from University College London found that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts.

This sobering discovery, published in Nature Communications, not only highlights the immense challenges of long-duration space travel but also underscores the urgent need for effective countermeasures to protect the health of future space explorers.

“If we don’t develop new ways to protect the kidneys, I’d say that while an astronaut could make it to Mars they might need dialysis on the way back,” says the study’s first author, Dr. Keith Siew, from the London Tubular Centre, based at the UCL Department of Renal Medicine, in a media release. “We know that the kidneys are late to show signs of radiation damage; by the time this becomes apparent it’s probably too late to prevent failure, which would be catastrophic for the mission’s chances of success.”

Spaceflight stressors such as microgravity and galactic cosmic radiation may cause lasting kidney damage in astronauts. (© alonesdj – stock.adobe.com)

Methodology

To unravel the complex effects of spaceflight on the kidneys, the researchers analyzed a treasure trove of biological samples and data from 11 different mouse missions, five human spaceflights, one simulated microgravity experiment in rats, and four studies exposing mice to simulated galactic cosmic radiation on Earth.

The team left no stone unturned, employing a comprehensive “pan-omics” approach that included epigenomics (studying changes in gene regulation), transcriptomics (examining gene expression), proteomics (analyzing protein levels), epiproteomics (investigating protein modifications), metabolomics (measuring metabolite profiles), and metagenomics (exploring the microbiome). They also pored over clinical chemistry data (electrolytes, hormones, biochemical markers), assessed kidney function, and scrutinized kidney structure and morphology using advanced histology, 3D imaging, and in situ hybridization techniques.

By integrating and cross-referencing these diverse datasets, the researchers were able to paint a remarkably detailed and coherent picture of how spaceflight stressors impact the kidneys at multiple biological levels, from individual molecules to whole organ structure and function.

Results
The study’s findings are as startling as they are sobering. Exposure to microgravity and simulated cosmic radiation induced a constellation of detrimental changes in the kidneys of both humans and animals.

First, the researchers discovered that spaceflight alters the phosphorylation state of key kidney transport proteins, suggesting that the increased kidney stone risk in astronauts is not solely a secondary consequence of bone demineralization but also a direct result of impaired kidney function.

Second, they found evidence of extensive remodeling of the nephron – the basic structural and functional unit of the kidney. This included the expansion of certain tubule segments but an overall loss of tubule density, hinting at a maladaptive response to the unique stressors of spaceflight.

Perhaps most alarmingly, exposing mice to a simulated galactic cosmic radiation dose equivalent to a round trip to Mars led to overt signs of kidney damage and dysfunction, including vascular injury, tubular damage, and impaired filtration and reabsorption.

Piecing together the diverse “omics” datasets, the researchers identified several convergent molecular pathways and biological processes that were consistently disrupted by spaceflight, driving mitochondrial dysfunction, oxidative stress, inflammation, fibrosis, and cellular senescence – all hallmarks of chronic kidney disease.

Source: https://studyfinds.org/mars-missions-catastrophic-astronauts-kidneys/

Being more optimistic can keep you from procrastinating

(© chinnarach – stock.adobe.com)

We’ve all been there — a big task is looming over our heads, but we choose to put it off for another day. Procrastination is so common that researchers have spent years trying to understand what drives some people to chronically postpone important chores until the last possible moment. Now, researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future.

The findings, in a nutshell
Researchers found evidence that a pessimistic view of how stressful the future will be could increase the likelihood of falling into a pattern of severe procrastination. Moreover, the study published in Scientific Reports reveals that an optimistic view of the future wards off the urge to procrastinate.

“Our research showed that optimistic people — those who believe that stress does not increase as we move into the future — are less likely to have severe procrastination habits,” explains Saya Kashiwakura from the Graduate School of Arts and Sciences at the University of Tokyo, in a media release. “This finding helped me adopt a more light-hearted perspective on the future, leading to a more direct view and reduced procrastination.”

Researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future. (Credit: Ground Picture/Shutterstock)

Methodology
To examine procrastination through the lens of people’s perspectives on the past, present, and future, the researchers introduced new measures they dubbed the “chronological stress view” and “chronological well-being view.” Study participants were asked to rate their levels of stress and well-being across nine different timeframes: the past 10 years, past year, past month, yesterday, now, tomorrow, next month, next year, and the next 10 years.

The researchers then used clustering analysis to group participants based on the patterns in their responses over time – for instance, whether their stress increased, decreased or stayed flat as they projected into the future. Participants were also scored on a procrastination scale, allowing the researchers to investigate whether certain patterns of future perspective were associated with more or less severe procrastination tendencies.
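The shape-based grouping described above can be illustrated with a toy classifier. The sketch below is a simplified heuristic with hypothetical rules – not the clustering analysis the researchers actually ran – but it captures the four trajectory shapes the study reports:

```python
import numpy as np

# The nine timeframes participants rated, in chronological order.
TIMEFRAMES = ["past 10 yrs", "past year", "past month", "yesterday",
              "now", "tomorrow", "next month", "next year", "next 10 yrs"]

def classify_trajectory(ratings):
    """Assign a 9-point chronological stress profile to one of the four
    shapes reported in the study. A simplified heuristic for
    illustration, not the researchers' clustering algorithm."""
    r = np.asarray(ratings, dtype=float)
    past, present, future = r[:4].mean(), r[4], r[5:].mean()
    peak = int(np.argmax(r))
    if present < past and present < future:
        return "V-shaped"          # stress lowest in the present
    if 0 < peak < 4:
        return "skewed mountain"   # stress peaked in the (recent) past
    if future < past:
        return "descending"        # stress declines into the future
    return "ascending"             # stress rises into the future
```

On this heuristic, a profile like [8, 7, 6, 5, 4, 3, 2, 2, 1] falls in the “descending” cluster – the group the study found contained the fewest severe procrastinators.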

Results: Procrastination is All About Mindset
When examining the chronological stress view patterns, the analysis revealed four distinct clusters: “descending” (stress decreases over time), “ascending” (stress increases), “V-shaped” (stress is lowest in the present), and a “skewed mountain” shape where stress peaked in the past and declined toward the future.

Intriguingly, the researchers found a significant relationship between cluster membership and level of procrastination. The percentage of severe procrastinators was significantly lower in the “descending” cluster – those who believed their stress levels would decrease as they projected into the future.

Source: https://studyfinds.org/being-more-optimistic-can-keep-you-from-procrastinating/

Who’s most vulnerable to scams? Psychologists reveal who criminals target and why

(Credit: fizkes/Shutterstock)

About 1 in 6 Americans are age 65 or older, and that percentage is projected to grow. Older adults often hold positions of power, have retirement savings accumulated over the course of their lifetimes, and make important financial and health-related decisions – all of which makes them attractive targets for financial exploitation.

In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.

Financial exploitation represents one of the most common forms of elder abuse. Perpetrators are often individuals in the victims’ inner social circles – family members, caregivers, or friends – but can also be strangers.

When older adults experience financial fraud, they typically lose more money than younger victims. Those losses can have devastating consequences, especially since older adults have limited time to recoup them – dramatically reducing their independence, health, and well-being.

But older adults have been largely neglected in research on this burgeoning type of crime. We are psychologists who study social cognition and decision-making, and our research lab at the University of Florida is aimed at understanding the factors that shape vulnerability to deception in adulthood and aging.

Defining vulnerability
Financial exploitation involves a variety of exploitative tactics, such as coercion, manipulation, undue influence, and, frequently, some sort of deception.

The majority of current research focuses on people’s ability to distinguish between truth and lies during interpersonal communication. However, deception occurs in many contexts – increasingly, over the internet.

Our lab conducts laboratory experiments and real-world studies to measure susceptibility under various conditions: investment games, lie/truth scenarios, phishing emails, text messages, fake news and deepfakes – fabricated videos or images that are created by artificial intelligence technology.

To study how people respond to deception, we use measures like surveys, brain imaging, behavior, eye movement, and heart rate. We also collect health-related biomarkers, such as being a carrier of gene variants that increase risk for Alzheimer’s disease, to identify individuals with particular vulnerability.

And our work shows that an older adult’s ability to detect deception is not just about their individual characteristics. It also depends on how they are being targeted.

Individual risk factors
Better cognition, social and emotional capacities, and brain health are all associated with less susceptibility to deception.

Cognitive functions, such as how quickly our brain processes information and how well we remember it, decline with age and impact decision-making. For example, among people around 70 years of age or older, declines in analytical thinking are associated with reduced ability to detect false news stories.

Additionally, low memory function in aging is associated with greater susceptibility to email phishing. Further, according to recent research, this correlation is specifically pronounced among older adults who carry a gene variant that is a genetic risk factor for developing Alzheimer’s disease later in life. Indeed, some research suggests that greater financial exploitability may serve as an early marker of disease-related cognitive decline.

Social and emotional influences are also crucial. Negative mood can enhance somebody’s ability to detect lies, while positive mood in very old age can impair a person’s ability to detect fake news.

Lack of support and loneliness exacerbate susceptibility to deception. Social isolation during the COVID-19 pandemic has led to increased reliance on online platforms, and older adults with lower digital literacy are more vulnerable to fraudulent emails and robocalls.

Isolation during the COVID-19 pandemic has increased aging individuals’ vulnerability to online scams. (© Andrey Popov – stock.adobe.com)

Finally, an individual’s brain and body responses play a critical role in susceptibility to deception. One important factor is interoceptive awareness: the ability to accurately read our own body’s signals, like a “gut feeling.” This awareness is correlated with better lie detection in older adults.

In one study, financially exploited older adults had a significantly smaller insula – a brain region key to integrating bodily signals with environmental cues – than older adults who had been exposed to the same threat but avoided it. Reduced insula activity is also related to greater difficulty picking up on cues that make someone appear less trustworthy.

Types of effective fraud
Not all deception is equally effective on everyone.

Our findings show that email phishing that relies on reciprocation – people’s tendency to repay what another person has provided them – was more effective on older adults. Younger adults, on the other hand, were more likely to fall for phishing emails that employed scarcity: people’s tendency to perceive an opportunity as more valuable if they are told its availability is limited. For example, an email might alert you that a coin collection from the 1950s has become available for a special reduced price if purchased within the next 24 hours.

There is also evidence that as we age, we have greater difficulty detecting the “wolf in sheep’s clothing”: someone who appears trustworthy, but is not acting in a trustworthy way. In a card-based gambling game, we found that compared with their younger counterparts, older adults are more likely to select decks presented with trustworthy-looking faces, even though those decks consistently resulted in negative payouts. Even after learning about untrustworthy behavior, older adults showed greater difficulty overcoming their initial impressions.

Reducing vulnerability
Identifying who is especially at risk for financial exploitation in aging is crucial for preventing victimization.

We believe interventions should be tailored rather than one-size-fits-all. For example, perhaps machine learning algorithms could someday determine the most dangerous types of deceptive messages that certain groups encounter – such as in text messages, emails, or social media platforms – and provide on-the-spot warnings. Black and Hispanic consumers are more likely to be victimized, so there is also a dire need for interventions that resonate with their communities.

Prevention efforts would benefit from taking a holistic approach to help older adults reduce their vulnerability to scams. Training in financial, health, and digital literacy are important, but so are programs to address loneliness.

People of all ages need to keep these lessons in mind when interacting with online content or strangers – but not only then. Unfortunately, financial exploitation often comes from individuals close to the victim.

Source: https://studyfinds.org/whos-most-vulnerable-to-scams/

Mushroom-infused ‘microdosing’ chocolate bars are sending people to the hospital, prompting investigation: FDA

The Food and Drug Administration (FDA) is warning consumers about a mushroom-infused chocolate bar that has reportedly sent some people to the hospital.

The FDA released an advisory message about Diamond Shruumz “microdosing” chocolate bars on June 7. The chocolate bars contain a “proprietary nootropics blend” that is said to give a “relaxed euphoric experience without psilocybin,” according to its website.

“The FDA and CDC, in collaboration with America’s Poison Centers and state and local partners, are investigating a series of illnesses associated with eating Diamond Shruumz-brand Microdosing Chocolate Bars,” the FDA’s website reads.

“Do not eat, sell, or serve Diamond Shruumz-Brand Microdosing Chocolate Bars,” the site warns. “FDA’s investigation is ongoing.”

The FDA is warning consumers against Diamond Shruumz chocolate bars. (FDA | iStock)

“Microdosing” is a practice where one takes a very small amount of psychedelic drugs with the intent of increasing productivity, inspiring creativity and boosting mood. According to Diamond Shruumz’s website, the brand said its products help achieve “a subtle, sumptuous experience and a more creative state of mind.”

“We’re talkin’ confections with a kick,” the brand said. “So if you like mushroom chocolate bars and want to mingle with some microdosing, check us out. We just might change how you see the world.”

But government officials warn that the products have caused seizures in some consumers and vomiting in others.

“People who became ill after eating Diamond Shruumz-brand Microdosing Chocolate Bars reported a variety of severe symptoms including seizures, central nervous system depression (loss of consciousness, confusion, sleepiness), agitation, abnormal heart rates, hyper/hypotension, nausea, and vomiting,” the FDA reported.

Eight people reportedly experienced reactions severe enough that they sought medical care.

At least eight people have suffered a variety of medical symptoms from the chocolates, including nausea. (iStock)

“All eight people have reported seeking medical care; six have been hospitalized,” the FDA’s press release said. “No deaths have been reported.”

Diamond Shruumz says on its website that its products are not necessarily psychedelic. Although the chocolate is marketed as promising a psilocybin-like experience, there is no psilocybin in it.

“There is no presence of psilocybin, amanita or any scheduled drugs, ensuring a safe and enjoyable experience,” the website claims. “Rest assured, our treats are not only free from psychedelic substances but our carefully crafted ingredients still offer an experience.”

“This allows you to indulge in a uniquely crafted blend designed for your pleasure and peace of mind.”

Officials warn consumers to keep the products out of the reach of minors, as kids and teens may be tempted to eat the chocolate bars.

Source: https://www.foxnews.com/health/mushroom-infused-microdosing-chocolate-bars-sending-people-hospital-prompting-investigation-fda

Elephants give each other ‘names,’ just like humans

(Photo by Unsplash+ in collaboration with Getty Images)

They say elephants never forget a face, and now, as it turns out, they seem to remember names too – that is, the “names” they have for one another. Yes, believe it or not, a new study shows that elephants actually have the rare ability to identify one another through unique calls, essentially giving one another human-like names when they converse.

Scientists from Colorado State University, along with a team of researchers from Save the Elephants and ElephantVoices, used machine learning to make this fascinating discovery. Their work suggests that elephants possess a level of communication and abstract thought that is more similar to ours than previously believed.

In the study, published in Nature Ecology and Evolution, the researchers analyzed hundreds of recorded elephant calls from Kenya’s Samburu National Reserve and Amboseli National Park. By training a sophisticated model to identify the intended recipient of each call based on its unique acoustic features, they could confirm that elephant calls contain a name-like component, a behavior they had suspected based on observation.

“Dolphins and parrots call one another by ‘name’ by imitating the signature call of the addressee. By contrast, our data suggest that elephants do not rely on imitation of the receiver’s calls to address one another, which is more similar to the way in which human names work,” says lead author Michael Pardo, who conducted the study as a postdoctoral researcher at CSU and Save the Elephants, in a statement.

Once the team had matched specific calls to their intended recipients, the scientists played back the recordings and observed the elephants’ reactions. When the calls were addressed to them, the elephants responded positively by calling back or approaching the speaker. In contrast, calls meant for other elephants elicited less enthusiasm, demonstrating that the elephants recognized their own “names.”

Two juvenile elephants greet each other in Samburu National Reserve in Kenya. (Credit: George Wittemyer)

Elephants’ Brains Even More Complex Than Realized

The ability to learn and produce new sounds, a prerequisite for naming individuals, is uncommon in the animal kingdom. This form of arbitrary communication, where a sound represents an idea without imitating it, is considered a higher-level cognitive skill that greatly expands an animal’s capacity to communicate.

Co-author George Wittemyer, a professor at CSU’s Warner College of Natural Resources and chairman of the scientific board of Save the Elephants, elaborated on the implications of this finding: “If all we could do was make noises that sounded like what we were talking about, it would vastly limit our ability to communicate.” He adds that the use of arbitrary vocal labels suggests that elephants may be capable of abstract thought.

To arrive at these conclusions, the researchers embarked on a four-year study that included 14 months of intensive fieldwork in Kenya. They followed elephants in vehicles, recording their vocalizations and capturing approximately 470 distinct calls from 101 unique callers and 117 unique receivers.

Kurt Fristrup, a research scientist in CSU’s Walter Scott, Jr. College of Engineering, developed a novel signal processing technique to detect subtle differences in call structure. Together with Pardo, he trained a machine-learning model to correctly identify the intended recipient of each call based solely on its acoustic features. This innovative approach allowed the researchers to uncover the hidden “names” within the elephant calls.
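The core idea – predicting a call’s addressee from its acoustic features alone – can be illustrated with a minimal nearest-centroid classifier. Everything below (the 2-D “features,” the elephant labels) is a made-up toy example, not the study’s actual model or data:

```python
import numpy as np

def fit_centroids(features, receivers):
    """Average the acoustic-feature vectors of all calls addressed to
    each receiver, yielding one centroid per elephant."""
    return {r: features[receivers == r].mean(axis=0)
            for r in np.unique(receivers)}

def predict_receiver(centroids, call):
    """Guess the addressee: the receiver whose centroid lies closest."""
    return min(centroids, key=lambda r: np.linalg.norm(call - centroids[r]))

# Toy data: 2-D "acoustic features" for calls addressed to two elephants.
features = np.array([[0.0, 0.1], [0.2, 0.0], [5.0, 5.1], [5.2, 4.9]])
receivers = np.array(["Elephant A", "Elephant A", "Elephant B", "Elephant B"])
centroids = fit_centroids(features, receivers)
```

If a classifier like this beats chance on held-out calls – as the study’s far richer model did – the calls must carry receiver-specific information, i.e. something name-like.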

Source: https://studyfinds.org/elephants-give-each-other-names/

Baby talk explained! All those sounds mean more than you think

Mother and baby laying down together (Photo by Ana Tablas on Unsplash)

From gurgling “goos” to squealing “wheees!”, the delightful symphony of sounds emanating from a baby’s crib may seem like charming gibberish to the untrained ear. However, a new study suggests that these adorable vocalizations are far more than just random noise — they’re actually a crucial stepping stone on the path to language development.

The research, published in PLOS One, took a deep dive into the vocal patterns of 130 typically developing infants over the course of their first year of life. Their discoveries challenge long-held assumptions about how babies learn to communicate.

Traditionally, many experts believed that infants start out making haphazard sounds, gradually progressing to more structured “baby talk” as they listen to and imitate the adults around them. This new study paints a different picture, one where babies are actively exploring and practicing different categories of sounds in what might be thought of as a precursor to speech.

Think of it like a baby’s very first music lesson. Just as a budding pianist might spend time practicing scales and chords, it seems infants devote chunks of their day to making specific types of sounds, almost as if they’re trying to perfect their technique.

The researchers reached this conclusion after sifting through an enormous trove of audio data captured by small recording devices worn by the babies as they went about their daily lives. In total, they analyzed over 1,100 daylong recordings, adding up to nearly 14,500 hours – or about 1.6 years – of audio.

Using special software to isolate the infant vocalizations, the research team categorized the sounds into three main types: squeals (high-pitched, often excited-sounding noises), growls (low-pitched, often “rumbly” sounds), and vowel-like utterances (which the researchers dubbed “vocants”).

Next, they zoomed in on five-minute segments from each recording, hunting for patterns in how these sound categories were distributed. The results were striking: 40% of the recordings showed significant “clustering” of squeals, with a similar percentage showing clustering of growls. In other words, the babies weren’t randomly mixing their sounds, but rather, they seemed to focus on one type at a time, practicing it intensively.
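A standard way to check that such clustering beats chance is a permutation test: shuffle the sequence of sounds and see how often a random ordering looks as concentrated as the real one. The sketch below uses a hypothetical segment size and dominance threshold, not the paper’s exact statistics:

```python
import random

def clustering_score(labels, segment_len):
    """Fraction of segments in which a single sound type supplies more
    than 80% of the vocalizations -- a crude proxy for 'clustering'."""
    segments = [labels[i:i + segment_len]
                for i in range(0, len(labels), segment_len)]
    dominated = sum(1 for s in segments
                    if max(s.count(t) for t in set(s)) / len(s) > 0.8)
    return dominated / len(segments)

def is_clustered(labels, segment_len=10, n_shuffles=500, seed=1):
    """Permutation test: is the observed score unlikely under shuffling?"""
    observed = clustering_score(labels, segment_len)
    rng = random.Random(seed)
    shuffled = list(labels)
    exceed = 0
    for _ in range(n_shuffles):
        rng.shuffle(shuffled)
        if clustering_score(shuffled, segment_len) >= observed:
            exceed += 1
    return exceed / n_shuffles < 0.05   # one-sided p < 0.05
```

A sequence of pure runs ("squeal, squeal, … growl, growl, …") scores as clustered, while an evenly interleaved sequence of the same sounds does not – the distinction the study drew between focused practice and random mixing.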

Source: https://studyfinds.org/baby-talk-explained/

Why do giraffes have long necks? Researchers may finally have the answer

Photo by Krish Radhakrishna from Unsplash

Everything in biology ultimately boils down to food and sex. To survive as an individual, you need food. To survive as a species, you need sex.

Not surprisingly, then, the age-old question of why giraffes have long necks has centered around food and sex. After debating this question for the past 150 years, biologists still cannot agree on which of these two factors was the most important in the evolution of the giraffe’s neck. In the past three years, my colleagues and I have been trying to get to the bottom of this question.

Necks for sex
In the 19th century, biologists Charles Darwin and Jean Baptiste Lamarck both speculated that giraffes’ long necks helped them reach acacia leaves high up in the trees, though they likely weren’t observing actual giraffe behavior when they came up with this theory. Several decades later, when scientists started observing giraffes in Africa, a group of biologists came up with an alternative theory based on sex and reproduction.

These pioneering giraffe biologists noticed how male giraffes, standing side by side, used their long necks to swing their heads and club each other. The researchers called this behavior “neck-fighting” and guessed that it helped the giraffes prove their dominance over each other and woo mates. Males with the longest necks would win these contests and, in turn, boost their reproductive success. That favorability, the scientists predicted, drove the evolution of long necks.

Since its inception, the necks-for-sex sexual selection hypothesis has overshadowed Darwin’s and Lamarck’s necks-for-food hypothesis.

The necks-for-sex hypothesis predicts that males should have longer necks than females since only males use them to fight, and indeed, they do. But adult male giraffes are also about 30% to 50% larger than female giraffes. All of their body components are bigger. So, my team wanted to find out if males have proportionally longer necks when accounting for their overall stature, comprised of their head, neck, and forelegs.
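That normalization is simple arithmetic: divide neck length by overall stature. The sketch below uses entirely hypothetical measurements to show how a male can have a longer neck in absolute terms yet a proportionally shorter one:

```python
def neck_proportion(head, neck, foreleg):
    """Neck length as a fraction of overall stature (head + neck +
    foreleg) -- the normalization needed to compare the sexes despite
    males being 30-50% larger overall."""
    return neck / (head + neck + foreleg)

# Hypothetical measurements (cm), for illustration only.
female = neck_proportion(head=55, neck=200, foreleg=180)   # ~0.460
male   = neck_proportion(head=65, neck=230, foreleg=230)   # ~0.438
```

Despite the male’s longer absolute neck (230 cm vs. 200 cm in this toy example), the female’s neck makes up a larger share of her stature – the adult pattern the study reports.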

Necks not for sex?
But it’s not easy to measure giraffe body proportions. For one, their necks grow disproportionately fast during the first six to eight years of their life. And in the wild, you can’t tell exactly how old an individual animal is. To get around these problems, we measured body proportions in captive Masai giraffes in North American zoos. Here, we knew the exact age of the giraffes and could then compare this data with the body proportions of wild giraffes that we knew confidently were older than 8 years.

To our surprise, we found that adult female giraffes have proportionally longer necks than males, which contradicts the necks-for-sex hypothesis. We also found that adult female giraffes have proportionally longer body trunks, while adult males have proportionally longer forelegs and thicker necks.

Giraffe babies don’t have any of these sex-specific body proportion differences. They only appear as giraffes are reaching adulthood.

Finding that female giraffes have proportionally longer necks and longer body trunks led us to propose that females, not males, drove the evolution of the giraffe’s long neck – and not for sex, but for food and reproduction. Our theory agrees with Darwin and Lamarck that food was the major driver of the giraffe’s neck evolution, but adds an emphasis on female reproductive success.

A shape to die for
Giraffes are notoriously picky eaters and browse on fresh leaves, flowers, and seed pods. Female giraffes especially need enough to eat because they spend most of their adult lives either pregnant or providing milk to their calves.

Females tend to use their long necks to probe deep into bushes and trees to find the most nutritious food. By contrast, males tend to feed high in trees by fully extending their necks vertically. Females need proportionally longer trunks to grow calves that can be well over 6 feet tall at birth.

For males, I’d guess that their proportionally longer forelegs are an adaptation that allows them to mount females more easily during sex. While we found that their necks might not be as proportionally long as females’ necks are, they are thicker. That’s probably an adaptation that helps them win neck fights.

Source: https://studyfinds.org/why-do-giraffes-have-long-necks/

Eleven tonnes of rubbish taken off Himalayan peaks

Fewer permits were issued and fewer climbers died on Mount Everest in 2024 than in 2023.

The Nepalese army says it has removed eleven tonnes of rubbish, four corpses and one skeleton from Mount Everest and two other Himalayan peaks this year.

It took troops 55 days to recover the rubbish and bodies from Everest, Nuptse and Lhotse.

More than fifty tonnes of waste and more than 200 bodies are estimated to remain on Everest.

The army began conducting an annual clean-up of the mountain – often described as the world’s highest garbage dump – in 2019, amid concerns about overcrowding and climbers queueing in dangerous conditions to reach the summit.

The five clean-ups have collected 119 tonnes of rubbish, 14 human corpses and some skeletons, the army says.

This year, authorities aimed to reduce rubbish and improve rescues by requiring climbers to wear tracking devices and bring back their own poo.

In the future, the government plans to create a mountain ranger team to monitor rubbish and put more money toward its collection, Nepal’s Department of Tourism director of mountaineering Rakesh Gurung told the BBC.

For the spring climbing season that ended in May, the government issued permits to 421 climbers, down from a record-breaking 478 last year. Those numbers do not include Nepalese guides. In total, an estimated 600 people climbed the mountain this year.

This year, eight climbers died or went missing, compared to 19 last year.

A Briton, Daniel Paterson, and his Nepalese guide, Pastenji Sherpa, are among those missing after being hit by falling ice on 21 May.

Mr Paterson’s family started a fundraiser to hire a search team to find them, but said in an update on 4 June that recovery “is not possible at this time” because of the location and danger of the operation.

Mr Gurung said the number of permits was lower this year because of the global economic situation, because China is also issuing permits, and because of India’s national election, which reduced the number of climbers from that country.
Source: https://www.bbc.com/news/articles/cq5539lj1pqo

Women experience greater mental agility during menstruation

For female athletes, the impact of the menstrual cycle on physical performance has been a topic of much discussion. But what about the mental side of the game? A groundbreaking new study suggests that certain cognitive abilities, particularly those related to spatial awareness and anticipation, may indeed ebb and flow with a woman’s cycle.

(Photo 102762325 | Black Teen Brain © Denisismagilov | Dreamstime.com)

The findings, in a nutshell
Researchers from University College London tested nearly 400 participants on a battery of online cognitive tasks designed to measure reaction times, attention, visuospatial functions (like 3D mental rotation), and timing anticipation. The study, published in Neuropsychologia, included men, women on hormonal contraception, and naturally cycling women.

Fascinatingly, the naturally cycling women exhibited better overall cognitive performance during menstruation compared to any other phase of their cycle. This held true even though these women reported poorer mood and more physical symptoms during their period. In contrast, performance dipped during the late follicular phase (just before ovulation) and the luteal phase (after ovulation).

“What is surprising is that the participant’s performance was better when they were on their period, which challenges what women, and perhaps society more generally, assume about their abilities at this particular time of the month,” says Dr. Flaminia Ronca, first author of the study from UCL, in a university release.

“I hope that this will provide the basis for positive conversations between coaches and athletes about perceptions and performance: how we feel doesn’t always reflect how we perform.”

This study provides compelling preliminary evidence that sport-relevant cognitive skills may indeed fluctuate across the menstrual cycle, with a surprising boost during menstruation itself. If confirmed in future studies, this could have implications for understanding injury risk and optimizing mental training in female athletes.

Importantly, there was a striking mismatch between women’s perceptions and their actual performance. Many felt their thinking was impaired during their period when, in fact, it was enhanced. This points to the power of negative expectations and the importance of educating athletes about their unique physiology.

Source: https://studyfinds.org/womens-brains-show-more-mental-agility-during-their-periods/

Here’s why sugar wreaks havoc on gut health, worsens inflammatory bowel disease

(Photo by Alexander Grey from Unsplash)

There can be a lot of inconsistent dietary advice when it comes to gut health, but the warning that eating lots of sugar is harmful is among the most consistent of all. Scientists from the University of Pittsburgh are now showing that consuming excess sugar disrupts the cells that keep the colon healthy in mice with inflammatory bowel disease (IBD).

“The prevalence of IBD is rising around the world, and it’s rising the fastest in cultures with industrialized, urban lifestyles, which typically have diets high in sugar,” says senior author Timothy Hand, Ph.D., associate professor of pediatrics and immunology at Pitt’s School of Medicine and UPMC Children’s Hospital of Pittsburgh. “Too much sugar isn’t good for a variety of reasons, and our study adds to that evidence by showing how sugar may be harmful to the gut. For patients with IBD, high-density sugar — found in things like soda and candy — might be something to stay away from.”

In this study, researchers fed mice either a standard or high-sugar diet, and then mimicked IBD symptoms by exposing them to a chemical called DSS, which damages the colon.

Shockingly, all of the mice that ate a high-sugar diet died within nine days. All of the animals that ate a standard diet lived until the end of the 14-day experiment. To figure out where things went wrong, the team looked for answers inside the colon. Typically, the colon is lined with a layer of epithelial cells arranged into finger-like projections called crypts. These cells are continually replenished by dividing stem cells to keep the colon healthy.

“The colon epithelium is like a conveyor belt,” explains Hand in a media release. “It takes five days for cells to travel through the circuit from the bottom to the top of the crypt, where they are shed into the colon and defecated out. You essentially make a whole new colon every five days.”

(© T. L. Furrer – stock.adobe.com)

This system collapsed in mice fed a high-sugar diet
In some animals, the protective layer of cells was completely gone, leaving the colon filled with blood and immune cells. This suggests that sugar may harm the colon directly, rather than acting through the gut microbiome, as the team originally suspected.

To compare the findings to human colons, the researchers used poppy seed-sized intestinal cultures that could be grown in a lab dish. They found that as sugar concentrations increased, fewer cultures developed, which suggests that sugar hinders cell division.

“We found that stem cells were dividing much more slowly in the presence of sugar — likely too slow to repair damage to the colon,” says Hand. “The other strange thing we noticed was that the metabolism of the cells was different. These cells usually prefer to use fatty acids, but after being grown in high-sugar conditions, they seemed to get locked into using sugar.”

Hand adds that these findings may be key to strengthening existing links between sweetened drinks and worse IBD outcomes.

Source: https://studyfinds.org/sugar-wreaks-havoc-gut-health/

Shocking study claims pollution causes more deaths than war, disease, and drugs combined

(Credit: aappp/Shutterstock)

We often think of war, terrorism, and deadly diseases as the greatest threats to human life. But what if the real danger is something we encounter every day, something that’s in the air we breathe, the water we drink, and even in the noise that surrounds us? A new study published in the Journal of the American College of Cardiology reveals a startling truth: pollution, in all its forms, is now a greater health threat than war, terrorism, malaria, HIV, tuberculosis, drugs, and alcohol combined. Specifically, researchers estimate that manmade pollutants and climate change contribute to a staggering seven million deaths globally each year.

“Every year around 20 million people worldwide die from cardiovascular disease with pollutants playing an ever-increasing role,” explains Professor Jason Kovacic, Director and CEO of the Victor Chang Cardiac Research Institute in Australia, in a media release.

The findings, in a nutshell
The culprits behind this global death toll aren’t just the obvious ones like air pollution from car exhausts or factory chimneys. The study, conducted by researchers from prestigious institutions worldwide, shines a light on lesser-known villains: soil pollution, noise pollution, light pollution, and even exposure to toxic chemicals in our homes.

Think about your daily life. You wake up after a night’s sleep disrupted by the glow of streetlights and the hum of late-night traffic. On your way to work, you’re exposed to car fumes and the blaring horns of impatient drivers. At home, you might be unknowingly using products containing untested chemicals. All these factors, the study suggests, are chipping away at your heart health.

“Pollutants have reached every corner of the globe and are affecting every one of us,” Prof. Kovacic warns. “We are witnessing unprecedented wildfires, soaring temperatures, unacceptable road noise and light pollution in our cities and exposure to untested toxic chemicals in our homes.”

(© Quality Stock Arts – stock.adobe.com)

How do these pollutants harm our hearts?
Air Pollution: When you inhale smoke from a wildfire or exhaust fumes, these toxins travel deep into your lungs, enter your bloodstream, and then circulate throughout your body. It’s like sending tiny invaders into your system, causing damage wherever they go, including your heart.

Noise and Light Pollution: Ever tried sleeping with a streetlight shining through your window or with noisy neighbors? These disruptions do more than just annoy you—they mess up your sleep patterns. Poor sleep can lead to inflammation in your body, raise your blood pressure, and even cause weight gain. All of these are risk factors for heart disease.

Extreme Heat: Think of your heart as a car engine. On a scorching hot day, your engine works harder to keep cool. Similarly, during a heatwave, your heart has to work overtime. This extra strain, coupled with dehydration and reduced blood volume from sweating, can lead to serious issues like acute kidney failure.

Chemical Exposure: Many household items — from non-stick pans to water-resistant clothing — contain chemicals that haven’t been thoroughly tested for safety. Prof. Kovacic points out, “There are hundreds of thousands of chemicals that haven’t even been tested for their safety or toxicity, let alone their impact on our health.”

The statistics are alarming. Air pollution alone is linked to over seven million premature deaths per year, with more than half due to heart problems. During heatwaves, the risk of heat-related cardiovascular deaths can spike by over 10%. In the U.S., exposure to wildfire smoke has surged by 77% since 2002.

Source: https://studyfinds.org/pollution-causes-more-deaths/

Never-before-seen blue ants discovered in India

In the lush forests of India’s Arunachal Pradesh, a team of intrepid researchers has made a startling discovery: a never-before-seen species of ant that sparkles like a brilliant blue gemstone. The remarkable find marks the first new species of its genus to be identified in India in over 120 years.

The species, dubbed Paraparatrechina neela, was discovered by entomologists Dr. Priyadarsanan Dharma Rajan and Ramakrishnaiah Sahanashree, from the Ashoka Trust for Research in Ecology and the Environment (ATREE) in Bengaluru, along with Aswaj Punnath from the University of Florida. The name “neela” means blue in many Indian languages, and for good reason: this ant sports an eye-catching iridescent blue exoskeleton, unlike anything seen before in its genus.

Paraparatrechina is a widespread group of ants found across Asia, Africa, Australia and the Pacific. They are typically small, measuring just a few millimeters in length. Before this discovery, India was home to only one known species in the genus, Paraparatrechina aseta, which was described way back in 1902.

The researchers collected the dazzling P. neela specimens during an expedition in 2022 to the Siang Valley in the foothills of the Eastern Himalayas. Fittingly, this trip was part of a series called the “Siang Expeditions” – a project aiming to retrace the steps of a historic 1911-12 expedition that documented the region’s biodiversity.

Paraparatrechina neela — the blue ant discovered in India’s Himalayas. (Credit: Sahanashree R)

Over a century later, the area still holds surprises. The team found the ants living in a tree hole in a patch of secondary forest, at an altitude of around 800 meters. After carefully extracting a couple of specimens with an aspirator device, they brought them back to the lab for a closer look under the microscope. Their findings are published in the journal ZooKeys.

Beyond its “captivating metallic-blue color,” a unique combination of physical features distinguishes P. neela from its relatives. The body is largely blue, but the legs and antennae fade to a brownish-white. Compared to the light brown, rectangular head of its closest Indian relative, P. aseta, the sapphire ant has a subtriangular head. It also has one less tooth on its mandibles and a distinctly raised section on its propodeum (the first abdominal segment that’s fused to the thorax).

So what’s behind the blue? While pigments provide color for some creatures, in insects, hues like blue are usually the result of microscopic structural arrangements that reflect light in particular ways. Different layers and shapes of the exoskeleton can interact with light to produce shimmering, iridescent effects. This has evolved independently in many insect groups, but is very rare in ants.

The function of the blue coloration remains a mystery for now. In other animals, such striking hues can serve many possible roles – from communication and camouflage to thermoregulation.

“This vibrant feature raises intriguing questions. Does it help in communication, camouflage, or other ecological interactions? Delving into the evolution of this conspicuous coloration and its connections to elevation and the biology of P. neela presents an exciting avenue for research,” the authors write.

A view of Siang Valley. (Credit: Ranjith AP)

The Eastern Himalayas are known to be a biodiversity hotspot, but remain underexplored by scientists. Finding a new species of ant, in a genus that specializes in tiny, inconspicuous creatures, hints at the many more discoveries that likely await in the region’s forests. Who knows – maybe there are entire rainbow-hued colonies of ants hidden in the treetops!

Source: https://studyfinds.org/blue-ants-discovered/

Prenatal stress hormones may finally explain why infants won’t sleep at night

(Photo by Laura Garcia on Unsplash)

Babies exposed to higher stress hormone levels late in their mother’s pregnancy may have trouble falling asleep, researchers explain. The sleep research suggests that measuring cortisol during the third trimester can predict infant sleep patterns up to seven months after a baby’s birth.

Babies often wake up in the middle of the night and have trouble falling asleep. A team from the University of Denver says one possible but unexplored reason for this is how well the baby’s hypothalamic-pituitary-adrenal (HPA) axis is working. The HPA axis is well-known for regulating the stress response and has previously been linked with sleep disorders when it isn’t working properly. Cortisol is the end product of the HPA axis.

What is cortisol?

Cortisol is a steroid hormone produced by the adrenal glands, which are located on top of each kidney. It plays a crucial role in several body functions, including:

Regulation of metabolism: Cortisol helps regulate the metabolism of proteins, fats, and carbohydrates, releasing energy and managing how the body uses these macronutrients.

Stress response: Often referred to as the “stress hormone,” cortisol is released in response to stress and low blood-glucose concentration. It helps the body manage and cope with stress by altering immune system responses and suppressing non-essential functions in a fight-or-flight situation.

Anti-inflammatory effects: Cortisol has powerful anti-inflammatory capabilities, helping to reduce inflammation and assist in healing.

Blood pressure regulation: It helps in maintaining blood pressure and cardiovascular function.

Circadian rhythm influence: Cortisol levels fluctuate throughout the day, typically peaking in the morning and gradually falling to their lowest level at night.

Collecting hair samples is one way to measure fetal cortisol levels in the final trimester of pregnancy.

“Although increases in cortisol across pregnancy are normal and important for preparing the fetus for birth, our findings suggest that higher cortisol levels during late pregnancy could predict the infant having trouble falling asleep,” says lead co-author Melissa Nevarez-Brewster in a media release. “We are excited to conduct future studies to better understand this link.”

The team collected hair cortisol samples from 70 infants during the first few days after birth. Approximately 57% of the infants were girls. When each child was seven months old, parents completed a sleep questionnaire with questions such as how long it took on average for the children to fall asleep, how long babies stayed awake at night, and the number of times the infants woke up in the middle of the night. The researchers also collected data on each infant’s gestational age at birth and their family’s income.

Source: https://studyfinds.org/prenatal-stress-hormones-may-finally-explain-why-infants-wont-sleep-at-night/

How much stress is too much?

Pedro Figueras / pexels.com

COVID-19 taught most people that the line between tolerable and toxic stress – defined as persistent demands that lead to disease – varies widely. But some people will age faster and die younger from toxic stressors than others.

So, how much stress is too much, and what can you do about it?

I’m a psychiatrist specializing in psychosomatic medicine, which is the study and treatment of people who have physical and mental illnesses. My research is focused on people who have psychological conditions and medical illnesses, as well as those whose stress exacerbates their health issues.

I’ve spent my career studying mind-body questions and training physicians to treat mental illness in primary care settings. My forthcoming book is titled “Toxic Stress: How Stress is Killing Us and What We Can Do About It.”

A 2023 study of stress and aging over the life span – one of the first studies to confirm this piece of common wisdom – found that four measures of stress all speed up the pace of biological aging in midlife. It also found that persistent high stress ages people in a way comparable to the effects of smoking and low socioeconomic status, two well-established risk factors for accelerated aging.

The difference between good stress and the toxic kind

Good stress – a demand or challenge you readily cope with – is good for your health. In fact, the rhythm of these daily challenges, including feeding yourself, cleaning up messes, communicating with one another, and carrying out your job, helps to regulate your stress response system and keep you fit.

Toxic stress, on the other hand, wears down your stress response system in ways that have lasting effects, as psychiatrist and trauma expert Bessel van der Kolk explains in his bestselling book “The Body Keeps the Score.”

The earliest effects of toxic stress are often persistent symptoms such as headache, fatigue, or abdominal pain that interfere with overall functioning. After months of initial symptoms, a full-blown illness with a life of its own – such as migraine headaches, asthma, diabetes, or ulcerative colitis – may surface.

When we are healthy, our stress response systems are like an orchestra of organs that miraculously tune themselves and play in unison without our conscious effort – a process called self-regulation. But when we are sick, some parts of this orchestra struggle to regulate themselves, which causes a cascade of stress-related dysregulation that contributes to other conditions.

For instance, in the case of diabetes, the hormonal system struggles to regulate sugar. With obesity, the metabolic system has a difficult time regulating energy intake and consumption. With depression, the central nervous system develops an imbalance in its circuits and neurotransmitters that makes it difficult to regulate mood, thoughts and behaviors.

‘Treating’ stress
Though stress neuroscience in recent years has given researchers like me new ways to measure and understand stress, you may have noticed that in your doctor’s office, the management of stress isn’t typically part of your treatment plan.

Most doctors don’t assess the contribution of stress to a patient’s common chronic diseases such as diabetes, heart disease, and obesity, partly because stress is complicated to measure and partly because it is difficult to treat. In general, doctors don’t treat what they can’t measure.

Stress neuroscience and epidemiology have also taught researchers recently that the chances of developing serious mental and physical illnesses in midlife rise dramatically when people are exposed to trauma or adverse events, especially during vulnerable periods such as childhood.

Over the past 40 years in the U.S., the alarming rise in rates of diabetes, obesity, depression, PTSD, suicide, and addictions points to one contributing factor that these different illnesses share: toxic stress.

Toxic stress increases the risk for the onset, progression, complications, or early death from these illnesses.

Suffering from toxic stress
Because the definition of toxic stress varies from one person to another, it’s hard to know how many people struggle with it. One starting point is the fact that about 16% of adults report having been exposed to four or more adverse events in childhood. This is the threshold for higher risk for illnesses in adulthood.

Research dating back to before the COVID-19 pandemic also shows that about 19% of adults in the U.S. have four or more chronic illnesses. If you have even one chronic illness, you can imagine how stressful four must be.

And about 12% of the U.S. population lives in poverty, the epitome of a life in which demands exceed resources every day. For instance, if a person doesn’t know how they will get to work each day or doesn’t have a way to fix a leaking water pipe or resolve a conflict with their partner, their stress response system can never rest. One or any combination of threats may keep them on high alert or shut them down in a way that prevents them from trying to cope at all.

Add to these overlapping groups all those who struggle with harassing relationships, homelessness, captivity, severe loneliness, living in high-crime neighborhoods, or working in or around noise or air pollution. It seems conservative to estimate that about 20% of people in the U.S. live with the effects of toxic stress.

Source: https://studyfinds.org/how-much-stress-is-too-much/

Eye Stroke Cases Surge During Heatwave: Symptoms, Prevention Tips

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues.

How to take care of your eyes in summer | Image: Freepik

As heatwaves sweep across various regions, there has been a noticeable increase in eye stroke cases. This condition, also known as retinal artery occlusion, can cause sudden vision loss and is comparable to a brain stroke in its seriousness.

Impact of heatwaves on eye health 

The extreme heat can affect overall health, increasing the risk of heart diseases, brain disorders, and other organ issues. Notably, it can also lead to eye strokes due to dehydration and heightened blood pressure. Dehydration during hot weather makes the blood more prone to clotting, while high temperatures can exacerbate cardiovascular problems, raising the risk of arterial blockages.

Eye stroke

An eye stroke occurs when blood flow to the retina is obstructed, depriving it of oxygen and nutrients. This can cause severe retinal damage in minutes. Dehydration from heatwaves thickens the blood, making clots more likely, while heat stress can worsen cardiovascular conditions, further increasing eye stroke risk.

Signs and symptoms

Sudden Vision Loss: The most common symptom, this can be partial or complete, and typically painless.

Visual Disturbances: Sudden dimming or blurring of vision, where central vision is affected but peripheral vision remains intact.

Preventive measures

Stay Hydrated: Ensure adequate fluid intake to prevent dehydration.

Avoid Peak Sun Hours: Limit exposure to the sun during the hottest parts of the day.

Manage Chronic Conditions: Keep blood pressure and other chronic conditions under control.

Seek Immediate Medical Attention: Urgency is crucial, as delays can lead to permanent vision loss.

Source: https://www.republicworld.com/health/eye-stroke-cases-surge-during-heatwave-symptoms-prevention-tips/?amp=1

5 Hidden Effects Of Childhood Neglect

(Photo by Volurol on Shutterstock)

Trauma, abuse, and neglect — in the current cultural landscape, it’s not hard to find a myriad of discussions on these topics. But with so many people chiming in on the conversation, it’s more important now than ever to listen to what experts on the topic have to say. As we begin to understand more and more about the effects of growing up experiencing trauma and abuse, we also begin to understand that the effects of these experiences are more complex and wide-ranging than we had ever imagined.

Recent studies in the field of childhood trauma and abuse have found that these experiences can affect a wide range of aspects of adult life. In fact, even seemingly disparate things, ranging from our stance on vaccinations to how often we experience headaches to the types of judgments we make about others, are shaped by histories of abuse, trauma, or neglect.

Clearly, the effects of a traumatic childhood go far beyond the time when you are living in an abusive or unhealthy environment. A recent study reports that early childhood traumas can impact health outcomes decades later, potentially following you for the rest of your life. With many new and surprising effects of childhood trauma being discovered every day, it’s no wonder that so many people are interested in what exactly trauma is and how it can affect us.

So, what are the long-term ramifications of childhood neglect? For an answer to that question, StudyFinds sat down with Michael Menard, inventor-turned-author of the upcoming book, “The Kite That Couldn’t Fly: And Other May Avenue Stories,” to discuss the lesser-understood side of trauma and how it can affect us long into our adult lives.

Here is his list of five hidden effects of trauma, and some of them just might surprise you.

1. Unstable Relationships
For individuals with childhood trauma, attachment issues are an often overlooked form of collateral damage. Through infancy and early childhood, a person’s attachment style develops largely through familial bonds and is then carried into every relationship, from platonic peers to romantic partners. When attachment develops in a loving, healthy way, the result is usually positive. But for children and adults with a background of neglect, it often leads to difficulty in finding, developing, and keeping healthy relationships.

As Menard explains it, a childhood spent feeling invisible left scars on his adult relationship patterns. “As a child, I felt that I didn’t exist. No matter what I did, it was not recognized, so there was no reinforcement,” he says. “As a young adult, I panicked when I got ignored. I was afraid that everyone was going to leave. I also felt that I would drive people away in relationships. I would only turn to others when I needed emotional support, never when things were good. When things were good, I could handle them myself. I didn’t need anybody.”

Childhood trauma often creates adults who struggle to be emotionally vulnerable, to process feelings of anger and disappointment, and to accept support from others. And with trust as one of the most vital components of long-term, healthy relationships, it’s clear where difficulty may arise. But Menard emphasizes that a childhood of neglect should not have to mean a lifetime of distant or unstable relationships. “A large percentage of the people that I’ve talked to about struggles in their life, they think it’s life. But we were born to be healthy, happy, prosperous, and anything that is taking away from that is not good,” he says.

“The lesser known [effects] I would say are the things that cause disruption in relationships,” Menard adds. “The divorce rate is about 60%. Where does that come from? It comes from disruption and unhappiness between two people. Lack of respect, love, trust, sacrifice. And if you come into that relationship broken from childhood trauma and you don’t even know it, I’d say that’s not well known.”

2. Physical Health Issues
The most commonly discussed long-term effects of childhood neglect are usually mental and emotional ones. But believe it or not, a background of trauma can actually impact your physical health. From diabetes to cardiac disease, the toll of childhood trauma can turn distinctly physical. “Five of the top 10 diseases that kill us have been scientifically proven to come from childhood trauma,” says Menard. “I’ve got high blood pressure. I go to the doctor, and they can’t figure it out. I have diabetes, hypertension, obesity, cardiac disease, COPD—it’s now known that they have a high probability that they originated from childhood trauma or neglect. Silent killers.”

In some cases, the physical ramifications of childhood trauma may be due to long-term medical neglect. What was once a treatable issue can become a much larger and potentially permanent problem. In Menard’s case, untreated illness in his childhood meant open heart surgery in his adult years. “I’m now 73. When I was 70, my aortic valve closed. I had to have four open heart surgeries in two months — almost died three times,” he explains. “Now, can I blame that on childhood trauma? I can, because I had strep throat repeatedly as a child without medication. One episode turned into rheumatic fever that damaged my aortic valve. 50 years later, I’m having my chest opened up.”

From loss of sleep to chronic pain, the physical manifestations of a neglectful childhood can be painful and difficult. But beyond that, they often go entirely overlooked. For many people, this can feel frustrating and invalidating. For others, they may not know themselves that their emotional pain could be having physical ramifications. As Menard puts it, “things are happening to people that they think [are just] part of life, and [they’re] not.”

3. Mental Health Struggles
Growing up in an abusive or neglectful environment can have a variety of negative effects on children. However, one of the most widely discussed and understood consequences is that of their mental health. “Forty-one percent of all depression in the United States is caused by childhood trauma or comes from childhood trauma,” notes Menard. And this connection between trauma and mental illness goes far beyond just depression. In fact, a recent study found a clear link between experiences of childhood trauma and various mental illnesses including anxiety, depression, and substance use disorders.

Of course, depression and anxiety are also compounded when living in an environment lacking the proper love, support, and encouragement that a child deserves to grow up in. For Menard, growing up in a home with 16 people did little to keep the loneliness at bay. “I just thought it was normal—being left out,” Menard says. “We all need to trust, and we need to rely on people. But if you become an island and self-reliant, not depending on others, you become isolated.”

In some cases, the impact of mental health can also do physical damage. In one example, Menard notes an increased likelihood for eating disorders. “Mine came from not having enough food,” he says. “I get that, but there are all types of eating disorders that come from emotional trauma.”

4. Acting Out

For most children, the model set by the behavior of their parents lays the foundation for their own personal growth and development. However, kids who lack these positive examples of healthy behavior are less likely to develop important traits like empathy, self-control, and responsibility. Menard is acutely aware of this, stating, “Good self-care and self-discipline are taught. It goes down the drain when you experience emotional trauma.” Children who are not given proper role models for behavior will often instead mimic the anger and aggressive behaviors prevalent in emotionally neglectful or abusive households.

“My wife is a school teacher and she could tell immediately through the aggressive behavior of even a first grader that there were multiple problems,” adds Menard. However, his focus is less on pointing fingers at the person who is displaying these negative behaviors, and more about understanding what made them act this way in the first place. “It’s not about what’s wrong with you, it’s about what happened to you.”

However, for many, the negative influence extends beyond simple bad behavior. Menard also describes being taught by his father to steal steaks from the restaurant where he worked at the age of 12. This was not only what his father encouraged him to do, but also what seemed completely appropriate to him because of how he had been raised. “I’d bring steaks home for him, and when he got off the factory shift at midnight, that seemed quite okay,” Menard says. “It seemed quite normal. And it’s horrible. Everybody’s searching to try to heal that wound and they don’t know why they’re doing it.”

Source: https://studyfinds.org/5-hidden-effects-of-childhood-neglect/

You won’t believe how fast people adapt to having an extra thumb

The Third Thumb worn by different users (CREDIT: Dani Clode Design / The Plasticity Lab)

Will human evolution eventually give us a sixth finger? If it does, a new study suggests we’ll have no trouble using an extra thumb! It may sound like science fiction, but researchers have shown that people of all ages can quickly learn how to use an extra, robotic third thumb.

The findings, in a nutshell
A team at the University of Cambridge developed a wearable, prosthetic thumb device and had nearly 600 people from diverse backgrounds try it out. The results in the journal Science Robotics were astonishing: 98% of participants could manipulate objects using the third thumb within just one minute of picking it up and getting brief instructions.

The researchers put people through simple tasks like moving pegs from a board into a basket using only the robotic thumb. They also had people use the device along with their real hand to manipulate oddly-shaped foam objects, testing hand-eye coordination. People, both young and old, performed similarly well on the tasks after just a little practice. This suggests we may be surprisingly adept at integrating robotic extensions into our sense of body movement and control.

While you might expect those with hand-intensive jobs or hobbies to excel, that wasn’t really the case. Most everyone caught on quickly, regardless of gender, handedness, age, or experience with manual labor. The only groups that did noticeably worse were the very youngest children under age 10 and the oldest seniors. Even so, the vast majority in those age brackets still managed to use the third thumb effectively with just brief training.

Professor Tamar Makin and designer Dani Clode have been working on Third Thumb for several years. One of their initial tests in 2021 demonstrated that the 3D-printed prosthetic thumb could be a helpful extension of the human hand. In a test with 20 volunteers, it even helped participants complete tasks while blindfolded!

Designer Dani Clode with her ‘Third Thumb’ device. (Credit: Dani Clode)

How did scientists test the third thumb?
For their inclusive study, the Cambridge team recruited 596 participants ranging in age from three to 96. The group comprised an intentionally diverse mix of demographics to ensure the robotic device could be used effectively by all types of people.

The Third Thumb device itself consists of a rigid, controllable robotic digit worn on the opposite side of the hand from the normal thumb. It’s operated by foot sensors – pressing with the right foot pulls the robotic thumb inward across the palm while the left foot pushes it back out toward the fingertips. Releasing foot pressure returns the thumb to its resting position.

During testing at a science exhibition, each participant received up to one minute of instructions on how to control the device and perform one of two simple manual tasks. The first had them individually pick up pegs from a board using just the third thumb and drop as many as possible into a basket within 60 seconds. The second required them to manipulate a set of irregularly-shaped foam objects using the robotic thumb in conjunction with their real hand and fingers.

Detailed data was collected on every participant’s age, gender, handedness, and even occupations or hobbies that could point to exceptional manual dexterity skills. This allowed the researchers to analyze how user traits and backgrounds affected performance with the third thumb device after just a minute’s practice. The striking consistency across demographics points to the device’s intuitive usability.

Source: https://studyfinds.org/people-adapt-to-extra-thumb/

Mysterious layer inside Earth may come from another planet!

3D illustration showing layers of the Earth in space. (© Destina – stock.adobe.com)

From the surface to the inner core, Earth has several layers that continue to be a mystery to science. Now, it turns out one of these layers may consist of material from an entirely different planet!

Deep within our planet lies a mysterious, patchy layer known as the D” layer. Located a staggering 3,000 kilometers (1,860 miles) below the surface, this zone sits just above the boundary separating Earth’s molten outer core from its solid mantle. Far from forming a uniform shell, the D” layer varies drastically in thickness around the globe, and some regions lack the layer altogether – much like how continents poke through the oceans on Earth’s surface.

These striking variations have long puzzled geophysicists, who describe the D” layer as heterogeneous, meaning non-uniform in its composition. However, a new study might finally shed light on this deep enigma, proposing that the D” layer could be a remnant of another planet that collided with Earth during its early days, billions of years ago.

The findings, in a nutshell
The research, published in National Science Review and led by Dr. Qingyang Hu from the Center for High Pressure Science and Technology Advanced Research and Dr. Jie Deng from Princeton University, draws upon the widely accepted Giant Impact hypothesis. This hypothesis suggests that a Mars-sized object violently collided with the proto-Earth, creating a global ocean of molten rock, or magma, in the aftermath.

Hu and Deng believe the D” layer’s unique composition may be the leftover fallout from this colossal impact, potentially holding valuable clues about our planet’s formation. A key aspect of their theory involves the presence of substantial water within this ancient magma ocean. While the origin of this water remains up for debate, the researchers are focusing on what happened as the molten rock began to cool.

“The prevailing view,” Dr. Deng explains in a media release, “suggests that water would have concentrated towards the bottom of the magma ocean as it cooled. By the final stages, the magma closest to the core could have contained water volumes comparable to Earth’s present-day oceans.”

Is there a hidden ocean inside the Earth?
This water-rich environment at the bottom of the magma ocean would have created extreme pressure and temperature conditions, fostering unique chemical reactions between water and minerals.

“Our research suggests this hydrous magma ocean favored the formation of an iron-rich phase called iron-magnesium peroxide,” Dr. Hu elaborates.

This peroxide, which has a chemical formula of (Fe,Mg)O2, has an even stronger affinity for iron compared to other major components expected in the lower mantle.

“According to our calculation, its affinity to iron could have led to the accumulation of iron-dominant peroxide in layers ranging from several to tens of kilometers thick,” Hu explains.

The presence of such an iron-rich peroxide phase would alter the mineral composition of the D” layer, deviating from our current understanding. According to the new model proposed by Hu and Deng, minerals in the D” layer would be dominated by an assemblage of iron-poor silicate, iron-rich (Fe,Mg) peroxide, and iron-poor (Fe,Mg) oxide. Interestingly, this iron-dominant peroxide also possesses unique properties that could explain some of the D” layer’s puzzling geophysical features, such as ultra-low velocity zones and layers of high electrical conductance — both of which contribute to the D” layer’s well-known compositional heterogeneity.

Source: https://studyfinds.org/layer-inside-earth-another-planet/

Average person wastes more than 2 hours ‘dreamscrolling’ every day!

(Photo by Perfect Wave on Shutterstock)

NEW YORK — The average American spends nearly two and a half hours a day “dreamscrolling” — looking at dream purchases or things they’d like to one day own. While some might think you’re just wasting your day, a whopping 71% say it’s time well spent, as the habit motivates them to reach their financial goals.

In a recent poll of 2,000 U.S. adults, more than two in five respondents say they spend more time dreamscrolling when the economy is uncertain (43%). Over a full year, that amounts to about 873 hours, or more than 36 full days, spent scrolling.
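As a quick sanity check of the survey’s math (this sketch is ours, not part of the poll), the annual figure works backwards to the daily and whole-day numbers quoted above:

```python
# Back-of-the-envelope check on the dreamscrolling figures.
ANNUAL_HOURS = 873  # the survey's reported yearly total

hours_per_day = ANNUAL_HOURS / 365  # ≈ 2.39 h, i.e. "nearly two and a half hours"
full_days = ANNUAL_HOURS / 24       # ≈ 36.4 full days per year

print(f"{hours_per_day:.2f} hours/day, {full_days:.1f} full days/year")
```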

Conducted by OnePoll on behalf of financial services company Empower, the survey reveals half of the respondents say they dreamscroll while at work. Of those daydreaming employees, one in five admit to spending between three and four hours a day multitasking while at their job.

Gen Zers spend the most time dreamscrolling at just over three hours per day, while boomers spend the least, clocking in around an hour of fantasy purchases and filling wish lists. Americans say looking at dream purchases makes it easier for them to be smart with their money (56%), avoid making unplanned purchases or going into debt (30%), and better plan to achieve their financial goals (25%).

Nearly seven in 10 see dreamscrolling as an investment in themselves (69%) and an outlet for them to envision what they want out of life (67%). Four in 10 respondents (42%) say they regularly spend time picturing their ideal retirement — including their retirement age, location, and monthly expenses.

A whopping 71% say dreamscrolling is time well spent, as the habit motivates them to reach their financial goals. (© Antonioguillem – stock.adobe.com)

Many respondents are now taking the American dream online, with one in five respondents scrolling through listings of dream homes or apartments. Meanwhile, some are just browsing through vacation destinations (25%), beauty or self-care products (23%), and items for their pets (19%). Many others spend time looking at clothing, shoes, and accessories (49%), gadgets and technology (30%), and home décor or furniture (29%).

More than half (56%) currently have things left open in tabs and windows or saved in shopping carts that they’d like to purchase or own in the future. Those respondents estimate it would cost about $86,593.40 to afford everything they currently have saved.

Almost half of Americans say they are spending more time dreamscrolling now than in previous years (45%), and 56% plan on buying something on their dream list before this year wraps. While 65% are optimistic they’ll be able to one day buy everything on their list, nearly one in four say they don’t think they’ll ever be able to afford the majority of items (23%).

More than half (51%) say owning their dream purchases would make them feel more financially secure, and close to half say working with a financial professional would help them reach their goals (47%). Others feel they have more work to do: 34% say they’ve purchased fewer things on their dream list than they should at their age, with millennials feeling the most behind (39%).

Rising prices (54%), the inability to save money (29%), and growing debt (21%) are the top economic factors that may be holding some Americans back. Rather than fueling doom spending, dreamscrolling has had a positive impact on Americans’ money habits: respondents say they better understand their financial goals (24%) as a result.

Source: https://studyfinds.org/shopping-browsing-cant-afford/

Who really was Mona Lisa? 500+ years on, there’s good reason to think we got it wrong

Visiting looking at the Mona Lisa (Credit: pixabay.com)

In the pantheon of Renaissance art, Leonardo da Vinci’s Mona Lisa stands as an unrivalled icon. This half-length portrait is more than just an artistic masterpiece; it embodies the allure of an era marked by unparalleled cultural flourishing.

Yet, beneath the surface of the Mona Lisa’s elusive smile lies a debate that touches the very essence of the Renaissance, its politics and the role of women in history.

A mystery woman

The intrigue of the Mona Lisa, also known as La Gioconda, isn’t solely due to Leonardo’s revolutionary painting techniques. It’s also because the identity of the subject is unconfirmed to this day. More than half a millennium after it was first painted, the real identity of the Mona Lisa remains one of art’s greatest mysteries, intriguing scholars and enthusiasts alike.

A Mona Lisa painting from the workshop of Leonardo da Vinci, held in the collection of the Museo del Prado in Madrid, Spain. Collection of the Museo del Prado

The painting has traditionally been associated with Lisa Gherardini, the wife of Florentine silk merchant Francesco del Giocondo. But another compelling theory suggests a different sitter: Isabella of Aragon.

Isabella of Aragon was born into the illustrious House of Aragon in Naples, in 1470. She was a princess who was deeply entwined in the political and cultural fabric of the Renaissance.

Her 1490 marriage to Gian Galeazzo Sforza, Duke of Milan, positioned Isabella at the heart of Italian politics. And this role was both complicated and elevated by the ambitions and machinations of Ludovico Sforza (also called Ludovico il Moro), her husband’s uncle and usurper of the Milanese dukedom.

In The Virgin and Child with Four Saints and Twelve Devotees, by (unknown) Master of the Pala Sforzesca, circa 1490, Gian Galeazzo Sforza is shown in prayer facing his wife, Isabella of Aragon (identified by her heraldic red and gold). National Gallery

Scholarly perspectives
The theory that Isabella is the real Mona Lisa is supported by a combination of stylistic analyses, historical connections and reinterpretations of Leonardo’s intent as an artist.

In his biography of Leonardo, author Robert Payne points to preliminary studies by the artist that bear a striking resemblance to Isabella at around age 20. Payne suggests Leonardo captured Isabella across different life stages, including during widowhood, as depicted in the Mona Lisa.

U.S. artist Lillian F. Schwartz’s 1988 study used X-rays to reveal an initial sketch of a woman hidden beneath Leonardo’s painting. This sketch was then painted over with Leonardo’s own likeness.

Schwartz believes the woman in the sketch is Isabella, because of its similarity with a cartoon Leonardo made of the princess. She proposes the work was made by integrating specific features of the initial model with Leonardo’s own features.

An illustration of Isabella of Aragon from the Story of Cremona by Antonio Campi. Library of Congress

This hypothesis is further supported by art historians Jerzy Kulski and Maike Vogt-Luerssen.

According to Vogt-Luerssen’s detailed analysis of the Mona Lisa, the symbols of the Sforza house and the depiction of mourning garb both align with Isabella’s known life circumstances. Vogt-Luerssen suggests the Mona Lisa isn’t a commissioned portrait, but a nuanced representation of a woman’s journey through triumph and tragedy.

Similarly, Kulski highlights the portrait’s heraldic designs, which would be atypical for a silk merchant’s wife. He, too, suggests the painting shows Isabella mourning her late husband.

The Mona Lisa’s enigmatic expression also captures Isabella’s self-described state post-1500 of being “alone in misfortune.” Contrary to representing a wealthy, recently married woman, the portrait exudes the aura of a virtuous widow.

Late professor of art history Joanna Woods-Marsden suggested the Mona Lisa transcends traditional portraiture and embodies Leonardo’s ideal, rather than being a straightforward commission.

This perspective frames the work as a deeply personal project for Leonardo, possibly signifying a special connection between him and Isabella. Leonardo’s reluctance to part with the work also indicates a deeper, personal investment in it.

Beyond the canvas
The theory that Isabella of Aragon could be the true Mona Lisa is a profound reevaluation of the painting’s context, opening up new avenues through which to appreciate the work.

It elevates Isabella from a figure overshadowed by the men in her life, to a woman of courage and complexity who deserves recognition in her own right.

Source: https://studyfinds.org/who-really-was-mona-lisa-500-years-on-theres-good-reason-to-think-we-got-it-wrong/

Scientists discover what gave birth to Earth’s unbreakable continents

Photo by Brett Zeck from Unsplash

The Earth beneath our feet may feel solid, stable, and seemingly eternal. But the continents we call home are unique among our planetary neighbors, and their formation has long been a mystery to scientists. Now, researchers believe they may have uncovered a crucial piece of the puzzle: the role of ancient weathering in shaping Earth’s “cratons,” the most indestructible parts of our planet’s crust.

Cratons are the old souls of the continents, forming roughly half of Earth’s continental crust. Some date back over three billion years and have remained largely unchanged ever since. They form the stable hearts around which the rest of the continents have grown. For decades, geologists have wondered what makes these regions so resilient, even as the plates shift and collide around them.

It turns out that the key may lie not in the depths of the Earth but on its surface. A new study out of Penn State and published in Nature suggests that subaerial weathering – the breakdown of rocks exposed to air – may have triggered a chain of events that led to the stabilization of cratons billions of years ago, during the Neoarchaean era, around 2.5 to 3 billion years ago.

These ancient metamorphic rocks called gneisses, found on the Arctic Coast, represent the roots of the continents now exposed at the surface. The scientists said sedimentary rocks interlayered in these types of rocks would provide a heat engine for stabilizing the continents. Credit: Jesse Reimink. All Rights Reserved.

To understand how this happened, let’s take a step way back in time. In the Neoarchaean, Earth was a very different place. The atmosphere contained little oxygen, and the continents were mostly submerged beneath a global ocean. But gradually, land began to poke above the waves – a process called continental emergence.

As more rock was exposed to air, weathering rates increased dramatically. When rocks weather, they release their constituent minerals, including radioactive elements like uranium, thorium, and potassium. These heat-producing elements, or HPEs, are crucial because their decay generates heat inside the Earth over billions of years.

The researchers propose that as the HPEs were liberated by weathering, they were washed into sediments that accumulated in the oceans. Over time, plate tectonic processes would have carried these sediments deep into the crust, where the concentrated HPEs could really make their presence felt.

Buried at depth and heated from within, the sediments would have started to melt. This would have driven what geologists call “crustal differentiation” – the separation of the continental crust into a lighter, HPE-rich upper layer and a denser, HPE-poor lower layer. It’s this layering, the researchers argue, that gave cratons their extraordinary stability.

The upper crust, enriched in HPEs, essentially acted as a thermal blanket, keeping the lower crust and the mantle below relatively cool and strong. This prevented the kind of large-scale deformation and recycling that affected younger parts of the continents.

Interestingly, the timing of craton stabilization around the globe supports this idea. The researchers point out that in many cratons, the appearance of HPE-enriched sedimentary rocks precedes the formation of distinctive Neoarchaean granites – the kinds of rocks that would form from the melting of HPE-rich sediments.

The rocks on the left are old rocks that have been deformed and altered many times. They are juxtaposed next to an Archean granite on the right side. The granite is the result of melting that led to the stabilization of the continental crust. Credit: Matt Scott. All Rights Reserved.

Furthermore, metamorphic rocks – rocks transformed by heat and pressure deep in the crust – also record a history consistent with the model. Many cratons contain granulite terranes, regions of the deep crust uplifted to the surface that formed in the Neoarchaean. These granulites often have compositions that suggest they formed from the melting of sedimentary rocks.

So, the sequence of events – the emergence of continents, increased weathering, burial of HPE-rich sediments, deep crustal melting, and finally, craton stabilization – all seem to line up.

Source: https://studyfinds.org/earths-unbreakable-continents/

The 7 Fastest Animals In The World: Can You Guess Them All?

Cheetah (Photo by David Groves on Unsplash)

Move over Usain Bolt, because in the animal kingdom, speed takes on a whole new meaning! Forget sprinting at a measly 28 mph – these record-breaking creatures can leave you in the dust (or water, or sky) with their mind-blowing velocity. From lightning-fast cheetahs hunting down prey on the African savanna to majestic peregrine falcons diving from incredible heights, these animals rely on their extreme speed to survive and thrive in the wild. So, buckle up as we explore the top seven fastest animals on Earth.

The animal kingdom is brimming with speedsters across different habitats. We’re talking about fish that can zoom by speedboats, birds that plummet from the sky at breakneck speeds, and even insects with lightning-fast reflexes. Below is our list of the consensus top seven fastest animals in the world. We want to hear from you too! Have you ever encountered an animal with incredible speed? Share your stories in the comments below, and let’s celebrate the awe-inspiring power of nature’s speed demons!

The List: Fastest Animals in the World, Per Wildlife Experts

1. Peregrine Falcon – 242 MPH

Peregrine Falcon (Photo by Vincent van Zalinge on Unsplash)

The peregrine falcon takes the title of the fastest animal in the world, able to achieve speeds of 242 miles per hour. These birds don’t reach such speeds by flapping their wings like crazy. Instead, they use gravity as their accomplice, raves The Wild Life. In the blink of an eye, the falcon can plummet towards its prey, like a fighter jet in a vertical dive. These dives can exceed 200 miles per hour – proportionally, the equivalent of a human running at over 380 mph! That’s fast enough to make even the speediest sports car look like a snail.

That prominent bulge of this falcon’s chest cavity isn’t just for show – it’s a keel bone, and it acts like a supercharged engine for their flight muscles. A bigger keel bone translates to more powerful wing strokes, propelling the falcon forward with incredible force, explains A-Z Animals. These birds also boast incredibly stiff, tightly packed feathers that act like a high-performance suit, reducing drag to an absolute minimum. And the cherry on top? Their lungs and air sacs are designed for one-way airflow, meaning they’re constantly topped up with fresh oxygen, even when exhaling. This ensures they have the fuel they need to maintain their breakneck dives.

These fast falcons might be the ultimate jet setters of the bird world, but they’re not picky about their digs. The sky-dwelling predators are comfortable calling a variety of landscapes home, as long as there’s open space for hunting, writes One Kind Planet. They can be found soaring over marshes, estuaries, and even skyscrapers, always on the lookout for unsuspecting prey.

2. Golden Eagle – 200 MPH

Golden Eagle (Photo by Mark van Jaarsveld on Unsplash)

The golden eagle is a large bird that is well known for its powerful and fast flight. These majestic birds can reach speeds of up to 199 mph during a hunting dive, says List 25. Just like the peregrine falcon, the golden eagle uses a hunting technique called a stoop. With a powerful tuck of its wings, the eagle plummets towards its target in a breathtaking dive.

They are undeniably impressive birds, with a wingspan that can stretch up to eight feet wide! Imagine an athlete being able to run at 179 miles per hour! That’s the proportional equivalent of what a golden eagle achieves in a dive, reaching speeds of up to 87 body lengths per second, mentions The Wild Life. The air rushes past its feathers, creating a whistling sound as it picks up speed, hurtling toward its prey.

They also use these impressive dives during courtship rituals and even playful moments, states Live Science. Picture two golden eagles soaring in tandem, one diving after the other in a dazzling aerial ballet. It’s a display of both power and grace that reaffirms their status as the ultimate rulers of the skies. Their habitat range stretches across the northern hemisphere, including North America, Europe, Africa, and Asia, according to the International Union for Conservation of Nature (IUCN). So next time you see a golden eagle circling above, remember – it’s more than just a bird, it’s a living embodiment of speed, skill, and breathtaking beauty.

3. Black Marlin – 80 MPH

A Black Marlin jumping out of the water (Photo by Finpat on Shutterstock)

The ocean is a vast and mysterious realm, teeming with incredible creatures. And when it comes to raw speed, the black marlin is a high-performance athlete of the sea. They have a deep, muscular body built for cutting through water with minimal resistance, informs Crosstalk. Think of a sleek racing yacht compared to a clunky rowboat. Plus, their dorsal fin is lower and rounder, acting like a spoiler on a race car, reducing drag and allowing for a smoother ride through the water. Their “spears,” those sharp protrusions on their snouts, are thicker and more robust than other marlins. These aren’t just for show – they’re used to slash and stun prey during a hunt.

Some scientists estimate their burst speed at a respectable 22 mph. That’s impressive, but here’s where the debate gets interesting. Some reports claim black marlin can pull fishing line at a staggering 120 feet per second! When you do the math, that translates to a whopping 82 mph, according to Story Teller. This magnificent fish calls shallow, warm shores home; its ideal habitat boasts water temperatures between 59 and 86 degrees Fahrenheit – basically, a permanent summer vacation!
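Doing that math explicitly (this conversion sketch is ours, not from the cited reports), 120 feet per second converts to miles per hour like so:

```python
# Convert the reported line-stripping speed from feet per second to mph.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def ftps_to_mph(ftps: float) -> float:
    """Feet per second -> miles per hour."""
    return ftps * SECONDS_PER_HOUR / FEET_PER_MILE

print(f"{ftps_to_mph(120):.1f} mph")  # ≈ 81.8 mph, the "whopping 82 mph"
```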

The secret behind its impressive swimming prowess lies in its tail. Unlike the rounded tails of many fish, black marlin possess crescent-shaped tails, explains A-Z Animals. With a powerful flick, they can propel themselves forward with incredible bursts of speed. This marlin also boasts a long, thin, and sharp bill that cuts through water, offering minimal resistance as it surges forward. But that’s not all. Black marlin also have rigid pectoral fins that act like perfectly sculpted wings. These fins aren’t for flapping – they provide stability and lift, allowing the marlin to maintain a streamlined position in the water.

4. Cheetah – 70 MPH

Adult and cheetah pup on green grass during daytime (Photo by Sammy Wong on Unsplash)

The cheetah is Africa’s most endangered large cat and also the world’s fastest land animal. Their bodies are built for pure velocity, with special adaptations that allow them to go from zero to sixty in a mind-blowing three seconds, shares Animals Around The Globe. Each stride stretches an incredible seven meters, eating up the ground with astonishing speed. But they can only maintain their high speeds for short bursts.

Unlike its stockier lion and tiger cousins, the cheetah boasts a lean, streamlined physique that makes them aerodynamic. But the real innovation lies in the cheetah’s spine. It’s not just a rigid bone structure – it’s a flexible marvel, raves A-Z Animals. With each powerful push, this springy spine allows the cheetah to extend its strides to incredible lengths, propelling it forward with tremendous force. And finally, we come to the engine room: the cheetah’s muscles. Packed with a high concentration of “fast-twitch fibers,” these muscles are specifically designed for explosive bursts of speed. Think of them as tiny, built-in turbochargers that give the cheetah that extra surge of power when it needs it most.

These magnificent cats haven’t always been confined to the dry, open grasslands of sub-Saharan Africa. Cheetahs were once widespread across both Africa and Asia, but their range has shrunk dramatically due to habitat loss and dwindling prey populations, says One Kind Planet. Today, most cheetahs call protected natural reserves and parks home.

Source: https://studyfinds.org/fastest-animals-in-the-world/
