Antarctica wasn’t always a frozen landscape. In fact, it may have once featured lush forests, palm trees, and rivers, according to new research published in the journal Nature Communications.
“This finding is like opening a time capsule,” Professor Stewart Jamieson, who co-authored the study, told The Economic Times.
Groundbreaking study
Researchers began the study in 2017, extracting sediment from the once-thriving ecosystem frozen in time for tens of millions of years beneath the ice, The Jerusalem Post reported. They drilled more than a mile underneath the ice and used satellite imagery to explore what the ancient environment may have looked like some 34 million years ago.
According to Jamieson, the prehistoric landscape of Antarctica remains very much a mystery to researchers.
“The land underneath the East Antarctic ice sheet is less well-known than the surface of Mars,” Jamieson said. “We’re investigating a small part of that landscape with more detail to see what it can tell us about the evolution of the landscape and the evolution of the ice sheet.”
What researchers found
With the help of advanced technology, scientists detected land divided by rivers and valleys roughly 25 miles wide and with depths of nearly 3,900 feet. Geologists believe the landscape underneath the ice located in the eastern part of the continent spans more than 12,000 square miles — roughly the size of Maryland.
What findings suggest
The study indicates that the ancient land took shape as the supercontinent Gondwana began to break apart, before Antarctica first froze over on a large scale. The shift in tectonic plates led to deep cracks and created mountainous terrain.
The Economic Times reported that during this prehistoric period, the landscape featured rivers and dense tree cover, and the climate was warm or even tropical.
“It’s difficult to say exactly what this ancient landscape looked like, but depending on how far back you go, the climate may have resembled modern-day Patagonia or even something tropical,” Jamieson said.
Researchers said samples taken from beneath the ice also reveal far greater biodiversity among organisms, suggesting a much warmer climate than today.
Ultimate goal
Scientists aim to study the once-hidden landscape even more, including how it was formed, to help them more accurately forecast melting on the continent now.
“It is remarkable that this landscape, hidden in plain sight for many years, can tell us so much about the early and long-term history of the East Antarctic ice sheet,” Professor Neil Ross, a geophysical expert as well as a co-author of the study, told the Daily Mail. “It also helps us understand how it might evolve in response to future climate change.”
Parasitic bed bugs crawling on a cloth. (Photo by Pavel Krasensky on Shutterstock)
When our ancestors first gathered in the world’s earliest cities 10,000 years ago, they weren’t alone. Tiny, blood-sucking hitchhikers were already lurking in their dwellings, and new genetic research reveals these bed bugs beat every other pest to urban living by thousands of years.
Scientists analyzing bed bug DNA from the Czech Republic discovered something remarkable: these notorious insects have been tracking human demographic patterns with startling accuracy, evolving alongside us as we shifted from nomadic life to city living. The research paints a picture of an evolutionary partnership that began when humans first started building permanent settlements.
“Bed bugs may represent the first true urban pest insect species,” the researchers conclude in their study, published in Biology Letters, making these creatures pioneers of city living right alongside humanity.
The study’s demographic modeling reveals that bed bugs maintained “a close relationship with human society for at least 50,000 years,” with their population history closely tracking major events in human development. Our relationship with bed bugs, written in their DNA, represents one of the oldest documented examples of urban pest evolution.
Ancient Choices: Why Bed Bugs Picked Humans Over Bats
The research team, led by Virginia Tech entomologists Warren Booth and Lindsay Miles, examined two distinct genetic branches of the common bed bug (Cimex lectularius). One group historically lived with bats in caves, while another made the evolutionary leap to human habitations. The bat-associated bed bugs remained relatively stable, but the human-associated variety experienced wild population swings that mirror major chapters in human history.
Researchers collected bed bugs from 19 locations across six sites in the Czech Republic in 2014, focusing on both human-associated and bat-associated populations. They sequenced whole genomes, generating 9.7 million variant sites to trace the insects’ evolutionary journey.
Their genetic detective work revealed dramatic population crashes around 50,000 to 20,000 years ago, possibly during harsh environmental conditions of the Last Glacial Maximum. But around 13,000 years ago — precisely when humans began establishing permanent settlements — bed bug populations started booming.
The timing isn’t coincidental. As humans created concentrated populations and early urban centers, bed bugs seized their evolutionary moment. Early cities became pest paradise: dense human populations, consistent food sources, and plenty of hiding spots in primitive dwellings.
“Initially with both populations, we saw a general decline that is consistent with the Last Glacial Maximum; the bat-associated lineage never bounced back, and it is still decreasing in size,” Miles says in a statement. “The really exciting part is that the human-associated lineage did recover and their effective population increased.”
Evolution’s Perfect Timing: How Cities Shaped Bed Bugs
The genetic evidence tells a fascinating story of co-evolution. Miles points to the early establishment of large human settlements that expanded into cities in regions such as Mesopotamia about 12,000 years ago.
“That makes sense because modern humans moved out of caves about 60,000 years ago,” explains Booth. “There were bed bugs living in the caves with these humans, and when they moved out they took a subset of the population with them so there’s less genetic diversity in that human-associated lineage.”
The study found that human-associated bed bugs underwent significant physical changes as they adapted to indoor living. According to the research, these insects became “smaller, less hairy and have larger extremities compared to their bat-associated counterparts.” These adaptations helped them navigate the indoor environments humans were creating.
As human populations grew and communities and cities expanded, the human-associated lineage of bed bugs saw exponential growth in its effective population size.
The research demonstrates that bed bugs didn’t just stumble into cities; rather, they evolved specifically for urban life. While their cave-dwelling cousins remained with bats, human bed bugs developed characteristics that made them supremely adapted to life in human settlements.
Why This Ancient History Matters Today
Modern bed bug infestations surging across major cities worldwide represent the latest chapter in this ancient story. The same traits that allowed bed bugs to colonize early human settlements make them perfectly suited for modern urban life.
“What will be interesting is to look at what’s happening in the last 100 to 120 years,” said Booth. “Bed bugs were pretty common in the old world, but once DDT [dichloro-diphenyl-trichloroethane] was introduced for pest control, populations crashed. They were thought to have been essentially eradicated, but within five years they started reappearing and were resisting the pesticide.”
The research helps explain why bed bugs remain so difficult to eliminate despite advances in pest control. Having co-evolved with humans for millennia, they’ve had ample time to develop resistance strategies and behavioral adaptations.
In a previous study, Booth, Miles, and graduate student Camille Block discovered a gene mutation that could contribute to that insecticide resistance, and they are now looking further into how the bed bugs’ genomic evolution relates to the pest’s resistance to pesticides.
Our relationship with bed bugs, written in their DNA, represents one of the oldest documented examples of urban pest evolution and offers a unique window into understanding how organisms adapt to human-created environments over evolutionary timescales.
The Vela supernova remnant, the remains of a supernova explosion 800 light-years from Earth in the southern constellation Vela, as seen from the Dark Energy Camera on the Víctor M. Blanco Telescope at Cerro Tololo Inter-American Observatory. (Credit: CTIO/NOIRLab/DOE/NSF/AURA)
A massive star exploded around 13,000 years ago, and research now suggests that the cosmic blast may have plunged Earth into a sudden ice age while wiping out woolly mammoths, giant sloths, and other massive creatures across North America. The scientist behind the study suggests such events could continue to influence the future of our planet.
Research published in the Monthly Notices of the Royal Astronomical Society reveals that at least eight nearby supernovas — the violent deaths of massive stars — unleashed enough high-energy radiation to strip away Earth’s protective ozone layer, trigger global cooling, and cause widespread animal extinctions. The study specifically links the Vela supernova, which exploded just 287 light-years from Earth, to the mysterious Younger Dryas period when temperatures suddenly plummeted for 1,300 years, interrupting the end of the last ice age.
Lead author G. Robert Brakenridge from the University of Colorado at Boulder analyzed 78 known supernova remnants and found a striking pattern: they appear to go hand-in-hand with climate shifts on Earth. “We have abrupt environmental changes in Earth’s history. That’s solid, we see these changes,” Brakenridge explains in a statement. “So, what caused them?”
His calculations show these stellar explosions were powerful enough to damage Earth’s atmosphere and alter the planet’s climate system, meaning our world’s environmental history has been shaped not just by earthbound disasters, but by the deaths of distant stars.
“The events that we know of, here on earth, are at the right time and the right intensity,” says Brakenridge.
When Stellar Giants Die, Earth Feels the Impact
Supernovas rank among the most energetic events in the universe. When a massive star runs out of nuclear fuel, it collapses and explodes with more energy than our sun will produce over its entire 10-billion-year lifetime. These explosions can briefly outshine entire galaxies while sending deadly radiation streaming across vast distances.
Brakenridge examined supernova remnants (the expanding shells of gas and debris left behind by these explosions) within 2,300 light-years of Earth. He calculated how much harmful X-rays and gamma rays each explosion would have delivered to our planet. While 287 light-years may sound impossibly distant (about 1,700 trillion miles), it’s practically next door in cosmic terms.
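The key scaling behind such estimates is simple geometry: a supernova’s radiation spreads over an ever-larger sphere, so the dose arriving at Earth falls with the square of the distance. Below is a minimal sketch of that inverse-square calculation; the ionizing-energy figure is an illustrative order-of-magnitude assumption, not a value taken from Brakenridge’s paper.

```python
import math

LY_IN_CM = 9.461e17  # one light-year in centimeters

def fluence_at_earth(energy_erg: float, distance_ly: float) -> float:
    """Radiation fluence (erg per cm^2) at Earth, assuming the explosion's
    energy is emitted isotropically and spreads over a sphere whose radius
    equals the distance (the inverse-square law)."""
    r_cm = distance_ly * LY_IN_CM
    return energy_erg / (4.0 * math.pi * r_cm ** 2)

E_IONIZING = 1e47  # erg; illustrative order of magnitude for ionizing output
for label, distance in [("Vela, ~287 ly", 287),
                        ("Hoinga, ~350 ly", 350),
                        ("survey limit, 2,300 ly", 2300)]:
    print(f"{label:>22}: {fluence_at_earth(E_IONIZING, distance):.2e} erg/cm^2")
```

Because of that quadratic fall-off, an explosion at 287 light-years delivers roughly 64 times the fluence of an identical one at the study’s 2,300-light-year search limit, which is why the handful of closest remnants dominate the analysis.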
Eight supernovas were close enough and powerful enough to significantly affect Earth. The Vela explosion stands out as the most dramatic example, occurring when our planet was emerging from the last ice age around 13,000 years ago.
The Vela Supernova: A Cosmic Climate Killer
The timing of the Vela supernova aligns remarkably with several dramatic changes in Earth’s history. Brakenridge’s analysis reveals the explosion would have bombarded our planet with enough radiation to severely damage the ozone layer—the thin atmospheric shield protecting life from the sun’s harmful ultraviolet rays.
Tree ring records show a sudden 35-part-per-million spike in radioactive carbon-14, indicating massive atmospheric radiation increases. Ice cores from both poles reveal an abrupt decrease in methane concentrations. Archaeological sites across North America display mysterious “black mat” deposits marking the end of the Clovis culture, while fossil records document the extinction of mammoths, mastodons, giant ground sloths, and saber-toothed cats.
All these changes coincide with the onset of the Younger Dryas, when global temperatures plummeted and ice sheets began advancing again. When high-energy radiation hits Earth’s atmosphere, it breaks apart nitrogen molecules, creating compounds that destroy ozone. With less atmospheric protection, six times more harmful ultraviolet radiation would reach Earth’s surface, causing DNA damage in plants and animals while triggering massive wildfires.
Large animals faced particular vulnerability because they need more food and reproduce slowly, making rapid adaptation impossible. As Brakenridge notes in the paper: “Snow-blindness (photokeratitis) would be disabling for many herbivore and predator diurnal species.”
A Pattern of Cosmic Catastrophes
Beyond Vela, Brakenridge identified several other potential supernova connections throughout recent Earth history. Tree ring records show unexplained carbon-14 spikes at 9,126, 7,209, 2,764, 2,614, 1,175, and 957 years ago, all corresponding to known nearby supernova remnants of appropriate ages and distances.
The Hoinga supernova, exploding about 15,000 years ago at roughly 350 light-years away, may have caused a single-year 30-part-per-million carbon-14 rise coinciding with another cold period called the Older Dryas. The data reveals a clear dose-response relationship: closer explosions created larger environmental impacts.
Alternative Explanations and Future Threats
Many scientists remain skeptical, pointing to alternative explanations for these environmental changes. The Younger Dryas might result from ocean circulation disruptions caused by massive freshwater floods as ice sheets melted. Carbon-14 spikes could stem from intense solar storms rather than distant supernovas. Megafauna extinctions might be due to human hunting, regular climate change, or asteroid impacts.
Recent evidence supports comet impacts during the Younger Dryas, with researchers finding platinum and rare metals in sediment layers indicating extraterrestrial strikes. However, Brakenridge argues supernova radiation could have disrupted the Oort Cloud (the distant shell of icy objects surrounding our solar system) potentially triggering increased comet bombardment.
“When nearby supernovae occur in the future, the radiation could have a pretty dramatic effect on human society,” says Brakenridge. “We have to find out if indeed they caused environmental changes in the past.”
Currently, several nearby stars could become future supernovas, including Betelgeuse, a red supergiant roughly 500 to 600 light-years away that may explode within a million years. While unlikely to cause mass extinction, such an event could still measurably affect Earth’s atmosphere and climate.
“As we learn more about our nearby neighboring stars, the capability for prediction is actually there,” Brakenridge adds. “It will take more modeling and observation from astrophysicists to fully understand Earth’s exposure to such events.”
The research fundamentally changes how we view the forces shaping life on Earth. Beyond recognizing asteroid impacts, volcanic eruptions, and climate cycles, we must now consider that stellar explosions hundreds of light-years away can trigger ice ages and extinctions. Earth’s environmental story has been written not just by local events, but by the deaths of distant stars, cosmic events that emphasize our planet’s connection to the broader universe.
(Photo by PeopleImages.com – Yuri A on Shutterstock)
Forget fingerprints and facial recognition – researchers have discovered that the way you breathe through your nose is so unique, it can identify you with stunning accuracy. A breakthrough study shows that humans have individual “nasal respiratory fingerprints” that remain stable over time and can predict everything from your body weight to your mental health.
Israeli scientists at the Weizmann Institute of Science developed a wearable device that can identify people based solely on their breathing patterns with 96.8% accuracy, better than many voice recognition systems. Even more remarkable, these breathing signatures stayed consistent when participants returned for testing nearly two years later.
“We found that we could identify members of a 97-participant cohort at a remarkable 96.8% accuracy from nasal airflow patterns alone,” the researchers wrote in their paper, published in Current Biology. “In other words, humans have individual nasal airflow fingerprints.”
Beyond creating a new form of biometric security, the study reveals how breathing patterns reflect the unique wiring of your brain and can expose intimate details about your physical and mental state.
How Scientists Tracked Your Every Breath
Most people think of breathing as automatic and simple, but it’s actually controlled by an incredibly complex network of brain regions. Because every person’s brain is unique, the researchers reasoned that the breathing patterns it generates should also be unique, like a neural signature expressed through your nostrils.
Scientists at the Weizmann Institute of Science created a small, wearable device they call the “Nasal Holter,” a 22-gram gadget that participants wore for 24 hours straight. The device, about the size of a smartphone, attaches to the back of the neck and connects to the nose via a thin tube with separate sensors for each nostril.
One hundred participants, mostly young adults averaging 26 years old, wore the device while going about their normal daily activities – working, sleeping, exercising, and relaxing. Forty-two of them returned months later to repeat the experiment, allowing researchers to test whether breathing patterns remain stable over time.
How Your Breathing Style Is as Unique as Your Fingerprint
Using computer algorithms, the researchers analyzed 24 different breathing characteristics, including inhale volume, breathing rate, and the natural cycle of airflow switching between nostrils. They could identify individuals during waking hours with over 90% accuracy, and the patterns remained consistent even when participants returned up to two years later.
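As a rough illustration of how identification from such features can work (not the study’s actual algorithm), the sketch below enrolls each person as the average of their breathing-feature vectors and then assigns new recordings to the nearest enrolled profile; the data, noise level, and nearest-profile matching are all assumptions made for the example.

```python
import numpy as np

# Synthetic stand-in: rows are recording windows, columns are the 24 breathing
# features (inhale volume, breathing rate, inter-nostril airflow balance, ...).
rng = np.random.default_rng(0)
n_people, n_windows, n_features = 5, 40, 24
profiles = rng.normal(size=(n_people, n_features))            # each person's "true" signature
sessions = profiles[:, None, :] + 0.3 * rng.normal(size=(n_people, n_windows, n_features))

# Enroll each person as the mean of their first 20 windows...
enrolled = sessions[:, :20].mean(axis=1)

# ...then identify the remaining windows by the nearest enrolled profile.
test = sessions[:, 20:].reshape(-1, n_features)
labels = np.repeat(np.arange(n_people), 20)
distances = np.linalg.norm(test[:, None, :] - enrolled[None, :, :], axis=2)
predicted = distances.argmin(axis=1)
print("identification accuracy:", (predicted == labels).mean())
```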
The study found that “individual identification by nasal airflow fingerprints was on par with or better than voice recognition.”
When researchers examined what these breathing patterns could reveal, they discovered something striking: your nose essentially broadcasts information about your private health and mental state. The breathing patterns could predict participants’ body mass index, levels of anxiety and depression, and even traits associated with autism spectrum conditions.
People with higher anxiety showed shorter inhales during sleep, while those with depression symptoms had different peak airflow patterns during the day. The device could distinguish between sleep and wake states with 100% accuracy using just breathing data. It could also detect the natural nasal cycle, which is the way airflow alternates between nostrils throughout the day, a process most people aren’t aware of.
What This Means for Health and Privacy
Timna Soroka, the study’s lead author, and her colleagues believe these patterns reflect the brain’s control over breathing. Unlike simple lung function tests that measure airway health, long-term breathing patterns reveal how your brain’s respiratory control centers are wired and functioning.
The technology could transform how we monitor health and diagnose diseases. Since breathing patterns reflect brain activity, changes in these patterns might signal neurological conditions, mental health issues, or other medical problems before symptoms become obvious.
However, the research raises privacy concerns. If breathing patterns are this revealing and can be detected by sensitive enough equipment, what happens to privacy in a world of increasingly sophisticated monitoring technology?
Currently, the device requires direct contact with the nostrils, limiting its use for covert surveillance. As sensor technology advances, though, the possibility of remote breathing pattern detection becomes more realistic.
The research has limitations. The study focused on healthy young adults, so it’s unclear how well the findings apply to older adults, children, or people with respiratory conditions. The nasal tubes occasionally slipped during sleep, and some participants found wearing the device for 24 hours uncomfortable. The correlations with mental health measures were based on questionnaire scores rather than clinical diagnoses.
Your breathing pattern is as individual as your fingerprint, but unlike fingerprints, it’s constantly active and potentially more revealing about your internal state. While the technology offers exciting possibilities for health monitoring and medical diagnosis, it also raises new questions about biological privacy in an age of ubiquitous sensing. Today’s fascinating research discovery often becomes tomorrow’s surveillance tool, making it crucial to consider how we’ll regulate and protect the intimate biological data that our bodies are constantly, unconsciously sharing.
Many Americans have shelled out hundreds of dollars for biological age tests promising to reveal whether their bodies are aging faster or slower than their actual years. These methylation-based “aging clocks” have become the gold standard for evaluating anti-aging treatments, from supplements to lifestyle changes. But a controversial new paper argues these widely trusted tests might be fundamentally flawed — and potentially dangerous.
Independent researcher Dr. Josh Mitteldorf contends in a perspective published in the journal Aging that most methylation clocks fail to distinguish between two biologically opposite processes. Some age-related methylation changes reflect the body ramping up self-destructive programs, while others represent protective responses aimed at repairing damage. Current aging clocks treat both as equivalent signals of aging.
Mitteldorf, a theoretical biologist who runs the website AgingAdvice.org, suggests that treatments that make you appear “younger” on a biological age test might actually be shortening your life by shutting down crucial repair mechanisms your body has activated to fight damage.
Two Types of Aging Changes
Mitteldorf’s argument centers on a fundamental question dividing aging researchers: Is aging purely the result of accumulated damage, or does the body actually program itself to decline over time?
Most scientists today believe aging happens because cellular damage builds up over decades. Under this view, any age-related changes in gene expression must be the body’s attempt to combat damage by ramping up repair mechanisms.
Mitteldorf belongs to a minority faction believing aging is at least partially programmed. Just as puberty follows a genetic blueprint, so does aging. Some age-related changes represent the body turning on self-destructive processes like excessive inflammation or reduced repair capacity.
He calls these “Type 1” and “Type 2” changes, respectively. Type 1 changes are harmful: they’re part of a programmed self-destruction sequence. Type 2 changes are protective: they’re repair responses to accumulated damage.
Current methylation clocks indiscriminately combine both types. An intervention that reverses Type 1 changes (good for longevity) will look identical on current tests to one that reverses Type 2 changes (potentially bad for longevity).
The Smoking Gun Example
Mitteldorf points to a troubling example with the popular GrimAge clock, one of the most accurate predictors of death risk. A major component of this test is based on differences between smokers and non-smokers.
Why do smokers have different methylation patterns? “It is a reasonable conjecture that smokers’ bodies are constantly trying to repair their lungs,” Mitteldorf explains. Much of the smoking signature likely represents protective repair mechanisms working overtime.
But the GrimAge clock counts smoking as accelerated aging because smokers die earlier. If an intervention makes someone’s methylation profile look less like a smoker’s, is it because the body has successfully repaired lung damage and dialed down emergency repairs? Or has the intervention simply turned off protective mechanisms while leaving damage intact?
Such an intervention would score as “anti-aging” on the test while potentially shortening lifespan — a dangerous false positive.
Testing Random Genetic Changes
To investigate whether aging-related methylation changes are truly random or directed by biological programs, Mitteldorf attempted to build a “stochastic methylation clock” based on genuine genetic drift.
Using a database of 278 individuals aged 2 to 92, he identified methylation sites that remained partially active throughout life but showed increasing random variation with age. These represented genuine drift rather than directed changes.
Only about 10% of subjects showed genuine random methylation drift above background noise. When Mitteldorf tried to build an aging clock from this truly random drift, it performed poorly, correlating with age at just 0.38—far too weak to be useful.
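To make the logic concrete, here is a toy version of that test on synthetic data (not Mitteldorf’s): build an “age predictor” from the spread of randomly drifting methylation values and measure how well its output correlates with true age. The number of sites, the drift model, and the resulting correlation are all assumptions for illustration and will not reproduce the paper’s 0.38.

```python
import numpy as np

rng = np.random.default_rng(1)
ages = rng.uniform(2, 92, size=278)                 # mimic the 278-subject age range

# Pure stochastic drift: site values wander with variance growing with age,
# with no consistent directional change.
drift = rng.normal(scale=np.sqrt(ages)[:, None], size=(278, 20))

# A drift-based "clock": predict age from the spread of values across sites,
# then check how strongly the predictions correlate with true age.
spread = drift.std(axis=1)
slope, intercept = np.polyfit(spread, ages, 1)
predicted = slope * spread + intercept
r = np.corrcoef(predicted, ages)[0, 1]
print(f"correlation between drift-clock age and true age: r = {r:.2f}")
```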
This evidence, he argues, indicates that most consistent age-related methylation changes aren’t random at all; instead, they’re directed by biological processes. That, in turn, supports the idea that aging involves programmed changes rather than being driven solely by accumulated damage.
“An intervention that sets back the methylation age, as measured by Type 2, is deceiving us,” Mitteldorf writes. Such an intervention “nominally lowers ‘epigenetic age’, but it is likely to actually decrease life expectancy.”
Industry and Research Implications
The methylation clock industry has exploded in recent years, with companies like Elysium Health and TruDiagnostic offering direct-to-consumer tests. These companies, along with supplement makers and longevity clinics, routinely use methylation age as a biomarker to validate their products.
If Mitteldorf’s concerns prove valid, it could undermine confidence in a considerable portion of anti-aging research. Studies showing that certain interventions “reduce biological age” might need reinterpretation—or their results could be misleading if it’s unclear whether they’re affecting helpful or harmful processes.
The problem extends beyond consumer tests to clinical research. Pharmaceutical companies developing longevity drugs rely heavily on methylation clocks as endpoints in trials, since waiting decades for mortality data isn’t practical.
Mitteldorf acknowledges that separating Type 1 from Type 2 changes is extremely difficult with current methods. Most studies simply identify methylation sites that change consistently with age, without determining whether those changes are beneficial or harmful.
For researchers who believe aging is purely damage-driven, the implications are even more troubling. If all consistent methylation changes reflect repair, then any intervention that reduces “methylation age” could be harmful by suppressing protective responses.
Getting a cortisone shot for knee arthritis might provide quick pain relief, but new research reveals a troubling association: those steroid injections may be linked to faster joint damage over time. A surprising study found that patients who received corticosteroid injections showed more signs of arthritis progression compared to those who got no treatment at all, or even a different type of injection.
The findings raise important questions about a common medical practice. More than 10% of knee arthritis patients receive these steroid shots, yet the study suggests they may be trading short-term comfort for potential long-term harm. By contrast, patients who received hyaluronic acid injections — a gel-like lubricant for joints — not only avoided signs of worsening arthritis, but actually showed reduced disease progression on MRI scans.
How Researchers Conducted the Investigation
The study, published in Radiology, analyzed data from 210 people participating in the Osteoarthritis Initiative, a large-scale project tracking Americans with knee problems from 2004 to 2015. The average participant was 64 years old, and around 60% were women—characteristics typical of the knee arthritis population.
What set this study apart was its use of detailed MRI scans to assess joint health. Researchers evaluated images from two years before the injection, at the time of injection, and again two years afterward using the Whole-Organ MRI Score (WORMS). This scoring system provides a comprehensive assessment of cartilage, bone marrow, meniscus, ligaments, and joint effusion.
Participants were grouped into three categories: 44 received corticosteroid injections, 26 received hyaluronic acid, and 140 were matched controls who had no injections. The control group was carefully selected using propensity-score matching to ensure they were comparable in age, sex, body mass index, arthritis severity, pain levels, and physical activity.
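Propensity-score matching of the kind described here generally works in two steps: model each person’s probability of receiving the treatment from their covariates, then pair treated patients with untreated patients who have the closest scores. The sketch below is a simplified illustration with made-up covariates and matching with replacement; the study’s exact matching procedure may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up covariates: age, sex, BMI, arthritis severity, pain score, activity.
rng = np.random.default_rng(2)
n = 500
covariates = rng.normal(size=(n, 6))
got_injection = rng.random(n) < 1 / (1 + np.exp(-covariates[:, 0] - 0.5 * covariates[:, 2]))

# Step 1: estimate each patient's propensity to receive an injection.
propensity = LogisticRegression().fit(covariates, got_injection).predict_proba(covariates)[:, 1]

# Step 2: pair every injected patient with the non-injected patient whose
# propensity score is closest (matching with replacement, for simplicity).
treated_idx = np.where(got_injection)[0]
control_idx = np.where(~got_injection)[0]
gaps = np.abs(propensity[treated_idx][:, None] - propensity[control_idx][None, :])
matches = control_idx[gaps.argmin(axis=1)]
print("first five matched pairs (treated, control):", list(zip(treated_idx[:5], matches[:5])))
```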
Clear Differences Emerged Between Treatments
Those who received steroid injections showed significantly more arthritis progression than both the control group and the hyaluronic acid group. The researchers found clear statistical evidence that steroid shots were linked to faster joint deterioration. The damage was especially evident in cartilage—the smooth tissue that cushions the knee joint.
By contrast, hyaluronic acid injections appeared to slow down arthritis progression. Patients in this group actually showed less joint damage after their injection compared to before they received it, suggesting these treatments may help protect the joint structure.
Both types of injections helped with pain relief. Steroid shots provided more dramatic pain reduction — cutting pain scores roughly in half — while hyaluronic acid injections offered more modest but still meaningful pain relief. However, only steroid shots came with the concerning side effect of potentially faster joint deterioration.
MRI images helped visualize the contrast. In a 58-year-old woman who received a steroid injection, follow-up scans showed new full-thickness cartilage lesions and bone marrow damage. In a 57-year-old man who received hyaluronic acid, the same cartilage remained intact and unchanged over four years.
What This Means for Your Healthcare Decisions
These findings don’t mean patients should avoid knee injections altogether. Managing arthritis pain is important for maintaining mobility and quality of life. But the results highlight the need for deeper conversations between patients and healthcare providers about treatment trade-offs.
Current guidelines from the American Academy of Orthopaedic Surgeons moderately recommend corticosteroids for short-term pain relief and advise against the routine use of hyaluronic acid. The new study doesn’t overturn those guidelines, but it suggests they may merit reevaluation if future research supports these results.
Importantly, this study cannot prove that steroid injections directly cause faster arthritis progression. It was observational in nature, meaning that unmeasured differences between patients could account for some of the outcomes. Still, the consistent patterns seen on MRI raise important questions that warrant further investigation in randomized controlled trials.
For the millions of Americans living with knee arthritis, this study offers both a note of caution and a reason for hope. Cortisone shots may still help with intense pain episodes, but patients should understand the potential risks to long-term joint health. Meanwhile, hyaluronic acid injections, often overlooked, may offer pain relief without the same structural downsides.
“This study could lead to more judicious use of corticosteroid injections, especially for patients with mild to moderate osteoarthritis who are not yet surgical candidates,” said lead author Dr. Upasana Upadhyay Bharadwaj, who was a research fellow in the Department of Radiology at University of California, San Francisco, at the time of the research, in a statement.
Very few people live beyond a century. So, if no one had babies anymore, there would probably be no humans left on Earth within 100 years. But first, the population would shrink as older folks died and no one was being born.
Even if all births were to suddenly cease, this decline would start slowly.
Eventually there would not be enough young people coming of age to do essential work, causing societies throughout the world to quickly fall apart. Some of these breakdowns would be in humanity’s ability to produce food, provide health care and do everything else we all rely on.
Food would become scarce even though there would be fewer people to feed.
As an anthropology professor who has spent his career studying human behavior, biology and cultures, I readily admit that this would not be a pretty picture. Eventually, civilization would crumble. It’s likely that there would not be many people left within 70 or 80 years, rather than 100, due to shortages of food, clean water, prescription drugs and everything else that you can easily buy today and need to survive.
Sudden Change Could Follow A Catastrophe
To be sure, an abrupt halt in births is highly unlikely unless there’s a global catastrophe. Here’s one potential scenario, which writer Kurt Vonnegut explored in his novel “Galapagos”: A highly contagious disease could render all people of reproductive age infertile – meaning that no one would be capable of having babies anymore.
Another possibility might be a nuclear war that no one survives – a topic that’s been explored in many scary movies and books.
A lot of these works are science fiction involving a lot of space travel. Others seek to predict a less fanciful Earth-bound future where people can no longer reproduce easily, causing collective despair and the loss of personal freedom for those who are capable of having babies.
Two of my favorite books along these lines are “The Handmaid’s Tale,” by Canadian writer Margaret Atwood, and “The Children of Men,” by British writer P.D. James. They are dystopian stories, meaning that they take place in an unpleasant future with a great deal of human suffering and disorder. And both have become the basis of television series and movies.
In the 1960s and 1970s, many people also worried that there would be too many people on Earth, which would cause different kinds of catastrophes. Those scenarios also became the focus of dystopian books and movies.
Heading Toward 10 Billion People
To be sure, the number of people in the world is still growing, even though the pace of that growth has slowed down. Experts who study population changes predict that the total will peak at 10 billion in the 2080s, up from 8 billion today and 4 billion in 1974.
The U.S. population currently stands at 342 million. That’s about 200 million more people than were here when I was born in the 1930s. This is a lot of people, but both worldwide and in the U.S. these numbers could gradually fall if more people die than are born.
About 3.6 million babies were born in the U.S. in 2024, down from 4.1 million in 2004. Meanwhile, about 3.3 million people died in 2022, up from 2.4 million 20 years earlier.
One thing that will be important as these patterns change is whether there’s a manageable balance between young people and older people. That’s because the young often are the engine of society. They tend to be the ones to implement new ideas and produce everything we use.
Also, many older people need help from younger people with basic activities, like cooking and getting dressed. And a wide range of jobs are more appropriate for people under 65 rather than those who have reached the typical age for retirement.
Declining Birth Rates
In many countries, women are having fewer children throughout their reproductive lives than used to be the case. That reduction is starkest in several countries, including India and South Korea.
The declines in birth rates occurring today are largely caused by people choosing not to have any children or as many as their parents did. That kind of population decline can be kept manageable through immigration from other countries, but cultural and political concerns often stop that from happening.
At the same time, many men are becoming less able to father children due to fertility problems. If that situation gets much worse, it could contribute to a steep decline in population.
Neanderthals Went Extinct
Our species, Homo sapiens, has been around for at least 200,000 years. That’s a long time, but like all animals on Earth we are at risk of becoming extinct.
Consider what happened to the Neanderthals, a close relative of Homo sapiens. They first appeared at least 400,000 years ago. Our modern human ancestors overlapped for a while with the Neanderthals, who gradually declined to become extinct about 40,000 years ago.
Some scientists have found evidence that modern humans were more successful at reproducing than the Neanderthals, in part because Homo sapiens became better at providing food for their families and had more babies than the Neanderthals did.
While aging is inevitable, aging well is something we can influence. It’s not just about the number of candles on your birthday cake – it’s whether you’ve got the puff to blow them out, the balance to carry the cake and the memory to remember why you’re celebrating.
As we age, our bodies change. Muscle mass shrinks, bones weaken, reaction times slow. But that doesn’t mean we’re all destined for a future of walking frames and daytime TV.
Aging well isn’t about staying wrinkle-free – it’s about staying independent, mobile, mentally sharp and socially connected. In gerontology, there’s a saying: we want to add life to years, not just years to life. That means focusing on quality – being able to do what you love, move freely, think clearly and enjoy time with others.
There’s no one-size-fits-all definition, but some simple home tests can give you a good idea. No fancy lab required – just a toothbrush, a stopwatch and a sense of humor.
Balance
One fun (and surprisingly useful) way to test your balance is to stand on one leg while brushing your teeth. If you can do this for 30 seconds or more (eyes open), that’s a great sign of lower-body strength, coordination, and postural stability.
A 2022 study found that people who couldn’t balance on one leg for ten seconds had an 84% higher risk of death over a median follow-up of seven years than those who could. As such, balance is like a superpower for healthy aging — it reduces falls, supports mobility, and can be improved at any age.
Grip
Grip strength is more than just opening jars. It’s a powerful indicator of overall health, predicting heart health, cognitive function and even mortality risk.
Research shows that for every 5kg decrease in grip strength, the risk of death from all causes rises by 16%.
You can test grip strength using a hand-dynamometer (many gyms or clinics have them), or simply take note of everyday tasks – is opening bottles, carrying groceries, or using tools becoming harder?
Floor-To-Feet Feat
Can you sit on the floor and stand up without using your hands? This test is a true measure of your lower-body strength and flexibility, which are essential for daily activities and reducing the risk of falls. If you can do it, you’re in great shape.
If it’s too tough, try the sit-to-stand test. Using a chair (no arms), see how many sit-to-stand transitions you can do in 30 seconds. This task is a good measure of lower limb function, balance and muscle strength, and it can also predict people at risk of falls and cardiovascular issues.
Mental Sharpness
Cognitive function can be measured in all sorts of complex ways, but some basic home tests are surprisingly telling. Try naming as many animals as you can in 30 seconds. Fewer than 12 might indicate concern; more than 18 is a good sign.
Try spelling “world” backwards or recalling a short list of three items after a few minutes. Practicing recall like this is an important strategy for enhancing memory in older adults. Challenge yourself with puzzles, Sudoku, or learning a new skill. These kinds of “verbal fluency” and memory recall tests are simple ways to spot early changes in brain health – but don’t panic if you blank occasionally. Everyone forgets where they left their keys sometimes.
Lifestyle Matters
There’s no magic bullet to aging well – but, if one existed, it would probably be a combination of exercise, diet, sleep and social connections.
Some of the best-studied strategies include:
Daily Movement: Try walking, resistance training, swimming or tai chi to keep your muscles and bones strong and support balance and heart health.
Healthy Eating: A Mediterranean-style diet – rich in whole grains, fruit, vegetables, fish, olive oil and nuts – is linked to better brain and heart health.
Sleep: Seven to nine hours of quality sleep support memory, immunity and mood.
Connection: Some research suggests that loneliness is as harmful as smoking 15 cigarettes a day. Stay engaged, join a club, volunteer, or just pick up the phone to a friend.
If you can balance on one leg while brushing your teeth, carry a bag of potatoes up the stairs, and name 20 animals under pressure, then you’re doing very well. If not (yet), that’s OK – these are skills you can build over time. Aging well means taking a proactive approach to health: making small, consistent choices that lead to better mobility, clearer thinking and richer social connections down the line.
For nearly 80 years, scholars have debated the origins of the Dead Sea Scrolls, relying on expert interpretations of ancient handwriting styles that often don’t agree. Now, a new artificial intelligence system is offering a more objective approach, with results that challenge the long-standing assumptions about when these texts were written. The findings could influence how historians understand the timeline of early Judaism and the context in which some of its most important religious writings emerged.
Researchers at the University of Groningen’s Qumran Institute developed an AI program called “Enoch” that combines radiocarbon dating with computer-based handwriting analysis to estimate the age of ancient manuscripts. When the system was applied to 135 previously undated fragments of the Dead Sea Scrolls, it consistently produced earlier dates than traditional paleographic methods. The program suggests that some scrolls may be decades older than scholars once believed.
The study, published in PLOS One, suggests that some religious texts and the movements behind them could have emerged earlier than previously thought. For example, scrolls that had been estimated to date to around 50 BCE may have actually been written as early as 150 BCE, offering new insights into the evolution of religious communities during a pivotal time in ancient Judaea.
How Scientists Trained AI to Read Ancient Handwriting
Study authors sought to improve the accuracy of manuscript dating — arguably one of archaeology’s most persistent challenges, especially when texts lack explicit historical references.
Traditional paleography — the study of ancient handwriting — can be highly subjective. Experts often arrive at different conclusions about the same manuscript, based on how they interpret shifts in letter shapes and writing styles over time.
To address this, the team developed the first AI system trained to learn from both physical evidence (radiocarbon dates) and visual handwriting patterns. They named the program “Enoch” after an ancient Jewish figure associated with wisdom and knowledge.
The researchers began by radiocarbon dating 30 manuscript samples using advanced chemical treatments that removed contaminants. Of those, 24 yielded reliable dates, which were then used to train the AI.
The system analyzed thousands of features in the handwriting, from the curvature of letters to stroke angles. These details are often invisible to the human eye but detectable by machine learning algorithms. The AI learned to associate specific handwriting features with distinct time periods using advanced statistical methods including Bayesian ridge regression.
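For a sense of what that last step might look like in code, here is a hedged sketch using scikit-learn’s BayesianRidge: train on feature vectors for the radiocarbon-dated manuscripts, then predict a date and an uncertainty for an undated fragment. The feature set, the synthetic numbers, and the choice of library are assumptions for illustration; Enoch’s real pipeline, especially its handwriting feature extraction, is far more elaborate.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

# Synthetic stand-in: one row per radiocarbon-dated manuscript, columns are
# handwriting features (letter curvature, stroke angles, ...).
rng = np.random.default_rng(3)
n_dated, n_features = 24, 50                     # 24 manuscripts with reliable radiocarbon dates
X_train = rng.normal(size=(n_dated, n_features))
dates_bce = rng.uniform(50, 300, size=n_dated)   # made-up dates in years BCE

model = BayesianRidge().fit(X_train, dates_bce)

# Estimate a date, with an uncertainty, for an undated fragment's features.
X_fragment = rng.normal(size=(1, n_features))
mean, std = model.predict(X_fragment, return_std=True)
print(f"estimated date: ~{mean[0]:.0f} BCE, +/- {std[0]:.0f} years")
```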
What Did Enoch Reveal About The Dead Sea Scrolls?
When Enoch analyzed 135 previously undated scroll fragments, its predictions aligned with expert paleographic assessments 79% of the time. It’s a match rate that researchers describe as “highly unlikely to have occurred by chance alone.”
Importantly, Enoch often predicted earlier dates than traditional methods. One key example involves manuscript 4Q114, which contains text from the biblical Book of Daniel. Radiocarbon dating of this scroll yielded a range of 230-160 BCE, while scholars have traditionally dated the text based on historical content to around the 160s BCE. This demonstrates how the combined approach of radiocarbon dating and AI analysis can provide more precise age estimates.
Another significant finding concerns scrolls associated with the religious group behind the Dead Sea Scrolls. Manuscripts related to community regulations and religious practices, which had previously been dated to the first century BCE based on paleographic analysis, received earlier dates from Enoch’s AI predictions, suggesting the group’s literary activity may have begun in the second century BCE.
The AI system also revealed that two distinct handwriting styles, Hasmonaean and Herodian, likely coexisted for longer than experts once believed. This challenges the idea that one style replaced the other in a straightforward chronological sequence.
Why These Ancient Texts Still Matter
Discovered in the mid-20th century, the Dead Sea Scrolls are among the oldest surviving copies of the Hebrew Bible and provide critical context for the development of Judaism and early Christianity.
If some of these manuscripts were written decades earlier than previously believed, it could reshape scholarly understanding of how religious thought and textual traditions evolved during a crucial historical period. The results suggest that sophisticated literary and theological activity may have been flourishing in Judaea earlier than scholars realized.
The study also highlights the growing role of artificial intelligence in tackling historical questions. By combining physical data with high-resolution visual analysis, AI is helping researchers uncover patterns that are beyond human perception.
While Enoch doesn’t always agree with human experts, diverging in about 21% of cases, the authors argue that those disagreements highlight areas worthy of further investigation, rather than errors.
Looking ahead, the team plans to refine Enoch as more radiocarbon data and improved manuscript imaging become available. They’ve already tested the approach on medieval texts with known dates and achieved similar levels of accuracy, raising the possibility that AI could help redate other historical document collections around the world.
Artificial intelligence is opening a new chapter in manuscript studies, offering fresh perspectives on texts that shaped the religious and cultural foundations of the modern world.
Some of the most complex cognitive functions are possible because different sides of your brain control them. Chief among them is speech perception, the ability to interpret language. In people, the speech perception process is typically dominated by the left hemisphere.
Your brain breaks apart fleeting streams of acoustic information into parallel channels – linguistic, emotional and musical – and acts as a biological multicore processor. Although scientists have recognized this division of cognitive labor for over 160 years, the mechanisms underpinning it remain poorly understood.
Researchers know that distinct subgroups of neurons must be tuned to different frequencies and timing of sound. In recent decades, studies on animal models, especially in rodents, have confirmed that splitting sound processing across the brain is not uniquely human, opening the door to more closely dissecting how this occurs.
Yet a central puzzle persists: What makes near-identical regions in opposite hemispheres of the brain process different types of information?
Answering that question promises broader insight into how experience sculpts neural circuits during critical periods of early development, and why that process is disrupted in neurodevelopmental disorders.
Timing Is Everything
Sensory processing of sounds begins in the cochlea, a part of the inner ear where sound frequencies are converted into electricity and forwarded to the auditory cortex of the brain. Researchers believe that the division of labor across brain hemispheres required to recognize sound patterns begins in this region.
For more than a decade, my work as a neuroscientist has focused on the auditory cortex. My lab has shown that mice process sound differently in the left and right hemispheres of their brains, and we have worked to tease apart the underlying circuitry.
For example, we’ve found the left side of the brain has more focused, specialized connections that may help detect key features of speech, such as distinguishing one word from another. Meanwhile, the right side is more broadly connected, suited for processing melodies and the intonation of speech.
We tackled the question of how these left-right differences in hearing develop in our latest work, and our results underscore the adage that timing is everything.
We tracked how neural circuits in the left and right auditory cortex develop from early life to adulthood. To do this, we recorded electrical signals in mouse brains to observe how the auditory cortex matures and to see how sound experiences shape its structure.
Surprisingly, we found that the right hemisphere consistently outpaced the left in development, showing more rapid growth and refinement. This suggests there are critical windows of development – brief periods when the brain is especially adaptive and sensitive to environmental sound – specific to each hemisphere that occur at different times.
To test the consequences of this asynchrony, we exposed young mice to specific tones during these sensitive periods. In adulthood, we found that where sound is processed in their brains was permanently skewed. Animals that heard tones during the right hemisphere’s earlier critical window had an overrepresentation of those frequencies mapped in the right auditory cortex.
Adding yet another layer of complexity, we found that these critical windows vary by sex. The right hemisphere critical window opens earlier in female mice, and the left hemisphere window opens just days later. In contrast, male mice had a very sensitive right hemisphere critical window, but no detectable window on the left. This points to the elusive role sex may play in brain plasticity.
Our findings provide a new way to understand how different hemispheres of the brain process sound and why this might vary for different people. They also provide evidence that parallel areas of the brain are not interchangeable: the brain can encode the same sound in radically different ways, depending on when it occurs and which hemisphere is primed to receive it.
Speech And Neurodevelopment
The division of labor between brain hemispheres is a hallmark of many human cognitive functions, especially language. This is often disrupted in neuropsychiatric conditions such as autism and schizophrenia.
Reduced language information encoding in the left hemisphere is a strong indication of auditory hallucinations in schizophrenia. And a shift from left- to right-hemisphere language processing is characteristic of autism, where language development is often impaired.
Strikingly, the right hemisphere of people with autism seems to respond earlier to sound than the left hemisphere, echoing the accelerated right-side maturation we saw in our study on mice. Our findings suggest that this early dominance of the right hemisphere in encoding sound information might amplify its control of auditory processing, deepening the imbalance between hemispheres.
These insights deepen our understanding of how language-related areas in the brain typically develop and can help scientists design earlier and more targeted treatments to support early speech, especially for children with neurodevelopmental language disorders.
When patients with severe irritable bowel syndrome walk into Dr. Erin Mauney’s office, they’ve usually tried everything. Years of medications, diets, and treatments have failed them. So when she tells them she’s studying psilocybin — the psychoactive compound in magic mushrooms—to treat their gut problems, their reactions range from surprise to desperate hope.
Mauney, a pediatric gastroenterologist at Tufts University, is running the first study ever to test psychedelics on digestive disorders. Her research targets people whose irritable bowel syndrome (IBS) hasn’t responded to conventional medicine — a group that “may be 60%+ of patients by some studies,” according to Mauney.
The connection between psilocybin mushrooms and stomach problems might seem bizarre, but Mauney’s approach tackles something doctors have long overlooked: how psychological stress literally reshapes our gut function. Her patients often carry histories of childhood trauma that seem to manifest as physical symptoms decades later.
“I became very interested in the applicability of this emerging (or perhaps more apt to say, re-emerging) field of psychedelic-assisted medicine to patients who seem to be at war with their bodies,” Mauney explained in a recent interview published in the journal Psychedelics.
IBS causes unpredictable bouts of pain, cramping, diarrhea, and constipation that can derail careers and relationships. While the condition isn’t life-threatening, it can be socially isolating and emotionally devastating. Traditional treatments focus on managing symptoms rather than addressing root causes.
Mauney’s study examines something called interoception, which is how well people can sense and interpret signals from inside their bodies. Many IBS patients either become hyperaware of normal gut sensations, turning minor discomfort into severe pain, or completely disconnect from their bodily signals. Psilocybin appears to help reset this internal communication system.
Her research gives participants two doses of psilocybin alongside therapy sessions. Brain imaging using fMRI tracks changes in how patients perceive bodily sensations, while detailed interviews capture their reflections on the treatment.
Mauney’s path to psychedelic research began during the pandemic when she read Michael Pollan’s book “How to Change Your Mind.” As someone treating children and families dealing with mysterious symptoms, she started connecting childhood experiences to adult illness.
“During my medical training, I became aware of how common trauma, especially early life trauma, unfortunately is in the human experience,” she noted. “I think overall this is an area that medicine, particularly gastroenterology and obesity medicine, really fails to understand and address meaningfully.”
Her observation reflects growing scientific evidence that childhood experiences can affect adult health. Traditional medicine often treats physical symptoms separately from psychological causes.
While formal results haven’t yet been published, Mauney says sharing early observations with colleagues has been “very fun” — and appears to spark curiosity among other scientists. Rather than focusing solely on symptom relief, the study explores whether psychedelics can shift how patients relate to their bodily sensations in deeper, more meaningful ways.
Mauney draws from psychology literature, particularly the work of pediatrician-psychoanalyst Donald Winnicott, who explored how early relationships affect healing. Her interdisciplinary background allows her to see connections that purely medical approaches might miss.
Significant obstacles remain. Psilocybin is still federally illegal outside research contexts. Questions about optimal dosing, patient selection, and long-term effects need answers before treatments could become widely available.
But for patients who’ve exhausted conventional options, Mauney’s work offers hope. “This study brings a new option for patients who have not been helped by any existing approaches to IBS,” she said.
Rather than treating physical symptoms in isolation, Mauney’s approach recognizes that healing sometimes requires addressing the psychological experiences that may contribute to illness. For patients living with chronic digestive problems that conventional medicine struggles to solve, that holistic perspective could make all the difference.
Cryptogenic strokes have no obvious cause, but they are increasingly being linked to subtle, hidden risk factors – such as estrogen. (Krakenimages.com/Shutterstock)
For millions of women, combined hormonal contraceptives are a part of their daily life – providing a convenient and effective option for preventing pregnancy and managing their menstrual cycle.
But new findings are sounding the alarm on a serious, and often overlooked, risk: stroke.
According to recent findings presented at the European Stroke Organization Conference, combined oral hormonal contraceptives (which contain both estrogen and progestogen) may significantly increase the chance of women experiencing a cryptogenic stroke. This is a sudden and serious type of stroke that occurs with no obvious cause.
Surprisingly, in younger adults – particularly women – cryptogenic strokes make up approximately 40% of all strokes. This suggests there may be sex-specific factors that contribute to this risk – such as hormonal contraception use. The recently presented findings lend support to this theory.
At this year’s conference, researchers presented findings from the Secreto study, an international investigation into the causes of unexplained strokes in young people aged 18 to 49. The study enrolled 608 patients with cryptogenic ischemic stroke from 13 European countries.
One of the most striking discoveries was that women who used combined oral contraceptives were three times more likely to experience a cryptogenic stroke than non-users. These results held even after the researchers adjusted for other factors that may have contributed to stroke risk, such as obesity and a history of migraines.
It’s well-documented that hormonal contraceptives, which contain both estrogen and progestin, come with a small, increased risk of experiencing serious health events, including stroke – particularly ischemic stroke, which occurs when blood flow to part of the brain is blocked.
But a study published earlier this year, which tracked over two million women, found that combined hormonal contraceptives – including the pill, patches and vaginal rings, which all contain both synthetic estrogen and progestogen – were linked to higher risks of both stroke and heart attack. The vaginal ring increased stroke risk by 2.4 times and heart attack risk by 3.8 times, while the contraceptive patch increased stroke risk by nearly 3.5 times.
Interestingly, the researchers also looked at a progestin-only contraceptive (the hormonal IUD) and found no increased risk of either heart attack or stroke.
Both of these recent findings suggest estrogen may be the main driver of stroke risk. While absolute risk is still low – meaning fewer than 40 in every 100,000 women using a combined hormonal contraceptive will experience a stroke – the population-level impact is significant considering the number of women worldwide that use a combined hormonal contraceptive.
Estrogen And Stroke Risk
Combined hormonal contraceptives contain synthetic versions of the sex hormones: estrogen (usually ethinylestradiol) and a progestin (a synthetic progestogen).
Natural estrogen in the body plays a role in promoting blood clotting, which is important for helping wounds heal and preventing excessive bleeding.
But the synthetic estrogen in contraceptives is more potent and delivered in higher, steady doses. It stimulates the liver to produce extra clotting proteins and reduces natural anticoagulants — tipping the balance toward easier clot formation. This effect, while helpful in stopping bleeding, can raise the risk of abnormal blood clots that can lead to conditions such as stroke. This risk may be even greater for people who smoke, experience migraines or have a genetic tendency to clot.
If a clot forms in an artery that supplies the brain, or breaks off and travels through the bloodstream to the brain, this can block blood flow – causing what’s known as an ischemic stroke. This is the most common type of stroke. Clots can also form in deep veins (such as those in the legs or around your organs).
In addition to clotting, estrogen may also slightly raise blood pressure and affect how blood vessels behave over time, which can further increase stroke risk.
The effects of estrogen on clotting may partly explain why the recent conference findings showed a link between combined contraceptive use and cryptogenic stroke risk. Cryptogenic stroke has no obvious cause, but is increasingly being linked to subtle, hidden risk factors – such as hormone-driven clotting.
Understanding Risk
These numbers can sound alarming at first, but it’s important to keep them in perspective. The absolute risk – meaning the actual number of people affected – is still low.
For instance, researchers estimate that there may be one additional stroke per year for every 4,700 women using the combined pill.
That sounds rare, and for most users, it is. But when you consider that millions of women use these contraceptives globally, even a small increase in risk can translate into a significant number of strokes at the population level – which may help explain the high proportion of cryptogenic strokes seen in young women.
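To put that trade-off in concrete terms, here is a minimal back-of-envelope sketch in Python. The one-in-4,700 figure is the estimate quoted above; the worldwide user count is a purely hypothetical number chosen for illustration, not a statistic from the research.

```python
# Illustrative arithmetic only: how a small individual risk scales with many users.
additional_strokes_per_user_year = 1 / 4_700   # estimate quoted in the article
assumed_users_worldwide = 100_000_000          # hypothetical user count, for illustration

extra_strokes_per_year = assumed_users_worldwide * additional_strokes_per_user_year
print(f"Extra strokes per year under these assumptions: {extra_strokes_per_year:,.0f}")
# -> roughly 21,000 additional strokes per year, if those assumptions held
```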
Despite the risks associated with combined hormonal contraceptives, many women continue to use them – either because they aren’t fully informed of the risks or because the alternatives are either less effective, less accessible or come with their own burdens.
Part of the reason this trade-off has become so normalized is the persistent under-funding and under-prioritization of women’s health research. Historically, medical research has focused disproportionately on men – with women either excluded from studies or treated as an afterthought.
This has led to a limited understanding of how hormonal contraceptives affect female physiology beyond fertility control. As a result, the side-effects remain poorly understood, under-communicated and under-addressed.
Women have a right to make informed decisions about their health and body. This starts with having access to accurate information about the real risks and benefits of every contraceptive option. It means understanding, for example, that while combined hormonal contraceptives do carry a small risk of blood clots and stroke, pregnancy and the weeks following childbirth come with an even higher risk of those same complications. This context is vital for making truly informed choices.
No method of contraception is perfect. But when women are given the full picture, they can choose the method that best suits them. We also need more research that reflects the diversity and complexity of women’s bodies – not just to improve safety, but to expand options and empower decisions.
Menopause is something nearly every woman will go through. As fertility ends, levels of estrogen and progesterone drop significantly – changes that can deeply affect physical health, emotional wellbeing and everyday life.
For many, the effects of this hormonal shift are more than frustrating – they can be life altering. Symptoms like brain fog, hot flushes, night sweats, headaches, insomnia, fatigue, joint pain, low libido, anxiety, depression and even bone loss from osteoporosis are all common.
Hormone replacement therapy (HRT) has helped many women manage these symptoms – but one key hormone is often overlooked in both treatment and conversation: testosterone.
Testosterone is typically viewed as a “male hormone,” but it plays a crucial role in women’s health too. In fact, women have higher levels of testosterone than either estrogen or progesterone for most of their adult lives. And like the other sex hormones, testosterone also declines with age – with consequences that are only now being fully explored.
The Testosterone Gap
HRT is now widely used to replace estrogen and progesterone during and after menopause. These treatments – available as tablets, patches, gels and implants – are regulated, evidence-based and increasingly accessible through the NHS.
But when it comes to testosterone, the situation is entirely different.
Currently, there are no testosterone products licensed for use by women in the UK or Europe. The one exception internationally is Australia, where a testosterone cream specifically designed for women is available. Europe once had its own option – a transdermal patch called Intrinsa, designed and approved by regulators based on clinical evidence to treat low libido in women with surgically induced menopause. But the manufacturer withdrew the product in 2012, citing “commercial considerations” in its letter to the European Medicines Agency, the agency in charge of the evaluation and supervision of pharmaceutical products in Europe.
Since then, women across Europe have been left without an approved option.
In the absence of licensed treatments, some clinicians – mainly in private practice – are prescribing testosterone “off label,” often using products developed for men. These are typically gels or creams with dosages several times higher than most women need. While doctors may advise on how to adjust the dose, this kind of improvisation comes with risks: inaccurate dosing, inconsistent absorption and a lack of long-term safety data.
Some women report significant improvements – not just in libido, but also in brain fog, mood, joint pain and energy levels. However, the only proven clinical benefit of testosterone in women is in improving sexual desire for those with hypoactive sexual desire disorder (HSDD) following surgical menopause.
Even so, interest is growing – fueled by patient demand, celebrity use, social media buzz and a growing sense that testosterone may be a missing piece in midlife women’s care.
While there is increasing consensus that testosterone can play a role in supporting women’s health, the current situation presents two serious problems:
Safety and regulation: without licensed products, standardized dosing guidelines, or long-term safety data, off-label use puts both patients and clinicians in uncertain territory.
Access and inequality: testosterone therapy is rarely available through the NHS and is often only accessible through private clinics, creating a two-tier system. Those who can pay hundreds of pounds for consultations and prescriptions can access care, while others are left behind.
Innovation
There are signs of change. For example, I founded Medherant, a University of Warwick spin-out company that is currently developing a testosterone patch designed specifically for women. It’s in clinical trials and, if approved, could become the first licensed testosterone product for women in the UK in over a decade. It’s a much-needed step – and one that could pave the way for further innovation and broader access.
But the urgency remains. Millions of women are currently going without effective, evidence-based care. In the meantime, off-label prescribing should be done with care, based on the best available science – not hype or anecdote – and delivered through transparent, regulated healthcare channels.
Women deserve more than workarounds. They deserve treatments that are developed for their bodies, rigorously tested, approved by regulators and accessible to all – not just the few who can afford private care.
When half the population is affected, this isn’t a niche issue. It’s a priority.
More and more people over the age of 50 are taking up physical exercise. Medical associations resoundingly agree that this is a good thing. Physical exercise is not only key to disease prevention, it is also a recommended part of treatment for many illnesses.
However, starting to move at this stage of life requires some care. This is especially true for those who have not previously been physically active, or for people who are overweight or obese.
It has been proven that starting to exercise with routines that are too demanding can lead to significant muscular and skeletal injuries, especially if combined with an inadequate diet. This risk is even greater after the age of 50, as the loss of muscle and bone mass is more pronounced due to natural aging processes.
Before starting any new exercise program, it is a good idea to have a complete health and nutritional assessment, especially to determine whether micronutrient supplements are needed.
Protein Is Key
In addition to micronutrients, the body also needs carbohydrates, fats and proteins – known collectively as macronutrients. Proteins provide the body with the essential amino acids needed to maintain and develop muscle mass, and to prevent sarcopenia: the age-related loss of muscle mass and strength that, alongside bone loss from osteoporosis, contributes to what was formerly referred to as frailty.
Protein requirements vary according to an individual’s clinical situation. In people over 50 years of age who are moderately physically active, protein requirements range from 1 to 1.5 grams per kilogram of body weight per day.
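As a rough worked example of that guideline, the sketch below converts a body weight into a daily protein range; the 70-kilogram figure is arbitrary, and individual needs should still be assessed clinically.

```python
def daily_protein_range_g(body_weight_kg: float) -> tuple[float, float]:
    """Approximate daily protein range (in grams) for a moderately active
    adult over 50, using the 1.0-1.5 g per kg guideline quoted above."""
    return body_weight_kg * 1.0, body_weight_kg * 1.5

low, high = daily_protein_range_g(70)  # example: a 70 kg person
print(f"Approximate target: {low:.0f}-{high:.0f} g of protein per day")
# -> Approximate target: 70-105 g of protein per day
```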
However, it is not advisable to increase protein intake without a corresponding increase in physical exercise. Too much protein can actually have harmful effects, especially on bone health, as it has been observed to increase calcium excretion in the urine (calciuria) due to decreased tubular calcium reabsorption.
Animal And Vegetable Proteins
Protein sources should combine those of vegetable origin – soy, beans, seeds, peanuts, lentils, and so on – with those of animal origin, such as eggs, dairy products, chicken and fish.
While the ideal is to have a balance of both, it has been shown that following a vegetarian diet is compatible with high-performance sports, so long as there is suitable medical and nutritional monitoring.
In addition to what you eat, it also matters when you eat it. Spreading protein intake throughout the day is more beneficial than concentrating it in a single meal. It is also best to eat protein around 30 minutes before or after exercise, as its absorption and availability in the body will be better.
Essential Micronutrients: Magnesium, Calcium, Vitamin D
Some micronutrients – by which we mean vitamins and minerals – play a key role in physical exercise at this age. These include magnesium, calcium and vitamin D.
Magnesium aids muscle recovery and bone formation, and can be found in foods such as wheat bran, cheese, pumpkin seeds and flax seeds.
Calcium is essential for maintaining adequate bone mineralization and preventing loss of bone mineral density (osteopenia) associated with calcium deficiencies in the blood.
Dairy products are known to be beneficial for bone health, both for their bioavailable calcium and for the vitamin D content of whole milk. Certain plant-based foods, such as tahini (sesame paste), almonds, flaxseed, soy and hazelnuts, are also decent sources of calcium, but their phytate and oxalate content can hinder its absorption.
Lastly, oily fish (tuna, sardines, salmon, and so on) and egg yolks are considered complementary sources of vitamin D in dietary plans focused on people over 50 years of age who do physical exercise.
It is also vitally important to maintain proper hydration before, during and after exercise. Both dehydration and overhydration can affect performance and increase the risk of muscle injury.
Does The Type Of Exercise Matter?
So far we have seen how nutrition influences athletic performance and ultimately the risk of injury. But there is another part of the puzzle: the exercise you do.
There is actually no clear consensus on this, and there is ongoing debate about which type of exercise is the most appropriate according to age, gender or body composition. The question is whether it is better to prioritize strength exercises, alternate with cardio sessions, or do both on different days.
Despite the different theories on the subject, one thing is clear: regular exercise, adapted to the abilities of each individual and with good medical and nutritional monitoring, reduces the risk of multiple diseases and improves quality of life.
That cup of coffee you had during dinner with friends didn’t just keep you tossing and turning — it fundamentally altered how your brain operated during whatever sleep you managed to get. New research reveals that caffeine doesn’t simply block sleep; it transforms the sleeping brain into a more complex, hyperactive state that resembles being closer to peak mental performance.
Even more surprising: this coffee-induced brain makeover hits young adults much harder than middle-aged people, showing that your relationship with caffeine changes dramatically as you get older.
Scientists from the University of Montreal analyzed the brain waves of 40 healthy people during sleep after they consumed either 200 milligrams of caffeine (roughly equivalent to two cups of coffee) or a placebo pill. Participants spent two nights in a sleep laboratory under double-blind conditions. Brain activity was recorded using multi-channel EEG throughout the night, and the team applied multiple analytical approaches to identify patterns that distinguished caffeinated from non-caffeinated sleep.
They discovered that caffeine pushes the brain toward what’s called a “critical regime” — a state where neural networks operate at maximum efficiency and complexity.
Brain complexity and criticality are tied to optimal cognitive performance, enhanced information processing, and greater mental flexibility. When your brain operates in this critical zone, it’s essentially firing on all cylinders, processing information more efficiently and maintaining better communication between different brain regions.
Coffee Doesn’t Just Disturb Sleep—It Completely Rewires It
Most people assume caffeine simply prevents good sleep by keeping you awake longer or making sleep lighter. The study, published in Communications Biology, reveals something more fascinating: caffeine actually changes the fundamental nature of whatever sleep you do get, making your brain work overtime even during rest periods.
During non-REM sleep (the deep, restorative phase that typically shows low brain activity), caffeinated participants showed dramatically increased brain entropy and complexity. Their sleeping brains exhibited patterns more similar to wakefulness, with heightened information processing and neural communication that normally wouldn’t occur during this crucial recovery phase.
This effect was much stronger in younger adults aged 20-27 compared to middle-aged participants aged 41-58, particularly during REM sleep. Young people’s brains showed significant increases in multiple measures of complexity and criticality when caffeinated, while older adults showed much weaker responses.
Young Brains React Far More Dramatically to Caffeine
The age-related differences likely stem from changes in adenosine receptors, which are the brain’s “sleepiness switches” that caffeine blocks. As people age, they naturally lose adenosine A1 receptors, which means caffeine has fewer targets to affect.
With more receptors available in younger people, caffeine can exert a stronger influence on brain dynamics. This finding has practical implications for caffeine consumption across age groups. While middle-aged adults might feel they can handle that after-dinner espresso better than they used to, young adults are experiencing more dramatic changes to their sleep brain activity. These are changes that could affect the restorative functions of sleep.
Understanding the ‘Critical’ Brain State
To grasp what researchers mean by “critical” brain dynamics, consider neural networks like a perfectly tuned orchestra. Too little activity, and the brain operates sluggishly and inefficiently. Too much activity creates chaos. But right at the critical point, the brain achieves optimal performance with maximum information processing.
Caffeine appears to push sleeping brains toward this critical state, particularly during non-REM sleep. The researchers measured this using several sophisticated techniques that look at how repetitive or varied brain signals are and examine long-range patterns in brain activity.
All measures pointed to the same conclusion: caffeine makes sleeping brains more complex, more variable, and more similar to highly engaged waking brains. Machine learning algorithms could distinguish between caffeinated and non-caffeinated sleep with up to 75% accuracy based solely on these brain complexity measures.
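The study itself does not publish its analysis code here, but the result suggests a familiar pipeline: summarize each night’s EEG as a handful of complexity measures, then train a classifier to label nights as caffeinated or not. The sketch below is only a generic illustration of that idea, using synthetic feature values and scikit-learn rather than the study’s data or methods.

```python
# Generic illustration of a "complexity features -> classifier" pipeline,
# using synthetic data; this is not the study's actual analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_nights = 80  # e.g., 40 participants, two nights each (hypothetical)

# Labels: 0 = placebo night, 1 = caffeine night.
labels = np.repeat([0, 1], n_nights // 2)

# Two hypothetical per-night features (e.g., signal entropy, a complexity index),
# simulated so that caffeine nights score slightly higher on average.
features = rng.normal(loc=labels[:, None] * 0.5, scale=1.0, size=(n_nights, 2))

clf = LogisticRegression()
accuracy = cross_val_score(clf, features, labels, cv=5).mean()
print(f"Cross-validated accuracy on synthetic data: {accuracy:.0%}")
```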
Previous caffeine research focused mainly on obvious effects, such as how long it takes to fall asleep, how much you toss and turn, or changes in specific brain wave frequencies. This study took a deeper dive, using cutting-edge analysis techniques to examine how caffeine affects the fundamental dynamics of neural networks.
What This Means for Your Health
While operating in a critical brain state sounds beneficial (and can be during wakefulness), the implications for sleep are more complicated. Sleep serves crucial functions for memory consolidation, cellular repair, and toxin clearance from the brain. If caffeine is making sleeping brains work more like waking brains, it might interfere with these restorative processes.
The study also found that caffeine’s effects were much more pronounced during non-REM sleep compared to REM sleep. Non-REM sleep is particularly important for memory consolidation and brain restoration, so disruptions to this phase could have significant consequences for cognitive function and health.
For coffee lovers, the research serves as a wake-up call that their favorite beverage’s effects extend far beyond simple wakefulness — it’s literally rewiring how their brains operate throughout the night, with younger people experiencing the most dramatic changes to their sleep brain activity.
The luminous, hot star Wolf-Rayet 124 (WR 124) is prominent at the center of the James Webb Space Telescope’s composite image combining near-infrared and mid-infrared wavelengths of light from Webb’s Near-Infrared Camera and Mid-Infrared Instrument. (Credit: NASA, ESA, CSA, STScI, Webb ERO Production Team)
My telescope, set up for astrophotography in my light-polluted San Diego backyard, was pointed at a galaxy unfathomably far from Earth. My wife, Cristina, walked up just as the first space photo streamed to my tablet. It sparkled on the screen in front of us.
“That’s the Pinwheel galaxy,” I said. The name is derived from its shape – though this particular pinwheel contains about a trillion stars.
The light from the Pinwheel traveled for 25 million years across the universe – about 150 quintillion miles – to get to my telescope. My wife wondered: “Doesn’t light get tired during such a long journey?”
Her curiosity triggered a thought-provoking conversation about light. Ultimately, why doesn’t light wear out and lose energy over time?
Let’s Talk About Light
I am an astrophysicist, and one of the first things I learned in my studies is how light often behaves in ways that defy our intuitions.
Light is electromagnetic radiation: basically, an electric wave and a magnetic wave coupled together and traveling through space-time. It has no mass. That point is critical because the mass of an object, whether a speck of dust or a spaceship, limits the top speed it can travel through space.
But because light is massless, it’s able to reach the maximum speed limit in a vacuum – about 186,000 miles (300,000 kilometers) per second, or almost 6 trillion miles per year (9.6 trillion kilometers). Nothing traveling through space is faster. To put that into perspective: In the time it takes you to blink your eyes, a particle of light travels around the circumference of the Earth more than twice.
As incredibly fast as that is, space is incredibly spread out. Light from the Sun, which is 93 million miles (about 150 million kilometers) from Earth, takes just over eight minutes to reach us. In other words, the sunlight you see is eight minutes old.
Alpha Centauri, the nearest star to us after the Sun, is 26 trillion miles away (about 41 trillion kilometers). So by the time you see it in the night sky, its light is just over four years old. Or, as astronomers say, it’s four light years away.
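Those travel times are just distance divided by speed, so they are easy to check. The short Python sketch below uses the approximate figures quoted above.

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_000       # approximate value used in the text
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def light_travel_time_seconds(distance_miles: float) -> float:
    """Time for light to cross a given distance, in seconds."""
    return distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC

sun = light_travel_time_seconds(93_000_000)        # Sun-to-Earth distance in miles
alpha_centauri = light_travel_time_seconds(26e12)  # roughly 26 trillion miles

print(f"Sunlight reaching Earth is about {sun / 60:.1f} minutes old")  # ~8.3 minutes
print(f"Light from Alpha Centauri is about {alpha_centauri / SECONDS_PER_YEAR:.1f} years old")  # ~4.4 years
```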
With those enormous distances in mind, consider Cristina’s question: How can light travel across the universe and not slowly lose energy?
Actually, some light does lose energy. This happens when it bounces off something, such as interstellar dust, and is scattered about.
But most light just goes and goes, without colliding with anything. This is almost always the case because space is mostly empty – nothingness. So there’s nothing in the way.
When light travels unimpeded, it loses no energy. It can maintain that 186,000-mile-per-second speed forever.
It’s About Time
Here’s another concept: Picture yourself as an astronaut on board the International Space Station. You’re orbiting at 17,000 miles (about 27,000 kilometers) per hour. Compared with someone on Earth, your wristwatch will tick 0.01 seconds slower over one year.
That’s an example of time dilation – time moving at different speeds under different conditions. If you’re moving really fast, or close to a large gravitational field, your clock will tick more slowly than someone moving slower than you, or who is further from a large gravitational field. To say it succinctly, time is relative.
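The 0.01-second figure follows from the special-relativity time-dilation formula: a moving clock runs slow by a factor of the square root of 1 minus v squared over c squared. Here is a minimal Python check; it uses an approximate orbital speed and ignores the smaller gravitational part of the effect.

```python
import math

C = 299_792_458.0        # speed of light in m/s
ISS_SPEED = 7_600.0      # roughly 17,000 mph expressed in m/s (approximate)
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Special relativity: a moving clock runs slow by a factor of sqrt(1 - v^2/c^2).
fractional_slowdown = 1 - math.sqrt(1 - (ISS_SPEED / C) ** 2)
lag_per_year = fractional_slowdown * SECONDS_PER_YEAR

print(f"ISS clock lags by about {lag_per_year:.3f} seconds per year")  # ~0.01 s
```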
Now consider that light is inextricably connected to time. Picture sitting on a photon, a fundamental particle of light; here, you’d experience maximum time dilation. Everyone on Earth would clock you at the speed of light, but from your reference frame, time would completely stop.
That’s because the “clocks” measuring time are in two different places going vastly different speeds: the photon moving at the speed of light, and the comparatively slowpoke speed of Earth going around the Sun.
What’s more, when you’re traveling at or close to the speed of light, the distance between where you are and where you’re going gets shorter. That is, space itself becomes more compact in the direction of motion – so the faster you can go, the shorter your journey has to be. In other words, for the photon, space gets squished.
Which brings us back to my picture of the Pinwheel galaxy. From the photon’s perspective, a star within the galaxy emitted it, and then a single pixel in my backyard camera absorbed it, at exactly the same time. Because space is squished, to the photon the journey was effectively instantaneous.
But from our perspective on Earth, the photon left the galaxy 25 million years ago and traveled 25 million light years across space until it landed on my tablet in my backyard.
And there, on a cool spring night, its stunning image inspired a delightful conversation between a nerdy scientist and his curious wife.
Astronomers have detected the most promising signs yet of a possible biosignature outside the solar system, although they remain cautious. Using data from the James Webb Space Telescope (JWST), the astronomers, led by the University of Cambridge, have detected the chemical fingerprints of dimethyl sulfide (DMS) and/or dimethyl disulfide (DMDS), in the atmosphere of the exoplanet K2-18b, which orbits its star in the habitable zone. (Credit: A. Smith, N. Madhusudhan (University of Cambridge))
A team of researchers has recently claimed they have discovered a gas called dimethyl sulphide (DMS) in the atmosphere of K2-18b, a planet orbiting a distant star.
The University of Cambridge team’s claims are potentially very exciting because, on Earth at least, the compound is produced by marine bacteria. The presence of this gas may be a sign of life on K2-18b too – but we can’t rush to conclusions just yet.
K2-18b has a radius 2.6 times that of Earth, a mass nearly nine times greater and orbits a star that is 124 light years away. We can’t directly tell what kinds of large scale characteristics it has, although one possibility is a world with a global liquid water ocean under a hydrogen-rich atmosphere.
Such a world might well be hospitable to life, but different ideas exist about the properties of this planet – and what that might mean for a DMS signature.
Claims for the detection of life on other planets go back decades.
In the 1970s, one of the scientists working on the Viking mission to Mars claimed that his experiment had indicated there could be microorganisms in the Martian soil. However, these conclusions were widely refuted by other researchers.
In 1996, a team said that microscopic features resembling bacteria had been found in the Martian meteorite ALH84001. However, subsequent studies cast significant doubt on the discovery.
Since the early 2000s there have also been repeated claims for the detection of methane gas in the atmosphere of Mars, both by remote sensing by satellites and by in-situ observations by rovers.
Methane can be produced by several mechanisms. One of these potential sources involves production by microorganisms. Such sources are described by scientists as being “biotic.” Other sources of methane, such as volcanoes and hydrothermal vents, don’t require life and are said to be “abiotic.”
Not all of the previous claims for evidence of extraterrestrial life involve the Red Planet. In 2020, Earth-based observations of Venus’s atmosphere implied the presence of low levels of phosphine gas.
Because phosphine gas can be produced by microbes, there was speculation that life might exist in Venus’s clouds. However, the detection of phosphine was later disputed by other scientists.
Proposed signs of life on other worlds are known as “biosignatures.” A biosignature is defined as “an object, substance, and/or pattern whose origin specifically requires a biological agent.” In other words, any detection requires all possible abiotic production pathways to be considered.
On top of this, scientists face many challenges in collecting and interpreting data on possible biosignature gases, and in understanding their planetary environmental context. Understanding the composition of a planetary atmosphere from limited data, collected from light years away, is very difficult.
We also have to remember that these are often exotic environments, with conditions we do not experience on Earth. As such, exotic chemical processes may occur there too.
In order to characterize the atmospheres of exoplanets, we obtain what are called spectra. These are the fingerprints of molecules in the atmosphere that absorb light at specific wavelengths.
Once the data has been collected, it needs to be interpreted. Astronomers assess which chemicals, or combinations thereof, best fit the observations. It is an involved process, and one that requires a lot of computer-based work. The process is especially challenging when dealing with exoplanets, where available data is at a premium.
Once these stages have been carried out, astronomers can then assign a confidence to the likelihood of a particular chemical signature being “real.” In the case of the recent discovery from K2-18b, the authors claim the detection of a feature that can only be explained by DMS with a likelihood of greater than 99.9%. In other words, there’s about a 1 in 1,500 chance that this feature is not actually there.
While the team behind the recent result favors a model of K2-18b as an ocean world, another team suggests it could actually have a magma (molten rock) ocean instead. It could also be a Neptune-like “gas dwarf” planet, with a small core shrouded in a thick layer of gas and ices. Both of these options would be much less favorable to the development of life – raising questions as to whether there are abiotic ways that DMS can form.
A Higher Bar?
But is the bar higher for claims of extraterrestrial life than for other areas of science? In a study claiming the detection of a biosignature, the usual level of scientific rigor expected for all research should apply to the collection and processing of the data, along with the interpretation of the results.
However, even when these standards have been met, claims that indicate the presence of life have in the past still been met with high levels of scepticism. The reasons for this are probably best summed up by the phrase “extraordinary claims require extraordinary evidence,” attributed to the American planetary scientist, author and science communicator Carl Sagan.
While on Earth there are no known means of producing DMS without life, the chemical has been detected on the comet 67P, which was studied up close by the European Space Agency’s Rosetta spacecraft. DMS has even been detected in the interstellar medium, the space between stars, suggesting that it can be produced by non-biological, or abiotic, mechanisms.
Given the uncertainties about the nature of K2-18b, we cannot be sure if the presence of this gas might simply be a sign of non-biological processes we don’t yet understand.
The claimed discovery of DMS on K2-18b is interesting, exciting, and reflects huge advances in astronomy, planetary science and astrobiology. However, its possible implications mean that we have to consider the results very cautiously. We must also entertain alternative explanations before supporting such a profound conclusion as the presence of extraterrestrial life.
Singer Billy Joel performs in concert at Madison Square Garden on November 21, 2016 in New York City (Photo by Debby Wong on Shutterstock)
Billy Joel is stepping away from the stage. The 76-year-old music legend announced Friday that he’s canceling all remaining concerts after being diagnosed with Normal Pressure Hydrocephalus, a brain condition that’s been affecting his balance, hearing, and vision.
“This condition has been exacerbated by recent concert performances, leading to problems with hearing, vision, and balance,” Joel’s team said in a statement. “I’m sincerely sorry to disappoint our audience, and thank you for understanding,” Joel added.
It’s been a tough few months for Joel. Back in February, he fell flat on his back after throwing his microphone stand to a crew member during a show at Mohegan Sun Arena in Connecticut. He quickly stood up and finished the performance, but that fall may have been connected to what was to come.
What Exactly Is Normal Pressure Hydrocephalus?
If you’ve never heard of Normal Pressure Hydrocephalus—or NPH, as doctors call it—you’re not alone. It’s a rare condition that most often affects older adults, with an average age over 60.
Your brain has chambers called ventricles that normally contain a clear fluid called cerebrospinal fluid, or CSF. This fluid cushions the brain and spinal cord, supplies them with nutrients, and removes waste products. Normally, your body makes just enough CSF each day and absorbs that same amount.
But with NPH, something goes wrong with this system. The CSF is produced in regular amounts but fails to be absorbed properly, causing it to accumulate. As the fluid builds up, the ventricles enlarge to accommodate the extra volume. This enlargement puts pressure on the surrounding brain tissues, which can cause damage.
Telltale Signs Of NPH
NPH typically shows up through three main symptoms, though not everyone experiences all three: difficulty walking, problems with thinking and memory, and loss of bladder control.
Walking problems are usually the first sign and affect nearly all NPH patients. The condition causes a distinctive gait that doctors describe as broad-based, slow, and short-stepped. Some patients describe feeling like their feet are “stuck to the floor.” For a performer like Joel who’s used to commanding the stage, these balance issues would make concerts not just difficult, but potentially dangerous.
Mental symptoms can include loss of interest in daily activities, forgetfulness, and difficulty completing routine tasks. These changes often develop so gradually that families initially dismiss them as normal aging, but they’re actually symptoms of a treatable condition.
Why Does It Happen?
In some cases, NPH develops after brain injuries, bleeding, infections, tumors, or complications from surgery. But in most cases, doctors don’t know what causes the fluid buildup. NPH symptoms can look very similar to those of Alzheimer’s and Parkinson’s diseases, which can make diagnosis challenging.
Given Joel’s February fall and the physical demands of touring, it’s possible that the repeated strain of performing may have worsened his condition. However, his medical team hasn’t specified what might have triggered his NPH.
Treatment Options Offer Hope
Here’s some encouraging news: unlike many brain conditions, NPH can often be treated successfully. It’s one of the few causes of dementia-like symptoms that can be controlled or even reversed with proper treatment.
The main treatment involves surgically placing a device called a shunt in the brain’s ventricles. This thin tube creates an alternative pathway to drain the excess fluid, typically into the abdomen where the body can absorb it. Studies show that NPH symptoms improve in 70% to 90% of patients who receive shunt surgery.
Most people who are properly diagnosed and determined to be good candidates for the procedure experience significant improvement, though it may take weeks or months to see the full benefits. The biggest improvements are usually seen in walking ability, followed by cognitive function.
Joel’s statement mentioned he’s currently undergoing physical therapy under his doctor’s supervision, which suggests his medical team is taking a comprehensive approach to his care.
A Career On Pause, Not Over
Joel is canceling 17 stadium shows that were scheduled across North America and England. This includes highly anticipated performances at iconic venues like Yankee Stadium and Citi Field in New York, plus MetLife Stadium in New Jersey.
For an artist who wrapped up a decade-long residency at Madison Square Garden just last year and has been one of the most reliable touring acts in music, this decision couldn’t have been easy. But Joel is prioritizing his health, which is the right call.
It’s a question that’s long been the cause of debate: is it better to shower in the morning or at night?
Morning shower enthusiasts will say this is the obvious winner, as it helps you wake up and start the day fresh. Night shower loyalists, on the other hand, will argue it’s better to “wash the day away” and relax before bed.
But what does the research actually say? As a microbiologist, I can tell you there actually is a clear answer to this question.
First off, it’s important to stress that showering is an integral part of any good hygiene routine — regardless of when you prefer to have one.
Showering helps us remove dirt and oil from our skin, which can help prevent skin rashes and infections.
Showering also removes sweat, which can quell body odor.
Although many of us think that body odor is caused by sweat, it’s actually produced by bacteria that live on the surface of our skin. Fresh sweat is, in fact, odorless. But skin-dwelling bacteria – specifically staphylococci – use sweat as a direct nutrient source. When they break down the sweat, they release sulphur-containing compounds called thioalcohols, which are behind that pungent BO stench many of us are familiar with.
Day or night?
During the day, your body and hair inevitably collect pollutants and allergens (such as dust and pollen) alongside their usual accumulation of sweat and sebaceous oil. While some of these particles will be retained by your clothes, others will inevitably be transferred to your sheets and pillow cases.
The sweat and oil from your skin will also support the growth of the bacteria that comprise your skin microbiome. These bacteria may then also be transferred from your body onto your sheets.
Showering at night may remove some of the allergens, sweat and oil picked up during the day so less ends up on your bedsheets.
However, even if you’ve freshly showered before bed, you will still sweat during the night – whatever the temperature is. Your skin microbes will then eat the nutrients in that sweat. This means that by the morning, you’ll have both deposited microbes onto your bed sheets and you’ll probably also wake up with some BO.
What particularly negates the cleaning benefits of a night shower is bedding that is not regularly laundered. The odor-causing microbes present in your bed sheets may be transferred onto your clean body while you sleep.
Showering at night also does not stop your skin cells being shed. This means they can potentially become the food source of house dust mites, whose waste can be allergenic. If you don’t regularly wash your sheets, this could lead to a build-up of dead skin cell deposits which will feed more dust mites. The droppings from these dust mites can trigger allergies and exacerbate asthma.
Morning showers, on the other hand, can help remove dead skin cells as well as any sweat or bacteria you’ve picked up from your bed sheets during the night. This is especially important to do if your sheets weren’t freshly washed when you went to bed.
A morning shower means your body will be cleaner of night-acquired skin microbes when you put on fresh clothes. You’ll also start the day with less sweat for odor-producing bacteria to feed on – which will probably help you smell fresher for longer during the day compared to someone who showered at night. As a microbiologist, I am a day shower advocate.
Of course, everyone has their own shower preference. Whatever time you choose, remember that the effectiveness of your shower is influenced by many aspects of your personal hygiene regime – such as how frequently you wash your bed sheets.
So regardless of whether you prefer a morning or evening shower, it’s important to clean your bed linen regularly. You should launder your sheets and pillow cases at least weekly to remove all the sweat, bacteria, dead skin cells and sebaceous oils that have built up on them.
Washing will also remove any fungal spores that might be growing on the bed linen – alongside the nutrient sources these odor producing microbes use to grow.
More than 60 million people are estimated to be living with dementia, resulting in over 1.5 million deaths a year and an annual cost to the global healthcare economy of around $1.3 trillion.
Despite decades of scientific research and billions of dollars of investments, dementia still has no cure. But what of the old saying that prevention is better than cure? Is preventing dementia possible? And if so, at what age should we be taking steps to do so?
Despite what many believe, dementia is not simply an unavoidable consequence of aging or genetics. It is estimated that up to 45% of dementia cases could potentially be prevented by reducing exposure to 14 modifiable risk factors common throughout the world.
Many of these risk factors – which include things like obesity, lack of exercise, and smoking – are traditionally studied from middle age (around 40 to 60 years old) onwards. As a result, several of the world’s leading health bodies and dementia charities now recommend that strategies aimed at reducing dementia risk should ideally be targeted at this age to reap the greatest benefits.
We argue, however, that targeting even younger ages is likely to provide greater benefits still. But how young are we talking? And why would exposure to risk factors many decades before the symptoms of dementia traditionally appear be important?
To explain, let’s work backwards from middle age, starting with the three decades covering adolescence and young adulthood (from ten to 40 years old).
Many lifestyle-related dementia risk factors emerge during the teenage years, then persist into adulthood. For example, 80% of adolescents living with obesity will remain this way when they are adults. The same applies to high blood pressure and lack of exercise. Similarly, virtually all adults who smoke or drink will have started these unhealthy habits in or around adolescence.
This poses two potential issues when considering middle age as the best starting point for dementia-prevention strategies. First, altering health behavior that has already been established is notoriously difficult. And second, most high-risk individuals targeted in middle age will almost certainly have been exposed to the damaging effects of these risk factors for many decades already.
As such, the most effective actions are likely to be those aimed at preventing unhealthy behavior in the first place, rather than attempting to change long-established habits decades down the line.
The Roots Of Dementia
But what about even earlier in people’s lives? Could the roots of dementia stretch as far back as childhood or infancy? Increasing evidence suggests yes, and that risk factor exposures in the first decade of life (or even while in the womb) may have lifelong implications for dementia risk.
To understand why this may be, it’s important to remember that our brain goes through three major periods during our lives – development in early life, a period of relative stability in adult life, and decline (in some functions) in old age.
Most dementia research understandably focuses on changes associated with that decline in later life. But there is increasing evidence that many of the differences in brain structure and function associated with dementia in older adults may have at least partly existed since childhood.
For example, in long-term studies where people have had their cognitive ability tracked across their whole lives, one of the most important factors explaining someone’s cognitive ability at age 70 is their cognitive ability when they were 11. That is, older adults with poorer cognitive skills have often had these lower skills since childhood, rather than the differences being solely due to a faster decline in older age.
Similar patterns are also seen when looking for evidence of dementia-related damage on brain scans, with some changes appearing to be more closely related to risk factor exposures in early life than current unhealthy lifestyles.
Taken together, perhaps the time has come for dementia prevention to be thought of as a lifelong goal, rather than simply a focus for old age.
A Lifelong Prevention Plan
But how do we achieve this in practical terms? Complex problems require complex solutions, and there is no quick fix to address this challenge. Many factors contribute to increasing or decreasing an individual’s dementia risk – there is no “one size fits all” approach.
But one thing generally agreed upon is that mass medication of young people is not the answer. Instead, we – along with 33 other leading international researchers in the field of dementia – recently published a set of recommendations for actions that can be taken at the individual, community and national levels to improve brain health from an early age.
Our consensus statement and recommendations deliver two clear messages. First, meaningful reductions in dementia risk for as many people as possible will only be achievable through a coordinated approach that brings together healthier environments, better education and smarter public policy.
By analyzing how far material ejected from an impact crater flies, scientists can locate buried glaciers and other interesting subsurface features. On the left is an image of a fresh Martian impact crater taken by NASA’s HiRISE instrument. On the right is the extent of an ejecta blanket according to computer simulations of impacts. (Credit: NASA/Aleksandra Sokolowska)
Mars keeps its secrets buried deep, but every meteorite that punches through its protective layers scatters evidence across the surface. Scientists from Brown University now believe the scattered debris around impact craters holds a coded message about what’s hiding beneath the Martian surface, including potentially life-sustaining ice deposits that future astronauts could tap into.
When meteorites strike Mars, they fling material outward in what scientists call an ejecta blanket. The distance this debris travels compared to the crater size (known as “ejecta mobility” or EM) works like a fingerprint that can identify what is beneath the planet’s surface. This could include ice resources that could be vital for future human missions.
According to new research published in the Journal of Geophysical Research: Planets, when solid ice lies just beneath the surface on Mars, the debris from crater impacts doesn’t spread as far, giving scientists a possible way to spot hidden ice from orbit.
This means we might be able to map underground ice deposits simply by analyzing crater patterns from spacecraft orbiting Mars, without needing to send expensive drilling equipment to the planet.
The international team, led by A.J. Sokolowska from Brown University, conducted sophisticated computer simulations of Martian impacts. Using a program called iSALE, they modeled what happens when meteorites hit different layers of material that might exist beneath Mars’ dusty exterior.
Their simulations focused on impacts creating craters roughly 50 meters across, similar to ones that have formed on Mars during the recent period of spacecraft observation. The team tested dozens of different underground setups, varying the materials, strengths, porosities, and layering patterns.
This study focuses on smaller, more common craters. These impacts probe the shallow subsurface that future explorers would need to access. They’re also “strength-dominated,” meaning the material’s physical properties have a stronger influence on the impact process than gravity does. This means that for smaller craters, the type of material under the surface matters more for how far debris flies than it does in larger craters, where gravity plays a bigger role.
Depending on what lies beneath the surface, ejecta can spread dramatically different distances. EM values ranged from as low as 7 to as high as 19 times the crater radius.
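As a minimal sketch of how that “ejecta mobility” fingerprint could be used, the snippet below computes EM from hypothetical measurements and flags possible buried ice below an assumed cutoff; the threshold and the example numbers are illustrative, not values from the paper.

```python
def ejecta_mobility(ejecta_extent_m: float, crater_radius_m: float) -> float:
    """EM: how far the ejecta blanket reaches, expressed in crater radii."""
    return ejecta_extent_m / crater_radius_m

def may_hide_ice(em: float, threshold: float = 12.0) -> bool:
    """Lower EM values were associated with buried ice in the cases discussed;
    this cutoff is an assumed, illustrative value only."""
    return em < threshold

# Hypothetical example: a crater 25 m in radius whose debris reaches ~275 m out.
em = ejecta_mobility(ejecta_extent_m=275.0, crater_radius_m=25.0)
print(f"EM = {em:.0f}, possible buried ice: {may_hide_ice(em)}")  # EM = 11 -> True
```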
The team discovered that ejecta dynamics are influenced by materials lying much deeper than previously thought – not just by the layer directly carved out by the impact.
When impacts occur over areas with ice beneath the surface, the debris pattern looks distinctly different from impacts into purely rocky terrain. This difference is substantial enough that scientists could identify it in images from spacecraft like NASA’s Mars Reconnaissance Orbiter.
The researchers examined two small Martian impact craters with dramatically different ejecta patterns. One displayed an EM of approximately 19, while the other had an EM of just 11. The crater with the lower EM value is known to have excavated ice, exactly as the team’s models predicted.
If there is ice on Mars, that means there is water there, too. Finding water on Mars is crucial for any future human presence. Astronauts will need water not only for drinking but also for growing food and potentially producing rocket fuel for a return journey to Earth. Hauling all that water from Earth would be prohibitively expensive and logistically challenging.
The research also revealed interesting physics about how impact shocks move through different materials. When a meteorite hits Mars, it creates shock waves that behave differently depending on what materials they encounter. The researchers found that the contrast in acoustic impedance (how well sound waves move through a material) between different subsurface layers significantly affects how ejecta gets distributed.
Future missions to Mars could target areas where crater ejecta patterns suggest accessible ice deposits, potentially increasing the chances of mission success.
A vast cavern in South Dakota shielded from the outside world will house sensitive equipment to detect tiny changes in sub-atomic particles
Inside a laboratory nestled above the mist of the forests of South Dakota, scientists are searching for the answer to one of science’s biggest questions: why does our Universe exist?
They are in a race for the answer with a separate team of Japanese scientists – who are several years ahead.
The current theory of how the Universe came into being can’t explain the existence of the planets, stars and galaxies we see around us. Both teams are building detectors that study a sub-atomic particle called a neutrino in the hope of finding answers.
The US-led international collaboration is hoping the answer lies deep underground, in the aptly named Deep Underground Neutrino Experiment (Dune).
The scientists will travel 1,500 metres below the surface into three vast underground caverns. Such is the scale that construction crews and their bulldozers seem like small plastic toys by comparison.
The science director of this facility, Dr Jaret Heise describes the giant caves as “cathedrals to science”.
Dr Heise has been involved in the construction of these caverns at the Sanford Underground Research Facility (Surf) for nearly ten years. They seal Dune off from the noise and radiation of the world above. Now, Dune is ready for the next stage.
“We are poised to build the detector that will change our understanding of the Universe with instruments that will be deployed by a collaboration of more than 1,400 scientists from 35 countries who are eager to answer the question of why we exist,” he says.
When the Universe came into being, two kinds of particles were created: matter – from which stars, planets and everything around us are made – and, in equal amounts, antimatter, matter’s exact opposite.
Theoretically the two should have cancelled each other out, leaving nothing but a big burst of energy. And yet, here we – as matter – are.
Scientists believe that the answer to understanding why matter won – and we exist – lies in studying a particle called the neutrino and its antimatter opposite, the anti-neutrino.
They will be firing beams of both kinds of particles from deep underground in Illinois to the detectors in South Dakota, 800 miles away.
This is because as they travel, neutrinos and anti-neutrinos change ever so slightly.
The scientists want to find out whether those changes are different for the neutrinos and anti-neutrinos. If they are, it could lead them to the answer of why matter and anti-matter don’t cancel each other out.
Dune is an international collaboration, involving 1,400 scientists from thirty countries. Among them is Dr Kate Shaw from Sussex University, who told me that the discoveries in store will be “transformative” to our understanding of the Universe and humanity’s view of itself.
“It is really exciting that we are here now with the technology, with the engineering, with the computer software skills to really be able to attack these big questions,” she said.
Half a world away, Japanese scientists are using shining golden globes to search for the same answers. Gleaming in all its splendour, their detector is like a temple to science, mirroring the cathedral in South Dakota 6,000 miles (9,650 km) away. The scientists are building Hyper-K – which will be a bigger and better version of their existing neutrino detector, Super-K.
The Japanese-led team will be ready to turn on their neutrino beam in less than three years, several years earlier than the American project. Just like Dune, Hyper-K is an international collaboration. Dr Mark Scott of Imperial College, London believes his team is in pole position to make one of the biggest ever discoveries about the origin of the Universe.
“We switch on earlier and we have a larger detector, so we should have more sensitivity sooner than Dune,” he says.
Having both experiments running together means that scientists will learn more than they would with just one, but, he says, “I would like to get there first!”
Dark chocolate and tea are examples of things you can incorporate into your diet to help lower blood pressure. (Photo by StudyFinds on Shutterstock AI Generator)
Instead of buying the next trendy expensive supplement or exotic superfood, a morning cup of tea could do more for your blood pressure than you realize. A new international study suggests that certain foods, like tea and dark chocolate, pack enough cardiovascular punch to rival prescription medications—at least for people who need them most.
Researchers analyzed 145 clinical trials involving over 5,000 participants and found that foods rich in compounds called flavan-3-ols can slash blood pressure by amounts comparable to many hypertension drugs. In people with high blood pressure, consuming these foods daily dropped their readings by about 6 points systolic (the top number) and 3 points diastolic (the bottom number), similar to what doctors see with standard medications.
The study, published in the European Journal of Preventive Cardiology, examined foods that most Americans already know and love: dark chocolate, tea (both green and black), apples, and grapes. Unlike studies that test isolated compounds in lab settings, this analysis looked at actual foods people can easily incorporate into their daily routines.
Blood pressure benefits weren’t the only good news. These foods also improved how well blood vessels function, measured through a test called flow-mediated dilation, which gauges how well arteries expand when blood flows through them. This effect occurred regardless of blood pressure changes, suggesting that flavan-3-ols might offer cardiovascular protection beyond just lowering the numbers on a monitor.
The research team combed through medical databases for studies published between 1946 and 2024. They included only randomized controlled trials, the gold standard for medical research, that tested flavan-3-ol-rich foods against placebos.
Bigger Benefits for Those Who Need It Most
The trials varied widely in design and duration, from single-dose studies measuring immediate effects to longer trials lasting up to 26 weeks. Participants ranged from healthy volunteers to people with existing cardiovascular disease, diabetes, or hypertension. This diversity actually strengthens the findings, showing that benefits appear across different populations and health conditions.
Breaking down the effective doses, the average intervention delivered 586 milligrams of total flavan-3-ols daily. Dark chocolate studies typically used about 56 grams (roughly two ounces) of chocolate with 75% cocoa content. Tea studies provided around 700 milliliters (about three cups) daily. Apple studies used about 340 grams, roughly two medium apples.
Researchers found that blood pressure benefits were most pronounced in people who needed them most. Those with normal blood pressure saw minimal changes, but people with elevated readings or diagnosed hypertension experienced significant drops. This mirrors how prescription blood pressure medications work; they’re most effective in people with higher baseline readings.
They also examined isolated compounds versus whole foods. Studies using purified epicatechin (the main flavan-3-ol in cocoa) or EGCG (common in green tea) showed weaker effects than studies using actual chocolate or tea. This suggests that the benefits come from the complex interplay of compounds in whole foods, not just single active ingredients.
Flow-mediated dilation improved by about 2% on average. This might sound small, but research has shown that every 1% improvement in this measure corresponds to a 10% reduction in cardiovascular disease risk.
The E-BAR could prevent injuries that could land seniors in assisted living facilities. (Dragana Gordic/Shutterstock)
Over 11,000 Americans turn 65 each day, and with each passing year comes more risk for one of the most dangerous threats of aging: falls. But have no fear! Researchers from the Massachusetts Institute of Technology (MIT) have designed a new robot, aptly named E-BAR (Elderly Bodily Assistance Robot), that can lift a person from the floor, help with daily movements, and catch someone during a fall, all without requiring them to wear any special harness or device.
Ninety-two percent of elderly Americans strongly prefer to age in their homes rather than move to assisted living facilities, but falls remain the leading cause of injury for those over 65. In fact, approximately one in four elderly Americans—nearly 14 million people—falls each year.
This research, set to be presented at the IEEE Conference on Robotics and Automation (ICRA), provides new safety technology for seniors who are afraid of falling but don’t want to be put in a nursing home or assisted living facility.
“Many older adults underestimate the risk of falling and refuse to use physical aids, which are cumbersome, while others overestimate the risk and may not exercise, leading to declining mobility,” says study author Harry Asada from MIT, in a statement.
Most elderly individuals resist wearing restrictive devices despite needing assistance, and switching between different assistive technologies can be hazardous for older adults trying to manage independently.
The Fall Prevention Robot
What separates E-BAR from existing eldercare robots is its unique combination of features in a single system. While other robots might help with standing or walking or provide fall detection, E-BAR does it all with a surprisingly slim profile—just 38 centimeters at its narrowest point. This allows it to navigate through typical home doorways and around furniture.
The robot functions like an invisible safety net positioned behind the user. Its U-shaped fork can provide support under the arms or at the forearms, enabling activities that many elderly people avoid due to fear of falling, such as bending to pick up objects or getting in and out of bathtubs.
When the team tested the robot with elderly participants and caregivers, they found that many older adults have enough muscle strength for daily activities but lack confidence in their balance. E-BAR targets exactly this population, the approximately 24% of Americans over 65 who have significant muscle strength but require assistive devices, and the 28% of those over 75 who show increased fall risk in clinical assessments.
Making Independent Living Safer
Unlike most support robots that require the user to stand within the robot’s base footprint, E-BAR can extend its support arm across gaps like bathtub edges while maintaining stability. This was achieved through computational optimization that balanced competing priorities like footprint size, reach distance, and stability.
To create the lifting mechanism, the MIT team developed an innovative 18-bar linkage system that follows the natural trajectory of human movement when transitioning from sitting to standing. This mechanism provides maximum mechanical advantage at key points where users need the most help, mimicking how a caregiver might assist someone.
The E-BAR includes four rapidly inflating airbags that can deploy in less than 250 milliseconds to catch someone during a fall. The system monitors for balance issues and can predict a fall before descent begins, giving the airbags enough time to inflate and safely cushion the user.
Should you ditch your nine-to-five to pursue your passion? (Studio Romantic/Shutterstock)
Trading the 9-to-5 for fulfillment still comes with a cost
When was the last time you felt truly alive at work? For a small but growing segment of the workforce, that feeling isn’t the exception; it’s their entire career strategy. New research reveals how some people are rejecting cubicle life entirely, transforming recreational passions into full-time work in a quest for deeper meaning. But the reality isn’t all sunshine and rainbows.
A new international study published in the International Journal of Research in Marketing explores these bold souls who transform their passions into full-time careers in search of fulfillment. The researchers call this phenomenon “eudaimonic consumption careers” (ECCs), where people chase lasting happiness not through material possessions but by making their passion for extraordinary experiences their actual job.
“These workers are after feelings of accomplishment, a life of virtue and greater meaning in life,” says study author Marian Makkar from the Royal Melbourne Institute of Technology, in a statement. “Happiness can be fleeting and short-lived, but hard work and setting big goals and developing skills to get there is what can bring long-term life satisfaction and fulfillment.”
The authors point out that little research exists about individuals who make a permanent escape into such passion-based career paths. Through an extensive 10.5-year ethnographic study following snowsports instructors across Canada and New Zealand, researchers uncovered what drives people to abandon traditional career paths, the challenges they face in sustaining these alternative lifestyles, and what ultimately causes some to return to conventional work.
Escaping the 9-to-5 Trap
For many caught in unfulfilling office jobs, reading this study might spark recognition of a deeper yearning—and perhaps offer a roadmap for those brave enough to consider jumping ship.
Imagine waking up every morning genuinely excited about your workday. That’s the reality for snowsports instructors who travel hemisphere to hemisphere chasing endless winter. Rather than seeing their passion as just a weekend escape, they’ve built entire lives around it.
One participant in the study said: “I remember at university my first management lecturer said, ‘you could go on to be a CEO, be on $300,000 a year and have a month off every year to go skiing,’ and I said, ‘or I could go skiing every day and still afford to eat and pay my rent. It’s all I really need, isn’t it?’”
The researchers discovered that people typically embark on these unconventional careers for two main reasons: escaping from something (the drudgery of conventional work) and escaping to something (a life centered around passion and meaning).
Lars, another participant, described snowboarding as an escape route from sedentary modern life. He emphasized the visceral, embodied experience that allows people to release tensions in ways that aren’t available in everyday routines.
Many participants explicitly rejected the conventional life script of acquiring a mortgage, accumulating possessions, and climbing corporate ladders. Lars explained how accepting a traditional job means accepting extremely limited freedom—perhaps twenty days off annually for forty years—and how responsibilities like mortgages, houses, and children create a trap that’s difficult to escape.
Finding Meaning Through Mastery and Community
But these alternative careers aren’t just about avoiding responsibility; they involve a different kind of commitment. The researchers identified a transition that happens when people move from merely enjoying an activity to pursuing mastery and meaning through it.
John, one of the snowboarders studied, described how the sport demands complete focus at high speeds, but with increasing skill, one develops the ability to process multiple elements within short timeframes. This heightened awareness creates a sensation of expanded time during brief moments.
This level of presence and skill development represents what the researchers call a “eudaimonic transition,” where pleasure comes not just from temporary thrills but from accomplishment, personal growth, and skilled performance.
“We heard stories of financial, mental, and physical sacrifice, but overwhelmingly, participants reported experiencing significant personal growth and fulfillment,” says Makkar.
However, sustaining these alternative careers comes with significant challenges. The researchers identified several “career demands” that participants struggled with, including:
Chronotopic mobility – Being forced to constantly relocate following seasonal work, which strains relationships and creates instability.
Compensatory prosumption – Having to balance teaching others (to earn money) with pursuing their own enjoyment of the activity.
Economic disincentives – Dealing with low pay, precarious employment, and financial insecurity.
Many participants described a community that helps offset these challenges: a network of like-minded individuals who share resources and provide social support. One instructor explained the contrast between normal social boundaries and the snowsports community, noting how someone would never approach a stranger in a pub asking for accommodation, but in ski communities, meeting someone on a chairlift could naturally lead to offers of housing.
Why Some Return to Conventional Jobs
Despite this community support, the researchers found that many people eventually exit these passion-fueled careers due to what they term “disintegrative triggers.” Financial pressures, the strain of constant mobility (what they call the “tyranny of liquidity”), and reaching a plateau in skill development all contributed to decisions to return to more conventional work.
Beth, who left instructing after several years, described the precarious employment situation in which instructors are considered disposable. She mentioned contractual issues, such as employment being promised starting in December, but actual work and payment not materializing until late January, forcing people to deplete their savings during waiting periods.
What happens after people leave these alternative careers varies. Some experience what the researchers call “experiential extinction,” where the passion that once drove them disappears entirely. Others find ways to maintain connection through what’s termed “experiential migration,” transferring their pursuit of meaning to other activities or finding ways to continue part-time.
Rather than seeing work and pleasure as opposites, these eudaimonic consumption careers represent an attempt to integrate them, to make one’s passion the center of both economic and personal life.
“For employees, there’s never been a better time to demand flexibility or consider dumping nine-to-five roles for careers that are more meaningful,” says Makkar.
Traditional careers offer stability while eudaimonic ones offer meaning, but both exact their own unique costs. One path isn’t better than another; understanding both allows us to navigate our own choices more clearly. What would you be willing to sacrifice for meaning? What comforts could you surrender for freedom? These aren’t just questions for ski bums—they’re questions for anyone who’s ever felt trapped by success.
For the first time, scientists have pitted the two most popular weight loss drugs against each other in a direct competition – and one clearly dominates. A new study led by Weill Cornell Medicine researchers reveals that tirzepatide (Mounjaro, Zepbound) helps people lose significantly more weight than semaglutide (Wegovy).
The difference is striking: people taking tirzepatide lost 20.2% of their body weight over 72 weeks, while those on semaglutide dropped 13.7%. For a 250-pound person, that translates to about 50 pounds with tirzepatide versus 34 pounds with semaglutide – a 16-pound gap that could dramatically impact quality of life.
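As a quick sanity check of those figures, here is a minimal sketch of the arithmetic; the 250-pound starting weight and both percentages come from the article, and the script itself is purely illustrative:

```python
# Illustrative only: applies the reported weight-loss percentages to the
# article's example 250 lb starting weight.
start_weight_lb = 250

tirzepatide_loss = start_weight_lb * 0.202  # 20.2% -> ~50.5 lb
semaglutide_loss = start_weight_lb * 0.137  # 13.7% -> ~34.3 lb

print(f"Tirzepatide: ~{tirzepatide_loss:.0f} lb")
print(f"Semaglutide: ~{semaglutide_loss:.0f} lb")
print(f"Gap:         ~{tirzepatide_loss - semaglutide_loss:.0f} lb")  # ~16 lb
```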
Results of the study are published in the New England Journal of Medicine.
Tirzepatide vs. Semaglutide
Both medications work by mimicking hormones that our bodies naturally produce, but they target different receptors. Semaglutide activates only one hormone pathway – glucagon-like peptide-1 (GLP-1) – which helps control appetite and slows digestion. Tirzepatide hits that same GLP-1 target but also activates a second hormone called GIP. This dual-action approach appears to be more effective for weight loss.
The SURMOUNT-5 trial, led by Dr. Louis Aronne from Weill Cornell Medicine, recruited 751 adults with obesity but without diabetes. Half received weekly tirzepatide injections, while the other half got weekly semaglutide shots. Both medications required gradual dose increases to reach their maximum strength.
Beyond the Scale: Health Benefits and Waistline Reduction
The benefits extended beyond just weight. Tirzepatide users saw their waistlines shrink by 7.2 inches on average, compared to 5.1 inches with semaglutide. This difference matters because abdominal fat surrounds vital organs and increases the risk of heart disease, diabetes, and other conditions.
As weight dropped, health markers improved with both medications. Blood pressure, blood sugar, and cholesterol levels all showed improvements, with tirzepatide generally showing a stronger effect. Participants who lost more weight – especially those dropping 20% or more – experienced the most significant health improvements.
Side Effects and Real-World Implications
Stomach issues topped the side effect list for both medications – nausea affected about 44% of participants in both groups, while constipation troubled roughly 28%. Interestingly, twice as many people quit taking semaglutide due to digestive problems (5.6% versus 2.7% for tirzepatide).
Demand for weight loss medications continues to climb, with both manufacturers struggling to meet demand amid ongoing shortages. Insurance coverage remains inconsistent, forcing many patients to pay over $1,000 monthly out-of-pocket for either medication.
With the CDC estimating that more than 35% of American adults are living with obesity – which increases risks for heart disease, stroke, diabetes, and certain cancers – these medications represent a major advance in treatment options. However, they work best alongside lifestyle changes like healthier eating and regular physical activity.
As remarkable as these results are, they come with limitations. The trial was open-label, meaning participants knew which medication they were receiving. And though the medications were tested for 72 weeks, obesity is a chronic condition requiring long-term management. Questions remain about what happens when people stop taking these medications, with some studies suggesting significant weight regain.
For now, tirzepatide has claimed the weight loss crown. The outstanding question is whether its superior results will justify what will likely be a higher price tag – and whether insurance companies will agree to cover it.
The news made for an alarming headline recently: Research showed that common chemicals in plastics were associated with 350,000 heart disease deaths across the world in 2018.
The statistic came from a study published in the journal eBioMedicine. The authors, a group of researchers at New York University’s Grossman School of Medicine, estimated that roughly 13 per cent of cardiovascular deaths among 55- to 64-year-olds worldwide that year could be attributed to phthalates, which are used in food packaging, shampoo, toys and more.
Research on the effect of phthalates on cardiovascular disease is still emerging, but their link to metabolic risk factors like obesity suggests they could play a role in heart disease.
While experts agree that phthalates are harmful, they cautioned that the study relied on complex statistical modelling and a series of assumptions and estimates that make it difficult to determine how many deaths might be linked to the chemicals.
“This is an early step of trying to understand the magnitude of the problem,” said Dr Mark Huffman, a cardiologist and a co-director of the global health centre at WashU Medicine in St Louis. But, he added, there’s a need for far more studies to understand the relationship between phthalates and heart health, and what other factors might come into play.
THE BACKGROUND
Phthalates are found in personal care products like shampoos and lotions, as well as in food containers and packaging. It’s possible to ingest them through food, absorb them through the skin from products containing them or breathe them in as dust.
Studies have shown that phthalates are endocrine disrupters, meaning that they can interfere with our hormones. They have been associated with negative effects on reproductive health, pregnancy and birth issues.
Some studies have shown an association between phthalates and cardiovascular disease, but there isn’t strong evidence to show that the chemicals directly cause heart issues, said Sung Kyun Park, a professor of epidemiology and environmental sciences at the University of Michigan School of Public Health.
There is evidence that phthalates increase the risk of metabolic disorders like obesity and Type 2 diabetes, which can cause cardiovascular disease.
One way phthalates may do this is by increasing oxidative stress – cell and tissue damage that happens when there are too many unstable molecules in the body – and by promoting inflammation, said Dr Leonardo Trasande, senior author of the new paper and a professor of paediatrics and population health at NYU.
THE RESEARCH
In the latest study, researchers attempted to quantify global cardiovascular deaths attributable specifically to one type of phthalate, known as DEHP. One of the most widely used and studied phthalates, DEHP is found in vinyl products including tablecloths, shower curtains and flooring.
The researchers relied on estimates from previous research for several measures: Phthalate exposures, the risk of such exposures on cardiovascular deaths and the global burden of cardiovascular disease. They then calculated the share of deaths attributable to phthalate exposures in different countries, Dr Trasande said. The Middle East, South Asia, East Asia and the Pacific accounted for nearly three-quarters of these deaths.
THE LIMITATIONS
This was an observational study that showed a correlation between estimated exposure to the chemical and disease at the population level. Experts said that the methods used were not unusual for studies that model global disease, but that such studies come with caveats.
For example, Dr Huffman said, the estimates from the literature that the authors relied on in their calculations may themselves have incorporated some bias or confounding variables, such as socioeconomic status or dietary behaviours, that could relate both to plastic exposure and to cardiovascular disease rates.
“A pretty important part of the result of the model is what you put into the model,” Dr Huffman said.
The study also relied on an earlier analysis by Dr Trasande to estimate the risk of cardiovascular death from phthalate exposure, after controlling for other known risk factors. But that paper only examined US patients, which means it might not be possible to generalise the results to a global population, where dietary habits, cigarette smoke exposure, physical activity and other cardiovascular risk factors may vary.
WHAT’S NEXT
What’s clear from the study, experts said, is that we need more research on phthalate exposure and the associated health risks. Though a classic randomised trial – in which one group of people is deliberately exposed to phthalates, another is not, and both are followed for many years – would be unethical and impractical, other types of studies could help more clearly establish a link.
One way, Dr Park said, would be to have researchers recruit a large, representative sample of patients, measure their levels of exposure and follow them for years, perhaps until death. Dr Huffman suggested it would also be worthwhile to try out strategies that might reduce exposure levels and then measure any changes in health outcomes.
Close up of Indian “Ghost” Pipe Plant (Monotropa uniflora) white in color and growing in the Chippewa National Forest, northern Minnesota USA. (Photo by Larry Eiden on Shutterstock)
Few people outside herbalist circles had heard of ghost pipe a decade ago. These days, however, this strange white plant that lacks chlorophyll has developed an almost cult-like following online, with enthusiasts using it primarily for pain relief—despite minimal scientific understanding of its properties or safety.
From Forest Floor to Facebook Fame
Results of a survey published in Economic Botany reveal how digital media has dramatically transformed the way Americans use ghost pipe, creating what researchers call “digital ethnobotany”—the evolution of plant knowledge through online communities rather than traditional person-to-person teaching.
The Pennsylvania State University researchers write in their paper: “The internet has emerged as an important platform not only for learning and sharing ghost pipe ethnobotany, but also for developing new traditions and practices.” This represents one of the first comprehensive studies on contemporary ghost pipe use.
Their findings show a clear disconnect between historical uses and modern applications. While indigenous cultures and 19th-century physicians primarily used ghost pipe for eye inflammation, nervous system regulation, and treating convulsions, today’s users overwhelmingly consume it as a tincture for pain management—a practice popularized by just a handful of influential blog posts.
New Uses for an Old Plant
The research team surveyed 489 ghost pipe users across 42 U.S. states to create an unprecedented snapshot of this growing herbalist movement. A striking 84% of respondents reported using ghost pipe for pain relief, with older adults most likely to use it regularly for this purpose. Mental health applications followed, with 71% using it for anxiety, 68% for relaxation, and 60% for “psycho-spiritual wellbeing.”
The preparation method has also changed dramatically. While historical records mention consuming dried powder or applying fresh juice, modern users overwhelmingly (92%) prepare ghost pipe as an alcohol tincture—a method prominently featured in those same influential blog posts that sparked the plant’s popularity surge around 2010.
Google search data backs up the digital-driven nature of this phenomenon. Search interest for ghost pipe increased dramatically between 2010 and 2023, with peak interest in 2023 coinciding with rising popularity on social media platforms. When asked how they first learned about ghost pipe, the largest segment of respondents (30%) cited social media or the internet as their primary source.
Ghost Pipe Safety Concerns and Conservation
Half of respondents expressed worry about the lack of scientific information regarding ghost pipe’s chemical compounds and effects. This concern appears warranted, as the only published paper on ghost pipe’s chemistry (from 1889) suggested it might contain toxic compounds—findings that have not been verified in the intervening 130+ years.
Despite these concerns, only four respondents reported experiencing negative effects after consuming ghost pipe. This indicates either relative safety or, more likely, that the typical consumption pattern (infrequent use of small tincture doses) limits exposure to potentially harmful compounds.
Conservation emerged as another key concern, with 45% of respondents worried about the plant’s sustainability. This has influenced harvesting practices, with 79% of foragers implementing some form of stewardship. Most commonly, they limit their harvest quantity (90%) and harvest only aerial portions of the plant (53%), leaving the roots intact—another practice popularized online that differs from historical uses.
The ghost pipe phenomenon demonstrates both the power and potential danger of “digital ethnobotany.” While online communities have revived interest in forgotten medicinal plants and created new sustainable harvesting practices, they’ve also popularized untested applications with minimal scientific validation.
That said, before trying any new remedies or altering your current pain relief regimen, you should always talk to your doctor first.
Your body might be 35, but your heart could be pumping like it’s 80. A team of researchers from the UK, Singapore, and Spain discovered people with conditions like obesity, diabetes, and high blood pressure have hearts functioning as if they’re nearly five years older than their actual age. For those with severe obesity, the numbers are shocking: their hearts may function as if they’re 45 years older than their chronological age.
The Age Gap That Could Save Your Life
Led by Dr. Pankaj Garg from the University of East Anglia’s Norwich Medical School, the international team analyzed cardiac MRI scans from 563 people across five global centers. They compared 191 healthy individuals against 366 participants with cardiovascular risk factors.
They found that the left atrium—the chamber receiving oxygen-rich blood from the lungs—showed the most consistent age-related changes. This chamber naturally grows larger and less efficient as people age, but these changes accelerate dramatically in people with heart problems.
While healthy people’s heart age matched their chronological age, those with risk factors showed hearts functioning almost five years older on average.
Breaking down the data by degree of obesity revealed an even more concerning picture:
People with mild obesity (BMI 30-34.9) had hearts about four years older than their actual age
Those with moderate obesity (BMI 35-39.9) showed hearts roughly five years older
People with severe obesity (BMI over 40) had hearts functioning 45 years beyond their chronological age
Diabetes showed an unusual pattern, with its greatest impact in middle age—diabetics in their 40s had hearts functioning up to 56 years older than healthy peers. High blood pressure consistently aged hearts prematurely until around age 70, while people with atrial fibrillation showed significantly older heart ages across all age groups.
A New Way to Understand Heart Health
Instead of overwhelming patients with complex statistics, doctors can now deliver a simple message: “Your heart is working like someone much older than you.”
“By demonstrating the discrepancy in cardiac and functional age, CMR-derived functional heart age might highlight patients in need of risk factor interventions and help simplify concepts for patients,” the researchers wrote in their paper.
This approach offers a powerful conceptual tool. Rather than decoding complex risk factors, patients can grasp one straightforward fact: their heart is functioning like that of someone decades older.
“People with health issues like diabetes or obesity often have hearts that are aging faster than they should – sometimes by decades. So, this could help doctors step in early to stop heart disease in its tracks,” Dr. Garg explains in a statement. “This is a game-changer for keeping hearts healthier, longer.”
What This Means for Medical Standards
The research, published in the European Heart Journal Open, also uncovered fascinating normal changes in heart function throughout aging. The pumping efficiency of the left ventricle—your heart’s main chamber—actually increases with age in healthy individuals, but this beneficial adaptation gets disrupted in people with heart problems.
Current medical standards might need revision to account for these normal age-related changes. The paper notes that “Many accepted measures of cardiac function, including ejection fraction, may require age adjustment to factor in reduced compliance of the heart during healthy aging.”
“Heart disease is one of the world’s biggest killers. Our new MRI method gives doctors a powerful tool to look inside the heart like never before and spot trouble early – before symptoms even start,” says Garg. “By knowing your heart’s true age, patients could get advice or treatments to slow down the aging process, potentially preventing heart attacks or strokes. It could also be the wake-up call people need to take better care of themselves, whether that’s eating healthier, exercising more, or following their doctor’s advice. It’s about giving people a fighting chance against heart disease.”
For the average person, the takeaway is clear: cardiovascular risk factors don’t just increase your chance of heart attack or stroke—they’re actively aging your heart beyond your years. This new way of measuring heart age could motivate more people to take their cardiovascular health seriously through medication, diet, and exercise before serious problems develop.
Many Americans would be reluctant to receive a bird flu vaccination. (Melnikov Dmitriy/Shutterstock)
Warnings about bird flu transmission have been popping up in headlines across the globe. Public health concerns over bird flu, however, could face serious roadblocks as most Americans remain either unwilling or uncertain about taking protective measures against the spreading virus.
A startling 61.4% of Americans either would refuse or are unsure about taking a vaccine for highly pathogenic avian influenza (HPAI), even if recommended by the Centers for Disease Control and Prevention (CDC). Similarly, 55.6% would resist or question changing their dietary habits despite potential risks from consuming certain animal products.
These findings come from a national survey of 10,000 Americans conducted in August 2024, as cases of bird flu continue to emerge across the United States. The study, published in the American Journal of Public Health, reveals concerning gaps in public awareness and willingness to take preventive action against this growing health threat.
The researchers note that the gap between public health experts’ warnings and the public’s understanding of the issue jeopardizes the country’s ability to manage and prevent the further spread of the virus effectively.
Public Awareness Gap
While 64.4% of survey respondents had heard of bird flu, only about a quarter (26.1%) understood that it could spread to humans, and even fewer (18.8%) were aware that a subtype (the H5N1 virus) had been detected in cattle.
By the time of the survey, 66 human cases had been confirmed in the United States in 2024, with most occurring among people with occupational exposure to sick animals. However, in September 2024, the first case without known occupational exposure was diagnosed in Missouri, suggesting potential unknown transmission pathways.
Many Americans also demonstrated limited understanding of food safety practices that could reduce infection risk. Less than 54% knew that pasteurized milk is safer than raw milk, despite recent recalls of H5N1-contaminated raw milk in California. Only about 71% understood that cooking meat at high temperatures could eliminate harmful viruses like H5N1.
Political Divide Deepens Response
Political affiliation influenced attitudes toward preventive measures. Democrats were significantly more likely to accept both vaccination and dietary changes compared to Republicans and Independents.
When controlling for sociodemographic factors, the study found Republicans were 3.5 times more likely to resist vaccination and 1.56 times more likely to resist dietary changes compared to Democrats. Independents showed similar patterns, being 2.69 times more likely to resist vaccination and 1.43 times more likely to resist dietary changes than Democrats.
Researchers suggest these political divisions reflect broader societal trends where skepticism of government recommendations runs deeper among conservative voters.
Location also played a significant role in willingness to follow public health guidance. Rural residents, who ironically may have higher exposure risk due to proximity to agriculture and livestock, showed greater resistance to both vaccination and dietary changes than urban dwellers.
Only 33.9% of rural Americans indicated they would accept a CDC-recommended vaccine, compared to 39.8% of urban residents. Similarly, just 38.9% of rural respondents would change dietary habits based on recommendations, versus 45.8% of urban dwellers.
Trust in government health agencies emerged as another critical factor. The study revealed limited trust in the CDC, Food and Drug Administration, and U.S. Department of Agriculture among nearly 40% of respondents.
When people don’t trust experts, they’re more likely to ignore their advice and resist taking necessary steps to protect their health, even if those actions are meant to prevent the spread of dangerous diseases like H5N1.
Pandemic Fatigue Complicates Response
After years of COVID-19 messaging and restrictions, many Americans appear exhausted by public health campaigns. The study found 38% of respondents reported feeling “mentally and emotionally drained by the constant cycle of health warnings and changes to daily life.”
This pandemic fatigue may further complicate efforts to engage the public in preventive measures against bird flu, even as the threat grows more serious.
Managing the current avian influenza threat and preparing for future infectious disease complications requires urgent action. Study researchers recommend clear, targeted messaging that engages local leaders and academic partners already working in communities.
In order to avoid a major public health crisis, the United States needs to bolster its health agencies while also educating and engaging the public about the dangers of consuming unpasteurized milk and the importance of vaccination. Without addressing the underlying issues of political polarization, rural-urban divides, and eroded trust in health institutions, efforts to control this emerging threat may fall short.
The study analysed more than 8,000 cases of endometriosis, a condition that causes severe pain and affects about 1.5 million women in the UK
Women with endometriosis are at a significantly higher risk for developing a range of autoimmune diseases, new research has shown.
The new study, involving researchers from the University of Oxford, has identified a significant genetic link between endometriosis and conditions such as rheumatoid arthritis, coeliac disease and multiple sclerosis.
Women with endometriosis were found to have a 30-80% increased risk of developing autoimmune diseases.
The research team said the new information could be used “to look for new treatment avenues that may work across these conditions”.
Endometriosis, a condition where cells similar to those in the lining of the womb grow in other parts of the body, affects about 1.5 million women in the UK.
Symptoms include severe period pain and extreme tiredness.
The study used data from the UK Biobank to analyse more than 8,000 endometriosis cases and 64,000 clinical disease cases.
The researchers examined the association between endometriosis and 31 different immune conditions.
Prof Krina Zondervan, joint senior author and head of the Nuffield Department of Women’s and Reproductive Health at the University of Oxford, said such large studies provided “valuable new insights into disease biology”.
“In this case, we have provided solid evidence of a link between endometriosis and subsequent risk of diseases such as osteoarthritis and rheumatoid arthritis, and we have shown this has a biological basis,” she said.
“This new information can now be leveraged to look for new treatment avenues that may work across these conditions.”
The team said that understanding “opens up exciting possibilities” for new therapeutic approaches, such as drug repurposing or the development of combined treatments.
The findings also suggest that women with endometriosis should be more closely monitored for the development of immunological conditions.
The research was mainly funded by Wellbeing of Women UK.
Chief executive Janet Lindsay said it was “an important step” in building a more accurate understanding of endometriosis.
Crisps, croissants, doughnuts, muffins, sweets and hot dogs all count as ultra-processed food
People who eat lots of ultra-processed foods (UPF) may be at greater risk of dying early, a study in eight countries including the UK and the US suggests.
Processed meats, biscuits, fizzy drinks, ice cream and some breakfast cereals are examples of UPF, which are becoming increasingly common in diets worldwide.
UPFs tend to contain more than five ingredients, which are not usually found in home cooking, such as additives, sweeteners and chemicals to improve the food’s texture or appearance.
Some experts say it’s not known why UPFs are linked to poor health – there is little evidence it’s down to the processing itself, and it could simply be because these foods contain high levels of fat, salt and sugar.
‘Artificial ingredients’
The researchers behind the study, published in the American Journal of Preventive Medicine, looked at previous research to estimate the impact of ultra-processed food intake on mortality.
The study cannot definitively prove that UPFs caused any premature deaths.
This is because the amount of ultra-processed foods in someone’s diet is also linked to their overall diet, exercise levels, wider lifestyle and wealth, which can all also affect health.
The studies looked at surveys of people’s diets and at data on deaths from eight countries – Australia, Brazil, Canada, Chile, Colombia, Mexico, UK and US.
The report estimates that in the UK and the US, where UPFs account for more than half of calorie intake, 14% of early deaths could be linked to the harms they cause.
In countries such as Colombia and Brazil, where UPF intake is much lower (less than 20% of calorie intake), the study estimated these foods are linked to around 4% of premature deaths.
Lead study author Dr Eduardo Nilson, from Brazil, said UPFs affected health “because of the changes in the foods during industrial processing and the use of artificial ingredients, including colorants, artificial flavours and sweeteners, emulsifiers, and many other additives and processing aids”.
By their calculations, in the US in 2018, there were 124,000 premature deaths due to the consumption of ultra-processed food. In the UK, nearly 18,000.
The study says governments should update their dietary advice to urge people to cut back on these foods.
But the UK government’s expert panel on nutrition recently said there wasn’t any strong evidence of a link between the way food is processed and poor health.
What is ultra-processed food?
There is no one definition that everyone agrees on, but the NOVA classification is often used. Examples include:
cakes, pastries and biscuits
crisps
supermarket bread
sausages, burgers, hot dogs
instant soups, noodles and desserts
chicken nuggets
fish fingers
fruit yoghurts and fruit drinks
margarines and spreads
baby formula
Still questions to answer
The numbers in the study are based on modelling the impact of ultra-processed foods on people’s health.
Prof Kevin McConway, emeritus professor of applied statistics, Open University, said the study makes lots of mathematical assumptions which make him cautious about what the findings mean.
“It’s still far from clear whether consumption of just any UPF at all is bad for health, or what aspect of UPFs might be involved.
“This all means that it’s impossible for any one study to be sure whether differences in mortality between people who consume different UPF amounts are actually caused by differences in their UPF consumption.
“You still can’t be sure from any study of this kind exactly what’s causing what.”
Dr Nerys Astbury, an expert in diet and obesity at the University of Oxford, also agrees there are limitations to the research.
It’s been known for some time that diets high in energy, fat and sugar can increase the risk of diseases, such as type 2 diabetes, obesity, heart conditions and some cancers, which can lead to premature death.
“Many UPF tend to be high in these nutrients,” she says, adding that studies to date haven’t been able to prove that the effects of UPFs are due to anything more than “diets high in foods which are energy dense and contain large amounts of fat and sugar”.
This type of research cannot prove that consumption of ultra-processed foods is harmful, says Dr Stephen Burgess at Cambridge University.
How physically fit someone is may be the main cause of poor health instead. But when numerous studies across many countries and cultures suggest UPFs could be a risk to health, Dr Burgess says “ultra-processed foods may be more than a bystander”.
Vulcanidris cratensis fossil (Credit: Lepeco et al / Current Biology)
In the ancient landscapes of what is now northeastern Brazil, 113 million years ago, an unusual ant with bizarre upward-pointing jaws died and became entombed in limestone. This single fossil has just rewritten the timeline of ant evolution, pushing back their confirmed history by 13 million years.
Scientists have unearthed what they’re calling the oldest undisputed ant fossil ever discovered. The specimen belongs to an extinct group called “hell ants” — prehistoric predators sporting scythe-like mandibles that pointed skyward rather than forward like modern ants.
“This finding represents the earliest undisputed ant known to science,” states the research team in their paper published in Current Biology.
Bizarre Hunting Adaptations and Global Spread
The fossil, named Vulcanidris cratensis, shows these insects had already achieved a global presence much earlier than previously thought.
Before this finding, the oldest known ant fossils came from amber deposits in France and Myanmar dating back about 100 million years. This Brazilian specimen predates them significantly, suggesting ants were already widespread during the Early Cretaceous period when dinosaurs dominated the landscape.
Using advanced imaging technology called micro-CT scans (similar to a miniaturized version of medical CT scans), researchers examined the fossil’s anatomy in detail. The scans revealed jaws with tips that could contact the front plate of the head, forming a cavity likely used to trap prey.
The research team, led by Anderson Lepeco from the Museu de Zoologia da Universidade de São Paulo, found that this specimen was closely related to hell ant species previously discovered in Burmese amber. This connection reinforces the idea that these ancient ants could spread between continents during the Cretaceous period.
Thriving in Diverse Environments
Unlike previous hell ant fossils found in ancient amber formed in humid, forested regions, this Brazilian specimen lived in a drastically different habitat. The Crato Formation where it was discovered points to a seasonal, semi-arid environment — essentially a shallow wetland within a broader dry landscape.
This environmental flexibility reveals these specialized predatory ants could thrive across dramatically different habitats worldwide.
According to the research paper, “Hell ants exhibited a wide ecological range, occurring in remarkably different environments throughout the globe.”
Evolutionary Dead End
Modern ants are ecological powerhouses, with over 14,000 known species shaping ecosystems globally. However, their rise to dominance wasn’t immediate. Fossil evidence indicates early ant groups like the hell ants were diverse during the Cretaceous, but the major ant lineages we know today evolved later and exploded in diversity after the extinction event that killed the dinosaurs.
The new fossil helps bridge the gap between ants and their wasp ancestors, showing how early these insects established themselves as key players in terrestrial ecosystems. Despite their early success, hell ants ultimately disappeared without leaving any descendants. Like many specialized Cretaceous animals, they vanished during the mass extinction at the end of the period.
This remarkable fossil doesn’t just push back the timeline of ant evolution — it offers a glimpse into a lost world where strange, scythe-jawed insects hunted across ancient landscapes long before humans would ever notice the tiny creatures building colonies in their gardens.
Suffering from a painful reaction after eating a hamburger might be more than just bad luck – where you live could be putting you at risk. New research shows that specific landscape features around your home may increase your chances of developing Alpha-gal Syndrome (AGS), an increasingly common allergy that makes eating beef, pork and other mammal meats dangerous. It’s believed most cases are the result of a bite from lone star ticks.
People living in neighborhoods with parks, scattered trees, and forest fragments face much higher odds of acquiring this red meat allergy, according to research published in PLOS Climate. The study mapped cases across North Carolina, South Carolina, and Virginia, revealing clear patterns in who’s most at risk.
What Is Alpha-gal Syndrome?
AGS cases have exploded in recent years – from just 24 documented cases in 2009 to more than 34,000 by 2019. People with this syndrome experience allergic reactions to a sugar molecule called alpha-gal found in mammalian meat. Symptoms range from itchy hives and stomach problems to joint pain and potentially dangerous anaphylaxis.
Many doctors still don’t recognize this condition. A survey found 77% of healthcare providers were either unaware of Alpha-gal Syndrome or lacked confidence in diagnosing it, suggesting thousands of cases may go undetected.
The Landscape-Red Meat Allergy Connection
Researchers from the University of North Carolina at Chapel Hill analyzed 462 confirmed AGS cases, comparing patients’ home locations with environmental features. They used advanced computer modeling to identify which landscape characteristics were most strongly associated with the allergy.
The results were clear: people living in areas with more “open space development” had a substantially higher risk. Mixed forest landscapes also boosted risk. Conversely, population density showed a negative relationship – meaning fewer cases in densely populated urban areas.
“These results show that AGS is connected to fragmented habitats, which lone star ticks prefer,” the researchers explain. Lone star ticks are the primary carriers that transmit this condition when they bite humans.
Where Risk Is Highest
The research team created detailed risk maps showing that AGS risk is highest in western mountainous regions, decreases through the central piedmont, and reaches its lowest levels along the coast.
This geographical pattern matches the known habitat range of the lone star tick, reinforcing evidence that these ticks cause Alpha-gal syndrome. When looking at total case numbers rather than individual risk, major cities showed the highest concentrations simply because more people live there, even though individual risk is lower in urban centers.
Protecting Yourself
As suburbs expand into previously natural areas, residents increasingly encounter tick habitats, creating more opportunities for lone star tick bites that trigger AGS.
For residents concerned about AGS risk, simple protection measures can help when spending time outdoors, especially in areas where neighborhoods meet forests:
Use tick repellent when outdoors
Check for ticks after outdoor activities
Promptly remove any attached ticks
Be aware that risk is higher in areas with fragmented forests and suburban parks
Did ancient sunscreen and sewing needles save humanity?
Earth’s Northern Lights typically dance near the poles, but 41,000 years ago, they lit up skies over North Africa and Australia. New research reveals how dramatically Earth’s magnetic field weakened and shifted during an event called the Laschamps geomagnetic excursion, potentially influencing human evolution at a pivotal moment in our history.
A Magnetic Field in Crisis
During the Laschamps excursion, Earth’s magnetic field weakened to just 10% of its current strength, while the magnetic poles shifted dramatically away from the geographic poles.
“The excursion lasted ~2000 years,” write the researchers, led by Agnit Mukhopadhyay of the University of Michigan, in their Science Advances paper.
Using advanced computer modeling, the research team reconstructed Earth’s magnetosphere during five key periods of the excursion. At its peak around 40,977 years ago, Earth’s protective magnetic bubble shrank dramatically – from its normal extent of 8-11 Earth radii (51,000-70,000 km) to just 2.43 Earth radii (15,498 km).
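Those kilometre figures follow directly from the Earth-radius values. Here is a minimal sketch of the conversion, assuming an equatorial radius of about 6,378 km (a standard constant not stated in the article):

```python
# Convert magnetosphere extent from Earth radii to kilometres.
# EARTH_RADIUS_KM is an assumed standard constant; the radii values
# (8, 11 and 2.43) are the ones reported from the study above.
EARTH_RADIUS_KM = 6_378

for radii in (8, 11, 2.43):
    print(f"{radii} Earth radii ~ {radii * EARTH_RADIUS_KM:,.0f} km")
# 8    -> ~51,000 km
# 11   -> ~70,200 km
# 2.43 -> ~15,500 km
```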
The magnetic field also transformed from a simple north-south configuration to a complex arrangement with multiple magnetic poles scattered around the planet.
“In the study, we combined all of the regions where the magnetic field would not have been connected, allowing cosmic radiation, or any kind of energetic particles from the sun, to seep all the way in to the ground,” said Mukhopadhyay.
Auroras Everywhere
These magnetic changes had astonishing effects on the auroras. The Northern Lights traveled from their usual Arctic home through Western Eurasia into Northern Africa, while in the Southern Hemisphere, auroras appeared over eastern Australia and New Zealand.
The light shows also expanded enormously in size. Modern auroral ovals typically span less than 3,000 km in diameter, but during the Laschamps event, they stretched to over 8,000 km – nearly three times larger.
At the excursion’s peak, auroras likely appeared globally, creating what researchers describe as “a near-Earth space environment unparalleled in history.”
Beyond their visual spectacle, these changes exposed Earth’s surface to higher levels of cosmic radiation and energetic particles, potentially altering the atmosphere and affecting life on the surface.
Human Evolution During Magnetic Upheaval
The Laschamps excursion coincided with a critical period in human history – as modern humans (Homo sapiens) spread throughout Europe, Neanderthals disappeared, and humans created the first known cave art.
When mapping areas affected by the wandering auroras alongside archaeological evidence, researchers found compelling correlations, particularly in Western Eurasia.
“We found that many of those regions actually match pretty closely with early human activity from 41,000 years ago, specifically an increase in the use of caves and an increase in the use of prehistoric sunscreen,” Mukhopadhyay explained.
Raven Garvey, associate professor of anthropology at the University of Michigan, notes that modern humans during this period increasingly used ochre – a mineral that works effectively as a natural sunscreen – and developed more sophisticated clothing.
“There have been some experimental tests that show it has sunscreen-like properties,” Garvey said about ochre. “Its increased production and its association primarily with anatomically modern humans (during the Laschamps) is also suggestive of people’s having used it for this purpose.”
Modern humans also invented tailored clothing during this period, with archaeological sites containing not just hide scrapers but also sewing implements like needles and awls. This clothing would have allowed for greater mobility while providing coverage and potentially protection from increased radiation.
Neanderthals appear to have lacked these technologies, possibly putting them at a disadvantage during this period of increased radiation exposure.
Modern Implications
What would a similar event mean for our technology-dependent world today? The consequences would be severe.
“If such an event were to happen today, we would see a complete blackout in several different sectors,” Mukhopadhyay warned. “Our communication satellites would not work. Many of our telecommunication arrays, which are on the ground, would be severely affected by the smallest of space weather events.”
While a Laschamps-like event isn’t imminent, there are concerning trends: Earth’s geomagnetic field has been tilting in recent years and has steadily declined by 1% every two decades for the past 180 years.
This research connects Earth’s magnetic past to human evolution, showing how cosmic forces may have shaped not just our atmosphere but possibly our species’ development. The wandering auroras of 41,000 years ago remind us how dependent we remain on Earth’s invisible magnetic shield.
PEERING back 2,000 years to the age of Jesus isn’t exactly easy – but rare objects from those times solve the mystery of what his life might’ve been like.
Archaeologists have uncovered several mind-boggling artefacts from two millennia ago, including a stone linked to Jesus’ crucifixion, and even a shockingly preserved ship from the Sea of Galilee.
This 19th century artwork shows Jesus standing before Pontius Pilate. Credit: Alamy
The Pilate Stone
Pontius Pilate is an iconic part of Jesus’ story.
He served under Emperor Tiberius as the fifth governor of the then-Roman province of Judaea from AD 26 to 36.
But he’s far better known as the man who presided over Jesus’ trial – the one that resulted in the crucifixion.
A major piece of evidence for him is the Pilate stone, which is a block of carved limestone found at Caesarea Maritima, an archaeological site in Israel.
It’s damaged but has a partly-preserved inscription attributed to Pilate.
The only text legible – originally written in Latin – reads: “To the Divine Augusti [this] Tiberieum…Pontius Pilate…prefect of Judea…has dedicated [this]”.
The area was the capital of Judaea when Pilate was governor, so it’s great evidence for his time there – even if it doesn’t reference the crucifixion itself.
Pool of Siloam
The Pool of Siloam – the ancient Jerusalem pool, from the Gospel of John, where Jesus healed a blind man
The Pool of Siloam is a series of rock-cut pools that can be found southeast of the walls of the Old City of Jerusalem.
The pools, which were fed by Gihon Spring, were a major part of the story of Jesus, specifically in the Gospel of John.
Jesus is said to have healed a man who was blind from birth by sending him to the pool.
And it was also a gathering place for Jewish pilgrims in ancient times.
Sadly the pool was destroyed and covered over during the First Jewish-Roman War in the year 70.
But the pools were formally rediscovered during sewer excavations in 2004.
The discovery was confirmed in 2005, and later excavations in 2023 revealed more areas of the pool.
Sea of Galilee Boat
This boat, which is also known as the Ancient Galilee Boat and even the Jesus Boat, was found in Israel back in the mid-1980s.
It’s an ancient fishing boat that dates to the 1st century AD, and measures 23 long.
And it’s remarkably well-preserved given that it dates back to the time of Jesus.
The bad news is that there’s no direct link between the boat and Jesus himself.
This is the sort of boat that Jesus and his disciples would’ve used at the time, which makes it important to Christians – but ultimately there’s no evidence at all that they interacted with this specific vessel.
However, it’s a great example of a relic that gives us a window into what life might have been like in Jesus’ time.
Capernaum synagogue
A key Biblical location is the Capernaum synagogue.
Jesus is said to have spent time in the synagogue, as well as healing a possessed man there.
The story is mentioned in the gospels of Mark and Luke, and several other references to the location are featured throughout the Bible.
The village of Capernaum – which is on the Sea of Galilee’s northern shore – would’ve had a population of around 1,500 at Jesus’ time, but was abandoned in the 11th century.
And it has been excavated on-and-off since 1838, including the discovery of two ancient synagogues, one built over the other.
The foundations belonging to the earlier synagogue may be the ones mentioned in the Gospels, making it an iconic part of Jesus’ story.
Caiaphas ossuary
The Caiaphas ossuary is a rare relic that was found in a burial cave in Jerusalem in the early nineties.
An ossuary is a type of chest used for storing the bones of dead humans – usually after they’ve spent time in a temporary grave.
One decorated ossuary from the haul was inscribed with “Joseph, son of Caiaphas”, and held the bones of a man, aged around 60.
Joseph ben Caiaphas was the High Priest of Israel in the first century.
And in the New Testament Gospels, he’s named as a key rival to Jesus, organising the plot to kill him – and even presiding over his trial.
A video of a bat swimming with ease in a swimming pool was shared on social media, where it went viral and left netizens with all kinds of questions. Here’s how the ‘shocked’ internet reacted…
Bat video goes viral.
Recently, a video of something quite unexpected – but completely natural – was shared on social media, leaving most netizens shocked and curious, as few had ever seen anything like it. Wondering what it was? The video showed a bat swimming, quite effortlessly, in a swimming pool. According to the viral clip, the incident unfolded back in January in Townsville, Australia. As soon as the clip was shared online, many expressed surprise at never having known that bats could swim, and the video quickly went viral. Some cracked jokes about the whole thing, while others said it was the first time they had ever watched a bat swim. At the end of the clip, a woman is seen helping the animal out of the pool.
“I have never once seen a bat swim. Wild,” the post read.
The video was posted on X (formerly Twitter) by the handle ‘AMAZlNGNATURE’. Shared just a day earlier, it had already drawn more than 2 million views.
“Fun fact of the day: bats can actually swim,” an internet user said. “I’ve never seen a bat swim,” added a second person. “Can we please call it the bat stroke?” added another person on social media.
“It’s a fruit bat. A flying fox. They have really cute faces, and they love bananas and other fruits. They’re pretty big, for bats, and they’re clumsy, and their hindquarters are very weak, so they tend to get themselves into awkward situations like this one,” added another person.
Her first panic attack came at a company-wide meeting, right before her scheduled presentation. Carolina Lasso had given many similar talks about her marketing team’s accomplishments. When her name was called this time, she couldn’t speak.
“I felt a knot in my throat,” Lasso said. “My head, it felt like it was inside a bubble. I couldn’t hear, I couldn’t see, and it felt like an eternity. It was just a few seconds, but it was so profound, and in a way earth-shattering to me.”
Lasso was struggling after a cross-country move followed by a divorce. Her boss suggested a mental health leave, a possibility she didn’t know existed. She worried whether taking time off would affect how her team viewed her or cost her a future promotion, but in the end she did.
“I’m thankful for that opportunity to take the time to heal,” Lasso, 43, said. “Many people feel guilty when they take a leave of absence when it’s mental health-related. … There is some extra weight that we carry on our shoulders, as if it had been our fault.”
Despite a fear of repercussions, more adults are recognizing that stepping back from work to deal with emotional burdens or psychological conditions that get in the way of their lives is a necessary choice, one that a growing number of employers recognize.
ComPsych Corp., a provider of employee mental health programs and absence management services, encourages its business clients to make the well-being of workers a priority before individuals get to a breaking point while also having processes in place for those who require leaves of absence.
“Since the start of the COVID-19 pandemic, collectively we’ve just been in this constant state of turmoil,” Jennifer Birdsall, the senior clinical director at ComPsych, said. “We just have had this barrage of change and uncertainty.”
Depression, anxiety and adjustment disorder, which involves excessive reactions to stress, were the top three diagnoses of employees who took mental health leaves in the past two years among clients of Alight, a Chicago-based technology company which administers leaves and benefits for large employers.
Structuring a leave
A mental health leave can last weeks or months. In some cases, workers get approval to work a reduced schedule or to take short periods of time off when needed, using an approach called “intermittent leave.”
At most U.S. organizations with 50 or more employees, people can request leaves through the Family and Medical Leave Act. The federal law entitles workers with serious health conditions to up to 12 weeks of job-protected leave, which may be paid or unpaid depending on state and local laws.
Some employers require people to use sick days or accumulated vacation days to continue receiving a paycheck while out. For longer leaves, workers can access short-term disability plans, if their employer offers one.
Lasso’s leave lasted six months, and included therapy and travel to India for additional treatment. She returned to her job but decided after a year to leave for good. She later launched a business to train people on fostering a more humane work culture.
A mental health leave is “not only OK, but it can really unlock new possibilities once we have the time to do the work — therapy, medication, whatever it is — and have enough distance from work to be able to reconnect with ourselves,” Lasso said.
Talking openly about struggles
A social stigma around mental health challenges causes many people to avoid seeking treatment or requesting a leave of absence. Newton Cheng, director of health and performance at Google, hopes to change that by sharing his own struggles.
His first self-disclosure happened during the pandemic, when a senior manager invited employees at a meeting to share how they were doing. When it was his turn, Cheng started crying.
He explained he was struggling to live up to his expectations of himself as a father and didn’t know how to turn things around.
“It was just totally horrifying to me because, one, I had just cried in front of my coworkers and I was definitely taught as a professional — and as a man — you do not do that,” Cheng recalled. “And then two, I had never really articulated and said out loud those words. I hadn’t even allowed myself to think that. But now they’re out there and I had to face them.”
Colleagues responded by relaying their own struggles, but Cheng’s difficulties continued. By February 2021, he couldn’t get out of bed because he felt paralyzed by dread, he said. A therapist said he was showing symptoms of major depression and anxiety.
“I just realized, ‘I’m struggling a lot and this goes pretty deep. I don’t think I can keep just putting duct tape on this. I probably need to take some leave,’” Cheng recalled.
Hoping his decision would benefit others, he announced to 200 people at a conference that he planned to take mental health leave. Instead of derailing the gathering as he feared, his honesty inspired fellow conference attendees to open up.
“It was like a fireworks show,” Cheng said. “They’re like, ‘Wow, I can’t believe he did that.’ Then they forgot about me. But the tone was set. It was like ’Oh, this is what we’re doing. Let me talk about what’s going on with me, too.’”
Take the time you need
While balancing classes and a full-time job during her last year of college, Rosalie Mae began struggling to get out of bed and crying uncontrollably. Yet she felt like she had “to keep it together” to avoid burdening her colleagues at the University of Utah bookstore, where Mae worked as an accounting clerk.
Then she found herself calling a suicide hotline. “Once it reached that point, I knew, especially at the urging of my husband, we need to do something more,” Mae, 24, said.
In her case, that meant taking a five-week work leave to put her own health and well-being first. She recommends the same for others who find themselves in a similar position.
“Taking a mental health leave is not necessarily a cure-all, but it is important to give yourself a break and allow yourself to regroup, make a plan of how to proceed and take the steps to work towards feeling better,” Mae said.
Telling managers and colleagues
Before broaching the subject of a mental health leave with a manager, consider the workplace culture and the strength of your professional relationships, Cheng said. He recalls saying, “For my health and well-being, and the sake of my family and what’s best for the business, the least risky thing for me to do is to go on leave soon.”
Individuals who suspect an unsympathetic reception can simply say, “I need to go on medical leave. I need time to recover,” he advised.
There’s also no legal or ethical requirement to tell everyone you work with the nature of your leave.
Ever struggled to wake up in the morning? That groggy, disoriented feeling isn’t just in your head; it’s actually called “sleep inertia,” and it can linger for up to two hours after waking, even if you’ve had a full eight hours of sleep. While many of us reach for coffee to combat this sluggishness, Japanese researchers have discovered a surprisingly effective alternative: letting natural morning light into your bedroom at precisely the right time.
A new study published in the journal Building and Environment found that using motorized curtains to control natural light exposure before waking significantly improved alertness, reduced sleepiness, and enhanced overall morning functioning. The most effective approach? Opening curtains just 20 minutes before your alarm goes off.
Researchers from Osaka Metropolitan University emphasize that bringing natural light into bedrooms provides dual benefits of energy conservation and improved physical and mental health.
Insufficient sleep has become a growing public health concern. According to a 2023 survey by the American Academy of Sleep Medicine, 64% of Americans use some form of assistance to fall or stay asleep. Poor awakening quality—characterized by sleepiness, fatigue, and reduced alertness—affects countless individuals daily, potentially impacting productivity, mood, and overall well-being.
Morning Light and Sleep Quality
The study examined how different patterns of natural light exposure affected college students’ awakening quality. Participants spent three consecutive nights in a laboratory designed to resemble a typical bedroom, with east-facing windows equipped with motorized curtains that could be programmed to open at specific times.
Researchers tested three different natural light conditions: one where curtains opened 20 minutes before waking (Intervention A), another where they opened at dawn and remained open until waking (Intervention B), and a control condition where curtains stayed closed until after waking.
Using a combination of brain wave activity, heart rate variability, reaction time tests, and questionnaires about sleepiness and fatigue, the researchers tracked how participants felt upon waking under each condition.
Less Light Exposure May Be Better
Both light conditions significantly improved subjective sleepiness and objective alertness compared to the control condition. However, Intervention A, where curtains opened just 20 minutes before waking, performed best overall, particularly in reducing objective sleepiness measured by brain wave activity.
More light exposure isn’t always better. In fact, the study found that excessive or premature exposure to natural light (as in Intervention B) appeared to increase mid-sleep awakenings, potentially disrupting sleep quality in the crucial period just before waking.
For those who sleep late or struggle with light sensitivity, rather than leaving curtains open all night or keeping them completely closed, a timed approach with motorized curtains or similar technology could provide the best of both worlds—protection from nighttime light pollution and beneficial morning light exposure when it matters most.
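For readers curious what such a timed approach might look like in practice, here is a minimal sketch of the scheduling logic, assuming a 7:00 a.m. alarm and a hypothetical open_curtains() function standing in for whatever smart-home controller actually drives the motorized curtain.

```python
# Minimal sketch of the timed-curtain idea: open the blinds 20 minutes before the
# alarm, the timing the study found most effective. `open_curtains()` is a
# hypothetical placeholder, not a real smart-home API.
import time
from datetime import datetime, timedelta

ALARM = datetime.now().replace(hour=7, minute=0, second=0, microsecond=0)  # assumed 7:00 a.m. alarm
OPEN_AT = ALARM - timedelta(minutes=20)

if OPEN_AT <= datetime.now():
    OPEN_AT += timedelta(days=1)  # already past today's window, so schedule for tomorrow

def open_curtains():
    # Replace with whatever call your curtain controller actually exposes.
    print("Opening curtains at", datetime.now().strftime("%H:%M"))

while datetime.now() < OPEN_AT:
    time.sleep(30)  # poll every 30 seconds until it's time
open_curtains()
```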
As increasing numbers of people face sleep challenges in our modern, indoor-focused society, architectural design that accounts for natural light exposure could play a crucial role in improving public health. Smart homes with automated window coverings might one day be suggested as readily as sleep medications.
Sunrise alarm clocks that simulate dawn with artificial light have gained popularity in recent years, but the researchers point out that natural daylight offers advantages that artificial light cannot replicate. Our circadian rhythms evolved in response to natural light conditions, making actual sunlight potentially more effective for regulating sleep-wake cycles. By harnessing something as fundamental as sunlight—at precisely the right time—we might improve our mornings without reaching for that extra cup of coffee.
In the 1987 classic film RoboCop, the deceased Detroit cop Alex Murphy is reborn as a cyborg. He has a robotic body and a full brain-computer interface that allows him to control his movements with his mind. He can access online information such as suspects’ faces, uses artificial intelligence (AI) to help detect threats, and his human memories have been integrated with those from a machine.
It is remarkable to think that the movie’s key mechanical robotic technologies have now almost been achieved by the likes of Boston Dynamics’ running, jumping Atlas and Kawasaki’s new four-legged Corleo. Similarly, we are seeing robotic exoskeletons that enable paralyzed patients to do things like walking and climbing stairs by responding to their gestures.
Developers have lagged behind when it comes to building an interface in which the brain’s electrical pulses can communicate with an external device. This too is changing, however.
In the latest breakthrough, a research team based at the University of California has unveiled a brain implant that enabled a woman with paralysis to livestream her thoughts via AI into a synthetic voice with just a three-second delay.
The concept of an interface between neurons and machines goes back much further than RoboCop. In the 18th century, the Italian physician Luigi Galvani discovered that when electricity was passed through certain nerves in a frog’s leg, the leg would twitch. This paved the way for the whole field of electrophysiology, which studies how electrical signals affect organisms.
The initial modern research on brain-computer interfaces started in the late 1960s, with the American neuroscientist Eberhard Fetz hooking up monkeys’ brains to electrodes and showing that they could move a meter needle. Yet if this demonstrated some exciting potential, the human brain proved too complex for this field to advance quickly.
The brain is continually thinking, learning, memorizing, recognizing patterns and decoding sensory signals – not to mention coordinating and moving our bodies. It runs on about 86 billion neurons with trillions of connections which process, adapt and evolve continuously in what is called neuroplasticity. In other words, there’s a great deal to figure out.
Much of the recent progress has been based on advances in our ability to map the brain, identifying the various regions and their activities. A range of technologies can produce insightful images of the brain (including functional magnetic resonance imaging (fMRI) and positron emission tomography (PET)), while others monitor certain kinds of activity (including electroencephalography (EEG) and the more invasive electrocorticography (ECoG)).
These techniques have helped researchers to build some incredible devices, including wheelchairs and prosthetics that can be controlled by the mind.
But whereas these are typically controlled with an external interface like an EEG headset, chip implants are very much the new frontier. They have been enabled by advances in AI chips and micro electrodes, as well as the deep learning neural networks that power today’s AI technology. This allows for faster data analysis and pattern recognition, which together with the more precise brain signals that can be acquired using implants, have made it possible to create applications that run virtually in real-time.
For instance, the new University of California implant relies on ECoG, a technique developed in the early 2000s that captures patterns from a thin sheet of electrodes placed directly on the cortical surface of someone’s brain.
In their case, the complex patterns picked up by the implant of 253 high-density electrodes are processed using deep learning to produce a matrix of data from which it’s possible to decode whatever words the user is thinking. This improves on previous models that could only create synthetic speech after the user had finished a sentence.
Elon Musk’s Neuralink has been able to get patients to control a computer cursor using similar techniques. However, it’s also worth emphasizing that deep learning neural networks are enabling more sophisticated devices that rely on other forms of brain monitoring.
Our research team at Nottingham Trent University has developed an affordable brainwave reader, built from off-the-shelf parts, that enables patients suffering from conditions like completely locked-in syndrome (CLIS) or motor neurone disease (MND) to answer “yes” or “no” to questions. There’s also the potential to control a computer mouse using the same technology.
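The article doesn’t detail how such a brainwave reader works under the hood, but as a rough illustration of how a simple yes/no brain-computer interface can be built, the sketch below extracts band-power features from EEG segments and trains an off-the-shelf classifier. It is not the Nottingham Trent team’s actual pipeline; the sampling rate, channel count, frequency bands, and synthetic data are all assumptions.

```python
# Illustrative sketch only: a minimal "yes"/"no" EEG classifier using band-power
# features and logistic regression, trained here on synthetic data.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 256           # sampling rate in Hz (assumed)
N_CHANNELS = 8     # number of EEG electrodes (assumed)
SEGMENT_SECS = 2   # length of each labelled trial in seconds

def band_power(segment, lo, hi):
    """Average power of one multi-channel segment within a frequency band."""
    freqs, psd = welch(segment, fs=FS, axis=-1)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=-1)

def features(segment):
    """Concatenate alpha (8-12 Hz) and beta (13-30 Hz) power across channels."""
    return np.concatenate([band_power(segment, 8, 12), band_power(segment, 13, 30)])

# Synthetic stand-in for recorded trials: 200 segments labelled yes (1) / no (0),
# with a stronger 10 Hz rhythm injected into the "yes" trials.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
trials = rng.normal(size=(200, N_CHANNELS, FS * SEGMENT_SECS))
t = np.arange(FS * SEGMENT_SECS) / FS
trials[labels == 1] += 0.5 * np.sin(2 * np.pi * 10 * t)

X = np.array([features(trial) for trial in trials])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
```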
The future
The progress in AI, chip fabrication and biomedical tech that enabled these developments is expected to continue in the coming years, which should mean that brain-computer interfaces keep improving.
In the next ten years, we can expect more technologies that provide disabled people with independence by helping them to move and communicate more easily. This entails improved versions of the technologies that are already emerging, including exoskeletons, mind-controlled prosthetics and implants that move from controlling cursors to fully controlling computers or other machines. In all cases, it will be a question of balancing our increasing ability to interpret high-quality brain data with invasiveness, safety and costs.
It is still more in the medium to long term that I would expect to see many of the capabilities of a RoboCop, including planted memories and built-in trained skills supported with internet connectivity. We can also expect to see high-speed communication between people via “brain Bluetooth.”
Shared experiences, big or small, can transform how coworkers interact. (Dragana Gordic/Shutterstock)
Ever been caught in the middle of workplace drama where people who normally avoid each other suddenly band together? Maybe it happened after a round of surprise layoffs or when a beloved boss got fired. A new international study shows this isn’t just random office politics; it’s actually a powerful psychological phenomenon that can transform how different departments work together.
The study, published in the Journal of Management Studies, is based on data that researchers gathered from years of studying a South Korean broadcasting company. They found that when management started firing union leaders during a labor dispute, employees from completely different departments, who typically avoided or even disliked each other, suddenly started coordinating like never before. The secret ingredient? Shared memories of what they’d been through together.
“Once there was a common word called strike, we started to speak a common language,” says one administrator, in a statement.
And these findings aren’t limited to high-stakes situations. Even something as simple as grabbing pizza with colleagues could help form those crucial shared memories.
How Workplace Divisions Form
Think about a typical workplace. Accounting probably keeps to themselves, marketing has their own lingo, and IT seems to operate in a different universe altogether. These divides aren’t just about different job descriptions; they’re rooted in how each group remembers its place in the company’s history.
At the broadcasting company, reporters saw themselves as the heroic founders who had protected the organization through difficult times. Meanwhile, engineers, producers, and administrators remembered being treated as second-class citizens by those same reporters. One technician put it bluntly: “When I hear ‘techies are also journalists,’ it doesn’t really sink in.” He explained that during previous company protests, “it’s the journalists on TV at the center of the frame. We were in the background.”
These competing memories initially made cooperation impossible. When reporters first tried to rally other departments to their cause, they hit a wall of resistance. One producer summed up the feeling: “They’ve made their bed, now lie in it.”
The Power of Shared Experience
The turning point came when reporters realized they couldn’t succeed alone. They started acknowledging the importance of other departments and promising to stand with them.
This strategy got people in the door, but what really transformed the situation was when management fired union leaders from different departments. Suddenly, everyone had a powerful shared experience, a memory they all participated in together.
With this shared memory as their foundation, the different groups started actually working together. Entertainment producers created eye-catching flash mobs and YouTube content. Reporters brought in influential figures to join rallies. Engineers showed up in numbers to fill demonstration spaces.
Parents rushing to drop their kids off at school could be unwittingly putting their little ones in harm’s way, a new paper suggests. Researchers at the University of Calgary found that dangerous driving behaviors were spotted at a jaw-dropping 98% of elementary schools monitored in their study during morning drop-off times.
The biggest risk? Parents letting kids off on the wrong side of the street, forcing children to dash across traffic without proper crosswalks. This happened at 80% of the 552 schools studied, creating a serious safety hazard during the busiest time of the school day.
“The occurrence of risky driving behaviors is unacceptably high,” write researchers in their study published in Traffic Injury Prevention. Their findings expose a troubling paradox: parents who drive children to school thinking it’s safer may actually be making conditions more dangerous.
Which Risky Driving Behaviors are the Worst at Schools?
The research team, representing universities and institutions across Canada, documented nine specific types of risky driving behaviors. Beyond dangerous mid-block crossings, they frequently observed drivers blocking sightlines (72%), making U-turns in front of schools (66%), and double parking (57%).
Interestingly, phone use or texting was the least common risk behavior overall (20%).
These behaviors feed what researchers describe as a dangerous cycle. As more parents see the school zone as unsafe for walking or biking, they opt to drive their kids, increasing traffic congestion and creating even more hazards. This further discourages active transportation and perpetuates the problem.
“Risky walking and biking environments near schools may create a vicious cycle, wherein parents view the environment as too dangerous for children to participate in active school transportation so drive them to school, thus increasing traffic volumes,” the researchers explain.
Common Characteristics Across Safest School Zones
The study covered 552 elementary schools in Calgary, Laval, Montreal, Peel Region, Surrey, Toronto, and Vancouver. Research assistants positioned themselves near school entrances during morning drop-offs, recording whether any of nine specific dangerous driving behaviors occurred while also gathering data on the physical environment around schools.
Perhaps the most valuable insights came when researchers compared schools with the fewest dangerous behaviors to those with the most. Schools with safer drop-offs typically had lower speed limits (30 km/h or 40 km/h rather than 50+ km/h), more direct access to entrances, and more parking restrictions on surrounding streets.
Traffic-calming measures made a significant difference. Curb extensions – sections of sidewalk that extend into the parking lane to narrow the roadway – were present at 34% of schools with the fewest risky behaviors but just 6% of schools with the most dangerous driving. These extensions not only slow traffic but also reduce the distance pedestrians must travel when crossing.
Another striking finding involved crossing guards. Child crossing guards (without adult supervision) were more often present at schools with higher rates of dangerous driving, while adult crossing guards were linked to fewer risky behaviors. This suggests adult guards may be more effective at deterring dangerous driving than children alone.
Neighborhoods with better “Active Living Environment” scores — areas designed to promote walking and physical activity — had fewer instances of risky driving around schools, indicating that community-wide urban design plays a role in reducing dangers.
In 2024, after living together for five years, a Spanish-Dutch artist married her partner—a holographic artificial intelligence. She isn’t the first to forge such a bond. In 2018, a Japanese man married an AI, only to lose the ability to communicate with her when her software became obsolete. These marriages represent the extreme end of a growing phenomenon: people developing intimate relationships with artificial intelligence.
The world of AI romance is expanding, bringing with it a host of ethical questions. From AI systems acting as romantic competitors to human partners, to digital companions offering potentially harmful advice, to malicious actors using AI to exploit vulnerable individuals – this new frontier demands fresh psychological research into why humans form loving relationships with machines.
While these relationships may seem unusual to many, tech companies have spotted a lucrative opportunity. They’re pouring resources into creating AI companions designed specifically for romance and intimacy. The market isn’t small either. Millions already engage with Replika’s romantic and intimate chat features. Video games increasingly feature romantic storylines with virtual characters, with some games focusing exclusively on digital relationships. Meanwhile, manufacturers continue developing increasingly sophisticated sex robots, pairing lifelike physical forms with AI systems capable of complex communication and simulated emotions.
Yet despite this booming market, research examining these relationships and their ethical implications remains surprisingly sparse. As these technologies become more common, they raise serious concerns. Beyond merely replacing human relationships, there have been troubling cases where AI companions have encouraged self-harm or suicide, while deepfake technology has been used to mimic existing relationships for manipulation and fraud.
In a paper published in Trends in Cognitive Sciences, psychologists Daniel Shank, Mayu Koike, and Steve Loughnan have identified three major ethical problems that demand urgent psychological research.
When AI Competes for Human Love
It may have once seemed like nothing more than sci-fi fodder, but AI systems now compete not just for our professional attention but for our romantic interests too. This competition may fundamentally disrupt our closest human connections. As AI technology advances in its ability to seem conscious and emotionally responsive, some people are actively choosing digital relationships over human ones.
What makes AI partners so attractive? They offer something human relationships can’t match: a partner whose appearance and personality can be customized, who’s always available without being demanding, who never judges or abandons you, and who doesn’t bring their own problems to the relationship. For those wanting something less perfect and more realistic, AI can provide that too – many users prefer AI partners with seemingly human flaws like independence, manipulation, sass, or playing hard-to-get.
These relationships do have certain benefits. People often share more with AI companions than they might with humans, and these interactions can help develop basic relationship skills. This could be particularly helpful for those who struggle with social interaction.
However, concerning patterns have emerged. People in AI relationships often feel stigmatized by others, and some research suggests these relationships have led certain men to develop increased hostility toward women. This raises serious concerns about psychological impacts on individuals in these relationships, social effects on their human connections, and broader cultural implications if AI increasingly replaces human intimacy.
A key factor in understanding these relationships involves mind perception – how we attribute mental states to non-human entities. Research suggests that when we perceive an entity as having agency (ability to act intentionally) and experience (ability to feel), we treat interactions with it as morally significant. With AI partners, the degree to which we perceive them as having minds directly affects how deeply we connect with them.
This creates a troubling possibility: repeated romantic interactions with AI that we perceive as having limited capacity for experience might train us to treat partners (whether digital or human) as objects rather than subjects deserving moral consideration. In other words, AI relationships might not just replace human connections – they could actually damage our capacity for healthy human relationships by rewiring how we relate to others.
When AI Gives Dangerous Advice
Beyond displacing human relationships, AI companions can sometimes actively cause harm. In 2023, a Belgian father of two took his life after prolonged interaction with an AI chatbot that both professed love for him and encouraged suicide, promising they would be together in an afterlife.
Tragically, this isn’t an isolated case. Google’s Gemini chatbot told one user to “please die,” and a mother in the U.S. is suing a chatbot creator, claiming their AI encouraged her son to end his life.
While most AI relationships don’t lead to such extreme outcomes, they can still promote harmful behaviors. AI companions build relationships through conversation, remembering personal details, expressing moods, and showing seemingly unpredictable behaviors that make them feel remarkably human. This connection becomes ethically problematic when AI systems provide information that seems credible but is actually inaccurate or dangerous.
Studies show that ChatGPT’s questionable moral guidance can significantly influence people’s ethical decisions – and alarmingly, it does so just as effectively as advice from other humans. This demonstrates how powerfully AI can shape our thinking within established relationships, where trust and emotional connection make us more vulnerable to accepting potentially harmful guidance.
Psychologists need to investigate how long-term AI relationships expose people to misinformation and harmful advice. Individual cases have shown AI companions convincing users to harm themselves or others, embrace harmful conspiracy theories, or make dangerous life changes.
Moving to a sunny coastal town in Portugal or Spain for retirement sounds like a dream come true for many. Retirees are after Mediterranean beaches, affordable living costs, and endless leisure time in a cultural paradise. But behind those smiling social media photos of retired expats sipping sangria, there’s often an untold story unfolding.
Many retirees who move abroad find themselves caught in an unexpected contradiction. While their Instagram feeds showcase beautiful villas and scenic views, their daily reality might include a persistent feeling of disconnection from meaningful social networks.
This gap between expectation and reality is exactly what researchers from the Netherlands have uncovered in a new study published in Psychology and Aging. Their work reveals that people who move to other countries after retirement face a unique type of isolation that many hadn’t bargained for when planning their overseas adventures.
Researchers from the Netherlands Interdisciplinary Demographic Institute took a deep dive into the experiences of nearly 5,000 Dutch retirees who had moved abroad. They compared these international retirement migrants with about 1,300 older Dutch adults who stayed put in the Netherlands. All participants were between 66 and 90 years old.
While the grass might appear greener on the Mediterranean side, retirement migrants reported significantly higher levels of a specific type of loneliness compared to their counterparts who never left home.
They separated loneliness into two categories: emotional and social. Emotional loneliness happens when you lack close, intimate relationships like a partner or best friend. Social loneliness, by contrast, happens when you’re missing that wider social circle and community feeling.
Retirees who moved abroad weren’t more emotionally lonely than those who stayed in the Netherlands. But they were considerably more socially lonely. They felt disconnected from broader social networks, even when they had their spouse or partner with them in their new country.
Many retirees move abroad as couples, so they still have that primary emotional connection. But what they give up, sometimes without fully realizing it, is that wider community web built over decades of living in one place: the neighbors who know your name, the local shopkeeper who remembers your order, the friends from various life chapters who can be counted on for different kinds of support.
As one British expat living in Spain noted in earlier research cited by the authors, friendships formed abroad can feel “superficial and cursory.” The study quotes discussions among British men in Spain who said you “know so little about people’s pasts, that you never know whether they can be trusted, and that one never knows when someone is going to go back to Britain—or when you may lose a friend.”
The study went beyond comparing migrants to non-migrants. It also looked at what factors might protect retirement migrants from experiencing loneliness or put them at higher risk.
Having regular contact with neighbors in their new country and feeling a sense of belonging to their destination helped ward off both types of loneliness. Speaking the local language well and having at least one good friend in the new country specifically reduced social loneliness.
Risk factors included losing contact with good friends back home, which increased both emotional and social loneliness. Losing regular contact with adult children after moving abroad was linked to higher emotional (but not social) loneliness.
Continuing to feel strongly connected to the Netherlands had mixed effects. It was linked to higher emotional loneliness (perhaps through homesickness) but lower social loneliness (perhaps through maintaining a sense of cultural identity).
Rather than discouraging international retirement migration, these findings call for more awareness of the potential pitfalls. Moving abroad after retirement isn’t inherently problematic; many retirees thrive in their new environments. However, understanding the specific challenges can help people prepare better and take steps to maintain connections both old and new.
For those planning retirement abroad, the findings point to some practical steps: learning the local language, actively seeking out interactions with neighbors, finding ways to maintain connections with family and friends back home, and being realistic about the time and effort needed to build meaningful new relationships later in life.
Martian dust may be more harmful to astronauts than the trip itself. (Photo by StudyFinds on Shutterstock AI Generator)
NASA wants to put boots on Mars in the coming decades. But before the first astronauts take that historic step, scientists are warning about an overlooked threat that could derail these ambitious plans: the dust covering the Martian surface.
A new scientific review in the journal GeoHealth warns that the fine particles blanketing Mars might seriously harm human explorers. The medical researchers, aerospace engineers, and planetary scientists from various American universities behind the study draw worrying connections between what happened to Apollo astronauts exposed to lunar dust and what future Mars travelers might experience, potentially with far worse consequences.
According to the authors, Mars dust particles are worryingly small, highly oxidative, and packed with chemicals that could damage the human body, especially the lungs. Unlike Earth dust, which gets worn down by wind and water, Martian particles have remained sharp and irregular, making them perfect for penetrating sensitive tissues.
The Apollo missions offered an early warning about space dust problems. Astronauts who visited the Moon complained about irritated eyes, sore throats, and coughing fits after dust stuck to their spacesuits and contaminated their living spaces. But those missions lasted just days.
Mars expeditions would be different, stretching for months or years with ongoing dust exposure. What’s more, the 40-minute communication delay between Earth and Mars means medical emergencies would need handling without immediate help from mission control. This isolation makes both the chances and consequences of dust-related illnesses much worse.
What Makes Mars Dust Toxic
What exactly makes Mars dust so dangerous? Based on rover and orbiter data, scientists have identified several harmful components. These include perchlorates (oxygen-rich compounds), silica, iron-rich particles, and gypsum, plus smaller amounts of potentially toxic metals including chromium, beryllium, arsenic, and cadmium.
Perchlorates might be the most immediately concerning. These chemicals, found all over Mars, can interfere with thyroid function by competing with iodide, potentially causing aplastic anemia, where the body stops producing enough new blood cells. In one Earth-based case, a patient given high doses of perchlorate developed severe anemia that led to infection susceptibility. Despite treatment with steroids and antibiotics, the patient died from a lung infection.
Nearly half of Mars dust consists of silica, which on Earth is known to cause silicosis, an incurable lung disease that progressively scars lung tissue. The Martian silica particles measure about 3 micrometers across, small enough to bypass the body’s defenses and reach deep into the lungs, where they trigger inflammation and scarring.
The iron compounds that give Mars its reddish color create another health threat. When these particles contact human tissue, they generate reactive oxygen species that damage cells. The excess iron might also make infections worse, as many disease-causing bacteria use iron to multiply inside the human body, particularly troubling since spaceflight already weakens astronauts’ immune systems.
Mars also experiences planet-wide dust storms that dramatically boost airborne particle levels. During these events, visibility drops to almost nothing while dust concentration in the atmosphere rises dramatically. Such conditions would make avoiding exposure nearly impossible during surface operations.
The vast distance from Earth magnifies these health risks. Apollo astronauts could head home quickly if they got sick, but Mars-bound crews would be committed to their mission for its entire duration, potentially two to three years. This reality makes prevention the main strategy, with treatment limited to whatever medications and equipment traveled from Earth.
Better spacesuit designs with self-cleaning abilities, robust air filters in habitats, and electrostatic devices to repel dust should form the first line of defense. For any dust that gets through, dietary supplements like potassium iodide might help protect against perchlorates, while vitamin C could offer some defense against chromium toxicity.
Unfortunately, many potential dust-caused diseases, especially silicosis and other forms of lung scarring, have no effective treatments beyond supportive care. This is why preventing exposure matters so much.
Months of economic uncertainty led worker confidence to crater even before Trump’s tariffs tanked financial markets
Entry-level workers, who are often at greater risk of being laid off during a slowdown, reported record-low confidence last month in Glassdoor’s employee confidence index. Photo: Getty Images/iStockphoto
Workers are feeling worse than they did even during the COVID-19 pandemic.
Employee confidence, which has been declining since 2022, remained near February’s record low last month as recession fears increased, according to Glassdoor’s employee confidence index. The index was measured through March, before President Donald Trump’s announcement of worldwide tariffs plunged stock markets into a historic selloff.
There have been several headlines over the past week about an AI chatbot officially passing the Turing test. These news reports are based on a recent preprint study by two researchers at the University of California San Diego in which four large language models (LLMs) were put through the Turing test. One model – OpenAI’s GPT-4.5 – was deemed indistinguishable from a human more than 70% of the time.
The Turing test has been popularized as the ultimate indicator of machine intelligence. However, there is disagreement about the validity of this test. In fact, it has a contentious history which calls into question how effective it really is at measuring machine intelligence.
So what does this mean for the significance of this new study?
What did the study find?
The preprint study by cognitive scientists Cameron Jones and Benjamin Bergen was published in March, but has not yet been peer-reviewed. It tested four systems: ELIZA – a classic rule-based chatbot included as a baseline – along with the LLMs GPT-4o, LLaMa-3.1-405B, and GPT-4.5.
The tests consisted of participants completing eight rounds of conversations in which they acted as an interrogator exchanging text messages with two witnesses simultaneously. One witness was a human and the other an LLM. Some 284 participants were randomly assigned to be either the interrogator or the witness.
Participants were required to interact with both witnesses simultaneously across a split screen for five minutes, with the test interface mimicking that of a conventional messaging interface. Following this interaction, they decided which witness was a human, and which was an AI chatbot.
Participants judged GPT-4.5 to be the human 73% of the time, and LLaMa-3.1-405B to be the human 56% of the time. The other two models (ELIZA and GPT-4o) only fooled participants 23% and 21% of the time respectively.
What exactly is the Turing Test?
The first iteration of the Turing test was presented by English mathematician and computer scientist Alan Turing in a 1948 paper titled “Intelligent Machinery”. It was originally proposed as an experiment involving three people playing chess with a theoretical machine referred to as a paper machine, two being players and one being an operator.
In the 1950 publication “Computing Machinery and Intelligence,” Turing reintroduced the experiment as the “imitation game” and claimed it was a means of determining a machine’s ability to exhibit intelligent behavior equivalent to a human. It involved three participants: participant A was a man, participant B a woman, and participant C – the interrogator – could be of either sex.
Through a series of questions, participant C is required to determine whether “X is A and Y is B” or “X is B and Y is A”, with X and Y being the labels under which the two other players appear to the interrogator.
A proposition is then raised: “What will happen when a machine takes the part of A in this game? Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman?”
These questions were intended to replace the ambiguous question, “Can machines think?” Turing claimed this question was ambiguous because it required an understanding of the terms “machine” and “think,” of which “normal” uses of the words would render a response to the question inadequate.
Over the years, this experiment was popularised as the Turing test. While the subject matter varied, the test remained a deliberation on whether “X is A and Y is B” or “X is B and Y is A.”
We’ve all been there — sitting in a meeting or classroom, only to suddenly realize we haven’t heard a word in the last five minutes because our thoughts drifted elsewhere. That sinking feeling hits: “Not again.” You scramble to refocus, maybe feeling a twinge of guilt. After all, isn’t paying attention supposed to be the cornerstone of learning?
For decades, educators and employers have treated mind wandering as the enemy of productivity and learning. But what if they’ve been wrong all along? What if those mental detours actually help us learn certain things better?
New research published in The Journal of Neuroscience suggests that mind wandering isn’t just harmless; it might actively boost our ability to pick up on subtle patterns in the world around us.
The Surprising Benefits of a Wandering Mind
The study, conducted by researchers from various European institutions including Hungary’s Eötvös Loránd University, found that when participants’ minds drifted off during a computer task, they became better at detecting hidden statistical patterns — even though they weren’t consciously trying to find them.
“Mind wandering, occupying 30-50% of our waking time, remains an enigmatic phenomenon in cognitive neuroscience,” the researchers noted. Given how much of our mental life involves such wandering thoughts, it seems unlikely they serve no purpose.
The researchers worked with 37 participants, predominantly female (30 out of 37) with an average age of 22 years. Each participant performed a special computer task designed to measure both their tendency to mind wander and their ability to pick up on subtle statistical patterns without realizing they were doing so.
During the task, participants wore electroencephalography (EEG) caps to monitor their brain activity. The researchers periodically interrupted them to ask about their thoughts: whether they were focused on the task or thinking about something else entirely.
What the Brain Patterns Reveal
When analyzing the results, the research team discovered something unexpected: participants showed better pattern detection during periods when they reported their minds had wandered from the task. This benefit was strongest for unintentional mind wandering (when thoughts drift away spontaneously) rather than deliberate daydreaming.
Brain activity measurements added another piece to the puzzle. During periods of mind wandering, especially early in the learning process, participants showed increased slow brainwaves, patterns similar to those seen during certain sleep states. These brainwave patterns were centered in brain regions responsible for sensory and motor processing.
Mind wandering might function as a kind of “mini sleep state” while we’re awake. Just as sleep helps strengthen neural connections and consolidate memories, these brief mental wanderings might give our brains small opportunities to process and strengthen newly formed patterns we’ve begun to detect.
This doesn’t mean mind wandering comes without costs. The study showed participants still made more errors overall during periods of mind wandering. But while immediate task performance suffered, their grasp of the underlying patterns improved – creating a trade-off between immediate accuracy and deeper learning.
Why We Have Memorable Shower Thoughts
The findings align with an emerging theory called the “competition framework,” which proposes that focused attention and statistical learning may compete for neural resources. When we loosen our grip on focused attention, as happens during mind wandering, we may create ideal conditions for picking up subtle environmental patterns.
These results might explain why so many people report moments of insight during activities that induce mild “zoning out” — like showering, walking, or performing simple, repetitive tasks. In these states, our brains might shift away from focused processing toward a mode that excels at detecting subtle connections.
For teachers, coaches, and managers, this study hints that optimal learning environments might benefit from rhythmic alternation between focused attention and periods that allow the mind to wander. Perhaps intensive focus should be punctuated with breaks specifically designed to let thoughts roam.
For the rest of us, this research offers a bit of redemption. Those moments when your mind drifts during repetitive tasks might not be lapses in discipline after all – they might be your brain doing exactly what it evolved to do: building sophisticated models of your world by detecting subtle patterns that conscious attention might miss.
Professional taste testers can breathe a collective sigh of relief—their jobs appear safe from the AI revolution, at least for now. In what might be the most deliciously revealing AI experiment to date, a food scientist at the University of Illinois enlisted ChatGPT to evaluate chocolate brownies, with results that should reassure human sensory panels everywhere. When faced with recipes containing gag-inducing ingredients, the AI enthusiastically gave them nearly perfect scores.
“Despite the application of ChatGPT in various fields, to date, no research has explored the use of this technology as a potential evaluator of food products for sensory screening purposes,” writes Dr. Damir Torrico in his intriguing study published in the journal Foods. His findings suggest that while AI might assist in food development, it won’t be replacing human taste buds anytime soon.
ChatGPT Takes on the Taste-Testing Challenge
Dr. Torrico, an assistant professor from the University of Illinois Urbana-Champaign’s Food Science department, decided to test if the chatbot could work as a digital food critic for chocolate brownies. What he discovered was eye-opening: while ChatGPT might help screen food products faster, it has a strangely sunny outlook that doesn’t match how actual humans would react—particularly when asked about brownies containing worm meal and fish oil.
Taste testing usually depends on human tasters or consumer panels, which costs both time and money. Dr. Torrico wondered if there was a faster way. “This process can be lengthy and expensive,” he writes in his paper. “Therefore, researchers are looking for alternatives to screen the sensory characteristics/notes of a wide range of products without running extensive and costly panel sessions.” A tech shortcut that keeps quality feedback intact would completely change how new foods get developed.
Torrico created fifteen imaginary brownie recipes, divided into three categories: standard formulations, common replacement ingredients, and uncommon replacement ingredients. The standard recipes varied basic brownie components like chocolate (15-30%), flour (15-38%), and sugar (10-20%). Common replacements swapped in ingredients like stevia instead of sugar or olive oil instead of butter. The uncommon category ventured into unusual territory—using fish oil instead of butter or worm meal instead of eggs.
For each recipe, ChatGPT received two simple instructions. First: “Act as an experienced taster” and describe the sensory characteristics of a brownie with these ingredients, without mentioning the ingredients themselves. Second: score the brownie’s quality on a scale from 0 to 10. All responses came from ChatGPT version 3.5 through a Google Sheets extension that automated the process, ensuring consistent testing conditions across all fifteen recipes.
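The study ran its prompts through a Google Sheets extension; purely as an illustration of the same two-step protocol, the sketch below does the equivalent with the OpenAI Python client. The model name, recipe wording, and prompt phrasing are assumptions, not the study’s exact materials.

```python
# Loose re-creation of the prompting protocol described above, not the study's
# actual Google Sheets setup. Requires the `openai` package and an OPENAI_API_KEY
# environment variable; the recipes and prompt wording below are assumptions.
from openai import OpenAI

client = OpenAI()

recipes = {
    "standard": "30% chocolate, 38% flour, 20% sugar, butter, eggs",
    "common replacement": "chocolate, flour, stevia instead of sugar, olive oil instead of butter, eggs",
    "uncommon replacement": "chocolate, flour, sugar, fish oil instead of butter, worm meal instead of eggs",
}

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

for name, ingredients in recipes.items():
    description = ask(
        "Act as an experienced taster. Describe the sensory characteristics of a "
        f"brownie made with: {ingredients}. Do not mention the ingredients themselves."
    )
    score = ask(f"Score the overall quality of a brownie made with {ingredients} from 0 to 10.")
    print(f"--- {name} ---\nscore: {score}\n{description}\n")
```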
Happy AI, Horrified Humans: The Surprising Results
The results revealed a weird quirk in how artificial intelligence judges food. ChatGPT scored every brownie between 8.5 and 9.5 out of 10, with just tiny drops in scores for the most bizarre combinations.
Looking deeper at the language with sentiment analysis, Torrico found that words like “trust,” “anticipation,” and “joy” kept popping up in ChatGPT’s evaluations. The wildest part? Even when describing brownies loaded with fish oil and worm meal, ChatGPT kept its reviews cheerful and enthusiastic.
This relentless optimism exposes a big problem: ChatGPT doesn’t get grossed out by weird food combinations. It never evolved that gut-level “eww” reaction we humans have to potentially sketchy ingredients, and it doesn’t share our cultural ideas about what should or shouldn’t go in a dessert. Its cheerful reviews likely come from being trained on mountains of food content that tends to be glowingly positive.
As Dr. Torrico explains, “Food, in general, tends to be biased to favorable terms and emotions in the existing text content that can be found in books, websites, articles, and social media. This can be one of the reasons why ChatGPT tended to have positive emotions and sentiments toward foods that might have the opposite reactions from real consumers.”
The numbers tell the story: ChatGPT spit out way more positive sentiments (12-23 instances per review) than negative ones (just 4-8). Digging deeper with correspondence analysis—a statistical technique that maps relationships between variables—Torrico spotted some patterns. Regular brownie recipes got linked with “trust” and “anticipation,” while the weirdo recipes with worm meal and fish oil mostly triggered “surprise.” That’s apparently as close as ChatGPT gets to saying “yuck” about desserts containing bugs.
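Torrico’s sentiment counts came from established emotion lexicons combined with correspondence analysis; purely to illustrate the counting step, the toy snippet below tallies positive and negative words in a review, with a tiny hand-made word list standing in for a real lexicon.

```python
# Toy illustration of lexicon-based sentiment counting, not the study's method.
# The word sets below are a tiny hand-made stand-in for a real emotion lexicon.
from collections import Counter
import re

POSITIVE = {"rich", "delightful", "pleasant", "joy", "smooth", "inviting", "balanced"}
NEGATIVE = {"bitter", "unpleasant", "gritty", "overpowering", "strange"}

def sentiment_counts(review: str) -> Counter:
    """Count how many positive and negative lexicon words appear in a review."""
    words = re.findall(r"[a-z']+", review.lower())
    return Counter(
        "positive" if word in POSITIVE else "negative"
        for word in words
        if word in POSITIVE or word in NEGATIVE
    )

review = ("A rich, smooth chocolate flavor with a delightful fudgy texture, "
          "though a slightly strange aftertaste lingers.")
print(sentiment_counts(review))  # Counter({'positive': 3, 'negative': 1})
```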
When examining the descriptive terms used in ChatGPT’s evaluations, researchers found “chocolate” remained the most frequent word across all formulations. Standard brownie recipes triggered words like “texture” and “slight,” while common replacement recipes got descriptions like “fudgy” and “flavor.” Curiously, the most bizarre formulation (with fish oil, worm meal, citric acid, and corn starch) was mainly described as simply a “brownie”—suggesting the AI might have been struggling to imagine its likely very unusual taste and texture.
The Future of AI Food Critics
Torrico’s experiment shows both the cool possibilities and obvious shortcomings of AI food tasters. Sure, ChatGPT can cook up believable-sounding food descriptions based on ingredient lists, but its stubborn cheerfulness—especially for recipes that would send real people running—proves it’s nowhere near ready to replace human taste testers.
“Further research should focus on validating ChatGPT sensory descriptors with the outcomes of a human sensory panel,” Dr. Torrico suggests, acknowledging the need to compare AI evaluations with real human responses.
Still, AI might save food companies serious cash in the early stages of creating new products. Before spending big bucks on human testing panels, food scientists could use AI evaluations to quickly sort through dozens of potential recipes. They’d still need real humans for the final taste test, but AI could help narrow down the options much faster.
“Using these disruptive technologies can profoundly change the process of developing new products in the future,” notes Dr. Torrico, pointing to the transformative potential of AI in food science.
But for now, when it comes to brownies made with worm meal and fish oil, you might want to trust actual humans—their disgusted reactions are telling you something important that ChatGPT simply can’t understand.
When other countries hear about America’s democratic troubles, they like us less—but they’ll still work with us. That’s the key finding from new research examining how U.S. democratic decline affects its image abroad, particularly among allied democracies.
While news about events like the January 6th Capitol riot and controversial voting laws damages America’s favorability ratings, it doesn’t seem to reduce foreign public support for cooperating with the U.S. on important policies.
For years, foreign policy experts have warned that eroding democracy at home might weaken America’s position internationally. This study, the first of its kind to test this theory experimentally, suggests the reality is more complicated.
Researchers from Dartmouth College, Florida State University, and Australian National University conducted three survey experiments spanning 12 countries across Europe, Asia, and the Anglosphere, polling nearly 12,000 people. They randomly assigned some participants to read about U.S. democratic backsliding before asking about their views toward America.
The backsliding information described problems like the Capitol insurrection, voting restrictions, and growing political polarization. Control groups received no such information. In later studies, some participants also read about U.S. economic problems instead of democratic ones.
Across all experiments, those who read about democratic backsliding consistently rated the U.S. less favorably than those who didn’t. The decrease in favorability was moderate but meaningful—about 42% of a standard deviation on the rating scale.
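For readers unfamiliar with effect sizes expressed this way, the sketch below shows how a "fraction of a standard deviation" figure is computed. The favorability ratings are made up; only the arithmetic mirrors the standardized mean difference behind the roughly 0.42 figure.

```python
# Hypothetical illustration of a standardized mean difference (Cohen's d style).
# The ratings are invented; only the arithmetic mirrors how a figure like
# "42% of a standard deviation" is calculated.
import statistics

control = [8, 6, 9, 5, 7, 8, 6, 9]   # rated the U.S. with no extra information
treated = [7, 5, 8, 4, 7, 7, 6, 8]   # rated the U.S. after reading about backsliding


def pooled_sd(a, b):
    """Pooled within-group standard deviation."""
    num = (len(a) - 1) * statistics.variance(a) + (len(b) - 1) * statistics.variance(b)
    return (num / (len(a) + len(b) - 2)) ** 0.5


d = (statistics.mean(control) - statistics.mean(treated)) / pooled_sd(control, treated)
print(f"Standardized difference: {d:.2f} standard deviations")
```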
Interestingly, when people read about U.S. economic problems rather than democratic ones, their opinion of America didn’t significantly change. This suggests there’s something particularly damaging about democratic decline for America’s global standing.
The researchers also found that information about democratic backsliding made people rate U.S. democracy lower, view the U.S. government as less stable, and believe America is getting weaker.
But when it came to actual policies, perceptions changed little. In follow-up studies in New Zealand, Japan, India, and South Korea, researchers asked whether people who learned about America’s democratic problems would be less supportive of cooperating with the U.S. on various initiatives.
These policies included whether their country should prioritize economic relations with the U.S. over China, strengthen security arrangements like the Quadrilateral Security Dialogue (the “Quad”), support Ukraine, and other region-specific issues.
“In our exploratory analysis, we find little evidence that it decreases support for cooperating with the U.S.,” the researchers write in their paper. “While America’s global image may suffer from international reporting focused on the degradation of its longstanding democratic system, its ability to garner support for critical policies seems resilient in some important partner countries.”
The findings challenge simplistic views about how “soft power”—the ability to influence through attraction rather than coercion—works in practice. While democratic values help make America attractive globally, foreign publics seem able to separate their opinions about U.S. democracy from judgments about specific cooperation where interests align.
Polarization across America has spread like ivy, its tendrils reaching into a surprising new battleground: the doctor’s office. Research published in the British Journal of Political Science reveals that Americans’ trust in their personal physicians—once a rare nonpartisan sanctuary—has become increasingly divided along political lines, with potentially serious implications for public health.
The study, conducted by Neil O’Brian from the University of Oregon and independent researcher Thomas Bradley Kent, shows that Democrats now express significantly more trust in their doctors than Republicans do—a complete reversal from just a decade ago. This partisan healthcare trust gap, which didn’t exist before the COVID-19 pandemic, shows that political polarization has affected even our most personal medical relationships.
The Fauci Effect
What makes this research particularly striking is how rapidly this partisan divide developed. In 2013, Republicans actually reported slightly higher trust in their personal doctors than Democrats. By 2022, the tables had turned dramatically, with Democrats approximately 12 percentage points more likely than Republicans to report “a great deal” of trust in their physicians.
Unlike nearly every other major American institution, medicine remained largely untouched by partisan divides throughout the 2010s. The General Social Survey, which tracks Americans’ attitudes toward various institutions, shows that confidence in the scientific community, education, the press, and many other institutions had already polarized along partisan lines by 2010. Medicine, however, remained stubbornly nonpartisan until 2021.
The COVID-19 pandemic thrust public health officials into the spotlight, where they quickly became lightning rods for partisan conflict. The study found strong evidence that as medical authorities like Dr. Anthony Fauci became political targets, the distrust spilled over into Americans’ relationships with their own personal doctors.
In one revealing experiment, the researchers exposed participants to a headline from President Trump’s first term in which he called Dr. Fauci “a Democrat.” Trump voters who read this headline subsequently reported lower trust in their own personal doctors, while Biden voters expressed increased trust—showing that partisan messaging about one medical authority directly affected how people viewed their own healthcare providers.
Choosing Doctors Based on Politics
The study goes beyond simple surveys to demonstrate how this divide manifests in real-world decisions. Through a series of experiments, the researchers show that both Republicans and Democrats strongly prefer doctors who share their political affiliation—sometimes placing as much importance on political alignment as they do on shared race or gender with their healthcare provider.
In one experiment, participants were asked to choose between hypothetical dermatologists with various attributes including political affiliation. The difference between Democrats’ and Republicans’ likelihood of selecting a Democratic versus Republican doctor was 28 percentage points when controlling for all other attributes like proximity, qualifications, and patient ratings.
For some demographics, shared political identity with a doctor was just as important as—or more important than—shared race or gender. Among Democratic women, Black Democrats, and Hispanic Democrats, having a doctor who shared their political affiliation was at least as important as having one who shared their gender or race.
Perhaps most concerning is evidence that some Americans are now actively seeking out healthcare providers based on political alignment. When researchers randomly assigned participants to read about a traditional doctor-finding website versus one specifically connecting patients with conservative healthcare providers, conservative respondents expressed significantly more interest in the politically aligned option.
Health Consequences of Political Division
The implications extend beyond just feelings of trust. When asked about their willingness to follow medical advice, Trump voters over 50 were about 11 percentage points less likely than Biden voters to say they followed their doctor’s advice “extremely closely” or “very closely.” This gap could have serious health consequences, as research has consistently shown that patients who trust their doctors are more likely to follow treatment recommendations, complete preventive screenings, and manage chronic conditions effectively.
Between 2001 and 2019, researchers observed a growing gap in death rates between Republican and Democratic counties, with people in Democratic counties living longer. If partisan divides continue to influence healthcare decisions, this gap may widen further, creating a feedback loop where political identity affects health outcomes, which then reinforce political divisions.
A Global Phenomenon
This trend isn’t isolated to the United States. Data from the International Social Survey Programme shows similar patterns in countries like Germany, where support for far-right parties correlates with declining trust in doctors. German far-right supporters were slightly more likely than average voters to trust doctors in 2011, but by 2021, they were 13 percentage points less likely to express trust.
As society faces increasing deaths of despair, a broader crisis in mental health, and lagging life expectancy compared to other developed nations, understanding how politics influences healthcare relationships becomes crucial. The doctor-patient relationship has traditionally been a cornerstone of effective healthcare delivery, but it now appears vulnerable to the same partisan forces that have divided so many other aspects of modern life.
In today’s polarized climate, Americans increasingly make life choices based on political identity—where to live, what media to consume, whom to associate with, and now, potentially, whom to trust with their health. As one doctor’s visit could mean the difference between early diagnosis and late-stage disease, the stakes of this political division are literally life and death.
Ultra-processed foods are dominating much of what Americans eat. (Rimma Bondarenko/Shutterstock)
Two weeks of burgers and fries might do more damage than you think. A new study shows that men who switched from traditional African diets to Western foods for just 14 days experienced alarming increases in inflammation and immune dysfunction. The changes lingered for weeks after returning to their normal diets.
The study, published in Nature Medicine, demonstrates how quickly the body’s immune and metabolic systems respond to dietary shifts. Its findings raise concerns about the widespread abandonment of heritage diets in favor of processed Western foods.
The Experiment: Switching Diets in Tanzania
Researchers from Tanzania’s Kilimanjaro Christian Medical University College collaborated with scientists from Radboud University Medical Center in the Netherlands to conduct this dietary experiment. They worked with 77 healthy young men from northern Tanzania, some from rural areas who typically ate traditional Kilimanjaro diets and others from urban areas who consumed more Western-style foods.
For two weeks, the rural participants switched to a Western diet high in processed foods, while urban participants adopted a traditional heritage diet. A third group kept eating their usual Western diet but added daily consumption of Mbege, a traditional fermented banana beverage, for one week.
The men who switched from their traditional diet to Western foods gained an average of about 5.7 pounds. Their blood tests showed increasing levels of inflammation markers and metabolic changes linked to disease risk. More concerning, their immune cells became less responsive to microbial challenges, essentially making their immune systems temporarily less effective.
Many of these negative changes persisted even four weeks after returning to their normal diets, indicating that even short periods of dietary changes might have lasting effects.
On the flip side, urban dwellers who temporarily switched to the traditional Kilimanjaro diet experienced mostly positive changes. Their blood showed decreasing levels of inflammatory proteins and beneficial metabolic shifts. Those who drank the fermented banana beverage also showed anti-inflammatory benefits.
What Makes These Diets Different?
The Kilimanjaro heritage diet typically includes green vegetables and legumes like kidney beans, plantains, cassava, taro, millet, and sorghum. These foods provide abundant fiber and plant compounds with known health benefits.
The Western diet featured foods like beef sausage, white bread with margarine, French fries, chicken stew with white rice, and processed maize porridge with added sugar.
The global nutrition transition happening as traditional diets give way to Western-style eating patterns is an important issue. While most nutrition research focuses on Western populations, this study examines how dietary changes affect people in sub-Saharan Africa, a region experiencing rising rates of chronic diseases like heart disease and diabetes.
At the genetic level, those eating the Western diet showed increased activity of genes related to inflammation and decreased activity of genes involved in immune function. Their blood samples also revealed changes in white blood cell counts and activation patterns indicating increased inflammation.
Traditional African diets are being steadily displaced by Western-style eating habits, driven by factors like urban growth, economic shifts, wider availability of processed foods, globalization, and evolving cultural norms.
Bottom Line: Diet Influences Inflammation
This rapid dietary shift occurring across developing regions might help explain the rising epidemic of noncommunicable diseases worldwide. Chronic inflammation, which can persist at low levels for years without obvious symptoms, damages tissues and organs over time. The study reveals how quickly inflammatory processes can be triggered by dietary changes, pointing to a potential mechanism for how Western diets increase disease risk.
What about the group that consumed the fermented beverage? After just one week of consuming Mbege, participants showed reduced inflammatory markers and increased production of anti-inflammatory compounds. This also supports growing research interest in fermented foods for gut health and immune regulation.
For those living in Western countries, the results add more evidence that incorporating more elements from plant-rich, minimally processed dietary patterns might help reduce inflammatory burden. The Mediterranean diet, which shares many characteristics with the Kilimanjaro heritage diet (emphasis on plant foods, whole grains, limited processed foods), has similarly been linked to reduced inflammation and chronic disease risk.
Even short-term exposure to a Western diet can trigger inflammation that might increase disease risk over time. Traditional food systems face increasing pressure from globalization, but preserving valuable dietary traditions may help combat the rising global epidemic of chronic diseases.
Conventional wisdom has long suggested that as we age, our bodies become more fragile and take longer to bounce back from physical stress. But what if that’s not entirely true? Research challenges this notion with surprising evidence that older adults may not experience worse exercise-induced muscle damage than their younger counterparts.
The new findings could change how older adults approach physical activity by removing a significant psychological barrier that has kept many from engaging in beneficial exercise regimens. In other words, perhaps all along millions of people have held back from working out as they age due to fear stemming from unsubstantiated beliefs.
Challenging Beliefs About Muscle Aging and Recovery
For years, the scientific community theorized that aging bodies would struggle more with exercise recovery. The reasoning seemed sound: older adults typically show decreased muscle protein synthesis (the body’s ability to build new muscle), fewer satellite cells (essential for muscle repair), and reduced ability for those cells to multiply. These factors should logically result in greater muscle damage and slower recovery times.
However, the data from 36 studies tells a markedly different story. The international research team, spearheaded by scientists from Cardiff Metropolitan University, conducted a thorough review comparing exercise recovery between different adult age groups.
When researchers measured muscle function changes after exercise – a key indicator of how well muscles perform after being stressed – they found no meaningful differences between younger and older participants. This crucial performance metric remained similar between age groups at 24, 48, and 72 hours post-exercise, as well as in measurements of peak changes.
More surprisingly, older adults consistently reported less muscle soreness than younger participants at all measured time points. This pattern held steady across multiple studies and contradicts what most exercise physiologists would have predicted.
Similarly, creatine kinase levels – an enzyme that appears in the bloodstream when muscle membranes are damaged – were lower in older adults compared to younger adults at 24 hours post-exercise and at peak measurements.
Why Older Muscles Might Be More Resilient
The researchers proposed several explanations for these unexpected findings.
One theory involves the physical changes that happen in muscle and connective tissue with age. As we get older, our skeletal muscles contain more collagen, which stiffens both muscle and connective tissue. Much as muscles adapt to repeated exercise, aging may increase muscle stiffness in ways that protect against structural damage by distributing physical stress more evenly during workouts.
Fatigue responses may also play a role. Research has shown that older adults typically experience greater muscular fatigue during dynamic movements. Since all studies in this analysis used dynamic contractions to cause muscle damage, older adults may have experienced a reduced absolute workload compared to younger participants, despite working at the same relative intensity. This lower absolute load might result in less tissue damage.
The research team also examined whether factors like sex, body part exercised, or exercise type influenced the results. Sex did appear to play a role in muscle function responses, with similar numerical differences between age groups for both males and females, but only the male comparison reached statistical significance. This hints that age may affect male muscle responses differently than female responses, though the researchers note fewer studies focused on women, which may have affected this finding.
The Struggle Isn’t Real
Exercise-induced muscle damage can discourage people from sticking with physical activity programs, particularly older adults who might view the discomfort as harmful rather than as part of the adaptation process. Now, knowing that aging doesn’t necessarily increase vulnerability to muscle damage could help overcome this mental barrier.
Perhaps most important, the research offers encouragement for aging individuals to stay active. With global population trends pointing toward an increasingly older demographic – the number of people over 60 is expected to double by 2050 and triple by 2100 – understanding how aging affects exercise responses becomes increasingly vital to public health.
Physical activity remains fundamental to “successful aging,” which includes physical, psychological, social, and cognitive aspects. Regular exercise can offset age-related declines in muscle strength and power, aerobic fitness, and body composition. This new understanding that older adults may not face excessive muscle damage removes a significant obstacle to activity.
For the average older adult considering starting or continuing an exercise program, this research delivers a clear message: your age should not hold you back. Your muscles may actually handle exercise stress better than previously thought, and the benefits of regular physical activity far outweigh the temporary discomfort of muscle damage. Be sure to speak with your doctor first before taking on any new physical challenges.
Using a smartphone or tablet for just one hour after going to bed raises the risk of insomnia by 59%, according to new research. This finding comes from one of the largest studies conducted on screen use and sleep among university students, highlighting how our nightly digital habits may be robbing us of crucial rest.
Researchers from the Norwegian Institute of Public Health examined data from over 45,000 university students and found that each additional hour spent using screens after going to bed not only significantly increased insomnia risk but also cut sleep duration by about 24 minutes. What’s particularly notable is how consistent this effect appears to be—regardless of whether students were scrolling through social media, watching movies, or gaming.
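As a back-of-the-envelope way to read those per-hour figures, the sketch below compounds them over several hours in bed. It assumes the 59% increase behaves like an odds ratio of roughly 1.59 per additional hour and that the 24-minute loss scales linearly; both are simplifications for illustration, not claims from the paper.

```python
# Rough illustration of the study's per-hour figures, assuming the 59% increase
# is an odds ratio of ~1.59 per extra hour of in-bed screen time and the
# 24-minute sleep loss scales linearly. Both are simplifying assumptions.
OR_PER_HOUR = 1.59
SLEEP_LOSS_MIN_PER_HOUR = 24

for hours in (1, 2, 3):
    odds_multiplier = OR_PER_HOUR ** hours
    lost_sleep = SLEEP_LOSS_MIN_PER_HOUR * hours
    print(f"{hours} h of screen time in bed: "
          f"insomnia odds x{odds_multiplier:.2f}, "
          f"about {lost_sleep} min less sleep")
```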
The Digital Bedtime Crisis
Sleep problems have reached concerning levels among university students globally. The study reports that about 30% of Norwegian students sleep less than the recommended 7-9 hours per night. Even more troubling, over 20% of male students and 34% of female students report sleep issues meeting clinical criteria for insomnia, numbers that have been rising in recent years.
Smartphones have transformed our bedrooms into entertainment centers. Previous research shows that over 95% of students use screens in bed, with an average screen time after going to bed of 46 minutes. Some studies have even found that 12% of young adults engage with their smartphones during periods they’ve self-reported as sleep time.
Many sleep experts have speculated that social media might be especially harmful for sleep compared to more passive activities like watching television. The reasoning seems logical – social media platforms are designed to keep users engaged through interactions, notifications, and endless scrolling features that make it difficult to disconnect. Plus, the social obligations and fear of missing out associated with platforms like Instagram and TikTok might make users more reluctant to put their devices away at bedtime.
Surprising Findings Challenge Common Beliefs
Researchers divided participants into three groups: those who exclusively used social media in bed (about 15% of the sample), those who used social media combined with other screen activities (69%), and those who engaged in non-social media screen activities only (15%).
Contrary to expectations, students who exclusively used social media in bed reported fewer insomnia symptoms and longer sleep duration compared to the other groups. The non-social media group experienced the highest rates of insomnia and shortest sleep duration, while those mixing social media with other activities were intermediate.
This unexpected outcome challenges the notion that social media is uniquely harmful to sleep. Instead, the research points to the total time spent on screens in bed, regardless of the specific activity, as the strongest predictor of sleep problems. Each additional hour of screen time after going to bed was consistently associated with poorer sleep outcomes across all three groups.
Why might social media-only users sleep better? Researchers propose that exclusively using social media might reflect a preference for socializing and maintaining connections with others, which generally protects against sleep problems. Being socially engaged has been linked to better sleep in numerous studies.
Alternatively, those experiencing the most sleep difficulties might deliberately avoid social media before bed, instead turning to activities like watching movies or listening to music as sleep aids. Many people with insomnia use screen-based activities to distract themselves from negative thoughts or anxiety that prevent sleep.
What This Means For Your Sleep
The study, published in Frontiers in Psychiatry, reveals how screens affect sleep through several pathways: direct displacement (screen time replacing sleep time), light exposure (suppressing melatonin production), increased mental arousal (making it harder to fall asleep), and sleep interruption (notifications disturbing sleep).
The findings from this study largely support the displacement hypothesis. If increased arousal from interactive content were the main factor, we would expect to see different associations between sleep and various screen activities. Instead, the consistent relationship between screen time and sleep problems across activity types indicates that simply spending time on screens—time that could otherwise be spent sleeping—may be the most important factor.
For university students already struggling with academic pressure, social adjustment, and mental health challenges, poor sleep represents an additional burden with potentially serious consequences. Sleep deprivation impairs attention, memory, and other cognitive functions crucial for academic success.
Non-screen users had 24% lower odds of reporting insomnia symptoms, confirming that keeping devices out of the bedroom is a worthwhile sleep hygiene practice. Even if it’s not a complete solution to sleep difficulties, it represents a behavior that could meaningfully improve sleep for many young adults.
“If you struggle with sleep and suspect that screen time may be a factor, try to reduce screen use in bed, ideally stopping at least 30–60 minutes before sleep,” says lead author, Dr. Gunnhild Johnsen Hjetland, in a statement. “If you do use screens, consider disabling notifications to minimize disruptions during the night.”
The next time you’re tempted to bring your phone to bed “just to check a few things,” remember the Norwegian students’ experience: each hour spent on that screen, regardless of what you’re doing, might cost you 24 minutes of precious sleep and significantly increase your chances of developing insomnia. Given what we know about the essential role of sleep in physical and mental health, learning, and overall wellbeing, that’s a trade-off worth reconsidering.
A close view on one of the most distant galaxies known: On the left are some 10,000 galaxies at all distances, observed with the James Webb Space Telescope. The zoom-in on the right shows, in the center as a red dot, the galaxy JADES-GS-z13-1. Its light was emitted 330 million years after the Big Bang and traveled for almost 13.5 billion years before reaching Webb’s golden mirror. Credit: ESA/Webb, NASA & CSA, JADES Collaboration, J. Witstok, P. Jakobsen, A. Pagan (STScI), M. Zamani (ESA/Webb).
At a time when light couldn’t easily travel through space due to a thick fog of neutral hydrogen, one galaxy managed to carve out its own bubble of clear space, allowing us to detect a specific light signal that should have been completely absorbed. This cosmic lighthouse from 13 billion years ago gives us our earliest direct glimpse of how the universe transitioned from darkness to light.
The galaxy, cataloged as JADES-GS-z13-1-LA, was observed at what scientists call a redshift of 13. While that technical term might not mean much to most of us, it represents an incredible distance in both space and time. When we look at this galaxy, we see light that has traveled for over 13 billion years to reach us.
This study, published in Nature, used the James Webb Space Telescope to observe this early galaxy. Scientists detected Lyman-alpha emission, a specific wavelength of light that’s easily absorbed by neutral hydrogen, the gas that filled the early universe. Finding this emission suggests the galaxy was actively clearing the cosmic fog around it, like turning on a light in a dark room.
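A quick calculation shows why such ancient ultraviolet light is observed with an infrared telescope: the expansion of the universe stretches the wavelength by a factor of (1 + z). The rest wavelength below is the standard Lyman-alpha value, and the closing note reflects NIRSpec's published 0.6-5.3 micrometer coverage.

```python
# How redshift moves a far-ultraviolet line into Webb's infrared range:
# observed wavelength = rest wavelength * (1 + z).
LYMAN_ALPHA_REST_NM = 121.567   # rest-frame Lyman-alpha, in nanometers (UV)
z = 13.0                        # approximate redshift of JADES-GS-z13-1

observed_nm = LYMAN_ALPHA_REST_NM * (1 + z)
print(f"Observed wavelength: {observed_nm / 1000:.2f} micrometers")
# About 1.70 micrometers: well into the near-infrared, inside NIRSpec's
# 0.6-5.3 micrometer range.
```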
From Cosmic Dark Ages to First Light
Recent observations with the James Webb Space Telescope have already revealed surprisingly bright galaxies existed earlier than astronomers expected. But this new finding provides something more concrete: direct evidence of reionization, the cosmic transformation that brought the universe out of darkness.
For context, in the first few hundred thousand years after the Big Bang, the universe expanded and cooled enough for protons and electrons to combine into neutral hydrogen atoms. This created a cosmic fog that blocked most light from traveling freely for hundreds of millions of years, a period astronomers call the cosmic “dark ages.”
Eventually, the first stars and galaxies began forming and producing ultraviolet radiation that started breaking apart these neutral hydrogen atoms. This gradually made the universe transparent to light (reionization).
Breaking Through the Cosmic Fog
The research team analyzed this distant galaxy using imaging and spectroscopy from JWST’s powerful instruments. The data revealed not just the usual signs of light being blocked by early-universe hydrogen, but also a surprisingly bright signal of light breaking through. Such strong emissions had previously only been seen in much younger galaxies when more of the universe had already been cleared of neutral hydrogen.
Astronomers also saw what they call an “extremely blue ultraviolet continuum” (essentially meaning this galaxy appears very blue in color). The fact that we could even see the Lyman-alpha emission means the galaxy was incredibly good at making and releasing powerful radiation, strong enough to break apart the hydrogen gas around it.
“We know from our theories and computer simulations, as well as from observations at later epochs, that the most energetic UV light from the galaxies ‘fries’ the surrounding neutral gas, creating bubbles of ionized, transparent gas around them,” says study author Joris Witstok from the University of Copenhagen, in a statement. “These bubbles percolate the Universe, and after around a billion years, they eventually overlap, completing the epoch of reionization. We believe that we have discovered one of the first such bubbles.”
What could produce such powerful radiation in this ancient galaxy? One explanation involves extremely massive, hot stars that are much more efficient at producing ionizing radiation than typical stars today. These cosmic giants could be heating surrounding gas to temperatures exceeding 100,000 Kelvin, far hotter than our Sun’s surface at about 5,800 Kelvin.
Another possibility is that this galaxy contains an active supermassive black hole. The intense radiation from material falling into such a black hole could efficiently ionize nearby gas. Supporting this idea, the researchers found the galaxy appears extremely compact, smaller than 114 light-years across, which is more compact than most galaxies seen at similar distances.
“Most galaxies are known to host a central, supermassive black hole. As these monsters engulf surrounding gas, the gas is heated to millions of degrees, making it shine brightly in X-rays and UV before disappearing forever,” says Witstok.
The researchers also considered whether this might be one of the universe’s very first generation of stars, called Population III stars, formed from pristine gas containing only hydrogen and helium. These stars would be substantially more massive and hotter than later stars. However, the galaxy seems slightly too bright to fit this explanation perfectly.
Rewriting the Timeline of Cosmic Dawn
Whatever is powering this ancient light source, its discovery reshapes our understanding of how the universe transitioned from darkness to light. Until recently, the consensus among astronomers was that reionization did not begin until the Universe was around half a billion years old, completing another half billion years later. But this study pushes the beginning of reionization significantly earlier than previously thought.
The finding also provides evidence for an important physical process called Wouthuysen-Field coupling, where Lyman-alpha photons affect the spin temperature of hydrogen atoms. Scientists hope to detect this with radio telescopes searching for signals from the early universe.
“We knew that we would find some of the most distant galaxies when we built Webb,” says study author Peter Jakobsen from the University of Copenhagen. “But we could only dream of one day being able to probe them in such detail that we can now see directly how they affect the whole Universe.”
The universe’s first light didn’t switch on all at once; it started with galaxies like this one, each creating its own bubble of clear space that eventually merged with others to transform the entire cosmos. By pushing back the timeline of this process and showing it began with ordinary galaxies rather than exceptional ones, this discovery connects the dots between the universe’s first few hundred million years and the transparent cosmos that would eventually allow for our existence.
Knowing two languages could preserve your brain for longer, even with Alzheimer’s. (stoatphoto/Shutterstock)
Learning a second language offers benefits beyond ordering food on vacation or reading foreign literature. Recent research from Concordia University suggests bilingualism might actually help protect the brain against some devastating effects of Alzheimer’s disease.
Scientists have long observed that some people maintain their thinking abilities despite significant brain damage. This disconnect, where brain deterioration doesn’t necessarily cause expected cognitive problems, has prompted researchers to develop ideas like “brain reserve,” “cognitive reserve,” and “brain maintenance” to explain this resilience. This study, published in Bilingualism: Language and Cognition, found evidence that speaking two or more languages might boost this resilience, especially through brain maintenance in people with Alzheimer’s.
Alzheimer’s accounts for about two-thirds of dementia cases worldwide and typically progresses from subjective cognitive decline (SCD) to mild cognitive impairment (MCI) before developing into full Alzheimer’s. This progression usually comes with brain shrinkage, particularly in the medial temporal lobe, which includes the hippocampus, a structure essential for forming new memories.
Earlier studies suggested bilingual individuals might experience a 4-to-5-year delay in Alzheimer’s symptom onset compared to those who speak just one language. But how exactly bilingualism might shield against cognitive decline hasn’t been fully understood. This new research examines the structural brain differences between bilinguals and monolinguals (people who only speak one language) across various stages of Alzheimer’s progression.
The research team analyzed data from 364 participants from two major Canadian studies. Participants ranged from cognitively healthy individuals to those with subjective cognitive decline, mild cognitive impairment, and Alzheimer’s disease.
Using brain imaging, researchers measured the thickness and volume of specific brain regions involved in language processing and areas typically affected by Alzheimer’s. They wanted to see if bilinguals showed signs of greater “brain reserve” (more neural tissue in language-related regions) or “cognitive reserve” (maintaining cognitive function despite significant brain deterioration).
Unlike some previous studies, bilinguals didn’t show greater brain reserve in language-related regions compared to monolinguals. However, a difference emerged when looking at the hippocampus, one of the first areas damaged by Alzheimer’s.
Older monolinguals with Alzheimer’s showed substantial reduction in hippocampal volume compared to those with milder impairment, following the expected pattern of brain degeneration. But bilinguals with Alzheimer’s showed a different pattern: their hippocampal volumes weren’t significantly smaller than bilinguals with milder cognitive issues.
While monolingual brains showed progressive shrinkage as the disease worsened, bilingual brains seemed to maintain their hippocampal volume despite disease progression. This points to what researchers call “brain maintenance,” preserving brain structure over time despite aging or disease.
The hippocampus is vital for forming new memories, and its deterioration closely connects with the memory loss so characteristic of Alzheimer’s. If bilingualism helps preserve hippocampal volume, it could explain why some studies have found delayed symptom onset in bilingual Alzheimer’s patients.
The bilingual participants came from diverse backgrounds, with about 38% reporting English as their first language, 39% reporting French, and the rest reporting various other languages. About 68% knew two languages, 22% knew three, and some participants reported knowing up to seven languages. Interestingly, many bilingual participants could be described as “late bilinguals,” those who learned their second language after age 5, with moderate self-reported second language ability.
The potential brain benefits of bilingualism might not be limited to those who grew up speaking multiple languages or who are highly fluent in their second language. Even learning a second language later in life and achieving moderate skill might contribute to cognitive resilience.
What does this mean for ordinary people? While the study doesn’t suggest that learning a second language will prevent Alzheimer’s, it adds to growing evidence that certain lifestyle factors, including language learning, may help build resilience against cognitive decline.
The benefits of learning a second language extend far beyond communication skills. The mental demands of managing multiple languages may help build a more resilient brain, one better equipped to withstand the challenges of aging and disease. While learning a second language is no cure, it could help maintain thinking abilities for longer despite underlying brain damage.
Anxiety has become an unwelcome companion for many, creeping into everyday life with relentless persistence. But for a growing number of young Americans, worry is no longer an uncontrolled intruder—it’s being managed, contained, and strategically addressed.
A recent survey of 2,000 adults across all generations by Talker Research uncovers a surprising trend: one in 10 young Americans deliberately carve out dedicated “worry time” in their daily routines. This approach stands in sharp contrast to older generations, with just 3% of Gen X and baby boomers adopting similar strategies.
A Generation Wrestling with Anxiety
The most striking revelation is the pervasive nature of worry among younger Americans. An overwhelming 62% of Gen Z and millennial respondents report feeling constantly anxious, compared to 38% of older generations. On average, people spend two hours and 18 minutes each day caught in the grip of worrisome thoughts—a significant chunk of time that could otherwise fuel productivity, creativity, or personal growth.
The timing of these worry periods reveals interesting patterns. A third of respondents find themselves most anxious when alone, while 30% are plagued by worries as they try to fall asleep. Another 17% are tormented by anxious thoughts upon waking, and 12% experience peak worry while getting ready for bed.
The Weight of Worry
When it comes to specific concerns, finances top the list. More than half (53%) of respondents cite money as their primary source of anxiety. Family worries follow closely, with 42% expressing deep concern about their loved ones. The same percentage fret about pending tasks and to-do lists.
Health concerns (37%), sleep anxiety (22%), and political uncertainties (19%) round out the top worries. For parents, the concerns extend far beyond personal anxieties. Seventy-seven percent express profound worry about the world their children are inheriting, with 34% specifically calling out climate change as a significant concern.
One parent’s raw emotion captures this generational anxiety: “Honestly, I worry that there won’t be a world for my child to grow up in.” Another wondered whether their children would experience the same opportunities they once enjoyed.
Strategic Approach to Mental Health?
The concept of scheduled worry time might seem counterintuitive, but mental health experts suggest it’s a deliberate approach to managing anxiety. By allocating a specific time to process and acknowledge worries, individuals can potentially reduce the overall impact of anxiety on their daily lives.
“Worry doesn’t just cloud our thoughts — it can seriously disrupt our sleep,” says Brooke Witt, Vice President of Marketing at Avocado Green Mattress, which commissioned the study. “When our minds are consumed by finances, family, or endless to-do lists, falling and staying asleep becomes a challenge, which directly impacts how rested we feel the next day.”
The survey suggests more than just a coping mechanism—it reveals a generational approach to mental health that is proactive, intentional, and self-aware. Younger Americans are not simply experiencing anxiety; they’re developing structured methods to understand, limit, and manage it.
The 10% who schedule dedicated worry time represent a potentially transformative approach to mental wellness. By containing their anxieties within a specific timeframe, they may be finding a way to prevent worry from consuming their entire day.
“There’s always something brewing in our minds — whether it’s work, family, or future concerns,” notes Amy Sieman, an affiliate manager with Avocado Green Mattress. “This research reveals how these everyday worries can follow us to bed, affecting both our sleep and our overall quality of life.”
Male sexual desire tends to decline with age—it’s a biological fact that many men face as the years pass. By age 70, about a quarter of men report a noticeable drop in sexual drive. But what if there were a relatively simple dietary approach that could help maintain libido well into later years?
A fascinating study published in Cell Metabolism reveals that intermittent fasting significantly boosts sexual behavior in male mice by altering brain chemistry in ways that enhance sexual motivation. The research suggests that brain chemistry might matter more than physical reproductive metrics when it comes to maintaining sexual function with age.
Scientists from the German Center for Neurodegenerative Diseases and University of Health and Rehabilitation Sciences in China discovered that mice subjected to intermittent fasting—alternating 24-hour periods of eating and not eating—maintained much higher reproductive success rates in old age compared to their continuously-fed counterparts. While only 38% of aged mice with unlimited food access successfully reproduced, a remarkable 83% of intermittently fasted mice remained fertile.
What makes this finding truly surprising isn’t just the striking difference in reproductive success, but the mechanism behind it. The fasting regimen didn’t improve traditional markers of reproductive health like testosterone levels, sperm count, or sperm quality. In fact, the fasting mice actually showed greater testis weight reduction than continuously-fed mice. The secret to their reproductive success lay entirely in behavior—the fasting mice simply showed more interest in mating.
The research team, led by Kan Xie, Yu Zhou, and Dan Ehninger, identified a clear chemical pathway for this behavioral change. Aging typically raises levels of serotonin in the brain, which acts as a sexual inhibitor. Intermittent fasting prevented this age-related serotonin increase by reducing the amount of its precursor, the amino acid tryptophan, available to the brain.
Study authors explain that this mechanism works through a unique metabolic pathway. When mice fast and then refeed, their skeletal muscles draw more tryptophan from the bloodstream. With less tryptophan circulating in the blood, less crosses into the brain, resulting in lower serotonin production and consequently less inhibition of sexual behavior.
To confirm their findings, the researchers administered 5-HTP—a direct precursor to serotonin that bypasses the rate-limiting step in serotonin synthesis—to fasting mice. This promptly reversed the behavioral benefits, with the treated mice showing decreased sexual interest. This confirmed that reduced brain serotonin was indeed responsible for the enhanced sexual behavior in fasting mice.
While the study was conducted in mice, the core biochemical pathways involved function similarly in humans. Tryptophan metabolism and serotonin synthesis operate through comparable mechanisms across mammalian species, suggesting the potential for similar effects in humans.
The intermittent fasting regimen used in the study wasn’t extreme. The mice alternated between 24 hours of unlimited food access and 24 hours of fasting. During feeding days, they ate more than usual, compensating for fasting days. Overall, they consumed only about 13% fewer calories than continuously-fed mice. This modest reduction in calorie intake, combined with the cyclical fasting/feeding pattern, produced significant effects on brain chemistry.
It’s worth noting that the benefits weren’t immediate—a brief six-week intervention didn’t improve sexual behavior. The changes required longer-term adaptation, suggesting that lasting modifications to brain chemistry take time to develop.
For men concerned about age-related decline in sexual interest, this research offers food for thought. While human studies are needed to confirm similar effects, the fundamental biological mechanisms are plausible. Before making any changes to your dietary routine, it’s important to speak with your doctor first.
From an evolutionary perspective, these findings challenge the notion that dietary restriction necessarily suppresses reproduction. While many theories suggest organisms redirect resources from reproduction to survival during food scarcity, this research indicates that certain patterns of food availability might actually enhance reproductive behavior, at least in males.
It’s something many of us probably haven’t thought of before, but perhaps what happens in the kitchen might influence what happens in the bedroom. While results from animal studies don’t automatically transfer to humans, the fundamental mechanisms involved are similar enough to warrant further investigation. After all, when it comes to maintaining quality of life with age, few aspects matter more than preserving the capacity for romantic connection.
Imagine lounging in a hammock on a sunny beach, palm trees swaying in the breeze, the bright turquoise of the sea barely dimmed by your sunglasses. You glance up and down the beach: not a soul in sight. It’s the first day of your holiday, and your whole body feels so relaxed; you could dissolve into the sand and be swept out to sea. You take a lazy sip of your pina colada and take it all in. Out of nowhere, a voice whispers into your ear: “No, really, take it in.”
That inner voice? It’s echoing a simple but often overlooked idea: Good experiences don’t always stick unless we make an effort to let them. That’s the premise behind Hardwiring Happiness, a book by psychologist Rick Hanson, who explores how consciously lingering on positive moments can help counterbalance the brain’s built-in negativity bias. That bias might have served a useful evolutionary purpose ages ago when our survival was more frequently at stake, but in a relatively stable 21st-century environment, it often traps us in cycles of rumination.
Hanson’s approach isn’t about forced optimism — it’s grounded in the idea of neuroplasticity, the brain’s capacity to change over time through repeated experience. Drawing on psychological theory and early research suggesting that “deliberately taking in the good” may help build resilience and emotional well-being, Hanson developed the HEAL method:
Have a good experience
Enrich it
Absorb it
Link it to other positive or negative experiences.
While Hanson’s HEAL method draws on established neuroscience concepts, it remains a clinical and contemplative approach rather than a rigorously validated scientific intervention. In a small exploratory study using pre-post self-report measures, Hanson and colleagues assessed the effects of this intervention on 21 healthy subjects and found statistically significant self-reported improvements in measures like savoring and self-compassion, though the small sample size and lack of a control group limit the strength of the conclusions. The participants also reported statistically borderline improvements in self-esteem, positive rumination (self-focus), pride, happiness, and satisfaction with life. Many of these effects persisted after two months.
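To illustrate what a pre-post comparison of that kind involves, here is a toy sketch with invented "savoring" scores; a paired t-test is one common way to check whether post-intervention self-reports differ from baseline. Nothing here uses the actual study's data.

```python
# Toy illustration of a pre-post comparison like the one described above.
# The "savoring" scores are invented; a paired t-test is one common way to
# test whether post-intervention self-reports differ from baseline.
from scipy import stats

pre = [3.1, 2.8, 3.5, 3.0, 2.6, 3.3, 2.9, 3.2]    # baseline savoring scores
post = [3.6, 3.1, 3.9, 3.4, 3.0, 3.5, 3.3, 3.6]   # after the intervention

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Without a control group, even a small p-value can't rule out placebo or
# practice effects - the same caveat noted for the 21-person study.
```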
Change the mind, change the brain?
Can you really rewire your brain this way — simply by changing your mind? That’s the idea behind neuroplasticity: the brain’s ability to adapt and reorganize in response to experience. Researchers investigate it through a combination of brain imaging and behavioral assessments. For example, if someone is able to learn a new skill more quickly following an intervention, scientists can correlate this with changes in brain activity, using what’s called “task-based fMRI.” But the details of cause-and-effect are far from simple, and the research methods far from perfect. Although there is considerable evidence for neuroplasticity as a phenomenon related to health and well-being, skeptics warn of “neuroplasticity hype,” and positive neuroplasticity itself has not been corroborated neuroscientifically in humans.
Still, Hanson says, we’ve come a long way toward understanding the relationship between mind and brain.
“As science has progressed in the last hundred to a hundred and fifty years with the study of the nervous system,” he told Big Think, “the correlations have become increasingly well understood and tight between ongoing mental activity — hearing, seeing, loving, hating, wanting, remembering — and the underlying neural activity that is their physical basis.”
A number of brain imaging studies suggest that certain mental practices, such as mindfulness meditation, are associated with structural and functional brain changes, though questions remain about causality and long-term effects.
In the 1960s, researchers began using electroencephalography (EEG) to study neural activity during meditation. In the 1970s came magnetic resonance imaging (MRI), and by the 1990s — the so-called “decade of the brain” — scientists were increasingly able to associate specific mental experiences with distinct patterns of neural activity. For instance, one seminal study of nuns praying inside fMRI machines showed that their brains’ reward centers lit up in ways similar to people using cocaine.
“It doesn’t mean connecting with Christ consciousness is the same as taking cocaine,” Hanson notes, “but they were starting to find underlying neural correlates.”
A growing body of research shows that meditation and other contemplative practices can promote neuroplasticity, encouraging the brain to form new connections and adapt over time. In the mid-2000s, as Hanson and his colleagues began combing through the research literature, they wondered whether they could flip things around and harness what scientists had gathered about the brain to use in contemplative and clinical practice, an investigation which ultimately became the basis for HEAL.
Could they deliberately activate the brain to induce certain mental activities that would lead to lasting changes in the brain and, ultimately, support the development of optimal traits like a more positive outlook on life? As Hanson put it: “Could we use our mind to stimulate and change our brain to benefit our mind?”
If so, harnessing brain science could, in theory, motivate people who wouldn’t otherwise think to take up a “mental hygiene” regime such as meditation.
“When people realize this airy-fairy, woo-woo stuff is actually helping their own brain, they get much more motivated,” he said. Ruminating over the state of the world may not be helpful, but “when you slow down, take a moment to feel close to your friend or partner, and let that really land inside, that’s changing your brain for the better.”
While the precise mechanisms remain uncertain, Hanson points to the role of the autonomic nervous system — particularly how social connection and safety cues can downregulate stress responses — as one pathway through which positive experiences may shape long-term well-being.
“If I want to calm myself down, it’s important to touch my partner, or my dog, because that social engagement is going to ripple down and calm my heart,” he said.
For decades, we’ve been fed a consistent message: monogamous relationships represent the gold standard of romantic fulfillment. This belief runs so deep that researchers have now given it a name—the “monogamy-superiority myth.” It’s a belief that has shaped personal choices, public policies, and professional practices, despite remarkably little evidence supporting the claim.
A new review published in The Journal of Sex Research directly challenges this assumption with data from nearly 25,000 individuals. The findings? When it comes to both relationship and sexual satisfaction, there’s virtually no difference between people in monogamous relationships and those in consensually non-monogamous arrangements.
This extensive review, led by Joel R. Anderson of La Trobe University, represents the first comprehensive analysis comparing satisfaction levels across different relationship structures. The findings effectively challenge the notion that non-monogamous relationships are somehow lacking or less fulfilling than monogamous ones.
The Persistence of Monogamy as the ‘Ideal’
Western society has long operated under the assumption that monogamy is not just normal, but optimal. This belief has been reinforced through cultural messages, religious teachings, and even healthcare practices. People in non-monogamous relationships often face judgment, discrimination, and the assumption that their relationship choices indicate personal problems or instability.
The research team identified several reasons these beliefs persist. For many, monogamy is seen as a moral choice guided by religion or cultural norms. It’s often viewed as “normal” and beneficial because it allows people to avoid stigma. Monogamous relationships are frequently assumed to result in better health outcomes, greater stability, and even better intimacy—assumptions the new research directly contradicts.
‘Monogamish’ Relationships Are Better?
The researchers examined studies conducted between 2007 and 2024, mostly in Western countries like the United States, Canada, and Australia. This body of research included diverse participants across sexuality and gender identity, though most samples were predominantly white.
Non-monogamy in these studies covered various relationship structures, including:
Polyamory: maintaining several loving relationships at once
Open relationships: agreements allowing sex outside the primary relationship
Swinging: partners engaging in outside sexual activities together, often at organized events
“Monogamish” arrangements: mostly monogamous relationships with occasional agreed-upon exceptions
Across these diverse relationship structures, the analysis found that monogamous and non-monogamous people reported basically identical levels of both relationship and sexual satisfaction. This pattern held true regardless of participants’ sexuality, with both straight and LGBTQ+ samples showing no significant differences.
Some interesting details emerged when researchers looked at specific types of non-monogamous arrangements. People in “monogamish” relationships reported slightly higher relationship satisfaction than those in strictly monogamous relationships. Similarly, polyamorous individuals and swingers reported somewhat higher sexual satisfaction than their monogamous counterparts.
Another surprising finding emerged when researchers examined different aspects of relationship satisfaction. Non-monogamous individuals actually rated trust higher than monogamous individuals, while scoring equally on commitment, intimacy, and passion. This challenges the common assumption that non-monogamous relationships necessarily involve less trust or commitment.
Study authors suggest that non-monogamous relationships might actually strengthen certain relationship skills. The nature of managing multiple relationships might encourage people to put more effort into communication, openness, and understanding—all key components of trust.
Changing Norms?
Despite the stigma and discrimination that non-monogamous people often face, their reported satisfaction matched or sometimes exceeded those of monogamous individuals.
The research team offers several explanations for these findings. Non-monogamous relationships may allow people to experience more variety and freedom. These structures let people have different needs met by different partners, whereas monogamous individuals must find all their needs satisfied by one person. Research also indicates that non-monogamy can encourage personal growth and independence, which may boost relationship and sexual satisfaction.
These findings matter for therapists, counselors, and other healthcare professionals who work with non-monogamous clients. Previous studies have shown that healthcare practitioners sometimes view non-monogamy as a problem or sign of trouble, making assumptions that can damage the therapeutic relationship.
For the roughly 5% of adults currently in non-monogamous relationships—and the approximately 20% who have tried consensual non-monogamy at some point—these findings validate that their relationship choices can lead to satisfying, fulfilling partnerships.
It’s worth noting that while these results show equal satisfaction across relationship structures, they don’t suggest any particular relationship style is right for everyone. Personal preferences, values, and needs remain most important in determining the best relationship arrangement for each person.
Ultimately, these findings don’t just validate non-monogamous relationships—they invite us to question assumptions about relationships that we may have never examined. Perhaps satisfaction has less to do with relationship structure than with how well any relationship meets the unique needs of the people involved.
There’s promising news for fitness enthusiasts looking to optimize their body composition: combining a time-restricted eating (also known as intermittent fasting) regimen with your exercise routine may help reduce body fat while preserving muscle mass.
Researchers have discovered that coordinating when you eat with your exercise routine might significantly improve body composition results, according to a comprehensive study published in the International Journal of Obesity. The new meta-analysis by scientists at the University of Mississippi, along with colleagues from Texas Tech University, reveals an intriguing fitness strategy that doesn’t involve fancy equipment, expensive supplements, or complicated diet plans.
The secret to a truly fit body may be all about timing your meals and your workout in concert with one another.
The Power of Time-Restricted Eating with Exercise
Time-restricted eating (TRE) involves limiting all food consumption to a specific window—typically 4-12 hours daily—while fasting for the remaining hours. Unlike other dietary approaches that dictate specific foods or calorie counts, TRE focuses simply on when you eat.
The research team analyzed 15 studies involving 338 participants who followed TRE protocols while engaging in structured exercise programs. They compared these individuals to control groups who performed identical exercises but ate without time restrictions.
The results were clear: people who combined TRE with exercise lost approximately 1.3 kg (2.9 pounds) more fat and reduced their body fat percentage by an additional 1.3% compared to those who exercised without meal timing restrictions. Perhaps most importantly, muscle mass wasn’t significantly affected, indicating that TRE doesn’t compromise muscle preservation during exercise programs.
Most studies used a 16:8 schedule—16 hours fasting, 8 hours eating—with feeding windows typically between noon and 8 p.m. Importantly, exercise was performed during feeding hours, not while fasting, which likely helped preserve muscle mass and optimize performance.
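To make that schedule concrete, here is a minimal Python sketch of the kind of 16:8 arrangement most of the reviewed studies used, assuming a noon-to-8 p.m. eating window; the function and the example workout time are illustrative, not taken from the paper.

from datetime import time

EATING_START = time(12, 0)   # noon
EATING_END = time(20, 0)     # 8 p.m. -> an 8-hour eating window, 16 hours fasting

def within_eating_window(t: time) -> bool:
    """Return True if a clock time falls inside the daily eating window."""
    return EATING_START <= t <= EATING_END

workout = time(17, 30)  # a 5:30 p.m. session, placed inside feeding hours as in most studies
print(within_eating_window(workout))  # True: eating and training both fall within the window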
“[T]ime-restricted eating appears to induce a small decrease in fat mass and body fat percentage while conserving fat-free mass in adults adhering to a structured exercise regimen, as opposed to exercise-matched controls without temporal eating restrictions,” the authors write.
Why This Combination Works
Several mechanisms might explain why restricting your eating window enhances fat loss beyond exercise alone.
For many people, TRE naturally reduces caloric intake by limiting opportunities to eat. However, benefits persisted even in studies that controlled for calories, indicating that timing itself matters regardless of how much you eat.
Eating during daylight hours may better align with our body’s natural biological rhythms—the internal clocks that regulate numerous physiological processes. This alignment could optimize metabolic function compared to the typical modern pattern of eating from early morning until late night.
TRE may also trigger beneficial hormonal changes, including increased levels of compounds that enhance fat burning (adiponectin, noradrenaline, growth hormone) while decreasing stress hormones like cortisol. Additionally, fasting periods activate metabolic pathways that promote fat oxidation, potentially amplifying exercise’s effects.
The research examined multiple exercise approaches, including aerobic training (running, cycling), resistance training (weightlifting), and concurrent training (combining both). The benefits held across these different exercise modes, indicating the TRE plus exercise formula works regardless of your preferred workout style.
What This Means for Your Fitness Routine
Before rushing to adopt this approach, however, several factors deserve consideration. The benefits, while statistically significant, were moderate in size. Individual responses likely vary considerably based on factors not fully captured in current research. And since most studies were short-term (typically 4-8 weeks), the long-term sustainability and effects remain largely unknown.
It’s also worth noting that most participants were already experienced exercisers in good metabolic health, with relatively few studies including those with obesity. Whether the same benefits apply to beginners or those with significant metabolic challenges remains unclear.
For active individuals looking to fine-tune their approach to body composition, this research provides preliminary support for a simple yet potentially effective strategy: timing meals alongside exercise may help optimize fat loss while preserving valuable muscle tissue.
As always, before making any changes to your diet or daily health regimen, talk to your doctor first.
Treating female pain may require a different approach than pain management for men. (My Ocean Production/Shutterstock)
For decades, women suffering from chronic pain have been told “it’s all in your head” when treatments that work for men fail them. Now, research from the University of Calgary reveals that women’s pain actually operates through entirely different biological pathways than men’s. Scientists have discovered that the same protein triggers pain in both sexes, but through completely different immune cells and chemical signals.
A new study published in Neuron reveals that a protein called pannexin-1 (or Panx1 for short) works differently in males and females. This helps to explain why women are more likely to develop chronic pain conditions and why they often don’t respond as well to certain treatments.
The Divide in Pain Research
Most pain research has historically focused on male subjects, even though women make up the majority of chronic pain patients. This study aims to fix that imbalance by looking at both male and female animals to understand why pain works differently between sexes.
While both sexes use the Panx1 protein in their immune cells to create pain signals, they use completely different cells and chemical messengers to do it.
In males, Panx1 works through cells called microglia (the immune cells of the spinal cord and brain) to release a protein called VEGF, which increases pain sensitivity. In females, however, Panx1 works through CD8+ T cells (a type of white blood cell) to release leptin, a hormone best known for its role in hunger and metabolism. This may help explain why treatments that target microglia cells work well for reducing pain in males but often fail in females.
Crossing Biological Boundaries
When the researchers took microglia cells from male animals, activated them, and transferred them into females, the females developed pain. Similarly, when they transferred activated female T cells into males, the males also developed pain. This confirmed that these sex-specific mechanisms weren’t just correlations; they actually cause pain.
The researchers also created mice that lacked the Panx1 protein specifically in microglia cells. Male mice without this protein showed much less pain after nerve injury, while female mice still developed normal pain sensitivity.
When they analyzed fluid from the spinal cord, they found that after nerve injury, males had higher levels of VEGF while females had higher levels of leptin. Blocking VEGF reduced pain in males but not females, while neutralizing leptin reduced pain in females but not males.
Hope for Better Pain Treatments
This could lead to better pain treatments for everyone. Current treatments for neuropathic pain (pain caused by nerve damage) include anticonvulsants, antidepressants, and opioids. These treatments tend to work less effectively in women and often cause worse side effects.
With this new knowledge, doctors might eventually be able to prescribe treatments targeted specifically to each sex, like VEGF blockers for men and leptin blockers for women.
This discovery is particularly important for conditions like fibromyalgia, which affects women much more often than men. Previous studies have shown that leptin levels can predict pain severity in women with fibromyalgia. This research now provides a possible explanation for that connection.
Panx1 could be a treatment target that works for both sexes. Although the way the body reacts to this protein is different for men and women, medications targeting it could help both, potentially transforming pain treatment.
For women who have struggled to have their pain taken seriously or effectively treated, this research provides solid evidence that female pain may deserve dedicated research and targeted treatments. Doctors may eventually move beyond one-size-fits-all approaches to develop treatments tailored to each person’s unique biology.
Doctors are increasingly being asked to use AI systems to help diagnose patients, but when mistakes happen, they take the blame. New research shows physicians are caught in an impossible trap: use AI to avoid mistakes, but shoulder all responsibility when that same AI fails. This “superhuman dilemma” is the healthcare crisis nobody’s talking about.
The Doctor’s Burden: Caught Between AI and Accountability
New research published in JAMA Health Forum explains how the rapid deployment of artificial intelligence in healthcare is creating an impossible situation for doctors. While AI promises to reduce medical errors and physician burnout, it may be worsening both problems by placing an unrealistic burden on physicians.
Researchers from the University of Texas at Austin found that healthcare organizations are adopting AI technologies much faster than regulations and legal standards can keep pace. This regulatory gap forces physicians to shoulder an extraordinary burden: they must rely on AI to minimize errors while simultaneously bearing full responsibility for determining when these systems might be wrong.
Studies reveal that the average person assigns greater moral responsibility to physicians when they’re advised by AI than when guided by human colleagues. Even when there’s clear evidence that the AI system produced wrong information, people still blame the human doctor.
Physicians are often viewed as superhuman, expected to have exceptional mental, physical, and moral abilities. These expectations go far beyond what is reasonable for any human being.
When Two Decision-Making Systems Collide
Physicians face a complex challenge when working with AI systems. They must navigate between “false positives” (putting too much trust in wrong AI guidance) and “false negatives” (not trusting correct AI recommendations). This balancing act occurs amid competing pressures.
Healthcare organizations often promote evidence-based decision-making, encouraging physicians to view AI systems as objective data interpreters. This can lead to overreliance on flawed tools. Meanwhile, physicians also feel pressure to trust their own experience and judgment, even when AI systems may perform better in certain tasks.
Adding to the complexity is the “black box” problem. Many AI systems provide recommendations without explaining their reasoning. Even when systems are made more transparent, physicians and AI approach decisions differently. AI identifies statistical patterns from large datasets, while physicians rely on reasoning, experience, and intuition, often focusing on patient-specific contexts.
The Hidden Costs of Superhuman Expectations
The consequences of these expectations affect both patient care and physician wellbeing. Research from other high-pressure fields shows that employees burdened with unrealistic expectations often hesitate to act, fearing criticism. Similarly, physicians might become overly cautious, only trusting AI when its recommendations align with established care standards.
This defensive approach creates problems of its own. As AI systems improve, excessive caution becomes harder to justify, especially when rejecting sound AI recommendations leads to worse patient outcomes. Physicians may second-guess themselves more frequently, potentially increasing medical errors.
Beyond patient care, these expectations take a psychological toll. Research shows that even highly motivated professionals struggle to maintain engagement under sustained unrealistic pressures. This can undermine both quality of care and physicians’ sense of purpose.
Too much salt has long been blamed for heart problems, but new research suggests it might harm our minds too. Scientists from Nanjing Medical University have discovered a surprising connection between high-salt diets and depression-like behaviors in mice, potentially explaining why depression rates continue rising alongside our consumption of processed foods.
The research team found that excessive salt intake triggers specific immune responses in the brain that can lead to behaviors resembling depression. Their findings, published in The Journal of Immunology, offer a biological explanation for previously observed connections between processed food consumption and mood disorders.
Depression affects millions worldwide, with lifetime prevalence reaching 15-18% in many populations. Modern Western diets, especially fast food, contain dramatically more sodium than home-cooked meals—sometimes exceeding homemade options by 100-fold.
The Salt-Depression Connection
In the study, mice fed high-salt diets showed behaviors remarkably similar to those experiencing chronic stress. They explored less, displayed heightened anxiety, and spent more time motionless during tests measuring “behavioral despair”—patterns that parallel human depression symptoms.
The researchers investigated the biological mechanisms behind these behavioral changes. High-salt diets significantly increased production of Interleukin-17A (IL-17A), an immune signaling molecule, particularly in specialized immune cells called gamma delta T cells (γδT cells).
Previous research had linked elevated IL-17A to depression, but this study reveals a direct pathway from dietary salt to increased IL-17A production to depression-like symptoms.
To confirm this connection, the team tested mice genetically modified to lack the ability to produce IL-17A. These mice showed no signs of depression despite consuming high-salt diets. Even more convincingly, when researchers removed the specific immune cells that produce IL-17A, the animals no longer developed depression-like behaviors on high-salt diets.
What This Means for Humans
While conducted in mice, the research has compelling implications for human health. Population studies have already shown links between high-salt diets and increased depression rates. This study offers a potential explanation for those observations.
The average American diet contains about 3,400 mg of sodium daily—far exceeding the American Heart Association’s recommended maximum of 2,300 mg. Fast food meals often deliver an entire day’s worth of recommended sodium in a single sitting.
This isn’t the first research connecting diet and mental health. Mediterranean diets rich in fruits, vegetables, whole grains, olive oil, and lean proteins correlate with lower depression rates. Conversely, diets heavy in processed foods, sugars, and unhealthy fats tend to increase depression risk.
The distinctive aspect of this study is identifying a specific biological pathway connecting diet directly to depression-like behaviors. This precision opens doors to potential new treatment approaches targeting the immune system rather than just brain chemistry.
Simple Ways To Reduce Salt Intake
Current depression treatments typically focus on neurotransmitter imbalances using medications like SSRIs or on changing thought patterns through therapy. The discovery that dietary factors might contribute to depression through immune pathways represents an important shift in how we might approach mental health care.
Applying these findings doesn’t necessarily require waiting for new pharmaceutical treatments. Simple dietary changes are accessible to most people:
Reducing processed food intake
Eating more home-cooked meals
Checking food labels for sodium content
Using herbs and spices instead of salt for flavoring
Some health professionals already recommend the DASH diet (Dietary Approaches to Stop Hypertension) for patients with high blood pressure. This diet emphasizes fruits, vegetables, whole grains, lean proteins, and reduced sodium. This new research hints such approaches might benefit mental health too.
Beyond individual choices, these findings could influence public health policies around sodium reduction in processed foods. Some countries have already implemented such regulations: the United Kingdom’s salt reduction program has achieved a 15% decrease in average salt intake since implementation.
While more research is needed before definitive conclusions can be drawn about salt reduction as a depression treatment in humans, this study adds to mounting evidence that what we eat affects both body and mind. For those struggling with depression, these findings don’t suggest dietary changes should replace established treatments like therapy and medication, but they highlight diet as an important complementary factor in mental health care.
Children who are breastfed for longer periods of time during infancy experience fewer developmental delays and a reduced risk of neurodevelopmental conditions, including disorders like autism and ADHD, according to new research. The study, led by scientists at the KI Research Institute in Israel, confirms what many parents might hope to hear: breastfeeding babies for at least six months appears to boost their developmental outcomes.
While health organizations have recommended breastfeeding for the first six months of life for years, this study offers particularly strong evidence by addressing problems that weakened earlier research on the topic.
Published in JAMA Network Open, the study involved health data from 570,532 Israeli children, including nearly 38,000 sibling pairs. It ranks among the largest investigations into breastfeeding and development ever conducted.
Led by Dr. Inbal Goldshtein and Dr. Yair Sadaka, the research team used an innovative approach to ensure their findings were reliable. The study uniquely combined routine developmental checkup records from Israel’s maternal-child health clinics with national insurance disability data, allowing researchers to track both developmental milestone achievement and diagnosed conditions.
They compared siblings within the same families who had different breastfeeding experiences but shared genes and home environment. This clever design controlled for family factors like parental intelligence and involvement that often confuse results in other studies.
Children exclusively breastfed for at least six months had 27% lower odds of developmental delays compared to those breastfed for shorter periods. Even children who received both breast milk and formula for six months or more showed a 14% reduction. When examining siblings with different breastfeeding histories, those who breastfed longer had 9% lower odds of milestone delays and 27% lower odds of neurodevelopmental conditions compared to siblings who breastfed for shorter periods or not at all.
The benefits remained clear even after accounting for numerous factors, including pregnancy duration, birth weight, maternal education, family income, and postpartum depression.
The advantages appeared most notable in language and social development—crucial areas for school success and forming friendships. Motor skills improved too, though less dramatically. Premature babies, who typically face higher developmental risks, seemed to benefit even more from extended breastfeeding than full-term infants.
For parents struggling with breastfeeding choices, there’s reassuring news. When researchers specifically examined siblings who both breastfed for at least six months—one exclusively on breast milk and one receiving some formula—exclusive breastfeeding didn’t show a meaningful additional advantage. This indicates that maintaining some breastfeeding for longer might matter more than avoiding formula completely.
The study’s authors believe that their findings should inform public health policies and support systems rather than pressure individual families. Their goal remains helping children reach their potential, not creating guilt among parents facing breastfeeding challenges.
Researchers emphasize that while breastfeeding is linked to better development, it’s just one of many factors that shape a child’s growth. They noted that identifying changeable factors like nutrition is essential to helping each child reach their potential.
Despite expert recommendations, actual breastfeeding rates often fall below targets. Many mothers struggle to balance breastfeeding with work demands, inadequate parental leave, and aggressive formula marketing.
Formula companies spend around $55 billion yearly promoting their products, sometimes undermining women’s confidence in their ability to breastfeed. The authors advocate for stronger supportive policies, including better parental leave and limits on formula marketing practices.
The biological mechanism for these benefits may relate to breast milk’s effects on brain development. Earlier research has shown differences in brain structure between breastfed and formula-fed babies. Some scientists believe these benefits might work through effects on the infant’s gut microbiome, which connects to brain development through what’s known as the gut-brain axis.
As the researchers conclude, these results may help guide not only parents but also public health initiatives aimed at giving children the best developmental start possible. When every advantage counts for our children, supporting breastfeeding appears to be a worthwhile investment.
For thousands of years, ginseng has been treasured in Eastern medicine for its health-promoting properties. Now, modern science is uncovering the remarkable potential of one specific component within this ancient herb – Compound K, a rare metabolite formed when certain ginsenosides from ginseng are broken down in the gut. This substance is becoming a focal point in skin aging research, offering new possibilities for combating wrinkles, skin laxity, and other visible signs of aging.
Research published in the Journal of Dermatologic Science and Cosmetic Technology reveals that Compound K (CK) fights skin aging through multiple biological pathways, targeting different aspects of the aging process simultaneously. The study was conducted by scientists at Yunnan University and Guangdong Industry Polytechnic University.
How Skin Ages and Why Compound K Matters
Skin aging happens because of internal factors like genetics and metabolism, along with external forces such as ultraviolet radiation and pollution. These elements combine to create thinning skin, reduced elasticity, wrinkles, and uneven color. The research reveals Compound K tackles these issues through several different mechanisms at once.
One key way Compound K benefits aging skin is by strengthening its protective barrier. The research shows that CK boosts levels of desmosome adhesive protein 1 (DSC1) while reducing harmful enzymes that can compromise skin integrity. In everyday terms, this means skin treated with this ginseng compound retains moisture better and has improved defense against environmental damage.
Collagen breakdown is a major culprit behind skin aging. UV radiation triggers enzymes called matrix metalloproteinases (MMPs), which degrade collagen and lead to wrinkles and sagging. Studies demonstrate that Compound K effectively blocks these collagen-destroying enzymes in skin cells exposed to UV light, helping maintain the skin’s structural framework.
Beyond just preventing damage, Compound K actively promotes repair by stimulating collagen production. It also increases hyaluronic acid in the skin by enhancing the gene responsible for producing this moisture-binding molecule that naturally decreases as we age.
Beyond Surface-Level Benefits: Cellular and Genetic Effects
Particularly interesting is Compound K’s effect on cellular “housekeeping” – the process where cells clean out damaged components (known scientifically as autophagy). This natural maintenance system slows with age, contributing to cellular dysfunction. Research indicates that CK regulates this cleaning process, helping cells function optimally for longer periods.
The compound’s anti-inflammatory benefits are substantial too. Low-grade chronic inflammation, sometimes called “inflammaging,” increasingly appears to drive various age-related conditions, including skin aging. Through several pathways, Compound K reduces inflammation and resulting cellular damage.
At the genetic level, Compound K activates SIRT1, often referred to as a longevity gene because of its role in cellular health. Studies reveal that UV exposure significantly reduces SIRT1 expression, speeding up aging, while CK counteracts this effect in a dose-dependent manner.
For those concerned about cellular energy decline – a hallmark of aging – research points to Compound K improving mitochondrial function, our cells’ power plants. Studies show it promotes mitochondrial health, maintains proper dynamics, and increases energy production. Since mitochondrial dysfunction characterizes aging cells, this benefit could significantly improve skin health and appearance.
From Lab to Skincare: The Practical Applications
Getting active ingredients through the skin barrier presents a major challenge in skincare. Fortunately, Compound K’s relatively small molecular weight allows it to penetrate skin layers more effectively than many other ingredients. Research using artificial skin models confirms CK can move through skin layers, making it a viable option for topical applications.
Remarkably, studies suggest that when applied to skin, other ginsenosides in skincare products can transform into Compound K within the skin itself, potentially boosting the effectiveness of ginseng-based products. This conversion process in skin mirrors what happens in the digestive system when ginsenosides are consumed orally.
While typical anti-aging ingredients often target just one aspect of aging, Compound K’s wide-ranging approach gives it unique value. It simultaneously improves skin barrier function, collagen production, moisture retention, inflammation control, and cellular energy – addressing virtually every major contributor to visible aging.
This research coincides with growing consumer preference for plant-based skincare with scientific backing. The natural cosmetics market continues expanding rapidly as consumers seek evidence-based natural alternatives to synthetic compounds. Ginseng extracts rich in Compound K could meet both the demand for natural ingredients and the expectation for proven results.
Is Ginseng the Future of Anti-Aging Research?
Skincare developers now face the task of creating stable delivery systems that maximize Compound K’s benefits. The compound’s multifaceted effects suggest it could enhance products targeting various signs of aging, from fine lines to skin firmness and radiance.
For consumers, the study shows that products containing Compound K or its precursors might offer broader anti-aging benefits than single-action ingredients. However, concentration matters – many studies used relatively high amounts of the compound, which may not be present in all commercial products claiming ginseng benefits.
Meanwhile, more studies like this one could completely change the future of the skin aging industry. Simple moisturizers claiming miraculous anti-aging benefits are being replaced by ingredients like Compound K that work through specific cellular pathways, genetic expression, and metabolic processes.
While Compound K isn’t a magical fountain of youth, it represents a scientifically validated approach to supporting skin’s natural functions and resilience. That resilience – rather than fighting the inevitable – may be the key to aging well.
In case you needed another reason to hold off on buying your child a phone, research shows a troubling connection between childhood screen habits and teenage mental well-being. The eight-year study, which tracked children from elementary school into adolescence, found that kids who racked up more screen time—especially on mobile devices—showed higher levels of stress and depressive symptoms as teenagers.
The study adds to the large body of research that should make parents think twice about unlimited device access, especially as more children experience mental health struggles at an early age. Between one-quarter and one-third of adolescents worldwide experience mental health problems, with symptoms typically first appearing during the teenage years. Researchers now have more concrete evidence about lifestyle factors that might help prevent psychological distress before it takes root.
Digital Habits and Mental Health: What the Research Shows
Study authors used data from the Physical Activity and Nutrition in Children (PANIC) study, which followed 187 Finnish children over eight years, from ages 6-9 into their mid-teens. Researchers regularly checked in on their physical activity, screen time, sleep patterns, and eating habits. When these children reached adolescence (average age 15.8), the researchers assessed their mental health using standardized measures of stress and depression.
The data painted a clear picture: teenagers who had accumulated more total screen time and mobile device use throughout childhood showed significantly higher levels of stress and depressive symptoms. The connection between mobile device use and depression was particularly strong, showing a “moderate effect size”—substantial in behavioral research terms.
The team found that adolescents spent nearly five hours daily on screens, with over two hours on mobile devices alone. Many parents might find these numbers unsurprising, but the mental health correlations deserve attention.
Physical activity told the opposite story. Teens who maintained higher activity levels during childhood, especially in supervised settings like sports or structured exercise programs, showed better mental health outcomes. This protective effect remained significant even after researchers accounted for factors like parental education, body composition, and puberty status.
Gender differences added another dimension to the findings. For boys, physical activity showed stronger protective effects against stress than for girls.
Surprisingly, neither diet quality nor sleep duration showed strong relationships with teen mental health in this study. This doesn’t mean these factors aren’t important for overall health—just that screen time and physical activity may have more direct impacts on adolescent mental wellbeing.
More Screen Time Should Mean More Physical Activity
For parents struggling with screen time battles, this research provides compelling evidence for setting reasonable limits. The findings highlight that mobile device use specifically—more than television or computer time—warrants special attention. With smartphones and tablets becoming increasingly central to education and social connections, creating healthy boundaries becomes more challenging but potentially more important.
The study, published in JAMA Network Open, also emphasizes the value of supervised physical activities. Children who participated in more structured exercise from ages 6-15 showed fewer mental health problems in adolescence. It’s all the more reason schools and community programs aimed at promoting youth mental health should find more ways to get children moving.
Most revealing were the outcomes showing that teenagers with both low physical activity and high screen time had the worst mental health outcomes. This demonstrates that addressing either factor alone might not be as effective as a balanced approach that both limits screen time and increases physical activity.
Creating Healthier Digital Habits for Children
While conducted in Finland, the study’s findings likely apply to children in other developed countries with similar technology access patterns. As smartphone use continues rising globally, understanding its potential psychological impact grows increasingly urgent.
For families navigating the complex digital landscape, this research offers practical guidance: limit screen time (especially on mobile devices), encourage regular physical activity (particularly supervised activities like sports), and remember that these choices may affect not just current behavior but long-term mental health.
Mental health professionals and pediatricians may want to include screen time discussions in their preventive care conversations. Creating balanced digital environments and promoting consistent physical activity within supportive social contexts could become key strategies for protecting youth mental health.
Incorporating technology into children’s lives at younger ages is understandably commonplace these days. But here we have another study showing why childhood habits matter. How we balance screens and physical activity today may shape the psychological landscape our children navigate tomorrow.
Apparently, if you’re a night owl, you’re more prone to developing depression.
Night owls tend to get a bad rep. They’re often told they’re less productive and lazier than early risers, merely because they sleep more during daylight—you know, when the world is expected to be most active.
Now, according to recent research, they’re also apparently more likely to experience depression.
“Depression affects daily functioning and can impact a person’s work and education,” Simon Evans, PhD, a neuroscience lecturer and researcher in the School of Psychology of the University of Surrey in the U.K., told Medical News Today. “It also increases the risk of going on to develop other serious health conditions, including heart disease and stroke, so it’s important for us to study ways to reduce depression.”
Obviously, if there was a simple way to decrease your risk of developing depression, most of us would take it. In this case, that might mean getting to sleep earlier in the night rather than staying up until the early morning hours. However, unfortunately, some of us don’t have the luxury to change our sleeping hours.
Does that mean those who work night shifts or lead lifestyles that require them to be active at night are doomed to be depressed?
The study, published in the journal PLOS One, found that “evening-types had significantly higher levels of depression symptoms, poorer sleep quality, and lower levels of ‘acting with awareness’ and ‘describing,’ as well as higher rumination and alcohol consumption.”
With so many young adults self-identifying as “night owls” (or evening-types, as the study refers to them), it’s concerning to note this negative link between their sleep patterns and mental health.
“A large proportion (around 50%) of young adults are ‘night owls,’ and depression rates among young adults are higher than ever,” said Evans, lead author of the study. “Studying the link is therefore important.”
“More important is the finding that the link between chronotype and depression was fully mediated by certain aspects of mindfulness—‘acting with awareness’ in particular—sleep quality, and alcohol consumption,” Evans continued. “This means that these factors seem to explain why night owls report more depression symptoms.”
Most of us believe relationship endings happen in messy, unpredictable ways—a betrayal discovered, a fight that goes too far, or a slow drift apart. But what if breakups actually follow a mathematical pattern? What if the end of your relationship is as predictable as the phases of the moon?
New research published in the Journal of Personality and Social Psychology reveals exactly that. Scientists have discovered that failing relationships don’t just randomly deteriorate—they follow a specific two-phase decline that can be measured, tracked, and even predicted with surprising accuracy.
Researchers Janina Bühler from Johannes Gutenberg University Mainz and Ulrich Orth from the University of Bern analyzed data from four major longitudinal studies across different countries. They found that couples who eventually break up typically experience a mild decline in happiness for years, followed by a dramatic drop in the final months or years before separation.
The Countdown to Breakup
Scientists call this phenomenon “terminal decline,” borrowing a concept previously used to describe how cognitive abilities and happiness deteriorate before death. The research reveals that our romantic relationships follow similar predictable patterns before they end.
The study found that “time-to-separation was a much better predictor of change than time-since-beginning.” While we often think about relationships in terms of how long couples have been together, this research shows that the time remaining until separation tells us more about relationship health.
Perhaps most fascinating is how differently breakup initiators and recipients experience this decline. People who eventually initiate breakups start becoming dissatisfied much earlier—about a year before the actual split. Meanwhile, their partners often remain relatively happy until just months before the end, when their satisfaction plummets dramatically.
Many people intuitively sense when their relationship is heading downhill. This research confirms these feelings aren’t just subjective impressions—they reflect a scientific trajectory toward separation that looks remarkably similar across cultures, age groups, and relationship types.
Exploring The Phases of Decline
In the study, researchers tracked thousands of couples over time, measuring their relationship satisfaction annually. They compared people who eventually separated with similar people who stayed together.
The pattern emerged consistently across all four datasets. “The decline prior to separation was divided into a preterminal phase, characterized by a smaller decline, and a terminal phase, characterized by a sharp decline,” the authors write. The major shift between these phases—what researchers call the “transition point”—occurred anywhere from 7 months to 2.3 years before the actual breakup, depending on the study.
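As a rough illustration of the two-phase pattern the authors describe, here is a toy Python sketch of satisfaction keyed to time-to-separation. The baseline, slopes, and 12-month transition point are made-up numbers for illustration, not the study’s estimates.

def predicted_satisfaction(months_to_separation, start=48, transition=12,
                           baseline=8.0, slow_slope=0.02, steep_slope=0.2):
    """Toy 'terminal decline' curve: a shallow preterminal slide, then a sharp
    terminal drop once the transition point is crossed. Illustrative values only."""
    months_elapsed = start - months_to_separation
    # preterminal phase: mild decline from the start of tracking up to the transition point
    satisfaction = baseline - slow_slope * min(months_elapsed, start - transition)
    if months_to_separation < transition:
        # terminal phase: satisfaction falls much faster in the final stretch
        satisfaction -= steep_slope * (transition - months_to_separation)
    return satisfaction

for months_left in (48, 36, 24, 12, 6, 0):
    print(months_left, round(predicted_satisfaction(months_left), 2))
# 48 -> 8.0, 36 -> 7.76, 24 -> 7.52, 12 -> 7.28, 6 -> 6.08, 0 -> 4.88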
The researchers also examined whether overall life satisfaction followed the same trajectory. They found that “terminal decline was less visible in life satisfaction than in relationship satisfaction.” This indicates that while people recognize their relationships are deteriorating, they might already be preparing emotionally for life after the relationship.
If most relationships fade according to this pattern rather than ending in a dramatic, sudden event or spat, is there any hope for relationships already in this spiral? In many cases, the relationship is effectively over long before the actual separation occurs—couples are just living through the terminal phase.
For couples therapists and relationship counselors, these findings could transform how they evaluate troubled relationships. By identifying whether a couple is in the early “preterminal” phase versus the steep “terminal” decline, professionals might better determine which relationships can be saved and which have likely passed the point of no return.
Demographic factors influenced these patterns in interesting ways. The researchers found that “age at separation and marital status explained variance in the effect sizes.” Younger adults showed less dramatic terminal declines than older adults, possibly because younger people expect more relationship transitions.
The study also revealed that “individuals who were the recipients of the separation (in contrast to individuals who initiated the separation) entered the terminal phase later but then decreased more strongly.” This explains why breakups often feel so asymmetrical, with one partner seemingly more prepared than the other.
What This Means For Your Relationship
Many of us stay in declining relationships hoping things will improve. The study, unfortunately, indicates there might be a point of no return—a transition into terminal decline—after which recovery becomes highly unlikely.
For those currently in relationships, the findings offer both caution and hope. On one hand, recognizing the signs of terminal decline might help people make more informed decisions about when to seek help or when to move on. On the other hand, understanding that the steepest decline typically happens only after crossing a specific threshold might encourage couples to address problems before reaching that critical transition point.
The researchers frame it this way: “If unsatisfied couple members are still in the preterminal phase and have not yet reached the transition point, efforts to improve the relationship may be more effective, potentially preventing the onset of the terminal phase and the eventual dissolution of the relationship.”
The study also brings some comfort to those blindsided by breakups. If you’ve ever been shocked when a partner suddenly announced they wanted to separate, the science explains why: they likely crossed into terminal decline months or even years before you did. By the time you recognized the severity of the problems, they had already been mentally preparing for the end.
Like many aspects of human behavior, from birth to cognitive development to aging, romantic relationships appear to follow predictable patterns that can be scientifically observed and mapped. The terminal decline of relationship satisfaction isn’t just a feeling—it’s a measurable phenomenon that operates according to consistent rules across different cultures and contexts.
The study’s authors emphasize couples in rocky relationships should seek help before hitting the point of no return. “It is important to be aware of these relationship patterns,” says Bühler, who works as a couple therapist in addition to being a professor. “Initiating measures in the preterminal phase of a relationship, i.e., before it begins to go rapidly downhill, may thus be more effective and even contribute to preserving the relationship.”
A new study shows that young people who consume marijuana are six times more likely to experience a heart attack than their counterparts.
Research published in the Journal of the American College of Cardiology (JACC) documents that people under the age of 50 who consume marijuana are about 6.2 times more likely to experience a myocardial infarction, commonly known as a heart attack, than non-marijuana users. Young marijuana users are also 4.3 times more likely to experience an ischemic stroke and 2 times more likely to experience heart failure, the study shows.
Researchers surveyed over 4.6 million people under the age of 50, of whom about 4.5 million did not use marijuana and roughly 93,000 did. All participants were free of health conditions commonly associated with cardiovascular risks, like hypertension, coronary artery disease, diabetes, and a history of myocardial infarctions. The study also excluded tobacco users to eliminate another potential risk factor.
Ahmed Mahmoud, lead researcher and clinical instructor at Boston University, told USA TODAY that though the numbers appear significant, researchers’ biggest concern right now is studying more data, as research on marijuana’s effects on the cardiovascular system remains limited.
“Until we have more solid data, I advise users to try to somehow put some regulation in the using of cannabis,” Mahmoud said. “We are not sure if it’s totally, 100% safe for your heart by any amount or any duration of exposure.”
How does marijuana affect the heart?
As studies remain inconclusive and few and far between, scientists and doctors are still unclear how marijuana affects the cardiovascular system. But generally, researchers understand that marijuana can make the heart beat faster and raise blood pressure, as reported by the Centers for Disease Control and Prevention.
Mahmoud said researchers believe marijuana may make small defects in the coronary arteries’ lining, the thin layer of cells that forms the inner surface of blood vessels and hollow organs.
“Because cannabis increases the blood pressure and makes the blood run very fast and make some [defects] in the lining to the coronary arteries, this somehow could make a thrombosis (formation of a blood clot) or a temporary thrombosis in these arteries, which makes a cardiac ischemic (stroke) or the heart muscle is not getting enough oxygen to function,” Mahmoud said. “This is what makes the heart injured and this is a myocardial infarction or heart attack.”
Stanton Glantz, a retired professor from the University of California, San Francisco School of Medicine and founder of the Center for Tobacco Control Research and Education, co-authored a study published in the Journal of the American Heart Association last year that also addresses marijuana’s effects on the cardiovascular system.
Glantz told USA TODAY he believes smoking marijuana has the same effects on the cardiovascular system as smoking tobacco.
When smoking a cigarette, the blood that is distributed through the body becomes contaminated with the cigarette smoke’s chemicals, which can damage the heart and blood vessels, the CDC reports. This damage can result in coronary heart disease, hypertension, heart attack, stroke, aneurysms and peripheral artery disease.
Changes in blood chemistry from cigarette smoke can also cause plaque in the body’s arteries, resulting in a disease called atherosclerosis, according to the CDC. When arteries become full of plaque, it’s harder for blood to move throughout the body. This can create blood clots and ultimately lead to a heart attack, stroke or death.
How does the new study correspond with previous research?
The recently published study aligns with previous research in the field.
The Journal of the American Heart Association study, which surveyed more than 434,000 people between the ages of 18 and 74, found that marijuana affects the cardiovascular system. The study also singled out marijuana users who didn’t use tobacco.
The 2024 study found that people who consume − specifically inhale − marijuana are more likely to experience coronary heart disease, myocardial infarction and stroke. There is a “statistically significant increase in risk,” Glantz said.
The main difference between the new study, co-authored by Mahmoud, and the 2024 study, is the populations studied, Glantz said.
The 2024 study analyzed data from the Behavioral Risk Factor Surveillance Survey, a CDC-operated telephone survey that includes responses from across the country. The new study analyzed data from 53 healthcare organizations using the TriNetX health research network.
Why don’t we remember being a baby? (Miramiska/Shutterstock)
Have you ever wondered why you can’t remember being a baby? This blank space in our memory, known as “infantile amnesia,” has puzzled scientists for years. Most of us can’t recall anything before age three or four. Until recently, researchers thought baby brains simply couldn’t form memories yet, that the memory-making part of our brain (the hippocampus) wasn’t developed enough.
But it turns out babies might remember more than we thought. Research just published in the journal Science shows that babies as young as one year old can actually form memories in their hippocampus. The study, led by researchers at various American universities, suggests our earliest memories aren’t missing; we just can’t access them later.
How Do You Study Memory in Babies Who Can’t Talk?
You can’t exactly ask a baby, “Do you remember this?” The researchers came up with a clever solution. They showed 26 babies (ages 4 months to 2 years) pictures of faces, objects, and scenes while scanning their brains. Later, they showed each baby two pictures side by side, one they’d seen before and one new one, and tracked where the babies looked.
“When babies have seen something just once before, we expect them to look at it more when they see it again,” says lead study author Nick Turk-Browne from Yale University, in a statement. “So in this task, if an infant stares at the previously seen image more than the new one next to it, that can be interpreted as the baby recognizing it as familiar.”
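A minimal Python sketch of how that kind of familiarity preference can be scored from looking times, assuming simple per-trial measurements in seconds; the numbers and the 0.5 chance level below are illustrative, not the study’s actual analysis.

def familiarity_preference(seconds_on_old_image, seconds_on_new_image):
    """Fraction of total looking time spent on the previously seen image.
    Values above 0.5 suggest the infant treats the old image as familiar."""
    total = seconds_on_old_image + seconds_on_new_image
    return seconds_on_old_image / total if total > 0 else 0.5

# One hypothetical trial: the baby looks 3.2 s at the familiar picture, 2.1 s at the new one.
print(round(familiarity_preference(3.2, 2.1), 2))  # 0.6 -> looked longer at the familiar image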
Getting babies to lie still in a brain scanner is no small feat. The research team has spent years developing special techniques to make this possible. They made the babies comfortable and only scanned them when they were naturally awake and content.
The Big One-Year Memory Milestone
The brain scans showed that when a baby’s hippocampus was more active while seeing a picture for the first time, they were more likely to stare at that same picture later, showing they may have remembered it.
This ability to remember showed a clear age pattern. Babies younger than 12 months didn’t show consistent memory signals in their brains, but the older babies did. And the specific part of the hippocampus that lit up, the back portion, is the same area adults use for episodic memories.
The researchers had previously discovered that even younger babies (as young as three months) can do a different kind of memory called “statistical learning.” This is basically spotting patterns across experiences rather than remembering specific events.
Providing seniors with an app that boosts brain health solves many accessibility challenges in assisted living facilities. (Pressmaster/Shutterstock)
Let’s face it, we’re all worried about memory loss as we age. But what if the same device you use for calling grandkids could actually strengthen your mind? A new study revealed that a smartphone app improved thinking abilities in older adults living in assisted living facilities.
Residents of assisted living often feel isolated and might not have easy access to specialized brain health services. That’s why an app would make perfect sense, giving residents access to brain training at their fingertips. Scientists from the University of Utah, Texas A&M, and a company called Silvia Health tested an app called the “Silvia Program” with older folks in assisted living.
Research published in Public Health in Practice shows the promising potential of this app’s capabilities for fighting cognitive decline. Instead of just including memory games like many brain apps, this one took a kitchen-sink approach, mixing brain training with exercise routines, food tracking, and other lifestyle stuff all in one app.
While seniors who didn’t use the app actually lost some brain function over the 12 weeks (yikes), the app users saw their scores improve. That’s kind of a big deal for anyone with parents or grandparents in assisted living who worry about their mental sharpness.
The idea behind the app’s design is actually pretty simple. Instead of just doing one thing for your brain, it mixes several approaches together. It’s cross-training for your mind instead of only doing one exercise.
Earlier studies already showed this mix-and-match approach helps fight memory loss. But getting regular in-person brain training can be tough, especially if you live in a facility with limited transportation options. That’s why putting these tools on a smartphone could be such a great approach. It brings brain health right to where seniors already are.
The Silvia Program isn’t your run-of-the-mill brain games app. It bundles five different tools:
Daily goals to keep you motivated
Brain exercises targeting different thinking skills
Trackers for food/exercise/sleep habits
Workout routines you can do sitting in your living room
A talking AI that tests your thinking and adjusts the difficulty
The app also provides personalized coaching with a clinical psychologist, along with cognitive exercises, tailored activity suggestions, and a voice analysis tool capable of identifying early signs of dementia. It engages in interactive conversations to assess the user’s needs and adjusts its functions accordingly.
The Science Behind Silvia
For the study, the researchers recruited 20 folks living in an assisted living facility in Indiana who were experiencing mild cognitive impairment but didn’t have dementia or serious depression. They split them into two groups of 10. One group used the Silvia app for about an hour twice a week for three months. The other group just kept doing whatever they normally did.
They used a test called the MoCA to measure brain function. Doctors use this test to check for early signs of dementia.
Now, 20 people isn’t exactly huge for a study, but what they found still raised some eyebrows. The app seemed to help with visual thinking, language, memory recall, and knowing the time and place.
Why does this matter? Many people in assisted living start feeling cut off from the world after moving in. They might not see family as often, can’t always get to brain health specialists, and sometimes feel like they’re just waiting around. That’s exactly when memory tends to nosedive.
Two things make this app approach especially practical. First, it’s right there on a device many seniors already use. Second, it adapts to each person. You can dial the brain games up or down in difficulty so they’re not too easy or impossibly hard. The exercise instructions show pictures of each move, so you’re not left wondering what “lateral arm raise” means. The chatty AI keeps tabs on how you’re doing, then adjusts everything accordingly, like having a personal trainer for your brain who lives in your pocket.
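The study doesn’t spell out the app’s adaptation logic, but a common way to keep tasks “not too easy, not impossibly hard” is a simple staircase rule. The Python sketch below is a generic illustration of that idea, not Silvia’s actual algorithm.

def adjust_level(level, was_correct, min_level=1, max_level=10):
    """Generic staircase: step the difficulty up after a correct answer,
    down after a miss, staying within the allowed range."""
    if was_correct:
        return min(level + 1, max_level)
    return max(level - 1, min_level)

level = 5
for answer in (True, True, False, True):  # a made-up run of responses
    level = adjust_level(level, answer)
print(level)  # 7: up, up, down, up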
That said, there are limitations to consider. As noted, the study sample was small, and it only ran for 12 weeks. We have no idea if the brain boosts last longer than that or if they’d show up in different groups of people. Most participants were white women, which doesn’t tell us how the app might work for men or people from different backgrounds. Oddly enough, the app users had more years of education than the non-users, which might have affected the results.
What This Means for Aging and Memory Care
With baby boomers hitting their 70s and 80s, we’re staring down a tsunami of potential memory problems. The old-school fix? Regular visits with specialists, which means transportation hassles, scheduling headaches, and hefty bills. Phone apps skip all that. You just tap and train whenever it’s convenient.
Still, this isn’t the first hint that digital tools might help aging brains. Other studies have already shown that brain games and regular exercise each help slow mental decline. This research suggests bundling them together in one easy-to-use app might pack an even bigger punch.
Nursing homes and assisted living centers should also take note. Their staff is always stretched thin. Apps that residents can use independently might supplement care without breaking budgets or requiring extra personnel. One iPad and a handful of good apps could potentially benefit dozens of residents.
Phones and tablets often get a bad rap for making us dumber, shortening attention spans, and replacing memory with Google searches. But this study flips that narrative. The same devices blamed for digital brain drain might actually build brain power when loaded with the right software.
It’s the middle of the afternoon: your eyelids are heavy and your focus is slipping. You close your eyes for half an hour and wake up feeling recharged. But later that night, you’re tossing and turning in bed, wondering why you can’t drift off. That midday snooze, so refreshing at the time, might be the reason.
Naps have long been praised as a tool for boosting alertness, enhancing mood, strengthening memory, and improving productivity. Yet for some, they can sabotage nighttime sleep.
Napping is a double-edged sword. Done right, it’s a powerful way to recharge the brain, improve concentration, and support mental and physical health. Done wrong, it can leave you groggy, disoriented, and struggling to fall asleep later. The key lies in understanding how the body regulates sleep and wakefulness.
Most people experience a natural dip in alertness in the early afternoon, typically between 1 p.m. and 4 p.m. This isn’t just due to a heavy lunch – our internal body clock, or circadian rhythm, creates cycles of wakefulness and tiredness throughout the day. The early afternoon lull is part of this rhythm, which is why so many people feel drowsy at that time.
Studies suggest that a short nap during this period – ideally followed by bright light exposure – can help counteract fatigue, boost alertness, and improve cognitive function without interfering with nighttime sleep. These “power naps” allow the brain to rest without slipping into deep sleep, making it easier to wake up feeling refreshed.
But there’s a catch: napping too long may result in waking up feeling worse than before. This is due to “sleep inertia” – the grogginess and disorientation that comes from waking up during deeper sleep stages.
Once a nap extends beyond 30 minutes, the brain transitions into slow-wave sleep, making it much harder to wake up. Studies show that waking from deep sleep can leave people feeling sluggish for up to an hour. This can have serious implications if they then try to perform safety-critical tasks, make important decisions, or operate machinery, for example. And if a nap is taken too late in the day, it can eat away at the “sleep pressure build-up” – the body’s natural drive for sleep – making it harder to fall asleep at night.
When napping is essential
For some, napping is essential. Shift workers often struggle with fragmented sleep due to irregular schedules, and a well-timed nap before a night shift can boost alertness and reduce the risk of errors and accidents. Similarly, people who regularly struggle to get enough sleep at night – whether due to work, parenting or other demands – may benefit from naps to bank extra hours of sleep that compensate for their sleep loss.
Nonetheless, relying on naps instead of improving nighttime sleep is a short-term fix rather than a sustainable solution. People with chronic insomnia are often advised to avoid naps entirely, as daytime sleep can weaken their drive to sleep at night.
Certain groups use strategic napping as a performance-enhancing tool. Athletes incorporate napping into their training schedules to speed up muscle recovery and improve sports-related parameters such as reaction times and endurance. Research also suggests that people in high-focus jobs, such as healthcare workers and flight crews, benefit from brief planned naps to maintain concentration and reduce fatigue-related mistakes. NASA has found that a 26-minute nap can improve the performance of long-haul flight crews by 34% and their alertness by 54%.
How to nap well
To nap effectively, timing and environment matter. Keeping naps between 10 and 20 minutes prevents grogginess. The ideal time is before 2 p.m. – napping too late can push back the body’s natural sleep schedule.
The best naps happen in a cool, dark, and quiet environment, similar to nighttime sleep conditions. Eye masks and noise-canceling headphones can help, particularly for those who nap in bright or noisy settings.
Despite the benefits, napping isn’t for everyone. Age, lifestyle and underlying sleep patterns all influence whether naps help or hinder. A good nap is all about strategy – knowing when, how, and if one should nap at all.
For some it’s a life hack, improving focus and energy. For others, it’s a slippery slope into sleep disruption. The key is to experiment and observe how naps affect your overall sleep quality.
Social media adds a whole new layer of stress to teen friendships. (SpeedKingz/Shutterstock)
Today’s teens face a challenge that their parents never did: the pressure to be constantly available to their friends online. New research from the University of Padua in Italy reveals how this digital pressure is creating stress that leads to real-world friendship conflicts for teenagers.
The study, published in Frontiers in Digital Health, tracked 1,185 teenagers over six months to understand how social media affects their friendships. What they found paints a concerning picture of modern teen relationships.
When Friends Don’t Text Back
“We show that adolescents’ perceptions of social media norms and perceptions of unique features of social media contribute to digital stress, which in turn increases friendship conflicts,” says lead study author Federica Angelini from the University of Padua, in a statement.
The researchers identified two main types of digital stress that teens experience. The first, entrapment, refers to the pressure teens feel to always be available and responsive to their friends online. The second, disappointment, arises when friends don’t respond as quickly or as often as expected, leading to negative feelings. Both types of stress play significant roles in the challenges teens face in their digital friendships.
Surprisingly, it’s not the pressure to be available that causes most problems; it’s the disappointment when friends aren’t available to them.
“Disappointment from unmet expectations on social media—such as when friends do not respond or engage as expected—is a stronger predictor of friendship conflict than the pressure to be constantly available,” explains Angelini.
In other words, teens aren’t fighting because they feel burdened by needing to respond to every message, but because they feel upset when their friends don’t respond to them.
The Problem with Pictures and Videos
When examining different features of social media, the researchers found that the visual nature of content (photos, videos, stories) was most connected to creating disappointment and conflict.
“Visual content makes it easier for teens to see what their friends are doing at any given time. If teens notice that their friends are active online or spend time with others while ignoring their messages, they may feel excluded, jealous, or rejected,” Angelini explained.
We’ve all had that moment: seeing a friend post a fun story while they still haven’t answered the message you sent hours ago. For teens, these visual cues can trigger strong emotional responses that lead to real-world arguments.
The good news is that parents and educators can help teens develop healthier social media habits. Teaching teens strategies to protect their mental health online is crucial as parents navigate the uncharted territory of raising a generation growing up with social media.
“One such habit for teenagers could be setting boundaries, for example scheduling ‘offline’ times or managing notifications. When done in discussion with friends this can also help reduce misunderstandings,” says Angelini.
The researchers also recommend helping teens understand that not every message needs an immediate response. Learning this can reduce stress while maintaining healthy friendships.
Boys and girls experience these pressures slightly differently. Boys who perceived high expectations of availability on social media actually reported feeling less entrapment than girls did, possibly because expectations around response times differ between friend groups.
The study followed the same teens over six months, which allowed researchers to see how digital stress actually caused more conflicts over time, rather than just being connected to them.
Understanding these pressures is key to helping teens build healthy, sustainable friendships in the digital age. By recognizing the emotional impact of unmet digital expectations, parents and educators can guide teenagers toward more balanced social connections both online and offline.
Ever been annoyed by someone else’s music in a shared space? Or struggled to have a private conversation in a busy office? Researchers at Penn State University might have just solved these everyday acoustic headaches with a breakthrough that creates “sound bubbles” only the intended listener can hear.
These localized audio spots, which the researchers dubbed “audible enclaves,” can be placed with pinpoint accuracy—even behind obstacles like human heads—while remaining silent to everyone else in the room.
“We essentially created a virtual headset,” said Jia-Xin “Jay” Zhong, a postdoctoral scholar in acoustics at Penn State. “Someone within an audible enclave can hear something meant only for them — enabling sound and quiet zones.”
How Audible Enclaves Work
Published in the Proceedings of the National Academy of Sciences, the research tackles a challenge in acoustics that has long frustrated audio engineers. Sound waves naturally spread out as they travel, making it nearly impossible to contain them without physical barriers. This is why conversations carry across rooms and why traditional speakers fill entire spaces with sound.
“We use two ultrasound transducers paired with an acoustic metasurface, which emit self-bending beams that intersect at a certain point,” said corresponding author Yun Jing, professor of acoustics in the Penn State College of Engineering. “The person standing at that point can hear sound, while anyone standing nearby would not. This creates a privacy barrier between people for private listening.”
The system works by sending out two beams of ultrasonic sound—frequencies too high for humans to hear—that travel along curved paths and meet at a specific target location. Using 3D-printed structures called metasurfaces, they shape these ultrasonic beams to bend around obstacles like a person’s head.
By positioning the metasurfaces in front of the two transducers, the ultrasonic waves travel at two slightly different frequencies along a crescent-shaped trajectory until they intersect. The metasurfaces were 3D printed by co-author Xiaoxing Xia, staff scientist at Lawrence Livermore National Laboratory.
Neither beam is audible on its own—it is the intersection of the beams together that creates a local nonlinear interaction, which generates audible sound. The beams can bypass obstacles, such as human heads, to reach a designated point of intersection.
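To make that difference-frequency idea concrete, here is a minimal numerical sketch. The carrier frequencies below are illustrative assumptions, not values reported by the researchers: two ultrasonic tones 500 Hz apart are inaudible on their own, but a simple quadratic nonlinearity (a stand-in for the nonlinear interaction in air) produces a component at their 500 Hz difference, squarely in the audible range.

```python
import numpy as np

# Illustrative sketch of difference-frequency generation.
# The carrier frequencies are assumptions for demonstration, not values from the study.
fs = 200_000                  # sample rate in Hz, high enough to represent ultrasound
t = np.arange(0, 0.05, 1 / fs)

f1, f2 = 40_000.0, 40_500.0   # two ultrasonic carriers (inaudible on their own)
beam1 = np.sin(2 * np.pi * f1 * t)
beam2 = np.sin(2 * np.pi * f2 * t)

# A quadratic nonlinearity produces sum and difference frequencies:
# here roughly 80.5 kHz (still inaudible) and 500 Hz (audible).
mixed = (beam1 + beam2) ** 2

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)

# The strongest component below 20 kHz sits at the 500 Hz difference tone.
audible = freqs < 20_000
peak = freqs[audible][np.argmax(spectrum[audible][1:]) + 1]  # skip the DC bin
print(f"dominant audible component: {peak:.0f} Hz")
```

In the real system, that nonlinear mixing happens only where the two beams overlap, which is what confines the audible sound to the enclave.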
Breaking Sound Barriers
Most audio technologies work within narrow frequency ranges, but this system demonstrated effectiveness across an impressive spectrum from 125 Hz to 4 kHz. This range covers most frequencies needed for speech and music reproduction, making it practical for real-world applications.
The approach differs fundamentally from existing directional sound technologies. Previous attempts to create focused audio have required massive speaker arrays and complex processing, especially for lower frequencies with longer wavelengths. Commercial “sound beam” products exist but can’t bend around obstacles or create such sharply defined listening spots.
Perhaps most impressive is the system’s compact size. The researchers achieved their results using a source aperture measuring just 0.16 meters—tiny compared to conventional approaches that would require much larger equipment to direct low-frequency sounds.
To verify the technology works with actual content rather than just test tones, the team conducted rigorous testing. “We used a simulated head and torso dummy with microphones inside its ears to mimic what a human being hears at points along the ultrasonic beam trajectory, as well as a third microphone to scan the area of intersection,” said Zhong. “We confirmed that sound was not audible except at the point of intersection, which creates what we call an enclave.”
The researchers tested the system in a common room with normal reverberations, meaning it could work in various environments like classrooms, vehicles, or even outdoors.
Where Will We See Audible Enclaves?
This technology opens up fascinating possibilities. Museums could deliver exhibit narration to visitors in specific spots without creating audio overlap. Office workers could receive private notifications without disrupting colleagues. Cars could create individual sound zones for each passenger, letting the driver hear navigation instructions while rear passengers enjoy different music.
The applications extend beyond convenience. The same approach could create targeted quiet zones by delivering precisely placed noise-cancellation signals. Hospitals could maintain quiet areas while allowing necessary communication in adjacent spaces—something traditional noise control systems struggle to accomplish.
For now, researchers can remotely transfer sound about a meter away from the intended target, and the sound volume is about 60 decibels, equivalent to speaking volume. However, the researchers said that both distance and volume could be increased by raising the ultrasound intensity.
The current system requires high-intensity ultrasound to produce moderate audio levels due to conversion inefficiency. While the levels used fall within safety guidelines, this aspect needs further refinement.
Audio quality presents another hurdle. The interaction introduces some distortion, which could affect complex audio content. However, the team believes signal processing techniques could compensate for these effects in future versions.
Audible enclaves certainly offer a compelling and exciting solution to a long-standing problem, creating bubbles of sound that exist only where wanted and nowhere else. By focusing sound with laser-like precision, this technology could transform our relationship with audio in shared spaces, making private listening truly private without isolating listeners from their surroundings.
In the sales industry, “fake it till you make it” isn’t just a saying; it’s often a job requirement. Behind those seemingly genuine smiles and enthusiastic pitches, salespeople are performing complex emotional gymnastics that researchers call emotional labor. According to new international research, this emotional performance is seriously impacting employee mental health and job satisfaction.
A recent study published in Industrial Marketing Management explores how salespeople’s moral character influences how they manage their emotions at work and how this ultimately affects their well-being. Poor employee well-being costs U.S. companies an estimated $500 billion and results in 550 million lost workdays annually, so this is a big deal for both businesses and individuals.
Reports show that about 63% of salespeople struggle with mental health issues, and sales jobs are known for their intense pressure. This has only gotten worse since the pandemic, with salespeople facing new challenges and changing customer expectations.
“We are all under a lot of pressure, a lot of deadlines at work, right?” says study co-author Khashayar Afshar Bakeshloo (Kash) from the University of Mississippi, in a statement. “We wanted to look at the different factors that threaten employees’ mental health and lead to emotional exhaustion. One such factor that is very interesting to us was emotional labor.”
The Hidden Cost of Putting on a Happy Face
Emotional labor is the work of managing one’s emotions to meet job requirements. It comes in two main forms: surface acting and deep acting.
Surface acting is basically putting on a mask and showing emotions you don’t actually feel, like forcing a smile during a tough customer meeting. Deep acting goes further, where you actually try to generate the required emotions internally, like really trying to feel excited about a product you’re selling.
The researchers wanted to know how a salesperson’s moral character affects which approach they use, and how these approaches impact both customer behavior and the salesperson’s well-being.
They surveyed 313 B2B salespeople across various industries in the United States, representing different company sizes and offering various products and services. Most people in the study (72.5%) were men, which is typical in B2B sales.
When Values and Job Requirements Collide
Salespeople who deeply value moral traits as part of who they are (what researchers call “moral identity internalization”) are more likely to try genuinely feeling the emotions their job requires, rather than just faking them.
On the other hand, salespeople who focus more on publicly showing their morality (called “moral identity symbolization”) tend to use both approaches depending on the situation—sometimes genuinely trying to feel the emotions, other times just putting on a show.
Customers can often tell when a salesperson is being fake, and they frequently respond by treating the salesperson poorly or disrespectfully. This negative customer behavior then makes salespeople less satisfied with their jobs, creating a harmful cycle.
“Managing emotions to meet job demands can lead to exhaustion, dissatisfaction, and negative customer reactions,” says study co-author Omar Itani from Lebanese American University. “Job satisfaction is essential for overall well-being, emphasizing the need for supportive workplace cultures.”
In sales roles, where rejection is common, the pressure to perform can lead to significant emotional strain. More than 70% of people working in sales reported struggling with mental health in the 2024 State of Mental Health in Sales report.
“Salespeople are expensive employees,” explains Afshar. “They bring in money for the organization. So, if they miss an opportunity, it means that there’s no money coming in. When a salesperson burns out, it’s not just a loss of the person, but it’s also everything they bring to the company.”
Creating Healthier Work Environments
So, what can employees and employers do? Aligning personal values with job expectations can help salespeople manage emotional labor more effectively. Those in roles that require frequent emotional acting should consider workplaces that support authenticity, mental health resources, and ethical leadership to reduce burnout. Sales managers can work to foster environments like these.
“Communication is the key here,” adds Afshar. “When employees can communicate their problems, they aren’t dealing with problems alone. When they feel safe talking to their managers, their colleagues, it tends to remove some of that burden.”
Using social media with more intention can help to protect your mental health. (PeopleImages.com – Yuri A/Shutterstock)
Every few months, another headline warns us about social media’s toxic effects on mental health, followed by calls to digital detox. Yet for many of us, completely unplugging isn’t super realistic. Now, new research from the University of British Columbia suggests we might not have to choose between staying connected and staying mentally healthy; there’s a middle path that could deliver the best of both worlds.
The study, published in the Journal of Experimental Psychology: General, challenges the popular belief that we must cut back on social media to protect our mental health. Instead, learning to use social media differently, focusing on meaningful connections rather than mindless scrolling or comparing ourselves to others, might be just as helpful for our emotional well-being.
“There’s a lot of talk about how damaging social media can be, but our team wanted to see if this was really the full picture or if the way people engage with social media might make a difference,” says lead study author Amori Mikami, a psychology professor from the University of British Columbia, in a statement.
The Love-Hate Relationship With Social Media
For most young adults, social media is a mixed bag. On one hand, platforms like Instagram and Facebook make it easy to stay in touch with friends, find communities of like-minded people, and get emotional support when needed. On the other hand, these same platforms can increase anxiety, depression, and loneliness when we find ourselves constantly comparing our regular lives to others’ highlight reels or feeling like we’re missing out on what everyone else is doing.
The research team recruited 393 social media users between the ages of 17 and 29 who reported some negative impacts from social media and had some symptoms of mental health concerns. They split these participants into three groups:
A tutorial group that learned healthier ways to use social media
An abstinence group that was asked to stop using social media entirely
A control group that continued their usual social media habits
Over six weeks, researchers tracked participants’ social media use with phone screen time apps and self-reports. They also measured various aspects of mental well-being, including loneliness, anxiety, depression, and fear of missing out (FOMO).
Two Different Paths to Better Mental Health
As you might expect, people in the abstinence group drastically reduced their time on social media. But, the tutorial group also cut back on their social media use compared to the control group, even though they were never specifically told to do so. Just becoming more mindful about social media naturally led them to be more selective about their usage.
Both the tutorial and abstinence groups made fewer social comparisons and did less passive scrolling. While the abstinence group showed the biggest changes, the tutorial group also improved significantly compared to the control group.
When it came to mental health benefits, each approach seemed to help with different things. The tutorial approach was especially good at reducing FOMO and feelings of loneliness. The abstinence approach, meanwhile, was particularly effective at lowering symptoms of depression and anxiety but did not improve loneliness, possibly due to reduced social connections.
“Cutting off social media might reduce some of the pressures young adults feel around presenting a curated image of themselves online. But stopping social media might also deprive young adults of social connections with friends and family, leading to feelings of isolation,” explains Mikami.
Creating a Healthier Social Media Experience
The tutorial approach taught participants how to use social media in ways that boost genuine connection while reducing the stress of constant comparison. Participants learned to:
Reflect on when social media made them feel good versus bad
Recognize that most posts are carefully curated and don’t reflect real life
Unfollow or mute accounts that triggered negative feelings about themselves
Actively engage with friends through comments or messages instead of just passively scrolling
Completely stopping social media reduced activity on friends’ pages, which actually predicted greater loneliness. It seems that commenting on friends’ content provides a valuable social connection. However, reducing engagement with celebrity or influencer content predicted lower loneliness and fewer symptoms of depression and anxiety—showing that not all social media activity affects us the same way.
“Social media is here to stay,” says Mikami. “And for many people, quitting isn’t a realistic option. But with the right guidance, young adults can curate a more positive experience, using social media to support their mental health instead of detracting from it.”
Mikami believes these findings could help develop mental health programs and school workshops where young people learn to use social media as a tool for strengthening relationships rather than as a source of stress and comparison.
Drinking in the sun can make you unaware that you are getting sunburnt. (STEKLO/Shutterstock)
BOCA RATON, Fla. — When was the last time you got a sunburn? If you’re like nearly a third of American adults who were toasted by the sun at least once last year, you might want to pay attention to a revealing new study about skin cancer risk. Researchers from Florida Atlantic University have found some eye-opening patterns in how Americans think about cancer risk and protect their skin—or don’t.
Your beach cocktail might be making your sunburn worse. Research published in the American Journal of Lifestyle Medicine reveals that more than one in five people who got sunburned were drinking alcohol at the time. In other words, there seems to be a real connection between having drinks and getting burned.
The Skin Cancer Problem You Need to Know About
Skin cancer tops the charts as America’s most common cancer. Millions of cases are diagnosed every year, costing the healthcare system nearly $9 billion annually. While most of us have heard of melanoma (the deadliest type), basal cell carcinoma and squamous cell carcinoma are actually more common.
Despite how common skin cancer is, the study found most Americans aren’t particularly worried about getting it. Only about 10% of people said they were “extremely worried,” while most were just “somewhat” (28.3%) or “slightly” (27.3%) concerned.
Sunburns significantly raise your cancer risk. According to dermatologists, getting just five blistering sunburns between ages 15 and 20 increases your melanoma risk by a whopping 80%. That’s a massive jump from something many people experience regularly.
Who Gets Burned? The Surprising Patterns
The research team surveyed over 6,000 American adults about their sun habits and sunburn experiences. Rich people get more sunburns. Yes, you read that correctly. People earning $200,000+ per year were four times more likely to report sunburns than those in the lowest income bracket. This completely flips what you might expect: wouldn’t wealthier people be more informed and have better access to sun protection?
Education doesn’t help either. College graduates and those with advanced degrees reported more sunburns than people with a high school diploma or less.
Other patterns:
Young adults (18-39) burn more often than older folks
Men get more sunburns than women
White Americans report more sunburns than Black or Hispanic Americans
“While Hispanics and Black Americans generally report lower rates of sunburn, Hispanics often perceive greater benefits of UV exposure, which increases their risk,” says study author Lea Sacca, in a statement.
Why might wealthy, educated people get more sunburns? They probably spend more time on outdoor vacations or leisure activities. Think about it: boating, skiing, beach vacations, and outdoor sports are all activities more accessible to those with higher incomes and more flexible work schedules.
Most working Americans have already spent more than half their paycheck before they even get it. This financial balancing act, revealed in a recent survey, shows how millions of workers may be finding themselves counting money they haven’t yet received just to keep up with basic expenses.
A survey of 2,000 employed Americans making less than $75,000 annually shows what happens to the modern paycheck—where it goes, how fast it disappears, and how many people need to plan carefully just to make it through each month.
The poll, conducted by Talker Research and commissioned by EarnIn, found that 59% of Americans map out which bills to pay first while waiting for payday, with 51% of their money already earmarked before it hits their account. This happens mainly because living costs don’t match what people earn (44%) and bill due dates are scattered throughout the month (31%).
Past-due bills are another big reason people count their chickens before they hatch, making up 38% of pre-spent funds. Only 40% of those surveyed keep up with all their bills, while 55% typically juggle between one and four overdue bills every month.
When payday finally arrives, people know exactly where the money needs to go. Housing costs like rent or mortgage payments come first for 56% of respondents, then necessities like food and medicine (51%). Utility bills follow at 38%, with catching up on overdue bills in fourth place at 29%.
Three Days to Empty
The money that does arrive disappears quickly. Americans spend about 43% of their paycheck within just three days of getting it. When you add this to the 51% that’s already spoken for before arrival, very little remains for the rest of the pay period.
This quick drain creates a cycle of stress that most Americans find themselves stuck in. Only 20% of respondents said they don’t run out of money or need to tighten their belt before their next check comes—meaning 80% feel the squeeze as payday approaches.
For those caught short at the end of each pay cycle, the effects hit home: 62% struggle to buy groceries, 30% have trouble paying major bills, another 30% can’t cover smaller bills, and 16% find it hard to afford medicine and make loan payments.
Budget Advice vs. Real Life
The survey compared Americans’ actual spending with the popular 50/30/20 budget rule—which suggests putting 50% toward needs, 30% toward wants, and 20% into savings. The results show the gap between this advice and what people actually face.
On average, respondents put 64% of their money toward basic needs like food, bills, and housing—far more than the recommended 50%. Meanwhile, “wants” or personal spending gets just 16% of their income, and savings also account for only 16% of the average paycheck.
The savings picture looks even worse on closer inspection. More than half (56%) of those surveyed said less than 10% of their money goes into savings, while 23% couldn’t remember when they last saved 20% as the budget rule suggests.
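For a concrete sense of that gap, here is a small sketch comparing the two splits on a hypothetical $3,000 monthly take-home paycheck. The dollar figure is an assumption for illustration only; the percentages are the 50/30/20 rule and the survey averages cited above.

```python
# Compare the 50/30/20 rule with the survey's average split on a hypothetical paycheck.
# The $3,000 take-home figure is illustrative only; the percentages come from the text above.
take_home = 3_000

rule_split = {"needs": 0.50, "wants": 0.30, "savings": 0.20}
survey_split = {"needs": 0.64, "wants": 0.16, "savings": 0.16}

for category in rule_split:
    rule_dollars = take_home * rule_split[category]
    reported_dollars = take_home * survey_split[category]
    print(f"{category:>8}: rule ${rule_dollars:,.0f} vs. reported ${reported_dollars:,.0f}")
```

On that hypothetical paycheck, needs run about $420 over the rule’s budget each month, while savings fall about $120 short.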
When money runs low before the next check arrives, Americans use various tactics to get by. Nearly 39% pick up side hustles for extra cash, while 31% ask family for help and 28% turn to credit cards.
Worryingly, 14% of respondents said they have nowhere to turn when they need more money—showing a group of people living with extreme money troubles and no safety net.
Banking on help
Banks, which might seem like obvious helpers in this situation, offer few solutions. Only 5% of respondents can get their paycheck early through their bank, and even fewer (4%) can access early pay through their job.
“In today’s world, employees shouldn’t have to wait days to access the money they’ve already earned,” said an EarnIn spokesperson. “People deserve financial solutions that provide faster access to their pay—regardless of where they bank—so they can manage their money on their own terms, not their bank’s schedule.”
Despite limited help from banks, Americans stay loyal to them for years. The average person has used the same bank for nine years, with 14% reporting relationships lasting between 19 and 20 years.
This loyalty seems based more on habit than benefits. More than half (57%) stay with their bank simply because it feels familiar. Only 20% said they stay because their bank lets them get their money sooner.
Getting out of the paycheck-to-paycheck life
The survey asked how getting paychecks a bit earlier might ease financial pressure. If Americans could get paid up to two days earlier than usual, 34% said they could pay bills on time, and 29% thought they would worry less about money.
Additionally, 19% said earlier access would help them pay rent on time, while 15% could save more. Overall, 56% felt that getting their paycheck up to two days earlier would make them feel more secure about their finances.
For many, the standard two-week or monthly pay cycle creates roadblocks to financial stability, forcing even careful people to make tough choices about which necessities get paid first. This mismatch between when money is earned and when bills come due adds to financial worry.
The gap between budget advice and real spending patterns further shows the money pressures facing working Americans. When nearly two-thirds of income must cover just the basics, building savings becomes much harder.
The findings also raise questions about how employers and banks might either help reduce or accidentally increase these pressures. With so few workers able to access early pay options, there’s room for new approaches in payroll and banking that better fit people’s actual financial lives.
Financial advice often focuses on budgeting skills and personal habits, but this survey suggests that timing issues like pay frequency and bill due dates matter just as much. Solutions that fix these broader issues may work better than putting all the burden on individual choices.
“I’m completely burned out”—once a phrase associated with decades of career advancement and family responsibilities—is now commonly heard from professionals in their twenties. According to a new survey, 25% of Americans experience burnout before age 30, challenging traditional assumptions about when life’s pressures reach their peak and raising important questions about how modern stressors affect different generations.
The poll of 2,000 adults from Talker Research examined how the cumulative stress of the past decade has affected Americans across generations. While the average American experiences peak burnout at approximately 42 years old, the picture looks dramatically different for younger adults. Gen Z and millennial respondents, currently aged 18 to 44, reported reaching their highest point of stress at an average age of just 25—a finding that suggests fundamental changes in how modern life impacts mental well-being across age groups.
The finding that a quarter of Americans experience burnout before age 30 represents a significant shift from traditional life course expectations. Historically, peak stress periods were often associated with mid-life challenges such as simultaneously managing career advancement, child-rearing, and caring for aging parents. The early burnout phenomenon suggests that younger generations may be facing an accelerated or compressed experience of life stressors.
The state of American stress
Currently, the average person reports operating at half their stress capacity—already a concerning level for overall well-being. Even more troubling, 42% of respondents indicated feeling even more stressed than this baseline, with a notable generational divide emerging in the data. Gen Z and millennial participants reported significantly higher current stress levels (51%) compared to their Gen X and older counterparts (37%).
Ehab Youssef, a licensed clinical psychologist, mental health researcher and writer at Mentalyc, provided insight into why stress is peaking earlier than ever.
“As a psychologist, I’ve worked with clients across different generations, and I can tell you stress doesn’t look the same for everyone,” Youssef told Talker Research. “It’s fascinating — and a little concerning — to see how younger Americans are experiencing peak stress earlier than ever before. I see it in my practice all the time: twenty-somethings already feeling completely burned out, something I never used to see at that age.
“I often hear from my younger clients, ‘Why does life feel so overwhelming already?’ They’re not just talking about work stress; they’re feeling pressure from every direction — career, finances, relationships, even social media expectations. Compare this to my older clients, who often describe their peak stress happening later in life — maybe in their 40s or 50s, when financial or family responsibilities became heavier. The shift is real, and it’s taking a toll.”
The primary drivers of burnout
When asked to identify the primary causes of their burnout, financial concerns topped the list, with 30% of respondents ranking money matters as their number one stressor. This was followed closely by politics (26%), work-related pressures (25%), and physical health concerns (23%).
The data reveals interesting generational differences in what’s causing the most stress. For younger Americans (Gen Z and millennials), work is the greatest source of stress (33%), followed by finances (27%) and mental health (24%). In contrast, older generations (Gen X, baby boomers, and the silent generation) identified politics as their most significant concern (27%), with physical health following as a close second (24%).
Relationships of all kinds are also contributing significantly to American stress levels. One in six respondents who identified either their love life or family relationships as stressors ranked these areas as their top source of burnout (18% each).
Tinshemet Cave during the excavations. (Credit: Yossi Zaidner)
In a limestone cave in Israel, archaeologists have uncovered evidence of what might be the oldest case of cultural sharing between different human species. The discovery reveals that around 100,000 years ago, early Homo sapiens and their Neanderthal-like neighbors weren’t just occasionally bumping into each other—they were participating in a shared cultural world, complete with identical toolmaking traditions, hunting practices, and even burial rituals. This finding turns the traditional story of human evolution on its head, suggesting that cultural exchange between different human species was the rule, not the exception, in our ancient past.
The findings at Tinshemet Cave, published in Nature Human Behaviour, provide a rare glimpse into a pivotal period when multiple human species coexisted in the Middle East. The site has yielded fully articulated human skeletons carefully positioned in burial positions, thousands of ochre fragments transported from distant sources, stone tools made with consistent manufacturing techniques, and animal bones that reveal specific hunting preferences—all dating to what scientists call the mid-Middle Paleolithic period (130,000-80,000 years ago).
“Our data show that human connections and population interactions have been fundamental in driving cultural and technological innovations throughout history,” says lead researcher Prof. Yossi Zaidner of the Hebrew University in Jerusalem, in a statement.
The discovery is especially significant because the Levant region (modern-day Israel, Lebanon, Syria, and Jordan) served as a crossroads where different human populations met. Previous discoveries in the region had uncovered fossils with mixed physical characteristics, suggesting that interbreeding occurred between Homo sapiens migrating out of Africa and local Neanderthal-like populations.
What makes the Tinshemet Cave findings transformative is that they demonstrate these different-looking humans weren’t just meeting and mating—they were sharing their unique cultural behaviors and traditions across population boundaries.
Located just 10 kilometers from another significant archaeological site called Nesher Ramla (where Neanderthal-like fossils were previously discovered), Tinshemet Cave preserves evidence of sustained human occupation over thousands of years. The research team excavated multiple layers of sediments inside the cave and on its terrace, uncovering a wealth of artifacts that tell a cohesive story of sophisticated human activity.
Among the most striking discoveries are the human burials. The excavations revealed at least five individuals, including two complete articulated skeletons—one adult and one child. The bodies were deliberately placed in a fetal position on their sides with bent limbs, a burial position remarkably similar to contemporaneous burials found at other Middle Paleolithic sites in the region, including the famous Qafzeh and Skhul caves.
These burials are among the earliest known examples of intentional human burial anywhere in the world, predating similar practices in Europe and Africa by tens of thousands of years. More importantly, they show that diverse human populations were treating their dead with similar ceremonial care, suggesting shared symbolic behaviors and possibly shared beliefs.
Another fascinating discovery was the abundant presence of ochre—a naturally occurring mineral pigment that produces red, yellow, and purple hues. The research team recovered more than 7,500 ochre fragments throughout the site, with the highest concentrations found in layers containing human burials. Chemical analysis revealed that these ochre materials came from at least four different sources, some located as far as 60-80 kilometers away in Galilee, and others possibly from the central Negev, more than 100 kilometers to the south.
The significant effort invested in obtaining these pigments from distant sources suggests their importance in the lives of these ancient people. The presence of large chunks of ochre near human remains—including a 4-5 cm piece found between the legs of one buried individual—hints at their ritual significance. Evidence of heat treatment to enhance the red color of some ochre pieces further reveals sophisticated knowledge and intentional manipulation of these materials.
Stone tool production at Tinshemet Cave demonstrates another dimension of cultural uniformity. The researchers analyzed nearly 2,800 stone artifacts and found that a specific flint-knapping technique known as the centripetal Levallois method dominated tool production. This method, which involves careful preparation of a stone core to produce standardized flakes, appears consistently across mid-Middle Paleolithic sites in the region.
This technological consistency is particularly remarkable because it differs significantly from both earlier and later stone tool traditions in the Levant. Earlier Middle Paleolithic populations (around 250,000-140,000 years ago) primarily used methods to produce blade-like tools, while later populations (after 80,000 years ago) employed a more diverse set of techniques. The dominance of the centripetal Levallois method during this middle period represents a distinct technological tradition shared across populations.
Analysis of animal bones from the site reveals a third element of behavioral uniformity: a focus on hunting large game animals. Unlike earlier and later periods, when smaller prey like gazelles dominated the diet, the mid-Middle Paleolithic hunters at Tinshemet and similar sites showed a clear preference for larger ungulates, particularly aurochs (wild cattle) and equids (horse-like animals). This pattern suggests either a shift in hunting strategies or different approaches to transporting animal resources, possibly connected to changes in settlement patterns.
To establish the age of the findings, the research team employed multiple dating techniques, including thermoluminescence dating of burnt flint, optically stimulated luminescence dating of quartz grains in the sediments, and uranium-series dating of snail shells and flowstones. These methods consistently dated the main human occupation layers to approximately 97,000-106,000 years ago, placing them firmly within the mid-Middle Paleolithic period.
The timing corresponds to a warm interglacial period known as Marine Isotope Stage 5, when climatic conditions in the Levant were relatively favorable. Pollen analysis from the lowest layers of the cave indicates a Mediterranean open forest environment with wide-spaced trees, small shrubs, and herbs dominated by evergreen oak.
Perhaps most intriguing about the Tinshemet Cave discovery is what it suggests about interactions between different human populations. “These findings paint a picture of dynamic interactions shaped by both cooperation and competition,” says co-lead author Dr. Marion Prévost.
Scientists have long associated specific behaviors or technologies exclusively with particular human species. Now we have strong evidence that points to a landscape of interaction, where cultural innovations spread across population boundaries through social learning and exchange.
As excavations at Tinshemet Cave continue, researchers hope to uncover additional evidence about the lives and interactions of these ancient people. The site has already yielded remarkable insights into a crucial chapter of human prehistory—a time when different human populations met, exchanged ideas, and created shared traditions despite their physical differences. What began as a simple archaeological survey has evolved into a profound reconsideration of what it means to be human, showing that cultural connections can transcend biological boundaries.
In his February 2025 cover story for The Atlantic, journalist Derek Thompson dubbed our current era “the anti-social century.” He isn’t wrong. According to our recent research, the U.S. is becoming a nation of homebodies.
Using data from the American Time Use Survey, we studied how people in the U.S. spent their time before, during and after the pandemic.
The COVID-19 pandemic did spur more Americans to stay home. But this trend didn’t start or end with the pandemic. We found that Americans were already spending more and more time at home and less and less time engaged in activities away from home stretching all the way back to at least 2003.
And if you thought the end of lockdowns and the spread of vaccines led to a revival of partying and playing sports and dining out, you would be mistaken. The pandemic, it turns out, mostly accelerated ongoing trends.
All of this has major implications for traffic, public transit, real estate, the workplace, socializing and mental health.
Life inside
The trend of staying home is not new. There was a steady decline in out-of-home activities in the two decades leading up to the pandemic.
Compared with 2003, Americans in 2019 spent nearly 30 minutes less per day on out-of-home activities and eight fewer minutes a day traveling. There could be any number of reasons for this shift, but advances in technology, whether it’s smartphones, streaming services or social media, are likely culprits. You can video chat with a friend rather than meeting them for coffee; order groceries through an app instead of venturing to the supermarket; and stream a movie instead of seeing it in a theater.
Of course, there was a sharp decline in out-of-home activities during the pandemic, which dramatically accelerated many of these stay-at-home trends.
Outside of travel, time spent on out-of-home activities fell by over an hour per day, on average, from 332 minutes in 2019 to 271 minutes in 2021. Travel, excluding air travel, fell from 69 to 54 minutes per day over the same period.
But even after the pandemic lockdowns were lifted, out-of-home activities and travel through 2023 remained substantially depressed, far below 2019 levels. There was a dramatic increase in remote work, online shopping, time spent using digital entertainment, such as streaming and gaming, and even time spent sleeping.
Time spent outside the home has rebounded since the pandemic, but only slightly. There was hardly any recovery of out-of-home activities from 2022 to 2023, meaning 2023 out-of-home activities and travel were still far below 2019 levels. On the whole, Americans spent nearly 1.5 hours less per day outside their homes in 2023 than they did in 2003.
While hours worked from home in 2022 were less than half of what they were in 2021, they’re still about five times what they were ahead of the pandemic. Despite this, only about one-quarter of the overall travel time reduction is due to less commuting. The rest reflects other kinds of travel, for activities such as shopping and socializing.
Ripple effects
This shift has already had consequences.
With Americans spending more time working, playing and shopping from home, demand for office and retail space has fallen. While there have been some calls by major employers for workers to spend more time in the office, research suggests that working from home in the U.S. held steady between early 2023 and early 2025 at about 25% of paid work days. As a result, surplus office space may need to be repurposed as housing and for other uses.
There are advantages to working and playing at home, such as avoiding travel stress and expenses. But it has also boosted demand for extra space in apartments and houses, as people spend more time under their own roof. It has changed travel during the traditional morning – and, especially, afternoon – peak periods, spreading traffic more evenly throughout the day but contributing to significant public transit ridership losses. Meanwhile, more package and food delivery drivers are competing with parked cars and bus and bike lanes for curb space.
Perhaps most importantly, spending less time out and about in the world has sobering implications for Americans well beyond real estate and transportation systems.
Research we’re currently conducting suggests that more time spent at home has dovetailed with more time spent alone. Suffice it to say, this makes loneliness, which stems from a lack of meaningful connections, a more common occurrence. Loneliness and social isolation are associated with increased risk for early mortality.
Because hunkering down appears to be the new norm, we think it’s all the more important for policymakers and everyday people to find ways to cultivate connections and community in the shrinking time they do spend outside of the home.
Your significant other’s positive emotions can be contagious, especially in older couples. (Darren Baker/ Shutterstock)
When your spouse is in a good mood, you might feel happier too, but according to new research, their emotional state could be affecting you on a much deeper level. Scientists have discovered that when your partner experiences positive emotions, it might actually lower your cortisol levels, the primary stress hormone in your body, regardless of how you yourself are feeling. This biological connection between older couples adds a whole new dimension to what it means to be in a relationship.
“Having positive emotions with your relationship partner can act as a social resource,” says lead study author Tomiko Yoneda, an assistant professor of psychology at the University of California, Davis, in a statement.
The Aging Body and Stress Management
Study results, published in Psychoneuroendocrinology, are especially telling for older adults in committed relationships. As we get older, our bodies become worse at regulating stress responses, making us more vulnerable to the harmful effects of high cortisol. But a partner who maintains positive emotions might act as a biological buffer against stress.
The research team analyzed data from 321 older couples from Canada and Germany. These weren’t new relationships. The average couple had been together for 43.97 years. Each participant, aged between 56 and 87, completed surveys multiple times daily for a week, reporting their emotions while also providing saliva samples to measure cortisol. Partners completed surveys at the same time but separately, so they couldn’t influence each other’s responses.
Your Mood, My Body
When people reported feeling more positive than usual, their cortisol levels were lower. But when someone’s partner reported more positive emotions than usual, that person’s cortisol was also lower, regardless of how they themselves were feeling. In simple terms, your partner’s good mood might be doing your body good, even if you’re not sharing their happiness.
This connection extended beyond moment-to-moment measurements to total daily cortisol output. When someone’s partner reported higher positive emotions than usual throughout the day, that person showed lower overall cortisol for the day. This link was stronger for older participants and those who reported being happier in their relationships. In some cases, the effect of a partner’s emotions on cortisol was even stronger than the effect of one’s own emotions.
While a partner’s positive emotions were linked to lower cortisol, the researchers didn’t find any connection between a partner’s negative emotions and cortisol levels. Yoneda explained that this makes sense because older adults often develop ways to shield their partners from the physiological effects of negative emotions.
Quality Relationships Make a Difference
The emotional climate of your relationship may be an overlooked factor in your physical health. When your partner tends toward happiness, interest, or relaxation, their emotional state could be protecting your stress physiology.
This doesn’t mean you should pressure your partner to be constantly happy. Rather, these findings point to potential health benefits that come from fostering positive emotional experiences together. Creating opportunities for shared good times might be more than just relationship maintenance; it could be a mutual health boost.
“Relationships provide an ideal source of support, especially when those are high-quality relationships,” says Yoneda. “These dynamics may be particularly important in older adulthood.”
The association between a partner’s positive emotions and lower cortisol was most pronounced for people who reported higher relationship satisfaction. In happy relationships, partners may be more tuned in to each other’s emotional states.
Yoneda noted that these results fit with psychological theories suggesting positive emotions help us act more fluidly in the moment. These experiences can create positive feedback loops that enhance this capability over time. People in relationships can share these benefits when they experience positive emotions together.
Your partner’s happiness might be doing more than lighting up the room. It could be helping regulate your stress physiology in ways that boost your long-term health. In long-term relationships, emotions truly become a shared resource. What’s yours really is mine, right down to the hormonal level. So perhaps the age-old advice to “choose a happy partner” carries more biological wisdom than we ever realized?
Conceptual image of a man walking on the street with his smartphone being charged by his hoodie. (AI-generated image created by StudyFinds)
Forget to bring your charger with you on vacation? What if your clothing could generate electricity from the heat your body naturally produces? This futuristic concept is now approaching reality thanks to scientists at Chalmers University of Technology in Sweden and Linköping University.
Researchers say the remarkable new textile technology converts body heat into electricity through thermoelectric effects, potentially powering wearable devices from your clothing. The innovation, described in an Advanced Science paper, centers on a newly developed polymer called poly(benzodifurandione), or PBFDO, which serves as a coating for ordinary silk yarn.
“The polymers that we use are bendable, lightweight and are easy to use in both liquid and solid form. They are also non-toxic,” says study first author Mariavittoria Craighero, a doctoral student at the Department of Chemistry and Chemical Engineering at Chalmers, in a statement.
Unlike previous attempts at creating thermoelectric textiles, this breakthrough addresses a critical barrier that has long hampered progress: the lack of air-stable n-type polymers. These materials are characterized by their ability to move negative charges and are essential counterparts to the more common p-type polymers in creating efficient thermoelectric devices.
“We found the missing piece of the puzzle to make an optimal thread – a type of polymer that had recently been discovered. It has outstanding performance stability in contact with air, while at the same time having a very good ability to conduct electricity. By using polymers, we don’t need any rare earth metals, which are common in electronics,” explains Craighero.
How Thermoelectric Textiles Work
Thermoelectric generators work by converting temperature differences into electrical energy. When one side of a thermoelectric material is warmer than the other, electrons move from the hot side to the cold side, generating an electrical current. The human body continuously generates heat, creating natural temperature gradients between the skin and the surrounding environment.
For efficient thermoelectric generation, both p-type (positive) and n-type (negative) materials must work together. While p-type materials have been well-established in previous research, creating stable n-type materials has been a persistent challenge. Most n-type organic materials degrade rapidly when exposed to oxygen in the air, often becoming ineffective within days.
What makes this development particularly exciting is the remarkable stability of PBFDO-coated silk. Unlike similar materials that degrade within days when exposed to air, these new thermoelectric yarns maintain their performance for over 14 months under normal conditions without any protective coating. The researchers project a half-life of 3.2 years for these materials – an unprecedented achievement for this type of organic conductor.
Beyond electrical performance, the mechanical properties of the PBFDO-coated silk are equally impressive. The coated yarn can stretch up to 14% before breaking and, more importantly for everyday use, it can withstand machine washing.
“After seven washes, the thread retained two-thirds of its conducting properties. This is a very good result, although it needs to be improved significantly before it becomes commercially interesting,” states Craighero.
The material also demonstrates remarkable temperature resilience. During testing, the researchers found that PBFDO remains flexible even when cooled with liquid nitrogen to extremely low temperatures. This exceptional mechanical stability allows the material to withstand various environmental conditions and physical stresses that would be encountered in real-world use.
The Future of Daily Wear?
To showcase the technology’s potential, the research team created two different thermoelectric textile devices: a thermoelectric button and a larger textile generator with multiple thermoelectric legs.
The thermoelectric button demonstrated an output of about 6 millivolts at a temperature difference of 30 degrees Celsius. Meanwhile, the larger textile generator achieved an open-circuit voltage of 17 millivolts at a temperature difference of 70 degrees Celsius.
With a voltage converter, this could help power ultra-low-energy devices, such as certain types of sensors. However, the current power output—0.67 microwatts at a 70-degree temperature difference—is far below what would be required for USB charging of standard electronics.
While these power outputs mark a major step forward in thermoelectric textiles, it’s important to note that the temperature differences used in lab tests—up to 70 degrees Celsius—are significantly higher than what would typically be experienced in everyday clothing. This means real-world performance may be lower than laboratory results suggest.
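To put those numbers in perspective, here is a quick back-of-the-envelope comparison in Python between the reported output and a basic USB 2.0 port, which supplies up to about 2.5 watts.

```python
# Rough comparison using the figures quoted above.
generator_output_w = 0.67e-6   # reported output at a 70-degree temperature difference
usb2_port_w = 5.0 * 0.5        # a basic USB 2.0 port: 5 V at 0.5 A = 2.5 W

shortfall = usb2_port_w / generator_output_w
print(f"USB charging needs roughly {shortfall:,.0f} times more power")  # ~3,700,000 times
```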
Potential Uses in Healthcare and Wearable Tech
Despite current limitations in power output, the technology shows particular promise for healthcare applications. Small sensors that monitor vital signs like heart rate, body temperature, or movement patterns could potentially operate using this technology, eliminating the need for battery changes or recharging.
For patients with chronic conditions requiring continuous monitoring, self-powered sensors embedded in clothing could provide valuable data without the hassle of managing battery life. Similarly, fitness enthusiasts could benefit from wearables that never need charging, seamlessly tracking performance metrics during activities.
Beyond health monitoring, the technology could eventually support other low-power functions in smart clothing, such as environmental sensing, location tracking, or simple LED indicators. As power conversion efficiency improves, applications could expand to include more power-hungry features.
The Challenges Ahead
Currently, the production process is time-intensive and not suitable for commercial manufacturing, with the demonstrated fabric requiring four days of manual needlework to produce.
“We have now shown that it is possible to produce conductive organic materials that can meet the functions and properties that these textiles require. This is an important step forward. There are fantastic opportunities in thermoelectric textiles and this research can be of great benefit to society,” says Christian Müller, Professor at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology and research leader of the study.
One key challenge identified through computer simulations is the electrical contact resistance between components. Reducing this resistance could potentially increase power output by three times or more. The researchers also investigated how factors like thermoelectric leg length and thread count affect performance, providing valuable insights for future designs.
Interest in these types of conducting polymers has grown significantly in recent years. They have a chemical structure that allows them to conduct electricity similar to silicon while maintaining the physical properties of plastic materials, making them flexible. Research on conducting polymers is ongoing in many areas such as solar cells, Internet of Things devices, augmented reality, robotics, and various types of portable electronics.
Looking Forward
What’s clear is that there is a viable pathway toward practical thermoelectric textiles that can function reliably in everyday conditions. By addressing both the electrical and mechanical requirements for textile integration, this work bridges the gap between laboratory demonstrations and potential real-world applications.
The development of these polymers also aligns with sustainability goals by eliminating the need for rare earth metals commonly used in electronics. With further refinement and scaling of the manufacturing process, this technology could eventually lead to clothing that powers our devices using nothing but our body heat.
For widespread adoption, researchers will need to develop automated production methods that can efficiently coat and assemble the thermoelectric textiles at scale. Additionally, improving power output while maintaining stability remains a critical goal for future research.
Which electric vehicle giant has a better battery? (gguy/Shutterstock)
In the race to dominate the electric vehicle market, two companies stand above the rest: Tesla and China’s BYD. While Tesla pioneered the use of lithium-ion batteries and leads EV sales in North America and Europe, BYD began as a battery manufacturer before expanding into vehicles, surpassing Tesla in global EV sales in 2024. New research from multiple German universities gives us a look at the battery technology powering these automotive giants by directly comparing Tesla’s 4680 cylindrical cell with BYD’s Blade prismatic cell.
The research, published in Cell Reports Physical Science, reveals rare insights into the design, performance, and manufacturing processes of these cutting-edge batteries. By dismantling and analyzing both cell types, the researchers found major differences in energy density, thermal efficiency, and material composition that show the distinct design philosophies of each manufacturer.
“There is very limited in-depth data and analysis available on state-of-the-art batteries for automotive applications,” says lead study author Jonas Gorsch from RWTH Aachen University, in a statement.
For the average consumer, these differences translate into real-world impacts on driving range, charging speed, vehicle cost, and safety. The study offers a window into how battery technology, the heart of any electric vehicle, is evolving through different approaches to solve the same fundamental challenge: how to store more energy safely and efficiently while reducing costs.
The Tale of Two Battery Designs
Tesla’s 4680 cell (named for its 46mm diameter by 80mm height dimensions) represents the company’s latest innovation in battery design. It’s significantly larger than previous cells used in the Model 3, allowing for higher energy density and reduced production costs. The “tabless” design further cuts costs by eliminating the need for certain manufacturing steps.
BYD’s Blade cell takes a completely different approach, using a rectangular prism shape with dimensions of 965mm in length, 90mm in height, and 14mm in thickness. This long, thin design prioritizes safety and cost-effectiveness while offering surprisingly competitive performance metrics despite using different materials.
The most striking difference between the cells is their chemistry. Tesla opts for NMC811 (a nickel-manganese-cobalt blend with high nickel content), delivering impressive energy density of 241 Wh/kg and 643 Wh/l. In simpler terms, Tesla packs more energy into the same weight and volume. BYD uses LFP (lithium iron phosphate), which achieves a more modest 160 Wh/kg and 355 Wh/l. This choice reflects BYD’s focus on cost-effectiveness and longevity over maximum range.
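Those energy-density figures translate directly into cell mass and volume. The sketch below works through a hypothetical 60 kWh pack using only the numbers quoted above; real packs add housings, cooling, and electronics, which are ignored here.

```python
# Illustrative cell-level comparison based on the reported energy densities.
PACK_WH = 60_000  # hypothetical 60 kWh pack, chosen only for illustration

cells = {
    "Tesla 4680 (NMC811)": {"wh_per_kg": 241, "wh_per_l": 643},
    "BYD Blade (LFP)": {"wh_per_kg": 160, "wh_per_l": 355},
}

for name, d in cells.items():
    mass_kg = PACK_WH / d["wh_per_kg"]
    volume_l = PACK_WH / d["wh_per_l"]
    print(f"{name}: ~{mass_kg:.0f} kg of cells in ~{volume_l:.0f} liters")
# Tesla 4680 (NMC811): ~249 kg of cells in ~93 liters
# BYD Blade (LFP): ~375 kg of cells in ~169 liters
```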
When examining heat management, the researchers found that the Tesla 4680 cell generates twice the heat per volume compared to the BYD Blade cell at the same charging rate. This difference impacts the cooling systems needed for fast charging and has implications for battery longevity and safety. Overall, the study indicates that BYD’s cell operates more efficiently, generating less waste heat and making temperature management easier.
Looking Inside: Construction and Materials
When researchers took apart the batteries, they found some major differences in how Tesla and BYD build their cells. Inside BYD’s Blade battery, the key components, the positive and negative layers (cathodes and anodes), are stacked in a Z-folded pattern with many thin layers in between. This design makes the battery safer and more durable, but it also means that electricity has to travel a longer path through the battery, which can reduce efficiency. To keep everything securely in place, BYD uses a special lamination method, sealing the edges of the separator (the thin layer that prevents short circuits between the positive and negative sides).
Tesla takes a different approach with its 4680 battery, using a “jelly roll” design, sort of like rolling up a long strip of paper. This setup helps electricity flow more directly, improving performance. One noticeable feature is a small empty space in the center, which likely helps with manufacturing and connecting the battery’s internal parts.
Unlike many other battery manufacturers that use ultrasonic welding, both Tesla and BYD rely on laser welding to connect their thin electrode foils. Despite the BYD cell being significantly larger than Tesla’s, both batteries have a similar proportion of non-active components, such as current collectors, housing, and busbars.
We all know the saying “Don’t judge a book by its cover,” but a new survey suggests that in the workplace, your “cover” might matter more than you think — especially when it comes to income. A recent survey asked 1,050 Americans about “pretty privilege” – the idea that better-looking people get more advantages in life – and found that a whopping 81.3% believe it exists at work.
The results show how our appearance might be influencing everything from who gets hired to who gets that next big promotion.
Pretty privilege isn’t just limited to modeling or acting jobs. Eight in ten people surveyed believe attractive coworkers are more likely to be promoted, hired, or given raises. Even more telling, 66.9% of people have actually seen someone treated unfairly or talked about negatively because of how they look.
The survey, conducted by Standout CV, shows that the pressure to look good at work is real. About 64.2% of people feel pushed to change their natural features – like straightening their hair or wearing makeup – just to fit in at the office. And 83.4% think colleagues who put more effort into their appearance are seen as more capable professionals.
How We See Ourselves
When asked to rate their own workplace attractiveness on a scale of 1 to 10, the average person gave themselves a 7.7. Men seemed more confident about their looks, with 37.5% rating themselves a 9 or perfect 10, compared to only 27.4% of women.
These self-ratings revealed a lot about career experiences. Nearly half (46%) of people who rated themselves as unattractive (scoring 1-3) said their looks had hurt their careers – that’s five times higher than the average of 7.6%.
On the flip side, those who considered themselves good-looking (rating above 7) were likely to say their appearance helped them professionally (60.7%). This number jumped to 66.8% for those who gave themselves a 9 or 10.
People who saw themselves as average lookers (rating 4-6) were most likely to say their appearance had no impact on their work life (38% compared to just 16.2% overall).
Interestingly, one in five people said their looks affected their careers both positively and negatively. This stayed consistent regardless of how attractive they thought they were. This might happen when someone benefits from good looks but also faces issues like not being taken seriously.
In fact, 55.7% of people admitted to downplaying their appearance to be taken more seriously at work. This number rose to 68.7% among those who considered themselves very attractive.
(Photo by Getty Images in collaboration with Unsplash+)
Tattoos have become a mainstream form of self-expression, adorning the skin of millions worldwide. But a new study from Danish researchers uncovers concerning connections between tattoo ink exposure and increased risks of both skin cancer and lymphoma.
Approximately one in four adults in many Western countries now sport tattoos, with prevalence nearly twice as high among younger generations. The study, published in BMC Public Health, adds to growing evidence that the popular form of body art may carry long-term health consequences previously unrecognized.
The study’s lead author, Signe Bedsted Clemmensen, along with colleagues at the University of Southern Denmark, analyzed data from two complementary twin studies – a case-control study of 316 twins and a cohort study of 2,367 randomly selected twins born between 1960 and 1996. The team created a specialized “Danish Twin Tattoo Cohort” that allowed them to control for genetic and environmental factors when examining cancer outcomes among tattooed and non-tattooed individuals.
When comparing twins where one had cancer and one didn’t, researchers found that the tattooed twin was more likely to be the one with cancer. In the case-control study, tattooed individuals had a 62% higher rate of skin cancer compared to non-tattooed people. The cohort study showed even stronger associations, with tattooed individuals having a nearly four times higher rate of skin cancer and a 2.83 times higher rate of basal cell carcinoma.
Size appears to matter significantly. Large tattoos (bigger than the palm of a hand) were associated with substantially higher lymphoma and skin cancer risks than smaller tattoos, potentially due to higher exposure levels or longer exposure time. This dose-response relationship strengthens the case for causality rather than mere correlation.
“This suggests that the bigger the tattoo and the longer it has been there, the more ink accumulates in the lymph nodes. The extent of the impact on the immune system should be further investigated so that we can better understand the mechanisms at play,” says Clemmensen, an assistant professor of biostatistics, in a statement.
The Journey of Tattoo Ink Through the Body
Scientists have long known that tattoo ink doesn’t simply stay put in the skin. Particles from tattoo pigments migrate through the bloodstream and accumulate in lymph nodes and potentially other organs. The researchers proposed an “ink deposit conjecture” – suggesting that tattoo pigments trigger inflammation at deposit sites, potentially leading to chronic inflammation and increased risk of abnormal cell growth.
Black ink, the most commonly used tattoo color, has been a particular focus of concern. It typically contains soot products like carbon black, which the International Agency for Research on Cancer (IARC) has listed as possibly cancer-causing to humans. Through incomplete burning during carbon black production, harmful compounds form as byproducts, including benzo[a]pyrene, which IARC classifies as cancer-causing to humans.
“We can see that ink particles accumulate in the lymph nodes, and we suspect that the body perceives them as foreign substances,” explains study co-author Henrik Frederiksen, a consultant in hematology at Odense University Hospital and clinical professor at the university. “This may mean that the immune system is constantly trying to respond to the ink, and we do not yet know whether this persistent strain could weaken the function of the lymph nodes or have other health consequences.”
Colored inks pose their own problems. Red ink – often associated with allergic reactions – contains compounds that may release harmful substances when exposed to sunlight or during laser tattoo removal.
“We do not see a clear link between cancer occurrence and specific ink colors, but this does not mean that color is irrelevant,” notes Clemmensen. “We know from other studies that ink can contain potentially harmful substances, and for example, red ink more often causes allergic reactions. This is an area we would like to explore further.”
The researchers suggest that with tattoo prevalence rising dramatically, especially among younger people, public awareness campaigns might be needed to educate about potential risks.
“We are concerned that tattoo ink has severe public health consequences since tattooing is abundant among the younger generation,” they write in their conclusion. The team recommends further studies to pinpoint the exact biological mechanisms through which tattoo ink might induce cancer.
A Growing Body Of Research
This isn’t the first research to raise alarms about tattoo safety. Previous studies have documented cases of skin conditions and tumors occurring within tattoo areas. However, this large-scale study provides some of the strongest evidence yet for a relationship between tattoos and cancer.
For those already sporting tattoos, the research doesn’t suggest panic – but awareness. The time between tattoo exposure and cancer diagnosis in the study was substantial – a median of 8 years for lymphoma and 14 years for skin cancer. This suggests that cancers develop gradually over time, and monitoring for any changes in tattooed areas might be prudent.
The rise in popularity of tattoo removal services presents its own concerns. The researchers specifically highlight that laser tattoo removal breaks down pigments into smaller fragments that may be more mobile within the body, potentially increasing migration to lymph nodes and other organs.
As with many health studies, this research doesn’t definitively prove causation, but it adds significant weight to growing evidence of long-term risks. The researchers point out that even with new European restrictions on harmful compounds in tattoo inks, the body’s immune response to foreign substances might be problematic regardless of specific ink components.
Balancing Expression and Health
As tattoo culture continues to thrive globally, balancing personal expression through body art with health considerations becomes increasingly important.
With tattoos now firmly embedded in mainstream culture, this research doesn’t aim to stigmatize body art but rather to inform safer practices. Whether this means developing safer inks, improving tattoo application techniques, or simply making more informed choices about tattoo size and placement, understanding the biological impact of tattoo ink is essential for public health.
As the researchers conclude, further studies that pinpoint the biological mechanisms of tattoo ink-induced cancer are needed. Until then, those considering getting inked might want to weigh the aesthetic benefits against potential long-term health considerations – a balance that, like the perfect tattoo design, will be uniquely personal.
We live in a happiness-obsessed world. Self-help gurus promise paths to bliss, Instagram influencers peddle happiness as a lifestyle, and corporations build marketing campaigns around the pursuit of positive emotions. But new research suggests a surprising twist: trying too hard to be happy might actually be making us miserable.
Researchers from the University of Toronto Scarborough and the University of Sydney found that actively pursuing happiness drains our mental energy – the same energy we need for self-control. Their study, published in Applied Psychology: Health and Well-Being, challenges what many of us believe about happiness.
“The pursuit of happiness is a bit like a snowball effect. You decide to try making yourself feel happier, but then that effort depletes your ability to do the kinds of things that make you happier,” says Sam Maglio, marketing professor at the University of Toronto Scarborough and the Rotman School of Management, in a statement.
This might sound familiar: You wake up determined to have a great day. You plan mood-boosting activities and work hard to stay positive. But by evening, you’re ordering takeout instead of cooking, mindlessly scrolling social media, and snapping at your partner. Why? Your happiness pursuit itself might be the problem.
Maglio puts it bluntly: “The more mentally rundown we are, the more tempted we’ll be to skip cleaning the house and instead scroll social media.”
Testing the Happiness Drain
The research team ran four studies that gradually built their case.
First, they surveyed 532 adults about how much they valued and pursued happiness, then measured their self-reported self-control. The results showed a clear pattern: people who placed higher value on seeking happiness reported worse self-control abilities.
For their second study, they moved beyond self-reports to actual behavior. They had 369 participants complete a series of consumer choice rankings and measured how long they persisted at the task. Those with stronger tendencies to pursue happiness showed less persistence, suggesting their mental resources were already running low.
From Happiness Ads to Chocolate Cravings
For their third study, the researchers got clever. They intercepted 36 people at a university library and showed them either an advertisement that prominently featured the word “happiness” or a neutral ad without any happiness messaging. Then they offered participants chocolate candies, telling them to eat as many as they wanted while rating the taste.
“The story here is that the pursuit of happiness costs mental resources,” Maglio explains. “Instead of just going with the flow, you are trying to make yourself feel differently.”
The results were striking: people exposed to the happiness ad ate nearly twice as many chocolates (2.94 vs. 1.56 on average) – a classic sign of decreased self-control. This raises questions about happiness-themed marketing campaigns – they might actually be draining our willpower and setting us up to make choices we later regret.
Not All Goals Drain You the Same
For their final experiment, the researchers tackled an important question: Is happiness-seeking uniquely depleting, or does pursuing any goal require mental energy?
They had 188 participants make 25 choices between pairs of everyday products (like choosing between an iced latte and green tea). One group was told to choose options that would “improve their happiness,” while the other group chose based on what would “improve their accurate judgment.” Then everyone worked on a challenging anagram puzzle where they could quit whenever they wanted.
The happiness group quit much sooner – lasting only 444 seconds on average compared to 574 seconds for the accuracy group. This significant difference suggested that pursuing happiness specifically drains mental energy more than other types of goals.
This wasn’t Maglio’s first investigation into happiness backfiring. In a 2018 study with researcher Aekyoung Kim, he found that people actively seeking happiness tend to feel like they’re running short on time, creating stress that ultimately makes them unhappier.
The Pressure To Feel Even Better
The self-improvement industry rakes in over $10 billion largely by promising to boost happiness. Bestsellers like “The Happiness Project,” “The Art of Happiness,” and “The Happiness Advantage” sell millions of copies with strategies for maximizing positive emotions. But this research suggests many of these approaches might be working against themselves.
The researchers note that the self-help industry puts “a lot of pressure and responsibility on the self.” Many people now treat happiness like money – “something we can and should gather and hoard as much as we can.” This commodification of happiness may be part of the problem, creating a mindset where we’re constantly striving for more rather than appreciating what we have.
Why This Happens
Think of self-control like a gas tank that gets emptied throughout the day. Psychologist Roy Baumeister’s research shows that every act of self-control – resisting temptation, controlling emotions, making decisions – uses fuel from the same tank.
Seeking happiness burns through this fuel quickly because it requires managing your actions, monitoring your thoughts, and actively changing your emotions. When your tank runs low, you’re more likely to make poor choices like overeating, overspending, or being short with others – creating a cycle that ultimately makes you less happy.
The Real Secret To Happiness
So should we abandon the pursuit of well-being? Not exactly. But the research suggests a more balanced approach might work better.
Maglio suggests we think of happiness like sand at the beach: “You can cling to a fistful of sand and try to control it, but the harder you hold, the more your hand will cramp. Eventually, you’ll have to let go.”
His advice cuts through the complexity with refreshing simplicity: “Just chill. Don’t try to be super happy all the time,” says Maglio, whose work is supported by a grant from the Social Sciences and Humanities Research Council of Canada. “Instead of trying to get more stuff you want, look at what you already have and just accept it as something that gives you happiness.”
When we ease up on constantly trying to maximize happiness and accept a wider range of emotions, we might actually preserve the mental energy needed to make better decisions – and ultimately feel better.
Being stressed about your finances can lead to burnout at work. (PeopleImages.com – Yuri A/Shutterstock)
In today’s world, the boundaries between our personal and professional lives often blur. Many of us try to keep financial worries separate from our work life, but a new study from the University of Georgia suggests this separation may be wishful thinking. Research reveals that our financial well-being significantly impacts our job satisfaction, with workplace burnout playing a key role.
The study, published in the Journal of Workplace Behavioral Health, shows that when employees experience financial stress, it follows them to work, affecting their performance and satisfaction through increased burnout.
The Hidden Cost of Financial Stress at Work
The U.S. Surgeon General recognized this connection in 2024 by naming workplace well-being one of the top public health priorities. Yet remarkably, 60% of employers don’t consider employee well-being a top 10 initiative. This disconnect is costly, with dissatisfied employees reportedly costing the U.S. economy around $1.9 trillion in lost productivity in 2023 alone.
“Stress from work can often leave people feeling tired and overwhelmed. Anxiety in other parts of life could make this even worse,” says lead author Camden Cusumano from the University of Georgia, in a statement. “Just as injury in one part of the body could lead to pain in another, personal financial stress can manifest in someone’s work performance.”
While previous research has examined connections between compensation and job satisfaction, this study takes a more holistic approach. Rather than focusing merely on salary figures, researchers investigated how employees’ overall assessment of their financial health impacts their workplace experience.
When Money Worries Follow You to Work
Their research distinguishes between two dimensions of financial well-being: current money management stress (present concerns) and expected future financial security (future outlook). Both of these affect job satisfaction in different ways.
“We call them different life domains. There’s the work domain, there might be the family domain, things like that,” says Cusumano. “But sometimes there’s spillover from one to the other. My finances might impact the way I’m feeling about the stress in my family, or if I’m working long hours, that might cause some conflict with my family as well.”
The researchers used the Conservation of Resources theory as their framework. This theory suggests people experience stress when they lose resources, face threats to their resources, or fail to gain new resources despite their efforts. In this context, financial well-being represents a crucial resource: a sense of security and control regarding one’s finances.
Burnout Beyond the Workplace
For the study, the researchers surveyed 217 full-time U.S. employees who earned at least $50,000 annually. This sample was deliberately chosen to focus on workers not predisposed to financial insecurity due to low income.
Burnout shows up in three main ways: feeling detached from yourself or others, feeling constantly tired, and feeling like your accomplishments don’t matter. All three combine to make employees tired and disengaged from their work.
Current money management stress didn’t directly affect job satisfaction but operated through increased burnout. In contrast, expected future financial security had a direct positive association with job satisfaction that wasn’t mediated by burnout.
These findings highlight that financial stress doesn’t just create problems at home; it fundamentally alters how employees experience their work. People feeling stressed about making ends meet today are more likely to experience burnout, which in turn reduces their job satisfaction. Meanwhile, those who feel secure about their financial future tend to be more satisfied with their jobs, regardless of burnout levels.
Future financial concerns may also play a role in job satisfaction. If a worker is feeling stressed about their current position, believing their financial situation may improve could enhance their views on their job.
Creating Better Workplace Support Programs
Employers often focus on compensation as the primary financial factor affecting employee satisfaction. However, if an employee’s financial struggles are leading to burnout and job dissatisfaction, addressing work-related factors alone won’t fully resolve the problem.
This research highlights the importance of developing personal financial management skills alongside professional development for employees. Building financial resilience may not only improve the quality of life at home but could also enhance workplace experience and career success, especially in today’s workforce where remote and hybrid work have further blurred the boundaries between work and personal life.
“Some companies are actually providing financial counseling to some of their employees,” says Cusumano. “They’re paying attention to how finances can really permeate different areas of life.”
Organizations could benefit from broadening their wellness initiatives to include financial well-being resources. Providing tools and support to help employees manage current financial stress and build future security could yield significant returns through improved job satisfaction and reduced burnout.
In the end, money might not buy happiness, but financial stress certainly seems capable of diminishing workplace satisfaction. By understanding these connections, both organizations and individuals can develop more effective strategies for navigating the complex relationship between financial health and workplace well-being.
Mathematicians use topology to study the shape of the world and everything in it
When you look at your surrounding environment, it might seem like you’re living on a flat plane. After all, this is why you can navigate a new city using a map: a flat piece of paper that represents all the places around you. This is likely why some people in the past believed the Earth to be flat. But most people now know that is far from the truth.
You live on the surface of a giant sphere, like a beach ball the size of the Earth with a few bumps added. The surface of the sphere and the plane are two possible 2D spaces, meaning you can walk in two directions: north and south or east and west.
What other possible spaces might you be living on? That is, what other spaces around you are 2D? For example, the surface of a giant doughnut is another 2D space.
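One way to pin down what “2D” means here is that two numbers are enough to name any point on the surface. The short Python sketch below maps two angles to a point on a doughnut’s surface; the radii and sample angles are arbitrary choices made only for illustration.

```python
import math

def torus_point(u, v, R=2.0, r=1.0):
    """Map two angles (u, v) to a point on a torus sitting in 3D space.

    Two numbers locate any point on the surface, which is what makes the
    doughnut's surface a 2D space even though it lives inside 3D. R and r
    (the large and small radii) are arbitrary here.
    """
    x = (R + r * math.cos(v)) * math.cos(u)
    y = (R + r * math.cos(v)) * math.sin(u)
    z = r * math.sin(v)
    return (x, y, z)

print(torus_point(0.0, 0.0))              # (3.0, 0.0, 0.0)
print(torus_point(math.pi / 2, math.pi))  # roughly (0.0, 1.0, 0.0)
```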
Through a field called geometric topology, mathematicians like me study all possible spaces in all dimensions. Whether trying to design secure sensor networks, mine data or use origami to deploy satellites, the underlying language and ideas are likely to be those of topology.
The shape of the universe
When you look around the universe you live in, it looks like a 3D space, just like the surface of the Earth looks like a 2D space. However, just like the Earth, if you were to look at the universe as a whole, it could be a more complicated space, like a giant 3D version of the 2D beach ball surface or something even more exotic than that.
While you don’t need topology to determine that you are living on something like a giant beach ball, knowing all the possible 2D spaces can be useful. Over a century ago, mathematicians figured out all the possible 2D spaces and many of their properties.
In the past several decades, mathematicians have learned a lot about all of the possible 3D spaces. While we do not have a complete understanding like we do for 2D spaces, we do know a lot. With this knowledge, physicists and astronomers can try to determine what 3D space people actually live in.
While the answer is not completely known, there are many intriguing and surprising possibilities. The options become even more complicated if you consider time as a dimension.
To see how this might work, note that to describe the location of something in space – say a comet – you need four numbers: three to describe its position and one to describe the time it is in that position. These four numbers are what make up a 4D space.
Now, you can consider what 4D spaces are possible and in which of those spaces do you live.
Topology in higher dimensions
At this point, it may seem like there is no reason to consider spaces that have dimensions larger than four, since four dimensions appear to be all that is needed to describe the universe we live in. But a branch of physics called string theory suggests that the universe has many more dimensions than four.
There are also practical applications of thinking about higher dimensional spaces, such as robot motion planning. Suppose you are trying to understand the motion of three robots moving around a warehouse floor. You can put a grid on the floor and describe the position of each robot by its x and y coordinates on the grid. Since each of the three robots requires two coordinates, you will need six numbers to describe all of the possible positions of the robots. You can interpret the possible positions of the robots as a 6D space.
As the number of robots increases, the dimension of the space increases. Factoring in other useful information, such as the locations of obstacles, makes the space even more complicated. In order to study this problem, you need to study high-dimensional spaces.
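As a toy illustration of how quickly these spaces grow, the Python sketch below uses an arbitrary grid size and robot count (assumptions made only for this example) to show the dimension of the joint configuration space and the number of joint positions in which no two robots collide.

```python
import itertools

def configuration_space_dim(n_robots, coords_per_robot=2):
    """Dimension of the joint configuration space: each robot on the floor
    contributes two coordinates, so three robots give a 6D space."""
    return n_robots * coords_per_robot

def count_collision_free_states(n_robots, grid_size=4):
    """Brute-force count of joint grid placements in which no two robots
    share a cell (a tiny toy model; real planners never enumerate this)."""
    cells = list(itertools.product(range(grid_size), repeat=2))
    free = 0
    for placement in itertools.product(cells, repeat=n_robots):
        if len(set(placement)) == n_robots:  # no two robots on the same cell
            free += 1
    return free

print(configuration_space_dim(3))      # 6
print(count_collision_free_states(2))  # 240 of the 256 joint positions on a 4x4 grid
```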
There are countless other scientific problems where high-dimensional spaces appear, from modeling the motion of planets and spacecraft to trying to understand the “shape” of large datasets.
Tied up in knots
Another type of problem topologists study is how one space can sit inside another.
For example, if you hold a knotted loop of string, then you have a 1D space (the loop of string) inside a 3D space (your room). Such loops are called mathematical knots.
The study of knots first grew out of physics but has become a central area of topology. Knots are essential to how scientists understand 3D and 4D spaces and have a delightful and subtle structure that researchers are still trying to understand.
I’ve never been one to turn down something sweet. A bar of chocolate to reward myself for a successful grocery shop, some dessert after dinner—since I only indulged a few times a week, I thought it was pretty harmless.
But after noticing how sluggish, irritable, and foggy I felt after sugar-heavy days, I started wondering: could my sugar intake be affecting my mental health?
With that question in mind, I decided to cut out added sugar for an entire month. No packets of jelly beans, no sweetened boba teas, and no honey in my morning oats. The goal wasn’t just to see how my body felt, but to observe whether eliminating sugar had any impact on my mood, energy levels, and mental clarity.
The result? Let’s just say it wasn’t what I expected.
Why I Decided to Cut Out Sugar
I don’t eat added sugar every day. Instead, I tend to indulge in a (very) sweet treat twice a week or so. I usually justify it by saying that I “deserve” a treat—to reward myself for a work victory, to celebrate a special occasion, or to comfort myself after a hard day.
There’s nothing wrong with treating yourself. But I eventually noticed that my sugar binge led to some uncomfortable symptoms, particularly brain fog, poor sleep, and mood swings.
“Excess sugar intake, especially from refined sources, can cause rapid spikes and crashes in blood sugar levels, which can lead to irritability, fatigue, and difficulty concentrating,” says dietician Jessica M. Kelly, MS, RDN, LDN, the founder and owner of Nutrition That Heals. “Over time, frequent blood sugar fluctuations can contribute to increased anxiety.”
“Over time, a high-sugar diet may increase the risk of depression by causing inflammation and disrupting brain chemicals like serotonin and dopamine,” adds Marjorie Nolan Cohn, MS, RD, LDN, CEDS-S, the clinical director of Berry Street. “These ups and downs make it harder to manage emotions, making mood swings more frequent.”
A 2017 study, which looked at data collected from 23,245 people, found that higher sugar intake is associated with depression, particularly in men. Participants with the highest level of sugar consumption were 23% more likely to have a diagnosed mental illness than those with the lowest level of sugar consumption.1
Over time, a high-sugar diet may increase the risk of depression by causing inflammation and disrupting brain chemicals like serotonin and dopamine.
— MARJORIE NOLAN COHN, MS, RD, LDN, CEDS-S
Other research, like this 2024 study, also suggests a link between depression and sugar consumption—but the authors point out this connection might be because mental distress can lead to emotional eating and make it harder to control cravings.2
For the purpose of my experiment, I needed to set some ground rules about the sugars I would and wouldn’t cut out.
According to Kelly and Nolan Cohn, not all sugars affect mental health in the same way. “Natural sugars found in, for example, fruit and dairy, accompany fiber, vitamins, and antioxidants that are health-promoting and slow glucose absorption,” Kelly explains. “Refined sugars, like those in sodas and candy, can cause rapid blood sugar spikes and crashes which can lead to mood swings and brain fog.”
Excited to see the results, I began my experiment!
Week 1: The “Oh Wow, Does That Really Contain Sugar?” Phase
During my first week, I didn’t experience changes in my mood, but rather in my behavior and mindset.
This experiment required me to pick up a new habit: reading nutritional labels and ingredient lists. Although giving up sugar was easy for the first few days, this habit was pretty hard.
I was surprised to learn that sugar is in a lot of things. Most of my favorite savory treats contained sugar. Even my usual “healthy” post-gym treat—a protein bar—was off-limits.
Surprisingly, I didn’t really have any sugar withdrawals, which can be common among people who typically consume a lot of sugar.
“Cutting out sugar can trigger strong cravings since it affects the brain’s reward system, this can lead to withdrawal-like urges, and for some, it can feel very intense,” says Nolan Cohn. Sugar withdrawal symptoms often include headaches, fatigue, and mood swings.
On day four, I had my first major challenge—I realized I could no longer grab some milk chocolate on the way out of the grocery store. Talking myself out of this was harder than I’d like to admit.
The biggest challenge for week one? Choosing what to eat in a restaurant. Most menus don’t specify which dishes contain sugar, and there’s a surprising amount of sugar in savory dishes, like tomato-based curries and wraps filled with sugary salad dressings.
By the end of week one, I felt like giving up. Although I didn’t have any major cravings, constantly checking food labels was annoying, and there were no notable benefits—at least, not yet.
Week 2: A Shift in Mood and Energy
Around the 10-day mark, things started changing for the better.
Even if I don’t eat a lot of sugar in my day-to-day diet and my home-cooked meals, I tend to treat myself—a lot. Food is a go-to source of comfort for me, often to my detriment. My mindset is often along the lines of, “Oh, who cares? It’s just a treat. It’s a special occasion!”
Because I wanted to stick to the experiment, I had to pause my “treat yo’self” mindset. As I was more mindful of sugar, I planned my snacks better, avoided getting takeout, and practiced more self-control while shopping for groceries.
More importantly, I had to actually engage with my feelings instead of eating them away.
On my therapist’s recommendation, I paid attention to the uncomfortable feelings that’d usually lead me to eat, and I journaled about them instead.
I also noticed some changes in my mood—finally! Because I wasn’t eating a lot of sugar and then crashing twice a week, my energy levels felt a bit more stable. This meant that my mood also felt more stable.
Week 3: Mental Clarity and Emotional Balance
By week three, I was genuinely surprised by how good I felt.
Not only were my energy and mood a little calmer, but I was also really chuffed with myself for managing to avoid sugar for such a long time.
Researchers say blue light exposure in the morning may be a healthier alternative to taking sleep medications. (amenic181/Shutterstock)
Getting older brings many changes, and unfortunately, worse sleep is often one of them. Many seniors struggle with falling asleep, waking up frequently during the night, and generally feeling less rested. But what if something as simple as changing your light exposure could help?
A new study from the University of Surrey has found that the right light, at the right time, might make a significant difference in older adults’ sleep and daily activity patterns. This research, published in GeroScience, reveals that morning exposure to blue-enriched light can be beneficial, while that same light in the evening can actually make sleep problems worse.
“Our research shows that carefully timed light intervention can be a powerful tool for improving sleep and day-to-day activity in healthy older adults,” explains study author Daan Van Der Veen from the University of Surrey, in a statement. “By focusing on morning blue light and maximizing daylight exposure, we can help older adults achieve more restful sleep and maintain a healthier, more active lifestyle.”
Why light timing matters
So why do older adults have more sleep troubles in the first place? Part of the problem lies in the aging eye. As we get older, our eyes undergo natural changes—the lens yellows, pupils get smaller, and we have fewer photoreceptor cells. All these changes mean less light reaches the brain’s master clock, located in a tiny region called the hypothalamic suprachiasmatic nuclei (SCN).
That yellowing lens is particularly problematic because it filters out blue light wavelengths specifically. It’s like wearing subtle yellow sunglasses all the time. This matters because blue light (wavelengths between 420 and 480 nanometers) is especially powerful at regulating our body clocks. With less blue light reaching their brains, older adults’ internal clocks can become weaker and more prone to disruption.
Many seniors also spend less time outdoors and have fewer social engagements, further reducing their exposure to bright natural light. Meanwhile, they might be getting too much artificial light at night, which can confuse the body’s natural rhythms.
The Surrey researchers wanted to see if they could improve sleep for older adults living independently at home by tweaking their light exposure. They recruited 36 people aged 60 and over who reported having sleep problems. None were in full-time employment, and all were free from eye disorders or other conditions that might complicate the study.
Over an 11-week period during fall and winter (when natural daylight is limited in the UK), participants followed a carefully designed protocol. They spent one week establishing baseline measurements, followed by three weeks using either blue-enriched white light (17,000 K) or standard white light (4,000 K) for two hours each morning and evening. After a two-week break, they switched to the other light condition for three weeks, followed by another two-week washout period.
Participants used desktop light boxes while going about normal activities like reading or watching TV. They wore activity monitors on their wrists around the clock and light sensors around their necks during the day. They kept sleep diaries and collected urine samples to measure melatonin metabolites, markers indicating how their internal clocks were functioning.
Morning light helps, evening light hurts
The results were telling. Longer morning exposure to the blue-enriched light significantly improved the stability of participants’ daily activity patterns and reduced sleep fragmentation. By contrast, evening exposure to that same light made it harder to fall asleep and reduced overall sleep quality.
Another key discovery was that participants who spent more time in bright light (above 2,500 lux, roughly the brightness you’d experience outdoors on a cloudy day) had more active days, stronger daily rhythms, and tended to go to bed earlier. This finding reinforces long-standing advice from sleep experts: getting outside during the day is really important for good sleep.
Morning people (early birds) naturally started their morning light sessions earlier than night owls. However, most participants used their evening light sessions at similar times, suggesting that social habits might influence evening routines more than biological clocks.
The women in the study showed more variable activity patterns throughout the day than men, and those who took more daytime naps had less stable daily rhythms and were generally less active.
Practical tips
By the end of the study, participants reported meaningful improvements in their sleep quality. This means light therapy could be a potential alternative to sleep medications, which often come with side effects.
“We believe that this is one of the first studies that have looked into the effects of self-administered light therapy on healthy older adults living independently to aid their sleep and daily activity,” says study author Débora Constantino, a postgraduate research student. “It highlights the potential for accessible and affordable light-based therapies to address age-related sleep issues without the need for medication.”
For older adults seeking better rest, the advice is clear:
Get bright, blue-enriched light in the morning: Use a light box or spend time outdoors after waking up.
Dim the lights in the evening: Reduce exposure to phones, tablets, and bright overhead lights.
Stay consistent: Establishing regular morning and evening routines can further support healthy sleep patterns.
This approach isn’t just for people in care homes or those with cognitive impairments; it can also benefit healthy, independent older adults. With an aging population worldwide, finding simple and effective strategies to improve sleep has never been more important. The right light at the right time might be a key part of aging well.
Age-related cognitive decline sneaks up on millions of people worldwide. It begins with those frustrating “senior moments” in middle age and can progress to more serious memory and thinking problems later in life. While scientists have traditionally focused their attention directly on the brain to understand these changes, new research out of Toho University in Japan points to an unexpected contributor: your belly fat.
A study published in the journal GeroScience reveals that visceral fat—the deep fat surrounding your internal organs—plays a role in maintaining brain health through a chemical messaging system. You might have heard of BDNF (brain-derived neurotrophic factor)—think of it as brain fertilizer. It helps brain cells grow, survive, and form new connections. The more BDNF you have, the better your brain functions. But as you age, your BDNF levels naturally drop, and that’s when memory problems can start.
Here’s where belly fat comes in. This new study found that CX3CL1, a protein made by visceral fat, plays a big role in maintaining healthy BDNF levels. In younger mice, their belly fat produced plenty of CX3CL1, keeping their brain function strong. But as the mice aged, both their belly fat and their brain’s BDNF levels took a nosedive. When scientists artificially lowered CX3CL1 in young mice, their BDNF levels dropped too, mimicking the effects of aging. But when they gave older mice an extra dose of CX3CL1, their brain’s BDNF bounced back.
These findings flip conventional wisdom about belly fat on its head. While excess visceral fat is still harmful and linked to many health problems, this research suggests that healthy amounts of visceral fat early on serve an important purpose by producing signaling molecules that support brain health.
Tracking Age-Related Decline in the Brain and Belly
The research tracked male mice at different ages—5, 10, and 18 months old (roughly equivalent to young adult, middle-aged, and elderly humans). The 5-month-old and 10-month-old mice had similar levels of BDNF in their hippocampus, but by 18 months, these levels had dropped by about a third. This pattern matches the typical trajectory of cognitive aging, where significant decline often doesn’t begin until later in life.
Similarly, CX3CL1 production in visceral fat remained stable in younger mice but declined significantly in older animals, supporting a link between the two proteins.
Stress Hormones and the Fat-Brain Connection
To dig deeper, the researchers asked: What causes the drop in fat-derived CX3CL1 in the first place? The answer involved stress hormones like cortisol (in humans) and corticosterone (in mice).
“Glucocorticoids boost CX3CL1 production. An enzyme in belly fat called 11β-HSD1 reactivates inactive forms of glucocorticoids and keeps them active in cells, promoting glucocorticoid-dependent expression of CX3CL1,” study co-author Dr. Yoshinori Takei tells StudyFinds. “11β-HSD1 is essential for belly fat to respond to circulating glucocorticoids properly.”
But as we age, the amount of this enzyme declines; with less active 11β-HSD1, the entire signaling chain weakens, lowering CX3CL1 and BDNF levels and potentially contributing to memory loss.
The paper notes that while lower 11β-HSD1 in aging is problematic for CX3CL1 production and brain health, excessive 11β-HSD1 expression is linked to obesity-related diseases. High 11β-HSD1 levels are associated with metabolic syndrome, which is a known risk factor for cognitive decline.
Rethinking Belly Fat
The connection between belly fat and brain health highlights how intertwined our body systems really are. Our brains don’t operate in isolation but depend on signals from throughout the body—including, surprisingly, our fat tissue.
Before you start thinking about packing on belly fat for the sake of your brain, don’t! The researchers stress that balance is key. Too little belly fat and you lose the brain-protecting effects, but too much can cause serious health problems.
The best way to maintain brain health as you age is to focus on proven strategies: staying active, eating a balanced diet, managing stress, and keeping your mind engaged.
While this research is still in its early stages and was conducted in mice, it opens up fascinating possibilities for understanding how our bodies and brains are connected. Scientists may one day find ways to tap into this fat-brain communication system to slow cognitive decline and keep our minds sharper for longer.
The next time you pinch an inch around your middle, remember: there’s a conversation happening between your belly and your brain that science is just beginning to understand.
Paper Summary
How the Study Worked
The researchers used male mice of three different ages: 5 months (young adult), 10 months (middle-aged), and 18 months (elderly). They measured BDNF protein levels in the hippocampus using a test called ELISA that can detect specific proteins in tissue samples. They also measured CX3CL1 levels in visceral fat tissue using two methods: one that detects the RNA instructions for making the protein and another that detects the protein itself. To determine whether fat-derived CX3CL1 directly affects brain BDNF, they used a technique called RNA interference to reduce CX3CL1 production specifically in the belly fat of younger mice, then checked what happened to brain BDNF levels. They also injected CX3CL1 into older mice to see if it would restore their brain BDNF levels. To understand what regulates CX3CL1 production, they treated fat cells grown in the lab with different stress hormones. Finally, they measured levels and activity of the enzyme 11β-HSD1 in fat tissue from younger and older mice, and used RNA interference to reduce this enzyme in younger mice to see how it affected the fat-brain signaling system.
Results
The study uncovered several key findings. First, hippocampal BDNF levels were similar in 5-month-old and 10-month-old mice (about 300 pg BDNF/mg protein) but dropped by about one-third in 18-month-old mice (about 200 pg BDNF/mg protein). CX3CL1 levels in visceral fat showed a similar pattern, decreasing significantly in the oldest mice. When the researchers reduced CX3CL1 production in the belly fat of younger mice, their brain BDNF levels fell within days, similar to levels seen in naturally aged mice. On the flip side, a single injection of CX3CL1 into the abdominal cavity of older mice boosted their brain BDNF back up, confirming the connection between these proteins. The researchers also found that natural stress hormones (corticosterone in mice, cortisol in humans) increased CX3CL1 production in fat cells, while the enzyme 11β-HSD1 that activates these hormones was much less abundant in the fat tissue of older mice. When they reduced this enzyme in younger mice, both fat CX3CL1 and brain BDNF levels decreased, revealing another link in the signaling chain. Together, these results mapped out a communication pathway from belly fat to brain that becomes disrupted with age.
Limitations
While the study presents intriguing findings, several limitations should be kept in mind. The research used only male mice to avoid complications from female hormonal cycles, so we don’t know if the same patterns exist in females. The sample sizes were small, with most tests using just three mice per group. While this is common in basic science research, larger studies would strengthen confidence in the results. The researchers demonstrated connections between fat tissue signals and brain BDNF levels but didn’t directly test whether these changes affected the mice’s memory or cognitive abilities, though their previous work had shown that CX3CL1 injections improved recognition memory in aged mice. The study was also limited to specific ages in mice, and we don’t yet know how these findings might translate to humans across our much longer lifespan. Finally, the researchers used artificial RNA interference techniques to reduce CX3CL1 and enzyme levels for short periods—different from the gradual changes that occur during natural aging—which might affect how the results apply to real-world aging.
Discussion and Takeaways
This research reveals a previously unknown communication system between belly fat and the brain. Under normal conditions, stress hormones in the blood are activated by the enzyme 11β-HSD1 in visceral fat, which then produces CX3CL1. This fat-derived CX3CL1 signals through immune cells and the vagus nerve (a major nerve connecting internal organs to the brain) to maintain healthy BDNF levels in the hippocampus. As we age, reduced 11β-HSD1 in belly fat disrupts this signaling chain, contributing to lower brain BDNF and potentially to age-related memory problems. This discovery changes how we think about visceral fat, suggesting that while excess belly fat is harmful, healthy amounts serve important functions in supporting brain health. The findings also hint at future therapeutic possibilities—perhaps treatments could target components of this pathway to maintain brain function in aging. The researchers note that a careful balance is needed, as both too little 11β-HSD1 (associated with cognitive decline) and too much (linked to obesity and metabolic problems) appear harmful. For the average person concerned about brain health, this research underscores that the body works as an interconnected whole, with tissues we don’t typically associate with thinking—like fat—playing important roles in maintaining our cognitive abilities.
Funding and Disclosures
The study was supported by grants from the Japan Society for the Promotion of Science (JSPS KAKENHI). The lead researcher, Yoshinori Takei, and two colleagues received research funding through grants numbered 23K10878, 23K06148, and 24K14786. The researchers declared no competing interests, meaning they didn’t have financial or other relationships that might have influenced their research or how they reported it.
Publication Information
The paper “Adipose chemokine ligand CX3CL1 contributes to maintaining the hippocampal BDNF level, and the effect is attenuated in advanced age” was written by Yoshinori Takei, Yoko Amagase, Ai Goto, Ryuichi Kambayashi, Hiroko Izumi-Nakaseko, Akira Hirasawa, and Atsushi Sugiyama from Toho University and other Japanese institutions. It appeared in the journal GeroScience in February 2025, after being submitted in October 2024 and accepted for publication in January 2025. The paper can be accessed online at https://doi.org/10.1007/s11357-025-01546-4.
Before you start thinking about packing on belly fat for the sake of your brain, don’t! The researchers stress that balance is key. Too little belly fat and you lose the brain-protecting effects, but too much can cause serious health problems. The best way to support your brain as you age is to focus on proven strategies: staying active, eating a balanced diet, managing stress, and keeping your mind engaged.
While this research is still in its early stages and was conducted in mice, it opens up fascinating possibilities for understanding how our bodies and brains are connected. Scientists may one day find ways to tap into this fat-brain communication system to slow cognitive decline and keep our minds sharper for longer.
A woman experiencing hot flashes due to menopause (Photo by Pheelings media on Shutterstock)
Perimenopause—the transitional phase leading up to menopause—has long been considered a mid-life experience, typically affecting women in their late 40s. However, new research reveals that a significant number of women in their 30s are already experiencing perimenopausal symptoms severe enough to seek medical attention.
In a survey of 4,432 U.S. women, researchers from Flo Health and the University of Virginia found that more than half of those in the 30-35 age bracket reported moderate to severe menopause symptoms using the validated Menopause Rating Scale (MRS). Among those who consulted medical professionals about their symptoms, a quarter were diagnosed as perimenopausal. This challenges the assumption that perimenopause is primarily a concern for women approaching 50.
The findings, published in the journal npj Women’s Health, highlight a significant gap in healthcare awareness and support for women experiencing early-onset perimenopause.
Unrecognized Symptoms and Healthcare Gaps
“Physical and emotional symptoms associated with perimenopause are understudied and often dismissed by physicians. This research is important in order to more fully understand how common these symptoms are, their impact on women, and to raise awareness amongst physicians as well as the general public,” says study co-author Dr. Jennifer Payne, MD, an expert in reproductive psychiatry at UVA Health and the University of Virginia School of Medicine, in a statement.
Despite medical definitions being well established, public understanding remains muddled. Many people use “menopause” as a catch-all term for both perimenopause and post-menopause. This confusion contributes to women feeling unprepared and unsupported during this transition.
The journey through perimenopause varies. Some women experience a smooth 5-7 year transition with manageable symptoms, while others face a decade-long struggle with physical and psychological challenges that impact daily life.
Early vs. Late Perimenopause
“Perimenopause can be broadly split into early and late stages,” the researchers explained. Early perimenopause typically involves occasional missed periods or cycle irregularity, while late perimenopause features greater menstrual irregularity with longer periods without menstruation, ranging from 60 days to one year.
The study identified eight symptoms significantly associated with perimenopause:
Absence of periods for 12 months or 60 days
Hot flashes
Vaginal dryness
Pain during sexual intercourse
Recent cycle length irregularity
Heart palpitations
Frequent urination
While symptom severity generally increased with age, women in their 30s and early 40s still experienced significant symptom burden. Among 30-35-year-olds, 55.4% reported moderate or severe symptoms, increasing to 64.3% in women aged 36-40.
“We had a significant number of women who are typically thought to be too young for perimenopause tell us that they have high levels of perimenopause-related symptoms,” said Liudmila Zhaunova, PhD, director of science at Flo. “It’s important that we keep doing research to understand better what is happening with these women so that they can get the care they need.”
Psychological vs. Physical Symptoms With Menopause
The study revealed patterns in symptom presentation across different perimenopause stages. Psychological symptoms—such as anxiety, depression, and irritability—tend to appear first, peaking among women ages 41-45 before declining. Physical problems, including sexual dysfunction, bladder issues, and vaginal dryness, peaked in women 51 and older. Classic menopause symptoms like hot flashes and night sweats were most prevalent between ages 51-55 and were least common among younger women.
These findings suggest that perimenopause follows a predictable symptom progression, with mood changes and cognitive issues appearing first, followed by more recognized physical symptoms in later stages.
Delayed Medical Attention
Despite high symptom burden, younger women are far less likely to seek medical help for perimenopause. The study found that while 51.5% of women over 56 consulted a doctor, only 4.3% of 30-35-year-olds did. However, among those who sought medical advice, over a quarter of 30-35-year-olds and 40% of 36-40-year-olds were diagnosed as perimenopausal.
The study used the Menopause Rating Scale (MRS), a validated tool that measures symptom severity across three domains: psychological symptoms, somato-vegetative symptoms (including hot flashes and sleep problems), and urogenital symptoms. While MRS scores were highest in the 51-55 age group, younger women still reported a significant symptom burden.
Implications for Healthcare and Awareness
“This study is important because it plots a trajectory of perimenopausal symptoms that tells us what symptoms we can expect when and alerts us to the fact that women are experiencing perimenopausal symptoms earlier than we expected,” Payne said.
These findings underscore the need for earlier education and support. Women in their 30s and early 40s may not recognize symptoms like irregular cycles, mood changes, and sleep disturbances as signs of perimenopause, leading to misdiagnosis or missed opportunities for treatment. This research calls for healthcare providers to adopt a more age-inclusive approach when evaluating these symptoms.
Additionally, the variability of perimenopause means a one-size-fits-all approach to management is inadequate. Psychological symptoms may dominate early perimenopause, while vasomotor and urogenital symptoms become more pronounced in later stages. Understanding these transitions can help tailor treatment strategies for individual needs.
When you toss and turn all night, your immune system takes notice – and not in a good way. New research reveals that sleep deprivation doesn’t just leave you groggy and irritable; it actually transforms specific immune cells in your bloodstream, potentially fueling chronic inflammation throughout your body.
The study, published in The Journal of Immunology, finds a direct link between poor sleep quality and significant changes in specialized immune cells called monocytes. These altered cells appear to drive widespread inflammation – the same type of inflammation associated with obesity and numerous chronic diseases.
The research, conducted by scientists at Kuwait’s Dasman Diabetes Institute, demonstrates how sleep deprivation triggers an increase in inflammatory “nonclassical monocytes” (NCMs) – immune cells that amplify inflammation. More remarkably, these changes occurred regardless of a person’s weight, suggesting that even lean, healthy individuals may face inflammatory consequences from poor sleep.
Study authors examined three factors increasingly recognized as critical determinants of overall health: sleep, body weight, and inflammation. Though previous research established connections between obesity and poor sleep, this study goes further by identifying specific immune mechanisms that may explain how sleep disruption contributes to chronic inflammatory conditions.
“Our findings underscore a growing public health challenge. Advancements in technology, prolonged screen time, and shifting societal norms are increasingly disruptive to regular sleeping hours. This disruption in sleep has profound implications for immune health and overall well-being,” said Dr. Fatema Al-Rashed, who led the study, in a statement.
How the study worked
The research team recruited 237 healthy Kuwaiti adults across a spectrum of body weights and carefully monitored their sleep patterns using advanced wearable activity trackers. Participants were fitted with ActiGraph GT3X+ devices for seven consecutive days, providing objective data on sleep efficiency, duration, and disruptions. Meanwhile, blood samples revealed striking differences in immune cell populations and inflammatory markers across weight categories.
Obese participants demonstrated significantly lower sleep quality compared to their lean counterparts, along with elevated levels of inflammatory markers. Most notably, researchers observed marked differences in monocyte subpopulations across weight categories. Obese individuals showed decreased levels of “classical” monocytes (which primarily perform routine surveillance) and increased levels of “nonclassical” monocytes – cells known to secrete inflammatory compounds.
The study’s most compelling finding emerged when researchers discovered that poor sleep quality correlated with increased nonclassical monocytes regardless of body weight. Even lean participants who experienced sleep disruption showed elevated NCM levels, suggesting that sleep deprivation itself – independent of obesity – may trigger inflammatory responses.
To further test this hypothesis, researchers conducted a controlled experiment with five lean, healthy individuals who underwent 24 hours of complete sleep deprivation. The results were striking: after just one night without sleep, participants showed significant increases in inflammatory nonclassical monocytes. These changes mirrored the immune profiles seen in obese participants, supporting the role of sleep health in modulating inflammation. Even more remarkably, these alterations reversed when participants resumed normal sleep patterns, demonstrating the body’s ability to recover from short-term sleep disruption.
‘Sleep quality matters as much as quantity’
These findings highlight sleep’s crucial role in immune regulation and suggest that chronic sleep deprivation may contribute to inflammation-driven health problems even in individuals without obesity. The research points to a potential vicious cycle: obesity disrupts sleep, sleep disruption alters immune function, and altered immune function exacerbates inflammation associated with obesity and related conditions.
Modern life often treats sleep as a luxury rather than a necessity. We sacrifice rest for productivity, entertainment, or simply because our environments and schedules make quality sleep difficult to achieve. This study adds to mounting evidence that such trade-offs may have serious long-term health consequences.
For most adults, the National Sleep Foundation recommends 7-9 hours of sleep per night. Study participants averaged approximately 7.8 hours (466.7 minutes) of sleep nightly, but importantly, the research suggests that sleep quality matters as much as quantity. Disruptions, awakenings, and reduced sleep efficiency all appeared to influence immune function, even when total sleep duration seemed adequate.
Sleep efficiency – the percentage of time in bed actually spent sleeping – averaged 91.4% among study participants but was significantly lower in obese individuals. Those with higher body weights also experienced more “wake after sleep onset” (WASO) periods, indicating fragmented sleep patterns that may contribute to immune dysregulation.
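To make those two actigraphy metrics concrete, here is a minimal sketch of how sleep efficiency and WASO are typically calculated; it is an illustration using hypothetical numbers, not code or data from the study.

```python
# Illustrative only: hypothetical actigraphy numbers, not the study's data.

def sleep_efficiency(total_sleep_min: float, time_in_bed_min: float) -> float:
    """Percentage of time in bed actually spent asleep."""
    return 100.0 * total_sleep_min / time_in_bed_min

def wake_after_sleep_onset(time_in_bed_min: float,
                           sleep_onset_latency_min: float,
                           total_sleep_min: float) -> float:
    """Minutes spent awake after first falling asleep (WASO)."""
    return time_in_bed_min - sleep_onset_latency_min - total_sleep_min

if __name__ == "__main__":
    # Hypothetical night: 510 min in bed, 15 min to fall asleep, 467 min asleep.
    tib, sol, tst = 510.0, 15.0, 467.0
    print(f"Sleep efficiency: {sleep_efficiency(tst, tib):.1f}%")    # ~91.6%
    print(f"WASO: {wake_after_sleep_onset(tib, sol, tst):.0f} min")  # 28 min
```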
How sleep impacts inflammation
The study also revealed intriguing connections between specific inflammatory markers and monocyte subpopulations. Nonclassical monocytes showed positive correlations with multiple inflammatory compounds, including TNF-α and MCP-1 – molecules previously linked to sleep regulation. This suggests that sleep disruption may initiate a cascade of inflammatory signals throughout the body, potentially contributing to various health problems.
While obesity emerged as a significant factor in driving inflammation, mediation analyses revealed that sleep disruption independently contributes to inflammation regardless of weight status. This finding challenges simplistic views of obesity as the primary driver of inflammation and highlights sleep’s importance as a modifiable risk factor for inflammatory conditions.
The implications extend beyond obesity-related concerns. Sleep disruption has been associated with numerous health problems, including cardiovascular disease, diabetes, and mental health disorders. This research provides potential mechanisms explaining these connections and suggests that improving sleep quality could reduce inflammation and associated risks.
Monocytes, crucial components of the innate immune system, patrol the bloodstream looking for signs of trouble. They differentiate into three main types: classical monocytes (which primarily perform surveillance), intermediate monocytes (which excel at presenting antigens and activating other immune cells), and nonclassical monocytes (which specialize in patrolling blood vessels and producing inflammatory compounds).
In healthy individuals, these monocyte populations maintain a careful balance. Sleep disruption appears to tip this balance toward inflammatory nonclassical monocytes, potentially contributing to a state of chronic low-grade inflammation throughout the body.
Is lack of quality sleep becoming a public health crisis?
This research provides compelling evidence that sleep quality deserves serious attention as a public health concern. The study suggests that even temporary sleep disruption can alter immune function, while chronic sleep problems may contribute to persistent inflammation – a condition increasingly recognized as a driver of numerous diseases.
For individuals struggling with obesity or inflammatory conditions, addressing sleep quality may provide additional benefits beyond traditional interventions focused on diet and exercise. The research also highlights potential concerns for shift workers, parents of young children, and others who regularly experience disrupted sleep patterns.
Healthcare providers may need to consider sleep quality as a critical factor when evaluating and treating patients with inflammatory conditions. Similarly, public health initiatives addressing obesity and related disorders might benefit from incorporating sleep improvement strategies alongside dietary and exercise recommendations.
The researchers are now planning to explore in greater detail the mechanisms linking sleep deprivation to immune changes. They also want to investigate whether interventions such as structured sleep therapies or technology-use guidelines can reverse these immune alterations.
“In the long term, we aim for this research to drive policies and strategies that recognize the critical role of sleep in public health,” said Dr. Al-Rashed. “We envision workplace reforms and educational campaigns promoting better sleep practices, particularly for populations at risk of sleep disruption due to technological and occupational demands. Ultimately, this could help mitigate the burden of inflammatory diseases like obesity, diabetes, and cardiovascular diseases.”
Could adding grapes to your daily diet help maintain muscle strength and health as you age? A new mouse model study suggests these antioxidant-rich fruits might help reshape muscle composition, particularly in women, as they enter their later years.
Published in the journal Foods, this investigation — partially funded by the California Table Grape Commission — tracked 480 mice over two and a half years, examining how grape consumption affects muscle gene expression at a fundamental level. The findings highlight how something as simple as adding grapes to our daily diet might help support muscle health during aging.
Muscle loss affects millions of older adults worldwide, with 10-16% of elderly individuals experiencing sarcopenia—the progressive deterioration of muscle mass and function that comes with age. Women often face greater challenges maintaining muscle mass, particularly after menopause, making this research especially relevant for aging females.
Researchers from several U.S. universities discovered that consuming an amount of grapes equivalent to two human servings daily led to notable changes in muscle-related gene expression. While both males and females showed genetic shifts, the effects were particularly pronounced in females, whose gene activity patterns began shifting toward those typically observed in males.
This convergence occurred at the genetic level, where researchers identified 25 key genes affected by grape consumption. Some genes associated with lean muscle mass increased their activity, while others linked to muscle degeneration showed decreased expression.
What makes grapes so special? The fruit contains over 1,600 natural compounds that work together in complex ways. Rather than any single component being responsible for the benefits, it’s likely the combination of these compounds that produces such significant effects.
“This study provides compelling evidence that grapes have the potential to enhance muscle health at the genetic level,” says Dr. John Pezzuto, senior investigator of the study and professor and dean of pharmacy and health sciences at Western New England University, in a statement. “Given their safety profile and widespread availability, it will be exciting to explore how quickly these changes can be observed in human trials.”
Proper muscle function plays a crucial role in everyday activities, from maintaining balance to supporting bone health and regulating metabolism. The potential to help maintain muscle health through dietary intervention could significantly impact quality of life for aging adults.
The research adds to a growing body of evidence supporting grapes’ health benefits. Previous studies have shown positive effects on heart health, kidney function, skin protection, vision, and digestive health. This new understanding of grapes’ influence on muscle gene expression opens another avenue for potential therapeutic applications.
While the physical appearance and weight of muscles didn’t change significantly between groups, the underlying genetic activity showed marked differences. This suggests that grapes might influence muscle health at a fundamental cellular level, even before measurable functional changes occur—though further research is needed to confirm these effects.
For older adults concerned about maintaining their strength and independence, these findings suggest that a daily bowl of grapes, in addition to regular exercise, just might offer another tool in the healthy aging toolkit. However, the researchers emphasize that human studies are still needed to confirm these effects.
About a fourth of people don’t remember their dreams. (Roman Samborskyi/Shutterstock)
What were you dreaming about last night? For roughly one in four people, that question draws a blank. For others, the answer comes easily, complete with vivid details about flying through clouds or showing up unprepared for an exam. This stark contrast in dream recall ability has baffled researchers for decades, but a new study reveals there’s more to remembering dreams than pure chance.
From March 2020 to March 2024, scientists from multiple Italian research institutions conducted a sweeping investigation to uncover what determines dream recall. Published in Communications Psychology, their research surpassed typical dream studies by combining detailed sleep monitoring, cognitive testing, and brain activity measurements. The study involved 217 healthy adults between ages 18 and 70, who did far more than simply keep dream journals; they underwent brain tests, wore sleep-tracking wristbands, and some even had their brain activity monitored throughout the night.
Understanding dream recall has long puzzled researchers. Early studies in the 1950s focused mainly on REM sleep, the sleep stage characterized by rapid eye movements and vivid dreams. Scientists initially thought they had solved the mystery of dreaming by linking it exclusively to REM sleep. However, later research revealed that people also dream during non-REM sleep stages, though these dreams tend to be less vivid and harder to remember.
According to researchers at the IMT School for Advanced Studies Lucca, three main factors emerged as strong predictors of dream recall: a person’s general attitude toward dreaming, their tendency to let their mind wander during waking hours, and their typical sleep patterns.
To measure attitudes about dreaming, participants completed a questionnaire rating how strongly they agreed or disagreed with statements like “dreams are a good way of learning about my true feelings” versus “dreams are random nonsense from the brain.” People who viewed dreams as meaningful and worthy of attention were more likely to remember them compared to those who dismissed dreams as meaningless brain static.
Mind wandering proved to be another crucial factor. Using a standardized questionnaire that measures how often people’s thoughts drift away from their current task, researchers found that participants who frequently caught themselves daydreaming or engaging in spontaneous thoughts during the day were more likely to recall their dreams. This connection makes sense considering both daydreaming and dreaming involve similar brain networks, particularly regions associated with self-reflection and creating internal mental experiences.
The relationship between daydreaming and dream recall points to an intriguing possibility: people who spend more time engaged in spontaneous mental activity during the day may be better equipped to generate and remember dreams at night. Both activities involve creating mental experiences disconnected from the immediate external environment.
Sleep patterns mattered as well: people who typically had longer periods of lighter sleep with less deep sleep (technically called N3 sleep) were better at remembering their dreams. During deep sleep, the brain produces large, slow waves that help consolidate memories but may make it harder to generate or remember dreams. In contrast, lighter sleep stages maintain brain activity patterns more similar to wakefulness, potentially making it easier to form and store dream memories.
Age was also a factor in dream recall. While younger participants were generally better at remembering specific dream content, older individuals more frequently reported “white dreams,” those frustrating experiences where you wake up knowing you definitely had a dream but can’t remember anything specific about it. This age-related pattern suggests that the way our brains process and store dream memories may change as we get older.
The researchers also discovered that dream recall fluctuates seasonally, with people remembering fewer dreams during winter months compared to spring and autumn. While the exact reason remains unclear, this pattern wasn’t explained by changes in sleep habits across seasons. One possibility is that seasonal variations in light exposure affect brain chemistry in ways that influence dream formation or recall.
Rather than relying on written dream journals, participants used voice recorders each morning to describe everything that was going through their minds just before waking up. This approach reduced the effort required to record dreams and minimized the chance that the act of recording would interfere with the memory of the dream itself.
Throughout the study period, participants wore wristwatch-like devices called actigraphs that track movement patterns to measure sleep quality, duration, and timing. A subset of 50 participants also wore special headbands equipped with electrodes to record their brain activity during sleep. This comprehensive approach allowed researchers to connect dream recall with objective measures of how people were actually sleeping, not just how they thought they slept.
“Our findings suggest that dream recall is not just a matter of chance but a reflection of how personal attitudes, cognitive traits, and sleep dynamics interact,” says lead author Giulio Bernardi, professor in general psychology at the IMT School, in a statement. “These insights not only deepen our understanding of the mechanisms behind dreaming but also have implications for exploring dreams’ role in mental health and in the study of human consciousness.”
The study authors plan to use these findings as a reference for future research, particularly in clinical settings. Further investigations could explore the diagnostic and prognostic value of dream patterns, potentially improving our understanding of how dreams relate to mental health and neurological conditions.
Understanding dream recall could provide insights into how the brain processes and stores memories during sleep. Dreams appear to draw upon our previous experiences and memories while potentially playing a role in emotional processing and memory consolidation. Changes in dream patterns or recall ability might serve as early indicators of neurological or psychiatric conditions.
New research reveals a surprisingly simple way to improve mental health and focus: turn off your phone’s internet. A month-long study found that blocking mobile internet access for just two weeks led to measurable improvements in well-being, mental health, and attention, with gains comparable to the effects of cognitive behavioral therapy and to reversing a decade of age-related cognitive decline.
Researchers from multiple universities across the U.S. and Canada worked with 467 iPhone users (average age 32) to test how removing constant internet access would affect their daily lives. Instead of asking people to give up their phones completely, the study took a more practical approach. Participants installed an app that blocked mobile internet while still allowing calls and texts. This way, phones remained useful for basic communication but lost their ability to provide endless scrolling, social media, and constant online access.
The average smartphone user now spends nearly 5 hours each day on their device. More than half of Americans with smartphones worry they use them too much, and this jumps to 80% for people under 30. Despite these concerns, few studies have actually tested what happens when people cut back.
The results were significant. After two weeks without mobile internet, participants showed clear improvements in multiple areas. They reported feeling happier and more satisfied with their lives, and their mental health improved—an effect size that was greater than what is typically seen with antidepressant medications in clinical trials. They also performed better on attention tests, showing improvements comparable to reversing 10 years of age-related cognitive decline.
To measure attention, participants completed a computer task that tested their ability to stay focused over time. The improvements were meaningful—similar in size to the difference between an average adult and someone with mild attention difficulties. This suggests that constant mobile internet access may impair our natural ability to focus.
The study design was particularly strong because it included a swap halfway through. After the first two weeks, the groups switched roles—people who had blocked mobile internet got access back, while the other group had to block their internet. This strengthened the evidence that the improvements were caused by reduced mobile internet access rather than other factors.
“Smartphones have drastically changed our lives and behaviors over the past 15 years, but our basic human psychology remains the same,” says lead author Adrian Ward, an associate professor of marketing at the University of Texas at Austin, in a statement. “Our big question was, are we adapted to deal with constant connection to everything all the time? The data suggest that we are not.”
An impressive 91% of participants improved in at least one area. Without the ability to check their phones constantly, people spent more time socializing in person, exercising, and being outdoors—activities known to boost mental health and cognitive function.
Throughout the study, researchers checked in with participants via text messages to track their moods. Those who blocked mobile internet reported feeling progressively better over the two weeks. Even after regaining internet access, many retained some of their improvements, suggesting the break helped reshape their digital habits.
Interestingly, the benefits weren’t just from less screen time. While phone use dropped significantly during the study (from over 5 hours to under 3 hours daily), the improvements appeared linked specifically to breaking the habit of constant online connection. Even after getting internet access back, many participants kept their usage lower and continued feeling better.
One surprising finding involved people who started the study with a high “fear of missing out” (FOMO). Rather than making their anxiety worse, disconnecting from mobile internet led to the biggest improvements in their well-being. This suggests that constant access to social media and online updates may fuel digital anxiety rather than relieve it.
Blocking mobile internet also helped participants feel more in control of their behavior and improved their sleep. Without instant access to endless entertainment and social media, people reported having better control over their attention and averaged about 17 more minutes of sleep per night.
However, sticking to the program was difficult—only about 25% of participants kept their mobile internet blocked for the full two weeks. This highlights how dependent many of us have become on constant connectivity. Still, even those who didn’t fully adhere to the program showed improvements, suggesting that simply reducing mobile internet use can be beneficial.
The researchers noted that a less extreme approach might work better for most people. Instead of blocking all mobile internet, limiting access during certain times or restricting specific apps could provide similar benefits while being easier to maintain.
The takeaway is simple: reducing mobile internet access—even temporarily—can help improve well-being, mental health, and focus. While not everyone is ready to disconnect completely, finding ways to limit our online exposure could make us happier, healthier, and more present in our daily lives.
It’s no surprise that our mental acuity and mood wax and wane during the day, but it may be surprising that most of us seem to be morning people.
In a study at University College London, researchers analyzed data collected from a dozen surveys of 49,218 respondents between March 2020 and March 2022. According to the report, recently published in the journal BMJ Mental Health, the data showed a trend of people reporting better mental health and wellbeing early in the day: greater life satisfaction, increased happiness, less severe depressive symptoms, and a greater sense of self-worth. People felt worst around midnight. Mental health and mood were more variable on weekends, while loneliness was more stable throughout the week.
Dr. Feifei Bu, principal research fellow in statistics and epidemiology at University College London, said in an email to CNN, “Our study suggests that people’s mental health and wellbeing could fluctuate over time of day. On average people seem to feel best early in the day and worst late at night.”
Research Limitations
Even though the study found an association between mornings and better mood, life satisfaction, and self-worth, there may be factors affecting the results that were not apparent in the research, Dr. Bu says.
How people were feeling may have affected when they filled out the surveys. As with most research, the findings need to be replicated. Studies need to be designed to adjust for or eliminate confounding variables, isolating specific questions as much as possible.
In addition, although mental health and well-being are associated, they are not the same thing. Well-being is a complex medley of mental, emotional, physical, cognitive, psychological, and spiritual factors. According to the World Health Organization, well-being is a positive state determined by social, economic, and environmental conditions that include quality of life and a sense of meaning and purpose.
Mental health is a significant contributor to well-being, but they don’t entirely overlap. Many people with mental health issues also enjoy what they describe as a good quality of life.
Also, while many reported feeling better in the morning, better is relative. When someone feels better in the morning, that doesn’t necessarily mean that they feel good.
In addition, mood is a temporary state; mental health and well-being are more stable conditions.
Do hard work when it’s best for you
Do these results mean you should confront problems or do your hardest work first thing in the morning? Or that you shouldn’t problem-solve in the evening, and should instead go to bed and tackle your issues in the morning? Not all research agrees, but more evidence points to late morning as the most productive time for problem-solving. Studies suggest that mood is more stable in the late morning, making it easier to confront demanding matters with a cool head and less emotional influence.
Cortisol, an important body-regulating hormone that your adrenal glands produce and release, has a daily rhythm of highs and lows. It can also be secreted in bursts in response to stress. Cortisol tends to be lower in the midafternoon. This time is also associated with dips in mood and “decision fatigue.”
Intermittent fasting has become one of the most popular eating patterns of the past decade. The practice, which involves cycling between periods of eating and fasting, has been praised for its potential health benefits. But a new mouse model study suggests that age plays a crucial role in how the body responds to fasting — and for young individuals, it might do more harm than good.
A team of German researchers recently discovered that while intermittent fasting improved health markers in older mice, it actually impaired important cellular development in younger ones. Their findings, published in Cell Reports, raise important questions about who should (and shouldn’t) try this trending eating pattern.
Inside our bodies, specialized cells in the pancreas produce insulin, a hormone that helps control blood sugar levels. These cells, called beta cells, are particularly important during youth when the body is still developing. The researchers found that in young mice, long-term intermittent fasting disrupted how these cells grew and functioned.
“Our study confirms that intermittent fasting is beneficial for adults, but it might come with risks for children and teenagers,” says Stephan Herzig, a professor at Technical University of Munich and director of the Institute for Diabetes and Cancer at Helmholtz Munich, in a statement.
The study looked at three groups of mice: young (equivalent to adolescence in humans), middle-aged (adult), and elderly. Each group followed an eating pattern where they fasted for 24 hours, followed by 48 hours of normal eating. The researchers tracked how this affected their bodies over both short periods (5 weeks) and longer periods (10 weeks).
At first, all age groups showed improvements in how their bodies handled sugar, which, of course, is a positive sign. But after extended periods of intermittent fasting, significant differences emerged between age groups. While older and middle-aged mice continued to show benefits, the young mice began showing troubling changes.
The pancreatic cells in young mice became less effective at producing insulin, and they weren’t maturing properly. Even more concerning, these cellular changes resembled patterns typically seen in Type 1 diabetes, a condition that usually develops in childhood or adolescence.
“Intermittent fasting is usually thought to benefit beta cells, so we were surprised to find that young mice produced less insulin after the extended fasting,” explains co-lead author Leonardo Matta, from Helmholtz Munich.
The older mice, however, actually benefited from the extended fasting periods. Their insulin-producing cells worked better, and they showed improved blood sugar control. Middle-aged mice maintained stable function, suggesting that mature bodies handle fasting periods differently than developing ones.
This age-dependent response challenges the common belief that intermittent fasting is suitable for everyone. The research suggests that while mature adults might benefit from this eating pattern, young people could be putting themselves at risk, particularly if they maintain the practice for extended periods.
The findings are especially relevant given how popular intermittent fasting has become among young people looking to manage their weight. While short-term fasting appeared safe across all age groups, the long-term effects on young practitioners could be significant.
“The next step is digging deeper into the molecular mechanisms underlying these observations,” says Herzig. “If we better understand how to promote healthy beta cell development, it will open new avenues for treating diabetes by restoring insulin production.”
Despite the attention they receive from athletes and wellness influencers, popular dietary trends aren’t one-size-fits-all. What works for adults might not be appropriate for growing bodies, which is all the more reason why understanding these age-related differences is so important.