Here’s What Happened After Healthy Eaters Switched to a Western Diet for 2 Weeks

Ultra-processed foods dominate much of what Americans eat. (Rimma Bondarenko/Shutterstock)

Two weeks of burgers and fries might do more damage than you think. A new study shows that men who switched from traditional African diets to Western foods for just 14 days experienced alarming increases in inflammation and immune dysfunction. The changes lingered for weeks after returning to their normal diets.

The study, published in Nature Medicine, demonstrates how quickly the body’s immune and metabolic systems respond to dietary shifts. Its findings raise concerns about the widespread abandonment of heritage diets in favor of processed Western foods.

The Experiment: Switching Diets in Tanzania

Researchers from Tanzania’s Kilimanjaro Christian Medical University College collaborated with scientists from Radboud University Medical Center in the Netherlands to conduct this dietary experiment. They worked with 77 healthy young men from northern Tanzania, some from rural areas who typically ate traditional Kilimanjaro diets and others from urban areas who consumed more Western-style foods.

For two weeks, the rural participants switched to a Western diet high in processed foods, while urban participants adopted a traditional heritage diet. A third group kept eating their usual Western diet but added daily consumption of Mbege, a traditional fermented banana beverage, for one week.

The men who switched from their traditional diet to Western foods gained an average of about 5.7 pounds. Their blood tests showed rising levels of inflammatory markers and metabolic changes linked to disease risk. More concerning, their immune cells became less responsive to microbial challenges, essentially making their immune systems temporarily less effective.

Many of these negative changes persisted even four weeks after participants returned to their normal diets, indicating that even short periods of dietary change might have lasting effects.

On the flip side, urban dwellers who temporarily switched to the traditional Kilimanjaro diet experienced mostly positive changes. Their blood showed decreasing levels of inflammatory proteins and beneficial metabolic shifts. Those who drank the fermented banana beverage also showed anti-inflammatory benefits.

What Makes These Diets Different?

The Kilimanjaro heritage diet typically includes green vegetables, legumes such as kidney beans, and starchy staples and grains like plantains, cassava, taro, millet, and sorghum. These foods provide abundant fiber and plant compounds with known health benefits.

The Western diet featured foods like beef sausage, white bread with margarine, French fries, chicken stew with white rice, and processed maize porridge with added sugar.

The findings speak to the global nutrition transition, in which traditional diets are giving way to Western-style eating patterns. While most nutrition research focuses on Western populations, this study examines how dietary changes affect people in sub-Saharan Africa, a region experiencing rising rates of chronic diseases like heart disease and diabetes.

At the genetic level, those eating the Western diet showed increased activity of genes related to inflammation and decreased activity of genes involved in immune function. Their blood samples also revealed changes in white blood cell counts and activation patterns indicating increased inflammation.

Traditional African diets are being steadily displaced by Western-style eating habits, driven by factors like urban growth, economic shifts, wider availability of processed foods, globalization, and evolving cultural norms.

Bottom Line: Diet Influences Inflammation

This rapid dietary shift occurring across developing regions might help explain the rising epidemic of noncommunicable diseases worldwide. Chronic inflammation, which can persist at low levels for years without obvious symptoms, damages tissues and organs over time. The study reveals how quickly inflammatory processes can be triggered by dietary changes, pointing to a potential mechanism for how Western diets increase disease risk.

What about the group that consumed the fermented beverage? After just one week of consuming Mbege, participants showed reduced inflammatory markers and increased production of anti-inflammatory compounds. This also supports growing research interest in fermented foods for gut health and immune regulation.

For those living in Western countries, the results add more evidence that incorporating more elements from plant-rich, minimally processed dietary patterns might help reduce inflammatory burden. The Mediterranean diet, which shares many characteristics with the Kilimanjaro heritage diet (emphasis on plant foods, whole grains, limited processed foods), has similarly been linked to reduced inflammation and chronic disease risk.

Even short-term exposure to a Western diet can trigger inflammation that might increase disease risk over time. Traditional food systems face increasing pressure from globalization, but preserving valuable dietary traditions may help combat the rising global epidemic of chronic diseases.

Source: https://studyfinds.org/western-diet-inflammation-two-weeks/

Too Old To Lift? Nonsense! Why Your Aging Muscles Are Tougher Than You Think

(© Halfpoint – stock.adobe.com)

Conventional wisdom has long suggested that as we age, our bodies become more fragile and take longer to bounce back from physical stress. But what if that’s not entirely true? Research challenges this notion with surprising evidence that older adults may not experience worse exercise-induced muscle damage than their younger counterparts.

The new findings could change how older adults approach physical activity by removing a significant psychological barrier that has kept many from engaging in beneficial exercise regimens. In other words, millions of people may have been holding back from working out as they age because of fears rooted in unsubstantiated beliefs.

Challenging Beliefs About Muscle Aging and Recovery

For years, the scientific community theorized that aging bodies would struggle more with exercise recovery. The reasoning seemed sound: older adults typically show decreased muscle protein synthesis (the body’s ability to build new muscle), fewer satellite cells (essential for muscle repair), and reduced ability for those cells to multiply. These factors should logically result in greater muscle damage and slower recovery times.

However, the data from 36 studies tells a markedly different story. The international research team, spearheaded by scientists from Cardiff Metropolitan University, conducted a thorough review comparing exercise recovery between different adult age groups.

When researchers measured muscle function changes after exercise – a key indicator of how well muscles perform after being stressed – they found no meaningful differences between younger and older participants. This crucial performance metric remained similar between age groups at 24, 48, and 72 hours post-exercise, as well as in measurements of peak changes.

More surprisingly, older adults consistently reported less muscle soreness than younger participants at all measured time points. This pattern held steady across multiple studies and contradicts what most exercise physiologists would have predicted.

Similarly, creatine kinase levels – an enzyme that appears in the bloodstream when muscle membranes are damaged – were lower in older adults compared to younger adults at 24 hours post-exercise and at peak measurements.

Why Older Muscles Might Be More Resilient

The researchers proposed several explanations for these unexpected findings.

One theory involves the physical changes that happen in muscle and connective tissue with age. As we get older, our skeletal muscles contain more collagen, which can stiffen both muscle and connective tissue. Similar to how muscles adapt to repeated exercise, aging might cause mechanical changes that increase muscle stiffness, offering protection against structural damage by distributing physical stress more evenly during workouts.

Fatigue responses may also play a role. Research has shown that older adults typically experience greater muscular fatigue during dynamic movements. Since all studies in this analysis used dynamic contractions to cause muscle damage, older adults may have experienced a reduced absolute workload compared to younger participants, despite working at the same relative intensity. This lower absolute load might result in less tissue damage.

The research team also examined whether factors like sex, body part exercised, or exercise type influenced the results. Sex did appear to play a role in muscle function responses, with similar numerical differences between age groups for both males and females, but only the male comparison reached statistical significance. This hints that age may affect male muscle responses differently than female responses, though the researchers note fewer studies focused on women, which may have affected this finding.

The Struggle Isn’t Real

Exercise-induced muscle damage can discourage people from sticking with physical activity programs, particularly older adults who might view the discomfort as harmful rather than as part of the adaptation process. Now, knowing that aging doesn’t necessarily increase vulnerability to muscle damage could help overcome this mental barrier.

Perhaps most important, the research offers encouragement for aging individuals to stay active. With global population trends pointing toward an increasingly older demographic – the number of people over 60 is expected to double by 2050 and triple by 2100 – understanding how aging affects exercise responses becomes increasingly vital to public health.

Physical activity remains fundamental to “successful aging,” which includes physical, psychological, social, and cognitive aspects. Regular exercise can offset age-related declines in muscle strength and power, aerobic fitness, and body composition. This new understanding that older adults may not face excessive muscle damage removes a significant obstacle to activity.

For the average older adult considering starting or continuing an exercise program, this research delivers a clear message: your age should not hold you back. Your muscles may actually handle exercise stress better than previously thought, and the benefits of regular physical activity far outweigh the temporary discomfort of muscle damage. Be sure to speak with your doctor first before taking on any new physical challenges.

Source: https://studyfinds.org/aging-muscles-are-tougher-than-you-think-weightlifting/

Screen Time in Bed Raises Insomnia Risk by 59% Per Hour

(© Point of view – stock.adobe.com)

Using a smartphone or tablet for just one hour after going to bed raises the risk of insomnia by 59%, according to new research. This finding comes from one of the largest studies conducted on screen use and sleep among university students, highlighting how our nightly digital habits may be robbing us of crucial rest.

Researchers from the Norwegian Institute of Public Health examined data from over 45,000 university students and found that each additional hour spent using screens after going to bed not only significantly increased insomnia risk but also cut sleep duration by about 24 minutes. What’s particularly notable is how consistent this effect appears to be—regardless of whether students were scrolling through social media, watching movies, or gaming.
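To see how those two per-hour figures add up, here is a minimal back-of-envelope sketch in Python. It assumes the 59% odds increase compounds multiplicatively per hour and the 24-minute loss adds up linearly, which is how such regression estimates are typically interpreted; it is an illustration, not a calculation from the study itself.

```python
# Back-of-envelope illustration, not a calculation from the study itself.
# Assumes the reported per-hour odds ratio (1.59) compounds multiplicatively
# and the per-hour sleep loss (24 minutes) adds up linearly.

ODDS_RATIO_PER_HOUR = 1.59    # 59% higher insomnia odds per extra hour
SLEEP_LOSS_MIN_PER_HOUR = 24  # minutes of sleep lost per extra hour

for hours in (1, 2, 3):
    odds_multiplier = ODDS_RATIO_PER_HOUR ** hours
    sleep_lost = SLEEP_LOSS_MIN_PER_HOUR * hours
    print(f"{hours} h of in-bed screen time: "
          f"~{odds_multiplier:.2f}x insomnia odds, ~{sleep_lost} min less sleep")
```

On those assumptions, three hours of in-bed screen time would correspond to roughly four times the insomnia odds and more than an hour of lost sleep.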

The Digital Bedtime Crisis

Sleep problems have reached concerning levels among university students globally. The study reports that about 30% of Norwegian students sleep less than the recommended 7-9 hours per night. Even more troubling, over 20% of male students and 34% of female students report sleep issues meeting clinical criteria for insomnia, numbers that have been rising in recent years.

Smartphones have transformed our bedrooms into entertainment centers. Previous research shows that over 95% of students use screens in bed, with an average screen time after going to bed of 46 minutes. Some studies have even found that 12% of young adults engage with their smartphones during periods they’ve self-reported as sleep time.

Many sleep experts have speculated that social media might be especially harmful for sleep compared to more passive activities like watching television. The reasoning seems logical – social media platforms are designed to keep users engaged through interactions, notifications, and endless scrolling features that make it difficult to disconnect. Plus, the social obligations and fear of missing out associated with platforms like Instagram and TikTok might make users more reluctant to put their devices away at bedtime.

Surprising Findings Challenge Common Beliefs

Researchers divided participants into three groups: those who exclusively used social media in bed (about 15% of the sample), those who used social media combined with other screen activities (69%), and those who engaged in non-social media screen activities only (15%).

Contrary to expectations, students who exclusively used social media in bed reported fewer insomnia symptoms and longer sleep duration compared to the other groups. The non-social media group experienced the highest rates of insomnia and shortest sleep duration, while those mixing social media with other activities were intermediate.

This unexpected outcome challenges the notion that social media is uniquely harmful to sleep. Instead, the research points to the total time spent on screens in bed, regardless of the specific activity, as the strongest predictor of sleep problems. Each additional hour of screen time after going to bed was consistently associated with poorer sleep outcomes across all three groups.

Why might social media-only users sleep better? Researchers propose that exclusively using social media might reflect a preference for socializing and maintaining connections with others, which generally protects against sleep problems. Being socially engaged has been linked to better sleep in numerous studies.

Alternatively, those experiencing the most sleep difficulties might deliberately avoid social media before bed, instead turning to activities like watching movies or listening to music as sleep aids. Many people with insomnia use screen-based activities to distract themselves from negative thoughts or anxiety that prevent sleep.

What This Means For Your Sleep

The study, published in Frontiers in Psychiatry, reveals how screens affect sleep through several pathways: direct displacement (screen time replacing sleep time), light exposure (suppressing melatonin production), increased mental arousal (making it harder to fall asleep), and sleep interruption (notifications disturbing sleep).

The findings from this study largely support the displacement hypothesis. If increased arousal from interactive content were the main factor, we would expect to see different associations between sleep and various screen activities. Instead, the consistent relationship between screen time and sleep problems across activity types indicates that simply spending time on screens—time that could otherwise be spent sleeping—may be the most important factor.

For university students already struggling with academic pressure, social adjustment, and mental health challenges, poor sleep represents an additional burden with potentially serious consequences. Sleep deprivation impairs attention, memory, and other cognitive functions crucial for academic success.

Non-screen users had 24% lower odds of reporting insomnia symptoms, suggesting that keeping devices out of the bedroom is a worthwhile sleep hygiene practice. Even if it’s not a complete solution to sleep difficulties, it represents a behavior that could meaningfully improve sleep for many young adults.

“If you struggle with sleep and suspect that screen time may be a factor, try to reduce screen use in bed, ideally stopping at least 30–60 minutes before sleep,” says lead author Dr. Gunnhild Johnsen Hjetland in a statement. “If you do use screens, consider disabling notifications to minimize disruptions during the night.”

The next time you’re tempted to bring your phone to bed “just to check a few things,” remember the Norwegian students’ experience: each hour spent on that screen, regardless of what you’re doing, might cost you 24 minutes of precious sleep and significantly increase your chances of developing insomnia. Given what we know about the essential role of sleep in physical and mental health, learning, and overall wellbeing, that’s a trade-off worth reconsidering.

Source: https://studyfinds.org/screen-time-bed-insomnia-risk/

Webb Telescope Catches Earliest Evidence of the Universe Turning On Its Lights

A close view on one of the most distant galaxies known: On the left are some 10,000 galaxies at all distances, observed with the James Webb Space Telescope. The zoom-in on the right shows, in the center as a red dot, the galaxy JADES-GS-z13-1. Its light was emitted 330 million years after the Big Bang and traveled for almost 13.5 billion years before reaching Webb’s golden mirror. Credit: ESA/Webb, NASA & CSA, JADES Collaboration, J. Witstok, P. Jakobsen, A. Pagan (STScI), M. Zamani (ESA/Webb).

At a time when light couldn’t easily travel through space due to a thick fog of neutral hydrogen, one galaxy managed to carve out its own bubble of clear space, allowing us to detect a specific light signal that should have been completely absorbed. This cosmic lighthouse from 13 billion years ago gives us our earliest direct glimpse of how the universe transitioned from darkness to light.

The galaxy, cataloged as JADES-GS-z13-1-LA, was observed at what scientists call a redshift of 13. While that technical term might not mean much to most of us, it represents an incredible distance in both space and time. When we look at this galaxy, we see light that has traveled for over 13 billion years to reach us.

This study, published in Nature, used the James Webb Space Telescope to observe this early galaxy. Scientists detected Lyman-alpha emission, a specific wavelength of light that is easily absorbed by neutral hydrogen, the gas that filled the early universe. Finding this emission suggests the galaxy was actively clearing the cosmic fog around it, like turning on a light in a dark room.
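To make the redshift number concrete: astronomers relate emitted and observed wavelengths through the standard formula lambda_observed = lambda_rest * (1 + z). The short sketch below applies that relation to the well-known rest wavelength of Lyman-alpha; it is a textbook illustration rather than a figure from the study, and it shows why an ultraviolet signal ends up in Webb’s infrared range.

```python
# Textbook illustration of the standard redshift relation,
# lambda_observed = lambda_rest * (1 + z); not a figure from the study.

LYMAN_ALPHA_REST_NM = 121.6  # rest-frame Lyman-alpha wavelength (nm, ultraviolet)
z = 13.0                     # redshift reported for this galaxy

lambda_observed_nm = LYMAN_ALPHA_REST_NM * (1 + z)
print(f"Lyman-alpha at z = {z:.0f}: observed near {lambda_observed_nm:.0f} nm "
      f"({lambda_observed_nm / 1000:.1f} micrometers, in the near-infrared)")
```

At a redshift of 13, the line is stretched by a factor of 14, from about 122 nanometers to roughly 1.7 micrometers, squarely within the range Webb’s infrared instruments were built to detect.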

From Cosmic Dark Ages to First Light

Recent observations with the James Webb Space Telescope have already revealed surprisingly bright galaxies existed earlier than astronomers expected. But this new finding provides something more concrete: direct evidence of reionization, the cosmic transformation that brought the universe out of darkness.

For context, in the first few hundred thousand years after the Big Bang, the universe expanded and cooled enough for protons and electrons to combine into neutral hydrogen atoms. This created a cosmic fog that blocked most light from traveling freely for hundreds of millions of years, a period astronomers call the cosmic “dark ages.”

Eventually, the first stars and galaxies began forming and producing ultraviolet radiation that started breaking apart these neutral hydrogen atoms. This gradually made the universe transparent to light (reionization).

Breaking Through the Cosmic Fog

The research team analyzed this distant galaxy using imaging and spectroscopy from JWST’s powerful instruments. The data revealed not just the usual signs of light being blocked by early-universe hydrogen, but also a surprisingly bright signal of light breaking through. Such strong emissions had previously been seen only in galaxies observed at later cosmic epochs, when more of the universe had already been cleared of neutral hydrogen.

Astronomers also saw what they call an “extremely blue ultraviolet continuum” (essentially meaning this galaxy appears very blue in color). The fact that we could even see the Lyman-alpha emission means the galaxy was incredibly good at making and releasing powerful radiation, strong enough to break apart the hydrogen gas around it.

“We know from our theories and computer simulations, as well as from observations at later epochs, that the most energetic UV light from the galaxies ‘fries’ the surrounding neutral gas, creating bubbles of ionized, transparent gas around them,” says study author Joris Witstok from the University of Copenhagen, in a statement. “These bubbles percolate the Universe, and after around a billion years, they eventually overlap, completing the epoch of reionization. We believe that we have discovered one of the first such bubbles.”

What could produce such powerful radiation in this ancient galaxy? One explanation involves extremely massive, hot stars that are much more efficient at producing ionizing radiation than typical stars today. These cosmic giants could be heating surrounding gas to temperatures exceeding 100,000 Kelvin, far hotter than our Sun’s surface at about 5,800 Kelvin.

Another possibility is that this galaxy contains an active supermassive black hole. The intense radiation from material falling into such a black hole could efficiently ionize nearby gas. Supporting this idea, the researchers found the galaxy appears extremely compact, smaller than 114 light-years across, which is more compact than most galaxies seen at similar distances.

“Most galaxies are known to host a central, supermassive black hole. As these monsters engulf surrounding gas, the gas is heated to millions of degrees, making it shine brightly in X-rays and UV before disappearing forever,” says Witstok.

The researchers also considered whether this might be one of the universe’s very first generation of stars, called Population III stars, formed from pristine gas containing only hydrogen and helium. These stars would be substantially more massive and hotter than later stars. However, the galaxy seems slightly too bright to fit this explanation perfectly.

Rewriting the Timeline of Cosmic Dawn

Whatever is powering this ancient light source, its discovery reshapes our understanding of how the universe transitioned from darkness to light. Until recently, the consensus among astronomers was that reionization did not begin until the Universe was around half a billion years old, completing another half billion years later. But this study pushes the beginning of reionization significantly earlier than previously thought.

The finding also provides evidence for an important physical process called Wouthuysen-Field coupling, where Lyman-alpha photons affect the spin temperature of hydrogen atoms. Scientists hope to detect this with radio telescopes searching for signals from the early universe.

“We knew that we would find some of the most distant galaxies when we built Webb,” says study author Peter Jakobsen from the University of Copenhagen. “But we could only dream of one day being able to probe them in such detail that we can now see directly how they affect the whole Universe.”

The universe’s first light didn’t switch on all at once; it started with galaxies like this one, each creating its own bubble of clear space that eventually merged with others to transform the entire cosmos. By pushing back the timeline of this process and showing it began with ordinary galaxies rather than exceptional ones, this discovery connects the dots between the universe’s first few hundred million years and the transparent cosmos that would eventually allow for our existence.

Source: https://studyfinds.org/webb-telescope-universe-turning-on-its-lights/

How Being Bilingual May Help the Brain Resist Alzheimer’s Damage

Knowing two languages could preserve your brain for longer, even with Alzheimer’s. (stoatphoto/Shutterstock)

Learning a second language offers benefits beyond ordering food on vacation or reading foreign literature. Recent research from Concordia University suggests bilingualism might actually help protect the brain against some devastating effects of Alzheimer’s disease.

Scientists have long observed that some people maintain their thinking abilities despite significant brain damage. This disconnect, where brain deterioration doesn’t necessarily cause expected cognitive problems, has prompted researchers to develop ideas like “brain reserve,” “cognitive reserve,” and “brain maintenance” to explain this resilience. This study, published in Bilingualism: Language and Cognition, found evidence that speaking two or more languages might boost this resilience, especially through brain maintenance in people with Alzheimer’s.

Alzheimer’s accounts for about two-thirds of dementia cases worldwide and typically progresses from subjective cognitive decline (SCD) to mild cognitive impairment (MCI) before developing into full Alzheimer’s. This progression usually comes with brain shrinkage, particularly in the medial temporal lobe, which includes the hippocampus, a structure essential for forming new memories.

Earlier studies suggested bilingual individuals might experience a 4-to-5-year delay in Alzheimer’s symptom onset compared to those who speak just one language. But how exactly bilingualism might shield against cognitive decline hasn’t been fully understood. This new research examines the structural brain differences between bilinguals and monolinguals (people who only speak one language) across various stages of Alzheimer’s progression.

The research team analyzed data from 364 participants from two major Canadian studies. Participants ranged from cognitively healthy individuals to those with subjective cognitive decline, mild cognitive impairment, and Alzheimer’s disease.

Using brain imaging, researchers measured the thickness and volume of specific brain regions involved in language processing and areas typically affected by Alzheimer’s. They wanted to see if bilinguals showed signs of greater “brain reserve” (more neural tissue in language-related regions) or “cognitive reserve” (maintaining cognitive function despite significant brain deterioration).

Unlike some previous studies, bilinguals didn’t show greater brain reserve in language-related regions compared to monolinguals. However, a difference emerged when looking at the hippocampus, one of the first areas damaged by Alzheimer’s.

Older monolinguals with Alzheimer’s showed substantial reduction in hippocampal volume compared to those with milder impairment, following the expected pattern of brain degeneration. But bilinguals with Alzheimer’s showed a different pattern: their hippocampal volumes weren’t significantly smaller than those of bilinguals with milder cognitive issues.

While monolingual brains showed progressive shrinkage as the disease worsened, bilingual brains seemed to maintain their hippocampal volume despite disease progression. This points to what researchers call “brain maintenance,” preserving brain structure over time despite aging or disease.

The hippocampus is vital for forming new memories, and its deterioration closely connects with the memory loss so characteristic of Alzheimer’s. If bilingualism helps preserve hippocampal volume, it could explain why some studies have found delayed symptom onset in bilingual Alzheimer’s patients.

The bilingual participants came from diverse backgrounds, with about 38% reporting English as their first language, 39% reporting French, and the rest reporting various other languages. About 68% knew two languages, 22% knew three, and some participants reported knowing up to seven languages. Interestingly, many bilingual participants could be described as “late bilinguals,” those who learned their second language after age 5, with moderate self-reported second language ability.

The potential brain benefits of bilingualism might not be limited to those who grew up speaking multiple languages or who are highly fluent in their second language. Even learning a second language later in life and achieving moderate skill might contribute to cognitive resilience.

What does this mean for ordinary people? While the study doesn’t suggest that learning a second language will prevent Alzheimer’s, it adds to growing evidence that certain lifestyle factors, including language learning, may help build resilience against cognitive decline.

The benefits of learning a second language extend far beyond communication skills. The mental demands of managing multiple languages may help build a more resilient brain, one better equipped to withstand the challenges of aging and disease. While learning a second language is no cure, it could help maintain thinking abilities for longer despite underlying brain damage.

Source: https://studyfinds.org/being-bilingual-resist-alzheimers-damage/

Average American Spends 138 Minutes Mired in Worrisome Thoughts Every Day

Photo by Road Trip with Raj on Unsplash

Anxiety has become an unwelcome companion for many, creeping into everyday life with relentless persistence. But for a growing number of young Americans, worry is no longer an uncontrolled intruder—it’s being managed, contained, and strategically addressed.

A recent survey of 2,000 adults across all generations by Talker Research uncovers a surprising trend: one in 10 young Americans deliberately carves out dedicated “worry time” in their daily routines. This approach stands in sharp contrast to older generations, with just 3% of Gen X and baby boomers adopting similar strategies.

A Generation Wrestling with Anxiety

The most striking revelation is the pervasive nature of worry among younger Americans. An overwhelming 62% of Gen Z and millennial respondents report feeling constantly anxious, compared to 38% of older generations. On average, people spend two hours and 18 minutes each day caught in the grip of worrisome thoughts—a significant chunk of time that could otherwise fuel productivity, creativity, or personal growth.

The timing of these worry periods reveals interesting patterns. A third of respondents find themselves most anxious when alone, while 30% are plagued by worries as they prepare to fall asleep. Another 17% are tormented by anxious thoughts upon waking, and 12% experience peak worry while getting ready for bed.

The Weight of Worry

When it comes to specific concerns, finances top the list. More than half (53%) of respondents cite money as their primary source of anxiety. Family worries follow closely, with 42% expressing deep concern about their loved ones. The same percentage fret about pending tasks and to-do lists.

Health concerns (37%), sleep anxiety (22%), and political uncertainties (19%) round out the top worries. For parents, the concerns extend far beyond personal anxieties. Seventy-seven percent express profound worry about the world their children are inheriting, with 34% specifically calling out climate change as a significant concern.

One parent’s raw emotion captures this generational anxiety: “Honestly, I worry that there won’t be a world for my child to grow up in.” Another wondered whether their children would experience the same opportunities they once enjoyed.

Strategic Approach to Mental Health?

The concept of scheduled worry time might seem counterintuitive, but mental health experts suggest it’s a deliberate approach to managing anxiety. By allocating a specific time to process and acknowledge worries, individuals can potentially reduce the overall impact of anxiety on their daily lives.

“Worry doesn’t just cloud our thoughts — it can seriously disrupt our sleep,” says Brooke Witt, Vice President of Marketing at Avocado Green Mattress, which commissioned the study. “When our minds are consumed by finances, family, or endless to-do lists, falling and staying asleep becomes a challenge, which directly impacts how rested we feel the next day.”

The survey suggests more than just a coping mechanism—it reveals a generational approach to mental health that is proactive, intentional, and self-aware. Younger Americans are not simply experiencing anxiety; they’re developing structured methods to understand, limit, and manage it.

The 10% who schedule dedicated worry time represent a potentially transformative approach to mental wellness. By containing their anxieties within a specific timeframe, they may be finding a way to prevent worry from consuming their entire day.

“There’s always something brewing in our minds — whether it’s work, family, or future concerns,” notes Amy Sieman, an affiliate manager with Avocado Green Mattress. “This research reveals how these everyday worries can follow us to bed, affecting both our sleep and our overall quality of life.”

Source: https://studyfinds.org/americans-worry-time-anxiety/

Could Intermittent Fasting Refuel an Aging Libido?

(© pikselstock – stock.adobe.com)

Male sexual desire tends to decline with age—it’s a biological fact that many men face as the years pass. By age 70, about a quarter of men report a noticeable drop in sexual drive. But what if there were a relatively simple dietary approach that could help maintain libido well into later years?

A fascinating study published in Cell Metabolism reveals that intermittent fasting significantly boosts sexual behavior in male mice by altering brain chemistry in ways that enhance sexual motivation. The research suggests that brain chemistry might matter more than physical reproductive metrics when it comes to maintaining sexual function with age.

Scientists from the German Center for Neurodegenerative Diseases and University of Health and Rehabilitation Sciences in China discovered that mice subjected to intermittent fasting—alternating 24-hour periods of eating and not eating—maintained much higher reproductive success rates in old age compared to their continuously-fed counterparts. While only 38% of aged mice with unlimited food access successfully reproduced, a remarkable 83% of intermittently fasted mice remained fertile.

What makes this finding truly surprising isn’t just the striking difference in reproductive success, but the mechanism behind it. The fasting regimen didn’t improve traditional markers of reproductive health like testosterone levels, sperm count, or sperm quality. In fact, the fasting mice actually showed greater testis weight reduction than continuously-fed mice. The secret to their reproductive success lay entirely in behavior—the fasting mice simply showed more interest in mating.

The research team, led by Kan Xie, Yu Zhou, and Dan Ehninger, identified a clear chemical pathway for this behavioral change. Aging typically raises levels of serotonin in the brain, which acts as a sexual inhibitor. Intermittent fasting prevented this age-related serotonin increase by reducing the amount of its precursor, the amino acid tryptophan, available to the brain.

Study authors explain that this mechanism works through a unique metabolic pathway. When mice fast and then refeed, their skeletal muscles draw more tryptophan from the bloodstream. With less tryptophan circulating in the blood, less crosses into the brain, resulting in lower serotonin production and consequently less inhibition of sexual behavior.

To confirm their findings, the researchers administered 5-HTP—a direct precursor to serotonin that bypasses the rate-limiting step in serotonin synthesis—to fasting mice. This promptly reversed the behavioral benefits, with the treated mice showing decreased sexual interest. This confirmed that reduced brain serotonin was indeed responsible for the enhanced sexual behavior in fasting mice.

While the study was conducted in mice, the core biochemical pathways involved function similarly in humans. Tryptophan metabolism and serotonin synthesis operate through comparable mechanisms across mammalian species, suggesting the potential for similar effects in humans.

The intermittent fasting regimen used in the study wasn’t extreme. The mice alternated between 24 hours of unlimited food access and 24 hours of fasting. During feeding days, they ate more than usual, compensating for fasting days. Overall, they consumed only about 13% fewer calories than continuously-fed mice. This modest reduction in calorie intake, combined with the cyclical fasting/feeding pattern, produced significant effects on brain chemistry.

It’s worth noting that the benefits weren’t immediate—a brief six-week intervention didn’t improve sexual behavior. The changes required longer-term adaptation, suggesting that lasting modifications to brain chemistry take time to develop.

For men concerned about age-related decline in sexual interest, this research offers food for thought. While human studies are needed to confirm similar effects, the fundamental biological mechanisms are plausible. Before making any changes to your dietary routine, it’s important to speak with your doctor first.

From an evolutionary perspective, these findings challenge the notion that dietary restriction necessarily suppresses reproduction. While many theories suggest organisms redirect resources from reproduction to survival during food scarcity, this research indicates that certain patterns of food availability might actually enhance reproductive behavior, at least in males.

It’s something many of us probably haven’t thought of before, but perhaps what happens in the kitchen might influence what happens in the bedroom. While results from animal studies don’t automatically transfer to humans, the fundamental mechanisms involved are similar enough to warrant further investigation. After all, when it comes to maintaining quality of life with age, few aspects matter more than preserving the capacity for romantic connection.

Source: https://studyfinds.org/could-intermittent-fasting-refuel-an-aging-libido/


“Taking in the good”: A simple way to offset your brain’s negativity bias

Credit: paul / Adobe Stock

Imagine lounging in a hammock on a sunny beach, palm trees swaying in the breeze, the bright turquoise of the sea barely dimmed by your sunglasses. You glance up and down the beach: not a soul in sight. It’s the first day of your holiday, and your whole body feels so relaxed; you could dissolve into the sand and be swept out to sea. You take a lazy sip of your pina colada and take it all in. Out of nowhere, a voice whispers into your ear: “No, really, take it in.”

That inner voice? It’s echoing a simple but often overlooked idea: Good experiences don’t always stick unless we make an effort to let them. That’s the premise behind Hardwiring Happiness, a book by psychologist Rick Hanson, who explores how consciously lingering on positive moments can help counterbalance the brain’s built-in negativity bias. That bias might have served a useful evolutionary purpose ages ago when our survival was more frequently at stake, but in a relatively stable 21st-century environment, it often traps us in cycles of rumination.

Hanson’s approach isn’t about forced optimism — it’s grounded in the idea of neuroplasticity, the brain’s capacity to change over time through repeated experience. Drawing on psychological theory and early research suggesting that “deliberately taking in the good” may help build resilience and emotional well-being, Hanson developed the HEAL method:

  • Have a good experience
  • Enrich it
  • Absorb it
  • Link it to other positive or negative experiences

While Hanson’s HEAL method draws on established neuroscience concepts, it remains a clinical and contemplative approach rather than a rigorously validated scientific intervention. In a small exploratory study using pre-post self-report measures, Hanson and colleagues assessed the effects of this intervention on 21 healthy subjects and found statistically significant self-reported improvements in measures like savoring and self-compassion, though the small sample size and lack of a control group limit the strength of the conclusions. The participants also reported statistically borderline improvements in self-esteem, positive rumination (self-focus), pride, happiness, and satisfaction with life. Many of these effects persisted after two months.

Change the mind, change the brain?

Can you really rewire your brain this way — simply by changing your mind? That’s the idea behind neuroplasticity: the brain’s ability to adapt and reorganize in response to experience. Researchers investigate it through a combination of brain imaging and behavioral assessments. For example, if someone is able to learn a new skill more quickly following an intervention, scientists can correlate this with changes in brain activity, using what’s called “task-based fMRI.” But the details of cause-and-effect are far from simple, and the research methods far from perfect. Although there is considerable evidence for neuroplasticity as a phenomenon related to health and well-being, skeptics warn of “neuroplasticity hype,” and positive neuroplasticity itself has not been corroborated neuroscientifically in humans.

Still, Hanson says, we’ve come a long way toward understanding the relationship between mind and brain.

“As science has progressed in the last hundred to a hundred and fifty years with the study of the nervous system,” he told Big Think, “the correlations have become increasingly well understood and tight between ongoing mental activity — hearing, seeing, loving, hating, wanting, remembering — and the underlying neural activity that is their physical basis.”

A number of brain imaging studies suggest that certain mental practices, such as mindfulness meditation, are associated with structural and functional brain changes, though questions remain about causality and long-term effects.

In the 1960s, researchers began using electroencephalography (EEG) to study neural activity during meditation. In the 1970s came magnetic resonance imaging (MRI), and by the 1990s — the so-called “decade of the brain” — scientists were increasingly able to associate specific mental experiences with distinct patterns of neural activity. For instance, one seminal study of nuns praying inside fMRI machines showed that their brains’ reward centers lit up in ways similar to people using cocaine.

“It doesn’t mean connecting with Christ consciousness is the same as taking cocaine,” Hanson notes, “but they were starting to find underlying neural correlates.”

A growing body of research shows that meditation and other contemplative practices can promote neuroplasticity, encouraging the brain to form new connections and adapt over time. In the mid-2000s, as Hanson and his colleagues began combing through the research literature, they wondered whether they could flip things around and harness what scientists had gathered about the brain to use in contemplative and clinical practice, an investigation which ultimately became the basis for HEAL.

Could they deliberately activate the brain to induce certain mental activities that would lead to lasting changes in the brain and, ultimately, support the development of optimal traits like a more positive outlook on life? As Hanson put it: “Could we use our mind to stimulate and change our brain to benefit our mind?”

If so, harnessing brain science could, in theory, motivate people who wouldn’t otherwise think to take up a “mental hygiene” regime such as meditation.

“When people realize this airy-fairy, woo-woo stuff is actually helping their own brain, they get much more motivated,” he said. Ruminating over the state of the world may not be helpful, but “when you slow down, take a moment to feel close to your friend or partner, and let that really land inside, that’s changing your brain for the better.”

While the precise mechanisms remain uncertain, Hanson points to the role of the autonomic nervous system — particularly how social connection and safety cues can downregulate stress responses — as one pathway through which positive experiences may shape long-term well-being.

“If I want to calm myself down, it’s important to touch my partner, or my dog, because that social engagement is going to ripple down and calm my heart,” he said.

Source: https://bigthink.com/neuropsych/rick-hanson-heal-method/

The More Partners the Merrier? Non-Monogamous Relationships Just as Satisfying, Study Shows

(Credit: Casimiro – stock.adobe.com)

For decades, we’ve been fed a consistent message: monogamous relationships represent the gold standard of romantic fulfillment. This belief runs so deep that researchers have now given it a name—the “monogamy-superiority myth.” It’s a belief that has shaped personal choices, public policies, and professional practices, despite remarkably little evidence supporting the claim.

A new review published in The Journal of Sex Research directly challenges this assumption with data from nearly 25,000 individuals. The findings? When it comes to both relationship and sexual satisfaction, there’s virtually no difference between people in monogamous relationships and those in consensually non-monogamous arrangements.

This extensive review, led by Joel R. Anderson of La Trobe University, represents the first comprehensive analysis comparing satisfaction levels across different relationship structures. The findings effectively challenge the notion that non-monogamous relationships are somehow lacking or less fulfilling than monogamous ones.

The Persistence of Monogamy as the ‘Ideal’

Western society has long operated under the assumption that monogamy is not just normal, but optimal. This belief has been reinforced through cultural messages, religious teachings, and even healthcare practices. People in non-monogamous relationships often face judgment, discrimination, and the assumption that their relationship choices indicate personal problems or instability.

The research team identified several reasons these beliefs persist. For many, monogamy is seen as a moral choice guided by religion or cultural norms. It’s often viewed as “normal” and beneficial because it allows people to avoid stigma. Monogamous relationships are frequently assumed to result in better health outcomes, greater stability, and even better intimacy—assumptions the new research directly contradicts.

‘Monogamish’ Relationships Are Better?

The researchers examined studies conducted between 2007 and 2024, mostly in Western countries like the United States, Canada, and Australia. This body of research included diverse participants across sexuality and gender identity, though most samples were predominantly white.

Non-monogamy in these studies covered various relationship structures, including:

  • Polyamory: maintaining several loving relationships at once
  • Open relationships: agreements allowing sex outside the primary relationship
  • Swinging: partners engaging in outside sexual activities together, often at organized events
  • “Monogamish” arrangements: mostly monogamous relationships with occasional agreed-upon exceptions

Across these diverse relationship structures, the analysis found that monogamous and non-monogamous people reported basically identical levels of both relationship and sexual satisfaction. This pattern held true regardless of participants’ sexuality, with both straight and LGBTQ+ samples showing no significant differences.

Some interesting details emerged when researchers looked at specific types of non-monogamous arrangements. People in “monogamish” relationships reported slightly higher relationship satisfaction than those in strictly monogamous relationships. Similarly, polyamorous individuals and swingers reported somewhat higher sexual satisfaction than their monogamous counterparts.

Another surprising finding emerged when researchers examined different aspects of relationship satisfaction. Non-monogamous individuals actually rated trust higher than monogamous individuals, while scoring equally on commitment, intimacy, and passion. This challenges the common assumption that non-monogamous relationships necessarily involve less trust or commitment.

Study authors suggest that non-monogamous relationships might actually strengthen certain relationship skills. The nature of managing multiple relationships might encourage people to put more effort into communication, openness, and understanding—all key components of trust.

Changing Norms?

Despite the stigma and discrimination that non-monogamous people often face, their reported satisfaction matched, and sometimes exceeded, that of monogamous individuals.

The research team offers several explanations for these findings. Non-monogamous relationships may allow people to experience more variety and freedom. These structures let people have different needs met by different partners, whereas monogamous individuals must find all their needs satisfied by one person. Research also indicates that non-monogamy can encourage personal growth and independence, which may boost relationship and sexual satisfaction.

These findings matter for therapists, counselors, and other healthcare professionals who work with non-monogamous clients. Previous studies have shown that healthcare practitioners sometimes view non-monogamy as a problem or sign of trouble, making assumptions that can damage the therapeutic relationship.

For the roughly 5% of adults currently in non-monogamous relationships—and the approximately 20% who have tried consensual non-monogamy at some point—these findings validate that their relationship choices can lead to satisfying, fulfilling partnerships.

It’s worth noting that while these results show equal satisfaction across relationship structures, they don’t suggest any particular relationship style is right for everyone. Personal preferences, values, and needs remain most important in determining the best relationship arrangement for each person.

Ultimately, these findings don’t just validate non-monogamous relationships—they invite us to question assumptions about relationships that we may have never examined. Perhaps satisfaction has less to do with relationship structure than with how well any relationship meets the unique needs of the people involved.

Source: https://studyfinds.org/non-monogamous-relationships-just-as-satisfying/

Goodbye, Breakfast? This Science-Backed Eating Window Burns More Fat Than Exercise Alone

(© RasaBasa – stock.adobe.com)

There’s promising news for fitness enthusiasts looking to optimize their body composition: combining a time-restricted eating (also known as intermittent fasting) regimen with your exercise routine may help reduce body fat while preserving muscle mass.

Researchers have discovered that coordinating when you eat with your exercise routine might significantly improve body composition results, according to a comprehensive study published in the International Journal of Obesity. The new meta-analysis by scientists at the University of Mississippi, along with colleagues from Texas Tech University, reveals an intriguing fitness strategy that doesn’t involve fancy equipment, expensive supplements, or complicated diet plans.

The secret to a truly fit body may be all about timing your meals and your workout in concert with one another.

The Power of Time-Restricted Eating with Exercise

Time-restricted eating (TRE) involves limiting all food consumption to a specific window—typically 4-12 hours daily—while fasting for the remaining hours. Unlike other dietary approaches that dictate specific foods or calorie counts, TRE focuses simply on when you eat.

The research team analyzed 15 studies involving 338 participants who followed TRE protocols while engaging in structured exercise programs. They compared these individuals to control groups who performed identical exercises but ate without time restrictions.

The results were clear: people who combined TRE with exercise lost approximately 1.3 kg (2.9 pounds) more fat and reduced their body fat percentage by an additional 1.3% compared to those who exercised without meal timing restrictions. Perhaps most importantly, muscle mass wasn’t significantly affected, indicating that TRE doesn’t compromise muscle preservation during exercise programs.

Most studies used a 16:8 schedule—16 hours fasting, 8 hours eating—with feeding windows typically between noon and 8 P.M. Importantly, exercise was performed during feeding hours, not while fasting, which likely helped preserve muscle mass and optimize performance.
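As a concrete illustration of that schedule, the small helper below (hypothetical code, not something used in the research) checks whether a given clock time falls inside a noon-to-8-p.m. eating window:

```python
from datetime import time

# Hypothetical helper illustrating the 16:8 pattern described above:
# a noon-to-8-p.m. eating window, with all other hours spent fasting.
WINDOW_START = time(12, 0)  # noon
WINDOW_END = time(20, 0)    # 8 p.m.

def in_eating_window(t: time) -> bool:
    """Return True if clock time t falls inside the eating window."""
    return WINDOW_START <= t <= WINDOW_END

print(in_eating_window(time(13, 30)))  # True: a 1:30 p.m. meal fits the window
print(in_eating_window(time(21, 15)))  # False: 9:15 p.m. falls in the fasting period
```

Under that pattern, a workout scheduled for mid-afternoon lands inside the feeding window, matching the study protocols in which exercise was performed while fed rather than fasting.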

“[T]ime-restricted eating appears to induce a small decrease in fat mass and body fat percentage while conserving fat-free mass in adults adhering to a structured exercise regimen, as opposed to exercise-matched controls without temporal eating restrictions,” the authors write.

Why This Combination Works

Several mechanisms might explain why restricting your eating window enhances fat loss beyond exercise alone.

For many people, TRE naturally reduces caloric intake by limiting opportunities to eat. However, benefits persisted even in studies that controlled for calories, indicating that timing itself matters regardless of how much you eat.

Eating during daylight hours may better align with our body’s natural biological rhythms—the internal clocks that regulate numerous physiological processes. This alignment could optimize metabolic function compared to the typical modern pattern of eating from early morning until late night.

TRE may also trigger beneficial hormonal changes, including increased levels of compounds that enhance fat burning (adiponectin, noradrenaline, growth hormone) while decreasing stress hormones like cortisol. Additionally, fasting periods activate metabolic pathways that promote fat oxidation, potentially amplifying exercise’s effects.

The research examined multiple exercise approaches, including aerobic training (running, cycling), resistance training (weightlifting), and concurrent training (combining both). The benefits held across these different exercise modes, indicating the TRE plus exercise formula works regardless of your preferred workout style.

What This Means for Your Fitness Routine

Before rushing to adopt this approach, however, several factors deserve consideration. The benefits, while statistically significant, were moderate in size. Individual responses likely vary considerably based on factors not fully captured in current research. And since most studies were short-term (typically 4-8 weeks), the long-term sustainability and effects remain largely unknown.

It’s also worth noting that most participants were already experienced exercisers in good metabolic health, with relatively few studies including those with obesity. Whether the same benefits apply to beginners or those with significant metabolic challenges remains unclear.

For active individuals looking to fine-tune their approach to body composition, this research provides preliminary support for a simple yet potentially effective strategy: timing meals alongside exercise may help optimize fat loss while preserving valuable muscle tissue.

As always, before making any changes to your diet or daily health regimen, talk to your doctor first.

Source: https://studyfinds.org/goodbye-breakfast-this-science-backed-eating-window-burns-more-fat-than-exercise-alone/

Why Women’s Pain Has Been Misunderstood For Decades

Treating female pain may require a different approach than pain management for men. (My Ocean Production/Shutterstock)

For decades, women suffering from chronic pain have been told “it’s all in your head” when treatments that work for men fail them. Now, research from the University of Calgary reveals that women’s pain actually operates through entirely different biological pathways than men’s. Scientists have discovered that the same protein triggers pain in both sexes, but through completely different immune cells and chemical signals.

A new study published in Neuron reveals that a protein called pannexin-1 (or Panx1 for short) works differently in males and females. This helps to explain why women are more likely to develop chronic pain conditions and why they often don’t respond as well to certain treatments.

The Divide in Pain Research

Most pain research has historically focused on male subjects, even though women make up the majority of chronic pain patients. This study aims to fix that imbalance by looking at both male and female animals to understand why pain works differently between sexes.

While both sexes use the Panx1 protein in their immune cells to create pain signals, they use completely different cells and chemical messengers to do it.

In males, Panx1 works through cells called microglia (the immune cells of the spinal cord and brain) to release a protein called VEGF, which increases pain sensitivity. In females, however, Panx1 works through CD8+ T cells (a type of white blood cell) to release leptin, a hormone best known for its role in hunger and metabolism. This may help explain why treatments that target microglia cells work well for reducing pain in males but often fail in females.

Crossing Biological Boundaries

When researchers took microglia cells from male animals, activated them, and transferred them into females, the females developed pain. Similarly, when they transferred activated female T cells into males, the males also developed pain. This confirmed that these sex-specific mechanisms weren’t just correlations; they actually caused pain.

The researchers also created mice that lacked the Panx1 protein specifically in microglia cells. Male mice without this protein showed much less pain after nerve injury, while female mice still developed normal pain sensitivity.

When they analyzed fluid from the spinal cord, they found that after nerve injury, males had higher levels of VEGF while females had higher levels of leptin. Blocking VEGF reduced pain in males but not females, while neutralizing leptin reduced pain in females but not males.

Hope for Better Pain Treatments

This could lead to better pain treatments for everyone. Current treatments for neuropathic pain (pain caused by nerve damage) include anticonvulsants, antidepressants, and opioids. These treatments tend to work less effectively in women and often cause worse side effects.

With this new knowledge, doctors might eventually be able to prescribe treatments targeted specifically to each sex, like VEGF blockers for men and leptin blockers for women.

This discovery is particularly important for conditions like fibromyalgia, which affects women much more often than men. Previous studies have shown that leptin levels can predict pain severity in women with fibromyalgia. This research now provides a possible explanation for that connection.

Panx1 could be a treatment target that works for both sexes. Although the way the body reacts to this protein is different for men and women, medications targeting it could help both, potentially transforming pain treatment.

For women who have struggled to have their pain taken seriously or effectively treated, this research provides solid evidence that female pain may warrant dedicated research and targeted treatments. Doctors may eventually move beyond one-size-fits-all approaches to develop treatments tailored to each person’s unique biology.

Source : https://studyfinds.org/womens-pain-misunderstood-medication/

Who Is Liable When AI Makes a Medical Mistake?

Who is held accountable when AI systems make mistakes in medicine? (© BiancoBlue | Dreamstime.com)

Doctors are increasingly being asked to use AI systems to help diagnose patients, but when mistakes happen, they take the blame. New research shows physicians are caught in an impossible trap: use AI to avoid mistakes, but shoulder all responsibility when that same AI fails. This “superhuman dilemma” is the healthcare crisis nobody’s talking about.

The Doctor’s Burden: Caught Between AI and Accountability

New research published in JAMA Health Forum explains how the rapid deployment of artificial intelligence in healthcare is creating an impossible situation for doctors. While AI promises to reduce medical errors and physician burnout, it may be worsening both problems by placing an unrealistic burden on physicians.

Researchers from the University of Texas at Austin found that healthcare organizations are adopting AI technologies much faster than regulations and legal standards can keep pace. This regulatory gap forces physicians to shoulder an extraordinary burden: they must rely on AI to minimize errors while simultaneously bearing full responsibility for determining when these systems might be wrong.

Studies reveal that the average person assigns greater moral responsibility to physicians when they’re advised by AI than when guided by human colleagues. Even when there’s clear evidence that the AI system produced wrong information, people still blame the human doctor.

Physicians are often viewed as superhuman, expected to have exceptional mental, physical, and moral abilities. These expectations go far beyond what is reasonable for any human being.

When Two Decision-Making Systems Collide

Physicians face a complex challenge when working with AI systems. They must navigate between “false positives” (putting too much trust in wrong AI guidance) and “false negatives” (not trusting correct AI recommendations). This balancing act occurs amid competing pressures.

Healthcare organizations often promote evidence-based decision-making, encouraging physicians to view AI systems as objective data interpreters. This can lead to overreliance on flawed tools. Meanwhile, physicians also feel pressure to trust their own experience and judgment, even when AI systems may perform better in certain tasks.

Adding to the complexity is the “black box” problem. Many AI systems provide recommendations without explaining their reasoning. Even when systems are made more transparent, physicians and AI approach decisions differently. AI identifies statistical patterns from large datasets, while physicians rely on reasoning, experience, and intuition, often focusing on patient-specific contexts.

The Hidden Costs of Superhuman Expectations

The consequences of these expectations affect both patient care and physician wellbeing. Research from other high-pressure fields shows that employees burdened with unrealistic expectations often hesitate to act, fearing criticism. Similarly, physicians might become overly cautious, only trusting AI when its recommendations align with established care standards.

This defensive approach creates problems of its own. As AI systems improve, excessive caution becomes harder to justify, especially when rejecting sound AI recommendations leads to worse patient outcomes. Physicians may second-guess themselves more frequently, potentially increasing medical errors.

Beyond patient care, these expectations take a psychological toll. Research shows that even highly motivated professionals struggle to maintain engagement under sustained unrealistic pressures. This can undermine both quality of care and physicians’ sense of purpose.

Source: https://studyfinds.org/ai-medical-mistake/

Could Salty Foods Be Fueling Depression Rates?

(© atipong – stock.adobe.com)

Too much salt has long been blamed for heart problems, but new research suggests it might harm our minds too. Scientists from Nanjing Medical University have discovered a surprising connection between high-salt diets and depression-like behaviors in mice, potentially explaining why depression rates continue rising alongside our consumption of processed foods.

The research team found that excessive salt intake triggers specific immune responses in the brain that can lead to behaviors resembling depression. Their findings, published in The Journal of Immunology, offer a biological explanation for previously observed connections between processed food consumption and mood disorders.

Depression affects millions worldwide, with lifetime prevalence reaching 15-18% in many populations. Modern Western diets, especially fast food, contain dramatically more sodium than home-cooked meals—sometimes exceeding homemade options by 100-fold.

The Salt-Depression Connection

In the study, mice fed high-salt diets showed behaviors remarkably similar to those experiencing chronic stress. They explored less, displayed heightened anxiety, and spent more time motionless during tests measuring “behavioral despair”—patterns that parallel human depression symptoms.

The researchers investigated the biological mechanisms behind these behavioral changes. High-salt diets significantly increased production of Interleukin-17A (IL-17A), an immune signaling molecule, particularly in specialized immune cells called gamma delta T cells (γδT cells).

Previous research had linked elevated IL-17A to depression, but this study reveals a direct pathway from dietary salt to increased IL-17A production to depression-like symptoms.

To confirm this connection, the team tested mice genetically modified to lack the ability to produce IL-17A. These mice showed no signs of depression despite consuming high-salt diets. Even more convincingly, when researchers removed the specific immune cells that produce IL-17A, the animals no longer developed depression-like behaviors on high-salt diets.

What This Means for Humans

While conducted in mice, the research has compelling implications for human health. Population studies have already shown links between high-salt diets and increased depression rates. This study offers a potential explanation for those observations.

The average American diet contains about 3,400 mg of sodium daily—far exceeding the American Heart Association’s recommended maximum of 2,300 mg. Fast food meals often deliver an entire day’s worth of recommended sodium in a single sitting.

This isn’t the first research connecting diet and mental health. Mediterranean diets rich in fruits, vegetables, whole grains, olive oil, and lean proteins correlate with lower depression rates. Conversely, diets heavy in processed foods, sugars, and unhealthy fats tend to increase depression risk.

The distinctive aspect of this study is identifying a specific biological pathway connecting diet directly to depression-like behaviors. This precision opens doors to potential new treatment approaches targeting the immune system rather than just brain chemistry.

Simple Ways To Reduce Salt Intake

Current depression treatments typically focus on neurotransmitter imbalances using medications like SSRIs or on changing thought patterns through therapy. The discovery that dietary factors might contribute to depression through immune pathways represents an important shift in how we might approach mental health care.

Applying these findings doesn’t necessarily require waiting for new pharmaceutical treatments. Simple dietary changes are accessible to most people:

  • Reducing processed food intake
  • Eating more home-cooked meals
  • Checking food labels for sodium content
  • Using herbs and spices instead of salt for flavoring

Some health professionals already recommend the DASH diet (Dietary Approaches to Stop Hypertension) for patients with high blood pressure. This diet emphasizes fruits, vegetables, whole grains, lean proteins, and reduced sodium. This new research hints such approaches might benefit mental health too.

Beyond individual choices, these findings could influence public health policies around sodium reduction in processed foods. Some countries have already implemented such regulations: the United Kingdom’s salt reduction program has achieved a 15% decrease in average salt intake since implementation.

While more research is needed before definitive conclusions can be drawn about salt reduction as a depression treatment in humans, this study adds to mounting evidence that what we eat affects both body and mind. For those struggling with depression, these findings don’t suggest dietary changes should replace established treatments like therapy and medication, but they highlight diet as an important complementary factor in mental health care.

Source: https://studyfinds.org/salty-food-depression-sodium/

Sibling Study: Longer Breastfeeding Linked to Better Brain Development

(Photo by PeopleImages.com – Yuri A on Shutterstock)

Children who are breastfed for longer periods of time during infancy experience fewer developmental delays and a reduced risk of neurodevelopmental conditions, including disorders like autism and ADHD, according to new research. The study, led by scientists at the KI Research Institute in Israel, confirms what many parents might hope to hear: breastfeeding babies for at least six months appears to boost their developmental outcomes.

While health organizations have recommended breastfeeding for the first six months of life for years, this study offers particularly strong evidence by addressing problems that weakened earlier research on the topic.

Published in JAMA Network Open, the study involved health data from 570,532 Israeli children, including nearly 38,000 sibling pairs. It ranks among the largest investigations into breastfeeding and development ever conducted.

Led by Dr. Inbal Goldshtein and Dr. Yair Sadaka, the research team used an innovative approach to ensure their findings were reliable. The study uniquely combined routine developmental checkup records from Israel’s maternal-child health clinics with national insurance disability data, allowing researchers to track both developmental milestone achievement and diagnosed conditions.

They compared siblings within the same families who had different breastfeeding experiences but shared genes and home environment. This clever design controlled for family factors like parental intelligence and involvement that often confuse results in other studies.

Children exclusively breastfed for at least six months had 27% lower odds of developmental delays compared to those breastfed for shorter periods. Even children who received both breast milk and formula for six months or more showed a 14% reduction. When examining siblings with different breastfeeding histories, those who breastfed longer had 9% lower odds of milestone delays and 27% lower odds of neurodevelopmental conditions compared to siblings who breastfed for shorter periods or not at all.
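
For readers unfamiliar with the statistics, figures like “27% lower odds” come from an odds ratio. Below is a minimal Python sketch of that arithmetic; the counts are invented purely for illustration, since the study’s raw tables are not reproduced here.

    # Minimal sketch: computing an odds ratio from a 2x2 table.
    # Counts are invented for illustration; they are NOT the study's data.

    def odds_ratio(exposed_cases, exposed_noncases,
                   unexposed_cases, unexposed_noncases):
        """Odds of the outcome in the exposed group vs. the unexposed group."""
        odds_exposed = exposed_cases / exposed_noncases
        odds_unexposed = unexposed_cases / unexposed_noncases
        return odds_exposed / odds_unexposed

    # Hypothetical: "exposed" = breastfed six months or more,
    # "case" = a developmental delay on milestone checks.
    or_value = odds_ratio(exposed_cases=750, exposed_noncases=9_250,
                          unexposed_cases=1_000, unexposed_noncases=9_000)
    print(f"Odds ratio: {or_value:.2f}")      # ~0.73
    print(f"Lower odds: {1 - or_value:.0%}")  # ~27% lower odds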

The benefits remained clear even after accounting for numerous factors, including pregnancy duration, birth weight, maternal education, family income, and postpartum depression.

The advantages appeared most notable in language and social development—crucial areas for school success and forming friendships. Motor skills improved too, though less dramatically. Premature babies, who typically face higher developmental risks, seemed to benefit even more from extended breastfeeding than full-term infants.

For parents struggling with breastfeeding choices, there’s reassuring news. When researchers specifically examined siblings who both breastfed for at least six months—one exclusively on breast milk and one receiving some formula—exclusive breastfeeding didn’t show a meaningful additional advantage. This indicates that maintaining some breastfeeding for longer might matter more than avoiding formula completely.

The study’s authors believe that their findings should inform public health policies and support systems rather than pressure individual families. Their goal remains helping children reach their potential, not creating guilt among parents facing breastfeeding challenges.

Researchers emphasize that while breastfeeding is linked to better development, it’s just one of many factors that shape a child’s growth. They noted that identifying changeable factors like nutrition is essential to helping each child reach their potential.

Despite expert recommendations, actual breastfeeding rates often fall below targets. Many mothers struggle to balance breastfeeding with work demands, inadequate parental leave, and aggressive formula marketing.

Formula companies spend around $55 billion yearly promoting their products, sometimes undermining women’s confidence in their ability to breastfeed. The authors advocate for stronger supportive policies, including better parental leave and limits on formula marketing practices.

The biological mechanism for these benefits may relate to breast milk’s effects on brain development. Earlier research has shown differences in brain structure between breastfed and formula-fed babies. Some scientists believe these benefits might work through effects on the infant’s gut microbiome, which connects to brain development through what’s known as the gut-brain axis.

As the researchers conclude, these results may help guide not only parents but also public health initiatives aimed at giving children the best developmental start possible. When every advantage counts for our children, supporting breastfeeding appears to be a worthwhile investment.

Source : https://studyfinds.org/breastfeeding-for-six-months-boosts-child-developmen/

Ginseng’s Secret Anti-Aging Weapon: How Compound K is Changing Skincare Science

Ginseng roots. (© mnimage – stock.adobe.com)

For thousands of years, ginseng has been treasured in Eastern medicine for its health-promoting properties. Now, modern science is uncovering the remarkable potential of one specific component within this ancient herb – Compound K, a rare metabolite formed when certain ginsenosides from ginseng are broken down in the gut. This substance is becoming a focal point in skin aging research, offering new possibilities for combating wrinkles, skin laxity, and other visible signs of aging.

Research published in the Journal of Dermatologic Science and Cosmetic Technology reveals that Compound K (CK) fights skin aging through multiple biological pathways, targeting different aspects of the aging process simultaneously. The study was conducted by scientists at Yunnan University and Guangdong Industry Polytechnic University.

How Skin Ages and Why Compound K Matters

Skin aging happens because of internal factors like genetics and metabolism, along with external forces such as ultraviolet radiation and pollution. These elements combine to create thinning skin, reduced elasticity, wrinkles, and uneven color. The research reveals Compound K tackles these issues through several different mechanisms at once.

One key way Compound K benefits aging skin is by strengthening its protective barrier. The research shows that CK boosts levels of desmosome adhesive protein 1 (DSC1) while reducing harmful enzymes that can compromise skin integrity. In everyday terms, this means skin treated with this ginseng compound retains moisture better and has improved defense against environmental damage.

Collagen breakdown is a major culprit behind skin aging. UV radiation triggers enzymes called matrix metalloproteinases (MMPs), which degrade collagen and lead to wrinkles and sagging. Studies demonstrate that Compound K effectively blocks these collagen-destroying enzymes in skin cells exposed to UV light, helping maintain the skin’s structural framework.

Beyond just preventing damage, Compound K actively promotes repair by stimulating collagen production. It also increases hyaluronic acid in the skin by enhancing the gene responsible for producing this moisture-binding molecule that naturally decreases as we age.

Beyond Surface-Level Benefits: Cellular and Genetic Effects

Particularly interesting is Compound K’s effect on cellular “housekeeping” – the process where cells clean out damaged components (known scientifically as autophagy). This natural maintenance system slows with age, contributing to cellular dysfunction. Research indicates that CK regulates this cleaning process, helping cells function optimally for longer periods.

The compound’s anti-inflammatory benefits are substantial too. Low-grade chronic inflammation, sometimes called “inflammaging,” increasingly appears to drive various age-related conditions, including skin aging. Through several pathways, Compound K reduces inflammation and resulting cellular damage.

At the genetic level, Compound K activates SIRT1, often referred to as a longevity gene because of its role in cellular health. Studies reveal that UV exposure significantly reduces SIRT1 expression, speeding up aging, while CK counteracts this effect in a dose-dependent manner.

For those concerned about cellular energy decline – a hallmark of aging – research points to Compound K improving mitochondrial function, our cells’ power plants. Studies show it promotes mitochondrial health, maintains proper dynamics, and increases energy production. Since mitochondrial dysfunction characterizes aging cells, this benefit could significantly improve skin health and appearance.

From Lab to Skincare: The Practical Applications

Getting active ingredients through the skin barrier presents a major challenge in skincare. Fortunately, Compound K’s relatively small molecular weight allows it to penetrate skin layers more effectively than many other ingredients. Research using artificial skin models confirms CK can move through skin layers, making it a viable option for topical applications.

Remarkably, studies suggest that when applied to skin, other ginsenosides in skincare products can transform into Compound K within the skin itself, potentially boosting the effectiveness of ginseng-based products. This conversion process in skin mirrors what happens in the digestive system when ginsenosides are consumed orally.

While typical anti-aging ingredients often target just one aspect of aging, Compound K’s wide-ranging approach gives it unique value. It simultaneously improves skin barrier function, collagen production, moisture retention, inflammation control, and cellular energy – addressing virtually every major contributor to visible aging.

This research coincides with growing consumer preference for plant-based skincare with scientific backing. The natural cosmetics market continues expanding rapidly as consumers seek evidence-based natural alternatives to synthetic compounds. Ginseng extracts rich in Compound K could meet both the demand for natural ingredients and the expectation for proven results.

Is Ginseng the Future of Anti-Aging Research?

Skincare developers now face the task of creating stable delivery systems that maximize Compound K’s benefits. The compound’s multifaceted effects suggest it could enhance products targeting various signs of aging, from fine lines to skin firmness and radiance.

For consumers, the study shows that products containing Compound K or its precursors might offer broader anti-aging benefits than single-action ingredients. However, concentration matters – many studies used relatively high amounts of the compound, which may not be present in all commercial products claiming ginseng benefits.

Meanwhile, more studies like this one could completely change the future of the skin aging industry. Simple moisturizers claiming miraculous anti-aging benefits are being replaced by ingredients like Compound K that work through specific cellular pathways, genetic expression, and metabolic processes.

While Compound K isn’t a magical fountain of youth, it represents a scientifically validated approach to supporting skin’s natural functions and resilience. Building that resilience, rather than fighting the inevitable, may be the key to aging well.

Source : https://studyfinds.org/ginsengs-secret-anti-aging-weapon-how-compound-k-is-changing-skincare-science/

Children Glued to Phones More Likely to Become High-Strung, Depressed Teens

(Credit: Andrea Piacquadio from Pexels)

In case you needed another reason to hold off on buying your child a phone, research shows a troubling connection between childhood screen habits and teenage mental well-being. The eight-year study, which tracked children from elementary school into adolescence, found that kids who racked up more screen time—especially on mobile devices—showed higher levels of stress and depressive symptoms as teenagers.

The study adds to the large body of research that should make parents think twice about unlimited device access, especially as more children experience mental health struggles at an early age. Between one-quarter and one-third of adolescents worldwide experience mental health problems, with symptoms typically first appearing during the teenage years. Researchers now have more concrete evidence about lifestyle factors that might help prevent psychological distress before it takes root.

Digital Habits and Mental Health: What the Research Shows

Study authors used data from the Physical Activity and Nutrition in Children (PANIC) study, which followed 187 Finnish children over eight years, from ages 6-9 into their mid-teens. Researchers regularly checked in on their physical activity, screen time, sleep patterns, and eating habits. When these children reached adolescence (average age 15.8), the researchers assessed their mental health using standardized measures of stress and depression.

The data painted a clear picture: teenagers who had accumulated more total screen time and mobile device use throughout childhood showed significantly higher levels of stress and depressive symptoms. The connection between mobile device use and depression was particularly strong, showing a “moderate effect size”—substantial in behavioral research terms.
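
For context, “effect size” in behavioral research is often reported as Cohen’s d, where roughly 0.2 counts as small, 0.5 as moderate, and 0.8 as large. A minimal Python sketch of the calculation, using invented symptom scores rather than the PANIC study’s data:

    import statistics

    # Minimal Cohen's d sketch. The depressive-symptom scores below are
    # invented for illustration; they are NOT the PANIC study's data.

    high_screen = [13, 12, 15, 11, 14, 13, 12, 14]  # heavier childhood screen use
    low_screen = [12, 11, 14, 10, 13, 12, 12, 14]

    def cohens_d(a, b):
        """Standardized mean difference using a pooled standard deviation."""
        na, nb = len(a), len(b)
        va, vb = statistics.variance(a), statistics.variance(b)
        pooled_sd = (((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)) ** 0.5
        return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

    print(f"Cohen's d: {cohens_d(high_screen, low_screen):.2f}")  # ~0.56, moderate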

The team found that adolescents spent nearly five hours daily on screens, with over two hours on mobile devices alone. Many parents might find these numbers unsurprising, but the mental health correlations deserve attention.

Physical activity told the opposite story. Teens who maintained higher activity levels during childhood, especially in supervised settings like sports or structured exercise programs, showed better mental health outcomes. This protective effect remained significant even after researchers accounted for factors like parental education, body composition, and puberty status.

Gender differences added another dimension to the findings. For boys, physical activity showed stronger protective effects against stress than for girls.

Surprisingly, neither diet quality nor sleep duration showed strong relationships with teen mental health in this study. This doesn’t mean these factors aren’t important for overall health—just that screen time and physical activity may have more direct impacts on adolescent mental wellbeing.

More Screen Time Should Mean More Physical Activity

For parents struggling with screen time battles, this research provides compelling evidence for setting reasonable limits. The findings highlight that mobile device use specifically—more than television or computer time—warrants special attention. With smartphones and tablets becoming increasingly central to education and social connections, creating healthy boundaries becomes more challenging but potentially more important.

The study, published in JAMA Network Open, also emphasizes the value of supervised physical activities. Children who participated in more structured exercise from ages 6-15 showed fewer mental health problems in adolescence. It’s all the more reason schools and community programs aimed at promoting youth mental health should find more ways to get children moving.

Most revealing were the outcomes showing that teenagers with both low physical activity and high screen time had the worst mental health outcomes. This demonstrates that addressing either factor alone might not be as effective as a balanced approach that both limits screen time and increases physical activity.

Creating Healthier Digital Habits for Children

While conducted in Finland, the study’s findings likely apply to children in other developed countries with similar technology access patterns. As smartphone use continues rising globally, understanding its potential psychological impact grows increasingly urgent.

For families navigating the complex digital landscape, this research offers practical guidance: limit screen time (especially on mobile devices), encourage regular physical activity (particularly supervised activities like sports), and remember that these choices may affect not just current behavior but long-term mental health.

Mental health professionals and pediatricians may want to include screen time discussions in their preventive care conversations. Creating balanced digital environments and promoting consistent physical activity within supportive social contexts could become key strategies for protecting youth mental health.

Incorporating technology into children’s lives at younger ages is understandably commonplace these days. But here we have another study showing why childhood habits matter. How we balance screens and physical activity today may shape the psychological landscape our children navigate tomorrow.

Source : https://studyfinds.org/children-glued-to-phones-stressed-depressed-teens/

Night Owls Are More Likely to Have Depression

Apparently, if you’re a night owl, you’re more prone to developing depression.

Night owls tend to get a bad rap. They’re often told they’re less productive and lazier than early risers, merely because they sleep more during daylight—you know, when the world is expected to be most active.

Now, according to recent research, they’re also apparently more likely to experience depression.

“Depression affects daily functioning and can impact a person’s work and education,” Simon Evans, PhD, a neuroscience lecturer and researcher in the School of Psychology of the University of Surrey in the U.K., told Medical News Today. “It also increases the risk of going on to develop other serious health conditions, including heart disease and stroke, so it’s important for us to study ways to reduce depression.”

Obviously, if there was a simple way to decrease your risk of developing depression, most of us would take it. In this case, that might mean getting to sleep earlier in the night rather than staying up until the early morning hours. However, unfortunately, some of us don’t have the luxury to change our sleeping hours.

Does that mean those who work night shifts or lead lifestyles that require them to be active at night are doomed to be depressed?

The study, published in the journal PLOS One, found that “evening-types had significantly higher levels of depression symptoms, poorer sleep quality, and lower levels of ‘acting with awareness’ and ‘describing,’ as well as higher rumination and alcohol consumption.”

With so many young adults self-identifying as “night owls” (or evening-types, as the study refers to them), it’s concerning to note this negative link between their sleep patterns and mental health.

“A large proportion (around 50%) of young adults are ‘night owls,’ and depression rates among young adults are higher than ever,” said Evans, lead author of the study. “Studying the link is therefore important.”

“More important is the finding that the link between chronotype and depression was fully mediated by certain aspects of mindfulness—‘acting with awareness’ in particular—sleep quality, and alcohol consumption,” Evans continued. “This means that these factors seem to explain why night owls report more depression symptoms.”

Source : https://www.vice.com/en/article/night-owls-are-more-likely-to-have-depression/

The Science of Falling Out of Love: Study Identifies ‘Point of No Return’ in Dying Relationships

(Photo by Prostock-Studio on Shutterstock)

Most of us believe relationship endings happen in messy, unpredictable ways—a betrayal discovered, a fight that goes too far, or a slow drift apart. But what if breakups actually follow a mathematical pattern? What if the end of your relationship is as predictable as the phases of the moon?

New research published in the Journal of Personality and Social Psychology reveals exactly that. Scientists have discovered that failing relationships don’t just randomly deteriorate—they follow a specific two-phase decline that can be measured, tracked, and even predicted with surprising accuracy.

Researchers Janina Bühler from Johannes Gutenberg University Mainz and Ulrich Orth from the University of Bern analyzed data from four major longitudinal studies across different countries. They found that couples who eventually break up typically experience a mild decline in happiness for years, followed by a dramatic drop in the final months or years before separation.

The Countdown to Breakup

Scientists call this phenomenon “terminal decline,” borrowing a concept previously used to describe how cognitive abilities and happiness deteriorate before death. The research reveals that our romantic relationships follow similar predictable patterns before they end.

The study found that “time-to-separation was a much better predictor of change than time-since-beginning.” While we often think about relationships in terms of how long couples have been together, this research shows that the time remaining until separation tells us more about relationship health.

Perhaps most fascinating is how differently breakup initiators and recipients experience this decline. People who eventually initiate breakups start becoming dissatisfied much earlier—about a year before the actual split. Meanwhile, their partners often remain relatively happy until just months before the end, when their satisfaction plummets dramatically.

Many people intuitively sense when their relationship is heading downhill. This research confirms these feelings aren’t just subjective impressions—they reflect a measurable trajectory toward separation that looks remarkably similar across cultures, age groups, and relationship types.

Exploring The Phases of Decline

In the study, researchers tracked thousands of couples over time, measuring their relationship satisfaction annually. They compared people who eventually separated with similar people who stayed together.

The pattern emerged consistently across all four datasets. “The decline prior to separation was divided into a preterminal phase, characterized by a smaller decline, and a terminal phase, characterized by a sharp decline,” the authors write. The major shift between these phases—what researchers call the “transition point”—occurred anywhere from 7 months to 2.3 years before the actual breakup, depending on the study.
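
Conceptually, the “transition point” is the breakpoint at which a shallow decline turns into a steep one. The Python sketch below shows one simple way to locate such a breakpoint in yearly satisfaction scores by fitting two straight lines; the data are invented, and the authors’ actual statistical models are far more sophisticated, so treat this only as an illustration of the idea.

    import numpy as np

    # Invented yearly satisfaction scores for one couple: a shallow
    # preterminal decline, then a steep terminal drop. NOT study data.
    years = np.arange(10)
    satisfaction = np.array([8.0, 7.9, 7.7, 7.6, 7.4, 7.3, 6.2, 5.0, 3.8, 2.5])

    def piecewise_sse(breakpoint):
        """Total squared error from fitting separate lines before and after."""
        sse = 0.0
        for seg in (slice(0, breakpoint), slice(breakpoint, len(years))):
            coeffs = np.polyfit(years[seg], satisfaction[seg], 1)
            residuals = satisfaction[seg] - np.polyval(coeffs, years[seg])
            sse += float(residuals @ residuals)
        return sse

    # Each segment needs at least two points to fit a line.
    best = min(range(2, len(years) - 1), key=piecewise_sse)
    print(f"Estimated transition point: year {best}")  # year 6 for this data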

The researchers also examined whether overall life satisfaction followed the same trajectory. They found that “terminal decline was less visible in life satisfaction than in relationship satisfaction.” This suggests that even as people recognize their relationship is deteriorating, they may already be adjusting emotionally to life after it ends.

If most relationships fade according to this pattern instead of a dramatic, sudden event or spat, is there any hope for relationships already in this spiral? In many cases, the relationship is effectively over long before the actual separation occurs—couples are just living through the terminal phase.

For couples therapists and relationship counselors, these findings could transform how they evaluate troubled relationships. By identifying whether a couple is in the early “preterminal” phase versus the steep “terminal” decline, professionals might better determine which relationships can be saved and which have likely passed the point of no return.

Demographic factors influenced these patterns in interesting ways. The researchers found that “age at separation and marital status explained variance in the effect sizes.” Younger adults showed less dramatic terminal declines than older adults, possibly because younger people expect more relationship transitions.

The study also revealed that “individuals who were the recipients of the separation (in contrast to individuals who initiated the separation) entered the terminal phase later but then decreased more strongly.” This explains why breakups often feel so asymmetrical, with one partner seemingly more prepared than the other.

What This Means For Your Relationship

Many of us stay in declining relationships hoping things will improve. The study, unfortunately, indicates there might be a point of no return—a transition into terminal decline—after which recovery becomes highly unlikely.

For those currently in relationships, the findings offer both caution and hope. On one hand, recognizing the signs of terminal decline might help people make more informed decisions about when to seek help or when to move on. On the other hand, understanding that the steepest decline typically happens only after crossing a specific threshold might encourage couples to address problems before reaching that critical transition point.

The researchers frame it this way: “If unsatisfied couple members are still in the preterminal phase and have not yet reached the transition point, efforts to improve the relationship may be more effective, potentially preventing the onset of the terminal phase and the eventual dissolution of the relationship.”

The study also brings some comfort to those blindsided by breakups. If you’ve ever been shocked when a partner suddenly announced they wanted to separate, the science explains why: they likely crossed into terminal decline months or even years before you did. By the time you recognized the severity of the problems, they had already been mentally preparing for the end.

Like many aspects of human behavior, from birth to cognitive development to aging, romantic relationships appear to follow predictable patterns that can be scientifically observed and mapped. The terminal decline of relationship satisfaction isn’t just a feeling—it’s a measurable phenomenon that operates according to consistent rules across different cultures and contexts.

The study’s authors emphasize couples in rocky relationships should seek help before hitting the point of no return. “It is important to be aware of these relationship patterns,” says Bühler, who works as a couple therapist in addition to being a professor. “Initiating measures in the preterminal phase of a relationship, i.e., before it begins to go rapidly downhill, may thus be more effective and even contribute to preserving the relationship.”

Source : https://studyfinds.org/falling-out-of-love-point-of-no-return-in-dying-relationships/

Cannabis users under 50 are 6 times more likely to have a heart attack, new study shows

A new study shows that young people who consume marijuana are six times more likely to experience a heart attack than their non-using counterparts.

Research published in the Journal of the American College of Cardiology (JACC) documents that people under the age of 50 who consume marijuana are about 6.2 times more likely to experience a myocardial infarction, commonly known as a heart attack, than non-marijuana users. Young marijuana users are also 4.3 times more likely to experience an ischemic stroke and 2 times more likely to experience heart failure, the study shows.

Researchers surveyed over 4.6 million people under the age of 50, of whom 4.5 million did not use marijuana and 93,000 did. All participants were free of health conditions commonly associated with cardiovascular risks, like hypertension, coronary artery disease, diabetes, and a history of myocardial infarctions. The study also excluded people who use tobacco to eliminate another potential risk factor.
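
To make a figure like “6.2 times more likely” concrete, the Python sketch below computes a crude risk ratio for a cohort of this shape. The group sizes echo the article, but the heart attack counts are invented for illustration; the published estimate is model-adjusted and based on the study’s actual counts.

    # Minimal sketch: crude risk ratio in a cohort study. Group sizes echo
    # the article; the event counts are invented and NOT the study's numbers.

    users, nonusers = 93_000, 4_500_000
    mi_users, mi_nonusers = 558, 4_500   # hypothetical heart attack counts

    risk_users = mi_users / users
    risk_nonusers = mi_nonusers / nonusers
    risk_ratio = risk_users / risk_nonusers

    print(f"Risk among users:     {risk_users:.2%}")     # 0.60%
    print(f"Risk among non-users: {risk_nonusers:.2%}")  # 0.10%
    print(f"Crude risk ratio:     {risk_ratio:.1f}x")    # 6.0x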

Ahmed Mahmoud, lead researcher and clinical instructor at Boston University, told USA TODAY that though the numbers appear significant, researchers’ biggest concern right now is studying more data, as research on marijuana’s effects on the cardiovascular system remains limited.

“Until we have more solid data, I advise users to try to somehow put some regulation in the using of cannabis,” Mahmoud said. “We are not sure if it’s totally, 100% safe for your heart by any amount or any duration of exposure.”

How does marijuana affect the heart?

As studies remain inconclusive and few and far between, scientists and doctors are still unclear how marijuana affects the cardiovascular system. But generally, researchers understand that marijuana can make the heart beat faster and raise blood pressure, as reported by the Centers for Disease Control and Prevention.

Mahmoud said researchers believe marijuana may make small defects in the coronary arteries’ lining, the thin layer of cells that forms the inner surface of blood vessels and hollow organs.

“Because cannabis increases the blood pressure and makes the blood run very fast and make some detects in the lining to the coronary arteries, this somehow could make a thrombosis (formation of a blood clot) or a temporary thrombosis in these arteries, which makes a cardiac ischemic (stroke) or the heart muscle is not getting enough oxygen to function,” Mahmoud said. “This is what makes the heart injured and this is a myocardial infarction or heart attack.”

Stanton Glantz, a retired professor from the University of California, San Francisco School of Medicine, co-authored a study published in the Journal of the American Heart Association last year that also addresses marijuana’s effects on the cardiovascular system.

Glantz, who also founded the Center for Tobacco Control Research and Education, told USA TODAY he believes smoking marijuana has the same effects on the cardiovascular system as smoking tobacco.

When smoking a cigarette, the blood that is distributed through the body becomes contaminated with the cigarette smoke’s chemicals, which can damage the heart and blood vessels, the CDC reports. This damage can result in coronary heart disease, hypertension, heart attack, stroke, aneurysms and peripheral artery disease.

Changes in blood chemistry from cigarette smoke can also cause plaque in the body’s arteries, resulting in a disease called atherosclerosis, according to the CDC. When arteries become full of plaque, it’s harder for blood to move throughout the body. This can create blood clots and ultimately lead to a heart attack, stroke or death.

How does the new study correspond with previous research?

The recently published study aligns with previous research in the field.

The Journal of the American Heart Association study, which surveyed more than 434,000 people between the ages of 18 and 74, found that marijuana affects the cardiovascular system. The study also singled out marijuana users who didn’t use tobacco.

The 2024 study found that people who consume – specifically inhale – marijuana are more likely to experience coronary heart disease, myocardial infarction and stroke. There is a “statistically significant increase in risk,” Glantz said.

The main difference between the new study, co-authored by Mahmoud, and the 2024 study is the populations studied, Glantz said.

The 2024 study analyzed data from the Behavioral Risk Factor Surveillance Survey, a CDC-operated telephone survey that includes responses from across the country. The new study analyzed data from 53 healthcare organizations using the TriNetX health research network.

Source : https://www.usatoday.com/story/news/health/2025/03/21/cannabis-users-heart-attack-risk/82574623007/

Why Can’t We Remember the First Few Years of Life?

Why don’t we remember being a baby? (Miramiska/Shutterstock)

Have you ever wondered why you can’t remember being a baby? This blank space in our memory, known as “infantile amnesia,” has puzzled scientists for years. Most of us can’t recall anything before age three or four. Until recently, researchers thought baby brains simply couldn’t form memories yet because the memory-making part of our brain (the hippocampus) wasn’t developed enough.

But it turns out babies might remember more than we thought. Research just published in the journal Science shows that babies as young as one year old can actually form memories in their hippocampus. The study, led by researchers at Yale and other American universities, suggests our earliest memories aren’t missing; we just can’t access them later.

How Do You Study Memory in Babies Who Can’t Talk?

You can’t exactly ask a baby, “Do you remember this?” The researchers came up with a clever solution. They showed 26 babies (ages 4 months to 2 years) pictures of faces, objects, and scenes while scanning their brains. Later, they showed each baby two pictures side by side, one they’d seen before and one new one, and tracked where the babies looked.

“When babies have seen something just once before, we expect them to look at it more when they see it again,” says lead study author Nick Turk-Browne from Yale University, in a statement. “So in this task, if an infant stares at the previously seen image more than the new one next to it, that can be interpreted as the baby recognizing it as familiar.”
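
That looking-time logic reduces to a simple preference score. Here is a minimal Python sketch with invented gaze durations; the variable names and threshold are illustrative, not the study’s actual analysis pipeline.

    # Minimal sketch of a familiarity-preference score from looking times.
    # Gaze durations (in seconds) are invented for illustration.

    trials = [
        {"familiar": 3.2, "novel": 1.8},  # stared longer at the old image
        {"familiar": 2.0, "novel": 2.1},  # no clear preference
        {"familiar": 4.1, "novel": 1.5},
    ]

    for i, trial in enumerate(trials, start=1):
        total = trial["familiar"] + trial["novel"]
        preference = trial["familiar"] / total  # > 0.5 hints at recognition
        verdict = "remembered" if preference > 0.5 else "no preference"
        print(f"Trial {i}: preference = {preference:.2f} ({verdict})")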

Getting babies to lie still in a brain scanner is no small feat. The research team has spent years developing special techniques to make this possible. They made the babies comfortable and only scanned them when they were naturally awake and content.

The Big One-Year Memory Milestone

The brain scans showed that when a baby’s hippocampus was more active while seeing a picture for the first time, they were more likely to stare at that same picture later, showing they may have remembered it.

This ability to remember showed a clear age pattern. Babies younger than 12 months didn’t show consistent memory signals in their brains, but the older babies did. And the specific part of the hippocampus that lit up, the back portion, is the same area adults use for episodic memories.

The researchers had previously discovered that even younger babies (as young as three months) can do a different kind of memory called “statistical learning.” This is basically spotting patterns across experiences rather than remembering specific events.

Source: https://studyfinds.org/cant-remember-first-years-of-life/

This Smartphone App Helps Seniors in Assisted Living Fight Cognitive Decline

Providing seniors with an app that boosts brain health solves many accessibility challenges in assisted living facilities. (Pressmaster/Shutterstock)

Let’s face it, we’re all worried about memory loss as we age. But what if the same device you use for calling grandkids could actually strengthen your mind? A new study revealed that a smartphone app improved thinking abilities in older adults living in assisted living facilities.

Residents of assisted living often feel isolated and might not have easy access to specialized brain health services. That’s why an app would make perfect sense, giving residents access to brain training at their fingertips. Scientists from the University of Utah, Texas A&M, and a company called Silvia Health tested an app called the “Silvia Program” with older folks in assisted living.

Research published in Public Health in Practice shows the promising potential of this app’s capabilities for fighting cognitive decline. Instead of just including memory games like many brain apps, this one took a kitchen-sink approach, mixing brain training with exercise routines, food tracking, and other lifestyle stuff all in one app.

While seniors who didn’t use the app lost some brain function over the 12 weeks (yikes), the app users actually saw their scores improve. That’s kind of a big deal for anyone with parents or grandparents in assisted living who worry about their mental sharpness.

The idea behind the app’s design is actually pretty simple. Instead of just doing one thing for your brain, it mixes several approaches together. It’s like cross-training for your mind rather than repeating a single exercise.

Earlier studies already showed this mix-and-match approach helps fight memory loss. But getting regular in-person brain training can be tough, especially if you live in a facility with limited transportation options. That’s why putting these tools on a smartphone could be such a great approach. It brings brain health right to where seniors already are.

This Silvia Program isn’t your run-of-the-mill brain games app. It bundles five different tools:

  • Daily goals to keep you motivated
  • Brain exercises targeting different thinking skills
  • Trackers for food/exercise/sleep habits
  • Workout routines you can do sitting in your living room
  • A talking AI that tests your thinking and adjusts the difficulty

The app also provides personalized coaching with a clinical psychologist, along with cognitive exercises, tailored activity suggestions, and a voice analysis tool capable of identifying early signs of dementia. It engages in interactive conversations to assess the user’s needs and adjusts its functions accordingly.

The Science Behind Silvia

For the study, the researchers recruited 20 folks living in an assisted living facility in Indiana who were experiencing mild cognitive impairment but didn’t have dementia or serious depression. They split them into two groups of 10. One group used the Silvia app for about an hour twice a week for three months. The other group just kept doing whatever they normally did.

They used a test called the MoCA (Montreal Cognitive Assessment) to measure brain function. Doctors use this test to check for early signs of dementia.

Now, 20 people isn’t exactly huge for a study, but what they found still raised some eyebrows. The app seemed to help with visual thinking, language, memory recall, and knowing the time and place.

Why does this matter? Many people in assisted living start feeling cut off from the world after moving in. They might not see family as often, can’t always get to brain health specialists, and sometimes feel like they’re just waiting around. That’s exactly when memory tends to nosedive.

Two things make this app approach especially practical. First, it’s right there on a device many seniors already use. Second, it adapts to each person. You can dial the brain games up or down in difficulty so they’re not too easy or impossibly hard. The exercise instructions show pictures of each move, so you’re not left wondering what “lateral arm raise” means. The chatty AI keeps tabs on how you’re doing, then adjusts everything accordingly, like having a personal trainer for your brain who lives in your pocket.
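
That “dial up or down” behavior resembles a standard staircase rule used in adaptive training software. Below is a hypothetical Python sketch of the general idea; the article doesn’t publish Silvia’s actual algorithm, so the function name and thresholds here are invented.

    # Hypothetical staircase rule for adaptive difficulty -- an illustration
    # of the general idea, not the Silvia Program's actual algorithm.

    def adjust_difficulty(level, recent_scores,
                          up_at=0.85, down_at=0.60,
                          min_level=1, max_level=10):
        """Raise difficulty after strong performance, lower it after weak."""
        accuracy = sum(recent_scores) / len(recent_scores)
        if accuracy >= up_at:
            level += 1   # too easy: step up
        elif accuracy < down_at:
            level -= 1   # too hard: step down
        return max(min_level, min(level, max_level))

    level = 4
    level = adjust_difficulty(level, [1, 1, 1, 0, 1, 1])  # 83% -> stays at 4
    level = adjust_difficulty(level, [1, 1, 1, 1, 1, 1])  # 100% -> moves to 5
    print(level)  # 5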

That said, there are limitations to consider. As noted, the study sample was small, and it only ran for 12 weeks. We have no idea if the brain boosts last longer than that or if they’d show up in different groups of people. Most participants were white women, which doesn’t tell us how the app might work for men or people from different backgrounds. Oddly enough, the app users had more years of education than the non-users, which might have affected the results.

What This Means for Aging and Memory Care

With baby boomers hitting their 70s and 80s, we’re staring down a tsunami of potential memory problems. The old-school fix? Regular visits with specialists, which means transportation hassles, scheduling headaches, and hefty bills. Phone apps skip all that. You just tap and train whenever it’s convenient.

Still, this isn’t the first hint that digital tools might help aging brains. Other studies have already shown that brain games and regular exercise each help slow mental decline. This research suggests bundling them together in one easy-to-use app might pack an even bigger punch.

Nursing homes and assisted living centers should also take note. Their staff is always stretched thin. Apps that residents can use independently might supplement care without breaking budgets or requiring extra personnel. One iPad and a handful of good apps could potentially benefit dozens of residents.

Phones and tablets often get a bad rap for making us dumber, shortening attention spans, and replacing memory with Google searches. But this study flips that narrative. The same devices blamed for digital brain drain might actually build brain power when loaded with the right software.

Source : https://studyfinds.org/app-helping-seniors-fight-cognitive-decline-assisted-living/

Can a daily nap do more harm than good? A sleep researcher explains

Woman listening to music (© Prostock-studio – stock.adobe.com)

It’s the middle of the afternoon, eyelids heavy, focus slipping. You close your eyes for half an hour and wake up feeling recharged. But later that night, you’re tossing and turning in bed, wondering why you can’t drift off. That midday snooze, which felt so refreshing at the time, might be the reason.

Naps have long been praised as a tool for boosting alertness, enhancing mood, strengthening memory, and improving productivity. Yet for some, they can sabotage nighttime sleep.

Napping is a double-edged sword. Done right, it’s a powerful way to recharge the brain, improve concentration, and support mental and physical health. Done wrong, it can leave you groggy, disoriented, and struggling to fall asleep later. The key lies in understanding how the body regulates sleep and wakefulness.

Most people experience a natural dip in alertness in the early afternoon, typically between 1 p.m. and 4 p.m. This isn’t just due to a heavy lunch – our internal body clock, or circadian rhythm, creates cycles of wakefulness and tiredness throughout the day. The early afternoon lull is part of this rhythm, which is why so many people feel drowsy at that time.

Studies suggest that a short nap during this period – ideally followed by bright light exposure – can help counteract fatigue, boost alertness, and improve cognitive function without interfering with nighttime sleep. These “power naps” allow the brain to rest without slipping into deep sleep, making it easier to wake up feeling refreshed.

But there’s a catch: napping too long may result in waking up feeling worse than before. This is due to “sleep inertia” – the grogginess and disorientation that comes from waking up during deeper sleep stages.

Once a nap extends beyond 30 minutes, the brain transitions into slow-wave sleep, making it much harder to wake up. Studies show that waking from deep sleep can leave people feeling sluggish for up to an hour. This can have serious implications if they then try to perform safety-critical tasks, make important decisions or operate machinery, for example. And if a nap is taken too late in the day, it can eat into the “sleep pressure build-up” – the body’s natural drive for sleep – making it harder to fall asleep at night.
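
That “sleep pressure build-up” is usually formalized as Process S in the classic two-process model of sleep regulation: pressure climbs toward a ceiling while you’re awake and drains exponentially while you sleep. The Python sketch below uses textbook-style placeholder time constants (not fitted values) to show how a late one-hour nap leaves less pressure at bedtime.

    import math

    # Illustrative Process-S dynamics from the two-process model of sleep
    # regulation. Time constants are placeholders, not fitted values.

    TAU_RISE, TAU_FALL = 18.0, 4.0  # hours: build-up vs. decay constants

    def awake(pressure, hours):
        """Sleep pressure rises exponentially toward a ceiling of 1.0."""
        return 1.0 - (1.0 - pressure) * math.exp(-hours / TAU_RISE)

    def asleep(pressure, hours):
        """Sleep pressure decays exponentially during sleep."""
        return pressure * math.exp(-hours / TAU_FALL)

    s = awake(0.15, 16)  # 7 a.m. wake-up, no nap, through to 11 p.m.
    print(f"No nap, pressure at 11 p.m.:   {s:.2f}")  # ~0.65

    s = awake(0.15, 10)  # 7 a.m. to 5 p.m.
    s = asleep(s, 1)     # one-hour late-afternoon nap
    s = awake(s, 5)      # 6 p.m. to 11 p.m.
    print(f"Late nap, pressure at 11 p.m.: {s:.2f}")  # ~0.54, harder to doze off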

When napping is essential

For some, napping is essential. Shift workers often struggle with fragmented sleep due to irregular schedules, and a well-timed nap before a night shift can boost alertness and reduce the risk of errors and accidents. Similarly, people who regularly struggle to get enough sleep at night – whether due to work, parenting or other demands – may benefit from naps to bank extra hours of sleep that compensate for their sleep loss.

Nonetheless, relying on naps instead of improving nighttime sleep is a short-term fix rather than a sustainable solution. People with chronic insomnia are often advised to avoid naps entirely, as daytime sleep can weaken their drive to sleep at night.

Certain groups use strategic napping as a performance-enhancing tool. Athletes incorporate napping into their training schedules to speed up muscle recovery and improve sports-related parameters such as reaction times and endurance. Research also suggests that people in high-focus jobs, such as healthcare workers and flight crews, benefit from brief planned naps to maintain concentration and reduce fatigue-related mistakes. NASA has found that a 26-minute nap can improve performance of long-haul flight operational staff by 34%, and alertness by 54%.

How to nap well

To nap effectively, timing and environment matter. Keeping naps between ten and 20 minutes prevents grogginess. The ideal time is before 2 p.m. – napping too late can push back the body’s natural sleep schedule.

The best naps happen in a cool, dark, and quiet environment, similar to nighttime sleep conditions. Eye masks and noise-canceling headphones can help, particularly for those who nap in bright or noisy settings.

Despite the benefits, napping isn’t for everyone. Age, lifestyle and underlying sleep patterns all influence whether naps help or hinder. A good nap is all about strategy – knowing when, how, and if one should nap at all.

For some it’s a life hack, improving focus and energy. For others, it’s a slippery slope into sleep disruption. The key is to experiment and observe how naps affect your overall sleep quality.

Source : https://studyfinds.org/daily-nap-more-harm-than-good/

How social media expectations are destroying teenage friendships

Social media adds a whole new layer of stress to teen friendships. (SpeedKingz/Shutterstock)

Today’s teens face a challenge that their parents never did: the pressure to be constantly available to their friends online. New research from the University of Padua in Italy reveals how this digital pressure is creating stress that leads to real-world friendship conflicts for teenagers.

The study, published in Frontiers in Digital Health, tracked 1,185 teenagers over six months to understand how social media affects their friendships. What they found paints a concerning picture of modern teen relationships.

When Friends Don’t Text Back

“We show that adolescents’ perceptions of social media norms and perceptions of unique features of social media contribute to digital stress, which in turn increases friendship conflicts,” says lead study author Federica Angelini from the University of Padua, in a statement.

The researchers identified two main types of digital stress that teens experience. The first, entrapment, refers to the pressure teens feel to always be available and responsive to their friends online. The second, disappointment, arises when friends don’t respond as quickly or as often as expected, leading to negative feelings. Both types of stress play significant roles in the challenges teens face in their digital friendships.

Surprisingly, it’s not the pressure to be available that causes most problems; it’s the disappointment teens feel when friends aren’t available to them.

“Disappointment from unmet expectations on social media—such as when friends do not respond or engage as expected—is a stronger predictor of friendship conflict than the pressure to be constantly available,” explains Angelini.

In other words, teens aren’t fighting because they feel burdened by needing to respond to every message, but because they feel upset when their friends don’t respond to them.

The Problem with Pictures and Videos

When examining different features of social media, the researchers found that the visual nature of content (photos, videos, stories) was most connected to creating disappointment and conflict.

“Visual content makes it easier for teens to see what their friends are doing at any given time. If teens notice that their friends are active online or spend time with others while ignoring their messages, they may feel excluded, jealous, or rejected,” Angelini explained.

We’ve all had that moment: seeing a friend post a fun story while they still haven’t answered the message you sent hours ago. For teens, these visual cues can trigger strong emotional responses that lead to real-world arguments.

The good news is that parents and educators can help teens develop healthier social media habits. Teaching teens strategies to protect their mental health online is crucial as parents navigate the uncharted territory of raising a generation growing up with social media.

“One such habit for teenagers could be setting boundaries, for example scheduling ‘offline’ times or managing notifications. When done in discussion with friends this can also help reduce misunderstandings,” says Angelini.

The researchers also recommend helping teens understand that not every message needs an immediate response. Learning this can reduce stress while maintaining healthy friendships.

Boys and girls experience these pressures slightly differently. Boys who perceived strong expectations of constant availability on social media actually reported feeling less trapped by them than girls did, possibly because expectations around response times differ between friend groups.

Because the study followed the same teens over six months, the researchers could show that digital stress predicted more conflict over time, rather than merely being correlated with it.

Understanding these pressures is key to helping teens build healthy, sustainable friendships in the digital age. By recognizing the emotional impact of unmet digital expectations, parents and educators can guide teenagers toward more balanced social connections both online and offline.

Source: https://studyfinds.org/teen-friendships-pressure-social-media-expectations/

End of Headphones? New ‘Audible Enclaves’ Deliver Sound Only to Your Ears

(© Julia – stock.adobe.com)

Ever been annoyed by someone else’s music in a shared space? Or struggled to have a private conversation in a busy office? Researchers at Penn State University might have just solved these everyday acoustic headaches with a breakthrough that creates “sound bubbles” only the intended listener can hear.

These localized audio spots, which the researchers dubbed “audible enclaves,” can be placed with pinpoint accuracy—even behind obstacles like human heads—while remaining silent to everyone else in the room.

“We essentially created a virtual headset,” said Jia-Xin “Jay” Zhong, a postdoctoral scholar in acoustics at Penn State. “Someone within an audible enclave can hear something meant only for them — enabling sound and quiet zones.”

How Audible Enclaves Work

Published in the Proceedings of the National Academy of Sciences, the research tackles a challenge in acoustics that has long frustrated audio engineers. Sound waves naturally spread out as they travel, making it nearly impossible to contain them without physical barriers. This is why conversations carry across rooms and why traditional speakers fill entire spaces with sound.

“We use two ultrasound transducers paired with an acoustic metasurface, which emit self-bending beams that intersect at a certain point,” said corresponding author Yun Jing, professor of acoustics in the Penn State College of Engineering. “The person standing at that point can hear sound, while anyone standing nearby would not. This creates a privacy barrier between people for private listening.”

The system works by sending out two beams of ultrasonic sound—frequencies too high for humans to hear—that travel along curved paths and meet at a specific target location. Using 3D-printed structures called metasurfaces, the researchers shape these ultrasonic beams so they bend around obstacles like a person’s head.

With the metasurfaces positioned in front of the two transducers, the ultrasonic waves travel at two slightly different frequencies along crescent-shaped trajectories until they intersect. The metasurfaces were 3D printed by co-author Xiaoxing Xia, staff scientist at Lawrence Livermore National Laboratory.

Neither beam is audible on its own—it is the intersection of the beams together that creates a local nonlinear interaction, which generates audible sound. The beams can bypass obstacles, such as human heads, to reach a designated point of intersection.
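
The audible tone comes from the difference between the two carrier frequencies. As a rough illustration (the carrier frequencies below are our own example values, not figures from the paper), mixing beams at 40.0 kHz and 40.5 kHz through a simple squared-pressure nonlinearity (a crude stand-in for the air’s nonlinear response) yields a 500 Hz difference tone, squarely in the audible range. A minimal Python sketch:

import numpy as np

fs = 200_000                    # sample rate (Hz), high enough for the carriers
t = np.arange(0, 0.05, 1 / fs)  # 50 ms of signal
f1, f2 = 40_000, 40_500         # illustrative ultrasonic carriers (Hz), assumed values

# Where the beams cross, the medium responds nonlinearly; squaring the summed
# pressure is a simple stand-in for that nonlinear mixing.
p = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
mixed = p ** 2                  # contains components at f2 - f1, 2*f1, f1 + f2, 2*f2

spectrum = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(mixed), 1 / fs)
band = (freqs > 20) & (freqs < 20_000)   # keep only the audible range
peak = freqs[band][np.argmax(spectrum[band])]
print(f"strongest audible component: {peak:.0f} Hz")  # ~500 Hz, the difference tone

Only the mixed signal contains the 500 Hz component; each beam on its own stays entirely ultrasonic, which is why sound exists only at the intersection point.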

Breaking Sound Barriers

Most audio technologies work within narrow frequency ranges, but this system demonstrated effectiveness across an impressive spectrum from 125 Hz to 4 kHz. This range covers most frequencies needed for speech and music reproduction, making it practical for real-world applications.

The approach differs fundamentally from existing directional sound technologies. Previous attempts to create focused audio have required massive speaker arrays and complex processing, especially for lower frequencies with longer wavelengths. Commercial “sound beam” products exist but can’t bend around obstacles or create such sharply defined listening spots.

Perhaps most impressive is the system’s compact size. The researchers achieved their results using a source aperture measuring just 0.16 meters—tiny compared to conventional approaches that would require much larger equipment to direct low-frequency sounds.

To verify the technology works with actual content rather than just test tones, the team conducted rigorous testing. “We used a simulated head and torso dummy with microphones inside its ears to mimic what a human being hears at points along the ultrasonic beam trajectory, as well as a third microphone to scan the area of intersection,” said Zhong. “We confirmed that sound was not audible except at the point of intersection, which creates what we call an enclave.”

The researchers tested the system in a common room with normal reverberations, meaning it could work in various environments like classrooms, vehicles, or even outdoors.

Where Will We See Audible Enclaves?

This technology opens up fascinating possibilities. Museums could deliver exhibit narration to visitors in specific spots without creating audio overlap. Office workers could receive private notifications without disrupting colleagues. Cars could create individual sound zones for each passenger, letting the driver hear navigation instructions while rear passengers enjoy different music.

The applications extend beyond convenience. The same approach could create targeted quiet zones by delivering precisely placed noise-cancellation signals. Hospitals could maintain quiet areas while allowing necessary communication in adjacent spaces—something traditional noise control systems struggle to accomplish.

For now, the researchers can project sound to a target about a meter away, at a volume of roughly 60 decibels, equivalent to normal speech. However, they said both the distance and the volume could be increased by raising the ultrasound intensity.

The current system requires high-intensity ultrasound to produce moderate audio levels due to conversion inefficiency. While the levels used fall within safety guidelines, this aspect needs further refinement.

Audio quality presents another hurdle. The interaction introduces some distortion, which could affect complex audio content. However, the team believes signal processing techniques could compensate for these effects in future versions.

Audible enclaves certainly offer a compelling and exciting solution to a long-standing problem, creating bubbles of sound that exist only where wanted and nowhere else. By focusing sound with laser-like precision, this technology could transform our relationship with audio in shared spaces, making private listening truly private without isolating listeners from their surroundings.

Source: https://studyfinds.org/audible-enclaves-sound-waves-penn-state/

Why ‘fake it till you make it’ at work may be draining your mental health

In the sales industry, “fake it till you make it” isn’t just a saying; it’s often a job requirement. Behind those seemingly genuine smiles and enthusiastic pitches, salespeople are performing complex emotional gymnastics that researchers call emotional labor. According to new international research, this emotional performance is seriously impacting employee mental health and job satisfaction.

Faking your emotions at work may lead to employee burnout and stress. (© Prostock-studio – stock.adobe.com)

A recent study published in Industrial Marketing Management explores how salespeople’s moral character influences how they manage their emotions at work and how this ultimately affects their well-being. Poor employee well-being costs U.S. companies an estimated $500 billion and results in 550 million lost workdays annually, so this is a big deal for both businesses and individuals.

Reports show that about 63% of salespeople struggle with mental health issues, and sales jobs are known for their intense pressure. This has only gotten worse since the pandemic, with salespeople facing new challenges and changing customer expectations.

“We are all under a lot of pressure, a lot of deadlines at work, right?” says study co-author Khashayar Afshar Bakeshloo (Kash) from the University of Mississippi, in a statement. “We wanted to look at the different factors that threaten employees’ mental health and lead to emotional exhaustion. One such factor that was very interesting to us was emotional labor.”

The Hidden Cost of Putting on a Happy Face

Emotional labor is the work of managing one’s emotions to meet job requirements. It comes in two main forms: surface acting and deep acting.

Surface acting is basically putting on a mask and showing emotions you don’t actually feel, like forcing a smile during a tough customer meeting. Deep acting goes further, where you actually try to generate the required emotions internally, like really trying to feel excited about a product you’re selling.

The researchers wanted to know how a salesperson’s moral character affects which approach they use, and how these approaches impact both customer behavior and the salesperson’s well-being.

They surveyed 313 B2B salespeople across various industries in the United States, representing different company sizes and offering various products and services. Most people in the study (72.5%) were men, which is typical in B2B sales.

When Values and Job Requirements Collide

Salespeople who deeply value moral traits as part of who they are (what researchers call “moral identity internalization”) are more likely to try genuinely feeling the emotions their job requires, rather than just faking them.

On the other hand, salespeople who focus more on publicly showing their morality (called “moral identity symbolization”) tend to use both approaches depending on the situation—sometimes genuinely trying to feel the emotions, other times just putting on a show.

Customers can often tell when a salesperson is being fake, and they frequently respond by treating the salesperson poorly or disrespectfully. This negative customer behavior then makes salespeople less satisfied with their jobs, creating a harmful cycle.

“Managing emotions to meet job demands can lead to exhaustion, dissatisfaction, and negative customer reactions,” says study co-author Omar Itani from Lebanese American University. “Job satisfaction is essential for overall well-being, emphasizing the need for supportive workplace cultures.”

In sales roles, where rejection is common, the pressure to perform can lead to significant emotional strain. More than 70% of people working in sales reported struggling with mental health in the 2024 State of Mental Health in Sales report.

“Salespeople are expensive employees,” explains Afshar. “They bring in money for the organization. So, if they miss an opportunity, it means that there’s no money coming in. When a salesperson burns out, it’s not just a loss of the person, but it’s also everything they bring to the company.”

Creating Healthier Work Environments

So, what can employees and employers do? Aligning personal values with job expectations can help salespeople manage emotional labor more effectively. Those in roles that require frequent emotional acting should consider workplaces that support authenticity, mental health resources, and ethical leadership to reduce burnout. Sales managers can work to foster environments like these.

“Communication is the key here,” adds Afshar. “When employees can communicate their problems, they aren’t dealing with problems alone. When they feel safe talking to their managers, their colleagues, it tends to remove some of that burden.”

Source: https://studyfinds.org/fake-it-till-you-make-it-work-mental-health/

Log out or lean in? The way you use social media matters more than how long you scroll

Using social media with more intention can help to protect your mental health. (PeopleImages.com – Yuri A/Shutterstock)

Every few months, another headline warns us about social media’s toxic effects on mental health, followed by calls to digital detox. Yet for many of us, completely unplugging isn’t super realistic. Now, new research from the University of British Columbia suggests we might not have to choose between staying connected and staying mentally healthy; there’s a middle path that could deliver the best of both worlds.

The study, published in the Journal of Experimental Psychology: General, challenges the popular belief that we must cut back on social media to protect our mental health. Instead, learning to use social media differently, focusing on meaningful connections rather than mindless scrolling and social comparison, might be just as helpful for our emotional well-being.

“There’s a lot of talk about how damaging social media can be, but our team wanted to see if this was really the full picture or if the way people engage with social media might make a difference,” says lead study author Amori Mikami, a psychology professor from the University of British Columbia, in a statement.

The Love-Hate Relationship With Social Media

For most young adults, social media is a mixed bag. On one hand, platforms like Instagram and Facebook make it easy to stay in touch with friends, find communities of like-minded people, and get emotional support when needed. On the other hand, these same platforms can increase anxiety, depression, and loneliness when we find ourselves constantly comparing our regular lives to others’ highlight reels or feeling like we’re missing out on what everyone else is doing.

The research team recruited 393 social media users between the ages of 17 and 29 who reported some negative impacts from social media and had some symptoms of mental health concerns. They split these participants into three groups:

  1. A tutorial group that learned healthier ways to use social media
  2. An abstinence group that was asked to stop using social media entirely
  3. A control group that continued their usual social media habits

Over six weeks, researchers tracked participants’ social media use with phone screen time apps and self-reports. They also measured various aspects of mental well-being, including loneliness, anxiety, depression, and fear of missing out (FOMO).

Two Different Paths to Better Mental Health

As you might expect, people in the abstinence group drastically reduced their time on social media. But the tutorial group also cut back on their social media use compared to the control group, even though they were never specifically told to do so. Simply becoming more mindful about social media naturally led them to be more selective about their usage.

Both the tutorial and abstinence groups made fewer social comparisons and did less passive scrolling. While the abstinence group showed the biggest changes, the tutorial group also improved significantly compared to the control group.

When it came to mental health benefits, each approach seemed to help with different things. The tutorial approach was especially good at reducing FOMO and feelings of loneliness. The abstinence approach, meanwhile, was particularly effective at lowering symptoms of depression and anxiety but did not improve loneliness, possibly due to reduced social connections.

“Cutting off social media might reduce some of the pressures young adults feel around presenting a curated image of themselves online. But stopping social media might also deprive young adults of social connections with friends and family, leading to feelings of isolation,” explains Mikami.

Creating a Healthier Social Media Experience

The tutorial approach taught participants how to use social media in ways that boost genuine connection while reducing the stress of constant comparison. Participants learned to:

  • Reflect on when social media made them feel good versus bad
  • Recognize that most posts are carefully curated and don’t reflect real life
  • Unfollow or mute accounts that triggered negative feelings about themselves
  • Actively engage with friends through comments or messages instead of just passively scrolling

Completely stopping social media reduced activity on friends’ pages, which actually predicted greater loneliness. It seems that commenting on friends’ content provides a valuable social connection. However, reducing engagement with celebrity or influencer content predicted lower loneliness and fewer symptoms of depression and anxiety—showing that not all social media activity affects us the same way.

“Social media is here to stay,” says Mikami. “And for many people, quitting isn’t a realistic option. But with the right guidance, young adults can curate a more positive experience, using social media to support their mental health instead of detracting from it.”

Mikami believes these findings could help develop mental health programs and school workshops where young people learn to use social media as a tool for strengthening relationships rather than as a source of stress and comparison.

Don’t beach and booze: Why alcohol makes it easier to get a sunburn

Drinking in the sun can make you unaware that you are getting sunburnt. (STEKLO/Shutterstock)

BOCA RATON, Fla. — When was the last time you got a sunburn? If you’re like nearly a third of American adults who were toasted by the sun at least once last year, you might want to pay attention to a revealing new study about skin cancer risk. Researchers from Florida Atlantic University have found some eye-opening patterns in how Americans think about cancer risk and protect their skin—or don’t.

Your beach cocktail might be making your sunburn worse. Research published in the American Journal of Lifestyle Medicine reveals that more than one in five people who got sunburned were drinking alcohol at the time. In other words, there seems to be a real connection between having drinks and getting burned.

The Skin Cancer Problem You Need to Know About

Skin cancer tops the charts as America’s most common cancer. Millions of cases are diagnosed every year, costing the healthcare system nearly $9 billion annually. While most of us have heard of melanoma (the deadliest type), basal cell carcinoma and squamous cell carcinoma are actually more common.

Despite how common skin cancer is, the study found most Americans aren’t particularly worried about getting it. Only about 10% of people said they were “extremely worried,” while most were just “somewhat” (28.3%) or “slightly” (27.3%) concerned.

Sunburns significantly raise your cancer risk. According to dermatologists, getting just five blistering sunburns between ages 15 and 20 increases your melanoma risk by a whopping 80%. That’s a massive jump from something many people experience regularly.

Who Gets Burned? The Surprising Patterns

The research team surveyed over 6,000 American adults about their sun habits and sunburn experiences. Rich people get more sunburns. Yes, you read that correctly. People earning $200,000+ per year were four times more likely to report sunburns than those in the lowest income bracket. This completely flips what you might expect: wouldn’t wealthier people be more informed and have better access to sun protection?

Education doesn’t help either. College graduates and those with advanced degrees reported more sunburns than people with a high school diploma or less.

Other patterns:

  • Young adults (18-39) burn more often than older folks
  • Men get more sunburns than women
  • White Americans report more sunburns than Black or Hispanic Americans

“While Hispanics and Black Americans generally report lower rates of sunburn, Hispanics often perceive greater benefits of UV exposure, which increases their risk,” says study author Lea Sacca, in a statement.

Why might wealthy, educated people get more sunburns? They probably spend more time on outdoor vacations or leisure activities. Think about it: boating, skiing, beach vacations, and outdoor sports are all activities more accessible to those with higher incomes and more flexible work schedules.

Source: https://studyfinds.org/alcohol-easier-to-get-a-sunburn/

Is your money gone before it arrives? The sad reality of American paychecks

Most working Americans have already spent more than half their paycheck before they even get it. This financial balancing act, revealed in a recent survey, shows how millions of workers may be finding themselves counting money they haven’t yet received just to keep up with basic expenses.

A survey of 2,000 employed Americans making less than $75,000 annually shows what happens to the modern paycheck—where it goes, how fast it disappears, and how many people need to plan carefully just to make it through each month.

The poll, conducted by Talker Research and commissioned by EarnIn, found that 59% of Americans map out which bills to pay first while waiting for payday, with 51% of their money already earmarked before it hits their account. This happens mainly because living costs don’t match what people earn (44%) and bill due dates are scattered throughout the month (31%).

Past-due bills are another big reason people spend money before it arrives, accounting for 38% of pre-spent funds. Only 40% of those surveyed keep up with all their bills, while 55% typically juggle between one and four overdue bills every month.

When payday finally arrives, people know exactly where the money needs to go. Housing costs like rent or mortgage payments come first for 56% of respondents, then necessities like food and medicine (51%). Utility bills follow at 38%, with catching up on overdue bills in fourth place at 29%.

Three Days to Empty

The money that does arrive disappears quickly. Americans spend about 43% of their paycheck within just three days of getting it. When you add this to the 51% that’s already spoken for before arrival, very little remains for the rest of the pay period.

This quick drain creates a cycle of stress that most Americans find themselves stuck in. Only 20% of respondents said they don’t run out of money or need to tighten their belt before their next check comes—meaning 80% feel the squeeze as payday approaches.

For those caught short at the end of each pay cycle, the effects hit home: 62% struggle to buy groceries, 30% have trouble paying major bills, another 30% can’t cover smaller bills, and 16% find it hard to afford medicine and make loan payments.

Budget Advice vs. Real Life

The survey compared Americans’ actual spending with the popular 50/30/20 budget rule—which suggests putting 50% toward needs, 30% toward wants, and 20% into savings. The results show the gap between this advice and what people actually face.

On average, respondents put 64% of their money toward basic needs like food, bills, and housing—far more than the recommended 50%. Meanwhile, “wants” or personal spending gets just 16% of their income, and savings also account for only 16% of the average paycheck.
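
To make the gap concrete, here is a minimal sketch comparing the rule’s allocation with the survey’s reported averages (the $2,000 paycheck is a hypothetical figure, not from the survey):

# Hypothetical paycheck; the 50/30/20 rule versus the survey's 64/16/16 averages.
paycheck = 2000.00  # illustrative amount, not from the survey

rule = {"needs": 0.50, "wants": 0.30, "savings": 0.20}
reported = {"needs": 0.64, "wants": 0.16, "savings": 0.16}

for category in rule:
    ideal = paycheck * rule[category]       # what the 50/30/20 rule recommends
    actual = paycheck * reported[category]  # what respondents actually allocate
    print(f"{category:>8}: rule ${ideal:,.2f} vs. reported ${actual:,.2f}")

On a $2,000 check, that works out to $1,280 going to needs instead of the recommended $1,000, with wants and savings each getting just $320.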

The savings picture looks even worse on closer inspection. More than half (56%) of those surveyed said less than 10% of their money goes into savings, while 23% couldn’t remember when they last saved 20% as the budget rule suggests.

When money runs low before the next check arrives, Americans use various tactics to get by. Nearly 39% pick up side hustles for extra cash, while 31% ask family for help and 28% turn to credit cards.

Worryingly, 14% of respondents said they have nowhere to turn when they need more money—showing a group of people living with extreme money troubles and no safety net.

Banking on Help

Banks, which might seem like obvious helpers in this situation, offer few solutions. Only 5% of respondents can get their paycheck early through their bank, and even fewer (4%) can access early pay through their job.

“In today’s world, employees shouldn’t have to wait days to access the money they’ve already earned,” said an EarnIn spokesperson. “People deserve financial solutions that provide faster access to their pay—regardless of where they bank—so they can manage their money on their own terms, not their bank’s schedule.”

Despite limited help from banks, Americans stay loyal to them for years. The average person has used the same bank for nine years, with 14% reporting relationships lasting between 19 and 20 years.

This loyalty seems based more on habit than benefits. More than half (57%) stay with their bank simply because it feels familiar. Only 20% said they stay because their bank lets them get their money sooner.

Getting Out of the Paycheck-to-Paycheck Life

The survey asked how getting paychecks a bit earlier might ease financial pressure. If Americans could get paid up to two days earlier than usual, 34% said they could pay bills on time, and 29% thought they would worry less about money.

Additionally, 19% said earlier access would help them pay rent on time, while 15% could save more. Overall, 56% felt that getting their paycheck up to two days earlier would make them feel more secure about their finances.

For many, the standard two-week or monthly pay cycle creates roadblocks to financial stability, forcing even careful people to make tough choices about which necessities get paid first. This mismatch between when money is earned and when bills come due adds to financial worry.

The gap between budget advice and real spending patterns further shows the money pressures facing working Americans. When nearly two-thirds of income must cover just the basics, building savings becomes much harder.

The findings also raise questions about how employers and banks might either help reduce or accidentally increase these pressures. With so few workers able to access early pay options, there’s room for new approaches in payroll and banking that better fit people’s actual financial lives.

Financial advice often focuses on budgeting skills and personal habits, but this survey suggests that timing issues like pay frequency and bill due dates matter just as much. Solutions that fix these broader issues may work better than putting all the burden on individual choices.

Source: https://studyfinds.org/sad-reality-american-paycheck/

The burned-out generation: Americans feeling peak stress earlier than ever

(© RawPixel.com – stock.adobe.com)

“I’m completely burned out”—once a phrase associated with decades of career advancement and family responsibilities—is now commonly heard from professionals in their twenties. According to a new survey, 25% of Americans experience burnout before age 30, challenging traditional assumptions about when life’s pressures reach their peak and raising important questions about how modern stressors affect different generations.

The poll of 2,000 adults from Talker Research examined how the cumulative stress of the past decade has affected Americans across generations. While the average American experiences peak burnout at approximately 42 years old, the picture looks dramatically different for younger adults. Gen Z and millennial respondents, currently aged 18 to 44, reported reaching their highest point of stress at an average age of just 25—a finding that suggests fundamental changes in how modern life impacts mental well-being across age groups.

The finding that a quarter of Americans experience burnout before age 30 represents a significant shift from traditional life course expectations. Historically, peak stress periods were often associated with mid-life challenges such as simultaneously managing career advancement, child-rearing, and caring for aging parents. The early burnout phenomenon suggests that younger generations may be facing an accelerated or compressed experience of life stressors.

The state of American stress

Currently, the average person reports operating at half their stress capacity—already a concerning level for overall well-being. Even more troubling, 42% of respondents indicated feeling even more stressed than this baseline, with a notable generational divide emerging in the data. Gen Z and millennial participants reported significantly higher current stress levels (51%) compared to their Gen X and older counterparts (37%).

Ehab Youssef, a licensed clinical psychologist, mental health researcher and writer at Mentalyc, provided insight into why stress is peaking earlier than ever.

“As a psychologist, I’ve worked with clients across different generations, and I can tell you stress doesn’t look the same for everyone,” Youssef told Talker Research. “It’s fascinating — and a little concerning — to see how younger Americans are experiencing peak stress earlier than ever before. I see it in my practice all the time: twenty-somethings already feeling completely burned out, something I never used to see at that age.

“I often hear from my younger clients, ‘Why does life feel so overwhelming already?’ They’re not just talking about work stress; they’re feeling pressure from every direction — career, finances, relationships, even social media expectations. Compare this to my older clients, who often describe their peak stress happening later in life — maybe in their 40s or 50s, when financial or family responsibilities became heavier. The shift is real, and it’s taking a toll.”

The primary drivers of burnout

When asked to identify the primary causes of their burnout, financial concerns topped the list, with 30% of respondents ranking money matters as their number one stressor. This was followed closely by politics (26%), work-related pressures (25%), and physical health concerns (23%).

The data reveals interesting generational differences in what’s causing the most stress. For younger Americans (Gen Z and millennials), work represents the greatest source of stress (33%), followed by finances (27%) and mental health (24%). In contrast, older generations (Gen X, baby boomers, and the silent generation) identified politics as their most significant concern (27%), with physical health a close second (24%).

Relationships of all kinds are also contributing significantly to American stress levels. One in six respondents who identified either their love life or family relationships as stressors ranked these areas as their top source of burnout (18% each).

Source: https://studyfinds.org/the-burnt-out-generation-americans-feeling-peak-stress-earlier-than-ever/

100,000-year-old cultural melting pot discovered in Israeli cave may rewrite early human history

Tinshemet Cave during the excavations. (Credit: Yossi Zaidner)

In a limestone cave in Israel, archaeologists have uncovered evidence of what might be the oldest case of cultural sharing between different human species. The discovery reveals that around 100,000 years ago, early Homo sapiens and their Neanderthal-like neighbors weren’t just occasionally bumping into each other—they were participating in a shared cultural world, complete with identical toolmaking traditions, hunting practices, and even burial rituals. This finding turns the traditional story of human evolution on its head, suggesting that cultural exchange between different human species was the rule, not the exception, in our ancient past.

The findings at Tinshemet Cave, published in Nature Human Behaviour, provide a rare glimpse into a pivotal period when multiple human species coexisted in the Middle East. The site has yielded fully articulated human skeletons carefully positioned in burial positions, thousands of ochre fragments transported from distant sources, stone tools made with consistent manufacturing techniques, and animal bones that reveal specific hunting preferences—all dating to what scientists call the mid-Middle Paleolithic period (130,000-80,000 years ago).

“Our data show that human connections and population interactions have been fundamental in driving cultural and technological innovations throughout history,” says lead researcher Prof. Yossi Zaidner of the Hebrew University in Jerusalem, in a statement.

The discovery is especially significant because the Levant region (modern-day Israel, Lebanon, Syria, and Jordan) served as a crossroads where different human populations met. Previous discoveries in the region had uncovered fossils with mixed physical characteristics, suggesting that interbreeding occurred between Homo sapiens migrating out of Africa and local Neanderthal-like populations.

What makes the Tinshemet Cave findings transformative is that they demonstrate these different-looking humans weren’t just meeting and mating—they were sharing their unique cultural behaviors and traditions across population boundaries.

Located just 10 kilometers from another significant archaeological site called Nesher Ramla (where Neanderthal-like fossils were previously discovered), Tinshemet Cave preserves evidence of sustained human occupation over thousands of years. The research team excavated multiple layers of sediments inside the cave and on its terrace, uncovering a wealth of artifacts that tell a cohesive story of sophisticated human activity.

Among the most striking discoveries are the human burials. The excavations revealed at least five individuals, including two complete articulated skeletons—one adult and one child. The bodies were deliberately placed in a fetal position on their sides with bent limbs, a burial position remarkably similar to contemporaneous burials found at other Middle Paleolithic sites in the region, including the famous Qafzeh and Skhul caves.

These burials represent the earliest known examples of intentional human burial anywhere in the world, predating similar practices in Europe and Africa by tens of thousands of years. More importantly, they show that diverse human populations were treating their dead with similar ceremonial care, suggesting shared symbolic behaviors and possibly shared beliefs.

Another fascinating discovery was the abundant presence of ochre—a naturally occurring mineral pigment that produces red, yellow, and purple hues. The research team recovered more than 7,500 ochre fragments throughout the site, with the highest concentrations found in layers containing human burials. Chemical analysis revealed that these ochre materials came from at least four different sources, some located as far as 60-80 kilometers away in Galilee, and others possibly from the central Negev, more than 100 kilometers to the south.

The significant effort invested in obtaining these pigments from distant sources suggests their importance in the lives of these ancient people. The presence of large chunks of ochre near human remains—including a 4-5 cm piece found between the legs of one buried individual—hints at their ritual significance. Evidence of heat treatment to enhance the red color of some ochre pieces further reveals sophisticated knowledge and intentional manipulation of these materials.

Stone tool production at Tinshemet Cave demonstrates another dimension of cultural uniformity. The researchers analyzed nearly 2,800 stone artifacts and found that a specific flint-knapping technique known as the centripetal Levallois method dominated tool production. This method, which involves careful preparation of a stone core to produce standardized flakes, appears consistently across mid-Middle Paleolithic sites in the region.

This technological consistency is particularly remarkable because it differs significantly from both earlier and later stone tool traditions in the Levant. Earlier Middle Paleolithic populations (around 250,000-140,000 years ago) primarily used methods to produce blade-like tools, while later populations (after 80,000 years ago) employed a more diverse set of techniques. The dominance of the centripetal Levallois method during this middle period represents a distinct technological tradition shared across populations.

Analysis of animal bones from the site reveals a third element of behavioral uniformity: a focus on hunting large game animals. Unlike earlier and later periods, when smaller prey like gazelles dominated the diet, the mid-Middle Paleolithic hunters at Tinshemet and similar sites showed a clear preference for larger ungulates, particularly aurochs (wild cattle) and equids (horse-like animals). This pattern suggests either a shift in hunting strategies or different approaches to transporting animal resources, possibly connected to changes in settlement patterns.

To establish the age of the findings, the research team employed multiple dating techniques, including thermoluminescence dating of burnt flint, optically stimulated luminescence dating of quartz grains in the sediments, and uranium-series dating of snail shells and flowstones. These methods consistently dated the main human occupation layers to approximately 97,000-106,000 years ago, placing them firmly within the mid-Middle Paleolithic period.

The timing corresponds to a warm interglacial period known as Marine Isotope Stage 5, when climatic conditions in the Levant were relatively favorable. Pollen analysis from the lowest layers of the cave indicates a Mediterranean open forest environment with wide-spaced trees, small shrubs, and herbs dominated by evergreen oak.

Perhaps most intriguing about the Tinshemet Cave discovery is what it suggests about interactions between different human populations. “These findings paint a picture of dynamic interactions shaped by both cooperation and competition,” says co-lead author Dr. Marion Prévost.

Scientists have long associated specific behaviors or technologies exclusively with particular human species. Now we have strong evidence that points to a landscape of interaction, where cultural innovations spread across population boundaries through social learning and exchange.

As excavations at Tinshemet Cave continue, researchers hope to uncover additional evidence about the lives and interactions of these ancient people. The site has already yielded remarkable insights into a crucial chapter of human prehistory—a time when different human populations met, exchanged ideas, and created shared traditions despite their physical differences. What began as a simple archaeological survey has evolved into a profound reconsideration of what it means to be human, showing that cultural connections can transcend biological boundaries.

Source: https://studyfinds.org/100000-year-old-cultural-melting-pot-discovered-israeli-cave-early-human-history/

America is becoming a nation of homebodies

(Photo by Dragana Gordic on Shutterstock)

In his February 2025 cover story for The Atlantic, journalist Derek Thompson dubbed our current era “the anti-social century.” He isn’t wrong. According to our recent research, the U.S. is becoming a nation of homebodies.

Using data from the American Time Use Survey, we studied how people in the U.S. spent their time before, during and after the pandemic.

The COVID-19 pandemic did spur more Americans to stay home. But this trend didn’t start or end with the pandemic. We found that Americans were already spending more and more time at home and less and less time engaged in activities away from home stretching all the way back to at least 2003.

And if you thought the end of lockdowns and the spread of vaccines led to a revival of partying and playing sports and dining out, you would be mistaken. The pandemic, it turns out, mostly accelerated ongoing trends.

All of this has major implications for traffic, public transit, real estate, the workplace, socializing and mental health.

Life inside

The trend of staying home is not new. There was a steady decline in out-of-home activities in the two decades leading up to the pandemic.

Compared with 2003, Americans in 2019 spent nearly 30 minutes less per day on out-of-home activities and eight fewer minutes a day traveling. There could be any number of reasons for this shift, but advances in technology, whether it’s smartphones, streaming services or social media, are likely culprits. You can video chat with a friend rather than meeting them for coffee; order groceries through an app instead of venturing to the supermarket; and stream a movie instead of seeing it in a theater.

Of course, there was a sharp decline in out-of-home activities during the pandemic, which dramatically accelerated many of these stay-at-home trends.

Outside of travel, time spent on out-of-home activities fell by over an hour per day, on average, from 332 minutes in 2019 to 271 minutes in 2021. Travel, excluding air travel, fell from 69 to 54 minutes per day over the same period.

But even after the pandemic lockdowns were lifted, out-of-home activities and travel through 2023 remained substantially depressed, far below 2019 levels. There was a dramatic increase in remote work, online shopping, time spent using digital entertainment, such as streaming and gaming, and even time spent sleeping.

Time spent outside the home has rebounded since the pandemic, but only slightly. There was hardly any recovery in out-of-home activities from 2022 to 2023, meaning 2023 levels of out-of-home activity and travel remained far below those of 2019. On the whole, Americans spent nearly 1.5 hours less per day outside their homes in 2023 than they did in 2003.

While hours worked from home in 2022 were less than half of what they were in 2021, they’re still about five times what they were ahead of the pandemic. Despite this, only about one-quarter of the overall travel time reduction is due to less commuting. The rest reflects other kinds of travel, for activities such as shopping and socializing.

Ripple effects

This shift has already had consequences.

With Americans spending more time working, playing and shopping from home, demand for office and retail space has fallen. While there have been some calls by major employers for workers to spend more time in the office, research suggests that working from home in the U.S. held steady between early 2023 and early 2025 at about 25% of paid work days. As a result, surplus office space may need to be repurposed as housing and for other uses.

There are advantages to working and playing at home, such as avoiding travel stress and expenses. But it has also boosted demand for extra space in apartments and houses, as people spend more time under their own roof. It has changed travel during the traditional morning – and, especially, afternoon – peak periods, spreading traffic more evenly throughout the day but contributing to significant public transit ridership losses. Meanwhile, more package and food delivery drivers are competing with parked cars and bus and bike lanes for curb space.

Perhaps most importantly, spending less time out and about in the world has sobering implications for Americans well beyond real estate and transportation systems.

Research we’re currently conducting suggests that more time spent at home has dovetailed with more time spent alone. Suffice it to say, this makes loneliness, which stems from a lack of meaningful connections, a more common occurrence. Loneliness and social isolation are associated with increased risk for early mortality.

Because hunkering down appears to be the new norm, we think it’s all the more important for policymakers and everyday people to find ways to cultivate connections and community in the shrinking time they do spend outside of the home.

Source: https://studyfinds.org/america-becoming-nation-of-homebodies/

Happy husband or wife really could be the key to a stress-free life

Your significant other’s positive emotions can be contagious, especially in older couples. (Darren Baker/ Shutterstock)

When your spouse is in a good mood, you might feel happier too, but according to new research, their emotional state could be affecting you on a much deeper level. Scientists have discovered that when your partner experiences positive emotions, it might actually lower your cortisol levels, the primary stress hormone in your body, regardless of how you yourself are feeling. This biological connection between older couples adds a whole new dimension to what it means to be in a relationship.

“Having positive emotions with your relationship partner can act as a social resource,” says lead study author Tomiko Yoneda, an assistant professor of psychology at the University of California, Davis, in a statement.

The Aging Body and Stress Management

Study results, published in Psychoneuroendocrinology, are especially telling for older adults in committed relationships. As we get older, our bodies become worse at regulating stress responses, making us more vulnerable to the harmful effects of high cortisol. But a partner who maintains positive emotions might act as a biological buffer against stress.

The research team analyzed data from 321 older couples from Canada and Germany. These weren’t new relationships; the average couple had been together for nearly 44 years. Each participant, aged between 56 and 87, completed surveys multiple times daily for a week, reporting their emotions while also providing saliva samples to measure cortisol. Partners completed surveys at the same time but separately, so they couldn’t influence each other’s responses.

Your Mood, My Body

When people reported feeling more positive than usual, their cortisol levels were lower. But when someone’s partner reported more positive emotions than usual, that person’s cortisol was also lower, regardless of how they themselves were feeling. In simple terms, your partner’s good mood might be doing your body good, even if you’re not sharing their happiness.

This connection extended beyond moment-to-moment measurements to total daily cortisol output. When someone’s partner reported higher positive emotions than usual throughout the day, that person showed lower overall cortisol for the day. This link was stronger for older participants and those who reported being happier in their relationships. In some cases, the effect of a partner’s emotions on cortisol was even stronger than the effect of one’s own emotions.

While a partner’s positive emotions were linked to lower cortisol, the researchers didn’t find any connection between a partner’s negative emotions and cortisol levels. Yoneda explained that this makes sense because older adults often develop ways to shield their partners from the physiological effects of negative emotions.

Quality Relationships Make a Difference

The emotional climate of your relationship may be an overlooked factor in your physical health. When your partner tends toward happiness, interest, or relaxation, their emotional state could be protecting your stress physiology.

This doesn’t mean you should pressure your partner to be constantly happy. Rather, these findings point to potential health benefits that come from fostering positive emotional experiences together. Creating opportunities for shared good times might be more than just relationship maintenance; it could be a mutual health boost.

“Relationships provide an ideal source of support, especially when those are high-quality relationships,” says Yoneda. “These dynamics may be particularly important in older adulthood.”

The association between a partner’s positive emotions and lower cortisol was most pronounced for people who reported higher relationship satisfaction. In happy relationships, partners may be more tuned in to each other’s emotional states.

Yoneda noted that these results fit with psychological theories suggesting positive emotions help us act more fluidly in the moment. These experiences can create positive feedback loops that enhance this capability over time. People in relationships can share these benefits when they experience positive emotions together.

Your partner’s happiness might be doing more than lighting up the room. It could be helping regulate your stress physiology in ways that boost your long-term health. In long-term relationships, emotions truly become a shared resource. What’s yours really is mine, right down to the hormonal level. So perhaps the age-old advice to “choose a happy partner” carries more biological wisdom than we ever realized.

Source: https://studyfinds.org/lower-stress-as-you-age-happy-partner/

Your clothes could soon charge your phone: New thermoelectric yarn makes it possible

Conceptual image of a man walking on the street with his smartphone being charged by his hoodie. (AI-generated image created by StudyFinds)

Forgot to bring your charger on vacation? What if your clothing could generate electricity from the heat your body naturally produces? This futuristic concept is now approaching reality thanks to scientists at Chalmers University of Technology in Sweden and Linköping University.

Researchers say the remarkable new textile technology converts body heat into electricity through thermoelectric effects, potentially powering wearable devices from your clothing. The innovation, described in an Advanced Science paper, centers on a newly developed polymer called poly(benzodifurandione), or PBFDO, which serves as a coating for ordinary silk yarn.

“The polymers that we use are bendable, lightweight and are easy to use in both liquid and solid form. They are also non-toxic,” says study first author Mariavittoria Craighero, a doctoral student at the Department of Chemistry and Chemical Engineering at Chalmers, in a statement.

Unlike previous attempts at creating thermoelectric textiles, this breakthrough addresses a critical barrier that has long hampered progress: the lack of air-stable n-type polymers. These materials are characterized by their ability to move negative charges and are essential counterparts to the more common p-type polymers in creating efficient thermoelectric devices.

“We found the missing piece of the puzzle to make an optimal thread – a type of polymer that had recently been discovered. It has outstanding performance stability in contact with air, while at the same time having a very good ability to conduct electricity. By using polymers, we don’t need any rare earth metals, which are common in electronics,” explains Craighero.

How Thermoelectric Textiles Work

Thermoelectric generators work by converting temperature differences into electrical energy. When one side of a thermoelectric material is warmer than the other, electrons move from the hot side to the cold side, generating an electrical current. The human body continuously generates heat, creating natural temperature gradients between the skin and the surrounding environment.

For efficient thermoelectric generation, both p-type (positive) and n-type (negative) materials must work together. While p-type materials have been well-established in previous research, creating stable n-type materials has been a persistent challenge. Most n-type organic materials degrade rapidly when exposed to oxygen in the air, often becoming ineffective within days.

What makes this development particularly exciting is the remarkable stability of PBFDO-coated silk. Unlike similar materials that degrade within days when exposed to air, these new thermoelectric yarns maintain their performance for over 14 months under normal conditions without any protective coating. The researchers project a half-life of 3.2 years for these materials – an unprecedented achievement for this type of organic conductor.

Beyond electrical performance, the mechanical properties of the PBFDO-coated silk are equally impressive. The coated yarn can stretch up to 14% before breaking and, more importantly for everyday use, it can withstand machine washing.

“After seven washes, the thread retained two-thirds of its conducting properties. This is a very good result, although it needs to be improved significantly before it becomes commercially interesting,” states Craighero.

The material also demonstrates remarkable temperature resilience. During testing, the researchers found that PBFDO remains flexible even when cooled with liquid nitrogen to extremely low temperatures. This exceptional mechanical stability allows the material to withstand various environmental conditions and physical stresses that would be encountered in real-world use.

The Future of Daily Wear?

To showcase the technology’s potential, the research team created two different thermoelectric textile devices: a thermoelectric button and a larger textile generator with multiple thermoelectric legs.

The thermoelectric button demonstrated an output of about 6 millivolts at a temperature difference of 30 degrees Celsius. Meanwhile, the larger textile generator achieved an open-circuit voltage of 17 millivolts at a temperature difference of 70 degrees Celsius.

With a voltage converter, this could help power ultra-low-energy devices, such as certain types of sensors. However, the current power output—0.67 microwatts at a 70-degree temperature difference—is far below what would be required for USB charging of standard electronics.
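
For a sense of scale, here is a back-of-the-envelope calculation using the figures above (a rough sketch, not a model of the actual device):

# Reported figures for the larger textile generator (from the article):
V = 17e-3      # open-circuit voltage at a 70 C temperature difference, in volts
dT = 70.0      # temperature difference across the generator, in kelvin
P = 0.67e-6    # reported power output, in watts

S_eff = V / dT  # effective voltage generated per kelvin of temperature difference
print(f"effective output: {S_eff * 1e3:.2f} mV per kelvin")  # about 0.24 mV/K

# A typical USB phone charger delivers on the order of 5 W (our assumption for scale):
print(f"shortfall versus 5 W charging: {5 / P:.1e}x")  # roughly 7.5 million times

Even with an ideal converter, millivolt-scale output at tens of degrees of temperature difference explains why sensors, not phones, are the realistic near-term target.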

While these power outputs mark a major step forward in thermoelectric textiles, it’s important to note that the temperature differences used in lab tests—up to 70 degrees Celsius—are significantly higher than what would typically be experienced in everyday clothing. This means real-world performance may be lower than laboratory results suggest.

Potential Uses in Healthcare and Wearable Tech

Despite current limitations in power output, the technology shows particular promise for healthcare applications. Small sensors that monitor vital signs like heart rate, body temperature, or movement patterns could potentially operate using this technology, eliminating the need for battery changes or recharging.

For patients with chronic conditions requiring continuous monitoring, self-powered sensors embedded in clothing could provide valuable data without the hassle of managing battery life. Similarly, fitness enthusiasts could benefit from wearables that never need charging, seamlessly tracking performance metrics during activities.

Beyond health monitoring, the technology could eventually support other low-power functions in smart clothing, such as environmental sensing, location tracking, or simple LED indicators. As power conversion efficiency improves, applications could expand to include more power-hungry features.

The Challenges Ahead

Currently, the production process is time-intensive and not suitable for commercial manufacturing, with the demonstrated fabric requiring four days of manual needlework to produce.

“We have now shown that it is possible to produce conductive organic materials that can meet the functions and properties that these textiles require. This is an important step forward. There are fantastic opportunities in thermoelectric textiles and this research can be of great benefit to society,” says Christian Müller, Professor at the Department of Chemistry and Chemical Engineering at Chalmers University of Technology and research leader of the study.

One key challenge identified through computer simulations is the electrical contact resistance between components. Reducing this resistance could potentially increase power output by three times or more. The researchers also investigated how factors like thermoelectric leg length and thread count affect performance, providing valuable insights for future designs.
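
The simulation result is easier to see with the standard matched-load relation for a generator: the maximum extractable power is P_max = V_oc² / (4 × R_internal), so any drop in internal resistance (threads plus contacts) raises the power ceiling proportionally. Below is a minimal sketch using made-up resistance values chosen only to illustrate the effect; the paper's simulated values are not quoted here.

```python
# Illustration of why contact resistance caps thermoelectric output:
# at a matched load, P_max = V_oc^2 / (4 * R_internal), where
# R_internal = R_legs + R_contacts. Resistance values below are
# hypothetical, chosen only to illustrate the scaling.

def p_max(v_oc: float, r_internal: float) -> float:
    """Maximum power (watts) delivered to a matched load."""
    return v_oc**2 / (4 * r_internal)

v_oc = 17e-3        # volts, open-circuit voltage from the article
r_legs = 30.0       # ohms, hypothetical resistance of the coated threads
r_contacts = 80.0   # ohms, hypothetical contact resistance

before = p_max(v_oc, r_legs + r_contacts)
after = p_max(v_oc, r_legs + r_contacts / 10)   # contacts improved 10x

print(f"power gain: {after / before:.1f}x")     # ~2.9x
```

With these illustrative numbers, cutting contact resistance tenfold nearly triples the output, in line with the "three times or more" estimate above.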

Interest in these types of conducting polymers has grown significantly in recent years. Their chemical structure allows them to conduct electricity much as a semiconductor like silicon does, while keeping the physical properties of plastics, including flexibility. Research on conducting polymers is ongoing in many areas such as solar cells, Internet of Things devices, augmented reality, robotics, and various types of portable electronics.

Looking Forward

What’s clear is that there is a viable pathway toward practical thermoelectric textiles that can function reliably in everyday conditions. By addressing both the electrical and mechanical requirements for textile integration, this work bridges the gap between laboratory demonstrations and potential real-world applications.

The development of these polymers also aligns with sustainability goals by eliminating the need for rare earth metals commonly used in electronics. With further refinement and scaling of the manufacturing process, this technology could eventually lead to clothing that powers our devices using nothing but our body heat.

For widespread adoption, researchers will need to develop automated production methods that can efficiently coat and assemble the thermoelectric textiles at scale. Additionally, improving power output while maintaining stability remains a critical goal for future research.

Source: https://studyfinds.org/your-clothes-could-soon-charge-your-phone-new-thermoelectric-yarn/

Tesla vs. BYD: A look inside their cutting-edge EV batteries

Which electric vehicle giant has a better battery? (gguy/Shutterstock)

In the race to dominate the electric vehicle market, two companies stand above the rest: Tesla and China’s BYD. While Tesla pioneered the use of lithium-ion batteries and leads EV sales in North America and Europe, BYD began as a battery manufacturer before expanding into vehicles, surpassing Tesla in global EV sales in 2024. New research from multiple German universities gives us a look at the battery technology powering these automotive giants by directly comparing Tesla’s 4680 cylindrical cell with BYD’s Blade prismatic cell.

The research, published in Cell Reports Physical Science, reveals rare insights into the design, performance, and manufacturing processes of these cutting-edge batteries. By dismantling and analyzing both cell types, the researchers found major differences in energy density, thermal efficiency, and material composition that show the distinct design philosophies of each manufacturer.

“There is very limited in-depth data and analysis available on state-of-the-art batteries for automotive applications,” says lead study author Jonas Gorsch from RWTH Aachen University, in a statement.

For the average consumer, these differences translate into real-world impacts on driving range, charging speed, vehicle cost, and safety. The study offers a window into how battery technology, the heart of any electric vehicle, is evolving through different approaches to solve the same fundamental challenge: how to store more energy safely and efficiently while reducing costs.

The Tale of Two Battery Designs

Tesla’s 4680 cell (named for its 46mm diameter by 80mm height dimensions) represents the company’s latest innovation in battery design. It’s significantly larger than previous cells used in the Model 3, allowing for higher energy density and reduced production costs. The “tabless” design further cuts costs by eliminating the need for certain manufacturing steps.

BYD’s Blade cell takes a completely different approach, using a rectangular prism shape with dimensions of 965mm in length, 90mm in height, and 14mm in thickness. This long, thin design prioritizes safety and cost-effectiveness while offering surprisingly competitive performance metrics despite using different materials.

The most striking difference between the cells is their chemistry. Tesla opts for NMC811 (a nickel-manganese-cobalt blend with high nickel content), delivering impressive energy density of 241 Wh/kg and 643 Wh/l. In simpler terms, Tesla packs more energy into the same weight and volume. BYD uses LFP (lithium iron phosphate), which achieves a more modest 160 Wh/kg and 355 Wh/l. This choice reflects BYD’s focus on cost-effectiveness and longevity over maximum range.
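
A short sketch shows what those cell-level figures mean in practice. The 75 kWh pack size is a hypothetical example chosen for scale, and real packs add housing, cooling, and electronics that the cell-only numbers ignore.

```python
from math import pi

# Energy densities reported in the study
cells = {
    "Tesla 4680 (NMC811)": {"wh_per_kg": 241, "wh_per_l": 643},
    "BYD Blade (LFP)":     {"wh_per_kg": 160, "wh_per_l": 355},
}

# What the cell-level numbers imply for a hypothetical 75 kWh pack
# (cell mass and volume only; pack overhead is ignored here).
pack_wh = 75_000
for name, d in cells.items():
    print(f'{name}: {pack_wh / d["wh_per_kg"]:.0f} kg of cells, '
          f'{pack_wh / d["wh_per_l"]:.0f} liters')
# Tesla 4680 (NMC811): 311 kg of cells, 117 liters
# BYD Blade (LFP): 469 kg of cells, 211 liters

# Cell volumes from the dimensions quoted in the article (cm^3)
v_4680 = pi * (4.6 / 2) ** 2 * 8.0   # 46 mm diameter x 80 mm cylinder
v_blade = 96.5 * 9.0 * 1.4           # 965 x 90 x 14 mm prism
print(f"one Blade cell holds ~{v_blade / v_4680:.0f}x the volume of a 4680")
```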

When examining heat management, the researchers found that the Tesla 4680 cell generates twice the heat per volume compared to the BYD Blade cell at the same charging rate. This difference impacts the cooling systems needed for fast charging and has implications for battery longevity and safety. Overall, the BYD cell’s lower heat generation makes its temperature easier to manage, an efficiency advantage at the pack level.

Looking Inside: Construction and Materials

When researchers took apart the batteries, they found some major differences in how Tesla and BYD build their cells. Inside BYD’s Blade battery, the key components (the cathode and anode layers) are stacked in a Z-folded pattern with many thin separator layers in between. This design makes the battery safer and more durable, but it also means that electricity has to travel a longer path through the battery, which can reduce efficiency. To keep everything securely in place, BYD uses a special lamination method, sealing the edges of the separator (the thin layer that prevents short circuits between the positive and negative sides).

Tesla takes a different approach with its 4680 battery, using a “jelly roll” design, sort of like rolling up a long strip of paper. This setup helps electricity flow more directly, improving performance. One noticeable feature is a small empty space in the center, which likely helps with manufacturing and connecting the battery’s internal parts.

Unlike many other battery manufacturers that use ultrasonic welding, both Tesla and BYD rely on laser welding to connect their thin electrode foils. Despite the BYD cell being significantly larger than Tesla’s, both batteries have a similar proportion of non-active components, such as current collectors, housing, and busbars.

Source: https://studyfinds.org/tesla-vs-byd-ev-batteries/

An ugly truth? Attractive workers earn $20K more annually than ‘unattractive’ colleagues, survey shows

(Photo by PeopleImages.com – Yuri A)

We all know the saying “Don’t judge a book by its cover,” but a new survey suggests that in the workplace, your “cover” might matter more than you think — especially when it comes to income. A recent survey asked 1,050 Americans about “pretty privilege” – the idea that better-looking people get more advantages in life – and found that a whopping 81.3% believe it exists at work.

The results show how our appearance might be influencing everything from who gets hired to who gets that next big promotion.

Pretty privilege isn’t just limited to modeling or acting jobs. Eight in ten people surveyed believe attractive coworkers are more likely to be promoted, hired, or given raises. Even more telling, 66.9% of people have actually seen someone treated unfairly or talked about negatively because of how they look.

The survey, conducted by Standout CV, shows that the pressure to look good at work is real. About 64.2% of people feel pushed to change their natural features – like straightening their hair or wearing makeup – just to fit in at the office. And 83.4% think colleagues who put more effort into their appearance are seen as more capable professionals.

How We See Ourselves
When asked to rate their own workplace attractiveness on a scale of 1 to 10, the average person gave themselves a 7.7. Men seemed more confident about their looks, with 37.5% rating themselves a 9 or perfect 10, compared to only 27.4% of women.

These self-ratings revealed a lot about career experiences. Nearly half (46%) of people who rated themselves as unattractive (scoring 1-3) said their looks had hurt their careers – that’s five times higher than the average of 7.6%.

On the flip side, those who considered themselves good-looking (rating above 7) were likely to say their appearance helped them professionally (60.7%). This number jumped to 66.8% for those who gave themselves a 9 or 10.

People who saw themselves as average lookers (rating 4-6) were most likely to say their appearance had no impact on their work life (38% compared to just 16.2% overall).

Interestingly, one in five people said their looks affected their careers both positively and negatively. This stayed consistent regardless of how attractive they thought they were. This might happen when someone benefits from good looks but also faces issues like not being taken seriously.

In fact, 55.7% of people admitted to downplaying their appearance to be taken more seriously at work. This number rose to 68.7% among those who considered themselves very attractive.

Source: https://studyfinds.org/attractive-workers-earn-more-pretty-privilege/

How tattoo ink travels through the body, raising risks of skin cancer and lymphoma

(Photo by Getty Images in collaboration with Unsplash+)

Tattoos have become a mainstream form of self-expression, adorning the skin of millions worldwide. But a new study from Danish researchers uncovers concerning connections between tattoo ink exposure and increased risks of both skin cancer and lymphoma.

Approximately one in four adults in many Western countries now sport tattoos, with prevalence nearly twice as high among younger generations. The study, published in BMC Public Health, adds to growing evidence that the popular form of body art may carry long-term health consequences previously unrecognized.

The study’s lead author, Signe Bedsted Clemmensen, along with colleagues at the University of Southern Denmark, analyzed data from two complementary twin studies – a case-control study of 316 twins and a cohort study of 2,367 randomly selected twins born between 1960 and 1996. The team created a specialized “Danish Twin Tattoo Cohort” that allowed them to control for genetic and environmental factors when examining cancer outcomes among tattooed and non-tattooed individuals.

When comparing twins where one had cancer and one didn’t, researchers found that the tattooed twin was more likely to be the one with cancer. In the case-control study, tattooed individuals had a 62% higher rate of skin cancer compared to non-tattooed people. The cohort study showed even stronger associations, with tattooed individuals having a nearly four-times-higher rate of skin cancer and a 2.83-times-higher rate of basal cell carcinoma.

Size appears to matter significantly. Large tattoos (bigger than the palm of a hand) were associated with substantially higher lymphoma and skin cancer risks than smaller tattoos, potentially due to higher exposure levels or longer exposure time. This dose-response relationship strengthens the case for causality rather than mere correlation.

“This suggests that the bigger the tattoo and the longer it has been there, the more ink accumulates in the lymph nodes. The extent of the impact on the immune system should be further investigated so that we can better understand the mechanisms at play,” says Clemmensen, an assistant professor of biostatistics, in a statement.

The Journey of Tattoo Ink Through the Body

Scientists have long known that tattoo ink doesn’t simply stay put in the skin. Particles from tattoo pigments migrate through the bloodstream and accumulate in lymph nodes and potentially other organs. The researchers proposed an “ink deposit conjecture” – suggesting that tattoo pigments trigger inflammation at deposit sites, potentially leading to chronic inflammation and increased risk of abnormal cell growth.

Black ink, the most commonly used tattoo color, has been a particular focus of concern. It typically contains soot products like carbon black, which the International Agency for Research on Cancer (IARC) has listed as possibly cancer-causing to humans. Through incomplete burning during carbon black production, harmful compounds form as byproducts, including benzo[a]pyrene, which IARC classifies as cancer-causing to humans.

“We can see that ink particles accumulate in the lymph nodes, and we suspect that the body perceives them as foreign substances,” explains study co-author Henrik Frederiksen, a consultant in hematology at Odense University Hospital and clinical professor at the university. “This may mean that the immune system is constantly trying to respond to the ink, and we do not yet know whether this persistent strain could weaken the function of the lymph nodes or have other health consequences.”

Colored inks pose their own problems. Red ink – often associated with allergic reactions – contains compounds that may release harmful substances when exposed to sunlight or during laser tattoo removal.

“We do not see a clear link between cancer occurrence and specific ink colors, but this does not mean that color is irrelevant,” notes Clemmensen. “We know from other studies that ink can contain potentially harmful substances, and for example, red ink more often causes allergic reactions. This is an area we would like to explore further.”

The researchers suggest that with tattoo prevalence rising dramatically, especially among younger people, public awareness campaigns might be needed to educate about potential risks.

“We are concerned that tattoo ink has severe public health consequences since tattooing is abundant among the younger generation,” they write in their conclusion. The team recommends further studies to pinpoint the exact biological mechanisms through which tattoo ink might induce cancer.

A Growing Body Of Research

This isn’t the first research to raise alarms about tattoo safety. Previous studies have documented cases of skin conditions and tumors occurring within tattoo areas. However, this large-scale study provides some of the strongest evidence yet for a relationship between tattoos and cancer.

For those already sporting tattoos, the research doesn’t suggest panic – but awareness. The time between tattoo exposure and cancer diagnosis in the study was substantial – a median of 8 years for lymphoma and 14 years for skin cancer. This suggests that cancers develop gradually over time, and monitoring for any changes in tattooed areas might be prudent.

The rise in popularity of tattoo removal services presents its own concerns. The researchers specifically highlight that laser tattoo removal breaks down pigments into smaller fragments that may be more mobile within the body, potentially increasing migration to lymph nodes and other organs.

As with many health studies, this research doesn’t definitively prove causation, but it adds significant weight to growing evidence of long-term risks. The researchers point out that even with new European restrictions on harmful compounds in tattoo inks, the body’s immune response to foreign substances might be problematic regardless of specific ink components.

Balancing Expression and Health

As tattoo culture continues to thrive globally, balancing personal expression through body art with health considerations becomes increasingly important.

With tattoos now firmly embedded in mainstream culture, this research doesn’t aim to stigmatize body art but rather to inform safer practices. Whether this means developing safer inks, improving tattoo application techniques, or simply making more informed choices about tattoo size and placement, understanding the biological impact of tattoo ink is essential for public health.

As the researchers conclude, further studies that pinpoint the biological mechanisms of tattoo ink-induced cancer are needed. Until then, those considering getting inked might want to weigh the aesthetic benefits against potential long-term health considerations – a balance that, like the perfect tattoo design, will be uniquely personal.

Source : https://studyfinds.org/tattoo-ink-skin-cancer-lymphoma/

How the pursuit of happiness ends up sending people on a path to misery

(Photo by Erce on Shutterstock)

We live in a happiness-obsessed world. Self-help gurus promise paths to bliss, Instagram influencers peddle happiness as a lifestyle, and corporations build marketing campaigns around the pursuit of positive emotions. But new research suggests a surprising twist: trying too hard to be happy might actually be making us miserable.

Researchers from the University of Toronto Scarborough and the University of Sydney found that actively pursuing happiness drains our mental energy – the same energy we need for self-control. Their study, published in Applied Psychology: Health and Well-Being, challenges what many of us believe about happiness.

“The pursuit of happiness is a bit like a snowball effect. You decide to try making yourself feel happier, but then that effort depletes your ability to do the kinds of things that make you happier,” says Sam Maglio, marketing professor at the University of Toronto Scarborough and the Rotman School of Management, in a statement.

This might sound familiar: You wake up determined to have a great day. You plan mood-boosting activities and work hard to stay positive. But by evening, you’re ordering takeout instead of cooking, mindlessly scrolling social media, and snapping at your partner. Why? Your happiness pursuit itself might be the problem.

Maglio puts it bluntly: “The more mentally rundown we are, the more tempted we’ll be to skip cleaning the house and instead scroll social media.”

Testing the Happiness Drain

The research team ran four studies that gradually built their case.

First, they surveyed 532 adults about how much they valued and pursued happiness, then measured their self-reported self-control. The results showed a clear pattern: people who placed higher value on seeking happiness reported worse self-control abilities.

For their second study, they moved beyond self-reports to actual behavior. They had 369 participants complete a series of consumer choice rankings and measured how long they persisted at the task. Those with stronger tendencies to pursue happiness showed less persistence, suggesting their mental resources were already running low.

From Happiness Ads to Chocolate Cravings

For their third study, the researchers got clever. They intercepted 36 people at a university library and showed them either an advertisement that prominently featured the word “happiness” or a neutral ad without any happiness messaging. Then they offered participants chocolate candies, telling them to eat as many as they wanted while rating the taste.

“The story here is that the pursuit of happiness costs mental resources,” Maglio explains. “Instead of just going with the flow, you are trying to make yourself feel differently.”

The results were striking: people exposed to the happiness ad ate nearly twice as many chocolates (2.94 vs. 1.56 on average) – a classic sign of decreased self-control. This raises questions about happiness-themed marketing campaigns – they might actually be draining our willpower and setting us up to make choices we later regret.

Not All Goals Drain You the Same

For their final experiment, the researchers tackled an important question: Is happiness-seeking uniquely depleting, or does pursuing any goal require mental energy?

They had 188 participants make 25 choices between pairs of everyday products (like choosing between an iced latte and green tea). One group was told to choose options that would “improve their happiness,” while the other group chose based on what would “improve their accurate judgment.” Then everyone worked on a challenging anagram puzzle where they could quit whenever they wanted.

The happiness group quit much sooner – lasting only 444 seconds (about seven and a half minutes) on average, compared to 574 seconds (just over nine and a half minutes) for the accuracy group. This significant difference suggested that pursuing happiness specifically drains mental energy more than other types of goals do.

This wasn’t Maglio’s first investigation into happiness backfiring. In a 2018 study with Kim, he found that people actively seeking happiness tend to feel like they’re running short on time, creating stress that ultimately makes them unhappier.

The Pressure To Feel Even Better

The self-improvement industry rakes in over $10 billion largely by promising to boost happiness. Bestsellers like “The Happiness Project,” “The Art of Happiness,” and “The Happiness Advantage” sell millions of copies with strategies for maximizing positive emotions. But this research suggests many of these approaches might be working against themselves.

The researchers note that the self-help industry puts “a lot of pressure and responsibility on the self.” Many people now treat happiness like money – “something we can and should gather and hoard as much as we can.” This commodification of happiness may be part of the problem, creating a mindset where we’re constantly striving for more rather than appreciating what we have.

Why This Happens

Think of self-control like a gas tank that gets emptied throughout the day. Psychologist Roy Baumeister’s research shows that every act of self-control – resisting temptation, controlling emotions, making decisions – uses fuel from the same tank.

Seeking happiness burns through this fuel quickly because it requires managing your actions, monitoring your thoughts, and actively changing your emotions. When your tank runs low, you’re more likely to make poor choices like overeating, overspending, or being short with others – creating a cycle that ultimately makes you less happy.

The Real Secret To Happiness

So should we abandon the pursuit of well-being? Not exactly. But the research suggests a more balanced approach might work better.

Maglio suggests we think of happiness like sand at the beach: “You can cling to a fistful of sand and try to control it, but the harder you hold, the more your hand will cramp. Eventually, you’ll have to let go.”

His advice cuts through the complexity with refreshing simplicity: “Just chill. Don’t try to be super happy all the time,” says Maglio, whose work is supported by a grant from the Social Sciences and Humanities Research Council of Canada. “Instead of trying to get more stuff you want, look at what you already have and just accept it as something that gives you happiness.”

When we ease up on constantly trying to maximize happiness and accept a wider range of emotions, we might actually preserve the mental energy needed to make better decisions – and ultimately feel better.

Source: https://studyfinds.org/the-happiness-paradox-chasing-joy-backfires/

How financial stress can sabotage job satisfaction by fueling workplace burnout

Being stressed about your finances can lead to burnout at work. (PeopleImages.com – Yuri A/Shutterstock)

In today’s world, the boundaries between our personal and professional lives often blur. Many of us try to keep financial worries separate from our work life, but a new study from the University of Georgia suggests this separation may be wishful thinking. Research reveals that our financial well-being significantly impacts our job satisfaction, with workplace burnout playing a key role.

The study, published in the Journal of Workplace Behavioral Health, shows that when employees experience financial stress, it follows them to work, affecting their performance and satisfaction through increased burnout.

The Hidden Cost of Financial Stress at Work

The U.S. Surgeon General recognized this connection in 2024 by naming workplace well-being one of the top public health priorities. Yet remarkably, 60% of employers don’t consider employee well-being a top-10 initiative. This disconnect is costly: dissatisfied employees reportedly cost the U.S. economy around $1.9 trillion in lost productivity in 2023 alone.

“Stress from work can often leave people feeling tired and overwhelmed. Anxiety in other parts of life could make this even worse,” says lead author Camden Cusumano from the University of Georgia, in a statement. “Just as injury in one part of the body could lead to pain in another, personal financial stress can manifest in someone’s work performance.”

While previous research has examined connections between compensation and job satisfaction, this study takes a more holistic approach. Rather than focusing merely on salary figures, researchers investigated how employees’ overall assessment of their financial health impacts their workplace experience.

When Money Worries Follow You to Work

Their research distinguishes between two dimensions of financial well-being: current money management stress (present concerns) and expected future financial security (future outlook). Both of these affect job satisfaction in different ways.

“We call them different life domains. There’s the work domain, there might be the family domain, things like that,” says Cusumano. “But sometimes there’s spillover from one to the other. My finances might impact the way I’m feeling about the stress in my family, or if I’m working long hours, that might cause some conflict with my family as well.”

The researchers used the Conservation of Resources theory as their framework. This theory suggests people experience stress when they lose resources, face threats to their resources, or fail to gain new resources despite their efforts. In this context, financial well-being represents a crucial resource: a sense of security and control regarding one’s finances.

Burnout Beyond the Workplace

For the study, the researchers surveyed 217 full-time U.S. employees who earned at least $50,000 annually. This sample was deliberately chosen to focus on workers not predisposed to financial insecurity due to low income.

Burnout shows up in three main ways: feeling detached from yourself or others, feeling constantly tired, and feeling like your accomplishments don’t matter. All three combine to leave employees exhausted and disengaged from their work.

Current money management stress didn’t directly affect job satisfaction but operated through increased burnout. In contrast, expected future financial security had a direct positive association with job satisfaction that wasn’t mediated by burnout.

These findings highlight that financial stress doesn’t just create problems at home; it fundamentally alters how employees experience their work. People feeling stressed about making ends meet today are more likely to experience burnout, which in turn reduces their job satisfaction. Meanwhile, those who feel secure about their financial future tend to be more satisfied with their jobs, regardless of burnout levels.

Future financial concerns may also play a role in job satisfaction. If a worker is feeling stressed about their current position, believing their financial situation may improve could enhance their views on their job.

Creating Better Workplace Support Programs

Employers often focus on compensation as the primary financial factor affecting employee satisfaction. However, if an employee’s financial struggles are leading to burnout and job dissatisfaction, addressing work-related factors alone won’t fully resolve the problem.

This research highlights the importance of developing personal financial management skills alongside professional development for employees. Building financial resilience may not only improve the quality of life at home but could also enhance workplace experience and career success, especially in today’s workforce where remote and hybrid work have further blurred the boundaries between work and personal life.

“Some companies are actually providing financial counseling to some of their employees,” says Cusumano. “They’re paying attention to how finances can really permeate different areas of life.”

Organizations could benefit from broadening their wellness initiatives to include financial well-being resources. Providing tools and support to help employees manage current financial stress and build future security could yield significant returns through improved job satisfaction and reduced burnout.

In the end, money might not buy happiness, but financial stress certainly seems capable of diminishing workplace satisfaction. By understanding these connections, both organizations and individuals can develop more effective strategies for navigating the complex relationship between financial health and workplace well-being.

Source: https://studyfinds.org/financial-stress-sabotaging-job-satisfaction-workplace-burnout/

What’s the shape of the universe?

(© Vector Tradition – stock.adobe.com)

Mathematicians use topology to study the shape of the world and everything in it
When you look at your surrounding environment, it might seem like you’re living on a flat plane. After all, this is why you can navigate a new city using a map: a flat piece of paper that represents all the places around you. This is likely why some people in the past believed the Earth to be flat. But most people now know that is far from the truth.

You live on the surface of a giant sphere, like a beach ball the size of the Earth with a few bumps added. The surface of the sphere and the plane are two possible 2D spaces, meaning you can walk in two directions: north and south or east and west.

What other possible spaces might you be living on? That is, what other spaces around you are 2D? For example, the surface of a giant doughnut is another 2D space.

Through a field called geometric topology, mathematicians like me study all possible spaces in all dimensions. Whether trying to design secure sensor networks, mine data, or use origami to deploy satellites, the underlying language and ideas are likely to be those of topology.

The shape of the universe
When you look around the universe you live in, it looks like a 3D space, just like the surface of the Earth looks like a 2D space. However, just like the Earth, if you were to look at the universe as a whole, it could be a more complicated space, like a giant 3D version of the 2D beach ball surface or something even more exotic than that.

While you don’t need topology to determine that you are living on something like a giant beach ball, knowing all the possible 2D spaces can be useful. Over a century ago, mathematicians figured out all the possible 2D spaces and many of their properties.

In the past several decades, mathematicians have learned a lot about all of the possible 3D spaces. While we do not have a complete understanding like we do for 2D spaces, we do know a lot. With this knowledge, physicists and astronomers can try to determine what 3D space people actually live in.

While the answer is not completely known, there are many intriguing and surprising possibilities. The options become even more complicated if you consider time as a dimension.

To see how this might work, note that to describe the location of something in space – say a comet – you need four numbers: three to describe its position and one to describe the time it is in that position. These four numbers are what make up a 4D space.

Now, you can consider which 4D spaces are possible and ask which of those spaces you live in.

Topology in higher dimensions
At this point, it may seem like there is no reason to consider spaces that have dimensions larger than four, since that is the highest imaginable dimension that might describe our universe. But a branch of physics called string theory suggests that the universe has many more dimensions than four.

There are also practical applications of thinking about higher dimensional spaces, such as robot motion planning. Suppose you are trying to understand the motion of three robots moving around a warehouse floor. You can put a grid on the floor and describe the position of each robot by its x and y coordinates on the grid. Since each of the three robots requires two coordinates, you will need six numbers to describe all of the possible positions of the robots. You can interpret the possible positions of the robots as a 6D space.

As the number of robots increases, the dimension of the space increases. Factoring in other useful information, such as the locations of obstacles, makes the space even more complicated. In order to study this problem, you need to study high-dimensional spaces.
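
A minimal sketch of the idea, with illustrative names rather than any particular motion-planning library: each planar robot contributes an (x, y) pair, so n robots define a point in a 2n-dimensional configuration space, and constraints like collision avoidance carve out the valid region of that space.

```python
# Configuration space for n robots on a planar floor: each robot adds
# two coordinates, so a joint configuration is a point in 2n dimensions.
from itertools import combinations
from math import dist

def dimension(n_robots: int) -> int:
    """Dimension of the joint configuration space."""
    return 2 * n_robots

def is_collision_free(config, radius=0.5):
    """Valid if no two robot discs of the given radius overlap."""
    return all(dist(a, b) > 2 * radius for a, b in combinations(config, 2))

config = [(0.0, 0.0), (3.0, 1.0), (1.0, 4.0)]  # three robots -> a point in 6D
print(dimension(len(config)))     # 6
print(is_collision_free(config))  # True
```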

There are countless other scientific problems where high-dimensional spaces appear, from modeling the motion of planets and spacecraft to trying to understand the “shape” of large datasets.

Tied up in knots
Another type of problem topologists study is how one space can sit inside another.

For example, if you hold a knotted loop of string, then you have a 1D space (the loop of string) sitting inside a 3D space (your room). Such loops are called mathematical knots.

The study of knots first grew out of physics but has become a central area of topology. They are essential to how scientists understand 3D and 4D spaces and have a delightful and subtle structure that researchers are still trying to understand.

Source: https://studyfinds.org/whats-the-shape-of-the-universe/

I Cut Out Sugar for a Month—Here’s What It Did for My Mental Health

All good things come in moderation

d3sign / Getty Images

I’ve never been one to turn down something sweet. A bar of chocolate to reward myself for a successful grocery shop, some dessert after dinner—since I only indulged a few times a week, I thought it was pretty harmless.

But after noticing how sluggish, irritable, and foggy I felt after sugar-heavy days, I started wondering: could my sugar intake be affecting my mental health?

With that question in mind, I decided to cut out added sugar for an entire month. No packets of jelly beans, no sweetened boba teas, and no honey in my morning oats. The goal wasn’t just to see how my body felt, but to observe whether eliminating sugar had any impact on my mood, energy levels, and mental clarity.

The result? Let’s just say it wasn’t what I expected.

Why I Decided to Cut Out Sugar
I don’t eat added sugar every day. Instead, I tend to indulge in a (very) sweet treat twice a week or so. I usually justify it by saying that I “deserve” a treat—to reward myself for a work victory, to celebrate a special occasion, or to comfort myself after a hard day.

There’s nothing wrong with treating yourself. But I eventually noticed that my sugar binges led to some uncomfortable symptoms, particularly brain fog, poor sleep, and mood swings.

“Excess sugar intake, especially from refined sources, can cause rapid spikes and crashes in blood sugar levels, which can lead to irritability, fatigue, and difficulty concentrating,” says dietitian Jessica M. Kelly, MS, RDN, LDN, the founder and owner of Nutrition That Heals. “Over time, frequent blood sugar fluctuations can contribute to increased anxiety.”

“Over time, a high-sugar diet may increase the risk of depression by causing inflammation and disrupting brain chemicals like serotonin and dopamine,” adds Marjorie Nolan Cohn, MS, RD, LDN, CEDS-S, the clinical director of Berry Street. “These ups and downs make it harder to manage emotions, making mood swings more frequent.”

A 2017 study, which looked at data collected from 23,245 people, found that higher sugar intake is associated with depression, particularly in men. Participants with the highest level of sugar consumption were 23% more likely to have a diagnosed mental illness than those with the lowest level of sugar consumption.1

Other research, like this 2024 study, also suggests a link between depression and sugar consumption—but the authors point out this connection might be because mental distress can lead to emotional eating and make it harder to control cravings.2

For the purpose of my experiment, I needed to set some ground rules about the sugars I would and wouldn’t cut out.

According to Kelly and Nolan Cohn, not all sugars affect mental health in the same way. “Natural sugars found in, for example, fruit and dairy, accompany fiber, vitamins, and antioxidants that are health-promoting and slow glucose absorption,” Kelly explains. “Refined sugars, like those in sodas and candy, can cause rapid blood sugar spikes and crashes which can lead to mood swings and brain fog.”

Excited to see the results, I began my experiment!

Week 1: The “Oh Wow, Does That Really Contain Sugar?” Phase
During my first week, I didn’t experience changes in my mood, but rather in my behavior and mindset.

This experiment required me to pick up a new habit: reading nutritional labels and ingredient lists. Although giving up sugar was easy for the first few days, this habit was pretty hard.

I was surprised to learn that sugar is in a lot of things. Most of my favorite savory treats contained sugar. Even my usual “healthy” post-gym treat—a protein bar—was off-limits.

Surprisingly, I didn’t really have any sugar withdrawals, which can be common among people who typically consume a lot of sugar.

“Cutting out sugar can trigger strong cravings since it affects the brain’s reward system, this can lead to withdrawal-like urges, and for some, it can feel very intense,” says Nolan Cohn. Sugar withdrawal symptoms often include headaches, fatigue, and mood swings.

On day four, I had my first major challenge—I realized I could no longer grab some milk chocolate on the way out of the grocery store. Talking myself out of this was harder than I’d like to admit.

The biggest challenge for week one? Choosing what to eat in a restaurant. Most menus don’t specify which dishes contain sugar, and there’s a surprising amount of sugar in savory dishes, like tomato-based curries and wraps filled with sugary salad dressings.

By the end of week one, I felt like giving up. Although I didn’t have any major cravings, constantly checking food labels was annoying, and there were no notable benefits—at least, not yet.

Week 2: A Shift in Mood and Energy
Around the 10-day mark, things started changing for the better.

Even if I don’t eat a lot of sugar in my day-to-day diet and my home-cooked meals, I tend to treat myself—a lot. Food is a go-to source of comfort for me, often to my detriment. My mindset is often along the lines of, “Oh, who cares? It’s just a treat. It’s a special occasion!”

Because I wanted to stick to the experiment, I had to pause my “treat yo’self” mindset. As I was more mindful of sugar, I planned my snacks better, avoided getting takeout, and practiced more self-control while shopping for groceries.

More importantly, I had to actually engage with my feelings instead of eating them away.

On my therapist’s recommendation, I paid attention to the uncomfortable feelings that’d usually lead me to eat, and I journaled about them instead.

I also noticed some changes in my mood—finally! Because I wasn’t eating a lot of sugar and then crashing twice a week, my energy levels felt a bit more stable. This meant that my mood also felt more stable.

Week 3: Mental Clarity and Emotional Balance
By week three, I was genuinely surprised by how good I felt.

Not only were my energy and mood a little calmer, but I was also really chuffed with myself for managing to avoid sugar for such a long time.

Source: https://www.verywellmind.com/does-sugar-affect-mental-health-11683665

Morning blue light therapy can greatly improve sleep quality for older adults

Researchers say blue light exposure in the morning may be a healthier alternative to taking sleep medications. (amenic181/Shutterstock)

Getting older brings many changes, and unfortunately, worse sleep is often one of them. Many seniors struggle with falling asleep, waking up frequently during the night, and generally feeling less rested. But what if something as simple as changing your light exposure could help?

A new study from the University of Surrey has found that the right light, at the right time, might make a significant difference in older adults’ sleep and daily activity patterns. This research, published in GeroScience, reveals that morning exposure to blue-enriched light can be beneficial, while that same light in the evening can actually make sleep problems worse.

“Our research shows that carefully timed light intervention can be a powerful tool for improving sleep and day-to-day activity in healthy older adults,” explains study author Daan Van Der Veen from the University of Surrey, in a statement. “By focusing on morning blue light and maximizing daylight exposure, we can help older adults achieve more restful sleep and maintain a healthier, more active lifestyle.”

Why light timing matters

So why do older adults have more sleep troubles in the first place? Part of the problem lies in the aging eye. As we get older, our eyes undergo natural changes—the lens yellows, pupils get smaller, and we have fewer photoreceptor cells. All these changes mean less light reaches the brain’s master clock, located in a tiny region called the hypothalamic suprachiasmatic nuclei (SCN).

That yellowing lens is particularly problematic because it filters out blue light wavelengths specifically. It’s like wearing subtle yellow sunglasses all the time. This matters because blue light (wavelengths between 420 and 480 nanometers) is especially powerful at regulating our body clocks. With less blue light reaching their brains, older adults’ internal clocks can become weaker and more prone to disruption.

Many seniors also spend less time outdoors and have fewer social engagements, further reducing their exposure to bright natural light. Meanwhile, they might be getting too much artificial light at night, which can confuse the body’s natural rhythms.

The Surrey researchers wanted to see if they could improve sleep for older adults living independently at home by tweaking their light exposure. They recruited 36 people aged 60 and over who reported having sleep problems. None were in full-time employment, and all were free from eye disorders or other conditions that might complicate the study.

Over an 11-week period during fall and winter (when natural daylight is limited in the UK), participants followed a carefully designed protocol. They spent one week establishing baseline measurements, followed by three weeks using either blue-enriched white light (17,000 K) or standard white light (4,000 K) for two hours each morning and evening. After a two-week break, they switched to the other light condition for three weeks, followed by another two-week washout period.
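
Spelled out, the timetable looks like this; the sketch below simply encodes the published schedule as data (which light condition came first varied across participants in the crossover design).

```python
# The 11-week light-intervention schedule described above, as data.
BLUE = "blue-enriched white light (17,000 K)"
WHITE = "standard white light (4,000 K)"

protocol = [
    (1, "baseline measurements"),
    (3, f"{BLUE} or {WHITE}, 2 h morning + 2 h evening"),
    (2, "washout"),
    (3, "the other light condition, 2 h morning + 2 h evening"),
    (2, "washout"),
]

assert sum(weeks for weeks, _ in protocol) == 11
for weeks, phase in protocol:
    print(f"{weeks} week(s): {phase}")
```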

Participants used desktop light boxes while going about normal activities like reading or watching TV. They wore activity monitors on their wrists around the clock and light sensors around their necks during the day. They kept sleep diaries and collected urine samples to measure melatonin metabolites, markers indicating how their internal clocks were functioning.

Morning light helps, evening light hurts

The results were telling. Longer morning exposure to the blue-enriched light significantly improved the stability of participants’ daily activity patterns and reduced sleep fragmentation. By contrast, evening exposure to that same light made it harder to fall asleep and reduced overall sleep quality.

Another key discovery was that participants who spent more time in bright light (above 2,500 lux, roughly the brightness you’d experience outdoors on a cloudy day) had more active days, stronger daily rhythms, and tended to go to bed earlier. This finding reinforces long-standing advice from sleep experts: getting outside during the day is really important for good sleep.

Morning people (early birds) naturally started their morning light sessions earlier than night owls. However, most participants used their evening light sessions at similar times, suggesting that social habits might influence evening routines more than biological clocks.

The women in the study showed more variable activity patterns throughout the day than men, and those who took more daytime naps had less stable daily rhythms and were generally less active.

Practical tips

By the end of the study, participants reported meaningful improvements in their sleep quality. This means light therapy could be a potential alternative to sleep medications, which often come with side effects.

“We believe that this is one of the first studies that have looked into the effects of self-administered light therapy on healthy older adults living independently to aid their sleep and daily activity,” says study author Débora Constantino, a postgraduate research student. “It highlights the potential for accessible and affordable light-based therapies to address age-related sleep issues without the need for medication.”

For older adults seeking better rest, the advice is clear:

  • Get bright, blue-enriched light in the morning: Use a light box or spend time outdoors after waking up.
  • Dim the lights in the evening: Reduce exposure to phones, tablets, and bright overhead lights.
  • Stay consistent: Establishing regular morning and evening routines can further support healthy sleep patterns.

This approach isn’t just for people in care homes or those with cognitive impairments; it can also benefit healthy, independent older adults. With an aging population worldwide, finding simple and effective strategies to improve sleep has never been more important. The right light at the right time might be a key part of aging well.

Source: https://studyfinds.org/morning-blue-light-therapy-boosts-sleep-quality-older-adults/

Belly fat can boost brain health? Yes — but to a point, study shows

(© sun_apple – stock.adobe.com)

Age-related cognitive decline sneaks up on millions of people worldwide. It begins with those frustrating “senior moments” in middle age and can progress to more serious memory and thinking problems later in life. While scientists have traditionally focused their attention directly on the brain to understand these changes, new research out of Toho University in Japan points to an unexpected contributor: your belly fat.

A study published in the journal GeroScience reveals that visceral fat—the deep fat surrounding your internal organs—plays a role in maintaining brain health through a chemical messaging system. You might have heard of BDNF (brain-derived neurotrophic factor)—think of it as brain fertilizer. It helps brain cells grow, survive, and form new connections. The more BDNF you have, the better your brain functions. But as you age, your BDNF levels naturally drop, and that’s when memory problems can start.

Here’s where belly fat comes in. This new study found that CX3CL1, a protein made by visceral fat, plays a big role in maintaining healthy BDNF levels. In younger mice, their belly fat produced plenty of CX3CL1, keeping their brain function strong. But as the mice aged, both their belly fat and their brain’s BDNF levels took a nosedive. When scientists artificially lowered CX3CL1 in young mice, their BDNF levels dropped too, mimicking the effects of aging. But when they gave older mice an extra dose of CX3CL1, their brain’s BDNF bounced back.

These findings flip conventional wisdom about belly fat on its head. While excess visceral fat is still harmful and linked to many health problems, this research suggests that healthy amounts of visceral fat early on serve an important purpose by producing signaling molecules that support brain health.

The research tracked male mice at different ages—5, 10, and 18 months old (roughly equivalent to young adult, middle-aged, and elderly humans). The 5-month-old and 10-month-old mice had similar levels of BDNF in their hippocampus, but by 18 months, these levels had dropped by about a third. This pattern matches the typical trajectory of cognitive aging, where significant decline often doesn’t begin until later in life.

Similarly, CX3CL1 production in visceral fat remained stable in younger mice but declined significantly in older animals, supporting a link between the two proteins.

Stress Hormones and the Fat-Brain Connection

To dig deeper, the researchers asked: What causes the drop in fat-derived CX3CL1 in the first place? The answer involved stress hormones like cortisol (in humans) and corticosterone (in mice).

“Glucocorticoids boost CX3CL1 production. An enzyme in belly fat called 11β-HSD1 reactivates inactive forms of glucocorticoids and keeps them active in cells, promoting glucocorticoid-dependent expression of CX3CL1,” study co-author Dr. Yoshinori Takei tells StudyFinds. “11β-HSD1 is essential for belly fat to respond to circulating glucocorticoids properly.”

But as we age, the amount of this enzyme declines, leading to lower CX3CL1 and BDNF levels. When 11β-HSD1 decreases with age, this entire system weakens, potentially contributing to memory loss.

The paper notes that while lower 11β-HSD1 in aging is problematic for CX3CL1 production and brain health, excessive 11β-HSD1 expression is linked to obesity-related diseases. High 11β-HSD1 levels are associated with metabolic syndrome, which is a known risk factor for cognitive decline.

Rethinking Belly Fat

The connection between belly fat and brain health highlights how intertwined our body systems really are. Our brains don’t operate in isolation but depend on signals from throughout the body—including, surprisingly, our fat tissue.

Before you start thinking about packing on belly fat for the sake of your brain, don’t! The researchers stress that balance is key. Too little belly fat and you lose the brain-protecting effects, but too much can cause serious health problems.

The best way to maintain brain health as you age is to focus on proven strategies: staying active, eating a balanced diet, managing stress, and keeping your mind engaged.

While this research is still in its early stages and was conducted in mice, it opens up fascinating possibilities for understanding how our bodies and brains are connected. Scientists may one day find ways to tap into this fat-brain communication system to slow cognitive decline and keep our minds sharper for longer.

The next time you pinch an inch around your middle, remember: there’s a conversation happening between your belly and your brain that science is just beginning to understand.

Paper Summary

How the Study Worked

The researchers used male mice of three different ages: 5 months (young adult), 10 months (middle-aged), and 18 months (elderly). They measured BDNF protein levels in the hippocampus using a test called ELISA that can detect specific proteins in tissue samples. They also measured CX3CL1 levels in visceral fat tissue using two methods: one that detects the RNA instructions for making the protein and another that detects the protein itself. To determine whether fat-derived CX3CL1 directly affects brain BDNF, they used a technique called RNA interference to reduce CX3CL1 production specifically in the belly fat of younger mice, then checked what happened to brain BDNF levels. They also injected CX3CL1 into older mice to see if it would restore their brain BDNF levels. To understand what regulates CX3CL1 production, they treated fat cells grown in the lab with different stress hormones. Finally, they measured levels and activity of the enzyme 11β-HSD1 in fat tissue from younger and older mice, and used RNA interference to reduce this enzyme in younger mice to see how it affected the fat-brain signaling system.

Results

The study uncovered several key findings. First, hippocampal BDNF levels were similar in 5-month-old and 10-month-old mice (about 300 pg BDNF/mg protein) but dropped by about one-third in 18-month-old mice (about 200 pg BDNF/mg protein). CX3CL1 levels in visceral fat showed a similar pattern, decreasing significantly in the oldest mice. When the researchers reduced CX3CL1 production in the belly fat of younger mice, their brain BDNF levels fell within days, similar to levels seen in naturally aged mice. On the flip side, a single injection of CX3CL1 into the abdominal cavity of older mice boosted their brain BDNF back up, confirming the connection between these proteins. The researchers also found that natural stress hormones (corticosterone in mice, cortisol in humans) increased CX3CL1 production in fat cells, while the enzyme 11β-HSD1 that activates these hormones was much less abundant in the fat tissue of older mice. When they reduced this enzyme in younger mice, both fat CX3CL1 and brain BDNF levels decreased, revealing another link in the signaling chain. Together, these results mapped out a communication pathway from belly fat to brain that becomes disrupted with age.

Limitations

While the study presents intriguing findings, several limitations should be kept in mind. The research used only male mice to avoid complications from female hormonal cycles, so we don’t know if the same patterns exist in females. The sample sizes were small, with most tests using just three mice per group. While this is common in basic science research, larger studies would strengthen confidence in the results. The researchers demonstrated connections between fat tissue signals and brain BDNF levels but didn’t directly test whether these changes affected the mice’s memory or cognitive abilities, though their previous work had shown that CX3CL1 injections improved recognition memory in aged mice. The study was also limited to specific ages in mice, and we don’t yet know how these findings might translate to humans across our much longer lifespan. Finally, the researchers used artificial RNA interference techniques to reduce CX3CL1 and enzyme levels for short periods—different from the gradual changes that occur during natural aging—which might affect how the results apply to real-world aging.

Discussion and Takeaways

This research reveals a previously unknown communication system between belly fat and the brain. Under normal conditions, stress hormones in the blood are activated by the enzyme 11β-HSD1 in visceral fat, which then produces CX3CL1. This fat-derived CX3CL1 signals through immune cells and the vagus nerve (a major nerve connecting internal organs to the brain) to maintain healthy BDNF levels in the hippocampus. As we age, reduced 11β-HSD1 in belly fat disrupts this signaling chain, contributing to lower brain BDNF and potentially to age-related memory problems. This discovery changes how we think about visceral fat, suggesting that while excess belly fat is harmful, healthy amounts serve important functions in supporting brain health. The findings also hint at future therapeutic possibilities—perhaps treatments could target components of this pathway to maintain brain function in aging. The researchers note that a careful balance is needed, as both too little 11β-HSD1 (associated with cognitive decline) and too much (linked to obesity and metabolic problems) appear harmful. For the average person concerned about brain health, this research underscores that the body works as an interconnected whole, with tissues we don’t typically associate with thinking—like fat—playing important roles in maintaining our cognitive abilities.

Funding and Disclosures

The study was supported by grants from the Japan Society for the Promotion of Science (JSPS KAKENHI). The lead researcher, Yoshinori Takei, and two colleagues received research funding through grants numbered 23K10878, 23K06148, and 24K14786. The researchers declared no competing interests, meaning they didn’t have financial or other relationships that might have influenced their research or how they reported it.

Publication Information

The paper “Adipose chemokine ligand CX3CL1 contributes to maintaining the hippocampal BDNF level, and the effect is attenuated in advanced age” was written by Yoshinori Takei, Yoko Amagase, Ai Goto, Ryuichi Kambayashi, Hiroko Izumi-Nakaseko, Akira Hirasawa, and Atsushi Sugiyama from Toho University and other Japanese institutions. It appeared in the journal GeroScience in February 2025, after being submitted in October 2024 and accepted for publication in January 2025. The paper can be accessed online using the identifier https://doi.org/10.1007/s11357-025-01546-4

Source : https://studyfinds.org/belly-fat-brain-health/


Menopause starting earlier? Half of women in their 30s reporting symptoms

A woman experiencing hot flashes due to menopause (Photo by Pheelings media on Shutterstock)

Perimenopause—the transitional phase leading up to menopause—has long been considered a mid-life experience, typically affecting women in their late 40s. However, new research reveals that a significant number of women in their 30s are already experiencing perimenopausal symptoms severe enough to seek medical attention.

In a survey of 4,432 U.S. women, researchers from Flo Health and the University of Virginia found that more than half of those in the 30-35 age bracket reported moderate to severe menopause symptoms using the validated Menopause Rating Scale (MRS). Among those who consulted medical professionals about their symptoms, a quarter were diagnosed as perimenopausal. This challenges the assumption that perimenopause is primarily a concern for women approaching 50.

The findings, published in the journal npj Women’s Health, highlight a significant gap in healthcare awareness and support for women experiencing early-onset perimenopause.

Unrecognized Symptoms and Healthcare Gaps

“Physical and emotional symptoms associated with perimenopause are understudied and often dismissed by physicians. This research is important in order to more fully understand how common these symptoms are, their impact on women, and to raise awareness amongst physicians as well as the general public,” says study co-author Dr. Jennifer Payne, MD, an expert in reproductive psychiatry at UVA Health and the University of Virginia School of Medicine, in a statement.

Despite medical definitions being well established, public understanding remains muddled. Many people use “menopause” as a catch-all term for both perimenopause and post-menopause. This confusion contributes to women feeling unprepared and unsupported during this transition.

The journey through perimenopause varies. Some women experience a smooth 5-7 year transition with manageable symptoms, while others face a decade-long struggle with physical and psychological challenges that impact daily life.

Early vs. Late Perimenopause

“Perimenopause can be broadly split into early and late stages,” the researchers explained. Early perimenopause typically involves occasional missed periods or cycle irregularity, while late perimenopause features greater menstrual irregularity with longer periods without menstruation, ranging from 60 days to one year.

The study identified eight symptoms significantly associated with perimenopause:

  • Absence of periods for 12 months
  • Absence of periods for 60 days
  • Hot flashes
  • Vaginal dryness
  • Pain during sexual intercourse
  • Recent cycle length irregularity
  • Heart palpitations
  • Frequent urination

While symptom severity generally increased with age, women in their 30s and early 40s still experienced significant symptom burden. Among 30-35-year-olds, 55.4% reported moderate or severe symptoms, increasing to 64.3% in women aged 36-40.

“We had a significant number of women who are typically thought to be too young for perimenopause tell us that they have high levels of perimenopause-related symptoms,” said Liudmila Zhaunova, PhD, director of science at Flo. “It’s important that we keep doing research to understand better what is happening with these women so that they can get the care they need.”

Psychological vs. Physical Symptoms With Menopause

The study revealed patterns in symptom presentation across different perimenopause stages. Psychological symptoms—such as anxiety, depression, and irritability—tend to appear first, peaking among women ages 41-45 before declining. Physical problems, including sexual dysfunction, bladder issues, and vaginal dryness, peaked in women 51 and older. Classic menopause symptoms like hot flashes and night sweats were most prevalent between ages 51-55 and were least common among younger women.

These findings suggest that perimenopause follows a predictable symptom progression, with mood changes and cognitive issues appearing first, followed by more recognized physical symptoms in later stages.

Delayed Medical Attention

Despite high symptom burden, younger women are far less likely to seek medical help for perimenopause. The study found that while 51.5% of women over 56 consulted a doctor, only 4.3% of 30-35-year-olds did. However, among those who sought medical advice, over a quarter of 30-35-year-olds and 40% of 36-40-year-olds were diagnosed as perimenopausal.

The study used the Menopause Rating Scale (MRS), a validated tool that measures symptom severity across three domains: psychological symptoms, somato-vegetative symptoms (including hot flashes and sleep problems), and urogenital symptoms. While MRS scores were highest in the 51-55 age group, younger women still reported a significant symptom burden.

Implications for Healthcare and Awareness

“This study is important because it plots a trajectory of perimenopausal symptoms that tells us what symptoms we can expect when and alerts us to the fact that women are experiencing perimenopausal symptoms earlier than we expected,” Payne said.

These findings underscore the need for earlier education and support. Women in their 30s and early 40s may not recognize symptoms like irregular cycles, mood changes, and sleep disturbances as signs of perimenopause, leading to misdiagnosis or missed opportunities for treatment. This research calls for healthcare providers to adopt a more age-inclusive approach when evaluating these symptoms.

Additionally, the variability of perimenopause means a one-size-fits-all approach to management is inadequate. Psychological symptoms may dominate early perimenopause, while vasomotor and urogenital symptoms become more pronounced in later stages. Understanding these transitions can help tailor treatment strategies for individual needs.

Source : https://studyfinds.org/perimenopause-early-symptoms-women/

How one sleepless night upends the immune system, fueling inflammation

(© Andrii Lysenko – stock.adobe.com)

When you toss and turn all night, your immune system takes notice – and not in a good way. New research reveals that sleep deprivation doesn’t just leave you groggy and irritable; it actually transforms specific immune cells in your bloodstream, potentially fueling chronic inflammation throughout your body.

The study, published in The Journal of Immunology, finds a direct link between poor sleep quality and significant changes in specialized immune cells called monocytes. These altered cells appear to drive widespread inflammation – the same type of inflammation associated with obesity and numerous chronic diseases.

The research, conducted by scientists at Kuwait’s Dasman Diabetes Institute, demonstrates how sleep deprivation triggers an increase in inflammatory “nonclassical monocytes” (NCMs) – immune cells that amplify inflammation. More remarkably, these changes occurred regardless of a person’s weight, suggesting that even lean, healthy individuals may face inflammatory consequences from poor sleep.

Study authors examined three factors increasingly recognized as critical determinants of overall health: sleep, body weight, and inflammation. Though previous research established connections between obesity and poor sleep, this study goes further by identifying specific immune mechanisms that may explain how sleep disruption contributes to chronic inflammatory conditions.

“Our findings underscore a growing public health challenge. Advancements in technology, prolonged screen time, and shifting societal norms are increasingly disruptive to regular sleeping hours. This disruption in sleep has profound implications for immune health and overall well-being,” said Dr. Fatema Al-Rashed, who led the study, in a statement.

How the study worked

The research team recruited 237 healthy Kuwaiti adults across a spectrum of body weights and carefully monitored their sleep patterns using advanced wearable activity trackers. Participants were fitted with ActiGraph GT3X+ devices for seven consecutive days, providing objective data on sleep efficiency, duration, and disruptions. Meanwhile, blood samples revealed striking differences in immune cell populations and inflammatory markers across weight categories.

Obese participants demonstrated significantly lower sleep quality compared to their lean counterparts, along with elevated levels of inflammatory markers. Most notably, researchers observed marked differences in monocyte subpopulations across weight categories. Obese individuals showed decreased levels of “classical” monocytes (which primarily perform routine surveillance) and increased levels of “nonclassical” monocytes – cells known to secrete inflammatory compounds.

The study’s most compelling finding emerged when researchers discovered that poor sleep quality correlated with increased nonclassical monocytes regardless of body weight. Even lean participants who experienced sleep disruption showed elevated NCM levels, suggesting that sleep deprivation itself – independent of obesity – may trigger inflammatory responses.

To further test this hypothesis, researchers conducted a controlled experiment with five lean, healthy individuals who underwent 24 hours of complete sleep deprivation. The results were striking: after just one night without sleep, participants showed significant increases in inflammatory nonclassical monocytes. These changes mirrored the immune profiles seen in obese participants, supporting the role of sleep health in modulating inflammation. Even more remarkably, these alterations reversed when participants resumed normal sleep patterns, demonstrating the body’s ability to recover from short-term sleep disruption.

‘Sleep quality matters as much as quantity’

These findings highlight sleep’s crucial role in immune regulation and suggest that chronic sleep deprivation may contribute to inflammation-driven health problems even in individuals without obesity. The research points to a potential vicious cycle: obesity disrupts sleep, sleep disruption alters immune function, and altered immune function exacerbates inflammation associated with obesity and related conditions.

Modern life often treats sleep as a luxury rather than a necessity. We sacrifice rest for productivity, entertainment, or simply because our environments and schedules make quality sleep difficult to achieve. This study adds to mounting evidence that such trade-offs may have serious long-term health consequences.

For most adults, the National Sleep Foundation recommends 7-9 hours of sleep per night. Study participants averaged approximately 7.8 hours (466.7 minutes) of sleep nightly, but importantly, the research suggests that sleep quality matters as much as quantity. Disruptions, awakenings, and reduced sleep efficiency all appeared to influence immune function, even when total sleep duration seemed adequate.

Sleep efficiency – the percentage of time in bed actually spent sleeping – averaged 91.4% among study participants but was significantly lower in obese individuals. Those with higher body weights also experienced more “wake after sleep onset” (WASO) periods, indicating fragmented sleep patterns that may contribute to immune dysregulation.
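Because sleep efficiency is just a ratio, the calculation itself is easy to illustrate. The short Python sketch below shows the standard formula (total sleep time divided by time in bed) using hypothetical minute values chosen to land on the study’s 91.4% average; the numbers are not data from the paper.

    # Minimal sketch of an actigraphy-style sleep efficiency calculation.
    # All minute values below are illustrative, not taken from the study.

    def sleep_efficiency(total_sleep_min: float, time_in_bed_min: float) -> float:
        """Percentage of time in bed actually spent asleep."""
        return 100.0 * total_sleep_min / time_in_bed_min

    time_in_bed = 480.0      # 8 hours spent in bed
    waso = 30.0              # wake after sleep onset, in minutes
    onset_latency = 11.3     # minutes needed to fall asleep
    total_sleep = time_in_bed - waso - onset_latency

    print(f"{sleep_efficiency(total_sleep, time_in_bed):.1f}%")  # -> 91.4%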

How sleep impacts inflammation

The study also revealed intriguing connections between specific inflammatory markers and monocyte subpopulations. Nonclassical monocytes showed positive correlations with multiple inflammatory compounds, including TNF-α and MCP-1 – molecules previously linked to sleep regulation. This suggests that sleep disruption may initiate a cascade of inflammatory signals throughout the body, potentially contributing to various health problems.

While obesity emerged as a significant factor in driving inflammation, mediation analyses revealed that sleep disruption independently contributes to inflammation regardless of weight status. This finding challenges simplistic views of obesity as the primary driver of inflammation and highlights sleep’s importance as a modifiable risk factor for inflammatory conditions.

The implications extend beyond obesity-related concerns. Sleep disruption has been associated with numerous health problems, including cardiovascular disease, diabetes, and mental health disorders. This research provides potential mechanisms explaining these connections and suggests that improving sleep quality could reduce inflammation and associated risks.

Monocytes, crucial components of the innate immune system, patrol the bloodstream looking for signs of trouble. They differentiate into three main types: classical monocytes (which primarily perform surveillance), intermediate monocytes (which excel at presenting antigens and activating other immune cells), and nonclassical monocytes (which specialize in patrolling blood vessels and producing inflammatory compounds).

In healthy individuals, these monocyte populations maintain a careful balance. Sleep disruption appears to tip this balance toward inflammatory nonclassical monocytes, potentially contributing to a state of chronic low-grade inflammation throughout the body.

Is lack of quality sleep becoming a public health crisis?

This research provides compelling evidence that sleep quality deserves serious attention as a public health concern. The study suggests that even temporary sleep disruption can alter immune function, while chronic sleep problems may contribute to persistent inflammation – a condition increasingly recognized as a driver of numerous diseases.

For individuals struggling with obesity or inflammatory conditions, addressing sleep quality may provide additional benefits beyond traditional interventions focused on diet and exercise. The research also highlights potential concerns for shift workers, parents of young children, and others who regularly experience disrupted sleep patterns.

Healthcare providers may need to consider sleep quality as a critical factor when evaluating and treating patients with inflammatory conditions. Similarly, public health initiatives addressing obesity and related disorders might benefit from incorporating sleep improvement strategies alongside dietary and exercise recommendations.

The researchers are now planning to explore in greater detail the mechanisms linking sleep deprivation to immune changes. They also want to investigate whether interventions such as structured sleep therapies or technology-use guidelines can reverse these immune alterations.

“In the long term, we aim for this research to drive policies and strategies that recognize the critical role of sleep in public health,” said Dr. Al-Rashed. “We envision workplace reforms and educational campaigns promoting better sleep practices, particularly for populations at risk of sleep disruption due to technological and occupational demands. Ultimately, this could help mitigate the burden of inflammatory diseases like obesity, diabetes, and cardiovascular diseases.”

Source : https://studyfinds.org/sleep-deprivation-immune-system-inflammation/

How grapes could help preserve muscle health as you age

(Photo by J Yeo on Shutterstock)

Could adding grapes to your daily diet help maintain muscle strength and health as you age? A new mouse model study suggests these antioxidant-rich fruits might help reshape muscle composition, particularly in women, as they enter their later years.

Published in the journal Foods, this investigation — partially funded by the California Table Grape Commission — tracked 480 mice over two and a half years, examining how grape consumption affects muscle gene expression at a fundamental level. The findings highlight how something as simple as adding grapes to our daily diet might help support muscle health during aging.

Muscle loss affects millions of older adults worldwide, with 10-16% of elderly individuals experiencing sarcopenia—the progressive deterioration of muscle mass and function that comes with age. Women often face greater challenges maintaining muscle mass, particularly after menopause, making this research especially relevant for aging females.

Researchers from several U.S. universities discovered that consuming an amount of grapes equivalent to two human servings daily led to notable changes in muscle-related gene expression. While both males and females showed genetic shifts, the effects were particularly pronounced in females, whose gene activity patterns began shifting toward those typically observed in males.

This convergence occurred at the genetic level, where researchers identified 25 key genes affected by grape consumption. Some genes associated with lean muscle mass increased their activity, while others linked to muscle degeneration showed decreased expression.

What makes grapes so special? The fruit contains over 1,600 natural compounds that work together in complex ways. Rather than any single component being responsible for the benefits, it’s likely the combination of these compounds that produces such significant effects.

“This study provides compelling evidence that grapes have the potential to enhance muscle health at the genetic level,” says Dr. John Pezzuto, senior investigator of the study and professor and dean of pharmacy and health sciences at Western New England University, in a statement. “Given their safety profile and widespread availability, it will be exciting to explore how quickly these changes can be observed in human trials.”

Proper muscle function plays a crucial role in everyday activities, from maintaining balance to supporting bone health and regulating metabolism. The potential to help maintain muscle health through dietary intervention could significantly impact quality of life for aging adults.

The research adds to a growing body of evidence supporting grapes’ health benefits. Previous studies have shown positive effects on heart health, kidney function, skin protection, vision, and digestive health. This new understanding of grapes’ influence on muscle gene expression opens another avenue for potential therapeutic applications.

While the physical appearance and weight of muscles didn’t change significantly between groups, the underlying genetic activity showed marked differences. This suggests that grapes might influence muscle health at a fundamental cellular level, even before measurable functional changes occur—though further research is needed to confirm these effects.

For older adults concerned about maintaining their strength and independence, these findings suggest that a daily bowl of grapes, alongside regular exercise, just might offer another tool in the healthy aging toolkit. However, the researchers emphasize that human studies are still needed to confirm these effects.

Source : https://studyfinds.org/grapes-muscle-strength/

Why some people remember their dreams (and others don’t)

About a fourth of people don’t remember their dreams. (Roman Samborskyi/Shutterstock)

What were you dreaming about last night? For roughly one in four people, that question draws a blank. For others, the answer comes easily, complete with vivid details about flying through clouds or showing up unprepared for an exam. This stark contrast in dream recall ability has baffled researchers for decades, but a new study reveals there’s more to remembering dreams than pure chance.

From March 2020 to March 2024, scientists from multiple Italian research institutions conducted a sweeping investigation to uncover what determines dream recall. Published in Communications Psychology, their research surpassed typical dream studies by combining detailed sleep monitoring, cognitive testing, and brain activity measurements. The study involved 217 healthy adults between ages 18 and 70, who did far more than simply keep dream journals; they underwent brain tests, wore sleep-tracking wristbands, and some even had their brain activity monitored throughout the night.

Understanding dream recall has long puzzled researchers. Early studies in the 1950s focused mainly on REM sleep, the sleep stage characterized by rapid eye movements and vivid dreams. Scientists initially thought they had solved the mystery of dreaming by linking it exclusively to REM sleep. However, later research revealed that people also dream during non-REM sleep stages, though these dreams tend to be less vivid and harder to remember.

According to researchers at the IMT School for Advanced Studies Lucca, three main factors emerged as strong predictors of dream recall: a person’s general attitude toward dreaming, their tendency to let their mind wander during waking hours, and their typical sleep patterns.

To measure attitudes about dreaming, participants completed a questionnaire rating how strongly they agreed or disagreed with statements like “dreams are a good way of learning about my true feelings” versus “dreams are random nonsense from the brain.” People who viewed dreams as meaningful and worthy of attention were more likely to remember them compared to those who dismissed dreams as meaningless brain static.

Mind wandering proved to be another crucial factor. Using a standardized questionnaire that measures how often people’s thoughts drift away from their current task, researchers found that participants who frequently caught themselves daydreaming or engaging in spontaneous thoughts during the day were more likely to recall their dreams. This connection makes sense considering both daydreaming and dreaming involve similar brain networks, particularly regions associated with self-reflection and creating internal mental experiences.

The relationship between daydreaming and dream recall points to an intriguing possibility: people who spend more time engaged in spontaneous mental activity during the day may be better equipped to generate and remember dreams at night. Both activities involve creating mental experiences disconnected from the immediate external environment.

People who typically had longer periods of lighter sleep with less deep sleep (technically called N3 sleep) were better at remembering their dreams. During deep sleep, the brain produces large, slow waves that help consolidate memories but may make it harder to generate or remember dreams. In contrast, lighter sleep stages maintain brain activity patterns more similar to wakefulness, potentially making it easier to form and store dream memories.

Age was also a factor in dream recall. While younger participants were generally better at remembering specific dream content, older individuals more frequently reported “white dreams,” those frustrating experiences where you wake up knowing you definitely had a dream but can’t remember anything specific about it. This age-related pattern suggests that the way our brains process and store dream memories may change as we get older.

The researchers also discovered that dream recall fluctuates seasonally, with people remembering fewer dreams during winter months compared to spring and autumn. While the exact reason remains unclear, this pattern wasn’t explained by changes in sleep habits across seasons. One possibility is that seasonal variations in light exposure affect brain chemistry in ways that influence dream formation or recall.

Rather than relying on written dream journals, participants used voice recorders each morning to describe everything that was going through their minds just before waking up. This approach reduced the effort required to record dreams and minimized the chance that the act of recording would interfere with the memory of the dream itself.

Throughout the study period, participants wore wristwatch-like devices called actigraphs that track movement patterns to measure sleep quality, duration, and timing. A subset of 50 participants also wore special headbands equipped with electrodes to record their brain activity during sleep. This comprehensive approach allowed researchers to connect dream recall with objective measures of how people were actually sleeping, not just how they thought they slept.

“Our findings suggest that dream recall is not just a matter of chance but a reflection of how personal attitudes, cognitive traits, and sleep dynamics interact,” says lead author Giulio Bernardi, professor in general psychology at the IMT School, in a statement. “These insights not only deepen our understanding of the mechanisms behind dreaming but also have implications for exploring dreams’ role in mental health and in the study of human consciousness.”

The study authors plan to use these findings as a reference for future research, particularly in clinical settings. Further investigations could explore the diagnostic and prognostic value of dream patterns, potentially improving our understanding of how dreams relate to mental health and neurological conditions.

Understanding dream recall could provide insights into how the brain processes and stores memories during sleep. Dreams appear to draw upon our previous experiences and memories while potentially playing a role in emotional processing and memory consolidation. Changes in dream patterns or recall ability might serve as early indicators of neurological or psychiatric conditions.

Source : https://studyfinds.org/why-some-people-remember-their-dreams-others-dont/

This one change to your phone can reverse age-related cognitive issues by 10 years

(Photo by Alliance Images on Shutterstock)

New research reveals a surprisingly simple way to improve mental health and focus: turn off your phone’s internet. A month-long study found that blocking mobile internet access for just two weeks led to measurable improvements in well-being, mental health, and attention—with mental health gains comparable to those reported for cognitive behavioral therapy, and attention gains equivalent to reversing 10 years of age-related cognitive decline.

Researchers from multiple universities across the U.S. and Canada worked with 467 iPhone users (average age 32) to test how removing constant internet access would affect their daily lives. Instead of asking people to give up their phones completely, the study took a more practical approach. Participants installed an app that blocked mobile internet while still allowing calls and texts. This way, phones remained useful for basic communication but lost their ability to provide endless scrolling, social media, and constant online access.

The average smartphone user now spends nearly 5 hours each day on their device. More than half of Americans with smartphones worry they use them too much, and this jumps to 80% for people under 30. Despite these concerns, few studies have actually tested what happens when people cut back.

The results were significant. After two weeks without mobile internet, participants showed clear improvements in multiple areas. They reported feeling happier and more satisfied with their lives, and their mental health improved—an effect size that was greater than what is typically seen with antidepressant medications in clinical trials. They also performed better on attention tests, showing improvements comparable to reversing 10 years of age-related cognitive decline.

To measure attention, participants completed a computer task that tested their ability to stay focused over time. The improvements were meaningful—similar in size to the difference between an average adult and someone with mild attention difficulties. This suggests that constant mobile internet access may impair our natural ability to focus.

The study design was particularly strong because it included a crossover halfway through. After the first two weeks, the groups switched roles—people who had blocked mobile internet got access back, while the other group had to block theirs. This strengthened the evidence that the improvements were caused by reduced mobile internet access rather than other factors.

“Smartphones have drastically changed our lives and behaviors over the past 15 years, but our basic human psychology remains the same,” says lead author Adrian Ward, an associate professor of marketing at the University of Texas at Austin, in a statement. “Our big question was, are we adapted to deal with constant connection to everything all the time? The data suggest that we are not.”

An impressive 91% of participants improved in at least one area. Without the ability to check their phones constantly, people spent more time socializing in person, exercising, and being outdoors—activities known to boost mental health and cognitive function.

Throughout the study, researchers checked in with participants via text messages to track their moods. Those who blocked mobile internet reported feeling progressively better over the two weeks. Even after regaining internet access, many retained some of their improvements, suggesting the break helped reshape their digital habits.

Interestingly, the benefits weren’t just from less screen time. While phone use dropped significantly during the study (from over 5 hours to under 3 hours daily), the improvements appeared linked specifically to breaking the habit of constant online connection. Even after getting internet access back, many participants kept their usage lower and continued feeling better.

One surprising finding involved people who started the study with a high “fear of missing out” (FOMO). Rather than making their anxiety worse, disconnecting from mobile internet led to the biggest improvements in their well-being. This suggests that constant access to social media and online updates may fuel digital anxiety rather than relieve it.

Blocking mobile internet also helped participants feel more in control of their behavior and improved their sleep. Without instant access to endless entertainment and social media, people reported having better control over their attention and averaged about 17 more minutes of sleep per night.

However, sticking to the program was difficult—only about 25% of participants kept their mobile internet blocked for the full two weeks. This highlights how dependent many of us have become on constant connectivity. Still, even those who didn’t fully adhere to the program showed improvements, suggesting that simply reducing mobile internet use can be beneficial.

The researchers noted that a less extreme approach might work better for most people. Instead of blocking all mobile internet, limiting access during certain times or restricting specific apps could provide similar benefits while being easier to maintain.

The takeaway is simple: reducing mobile internet access—even temporarily—can help improve well-being, mental health, and focus. While not everyone is ready to disconnect completely, finding ways to limit our online exposure could make us happier, healthier, and more present in our daily lives.

Source : https://studyfinds.org/digital-detox-keeping-phone-internet-off-wellbeing-focus-sleep/

Why morning people are more likely to conquer challenges

(© Anatoliy Karlyuk – stock.adobe.com)

It’s no surprise that our mental acuity and mood wax and wane during the day, but it may be surprising that most of us seem to be morning people.

In a study at University College London, researchers analyzed data collected from a dozen surveys of 49,218 respondents between March 2020 and March 2022. According to the report published recently in BMJ Mental Health, the data showed a trend of people reporting better mental health and wellbeing early in the day: greater life satisfaction, increased happiness, less severe depressive symptoms, and a greater sense of self-worth. People felt worst around midnight. Mental health and mood were more variable on weekends, while loneliness was more stable throughout the week.

Dr. Feifei Bu, principal research fellow in statistics and epidemiology at University College London, said in an email to CNN, “Our study suggests that people’s mental health and wellbeing could fluctuate over time of day. On average people seem to feel best early in the day and worst late at night.”

Research Limitations

Even though the study found a correlation between mornings and better mood, life satisfaction, and self-worth, there may be factors affecting the results that aren’t apparent in the research, Dr. Bu says.

How people were feeling may have affected when they filled out the surveys. As with most research, the findings need to be replicated. Studies need to be designed to adjust for or eliminate confounding variables, isolating specific questions as much as possible.

In addition, although mental health and well-being are associated, they are not the same thing. Well-being is a complex medley of mental, emotional, physical, cognitive, psychological, and spiritual factors. According to the World Health Organization, well-being is a positive state determined by social, economic, and environmental conditions that include quality of life and a sense of meaning and purpose.

Mental health is a significant contributor to well-being, but they don’t entirely overlap. Many people with mental health issues also enjoy what they describe as a good quality of life.

Also, while many reported feeling better in the morning, better is relative. When someone feels better in the morning, that doesn’t necessarily mean that they feel good.

In addition, mood is a temporary state; mental health and well-being are more stable conditions.

Do hard work when it’s best for you

Do these results mean you should confront problems or do your hardest work first thing in the morning? Or that you shouldn’t problem-solve in the evening – just go to bed and tackle your issues in the morning? Not all research agrees, but more evidence points to late morning as the most productive time for problem-solving. Studies suggest that mood is more stable in the late morning, making it easier to confront more demanding matters with a cool head and less emotional influence.

Cortisol, an important body-regulating hormone that your adrenal glands produce and release, has a daily rhythm of highs and lows. It can also be secreted in bursts in response to stress. Cortisol tends to be lower in the midafternoon. This time is also associated with dips in mood and “decision fatigue.”

Source : https://studyfinds.org/why-morning-people-conquer-challenges/


Why intermittent fasting could be harmful for teens

(© anaumenko – stock.adobe.com)

Intermittent fasting has become one of the most popular eating patterns of the past decade. The practice, which involves cycling between periods of eating and fasting, has been praised for its potential health benefits. But a new mouse model study suggests that age plays a crucial role in how the body responds to fasting — and for young individuals, it might do more harm than good.

A team of German researchers recently discovered that while intermittent fasting improved health markers in older mice, it actually impaired important cellular development in younger ones. Their findings, published in Cell Reports, raise important questions about who should (and shouldn’t) try this trending eating pattern.

Inside our bodies, specialized cells in the pancreas produce insulin, a hormone that helps control blood sugar levels. These cells, called beta cells, are particularly important during youth when the body is still developing. The researchers found that in young mice, long-term intermittent fasting disrupted how these cells grew and functioned.

“Our study confirms that intermittent fasting is beneficial for adults, but it might come with risks for children and teenagers,” says Stephan Herzig, a professor at Technical University of Munich and director of the Institute for Diabetes and Cancer at Helmholtz Munich, in a statement.

The study looked at three groups of mice: young (equivalent to adolescence in humans), middle-aged (adult), and elderly. Each group followed an eating pattern where they fasted for 24 hours, followed by 48 hours of normal eating. The researchers tracked how this affected their bodies over both short periods (5 weeks) and longer periods (10 weeks).
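To make the regimen concrete, here is a minimal sketch of that eating pattern as a repeating three-day cycle (one fasting day followed by two days of normal eating); the day-by-day layout is an illustration of the description above, not the paper’s actual scheduling.

    # Hedged sketch of the described protocol: 24-hour fasts separated by
    # 48 hours of normal eating, repeated over the 5-week study arm.
    def fasting_schedule(weeks: int) -> list[str]:
        days = weeks * 7
        # 3-day cycle: day 0 fasting, days 1-2 normal eating
        return ["fast" if day % 3 == 0 else "eat" for day in range(days)]

    schedule = fasting_schedule(5)
    print(schedule[:7])            # ['fast', 'eat', 'eat', 'fast', 'eat', 'eat', 'fast']
    print(schedule.count("fast"))  # -> 12 fasting days in 5 weeks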

At first, all age groups showed improvements in how their bodies handled sugar, which, of course, is a positive sign. But after extended periods of intermittent fasting, significant differences emerged between age groups. While older and middle-aged mice continued to show benefits, the young mice began showing troubling changes.

The pancreatic cells in young mice became less effective at producing insulin, and they weren’t maturing properly. Even more concerning, these cellular changes resembled patterns typically seen in Type 1 diabetes, a condition that usually develops in childhood or adolescence.

“Intermittent fasting is usually thought to benefit beta cells, so we were surprised to find that young mice produced less insulin after the extended fasting,” explains co-lead author Leonardo Matta, from Helmholtz Munich.

The older mice, however, actually benefited from the extended fasting periods. Their insulin-producing cells worked better, and they showed improved blood sugar control. Middle-aged mice maintained stable function, suggesting that mature bodies handle fasting periods differently than developing ones.

This age-dependent response challenges the common belief that intermittent fasting is suitable for everyone. The research suggests that while mature adults might benefit from this eating pattern, young people could be putting themselves at risk, particularly if they maintain the practice for extended periods.

The findings are especially relevant given how popular intermittent fasting has become among young people looking to manage their weight. While short-term fasting appeared safe across all age groups, the long-term effects on young practitioners could be significant.

“The next step is digging deeper into the molecular mechanisms underlying these observations,” says Herzig. “If we better understand how to promote healthy beta cell development, it will open new avenues for treating diabetes by restoring insulin production.”

Despite the attention they receive from athletes and wellness influencers, popular dietary trends aren’t one-size-fits-all. What works for adults might not be appropriate for growing bodies — all the more reason that understanding these age-related differences becomes increasingly important.

Source : https://studyfinds.org/intermittent-fasting-harmful-teens/

Brake dust could be more harmful to health than diesel exhaust

(© kichigin19 – stock.adobe.com)

As cities worldwide crack down on diesel vehicle emissions, a more insidious form of air pollution has been quietly growing alongside increased traffic – brake dust. Research concludes that the particles released when vehicles brake may actually be more harmful to human lung cells than diesel exhaust, with copper-rich brake pads emerging as a particular concern.

This finding comes at a critical time, as the shift toward heavier electric vehicles means more brake wear and potentially higher exposure to these harmful particles. While governments have made substantial progress in reducing exhaust emissions, brake dust remains largely unregulated despite contributing up to 55% of all traffic-related fine particles in urban areas.

Researchers at the University of Southampton and their collaborators examined how tiny particles from different types of brake pads affected human lung cells, focusing on the delicate air sacs where oxygen enters our bloodstream. They compared brake dust from four common types of brake pads against diesel exhaust particles. Much like comparing different recipes to see which ingredients might cause problems, they tested low-metallic, semi-metallic, non-asbestos organic (NAO), and ceramic brake pads.

Their findings, published in Particle and Fibre Toxicology, painted a concerning picture: brake dust from copper-enriched NAO and ceramic brake pads caused significantly more cellular stress and inflammation than both other brake pad types and diesel exhaust. These copper-rich particles triggered inflammatory responses and altered cell metabolism in ways that could potentially lead to disease.

Modern brake pads contain a complex mixture of materials that help vehicles stop safely. NAO brake pads, the most common type in the U.S. due to their low cost and good performance, were developed to replace asbestos-containing pads. However, manufacturers added copper fibers to maintain heat conductivity – a role previously filled by asbestos. This copper content turned out to be problematic.

When researchers exposed lung cells to NAO brake dust, copper accumulated inside the cells steadily as exposure increased. Using specialized molecules that bind to specific metals – like a magnet that only attracts one type of metal – they confirmed that copper was driving the harmful effects.

Perhaps most concerning was the discovery that copper-rich brake dust triggered a cellular response called “pseudohypoxic HIF signaling.” In simple terms, this means the cells behaved as if they were starving for oxygen even though plenty was available – similar to a false alarm that keeps cells in an unnecessary state of emergency. This same mechanism has been linked to various diseases, including certain cancers and scarring of lung tissue.

Some U.S. states, including California and Washington, have already begun restricting copper in brake pads – but these rules were originally created to protect fish and aquatic life from copper washing off roads into waterways, not to address human health concerns. This study suggests these restrictions may have the unexpected benefit of protecting human health as well.

Source: https://studyfinds.org/brake-dust-more-harmful-than-diesel-exhaust/

Eating yogurt may offer protection against hard-to-detect colon cancer

Yogurt has many health benefits. Now, new research shows it might be effective against certain colorectal cancers. (Photo by Vicky Ng on Unsplash)

For years, experts have praised yogurt’s potential benefits for digestive health, but that’s not the only punch it packs. New research suggests its cancer-fighting properties might be more nuanced than previously thought. A new study reveals that yogurt consumption may help prevent certain types of colorectal cancer, specifically those containing higher levels of beneficial bacteria called Bifidobacterium.

Colorectal cancer ranks as the third most common cancer worldwide, affecting both men and women. Prevention strategies have become increasingly important as rates rise, particularly among younger adults. While regular screening through colonoscopy remains the gold standard for early detection, researchers continue searching for dietary and lifestyle factors that might reduce cancer risk.

Research teams from Mass General Brigham and Harvard Medical School analyzed data from over 132,000 health professionals spanning multiple decades. Their findings, published in Gut Microbes, reveal a surprising link between yogurt consumption patterns and subsequent colorectal cancer diagnoses.

“Our study provides unique evidence about the potential benefit of yogurt,” says Dr. Shuji Ogino, chief of the Program in Molecular Pathological Epidemiology at Brigham and Women’s Hospital, in a statement. “My lab’s approach is to try to link long-term diets and other exposures to a possible key difference in tissue, such as the presence or absence of a particular species of bacteria. This kind of detective work can increase the strength of evidence connecting diet to health outcomes.”

Through two major studies, the Nurses’ Health Study and the Health Professionals Follow-up Study, researchers tracked more than 100,000 female nurses since 1976 and 51,000 male health professionals since 1986. Every two years, participants answered detailed questions about their health, lifestyle, and medical history. Every four years, they provided specific information about their diets, including how much plain and flavored yogurt they consumed.

This long-term tracking allowed researchers to understand not just occasional yogurt consumption but established eating patterns over decades. When participants developed colorectal cancer, researchers analyzed tumor samples for the presence of Bifidobacterium, a type of beneficial bacteria naturally present in the human gut and commonly added to yogurt products.

Among 3,079 documented colorectal cancer cases, researchers examined 1,121 for Bifidobacterium content. The findings revealed that this beneficial bacterium was quite common: 31% of cases were Bifidobacterium-positive, while 69% were negative. For participants who ate two or more servings of yogurt per week, researchers observed a 20% lower rate of Bifidobacterium-positive tumors compared to those who ate yogurt less than once per month.

Most notably, this protective effect appeared strongest in the proximal colon, also known as the right side of the colon. Located near where the small intestine connects to the large intestine, the proximal colon poses unique challenges for cancer detection and treatment. Cancers in this area often grow with fewer obvious symptoms and are harder to spot during routine colonoscopy procedures. Research has shown that patients with proximal colon cancer typically face worse survival outcomes than those with cancers in other parts of the colon.

“It has long been believed that yogurt and other fermented milk products are beneficial for gastrointestinal health,” says co-senior author Dr. Tomotaka Ugai. “Our new findings suggest that this protective effect may be specific for Bifidobacterium-positive tumors.”

Bifidobacterium, a beneficial gut bacterium often found in yogurt, plays a role in digesting dietary fiber, maintaining gut barrier integrity, and regulating immune responses—all factors linked to colorectal cancer risk. The study’s authors hypothesize that yogurt consumption may contribute to a healthier gut microbiome, which in turn could influence cancer risk, particularly in the proximal colon.

However, because different yogurt products contain varying levels and strains of probiotics, more research is needed to determine whether specific types of yogurt provide greater protective benefits than others. Future studies may explore how dietary patterns interact with individual gut microbiomes to influence cancer risk, potentially leading to more personalized dietary recommendations for colorectal cancer prevention, though this remains an emerging area of research.

Regular yogurt consumers in the study demonstrated other healthy habits as well. They typically exercised more, smoked less, and maintained better overall dietary patterns than those who rarely ate yogurt. However, even after accounting for these factors, the association between yogurt consumption and reduced risk of Bifidobacterium-positive proximal colon cancer remained significant.

“This paper adds to the growing evidence that illustrates the connection between diet, the gut microbiome, and risk of colorectal cancer,” says Dr. Andrew Chan, chief of the Clinical and Translational Epidemiology Unit at Massachusetts General Hospital.

Beyond the general recommendation to consume yogurt, this research raises questions about which products might offer the most benefit. Not all yogurts contain the same bacterial strains or concentrations. While many products include Bifidobacterium, the amounts can vary significantly. Future research may help determine whether certain formulations provide better protection against colorectal cancer.

Different subtypes of colorectal cancer may respond differently to preventive measures, suggesting that a one-size-fits-all approach to prevention might not be optimal. This understanding could eventually lead to more personalized prevention strategies based on individual risk factors and gut bacterial composition.

Source: https://studyfinds.org/eating-yogurt-colon-cancer/

Is AI making us stupider? Maybe, according to one of the world’s biggest AI companies

Deferring to machines to make our decisions can have disastrous consequences when it comes to human lives. (Credit: © Jakub Jirsak | Dreamstime.com)

There is only so much thinking most of us can do in our heads. Try dividing 16,951 by 67 without reaching for a pen and paper. Or a calculator. Try doing the weekly shopping without a list on the back of last week’s receipt. Or on your phone.
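As it happens, that division comes out exactly even, which is just the sort of fact most of us can only confirm by offloading the work to a machine:

    # The mental-arithmetic challenge from above, offloaded to a machine
    quotient, remainder = divmod(16_951, 67)
    print(quotient, remainder)  # -> 253 0, i.e., 16,951 / 67 is exactly 253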

By relying on these devices to help make our lives easier, are we making ourselves smarter or dumber? Have we traded efficiency gains for inching ever closer to idiocy as a species?

This question is especially important to consider with regard to generative artificial intelligence (AI) technology such as ChatGPT, an AI chatbot owned by tech company OpenAI, which at the time of writing is used by 300 million people each week.

According to a recent paper by a team of researchers from Microsoft and Carnegie Mellon University in the United States, the answer might be yes. But there’s more to the story.

Thinking well
The researchers assessed how users perceive the effect of generative AI on their own critical thinking.

Generally speaking, critical thinking has to do with thinking well.

One way we do this is by judging our own thinking processes against established norms and methods of good reasoning. These norms include values such as precision, clarity, accuracy, breadth, depth, relevance, significance and cogency of arguments.

Other factors that can affect quality of thinking include the influence of our existing world views, cognitive biases, and reliance on incomplete or inaccurate mental models.

The authors of the recent study adopt a definition of critical thinking developed by American educational psychologist Benjamin Bloom and colleagues in 1956. It’s not really a definition at all. Rather, it’s a hierarchical way to categorize cognitive skills, including recall of information, comprehension, application, analysis, synthesis and evaluation.

The authors state they prefer this categorization, also known as a “taxonomy”, because it’s simple and easy to apply. However, since it was devised it has fallen out of favor and has been discredited by Robert Marzano and indeed by Bloom himself.

In particular, it assumes there is a hierarchy of cognitive skills in which so-called “higher-order” skills are built upon “lower-order” skills. This does not hold on logical or evidence-based grounds. For example, evaluation, usually seen as a culminating or higher-order process, can be the beginning of inquiry or very easy to perform in some contexts. It is more the context than the cognition that determines the sophistication of thinking.

An issue with using this taxonomy in the study is that many generative AI products also seem to use it to guide their own output. So you could interpret this study as testing whether generative AI, by the way it’s designed, is effective at framing how users think about critical thinking.

Also missing from Bloom’s taxonomy is a fundamental aspect of critical thinking: the fact that the critical thinker not only performs these and many other cognitive skills, but performs them well. They do this because they have an overarching concern for the truth, which is something AI systems do not have.

Higher confidence in AI equals less critical thinking
Research published earlier this year revealed “a significant negative correlation between frequent AI tool usage and critical thinking abilities”.

The new study further explores this idea. It surveyed 319 knowledge workers such as healthcare practitioners, educators and engineers who discussed 936 tasks they conducted with the help of generative AI. Interestingly, the study found users consider themselves to use critical thinking less in the execution of the task than in providing oversight at the verification and editing stages.

In high-stakes work environments, the desire to produce high-quality work, combined with the fear of reprisals, serves as a powerful motivator for users to engage their critical thinking in reviewing the outputs of AI.

But overall, participants believe the increases in efficiency more than compensate for the effort expended in providing such oversight.

The study found people who had higher confidence in AI generally displayed less critical thinking, while people with higher confidence in themselves tended to display more critical thinking.

This suggests generative AI does not harm one’s critical thinking – provided one has it to begin with.

Problematically, the study relied too much on self-reporting, which can be subject to a range of biases and interpretation issues. Putting this aside, critical thinking was defined by users as “setting clear goals, refining prompts, and assessing generated content to meet specific criteria and standards”.

Source: https://studyfinds.org/is-ai-making-us-stupider-maybe-according-to-one-of-the-worlds-biggest-ai-companies/

What’s the best time for taking a nap?

(© fizkes – stock.adobe.com)

If you’ve ever wondered about the best time to take a nap, researchers have found your answer: 1:42 p.m. This oddly specific time emerged from a new nationwide study that looked at how Americans nap and what makes some people better nappers than others.

The survey, conducted by Talker Research and commissioned by Avocado Green Mattress, found that most people aim for a 51-minute nap, which would have them waking up at 2:33 p.m. But there’s a catch – napping too long can leave you feeling worse than before you closed your eyes.

“As a psychologist, I see firsthand how sleep — especially napping — affects mood, focus and overall well-being. So many people nap the wrong way and then wonder why they feel groggy instead of refreshed,” says Nick Bach, who holds a doctorate in psychology, in a statement.

When Does a Nap Become Too Long?

The study found that naps lasting longer than an hour and 26 minutes – about 35 minutes past the “perfect” length – enter what researchers call the “danger zone.” At this point, you might feel groggy and disoriented instead of refreshed. And if you’re still sleeping after an extra hour and 44 minutes? That’s no longer a nap – you’ve drifted into a full sleep session.
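All of these cutoffs are simple offsets from the 51-minute ideal, so they are easy to check. The sketch below walks the reported 1:42 p.m. start time through each threshold; the date is arbitrary and only the clock arithmetic matters.

    # Illustrative check of the nap thresholds described above.
    from datetime import datetime, timedelta

    nap_start = datetime(2025, 1, 1, 13, 42)     # 1:42 p.m.; date is arbitrary
    ideal = timedelta(minutes=51)                # the "perfect" nap length
    danger = ideal + timedelta(minutes=35)       # 1 hour 26 minutes
    full_sleep = ideal + timedelta(minutes=104)  # an extra 1 hour 44 minutes

    for label, length in [("ideal wake", ideal),
                          ("danger zone", danger),
                          ("full sleep", full_sleep)]:
        print(label, (nap_start + length).strftime("%I:%M %p"))
    # ideal wake 02:33 PM, danger zone 03:08 PM, full sleep 04:17 PM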

But even the ideal 51-minute nap might be too long for most people. Bach warns, “I always tell people that if they nap too long, they risk entering deep sleep, which makes waking up harder. A quick 20-minute nap is perfect for a recharge without the dreaded sleep inertia.”

The Great Debate: TV vs. Silence

While sleep experts often recommend quiet, dark rooms for napping, many Americans have different ideas. The study found that 44% of people like having some background noise during their naps – similar to the 50% who prefer noise while sleeping at night. Nearly half of these nappers (47%) fall asleep with the TV on, while only 7% use a white noise machine.

Bach suggests a middle ground: “I always recommend napping in a quiet, dark and cool space. If total silence isn’t an option, using white noise or soft music can help.”

When it comes to where people nap, there’s another split between expert advice and real-world habits. While 53% follow the traditional route and nap in bed, 38% prefer catching their midday rest on the couch. As Bach notes, “Napping on the couch can work, but a bed with good support is usually better.”

Are Nappers More Successful?

Here’s where the research gets interesting: people who regularly take naps might have better social lives. The study found that 48% of self-described “nappers” report having a “thriving” social life, compared to 34% of non-nappers. The pattern continues in their love lives too, with 50% of nappers reporting satisfaction versus 39% of non-nappers.

While both groups were equally likely to be happy (74% of nappers versus 73% of non-nappers), nappers had a slight edge in feeling successful – 39% compared to 32% of non-nappers. They’re also more likely to care about making sustainable choices, with 74% of nappers considering environmental impact in their decisions versus 68% of non-nappers.

Getting the Timing Right

The study’s finding that 1:42 p.m. is the perfect nap time isn’t just a random number – it fits right into expert recommendations. “I think one of the biggest mistakes people make is napping too late,” Bach explains. “If you nap in the late afternoon or evening, it can mess with your nighttime sleep. Ideally, napping before 3 p.m. keeps your sleep schedule on track.”

The benefits of a well-timed nap are clear: 55% of people in the study said they felt more productive right after waking up from a nap. However, there’s a concerning trend – the Americans surveyed only felt well-rested for about half of an average week, suggesting that many might be using naps to make up for poor nighttime sleep.

Source: https://studyfinds.org/best-time-nap/

Why smart people cheat — even when there’s nothing to gain

Man crossing his fingers behind his back (© Bits and Splits – stock.adobe.com)

Study shows uncertainty might be the key to breaking self-deceptive behaviors

A fitness tracker mysteriously logs extra steps. A calorie-counting app somehow shows lower numbers. An online quiz score seems surprisingly high. While these scenarios might seem like harmless self-improvement tools, new research reveals they represent a fascinating psychological phenomenon: we often cheat unconsciously simply to feel better about ourselves, even when there’s nothing tangible to gain.

“I found that people do cheat when there are no extrinsic incentives like money or prizes but intrinsic rewards, like feeling better about yourself,” explains Sara Dommer, assistant professor of marketing at Penn State and lead researcher of a groundbreaking study published in the Journal of the Association for Consumer Research. “For this to work, it has to happen via diagnostic self-deception, meaning that I have to convince myself that I am actually not cheating. Doing so allows me to feel smarter, more accomplished or healthier.”

This phenomenon, which researchers call “diagnostic self-deception,” helps explain behaviors that traditional theories about cheating cannot. While previous research focused on cheating for material gain, Dommer’s work examines why people cheat even when the only reward is an enhanced self-image.

Inside the Self-Deception Experiments

Through four carefully designed studies, Dommer and her team revealed how this self-deceptive behavior works in everyday situations.

Calorie Counting Study

One of the most illuminating experiments tackled everyday calorie tracking. Researchers presented 288 undergraduate students with a three-day food diary scenario, including restaurant meals like pancakes, sandwiches, and pasta dishes. Some students received exact calorie counts from restaurant websites (e.g., “450 calories for a short stack of buttermilk pancakes”), while others only saw multiple options ranging from 300 to 560 calories.

The results showed that when students lacked specific caloric information, they consistently chose lower calorie estimates. Importantly, the study was designed so that averaging the provided calorie options would match the true caloric value. Instead, participants routinely selected lower numbers, effectively deceiving themselves about their food choices.
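
The study’s design is easy to see with numbers. The sketch below is purely illustrative, using hypothetical calorie options rather than the study’s actual menu; it shows why averaging the options is unbiased while habitually picking a low one understates intake.

```python
# Hypothetical options for one menu item, spanning a range like the study's 300-560.
# By design, the mean of the options equals the true calorie count, so averaging
# would give an accurate diary entry; picking a low option is self-deception.
options = [300, 400, 500, 560]
true_value = sum(options) / len(options)   # 440 kcal with these made-up numbers
low_ball = min(options)                    # the self-serving pick
print(f"true ~{true_value:.0f} kcal, logged {low_ball} kcal, "
      f"understated by {true_value - low_ball:.0f} kcal")
```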

IQ Test Study

Another study examined intelligence self-deception with a cleverly designed multiple-choice IQ test taken by 195 Amazon Mechanical Turk workers. Half the participants saw the correct answers highlighted after a few seconds, allowing them to cheat if they wished. The other half took the test normally.

Not only did the group with access to answers score significantly higher, but they also predicted they would perform better on a future test where cheating wouldn’t be possible. Even more telling, when offered a monetary bonus for accurate predictions of their future performance, they still maintained these inflated expectations. This suggests they truly believed their enhanced scores reflected their intelligence rather than their ability to see the answers.

Anagram Study

A third study used word scrambles to measure intelligence, presenting participants with jumbled words like “konreb” (broken) and “eoshu” (house). Some participants had to type their answers immediately, while others saw the correct answers after three minutes and were asked to self-report how many they had solved. Those who could self-report their scores claimed to have solved significantly more anagrams than those who had to prove their answers in real time.

Financial Literacy Study

The final study tackled financial literacy with an interesting twist. Before taking a financial knowledge test, some participants read the statement: “MOST Americans rate themselves highly on financial knowledge, but two-thirds of American adults CANNOT pass a basic financial literacy test.” This simple reminder of uncertainty significantly reduced cheating behavior, suggesting that when people question their capabilities in an area, they become more interested in accurate self-assessment than self-enhancement.

The Results: What It All Means

These studies revealed a consistent pattern: when people could cheat without obvious external rewards, they did—but only if they could maintain the belief that their performance reflected real ability. In the calorie-tracking study, participants entered about 244 fewer calories per day when they could choose from multiple options. In the IQ test, those who could see answers scored an average of 8.82 out of 10, compared to 5.36 for the control group.

“Participants in the cheat group engaged in diagnostic self-deception and attributed their performance to themselves,” Dommer said. “The thinking goes, ‘I’m performing well because I’m smart, not because the task allowed me to cheat.’”

Importantly, this wasn’t just about inflating numbers. Participants genuinely seemed to believe in their enhanced performance. They predicted similar high scores on future tests where cheating wouldn’t be possible, rated the assessments as legitimate measures of ability, and showed increased confidence in their capabilities afterward.

This pattern only broke down when participants’ certainty about their abilities was shaken. When reminded about widespread overconfidence in financial literacy, participants’ cheating decreased significantly, and their self-assessments became more modest.

“I don’t think there’s a good cheating or a bad cheating,” Dommer said. “I just think it’s interesting that not all cheating has to be conscious, explicit and intentional. That said, these illusory self-beliefs can still be harmful, especially when assessing your financial or physical health.”

These findings give us a new understanding of why people might fudge their step counts or peek at answers during online assessments. It’s not just about hitting arbitrary goals or earning meaningless badges—it’s about maintaining and enhancing beliefs about our capabilities, even if we have to deceive ourselves to do it.

Even this seemingly harmless form of cheating comes with consequences. When people convince themselves they’re naturally gifted rather than acknowledging their shortcuts, they might avoid seeking necessary help or purchasing beneficial products and services.

The research suggests a potential solution: “How do we stop people from engaging in diagnostic self-deception and get a more accurate representation of who they are? One way is to draw their attention to uncertainty around the trait itself. This seems to mitigate the effect,” explains Dommer.

Final Takeaway: How to Avoid Self-Deception

So what’s the big takeaway, especially if you believe you might be guilty of such behavior? While self-deception can provide temporary emotional comfort, it’s worth examining our own tendencies toward unconscious cheating.

Take note when you round down calories, peek at answers, or inflate self-assessments. The goal isn’t to eliminate these behaviors entirely — they’re deeply human — but to recognize when uncertainty about our abilities might actually serve us better than false confidence.

As Dommer’s research shows, acknowledging our limitations often leads to more accurate self-assessment and, ultimately, genuine self-improvement. Companies offering self-assessment tools might consider building in reality checks or uncertainty cues to help users maintain more accurate perceptions of their abilities. After all, real growth starts with honest self-awareness, not comfortable self-deception.

Source: https://studyfinds.org/why-smart-people-cheat/

Devoted nap-takers explain the benefits of sleeping on the job

AP Illustration/Annie Ng

They snooze in parking garages, on side streets before the afternoon school run, in nap pods rented by the hour or stretched out in bed while working from home.

People who make a habit of sleeping on the job comprise a secret society of sorts within the U.S. labor force. Inspired by famous power nappers Winston Churchill and Albert Einstein, today’s committed nap-takers often sneak in short rest breaks, convinced the practice improves their cognitive performance, even though workplace napping still carries a stigma.

Multiple studies have extolled the benefits of napping, such as enhanced memory and focus. A mid-afternoon siesta is the norm in parts of Spain and Italy. In China and Japan, nodding off is encouraged since working to the point of exhaustion is seen as a display of dedication, according to a study in the journal Sleep.

Yet it’s hard to catch a few z’s during regular business hours in the United States, where people who nap can be viewed as lazy. The federal government even bans sleeping in its buildings while at work, except in rare circumstances.

Individuals who are willing and able to challenge the status quo are becoming less hesitant to describe the payoffs of a midday dose of sleep. Marvin Stockwell, the founder of PR firm Champion the Cause, takes short naps several times a week.

“They rejuvenate me in a way that I’m exponentially more useful and constructive and creative on the other side of a nap than I am when I’m forcing myself to gut through being tired,” Stockwell said.

The art of napping

Sleep is as important to good health as diet and exercise, but too many people don’t get enough of it, according to James Rowley, program director of the Sleep Medicine Fellowship at Rush University Medical Center.

“A lot of it has to do with electronics. It used to be TVs, but now cellphones are probably the biggest culprit. People just take them to bed with them and watch,” Rowley said.

Napping isn’t common in academia, where there’s constant pressure to publish, but University of Southern California lecturer Julianna Kirschner fits in daytime naps when she can. Kirschner studies social media, which she says is designed to deliver a dopamine rush to the brain. Viewers lose track of time on the platforms, interrupting sleep. Kirschner says she isn’t immune to this problem — hence, her occasional need to nap.

The key to effective napping is to keep the snooze sessions short, Rowley said. Brief naps are restorative and far more likely to leave you feeling alert.

“Most people don’t realize naps should be in the 15- to 20-minute range,” Rowley said. “Anything longer, and you can have problems with sleep inertia, difficulty waking up, and you’re groggy.”

Individuals who find themselves consistently relying on naps to make up for inadequate sleep should probably also examine their bedtime habits, he said.

A matter of timing

Mid-afternoon is the ideal time for a nap because it coincides with a natural circadian dip, while napping after 6 p.m. may interfere with nocturnal sleep for those who work during daylight hours, said Michael Chee, director of the Centre for Sleep and Cognition at the National University of Singapore.

“Any duration of nap, you will feel recharged. It’s a relief valve. There are clear cognitive benefits,” Chee said.

A review of napping studies suggests that 30 minutes is the optimal nap length in terms of practicality and benefits, said Ruth Leong, a research fellow at the Singapore center.

“When people nap for too long, it may not be a sustainable practice, and also, really long naps that cross the two-hour mark affect nighttime sleep,” Leong said.

Experts recommend setting an alarm for 20 to 30 minutes, which leaves nappers a few minutes to fall asleep while keeping actual sleep time short.

But even a six-minute nap can be restorative and improve learning, said Valentin Dragoi, scientific director of the Center for Neural Systems Restoration, a research and treatment facility run by Houston Methodist hospital and Rice University.

 

Neuroscience mystery solved? How our brains use experiences to make sense of time

Your brain learns patterns through your experiences to create timelines. (McCarony/Shutterstock)

Time flows as a constant stream of moments, but your brain sees patterns in this flow. Now, scientists have discovered exactly how individual neurons learn to recognize and predict these patterns, providing the first direct evidence of how our brains map out the structure of time.

The study, published in Nature, was conducted by researchers at UCLA Health. It required recording the activity of individual neurons in patients who had electrodes implanted in their brains for epilepsy treatment. These recordings offer a rare glimpse into how individual brain cells behave during learning and memory formation—something that’s impossible to observe with standard brain imaging techniques.

“Recognizing patterns from experiences over time is crucial for the human brain to form memory, predict potential future outcomes, and guide behaviors,” says Dr. Itzhak Fried, director of epilepsy surgery at UCLA Health, in a statement. “But how this process is carried out in the brain at the cellular level had remained unknown – until now.”

Prior to the main experiment, researchers needed to identify which images would trigger strong neural responses in each participant. They showed participants about 120 different pictures over 40 minutes, including images of celebrities, landmarks, and other subjects chosen partly based on each person’s interests. Based on how brain cells responded, researchers selected six specific images for each participant to use in the main experiment.

The main study had three phases. In the first phase, images appeared in random order while participants performed simple tasks, like identifying whether the person shown was male or female. During the middle phase, images appeared in sequences that followed specific rules, though participants weren’t told about these rules. Instead, they focused on a new task: determining whether each image was shown normally or in a mirror image. The final phase returned to random sequences and the original gender identification task.

The sequence rules were based on what researchers called a pyramid graph. Six points were arranged in a triangle shape, with each point representing one of the selected images. Lines connected certain points, indicating which images could appear after others. Some images were directly connected, like neighboring points on the graph. Others required taking an indirect path through multiple points to get from one to another.

What makes this study particularly fascinating is that it revealed how individual neurons adapted as participants became familiar with these sequences. At first, a neuron would respond strongly to just one specific image. But over time, these same neurons began responding to images that frequently appeared close together in the sequence, essentially mapping out the temporal relationships between different images.

The brain’s ability to encode these temporal patterns shares remarkable similarities with how it represents physical space. Previous research discovered that certain neurons act as “place cells,” firing when an animal reaches specific locations, while others function as “grid cells” that help measure distances. The new study shows the brain uses comparable mechanisms to map out sequences of events and experiences.

This research also builds on earlier discoveries about “concept cells,” neurons that respond to specific individuals, places, or objects. These specialized brain cells appear to be fundamental building blocks of memory. The new findings show how these neurons work together to create structured representations of our experiences through time.

The researchers discovered that this neural mapping created what they call a “successor representation,” a predictive map that considers not just immediate connections but likely future events. Rather than simply linking one moment to the next, your brain builds a broader model of likely future possibilities based on learned patterns.

“This study shows us for the first time how the brain uses analogous mechanisms to represent what are seemingly very different types of information: space and time,” explains Fried. “We have demonstrated at the neuronal level how these representations of object trajectories in time are incorporated by the human hippocampal-entorhinal system.”

During breaks between testing phases, researchers observed “replay” events, moments when neurons would rapidly rehearse the learned sequences in a compressed timeframe. This neural replay happened in milliseconds, suggesting a mechanism for consolidating learned patterns into memory.

Understanding how the brain encodes temporal patterns goes beyond basic science. The findings could help develop new treatments for memory disorders and advance the design of brain-computer interfaces. They may also inform artificial intelligence systems that aim to process sequential information in ways that mirror human cognition.

Source: https://studyfinds.org/brain-experiences-sense-of-time/

9 predictions for the biggest research breakthroughs of 2025

(Photo by Nan_Got on Shutterstock)

From personalized medicine to wearable technology to hair loss innovations, this year could provide no shortage of ways for humans to live healthier.

Remember when science fiction promised us flying cars and robot butlers? Well, 2025’s actual breakthroughs might not help you commute through the clouds, but they’re poised to transform something far more important: how we understand and care for our human bodies and minds. From reversing hair loss to regenerating teeth, from predicting mental health patterns to personalized genetic treatments, we’re standing on the edge of discoveries that would have seemed like science fiction just a few years ago.

We asked a panel of nine experts to provide us with their predictions for this year’s biggest research breakthroughs. If there’s one thing we can say, it’s that we’re looking forward to a world where gauging and improving our health might be easier than ever before.

What makes these predictions (or should we really call them expectations?) especially fascinating is how they’re all connected by two powerful threads: the rise of personalized medicine and the integration of artificial intelligence. Gone are the days of one-size-fits-all healthcare – whether we’re talking about stress management, dental care, or treating obesity, researchers are uncovering ways to tailor treatments to each person’s unique genetic makeup, gut microbiome, and lifestyle patterns.

But perhaps the most exciting shift isn’t just in what these breakthroughs might achieve, but in how they’re changing our entire approach to healthcare. Instead of waiting for problems to occur and then treating them, 2025’s innovations are all about prevention and early intervention. Imagine a world where your smartwatch can predict a mental health dip before you feel it, where your genes can be edited to prevent diseases before they start, or where your teeth could actually repair themselves. That world isn’t just science fiction anymore – it’s right around the corner.

Advancements in Aging and Mental Health Research

As a geriatric psychiatrist and someone deeply immersed in caregiving and aging issues, I predict 2025 will bring significant advancements in research focused on aging, mental health, and caregiver support. One of the most exciting areas is the use of AI-driven health technologies to detect and manage age-related conditions earlier. For example, wearable devices are becoming smarter at identifying early signs of cognitive decline or physical frailty. I anticipate new breakthroughs in how these tools deliver actionable insights, empowering families and caregivers to intervene before major health events occur.

Another area I’m watching closely is personalized medicine for mental health. Research into biomarkers and genetic testing is advancing quickly, and I believe we’ll soon see targeted treatments for depression, anxiety, and cognitive disorders that are more effective and have fewer side effects. This could be life-changing for older adults who struggle with medication tolerance or for caregivers managing their own stress.

Finally, I predict a surge in studies exploring the psychosocial aspects of caregiving. Researchers are diving deeper into the mental health impacts of caregiving and testing interventions — like mindfulness programs, virtual support groups, and even VR therapy — that help caregivers cope with stress and maintain their well-being. These innovations are essential as caregiving responsibilities grow more common and complex.

What excites me most is the focus on holistic approaches that integrate mental, emotional, and physical health. Whether it’s smarter tech, personalized care, or emotional resilience tools, I believe these breakthroughs will make life better for aging adults and their caregivers too — helping us all age with more grace, dignity, and support.

Mind-Body Connection and Stress Management

2025 will be the year of the mind-body connection — specifically, in understanding how chronic stress physically impacts our bodies. Eighty percent of our nervous system carries information from the body to the brain, not the other way around – yet our approach to mental health targets the mind first.

We’ve already seen at NEUROFIT that our average active user reports a 54% reduction in stress after just one week of mind-body practices — more studies will show how physical interventions can be more effective than traditional cognitive approaches for managing stress and mental health.

Measurement technology can help lead this trend. With wearables becoming more sophisticated, I anticipate studies showing how real-time biometric data can predict stress-related health issues before they become severe. Our own research analyzing millions of stress data points shows that certain physiological patterns consistently precede burnout. Given that chronic stress leads to $1T+ in healthcare expenses each year, I expect to see major studies validating these early warning signals, potentially revolutionizing preventive healthcare.

Another exciting area is the intersection of behavioral science and technology. Studies are currently exploring how brief, targeted interventions can create lasting changes in stress response patterns. We’ve found that 95% of our users experience immediate stress relief within five minutes of specific somatic exercises. I predict we’ll see research showing how short, consistent practices can rewire the nervous system more effectively than longer, sporadic interventions.

Finally, I think we’ll see breakthrough research on social connection’s role in nervous system regulation. Our data shows that prioritizing social play can improve emotional balance by up to 26%. I expect studies in 2025 will further validate how structured social interactions can significantly impact stress resilience and overall mental health outcomes.

These developments could radically change how we approach stress management and mental health care, moving from reactive treatment to proactive regulation and prevention.

Regenerative Medicine in Dentistry

As a dentist, I’m particularly excited about advancements in regenerative medicine and biomaterials for dentistry. Researchers are exploring ways to grow dental tissues or repair teeth using stem cells, which could revolutionize how we treat tooth decay and damage. Imagine being able to regenerate lost enamel or even replace a missing tooth without needing implants. These breakthroughs could lead to less invasive and more natural dental solutions for patients.

In the broader medical field, wearable technology and AI-driven diagnostics are also advancing quickly. Devices that monitor health metrics like glucose levels, heart rate, and oral health indicators in real time could become more accurate and accessible. These tools could improve preventive care by catching potential health issues early, leading to better outcomes for patients. I believe 2025 will bring us closer to more personalized and proactive healthcare.

AI-Driven Personalized Medicine

By 2025, I foresee significant advancements in AI-driven personalized medicine, wherein the integration of genomics, patient data analytics, and AI will result in much more precise and targeted treatments. There is a growing interest among researchers in developing AI-powered algorithms that could forecast disease progression based on a person’s genetic makeup, lifestyle, and environmental exposure. This would facilitate more proactive and personalized interventions, especially in chronic disease management, oncology, and neurology.

Another field that I predict breakthroughs in is the integration of AI and wearables for real-time health monitoring. Various studies are underway that test wearable technologies that gather continuous physiological data, to be analyzed by AI to spot early signs of impending heart attacks, strokes, or complications arising from diabetes, even before symptoms begin to appear. This will change healthcare from being a reactive practice to proactive care and ensure timely intervention for patients.

Finally, I foresee a rapid increase in research on regenerative medicine, specifically stem cell therapies and tissue engineering. With technological advancement may come the ability to regenerate tissues and organs damaged by trauma, disease, and conditions that until today could not be cured, such as heart disease, spinal cord injury, and neurodegenerative diseases. This space will no doubt interconnect with AI and machine learning to improve results and speed up the effectiveness of treatments.

Genetics and Personalized Preventative Medicine

As a recruiter working in the life sciences industry, I have insider knowledge of the hiring shifts promising to transform medicine in the coming years.

Right now, it’s all about genetics. Personalized preventative medicine is what everyone wants. In other words, why treat a disease if you can avoid it? Tailored care takes into account a patient’s predispositions on a genetic level and neutralizes the threat before it manifests. It’s more possible than ever before, and I’m placing top talent in the sector daily. These candidates range from analysts looking at large data samples to patient-facing counselors focusing on a single profile, but by far, genetic therapy is the most exciting. With CRISPR technology, we’re on the cusp of being able to rework genetic abnormalities to our advantage, instead of simply waiting for them to be expressed. This has the potential to disrupt our understanding of the entire human body.

Progress in Hair Regeneration Research

In 2025, I anticipate significant progress in hair regeneration research, particularly in stem cell therapy and gene editing technologies. These studies aim to revolutionize treatments for hair loss by targeting the root causes at the cellular level. For example, researchers are exploring ways to reactivate dormant hair follicles or create lab-grown hair that matches the individual’s natural growth patterns.

Additionally, advancements in understanding the scalp’s microbiome could lead to personalized solutions for conditions like dandruff and inflammation, which impact hair health. These breakthroughs can potentially make treatments more effective, less invasive, and tailored to the unique needs of every individual. It’s an exciting time for the field of hair health.

Personalized Nutrition Through Microbiome Research

In 2025, work-in-progress research on the gut microbiome will start having a direct influence on health. Scientists are unpacking how unique gut bacterial profiles influence everything from nutrient absorption to immune function, opening the door to personalized nutrition solutions.

Several ongoing studies aim to develop microbiome-based tools to treat chronic ailments like obesity, IBS, and diabetes. These advances promise to enable personalized nutrition based on an individual’s gut composition.

Wearable technology will also be combined with microbiome research to monitor gut health, allowing individuals to take proactive, data-driven approaches to wellness.

Revolutionizing Obesity Treatment with GLP-1

The year 2025 could mark a significant milestone in the treatment of obesity, thanks to advancements in GLP-1 receptor agonist drugs. These drugs, Ozempic in particular, are being investigated for benefits beyond weight loss, including improving patients’ metabolic profiles and decreasing the likelihood of chronic diseases such as diabetes and cardiovascular disease.

The research is not confined to obesity management and weight loss; it also examines GLP-1’s effects on the brain, appetite control, and inflammatory cytokines. Early findings suggest these drugs may help prevent neurodegenerative diseases and are associated with enhanced cognitive performance.

AI-based personalized medicine is expected to enhance these developments further. By drawing on genetic and metabolic data, clinicians may be able to determine the best course of treatment for each patient and achieve better outcomes.

Source: https://studyfinds.org/9-predictions-biggest-research-breakthroughs-202/

Teens spend 90+ minutes on their phones during typical school day

(Photo by BearFotos on Shutterstock)

As schools nationwide grapple with smartphone policies, new research provides unprecedented and shocking insight into how teenagers use their phones during school hours. Using sophisticated tracking technology, researchers discovered that students spend an average of 92 minutes on their smartphones during a typical school day, with a quarter of students exceeding 2 hours of use.

Moving beyond simple screen time measurements, researchers deployed passive sensing technology to paint a detailed picture of how and when adolescents use their phones during the school day. Their findings raise important questions about the role of smartphones in modern education and their potential impact on learning.

Research led by Dr. Dimitri A. Christakis at Seattle Children’s Research Institute found that this school-day phone use accounts for approximately 27% of students’ total daily phone usage, which averages 5.59 hours. More revealing than the raw numbers is how students spend their phone time during school hours.
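
Those two figures are consistent with each other, as a quick back-of-the-envelope check shows:

```python
school_minutes = 92     # average school-day phone use reported in the study
daily_hours = 5.59      # average total daily phone use
share = school_minutes / (daily_hours * 60)
print(f"{share:.1%} of daily use happens during school")  # ~27.4%, matching the ~27% reported
```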

Social media and messaging dominate school-hour phone use, with Instagram leading social platforms. Instagram users in the study spent an average of about 25 minutes on the platform during school hours alone. Messaging and chat applications averaged 19.5 minutes of use during school hours, while video streaming services claimed about 17 minutes.

Looking at demographic patterns, older teens (ages 16-18) logged significantly more phone time during school hours compared to younger teens (ages 13-15), spending about 33 more minutes on their devices. Female students showed higher usage rates than male students, using their phones approximately 29 minutes more during school hours.

Parental attempts to limit screen time appeared to have little impact on school-hour phone use. Students with parental limits on screen time showed similar usage patterns to those without restrictions, suggesting that school-based interventions might be more effective than home-based rules.

Educational background of parents emerged as a significant factor. Students whose parents held bachelor’s degrees spent about 32 minutes less time on their phones during school hours compared to peers whose parents did not have college degrees. This correlation raises important questions about the role of family educational culture in shaping student technology habits.

The study, published in JAMA Pediatrics, also revealed interesting patterns among different demographic groups. Hispanic students showed significantly higher social media use during school hours compared to their white peers, spending about 25 more minutes on social platforms. Meanwhile, students identifying as LGBTQIA+ showed similar usage patterns to their non-LGBTQIA+ peers, with no statistically significant differences in overall phone use.

While smartphones offer potential benefits for learning and communication, these findings suggest their primary use during school hours may be misaligned with educational goals. More schools are expected to implement phone restrictions in the coming years, with research like this providing valuable data to inform those policy decisions.

Source: https://studyfinds.org/teens-spend-90-minutes-phones-during-school/

From A to Zzzs: The science behind a better night’s sleep

It’s no secret that a good night’s sleep plays a vital role in mental and physical health and well-being. The way you feel during your waking hours depends greatly on how you are sleeping, say sleep experts.

A pattern of getting inadequate or unsatisfying sleep over time can raise the risk for chronic health problems and can affect how well we think, react, work, learn and get along with others.

According to the National Heart, Lung and Blood Institute, an estimated 50 to 70 million Americans have sleep disorders, and one in three adults does not regularly get the recommended amount of uninterrupted sleep needed to protect their health.

Many factors play a role in preparing the body to fall asleep and wake up, according to the National Institutes of Health. Our internal “body clock” manages the sleep and waking cycles and runs on a 24-hour repeating rhythm, called the circadian rhythm. This rhythm is controlled both by the amount of a sleep-inducing compound called adenosine in our system and cues in our environment, such as light and darkness. This is why sleep experts suggest keeping your bedroom dark during your preferred sleeping hours.

Sleep is also controlled by two main hormones, melatonin and cortisol, which our bodies release in a daily rhythm that is controlled by the body clock.

Exposure to bright artificial light—such as from television, computer and phone screens—late in the evening can disrupt this process, making it hard to fall asleep, explained Sanjay Patel, director of the UPMC Comprehensive Sleep Disorders Clinical Program and a professor of medicine and epidemiology at the University of Pittsburgh.

Keeping our body clock and hormone levels more or less regulated is the best way to consistently achieve good sleep, Patel said. He encouraged people who struggle with sleep to focus on behavioral changes rather than quick fixes, such as over-the-counter sleep supplements like melatonin or upping alcohol intake to feel drowsy.

Patel said there’s not much clinical evidence that melatonin supplements work very well, and that “a lot of the clinical trials of melatonin haven’t shown consistent evidence that it helps with insomnia.”

He did point out that the supplement isn’t particularly harmful either, except when “people start increasing and increasing the dose. And in particular, we worry about the high doses that a lot of children are being given by their parents, where it really can cause problems,” he said. Taking any more than three to five milligrams doesn’t increase the sedative effects, “and yet, we see people showing up to clinic all the time taking 20 milligrams.”

Sleeping potions

Many have suggested that warm milk, chamomile tea or tart cherry juice can induce a somniferous effect. While Patel said there’s no evidence they work, he did point out that they’re preferable to a nightcap.

“Alcohol is really bad for your sleep long term, for a number of reasons,” Patel said. First, alcohol relaxes the throat muscles, which can make sleep apnea and snoring worse for sufferers. Second, the body metabolizes alcohol rather quickly, so its sedative effects do not last throughout the night.

“So while it may put you to sleep, what happens is, three or four hours later, the alcohol has been metabolized, and now you will wake up from not having alcohol in your system,” he said.

Evening libations can also increase acid reflux and long-term drinking can cause “changes in your brain chemistry and is a big cause of insomnia,” he said. Heavy drinkers who suffer from insomnia will often increase their intake of alcohol in an effort to fall asleep, thus creating a dangerous cycle that could lead to alcohol use disorder.

Cannabis is not much better, Patel said.

While a handful of pot users—specifically those who use it to treat anxiety—may see some sleep benefits, cannabis generally does not help chronic insomnia and is likely to make it worse.

“They actually see a lot of people whose sleep gets better when they stop using (cannabis),” Patel said.

Instead of turning to sleep aids—natural or otherwise—Patel said developing a bedtime routine that promotes relaxation and unwinding is a much better route to a good night’s rest.

Whether it’s taking a hot bath, reading a book, meditating or even tuning into the nightly news, the brain will associate an oft-repeated bedtime ritual with the relaxation required to fall asleep, he explained.

You can watch television, but stay off social media, he said. “The algorithms on social media are designed to keep us engaged and end up contributing to people not closing their eyes until much later than they planned.”

Other common reasons that sleep can be unsatisfying or elusive are stress, worry and the simple fact that many people don’t give themselves enough time for rest.

“We see all the time that people plan to go to bed at a certain time, but then once they get into bed, they do other things and keep their mind active,” such as responding to emails, paying bills or scrolling on social platforms.

Aging influence

The rhythm and timing of the body clock changes with age, Patel said.

People need more sleep early in life when they’re growing and developing. For example, newborns may sleep more than 16 hours a day, and preschool-age children need to take naps.

In the teen years, the internal clock shifts so that they fall asleep later in the night, but then want to sleep in late. This is troublesome for teens because “they need to be up for school at 6:30 a.m. and so that’s causing lots of problems,” Patel said.

Some school districts in the region, including Pittsburgh Public in 2023, have shifted to later start times with this in mind.

For adults, sleep during middle age can be tricky with young children in the home who disrupt parents’ sleeping patterns. This is also a time of life when stress and worry are heightened, he said.

Older adults tend to go to bed earlier and wake up earlier, but they’ve got their own unique challenges, Patel said.

“A lot of physical problems mean that people are often waking up more in the night as they age. They have to get up to go to the bathroom. They have chronic aches and pains that wake them up. They’re often taking medications that … have side effects that affect your sleep,” he said.

Source: https://medicalxpress.com/news/2025-02-zzzs-science-night.html

 

Vacation days are the key to well-being? Study explains important link

(© Monkey Business – stock.adobe.com)

If you’re like many Americans, you probably didn’t take all your vacation time this past year. Even if you did, chances are you didn’t fully unplug while away from the office. But according to new research from the University of Georgia, those vacation days aren’t just a nice perk—they’re crucial for your well-being.

The research, published in the Journal of Applied Psychology, analyzed 32 different studies across nine countries. Researchers discovered something surprising: vacation benefits last much longer than previously believed. While we’ve long known that vacations can improve well-being, this comprehensive review found these positive effects persist well after returning to work, challenging earlier beliefs that vacation benefits quickly disappear.

“We think working more is better, but we actually perform better by taking care of ourselves,” explains lead author Ryan Grant, a doctoral student in psychology at UGA’s Franklin College of Arts and Sciences, in a statement. “We need to break up these intense periods of work with intense periods of rest and recuperation.”

The catch? How you spend your vacation matters significantly. The research team found that truly disconnecting from work produced the greatest benefits. This means avoiding work emails, skipping those “quick check-ins” with the office, and genuinely allowing yourself to mentally detach from workplace responsibilities.

“If you’re not at work but you’re thinking about work on vacation, you might as well be at the office,” says Grant. “Vacations are one of the few opportunities we get to fully just disconnect from work.”

Physical activity emerged as another key factor in maximizing vacation benefits. But don’t worry, this doesn’t mean you need to run marathons during your beach trip.

“Basically anything that gets your heart rate up is a good option,” explains Grant. “Plus, a lot of physical activities you’re doing on vacation, like snorkeling, for example, are physical. So they’re giving you the physiological and mental health benefits. But they’re also unique opportunities for these really positive experiences that you probably don’t get in your everyday life.”

The length of your vacation also plays a crucial role. The study found that longer vacations generally led to greater improvements in well-being, though these effects also tended to decline more quickly upon return. The researchers recommend building in buffer days both before and after your trip. Taking time to pack and prepare reduces pre-vacation stress while having a day or two to readjust after returning can ease the transition back to work life.

Cultural differences revealed interesting patterns, too. In countries where work achievement and success are highly valued, people experience more dramatic benefits from vacation time, likely because they really need the break. However, they also show steeper declines in well-being when returning to work. Workers in countries with more mandatory vacation days tended to get more out of their time off, possibly because taking vacations is more normalized and accepted.

These findings arrive at a critical moment, as vacation usage has declined in recent decades. In 2018 alone, American workers left 768 million vacation days unused, surrendering approximately $65 billion in benefits. This trend persists despite mounting evidence that prolonged work without adequate breaks can lead to burnout, anxiety, depression, and even physical health problems.
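
Dividing one reported figure by the other gives a rough sense of what each forfeited day was worth. This per-day number is a derived back-of-the-envelope estimate, not one reported by the study:

```python
unused_days = 768_000_000          # U.S. vacation days left unused in 2018
forfeited_value = 65_000_000_000   # approximate benefits surrendered, in dollars
print(f"~${forfeited_value / unused_days:.2f} per unused vacation day")  # ~$84.64
```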

Maybe we should all rethink how we view vacations. Rather than seeing them as optional luxuries, we should recognize them as essential tools for maintaining well-being and long-term productivity. Whether it’s a two-week adventure or a long weekend getaway, the key is to fully disconnect and engage in activities that provide both physical and mental benefits.

Source: https://studyfinds.org/vacation-days-long-term-health/

The secret to career success? It might be hidden in your free time

(© Drobot Dean – stock.adobe.com)

In an age of endless productivity hacks and work-life balance tips, new research offers a refreshing perspective: what if you could advance your career while actually enjoying your leisure time? A study suggests this elusive goal might be more achievable than previously thought, introducing a concept called “leisure-work synergizing” that could revolutionize how we think about professional development.

Conventional wisdom has long suggested that work and leisure should remain separate. Clock out, go home, and leave work behind. But researchers Kate Zipay from Purdue University and Jessica Rodell from the University of Georgia have uncovered evidence that thoughtfully blending certain work-related elements into leisure activities might actually enhance both professional growth and personal enjoyment.

The concept, published in Organization Science, goes beyond simply answering emails after hours or catching up on work during weekends.

“We found that employees who intentionally integrate professional growth into their free time – like listening to leadership podcasts, watching TED Talks or reading engaging business books – report feeling more confident, motivated and capable at work,” explains Zipay. This innovative approach allows people to develop professionally without sacrificing the fundamental pleasure of leisure time.

The Science Behind the Strategy

The research team tracked 89 professionals over five weeks, examining how their leisure choices influenced their work performance and emotional state. Participants completed surveys about their activities and experiences during evenings and weekends, followed by assessments of their workplace mindset and performance the next day.

What emerged was a clear pattern: when people engaged in leisure activities that had some connection to professional growth, they reported significantly higher levels of self-assurance, feeling more confident and capable at work. This boost in confidence translated into better overall workplace performance and satisfaction.

However, the research revealed an important caveat: personality matters. Not everyone benefits equally from blending work and leisure. The study identified two distinct types of people: “integrators” who naturally prefer fluid boundaries between work and personal life, and “segmenters” who thrive on keeping these domains separate.

“Employees who prefer a clear separation between work and personal life might struggle with this approach,” notes Zipay, “highlighting the importance of tailoring the practice to individual preferences.”

For integrators, leisure-work synergizing proved particularly beneficial, actually reducing fatigue rather than adding to it. Meanwhile, segmenters showed less positive results from the practice, suggesting that forcing this approach when it doesn’t align with personal preferences could be counterproductive.

‘Done right, it’s a game-changer’

This research arrives at a crucial moment when traditional boundaries between work and personal life continue to blur, especially in the wake of remote work trends. Rather than fighting against this evolution, the study suggests we might benefit from being more strategic about it.

“This isn’t about making your free time feel like work,” emphasizes Zipay. “It’s about leveraging activities you already love in a way that fuels your professional growth. Done right, it’s a game-changer for employees and employers alike.”

Look for those natural overlaps where professional growth can occur alongside genuine enjoyment. For instance, the explosive growth of platforms like MasterClass and the surging popularity of business and personal development podcasts suggest many people already naturally gravitate toward this kind of enriching leisure activity.

For organizations and employees alike, these findings open up new possibilities for professional development. Instead of relying solely on traditional training programs or expecting employees to sacrifice personal time for growth, companies might benefit from supporting more flexible and integrated approaches to skill development.

Rather than choosing between career advancement and personal enjoyment, careful integration of the two might offer the best of both worlds, proving that sometimes you really can have your cake and eat it too.

Source: https://studyfinds.org/secret-to-career-success-free-time/

Why being a ‘bingo night’ regular could buy your brain an extra 5 years

(© Monkey Business – stock.adobe.com)

Going out to restaurants, playing bingo, visiting friends, or attending religious services could give you extra years of healthy brain function, according to new research from Rush University Medical Center. Their study found that older adults who stayed socially active typically developed dementia five years later than those who were less social. It’s a difference that could both extend life and save hundreds of thousands in healthcare costs.

“This study shows that social activity is related to less cognitive decline in older adults,” said Bryan James, PhD, associate professor of internal medicine at Rush, in a statement. “The least socially active older adults developed dementia an average of five years before the most socially active.”

The research team followed 1,923 older adults who were initially dementia-free, checking in with them yearly to track their social activities and cognitive health. They looked at six everyday kinds of social activity: going out to restaurants, sporting events, or bingo games; taking trips; doing volunteer work; visiting relatives or friends; participating in groups; and attending religious services.

Over nearly seven years of follow-up, 545 participants developed dementia, while 695 developed mild cognitive impairment (MCI), which often precedes dementia. After accounting for factors like age, education, gender, and marital status, the researchers found that each one-point increase in social activity score was linked to a 38% lower chance of developing dementia.

Being social seems to help the brain in several ways. When we engage socially, we exercise the parts of our brain involved in memory and thinking. “Social activity challenges older adults to participate in complex interpersonal exchanges, which could promote or maintain efficient neural networks in a case of ‘use it or lose it,’” explains James.

The benefits of social activity appear to work independently of other social factors, like how many friends someone has or how supported they feel. This suggests that simply getting out and doing things with others could be more important than the size of your social circle.

The research takes on new urgency following the COVID-19 pandemic, which left many older adults isolated. The findings suggest that communities might benefit from creating more opportunities for older adults to engage socially, whether through organized activities, volunteer programs, or regular social gatherings.

Source: https://studyfinds.org/social-seniors-five-years-dementia/

The bitter truth: Science reveals why coffee tastes different to everyone

What affects coffee’s bitterness more: roasting techniques or your predisposed genetics? (Photo by Mix and Match Studio on Shutterstock)

Next time you take a sip of coffee and scrunch your nose at its bitter taste, your DNA might be to blame. New research from scientists in Germany has uncovered fascinating insights into why Arabica coffee’s signature bitterness varies from person to person, and it’s not just about how dark the roast is.

The study, published in Food Chemistry, was conducted at the Technical University of Munich’s Leibniz Institute for Food Systems Biology. Researchers have identified a new group of bitter compounds formed during coffee roasting.

“Indeed, previous studies have identified various compound classes that contribute to bitterness. During my doctoral thesis, I have now identified and thoroughly analyzed another class of previously unknown roasting substances,” says study author Coline Bichlmaier, a doctoral student, in a statement.

While caffeine has long been known as coffee’s primary bitter component, even decaffeinated coffee tastes bitter, indicating other compounds are at work. At the heart of this bitter business is a compound called mozambioside, found naturally in raw coffee beans. It’s about ten times more bitter than caffeine and particularly abundant in naturally caffeine-free coffee varieties. However, this may not be at the root of that bitter taste.

“Our investigations showed that the concentration of mozambioside decreases significantly during roasting so that it only makes a small contribution to the bitterness of coffee,” says principal investigator Roman Lang.

Through detailed chemical analysis, researchers tracked mozambioside as coffee beans roasted. They found it breaks down into seven specific compounds, each contributing its own bitter properties. Using ultra-high-performance liquid chromatography and mass spectrometry, essentially very precise chemical detection methods, they measured exactly how much of each compound forms during roasting and transfers into your cup.

When studying Colombian Arabica coffee specifically, they found that not everyone experiences these bitter compounds the same way. A specific gene called TAS2R43, which codes for one of our approximately 25 bitter taste receptors, plays a crucial role. About 20% of Europeans have a deletion in this gene, meaning they’re missing that particular bitter taste receptor entirely.

In standardized taste tests with 11 volunteers, researchers analyzed each participant’s DNA using saliva samples to determine their TAS2R43 gene status. The genetic tests revealed that two participants had both copies of the TAS2R43 gene defective, seven had one intact and one defective copy, and only two had both copies fully intact.

The results revealed striking differences in bitter perception based on genetics. When combining mozambioside with its roasting products in a sample, eight out of eleven test subjects perceived a bitter taste, one found it astringent, and two didn’t notice any particular taste.

During roasting experiments at different temperatures, researchers discovered that some bitter compounds peaked at 240°C, while others continued increasing up to 260°C. These findings join our existing knowledge about other bitter-tasting substances formed during roasting, including compounds called caffeoylquinides (from chlorogenic acids), diketopiperazines (from coffee proteins), and oligomers of 4-vinylcatechols (from caffeic acids).

Bitter taste receptors aren’t only found in our mouths. They exist throughout the body in various organs and tissues. Studies indicate they help fight pathogens in our respiratory tract, assist with defense mechanisms in our intestines and blood cells, and may play a role in metabolism regulation.

“The new findings deepen our understanding of how the roasting process influences the flavor of coffee and open up new possibilities for developing coffee varieties with coordinated flavor profiles,” says Lang. “They are also an important milestone in flavor research, but also in health research. Bitter substances and their receptors have further physiological functions in the body, most of which are still unknown.”

With global production reaching 102.2 million 60-kilogram bags of Arabica coffee in 2023/24, understanding these bitter compounds and how we perceive them matters enormously. For coffee lovers and producers alike, this research provides scientific validation for something many have long suspected: we really do experience coffee differently from one another, and it’s written in our genes.

Source: https://studyfinds.org/why-coffee-tastes-different-to-everyone/

Teflon flu cases surge: What you need to know

(Credit: Simca/Shutterstock)

From frying pans to muffin tins and saucepans – you can get nonstick surfaces on just about any type of cookware. However, did you know that the nonstick coating can make some people ill?

In 2023, U.S. poison centers received 267 reports of suspected polymer fume fever, or “Teflon flu,” roughly triple the annual numbers of previous years; just 79 cases were reported in 2019. Such reports have been logged since 2011 and are increasing in frequency.

Not all of the 267 cases were confirmed, and not all patients reported symptoms. Some patients may have been exposed to the chemicals at work.

The disorder is called polymer fume fever because the polymers that make up nonstick coatings typically come from polytetrafluoroethylene (PTFE), which prevents foods from sticking to a pan. According to the National Institutes of Health, this material can break down into tiny particles at high cooking temperatures, releasing toxic gases and chemicals.

The Poison Center explains that Teflon flu is caused by inhaling the fumes from burning products, including PTFE. Symptoms include:

  • headaches
  • fever
  • shivering or chills
  • unpleasant taste
  • thirst
  • coughing
  • nausea
  • weakness
  • muscle aches or cramps

Symptoms of Teflon flu can last one to two days.

How can you prevent polymer fume fever?

Whether at work or at home, the following safety principles can help you avoid illness when working around fumes:

Ventilation and Air Filtration: Use exhaust fans, open windows, or operate air purifiers equipped with HEPA filters to lessen indoor air pollution and facilitate the removal of airborne contaminants. Avoid using abrasive cleaning methods (such as using steel wool or metal scouring pads) that produce airborne particles.

Material Selection for Safer Alternatives: Opt for low-emission materials and products with reduced metal content when possible. Choose water-based or low-VOC (volatile organic compound) cleaning agents and coatings to minimize exposure to toxic substances, and favor environmentally friendly, low-emission options for consumer goods whenever possible.

For metal and polymer fume exposures, most of the symptoms of Teflon flu will resolve in 24 to 48 hours. If you or someone you know has been exposed to metal fumes or polymer fumes and are experiencing either metal fume fever or polymer fume fever, follow these steps:

  1. Move away from the source causing the fumes.
  2. Drink plenty of water to stay hydrated.
  3. Take over-the-counter medications such as ibuprofen or acetaminophen to help manage fever and body aches.

Call the Poison Center right away at 1-800-222-1222 to receive immediate first aid and instructions from a specially trained nurse or pharmacist.

Source: https://studyfinds.org/teflon-flu-cases-surge-what-you-need-to-know/?nab=0

Social media has long battled bot overload — Now AI is both the problem and the cure

(Image by VectorMine on Shutterstock)

Remember when the biggest threat online was a computer virus? Those were simpler times. Today, we face a far more insidious digital danger: AI-powered social media bots. A study by researchers from the University of Washington and Xi’an Jiaotong University reveals both the immense potential and concerning risks of using large language models (LLMs) like ChatGPT in the detection and creation of these deceptive fake profiles.

Social media bots — automated accounts that can mimic human behavior — have long been a thorn in the side of platform operators and users alike. These artificial accounts can spread misinformation, interfere with elections, and even promote extremist ideologies. Until now, the fight against bots has been a constant game of cat and mouse, with researchers developing increasingly sophisticated detection methods, only for bot creators to find new ways to evade them.

Enter the era of large language models. These AI marvels, capable of understanding and generating human-like text, have shown promise in various fields. But could they be the secret weapon in the war against social media bots? Or might they instead become a powerful tool for creating even more convincing fake accounts?

The research team, led by Shangbin Feng, set out to answer these questions by putting LLMs to the test in both bot detection and bot creation scenarios. Their findings paint a picture of both hope and caution for the future of social media integrity.

“There’s always been an arms race between bot operators and the researchers trying to stop them,” says Feng, a doctoral student in Washington’s Paul G. Allen School of Computer Science & Engineering, in a university release. “Each advance in bot detection is often met with an advance in bot sophistication, so we explored the opportunities and the risks that large language models present in this arms race.”

On the detection front, the news is encouraging. The researchers developed a novel approach using LLMs to analyze various aspects of user accounts, including metadata (like follower counts and account age), the text of posts, and the network of connections between users. By combining these different streams of information, their LLM-based system was able to outperform existing bot detection methods by an impressive margin—up to 9.1% better on standard datasets.
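
For the technically curious, the general recipe can be sketched in a few lines of Python: pack the three streams of evidence (account metadata, post text, and network context) into a single prompt and ask a language model for a verdict. This is a minimal illustration of the idea, not the authors’ actual pipeline; the prompt wording and the ask_llm helper are invented stand-ins.

```python
# Minimal sketch of an LLM-based bot detector: combine account metadata,
# post text, and follow-graph context into one prompt and ask the model
# for a one-word verdict. Illustrative only, not the study's pipeline.

def build_prompt(metadata: dict, posts: list[str], neighbors: list[str]) -> str:
    meta = ", ".join(f"{k}={v}" for k, v in metadata.items())
    post_block = "\n".join(f"- {p}" for p in posts[:10])   # cap for context length
    graph = ", ".join(neighbors[:20])                      # sampled connections
    return (
        "You are a social media bot detector.\n"
        f"Account metadata: {meta}\n"
        f"Recent posts:\n{post_block}\n"
        f"Accounts it interacts with: {graph}\n"
        "Answer with exactly one word: 'bot' or 'human'."
    )

def classify(metadata, posts, neighbors, ask_llm) -> bool:
    """ask_llm is any callable that sends a prompt to an LLM and returns text."""
    answer = ask_llm(build_prompt(metadata, posts, neighbors))
    return answer.strip().lower().startswith("bot")

# Usage with a stub standing in for a real model call:
is_bot = classify(
    {"followers": 12, "following": 4980, "account_age_days": 30},
    ["Check out this AMAZING deal!!!"] * 5,
    ["@promo_hub", "@deals4u"],
    ask_llm=lambda prompt: "bot",
)
print(is_bot)  # True
```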

Large language models like ChatGPT can play a major role in the detection and creation of deceptive fake profiles, researchers warn. (Photo by Tada Images on Shutterstock)

What’s particularly exciting about this approach is its efficiency. While traditional bot detection models require extensive training on large datasets of labeled accounts, the LLM-based method achieved its superior results after being fine-tuned on just 1,000 examples. This could be a game-changer in a field where high-quality, annotated data is often scarce and expensive to obtain.

However, the study’s findings weren’t all rosy. The researchers also explored how LLMs might be used by those on the other side of the battle — the bot creators themselves. By leveraging the language generation capabilities of these AI models, they were able to develop strategies for manipulating bot accounts to evade detection.

These LLM-guided evasion tactics proved alarmingly effective. When applied to known bot accounts, they reduced the detection rate of existing bot-hunting algorithms by up to 29.6%. The manipulations ranged from subtly rewriting bot-generated text to make it appear more human-like to strategically changing which accounts a bot follows or unfollows.

Perhaps most concerning is the potential for LLMs to create bots that are not just evasive but truly convincing. The study demonstrated that LLMs could generate user profiles and posts that capture nuanced human behaviors, making them far more difficult to distinguish from genuine accounts.

This dual-use potential of LLMs in the realm of social media integrity presents a challenge for platform operators, researchers, and policymakers alike. On one hand, these powerful AI tools could revolutionize our ability to identify and remove malicious bot accounts at scale. On the other, they risk becoming a sophisticated weapon in the arsenal of those seeking to manipulate online discourse.

Source: https://studyfinds.org/social-media-bots-ai/?nab=0

6G revolution begins: Researchers achieve record-breaking data speeds

(© sitthiphong – stock.adobe.com)

The road to 6G wireless networks just got a little smoother. Scientists have made a significant leap forward in terahertz technology, potentially revolutionizing how we communicate in the future. An international team has developed a tiny silicon device that could double the capacity of wireless networks, bringing us closer to the promise of 6G and beyond.

Imagine a world where you could download an entire season of your favorite show in seconds or where virtual reality feels as real as, well, reality. This is what scientists believe terahertz technology can potentially bring to the world. Their work is published in the journal Laser & Photonics Review.

This tiny marvel, a silicon chip smaller than a grain of rice, operates in a part of the electromagnetic spectrum that most of us have never heard of: the terahertz range. Think of the electromagnetic spectrum as a vast highway of information.

We’re currently cruising along in the relatively slow lanes of 4G and 5G. Terahertz technology? That’s the express lane, promising speeds that make our current networks look like horse-drawn carriages in comparison.

Terahertz waves occupy a sweet spot in the electromagnetic spectrum between microwaves and infrared light. They’ve long been seen as a promising frontier for wireless communication because they can carry vast amounts of data. However, harnessing this potential has been challenging due to technical limitations.

The researchers’ new device, called a “polarization multiplexer,” tackles one of the key hurdles in terahertz communication: efficiently managing different polarizations of terahertz waves. Polarization refers to the orientation of the wave’s oscillation. By cleverly manipulating these polarizations, the team has essentially created a traffic control system for terahertz waves, allowing more data to be transmitted simultaneously.

If that sounds like technobabble, think of it as a traffic cop for data, able to direct twice as much information down the same road without causing a jam.

“Our proposed polarization multiplexer will allow multiple data streams to be transmitted simultaneously over the same frequency band, effectively doubling the data capacity,” explains lead researcher Professor Withawat Withayachumnankul from the University of Adelaide, in a statement.

At the heart of this innovation is a compact silicon chip measuring just a few millimeters across. Despite its small size, this chip can separate and combine terahertz waves with different polarizations with remarkable efficiency. It’s like having a tiny, incredibly precise sorting machine for light waves.

To create this device, the researchers used a 250-micrometer-thick silicon wafer with very high electrical resistance. They employed a technique called deep reactive-ion etching to carve intricate patterns into the silicon. These patterns, consisting of carefully designed holes and structures, form what’s known as an “effective medium” – a material that interacts with terahertz waves in specific ways.

The team then subjected their device to a battery of tests using specialized equipment. They used a vector network analyzer with extension modules capable of generating and detecting terahertz waves in the 220-330 GHz range with minimal signal loss. This allowed them to measure how well the device could handle different polarizations of terahertz waves across a wide range of frequencies.

“This large relative bandwidth is a record for any integrated multiplexers found in any frequency range. If it were to be scaled to the center frequency of the optical communications bands, such a bandwidth could cover all the optical communications bands.”

In their experiments, the researchers demonstrated that their device could effectively separate and combine two different polarizations of terahertz waves with high efficiency. The device showed an average signal loss of only about 1 decibel – a remarkably low figure that indicates very little energy is wasted in the process. Even more impressively, the device maintained a polarization extinction ratio (a measure of how well it can distinguish between different polarizations) of over 20 decibels across its operating range. This is crucial for ensuring that data transmitted on different polarizations doesn’t interfere with each other.
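
For readers who don’t think in decibels, those two figures translate into intuitive fractions via the standard dB-to-linear conversion. A quick back-of-the-envelope check in Python (only the 1 dB and 20 dB values come from the study):

```python
# Decibels to linear power ratios: ratio = 10 ** (dB / 10).
insertion_loss_db = 1.0      # average signal loss reported for the device
extinction_ratio_db = 20.0   # polarization extinction ratio reported

transmitted = 10 ** (-insertion_loss_db / 10)   # fraction of power surviving
leakage = 10 ** (-extinction_ratio_db / 10)     # power leaking into the wrong polarization

print(f"{transmitted:.1%} of the power gets through")  # ~79.4%
print(f"{leakage:.1%} crosses polarizations")          # 1.0%
```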

To put the potential of this technology into perspective, the researchers conducted several real-world tests. In one demonstration, they used their device to transmit two separate high-definition video streams simultaneously over a terahertz link. This showcases the technology’s ability to handle multiple data streams at once, effectively doubling the amount of information that can be sent over a single channel.

But the team didn’t stop there. In more advanced tests, they pushed the limits of data transmission speed. Using a technique called on-off keying, they achieved error-free data rates of up to 64 gigabits per second. When they employed a more complex modulation scheme (16-QAM), they reached staggering data rates of up to 190 gigabits per second. That’s roughly equivalent to downloading 24 gigabytes – or about six high-definition movies – in a single second. It’s a huge leap from current wireless technologies.
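
Those headline figures are easy to sanity-check. A few lines of arithmetic, where the 4 GB movie size is an assumed round number rather than anything from the study:

```python
# From bits per second to "HD movies per second".
rate_gbps = 190                        # gigabits per second (16-QAM result)
gigabytes_per_second = rate_gbps / 8   # 8 bits per byte -> 23.75 GB/s
movie_gb = 4                           # assumed size of one HD movie

print(round(gigabytes_per_second))     # 24 (gigabytes each second)
print(gigabytes_per_second / movie_gb) # ~5.9 movies every second
```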

Still, the researchers say it’s not just about speed. This device is also incredibly versatile.

“This innovation not only enhances the efficiency of terahertz communication systems but also paves the way for more robust and reliable high-speed wireless networks,” adds Dr. Weijie Gao, a postdoctoral researcher at Osaka University and co-author of the study.

Source: https://studyfinds.org/6g-record-breaking-data-speed/?nab=0

How workplace rudeness is killing productivity and endangering lives

Boss yelling at employees (Photo by Yan Krukov from Pexels)

“Please” and “thank you” — these simple courtesies might be worth more than their weight in gold, according to a stunning new study. Researchers have uncovered a startling link between workplace rudeness and team performance that’s forcing organizations to rethink their approach to interpersonal dynamics.

In an era where workplace efficiency is paramount, who would have thought that a careless comment or a dismissive email could be the wrench in the gears of productivity? However, according to the research published in the Journal of Applied Psychology, incivility is wreaking havoc in our offices, operating rooms, and boardrooms.

Far from being a mere annoyance, the study suggests that rudeness is a silent saboteur, capable of derailing team performance and potentially endangering lives. The study, conducted by an international team of researchers from the University of Florida, Indiana University, and institutions across the U.S. and Israel, paints a sobering picture of how even mild instances of incivility can have far-reaching consequences.

“Many workplaces treat rudeness as a minor interpersonal issue,” says Dr. Amir Erez, a professor at the University of Florida Warrington College of Business, in a statement. “Our research shows that it’s a major threat to productivity and even safety. Organizations should treat it as such.”

Through a series of five innovative studies, the researchers peeled back the layers of workplace interactions to reveal the insidious effects of rudeness. From laboratory experiments involving bridge-building with newspaper and tape to high-stakes medical simulations, the findings consistently pointed to a disturbing truth: rudeness dramatically impairs team functioning.

Perhaps most alarming is the disproportionate impact of rudeness relative to its perceived intensity. In one study, seemingly mild rude comments from an external source accounted for a staggering 44% of the variance in medical teams’ performance quality. This suggests that even small slights can have outsized effects on team outcomes.

Far from being a mere annoyance, the study suggests that rudeness is a silent saboteur, capable of derailing team performance and potentially endangering lives. (© fizkes – stock.adobe.com)

How exactly does rudeness wreak such havoc?

The researchers found that rudeness acts as a social threat, triggering defensive responses in team members. This threat response shifts individuals from a collaborative mindset to a self-protective one, reducing what the researchers call “social value orientation” (SVO) – essentially, the degree to which people prioritize collective interests over their own.

This shift towards self-interest manifests in reduced information sharing and workload distribution among team members, two critical components of effective teamwork. In medical settings, this translates to poorer execution of potentially life-saving procedures.

“Our research helps us understand the effect rudeness can have on team dynamics, especially in urgent, intense situations like in health care,” says Jake Gale, Ph.D., an assistant professor of management at the Indiana University Kelley School of Business Indianapolis. “By understanding how rudeness triggers self-focused behaviors and impairs communication, we’re not just advancing academic knowledge; we’re uncovering insights that could save lives. It’s a powerful reminder that the way we interact with each other has real-world consequences, especially in critical situations.”

The implications of these findings extend far beyond the medical field. Whether in a high-powered corporate boardroom or a local retail store, rudeness from any source – be it supervisors, colleagues, or customers – consistently degrades team cooperation and coordination, leading to poorer outcomes across the board.

Given the pervasiveness of rudeness in modern workplaces, with over 50% of employees reporting weekly encounters, addressing this issue becomes not just a matter of politeness but a critical factor in organizational effectiveness and safety.

The researchers suggest that organizations take proactive steps to create work environments that foster respect and civility. This could include implementing training programs to build resilience against rudeness or promoting mindfulness practices that help employees maintain a collective focus even in the face of interpersonal challenges.

Source: https://studyfinds.org/workplace-rudeness-productivity/?nab=0

Inside the attention spans of young kids: Why curiosity is mistaken for lack of focus

(Credit: August de Richelieu from Pexels)

Picture this: You’re playing a game of “Guess Who?” with a five-year-old. You’ve narrowed it down to the character with the red hat, but instead of triumphantly declaring their guess, the child keeps flipping over cards, examining every detail from mustaches to earrings. Frustrating? Maybe. But according to new research, this seemingly inefficient behavior might be a key feature of how young minds learn about the world.

A study published in Psychological Science by researchers at The Ohio State University has shed new light on a longstanding puzzle in child development: Why do young children seem to pay attention to everything, even when it doesn’t help them complete a task? The answer, it turns out, is more complex and fascinating than anyone expected.

For years, scientists have observed that children tend to distribute their attention broadly, taking in information that adults would consider irrelevant or distracting. This “distributed attention” has often been chalked up to immature brain development or a simple lack of focus. But Ohio State psychology professor Vladimir Sloutsky and his team suspected there might be more to the story.

“Children can’t seem to stop themselves from gathering more information than they need to complete a task, even when they know exactly what they need,” Sloutsky explains in a media release.

This over-exploration persists even when children are motivated by rewards to complete tasks quickly.

To investigate this question, Sloutsky and lead author Qianqian Wan designed clever experiments involving four- to six-year-old children and adults. Participants were shown images of cartoon creatures and asked to sort them into two made-up categories called “Hibi” and “Gora.” Each creature had seven features like horns, wings, and tails. Importantly, only one feature perfectly predicted which category the creature belonged to, while the other features were only somewhat helpful for categorizing.

The key twist was that all the features were initially hidden behind “bubbles” on a computer screen. Participants could reveal features one at a time by tapping or clicking on the bubbles. This setup allowed the researchers to see exactly which features people chose to look at before making their category decision.

“Children can’t seem to stop themselves from gathering more information than they need to complete a task, even when they know exactly what they need,” researchers explain. (Credit: Kamaji Ogino from Pexels)

If children’s broad attention was simply due to an inability to filter out distractions, the researchers reasoned that hiding irrelevant features should help them focus only on the most important one. However, that’s not what happened. Even when they quickly figured out which feature was the perfect predictor of category, children – especially younger ones – continued to uncover and examine multiple features on each trial. Adults, on the other hand, quickly zeroed in on the key feature and mostly ignored the rest.

Interestingly, by age six, children started to show a mix of strategies. About half the six-year-olds behaved more like adults, focusing mostly on the key feature. The other half continued to explore broadly like younger children. This suggests the study may have captured a key transition point in how children learn to focus their attention.

To rule out the possibility that children just enjoyed the action of tapping to reveal features, the researchers ran a second experiment. This time, they gave children the option to either reveal all features at once with one tap or uncover them one by one. Children of all ages strongly preferred the single-tap option, indicating their goal was indeed to gather information rather than simply tapping for fun.

So, why do children persist in this seemingly inefficient exploration? Sloutsky proposes two intriguing possibilities. The first is simple curiosity – an innate drive to learn about the world that overrides task efficiency. The second, which Sloutsky favors, relates to the development of working memory.

“The children learned that one body part will tell them what the creature is, but they may be concerned that they don’t remember correctly. Their working memory is still under development,” Sloutsky suggests. “They want to resolve this uncertainty by continuing to sample, by looking at other body parts to see if they line up with what they think.”

Source: https://studyfinds.org/over-exploring-minds-attention-kids/?nab=0

Just 10 seconds of light exercise boosts brain activity in kids

(Photo by Yan Krukov from Pexels)

What if the secret to unlocking your child’s cognitive potential was as simple as a 10-second stretch? It may sound too good to be true, but a revolutionary study from Japan suggests that brief, light exercises could be the key to boosting brain activity in children, challenging our understanding of the mind-body connection.

The findings, published in Scientific Reports, suggest that these quick, low-intensity activities could be a valuable tool for enhancing cognitive function and potentially improving learning in school settings.

The research, led by Takashi Naito and colleagues, focuses on a part of the brain called the prefrontal cortex (PFC). This area, located at the front of the brain, is crucial for many important mental tasks. It helps us plan, make decisions, control our impulses, and pay attention – all skills that are vital for success in school and life.

As children grow, their prefrontal cortex continues to develop. This means that childhood is a critical time for building strong mental abilities. However, many children today aren’t getting enough physical activity. In fact, a whopping 81% of children worldwide don’t get enough exercise. This lack of movement could potentially hinder their brain development and cognitive skills.

While previous studies have shown that moderate to intense exercise can improve brain function, less was known about the effects of light, easy activities – the kind that could be done quickly in a classroom or during short breaks. This study aimed to fill that gap by examining how simple exercises affect blood flow in the prefrontal cortex of children.

“Our goal is to develop a light-intensity exercise program that is accessible to everyone, aiming to enhance brain function and reduce children’s sedentary behavior,” Naito explains in a statement. “We hope to promote and implement this program in schools through collaborative efforts.”

The researchers recruited 41 children between the ages of 10 and 15 to participate in the study. These kids performed seven different types of light exercises, each lasting either 10 or 20 seconds. The exercises included things like stretching, hand movements, and balancing on one leg – all activities that could be easily done in a classroom without special equipment.

To measure brain activity, the researchers used a technique called functional near-infrared spectroscopy (fNIRS). This non-invasive method uses light to detect changes in blood flow in the brain, which can indicate increased brain activity. The children wore a special headband with sensors while doing the exercises, allowing the researchers to see how their brain activity changed during each movement.
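
For the curious, the principle behind fNIRS can be illustrated in a few lines: attenuation changes measured at two wavelengths are converted into concentration changes of oxygenated and deoxygenated hemoglobin using the modified Beer-Lambert law. Every number in this sketch is a placeholder chosen for illustration, not a calibrated value from the study.

```python
# Toy illustration of fNIRS processing via the modified Beer-Lambert law:
# optical density changes at two wavelengths -> hemoglobin concentration
# changes. All coefficients below are placeholders, not calibrated values.
import numpy as np

# Measured change in optical density at two wavelengths (e.g. 760 and 850 nm)
delta_od = np.array([0.00065, 0.00353])

# Extinction coefficients [HbO2, HbR] per wavelength (illustrative numbers)
eps = np.array([[1.5, 3.8],    # 760 nm: deoxygenated blood absorbs more
                [2.5, 1.8]])   # 850 nm: oxygenated blood absorbs more

path = 3.0  # source-detector separation in cm (typical headband geometry)
dpf = 6.0   # differential path-length factor (assumed)

# Solve eps @ delta_c = delta_od / (path * dpf) for the two concentrations
delta_c = np.linalg.solve(eps, delta_od / (path * dpf))
print(f"dHbO2 = {delta_c[0]:+.5f}, dHbR = {delta_c[1]:+.5f}")
# A rise in HbO2 with a smaller fall in HbR is the classic signature of
# increased regional blood flow, i.e. "activation".
```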

Most of the exercises led to significant increases in blood flow to the prefrontal cortex, suggesting increased brain activity in this important region. Interestingly, not all exercises had the same effect. Simple, static stretches didn’t show much change, but exercises that required more thought or physical effort – like twisting movements, hand exercises, and balancing – showed the biggest increases in brain activity.

These findings suggest that even short bursts of light activity can “wake up” the prefrontal cortex in children. This could potentially lead to improved focus, better decision-making, and enhanced learning abilities. The best part is that these exercises are quick and easy to do, making them perfect for incorporating into a school day or study routine.

Source: https://studyfinds.org/10-seconds-exercise-brain-activity/?nab=0

Tourist dies after ice collapse in Icelandic glacier

An aerial view of the Breidamerkurjökull glacier in 2021

A foreign tourist has died in south Iceland after ice collapsed during a visit their group was making to a glacier, local media report.

A second tourist was injured and taken to hospital, but their life is not in danger. Two others are still missing.

Rescuers have suspended the search for the missing in the Breidamerkurjökull glacier until morning because of difficult conditions.

Ice collapsed as the group of 25 people were visiting an ice cave along with a guide on Sunday.

Emergency workers dug by hand to try to rescue those missing.

First responders received a call just before 15:00 on Sunday about the collapse.

“The conditions are very difficult on the ground,” said local police chief Sveinn Kristján Rúnarsson. “It’s in the glacier. It’s hard to get equipment there… It’s bad. Everything is being done by hand.”

Local news outlets reported that 200 people were working on the rescue operation at one point on Sunday.

Speaking on Icelandic TV, Chief Superintendent Rúnarsson said police had been unable to contact the two missing people.

While the conditions were “difficult”, the weather was “fair”, he said.

Confirming that all those involved were foreign tourists, he said there was nothing to suggest that the trip to the cave should not have taken place.

“Ice cave tours happen almost the whole year,” he said.

“These are experienced and powerful mountain guides who run these trips. It’s always possible to be unlucky. I trust these people to assess the situation – when it’s safe or not safe to go, and good work has been done there over time. This is a living land, so anything can happen.”

The police chief was quoted as saying that people had been standing in a ravine between cave mouths when an ice wall collapsed.

Source: https://www.bbc.com/news/articles/cp8ny80e6lyo

Mental menu: Your food choices may be causing anxiety and depression

(Credit: Prostock-studio/Shutterstock)

The proverbial “sugar high” that follows the ingestion of a sweet treat is a familiar example of the potentially positive effects of food on mood.

On the flip side, feeling “hangry” – the phenomenon where hunger manifests in the form of anger or irritability – illustrates how what we eat or don’t eat can also provoke negative emotions.

The latest research suggests that blood sugar fluctuations are partly responsible for the connection between what we eat and how we feel. Through its effects on our hormones and our nervous system, blood sugar can be fuel for anxiety and depression.

Mental health is complex. There are countless social, psychological, and biological factors that ultimately determine any one person’s experience. However, numerous randomized controlled trials have demonstrated that diet is one biological factor that can significantly influence risk for symptoms of depression and anxiety, especially in women.

As a family medicine resident with a Ph.D. in nutrition, I have witnessed the fact that antidepressant medications work for some patients but not others. Thus, in my view, mental health treatment strategies should target every risk factor, including nutrition.

The role of the glycemic index
Many of the randomized controlled trials that have proven the link between diet and mental health have tested the Mediterranean diet or a slightly modified version of it. The Mediterranean diet is typically characterized by lots of vegetables – especially dark green, leafy vegetables – fruit, olive oil, whole grains, legumes and nuts, with small amounts of fish, meat and dairy products. One of the many attributes of the Mediterranean diet that may be responsible for its effect on mood is its low glycemic index.

The glycemic index is a system that ranks foods and diets according to their potential to raise blood sugar. Thus, in keeping with the observation that blood sugar fluctuations affect mood, high glycemic index diets that produce drastic spikes in blood sugar have been associated with increased risk for depression and to some extent anxiety.

High glycemic index carbohydrates include white rice, white bread, crackers and baked goods. Therefore, diets high in these foods may increase risk for depression and anxiety. Meanwhile, low glycemic index carbs, such as parboiled rice and al dente pasta, that are more slowly absorbed and produce a smaller blood sugar spike are associated with decreased risk.
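
The index itself boils down to a ratio of areas: the rise in blood sugar over two hours after a test food, divided by the rise after pure glucose, times 100. A toy calculation makes the idea concrete; the readings below are invented for illustration.

```python
# Glycemic index as a ratio of incremental areas under 2-hour glucose curves.
# All readings are invented for illustration.
import numpy as np

times = np.array([0, 15, 30, 45, 60, 90, 120])  # minutes after eating

glucose_ref = np.array([90, 130, 160, 150, 130, 110, 95])  # pure glucose (mg/dL)
white_bread = np.array([90, 120, 145, 140, 125, 105, 95])
lentils     = np.array([90, 100, 112, 115, 110, 104, 98])

def incremental_auc(times, readings):
    """Trapezoidal area above the fasting baseline (dips below it are clipped)."""
    above = np.clip(readings - readings[0], 0, None)
    widths = np.diff(times)
    return float(np.sum(widths * (above[:-1] + above[1:]) / 2))

ref_area = incremental_auc(times, glucose_ref)
for name, curve in [("white bread", white_bread), ("lentils", lentils)]:
    gi = 100 * incremental_auc(times, curve) / ref_area
    print(f"{name}: GI ~ {gi:.0f}")  # flatter curve -> lower GI (~81 vs ~45 here)
```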

Diets high in legumes and dark green vegetables produce lower spikes in blood sugar. (Credit: Jacqueline Howell from Pexels)

How diet affects mood

Many scientific mechanisms have been proposed to explain the connection between diet and mental health. One plausible explanation that links blood sugar fluctuations with mood is its effect on our hormones.

Every time we eat sugar or carbohydrates such as bread, rice, pasta, potatoes, and crackers, the resulting rise in blood sugar triggers a cascade of hormones and signaling molecules. One example, dopamine – our brain’s pleasure signal – is the reason we can experience a “sugar high” following the consumption of dessert or baked goods. Dopamine is the body’s way of rewarding us for procuring the calories, or energy, necessary for survival.

Insulin is another hormone triggered by carbohydrates and sugar. Insulin’s job is to lower blood sugar levels by escorting the ingested sugar into our cells and tissues so that it can be used for energy. However, when we eat too much sugar, too many carbs, or high glycemic index carbs, the rapid increase in blood sugar prompts a drastic rise in insulin. This can result in blood sugar levels that dip below where they started.

This dip in blood sugar sparks the release of adrenaline and its cousin noradrenaline. Both of these hormones send glucose into the bloodstream to restore blood sugar to the appropriate level.

However, adrenaline influences more than just blood sugar levels. It also affects how we feel, and its release can manifest as anxiety, fear, or aggression. Hence, diet affects mood through its effect on blood sugar levels, which trigger the hormones that dictate how we feel.

Interestingly, the rise in adrenaline that follows sugar and carbohydrate consumption doesn’t happen until four to five hours after eating. Thus, when eating sugar and carbs, dopamine makes us feel good in the short term; but in the long term, adrenaline can make us feel bad.

However, not everyone is equally affected. Identical meals can produce widely varying blood sugar responses in different people, depending on one’s sex, as well as genetics, sedentariness, and the gut microbiome.

And it’s important to keep in mind that, as previously mentioned, mental health is complicated. So in certain circumstances, no amount of dietary optimization will overcome the social and psychological factors that may underpin one’s experience.

Nevertheless, a poor diet could certainly make a person’s experience worse and is thus relevant for anyone, especially women, hoping to optimize mental health. Research has shown that women, in particular, are more sensitive to the effects of the glycemic index and diet overall.

Source: https://studyfinds.org/food-choices-anxiety-depression/

In just 10 minutes, new app gives you a mental health makeover

(Credit: Microgen/Shutterstock)

Just 10 minutes of daily mindfulness practice, delivered through a free smartphone app, could be the key to unlocking a healthier, happier you. It sounds almost too good to be true, but that’s exactly what researchers from the Universities of Bath and Southampton have discovered.

In one of the largest and most diverse studies of its kind, 1,247 adults from 91 countries embarked on a 30-day mindfulness journey using the free Medito app. The results were nothing short of remarkable. Participants who completed the mindfulness program reported a 19.2% greater reduction in depression symptoms compared to the control group. They also experienced a 6.9% greater improvement in well-being and a 12.6% larger decrease in anxiety.

The benefits didn’t stop there. The study, published in the British Journal of Health Psychology, uncovered an intriguing link between mindfulness practice and healthier lifestyle choices. Participants who used the mindfulness app reported more positive attitudes towards health maintenance (7.1% higher than the control group) and stronger intentions to look after their health (6.5% higher). It’s as if the simple act of tuning into the present moment created a ripple effect, influencing not just mental health but also motivating healthier behaviors.

What makes this study particularly exciting is its accessibility. Unlike traditional mindfulness programs that might require significant time commitments or expensive retreats, this intervention was delivered entirely through a free mobile app. Participants, most of whom had no prior mindfulness experience, were asked to complete just 10 minutes of practice daily. The sessions included relaxation exercises, intention-setting, body scans, focused breathing, and self-reflection.

“This study highlights that even short, daily practices of mindfulness can offer benefits, making it a simple yet powerful tool for enhancing mental health,” says Masha Remskar, the lead researcher from the University of Bath, in a media release.

Participants who completed the mindfulness program reported a 19.2% greater reduction in depression symptoms. (Credit: Ground Picture/Shutterstock)

Perhaps even more impressive than the immediate effects were the long-term benefits. In follow-up surveys conducted 30 days after the intervention ended, participants in the mindfulness group continued to report improved well-being, reduced depression symptoms, and better sleep quality compared to the control group.

The study also shed light on why mindfulness might be so effective.

“The research underscores how digital technology – in this case, a freely available app – can help people integrate behavioral and psychological techniques into their lives, in a way that suits them,” notes Dr. Ben Ainsworth from the University of Southampton.

Source: https://studyfinds.org/10-minute-app-mental-health/?nab=0

Wow! Scientists may have finally decoded mysterious signal from space

The “Wow!” signal was originally captured in 1977 by the Ohio State University’s Big Ear radio telescope (Credit: Big Ear Radio Observatory and North American AstroPhysical Observatory)

For nearly half a century, astronomers have been puzzled by a brief and unexplainable radio signal detected in 1977 that seemed to hint at the existence of alien life. Known as the “Wow! Signal,” this tantalizing cosmic transmission has remained one of the most intriguing mysteries in the search for signs of intelligent life in outer space. Now, scientists may finally know where it came from!

A team of researchers may have uncovered a potential astrophysical explanation for the Wow! Signal that could reshape our understanding of this enduring enigma. Their findings, currently available as a preprint on arXiv, suggest the signal may have been the result of a rare and dramatic event involving a burst of energy from a celestial object interacting with clouds of cold hydrogen gas in the Milky Way galaxy.

“Our latest observations, made between February and May 2020, have revealed similar narrowband signals near the hydrogen line, though less intense than the original Wow! Signal,” explains Abel Méndez, lead author of the study from the Planetary Habitability Laboratory at the University of Puerto Rico at Arecibo, in a media release.

“Our study suggests that the Wow! Signal was likely the first recorded instance of maser-like emission of the hydrogen line.”

Cold hydrogen clouds in the galaxy emit faint narrowband radio signals similar to those shown here, detected by the Arecibo Observatory in 2020. A sudden brightening of one of these clouds, triggered by a strong emission from another stellar source, may explain the Wow! Signal. (Credit: University of Puerto Rico at Arecibo)

The Wow! Signal was detected by the Big Ear radio telescope at The Ohio State University on August 15, 1977. It exhibited several intriguing characteristics, including a narrow bandwidth, high signal strength, and a frequency tantalizingly close to the natural radio emission of neutral hydrogen — an element abundant throughout the universe. These properties led many to speculate the signal could be of artificial origin, perhaps a deliberate message from an extraterrestrial intelligence.

This passing burst of activity in space led Dr. Jerry Ehman to famously write “Wow!” next to the print-out of the signal, which was like nothing else astronomers were seeing in space at the time. However, the signal was never detected again, despite numerous attempts to locate its source over the ensuing decades.

This has posed a major challenge for the SETI (Search for Extraterrestrial Intelligence) community, as repetition is considered essential for verifying the authenticity of a potential extraterrestrial signal — also known as a technosignature.

This new study, however, is pushing the conversation away from an alien radio transmission and closer to a once-in-a-lifetime natural occurrence in deep space. The researchers’ key insight stems from observations made using the now-decommissioned Arecibo Observatory in Puerto Rico, one of the world’s most powerful radio telescopes until its collapse in 2020.

For now, the Wow! Signal remains shrouded in mystery, but there is now at least a plausible explanation for its existence — one that does not involve aliens.

Source: https://studyfinds.org/wow-signal-decoded/?nab=0

Do ambitious people really make the best leaders? New study raises doubts

(Credit: fizkes/Shutterstock)

Leadership is a critical component in every aspect of human activity, from business and education to government and healthcare. We often assume that those who aspire to leadership positions are the most qualified for the job. However, a new study challenges this assumption, revealing a striking disconnect between ambition and actual leadership effectiveness.

The study, conducted by researchers Shilaan Alzahawi, Emily S. Reit, and Francis J. Flynn from Stanford University’s Graduate School of Business, explores the relationship between ambition and leadership evaluations. Their findings suggest that while ambitious individuals are more likely to pursue and obtain leadership roles, they may not necessarily be more effective leaders than their less ambitious counterparts.

At the heart of this research is the concept of ambition, defined as a persistent striving for success, attainment, and accomplishment. Ambitious individuals are typically drawn to leadership positions, motivated by the promise of power, status, and financial rewards. However, the study, published in PNAS Nexus, raises an important question: Does this ambition translate into better leadership skills?

To investigate this question, the researchers conducted a large-scale study involving 472 executives enrolled in a leadership development program. These executives were evaluated on 10 leadership competencies by their peers, subordinates, managers, and themselves. In total, the study analyzed 3,830 ratings, providing a comprehensive view of each leader’s effectiveness from multiple perspectives.

Perhaps the most thought-provoking finding of the study is the significant discrepancy between how ambitious leaders view themselves and how others perceive them. Highly ambitious individuals consistently rated themselves as more effective leaders across various competencies. However, this positive self-assessment was not corroborated by the evaluations from their peers, subordinates, or managers.

For instance, ambitious leaders believed they were better at motivating others, managing collaborative work, and coaching and developing people. They also thought they had a stronger growth orientation and were more accountable for results. Yet, their colleagues and subordinates did not observe these superior abilities in practice.

While ambitious individuals are more likely to pursue and obtain leadership roles, they may not necessarily be more effective leaders than their less ambitious counterparts. (Credit: fauxels from Pexels)

This disconnect between self-perception and reality has significant implications for how we select and develop leaders. Many organizations rely on self-selection processes, where individuals actively choose to be considered for leadership roles. The assumption is that those who step forward are the most capable candidates. However, this study suggests that such an approach may be flawed, potentially promoting individuals based on their ambition rather than their actual leadership skills.

The researchers propose that ambitious individuals may be drawn to leadership roles for reasons unrelated to their aptitude. The allure of higher salaries, greater authority, and increased social status may drive them to pursue these positions, regardless of their actual leadership capabilities. To justify this pursuit, ambitious individuals may unconsciously inflate their self-perceptions of leadership effectiveness.

This phenomenon aligns with psychological concepts such as motivated reasoning and cognitive dissonance. Essentially, people tend to interpret information in a way that confirms their existing beliefs or desires. In this case, ambitious individuals may convince themselves of their superior leadership skills to justify their pursuit of higher positions.

Organizations and individuals may need to rethink their approach to leadership selection and development. Rather than relying solely on self-selection and ambitious individuals dominating candidate pools, companies might benefit from actively identifying and encouraging individuals who possess leadership potential but may lack the confidence or ambition to pursue such roles.

Moreover, the research highlights the importance of gathering diverse perspectives when evaluating leadership effectiveness. Relying solely on self-assessments or the opinions of a single group (e.g., only peers or only subordinates) may provide an incomplete or biased picture of a leader’s true capabilities.

This study urges us to look beyond ambition when selecting and developing leaders. By focusing on actual leadership skills rather than mere drive for power, we can cultivate leaders who are truly capable of guiding us through the challenges of the 21st century.

Source: https://studyfinds.org/the-ambitious-leaders-dilemma/?nab=0

Sea snail’s deadly venom may hold the key to a diabetes cure

A freshly-collected batch of venomous cone snails. (Credit: Safavi Lab)

In the vast, mysterious depths of the ocean, where some of the planet’s deadliest creatures reside, scientists have discovered an unexpected ally in the fight against diabetes and hormone disorders. A new study finds that the geography cone, a venomous marine snail known for its lethal sting, harbors a powerful secret: a toxin that could revolutionize the way we treat certain diseases.

The geography cone (Conus geographus) isn’t your typical predator. Instead of using brute force to capture its prey, it employs a more insidious method — a cocktail of venomous toxins that disrupt the bodily functions of its victims, leaving them helpless and easy to consume. However, within this deadly arsenal lies a remarkable substance, one that mimics a human hormone and holds the potential to create groundbreaking medications.

Publishing their work in the journal Nature Communications, scientists from the University of Utah and their international collaborators have identified a component in the snail’s venom that acts like somatostatin, a human hormone responsible for regulating blood sugar and various other bodily processes. What’s truly astonishing is that this snail-produced toxin, known as consomatin, doesn’t just mimic the hormone — it surpasses it in stability and specificity, making it an extraordinary candidate for drug development.

How can a deadly venom become a life-saving drug?
Somatostatin in humans serves as a kind of master regulator, ensuring that levels of blood sugar, hormones, and other critical molecules don’t spiral out of control. However, consomatin, the snail’s version of this hormone, has some unique advantages. Unlike human somatostatin, which interacts with multiple proteins in the body, consomatin targets just one specific protein with pinpoint accuracy. This precise targeting means that consomatin could potentially be used to regulate blood sugar and hormone levels with fewer side-effects than existing medications.

Consomatin is also more stable than the human hormone, lasting longer in the body due to the presence of an unusual amino acid that makes it resistant to breakdown. For pharmaceutical researchers, this feature is a goldmine — it could lead to the development of drugs that offer longer-lasting benefits to patients, reducing the frequency of doses and improving overall treatment outcomes.

Ho Yan Yeung, PhD, first author on the study (left) and Thomas Koch, PhD, also an author on the study (right) examine a freshly-collected batch of cone snails. Credit: Safavi Lab

While it may seem counterintuitive to look to venom for inspiration in drug development, this approach is proving to be incredibly fruitful. As Dr. Helena Safavi, an associate professor of biochemistry at the University of Utah and the senior author of the study, explains, venomous animals like the geography cone have had millions of years to fine-tune their toxins to target specific molecules in their prey. This evolutionary precision is exactly what makes these toxins so valuable in the search for new medicines.

“Venomous animals have, through evolution, fine-tuned venom components to hit a particular target in the prey and disrupt it,” says Safavi in a media release. “If you take one individual component out of the venom mixture and look at how it disrupts normal physiology, that pathway is often really relevant in disease.”

In other words, nature’s own designs can offer shortcuts to discovering new therapeutic pathways.

In its natural environment, consomatin works alongside another toxin in the cone snail’s venom, which mimics insulin, to drastically lower the blood sugar of the snail’s prey. This one-two punch leaves the fish in a near-comatose state, unable to escape the snail’s deadly grasp. By studying consomatin and its insulin-like partner, researchers believe they can uncover new ways to control blood sugar levels in humans, potentially leading to better treatments for diabetes.

“We think the cone snail developed this highly selective toxin to work together with the insulin-like toxin to bring down blood glucose to a really low level,” explains Ho Yan Yeung, a postdoctoral researcher in biochemistry at the University of Utah and the study’s first author.

What’s even more exciting is the possibility that the cone snail’s venom contains additional yet undiscovered toxins that also regulate blood sugar.

“It means that there might not only be insulin and somatostatin-like toxins in the venom,” Yeung adds. “There could potentially be other toxins that have glucose-regulating properties too.”

Source: https://studyfinds.org/sea-snail-venom-diabetes/?nab=0

Franchise Faces: The Most Iconic Fast Food Mascots of All Time

Step right up, folks, and feast your eyes on the colorful cast of characters that have been tempting our taste buds and raiding our wallets for decades! We’re talking about those lovable (and sometimes slightly unnerving) fast food mascots that are as much a part of our culture as the greasy, delicious food they’re hawking. From the golden arches of McDonald’s to the finger-lickin’ goodness of KFC, these animated pitchmen have wormed their way into our hearts faster than you can say “supersize me.” They’ve made us laugh, occasionally made us cringe, and more often than not, made us inexplicably crave a burger at 2 AM. So, grab your favorite value meal and get ready for a nostalgic trip down fast food memory lane as we rank the best fast food mascots. Trust us, this list is more stacked than a triple-decker burger!

If fast food mascots feel like old friends, you aren’t alone. That’s why we’ve put together a list of the best fast food mascots, drawing on 10 expert websites. Did your favorite make our list? As always, we’d like to see your own recommendations in the comments below!

The Consensus Best Fast Food Mascots, Ranked
1. Colonel Sanders – KFC
Who doesn’t love a heaping bucket of fried chicken? “One of the most popular and recognizable fast food mascots is KFC’s Colonel Sanders. Not only is this a mascot and symbol for the brand, it directly represents the founder of Kentucky Fried Chicken — Colonel Harland David Sanders,” notes Restaurant Clicks.

What makes him stand out? “As a character, Colonel Sanders is a lovable, sweet old man with plenty of personal ties to KFC. He’s often portrayed by comedians, which gives the brand plenty of room to create funny and innovative commercials,” adds Ranker.

“Dressed in a white suit and black bow tie, accessorized with glasses and a cane, the Colonel’s image has become synonymous with the brand’s finger-licking good fried chicken. His face, etched in the memories of countless fried chicken fans, carries an aura of professionalism, quality, and trustworthiness,” suggests Sixstoreys.

2. Ronald McDonald – McDonald’s
One of the most recognizable fast food mascots, Ronald McDonald even has his own balloon in the Macy’s Thanksgiving Day parade. The mascot, “was first introduced to audiences in 1963, when actor Willard Scott (who played the immensely popular Bozo the Clown at the time) took on the persona of the red-haired clown for three TV ads promoting McDonald’s. He was referred to as ‘Ronald McDonald – the hamburger-happy clown’ and sported a drink cup on his nose as well as a food tray as a hat,” according to Lovefood.com.

Ronald is the perfect combination of fun and odd for a mascot. “He has Wendy’s red hair, The King’s freaky appearance, and the Colonel’s kindly character. Put it all together and you have a master of the mascots,” adds WatchMojo.

Thrillist writes: “Ronald is without a doubt the most polemic fast-food mascot. He’s friendly and instantly recognizable, but he’s also a clown. Most normal people are terrified by clowns regardless of nostalgia, so whether he reminds you of Saturday mornings spent watching cartoons and eating Happy Meals or the scariest moments of Stephen King’s ‘It’ is all on you.”

3. The King – Burger King
Who remembers going into Burger King as a kid and getting one of those paper crowns? “The first iteration of the Burger King was an unsuspecting fellow with a lopsided crown sitting atop his burger throne, cradling a soda. Today, he’s a life-size dude with a massive plastic head. He’s always smiling, giving him an almost menacing air — he might be outside your bedroom window right now,” points out The Daily Meal.

You know who we are talking about. “That unsettling-yet-unforgettable maniacal grin has been producing nightmares across the U.S. since 2004, when the current, plastic-costumed incarnation was introduced to the world,” says Mashed.

Restaurant Clicks writes: “Sometimes creepy and odd is what restaurants need to make people pay attention. It’s also fitting that he’s wearing a paper crown, similar to the ones kids can get in-store.”

I had to ask my 9-year-old if she thought The King was creepy. Her response? “A little, but I like him.”

4. Wendy – Wendy’s

Consider Wendy’s founder Dave Thomas the ultimate girl dad. His daughter, Melinda, was the inspiration behind the smiling, freckled, red-headed girl that the fast food chain still embraces.

You don’t think of Wendy’s without conjuring up an image of this red-haired sweetheart. “She’s been the primary logo of Wendy’s since the beginning and her image is irrevocably tied to the restaurant chain. Her personality is a central part of the fast food chain – that of a sweet young girl with plenty of pep and enthusiasm. Plus, her association with her father gives the brand a family feel, even though it has grown into a huge corporation,” notes Ranker.

Sixstoreys adds, “the character has remained a consistent symbol of the all-American, wholesome cuisine that Wendy’s seeks to provide. Her warm and approachable demeanor instantly evokes a sense of familiarity and family, resonating with customers who appreciate the brand’s commitment to quality, freshness, and friendliness.”

“She isn’t animatronic, she doesn’t have any particular peculiarities, but she is one of the most famous faces in all of fast food,” points out WatchMojo.

5. Jack Box – Jack in the Box
Rounding out our top five is Jack Box, from (you guessed it) Jack in the Box. “An adaptation of the fast food chain’s original clown head mascot, the geometrical character has become a classic American mascot. The franchise has employed Jack in its advertising since 1994 – part of a larger rebranding effort after a 1993 food contamination scandal,” according to The Drum.


Source: https://studyfinds.org/best-fast-food-mascots/?nab=0

Gen Z blames social media for ruining their mental health — but no one’s signing off

(Photo by DimaBerlin on Shutterstock)

Three in four Gen Z Americans blame social media for having a negative impact on their mental health.

The survey, commissioned by LG Electronics and conducted by Talker Research, offers compelling insights into the digital habits and emotional responses of 2,000 Gen Z social media users. In a startling revelation, 20% of Gen Zers cite Instagram and TikTok as detrimental to their well-being, followed by Facebook at 13%.

Despite these concerns, social media remains an integral part of Gen Z’s daily life. The average user spends a whopping five-and-a-half hours per day on social media apps, with 45% believing they outpace their friends in usage time. Boredom (66%), seeking laughter (59%), staying informed (49%), and keeping tabs on friends (44%) are the primary motivators for their online engagement.

However, this digital immersion comes at a cost. Nearly half of respondents (49%) report experiencing negative emotions from social media use, with stress and anxiety affecting 30%. Even more alarming, those who experience these negative feelings report that it takes only 38 minutes of scrolling before their mood begins to sour.

“We spend a significant portion of our lives online and often these experiences may leave us feeling drained and not mentally stimulated,” says Louis Giagrande, head of U.S. marketing at LG Electronics, in a statement. “We encourage everyone to be more conscious about the social media content they choose to engage with, bringing stronger balance, inspiration, and happiness to their lives. If we focus on optimism, we will be better equipped to deal with life’s challenges and build a happier life.”

The study also uncovered a desire for change among Gen Z users. In fact, 62% wish they could “reset” their social media feeds and start anew. Over half (53%) express frustration with content misalignment, feeling that their feeds don’t reflect their interests. Moreover, 54% believe they have limited or no control over the content populating their feeds, with only 16% claiming total control.

Yet, it’s not all doom and gloom. Four in five respondents (80%) associate social media with positive impacts on their mood. Comedy (65%), animal content (48%), beauty-related posts (40%), and prank videos (34%) are among the top mood boosters. Two-thirds of users say that social media has turned a bad day into a good one, and 44% believe it positively impacts their outlook on life.

Source: https://studyfinds.org/gen-z-blames-social-media-for-ruining-their-mental-health-but-no-ones-signing-off/?nab=0

The superstorms from space that could end modern life

A sudden solar superstorm is thought to be behind a devastating bombardment of high-energy particles around 14,000 years ago (Credit: Nasa)

The Sun is going through a period of high activity, but it is nothing compared to an enormous solar event that slammed into our planet 14,000 years ago. If one were to occur today, the effect on Earth could be devastating.

The oldest trees on Earth date back a whopping 5,000 years, living through all manner of events. They have stood through the rise and fall of the Roman Empire, the birth of Christianity, the European discovery of the Americas and the first Moon landing. Trees can even be fossilised in soil underground, giving us a connection to the last 30,000 years.

At first glance, these long-lived specimens might just appear to be static observers, but not so. They are doing something extraordinary as they grow – recording the activity of our Sun.

As trees photosynthesise throughout the year, they change in colouration depending on the season, appearing lighter in spring and darker by autumn. The result is a year-on-year record contained within the growth “rings” of the tree. “This gives us this really valuable archive of time capsules,” says Charlotte Pearson, a dendrochronologist – someone who studies tree rings – at the Laboratory of Tree-Ring Research at the University of Arizona, US.

For most of the 20th Century, dendrochronologists largely used tree rings to investigate change across wide chunks of history – a decade or more. Yet at certain points in time, the change they document has been more sudden and cataclysmic. What they are finding evidence of are massive solar events that reveal disturbing insights into the turbulent recent past of the star at the centre of our Solar System.

“Nobody was expecting a brief event to appear,” says Edouard Bard, a climatologist at the College de France in Paris. But in 2012 a then-PhD student called Fusa Miyake, now a cosmic ray physicist at Nagoya University in Japan, made an astonishing discovery. Studying Japanese cedar trees, she discovered a huge spike in a type of carbon known as carbon-14 in a single year nearly 1,250 years ago, in 774 CE. “I was so excited,” says Miyake.

After doubting the data at first, Miyake and her colleagues soon came to an unnerving conclusion. The spike in carbon-14 must have come from something injecting huge numbers of particles into our atmosphere, since this radioactive isotope of carbon is produced when high-energy particles strike nitrogen in the atmosphere. Such spikes were once tentatively linked to cosmic events like supernovae, but studies have since suggested a more probable cause: a monster burst of particles thrown out by the Sun. These would be generated by superflares, far bigger than anything seen in the modern era.
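
The production pathway itself is well understood: incoming high-energy particles set off showers of secondary neutrons in the upper atmosphere, and when one of those neutrons strikes a nitrogen-14 atom it knocks out a proton, leaving carbon-14 behind:

n + ¹⁴N → ¹⁴C + p

The newly minted carbon-14 is oxidised into carbon dioxide, taken up by trees during photosynthesis and locked into that year’s growth ring – which is why the spikes can be dated to a single year.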

“They require an event that’s at least ten times bigger than anything we’ve observed,” says Mathew Owens, a space physicist at the University of Reading in the UK. The first recorded solar flare sighting dates back to the middle of the 19th Century and is associated with the great geomagnetic storm of 1859, which has become known as the Carrington Event, after one of the astronomers who observed it, Richard Carrington.

Spikes in the level of carbon-14 isotope in tree rings have revealed past spikes in high-energy particles bombarding the Earth (Credit: Getty Images)

Miyake’s discovery was confirmed by other studies of tree rings and analysis of ancient ice in cores collected from places such as Antarctica and Greenland. The latter contained correlated signatures of beryllium-10 and chlorine-36, which are produced in a similar atmospheric process to carbon-14. Since then, more Miyake events, as these massive bursts of cosmic radiation and particles are now known, have been unearthed. In total, seven well-studied events are known to have occurred over the past 15,000 years, while there are several other spikes in carbon-14 that have yet to be confirmed as Miyake events.

The most recent occurred just over 1,000 years ago in 993 CE. Researchers believe these events occur rarely – but at somewhat regular intervals, perhaps every 400 to 2,400 years.

The most powerful known Miyake event was discovered as recently as 2023 when Bard and his colleagues announced the discovery of a carbon-14 spike in fossilised Scots pine trees in Southern France dating back 14,300 years. The spike they saw was twice as powerful as any Miyake event seen before, suggesting these already-suspected monster events could be even bigger than previously thought.

The team behind the discovery of this superstorm from space had scoured the Southern French Alps for fossilised trees and found some that had been exposed by rivers. Using a chainsaw, they collected samples and examined them back in a laboratory, discovering evidence for an enormous carbon-14 spike. “We dreamed of finding a new Miyake event, and we were very, very happy to find this,” says Cécile Miramont, a dendrochronologist at Aix-Marseille University in France and a co-author on the study.

Source: https://www.bbc.com/future/article/20240815-miyake-events-the-giant-solar-superstorms-that-could-rock-earth

South American lungfish has largest genome of any animal

A South American lungfish, whose scientific name is Lepidosiren paradoxa, is seen at a laboratory at Louisiana State University in Baton Rouge, Louisiana, U.S., March 18, 2024. Katherine Seghers, Louisiana State University/Handout via REUTERS/File Photo

The South American lungfish is an extraordinary creature – in some sense, a living fossil. Inhabiting slow-moving and stagnant waters in Brazil, Argentina, Peru, Colombia, Venezuela, French Guiana and Paraguay, it is the nearest living relative to the first land vertebrates and closely resembles its primordial ancestors dating back more than 400 million years.

This freshwater species, called Lepidosiren paradoxa, also has another distinction: the largest genome – all the genetic information of an organism – of any animal on Earth. Scientists have now sequenced its genome, finding it to be about 30 times the size of the human genetic blueprint.

The metric for genome size was the number of base pairs, the fundamental units of DNA, in an organism’s cellular nuclei. If unwound like yarn from a ball, the DNA in each cell of this lungfish would stretch almost 200 feet (60 meters). The human genome would extend a mere 6-1/2 feet (2 meters).

“Our analyses revealed that the South American lungfish genome grew massively during the past 100 million years, adding the equivalent of one human genome every 10 million years,” said evolutionary biologist Igor Schneider of Louisiana State University, one of the authors of the study published this week in the journal Nature.
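
Those figures are easy to sanity-check. A back-of-the-envelope calculation – assuming a haploid human genome of roughly 3.1 billion base pairs, about 0.34 nanometers of length per base pair of double-stranded DNA, and two genome copies per nucleus, all textbook values rather than numbers from the study itself – reproduces the quoted lengths:

# Back-of-the-envelope check of the figures above. The constants are
# standard textbook values, not taken from the study.
HUMAN_BP = 3.1e9             # haploid human genome, base pairs (approx.)
LUNGFISH_BP = 30 * HUMAN_BP  # "about 30 times the size" of the human genome
NM_PER_BP = 0.34             # rise per base pair of B-form DNA, nanometers
COPIES = 2                   # each nucleus carries two copies of the genome

def stretched_length_m(base_pairs):
    """Total end-to-end DNA length per cell nucleus, in meters."""
    return base_pairs * COPIES * NM_PER_BP * 1e-9

print(f"lungfish: {stretched_length_m(LUNGFISH_BP):.0f} m")  # ~63 m, roughly 200 ft
print(f"human:    {stretched_length_m(HUMAN_BP):.1f} m")     # ~2.1 m
# Growth claim: one human genome every 10 million years, over 100 million years
print(f"DNA added: {(100 / 10) * HUMAN_BP / 1e9:.0f} billion base pairs")  # ~31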

In fact, 18 of the 19 South American lungfish chromosomes – the threadlike structures that carry an organism’s genomic information – are each individually larger than the entire human genome, Schneider said.

Huge as it is, the lungfish genome is not the biggest overall: some plants have larger ones. The current record holder is a fork fern species, called Tmesipteris oblanceolata, found in the French overseas territory of New Caledonia in the Pacific. Its genome is more than 50 times the human genome’s size.

Until now, the largest-known animal genome was that of another lungfish, the Australian lungfish, Neoceratodus forsteri. The South American lungfish’s genome is more than twice as big. The world’s four other lungfish species, which also have large genomes, all live in Africa.

Lungfish genomes are largely composed of repetitive elements – about 90% of the genome. The researchers said the massive expansion documented in lungfish seems to be related to a weakening, in these species, of a mechanism that ordinarily suppresses such genomic repetition.

“Animal genome sizes vary greatly, but the significance and causes of genome size variation remain unclear. Our study advances our understanding of genome biology and structure by identifying mechanisms that control genome size while maintaining chromosome stability,” Schneider said.

The South American lungfish reaches up to about 4 feet (1.25 meters) long. While other fish rely on gills to breathe, lungfish also possess a pair of lung-like organs. The species lives in the oxygen-starved, swampy environments of the Amazon and Parana-Paraguay River basins, supplementing the oxygen it draws from the water by breathing air.

Lungfish first appeared during the Devonian Period. It was during the Devonian that one of the most important moments in the history of life on Earth occurred – when fish possessing lungs and muscular fins evolved into the first tetrapods, the four-limbed land vertebrates that now include amphibians, reptiles, birds and mammals.

Source: https://www.reuters.com/science/south-american-lungfish-has-largest-genome-any-animal-2024-08-16

Why are more young adults getting colorectal cancer? The answer may be their diet

3D Rendered Medical Illustration of Male Anatomy showing Colorectal Cancer (© SciePro – stock.adobe.com)

Colorectal cancer rates are rising at an alarming rate among young adults, but the reason behind the increased diagnoses has been a medical mystery. However, the Cleveland Clinic has released a study that pinpoints a major cause for the spike in cases: diet.

When looking at the microbiomes of adults 60 years and younger with colorectal cancer, researchers found an unusually high level of diet-derived molecules called metabolites. The metabolites involved in colorectal cancer usually come from eating red and processed meat.

“Researchers—ourselves included—have begun to focus on the gut microbiome as a primary contributor to colon cancer risk. But our data clearly shows that the main driver is diet,” says Dr. Naseer Sangwan, a director at the Microbial Sequencing & Analytics Resource Core at the Cleveland Clinic and study co-author, in a media release. “We already know the main metabolites associated with young-onset risk, so we can now move our research forward in the correct direction.”

The study is published in the journal npj Precision Oncology.

This is a map of colorectal cancer hotspots in the United States. (Image credit: Rogers et al. American Journal of Cancer Research)

The researchers created an artificial intelligence algorithm to examine a wide range of datasets in published studies to determine what factors contributed most to colorectal cancer risk. One crucial area to explore was the gut microbiome. Previous research showed significant differences in gut composition between younger and older adults with colorectal cancer.

One of the most striking differences between younger and older adults with colorectal cancer is diet, reflected in the types of metabolites present in the gut microbiome. Younger people showed higher levels of metabolites involved in producing and metabolizing an amino acid called arginine, along with metabolites involved in the urea cycle.

According to the authors, these metabolites likely result from overeating red meat and processed foods. They are currently examining national datasets to confirm their findings.

Of the two, it is much simpler to change a person’s diet than to completely reset their microbiome. The findings suggest that eating less red and processed meat could lower a person’s risk of colorectal cancer.

“Even though I knew before this study that diet is an important factor in colon cancer risk, I didn’t always discuss it with my patients during their first visit. There is so much going on, it can already be so overwhelming,” says Dr. Suneel Kamath, a gastrointestinal oncologist at the Cleveland Clinic and senior author of the study. “Now, I always make sure to bring it up to my patients, and to any healthy friends or family members they may come in with, to try and equip them with the tools they need to make informed choices about their lifestyle.”

Making healthier dietary choices is also a more accessible method for preventing colorectal cancer. While screening is an important tool, Dr. Kamath notes it is impractical for doctors to give every person in the world a colonoscopy. In the future, simple tests that count specific metabolites as a marker for colorectal cancer risk may help with increased monitoring. On the research side, the authors plan to test whether particular diets or drugs involved in regulating arginine production and the urea cycle can help prevent or treat colorectal cancer in young adults.

Source: https://studyfinds.org/colorectal-cancer-diet/?nab=0

Shocking brain scans reveal consciousness remains among vegetative patients

(© Photographee.eu – stock.adobe.com)

For years, families of brain-injured patients have insisted their unresponsive loved ones were still “in there.” Now, a groundbreaking study on consciousness suggests they may have been right all along.

Researchers have discovered that approximately one in four patients who appear completely unresponsive may actually be conscious and aware but physically unable to show it. This phenomenon, known as cognitive motor dissociation, challenges long-held assumptions about disorders of consciousness and could have profound implications for how we assess and care for brain-injured patients.

The study, published in the New England Journal of Medicine, represents the largest and most comprehensive investigation of cognitive motor dissociation to date. An international team of researchers used advanced brain imaging and electrophysiological techniques to detect signs of consciousness in patients who seemed entirely unresponsive based on standard behavioral assessments.

The findings suggest that cognitive motor dissociation is far more common than previously thought. This has major implications for clinical care, end-of-life decision-making, and our fundamental understanding of consciousness itself.

The study examined 353 adult patients with disorders of consciousness resulting from various types of brain injuries. These conditions exist on a spectrum, ranging from coma (where patients are completely unresponsive and show no signs of awareness) to the vegetative state (where patients may open their eyes and have sleep-wake cycles but show no signs of awareness) to the minimally conscious state (where patients show some inconsistent but reproducible signs of awareness).

Traditionally, doctors have relied on bedside behavioral assessments to diagnose a patient’s level of consciousness. However, this approach assumes that if a patient can’t physically respond to commands or stimuli, they must not be aware. The new study challenges this assumption, revealing signs of consciousness that may not be outwardly visible.

Strikingly, the study found that 25% of patients who showed no behavioral signs of consciousness demonstrated brain activity consistent with awareness and the ability to follow commands. In other words, one in four patients who appeared to be in a vegetative state or minimally conscious state without command-following ability were actually conscious and able to understand and respond mentally to instructions.

“Some patients with severe brain injury do not appear to be processing their external world. However, when they are assessed with advanced techniques such as task-based fMRI and EEG, we can detect brain activity that suggests otherwise,” says lead study author Yelena Bodien, PhD, in a statement.

Bodien is an investigator for the Spaulding-Harvard Traumatic Brain Injury Model Systems and Massachusetts General Hospital’s Center for Neurotechnology and Neurorecovery.

“These results bring up critical ethical, clinical, and scientific questions – such as how can we harness that unseen cognitive capacity to establish a system of communication and promote further recovery?”

The study also found that cognitive motor dissociation was more common in younger patients, those with traumatic brain injuries, and those who were assessed later after their initial injury. This suggests that some patients may recover cognitive abilities over time, even if they remain unable to communicate behaviorally.

Interestingly, even among patients who could follow commands behaviorally, more than 60% did not show responses on the brain imaging tests. This highlights the complex nature of consciousness and the limitations of current detection methods.

The findings raise challenging questions about how we diagnose disorders of consciousness, make end-of-life decisions, and allocate resources for long-term care and rehabilitation. They also open up new avenues for potential therapies aimed at restoring communication in these patients.

While the study represents a significant advance, the authors caution that the techniques used are not yet widely available and require further refinement before they can be routinely used in clinical practice.

“To continue our progress in this field, we need to validate our tools and to develop approaches for systematically and pragmatically assessing unresponsive patients so that the testing is more accessible,” adds Bodien. “We know that cognitive motor dissociation is not uncommon, but resources and infrastructure are required to optimize detection of this condition and provide adequate support to patients and their families.”

Source: https://studyfinds.org/brain-consciousness-vegetative/?nab=0

AI model 98% accurate in detecting diseases — just by looking at your tongue

(vladimirfloyd – stock.adobe.com)

This technology could be aah-mazing!

Researchers in Iraq and Australia say they have developed a computer algorithm that can analyze the color of a person’s tongue to detect their medical condition in real time — with 98% accuracy.

“Typically, people with diabetes have a yellow tongue; cancer patients a purple tongue with a thick greasy coating; and acute stroke patients present with an unusually shaped red tongue,” explained senior study author Ali Al-Naji, who teaches at Middle Technical University in Baghdad and the University of South Australia.

Examining the tongue for signs of disease has long been commonplace in Chinese medicine. (MDPI)

“A white tongue can indicate anemia; people with severe cases of COVID-19 are likely to have a deep-red tongue,” Al-Naji continued. “An indigo- or violet-colored tongue indicates vascular and gastrointestinal issues or asthma.”
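
The team’s actual system is a trained machine-learning classifier working on camera images, and none of its specifics are reproduced here. But the core idea – mapping a tongue’s dominant color to a screening category – can be sketched in a few lines of Python. The hue bands below are invented for illustration; only the category labels come from the quotes above.

import colorsys

# Illustrative hue bands (degrees). These thresholds are hypothetical,
# not taken from the study; the labels follow the researchers' quotes.
HUE_BANDS = {
    "yellow – possible diabetes": (40, 70),
    "red – possible acute stroke or severe COVID-19": (345, 15),
    "indigo/violet – possible vascular, GI issues or asthma": (250, 345),
}

def classify_tongue_color(r, g, b):
    """Map an average tongue RGB value to one of the quoted categories."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    hue = h * 360
    for label, (lo, hi) in HUE_BANDS.items():
        # Bands where lo > hi wrap around the 360-degree hue circle (red).
        matched = lo <= hue <= hi if lo < hi else (hue >= lo or hue <= hi)
        if matched:
            return label
    return "no match – pale/white tones need brightness checks, not hue"

print(classify_tongue_color(200, 170, 60))  # -> "yellow – possible diabetes"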

Source: https://nypost.com/2024/08/13/lifestyle/ai-model-98-accurate-in-detecting-diseases-just-by-looking-at-your-tongue/

Paradise Found: Experts Rank the West Coast’s Most Beautiful Beaches

A pelican in front of Haystack Rock on Cannon Beach in Oregon (Photo by Hank Vermote on Shutterstock)

The West Coast of the United States is home to some of the most breathtaking beaches in the world. From California’s dramatic cliffs to Oregon and Washington’s peaceful shores, there’s a beach for every vibe. With nearly 8,000 miles of shoreline, it would take years to get to every beach. That’s why we’ve created a list of the best West Coast beaches that travel experts across seven websites recommend adding to your bucket list. So, grab your sunscreen and towel, and discover what could become your new favorite spot. Is there a beach you love that’s not on the list? Let us know!

The List: Top 6 Must-Visit Beaches on the Left Coast

1. Cannon Beach, Oregon

Cannon Beach, Oregon (Photo by Tim Mossholder on Unsplash)

You won’t be able to put your camera down when visiting Cannon Beach along the Oregon coast. The breathtaking sunsets rival those in far-off lands, with the towering Haystack Rock providing an incredible backdrop. Nestled in the charming town of Cannon Beach, this coastal gem is a favorite spot for day-trippers from Portland, says Roam The Northwest.

You’ll also encounter plenty of wildlife at the beach, from tide pools to unique marine life. Nearby, Ecola State Park offers hiking trails and lookouts – perfect for selfies. Sixt recommends a stroll through Cannon Beach’s downtown area, filled with unique boutiques and galleries. It’s a full-day adventure!

Swimming might not be the main attraction at Cannon Beach, but that doesn’t bother visitors, reports Your Tango. There’s plenty to do, from exploring vibrant nature trails to walking to Haystack Rock at low tide. The beach promises fun times for everyone, even if you don’t take a dip!

2. La Jolla Beach, California

La Jolla Beach, California (Photo by Danita Delimont on Shutterstock)

According to USA Today, this popular San Diego beach is a dream come true with its miles of turquoise water and gentle waves, making it the perfect place to learn to surf. The swimming and snorkeling are unbeatable with plenty of colorful fish and marine life.

The beach is an ideal destination for families looking to enjoy the sand, surf, and explore the nearby State Marine Reserve. Roam the Northwest recommends bringing a bike to tour the Coast Walk Trail or visit other nearby beaches.

If you enjoy kayaking or snorkeling, Your Tango says that this beach offers some of the most secluded caves to explore. Just a short walk from La Jolla Cove is the Children’s Pool, an ideal spot for families with small children.

3. Rialto Beach, Washington

Sunset on Rialto Beach, Washington (Photo by Jay Yuan on Shutterstock)

The highlight of this stunning Olympic Coast beach is the aptly named Hole-in-the-Wall, located about a mile north of the parking lot. Accessible only at low tide, this natural rock formation is perfect for exploring and taking photos. It frames the sea stacks that line this stretch of beach beautifully, according to Roam the Northwest.

The water can be chilly in the summer, around 59 degrees Fahrenheit (15 Celsius), but Cheapism says the stunning scenery more than makes up for it. You’ll find massive sea stacks just offshore, piles of driftwood scattered along the beach, and the famous Hole-in-the-Wall sea cave arch. The wildlife here is a real treat too – otters, whales, and seals are regulars!

This beach is one of the rare spots where you can bring your pets, but Yard Barker says you must keep them leashed and not let them go past Ellen Creek. It’s a popular place for beach camping too, though sadly, your four-legged friends have to sit that one out.

4. Ruby Beach, Washington

Ruby Beach ( Photo by Sean Pavone on Shutterstock)

This is one of Washington State’s best-kept secrets according to Yard Barker. Adjacent to Olympic National Park, this spot is more “beachy” than Rialto. If it weren’t for the cooler weather, you might think you were in California. This spot is a must-add to any vacation itinerary.

Ruby Beach is conveniently located off Highway 101 and is perfect for day-trippers. With its towering sea stacks and cobbled stones, Roam the Northwest guarantees you’ll spend hours beachcombing and soaking in the wild beauty. Its remote charm and stunning landscapes keep people coming back for more.

During low tide, visitors can explore rocky areas and discover marine life in the tide pools, while photographers capture the scenic sea stacks and driftwood. For a more active experience, the nearby Olympic National Park offers coastal trails with stunning views of the Pacific Ocean. As Sixt highlights, the 1.5-mile hike to the Hoh River is breathtaking, featuring sea stacks, driftwood, and the chance to spot eagles and other wildlife.

Source: https://studyfinds.org/best-west-coast-beaches/?nab=0

Going vegan vs. Mediterranean diet: Surprising study reveals which is healthier

(© Mustafa – stock.adobe.com)

The Mediterranean diet has long been touted as the gold standard for healthy eating, but a new contender has emerged from an unexpected corner. Recent research shows that a low-fat vegan diet not only promotes more weight loss but also dramatically reduces harmful substances in our food.

The study, conducted by researchers at the Physicians Committee for Responsible Medicine, a nonprofit that promotes plant-based eating, compared the effects of a Mediterranean diet and a low-fat vegan diet on overweight adults. Participants on the vegan diet lost an average of 6 kilograms (about 13 pounds) more than those on the Mediterranean diet, with no change in their physical activity.

But the benefits, published in the journal Frontiers in Nutrition, didn’t stop at weight loss. The vegan diet also led to a dramatic 73% reduction in dietary advanced glycation end-products (AGEs). These harmful compounds, formed when proteins or fats combine with sugars, have been linked to various health issues, including inflammation, oxidative stress, and an increased risk of chronic diseases like Type 2 diabetes and cardiovascular disease.

Why you should eliminate AGEs from your diet

To understand AGEs, imagine them as unwanted houseguests that overstay their welcome in your body. They form naturally during normal metabolism, but they also sneak in through our diet, especially in animal-based and highly processed foods. AGEs are particularly abundant in foods cooked at high temperatures, such as grilled meats or fried foods. They can accumulate in our bodies over time, causing damage to tissues and contributing to the aging process – hence their nickname, “glycotoxins.”

The Mediterranean diet, long praised for its health benefits, surprisingly showed no significant change in dietary AGE levels. This finding challenges the perception that the Mediterranean diet is the gold standard for healthy eating. The vegan diet, on the other hand, achieved its AGE-busting effects primarily by eliminating meat consumption (which accounted for 41% of the AGE reduction), minimizing added fats (27% of the reduction), and avoiding dairy products (14% of the reduction).

These results suggest that a low-fat vegan diet could be a powerful tool in the fight against obesity and its related health issues. By reducing both body weight and harmful AGEs, this dietary approach may offer a two-pronged attack on factors that contribute to chronic diseases.

Mediterranean diet not best for weight loss?

The study’s lead author, Dr. Hana Kahleova, says that the vegan diet’s benefits extended beyond just numbers on a scale. The reduction in AGEs could have far-reaching implications for overall health, potentially lowering the risk of various age-related diseases.

“The study helps bust the myth that a Mediterranean diet is best for weight loss,” says Kahleova, the director of clinical research at the Physicians Committee for Responsible Medicine, in a statement. “Choosing a low-fat vegan diet that avoids the dairy and oil so common in the Mediterranean diet helps reduce intake of harmful advanced glycation end-products leading to significant weight loss.”

This research adds to a growing body of evidence supporting the benefits of plant-based diets. Previous studies have shown that vegetarian and vegan diets can reduce the risk of developing metabolic syndrome and Type 2 diabetes by about 50%. The dramatic reduction in dietary AGEs observed in this study may help explain some of these protective effects.

Source: https://studyfinds.org/vegan-vs-mediterranean-diet-which-is-healthier/?nab=0

Survey says it takes nearly 2 months of exercise before you’ll start to look more fit

(© rangizzz – stock.adobe.com)

A poll of 2,000 adults reveals what goals people prioritize when it comes to their fitness. Above all, they’re aiming to lose a certain amount of weight (43%), increase their general strength (43%), and improve their general mobility (35%).

However, 48 percent are worried about potentially losing the motivation to get fit and 65 percent believe the motivation to increase their level of physical fitness wanes over time.

According to respondents, the motivation to keep going lasts for about four weeks before needing a new push.

The survey, commissioned by Optimum Nutrition and conducted by Talker Research, finds that the vast majority of Americans (89%) say their diet affects their level of fitness motivation.

Nearly three in 10 (29%) believe they don’t get enough protein in their diet, saying they lack it either “all the time” (19%) or “often” (40%).

Gen X respondents feel like they are lacking protein the most out of all generations (35%), compared to millennials (34%), Gen Z (27%) and baby boomers (21%). Plus, more than a third (35%) of women don’t think they get enough protein, versus 23 percent of men.

The average person has two meals per day that don’t include protein, but 61 percent would be more likely to increase their protein intake to help achieve their fitness goals.

As people reflect on health and wellness goals, the most common experiences that make people feel out of shape include running out of breath often (49%) and trying on clothing that no longer fits (46%).

Over a quarter (29%) say they realized they were out of shape after not being able to walk up a flight of stairs without feeling winded.

Source: https://studyfinds.org/survey-says-it-takes-nearly-2-months-of-exercise-before-youll-start-to-look-more-fit/

Gold goes 2D: Scientists create ultra-thin ‘goldene’ sheets

Lars Hultman, professor of thin film physics and Shun Kashiwaya, researcher at the Materials Design Division at Linköping University. (Credit: Olov Planthaber)

In a remarkable feat of nanoscale engineering, scientists have created the world’s thinnest gold sheets at just one atom thick. This new material, dubbed “goldene,” could revolutionize fields from electronics to medicine, offering unique properties that bulk gold simply can’t match.

The research team, led by scientists from Linköping University in Sweden, managed to isolate single-atom layers of gold by cleverly manipulating the metal’s atomic structure. Their findings, published in the journal Nature Synthesis, represent a significant breakthrough in the rapidly evolving field of two-dimensional (2D) materials.

Since the discovery of graphene — single-atom-thick sheets of carbon — in 2004, researchers have been racing to create 2D versions of other elements. While 2D materials made from carbon, boron, and even iron have been achieved, gold has proven particularly challenging. Previous attempts resulted in gold sheets several atoms thick or required the gold to be supported by other materials.

The Swedish team’s achievement is particularly noteworthy because they created free-standing sheets of gold just one atom thick. This ultra-thin gold, or goldene, exhibits properties quite different from its three-dimensional counterpart. For instance, the atoms in goldene are packed more tightly together, with about 9% less space between them compared to bulk gold. This compressed structure leads to changes in the material’s electronic properties, which could make it useful for a wide range of applications.

One of the most exciting potential uses for goldene is in catalysis, which is the process of speeding up chemical reactions. Gold nanoparticles are already used as catalysts in various industrial processes, from converting harmful vehicle emissions into less dangerous gases to producing hydrogen fuel. The researchers believe that goldene’s extremely high surface-area-to-volume ratio could make it an even more efficient catalyst.

The creation of goldene also opens up new possibilities in fields like electronics, photonics, and medicine. For example, the material’s unique optical properties could lead to improved solar cells or new types of sensors. In medicine, goldene might be used to create ultra-sensitive diagnostic tools or to deliver drugs more effectively within the body.

How They Did It: Peeling Gold Atom by Atom

The process of creating goldene is almost as fascinating as the material itself. The researchers used a technique that might be described as atomic-scale sculpting, carefully removing unwanted atoms to leave behind a single layer of gold.

They started with a material called Ti3AuC2, which is part of a family of compounds known as MAX phases. These materials have a layered structure, with sheets of titanium carbide (Ti3C2) alternating with layers of gold atoms. The challenge was to remove the titanium carbide layers without disturbing the gold.

To accomplish this, the team used a chemical etching process. They immersed the Ti3AuC2 in a carefully prepared solution containing potassium hydroxide and potassium ferricyanide, known as Murakami’s reagent. This solution selectively attacks the titanium carbide layers, gradually dissolving them away.

However, simply etching away the titanium carbide wasn’t enough. Left to their own devices, the freed gold atoms would quickly clump together, forming 3D nanoparticles instead of 2D sheets. To prevent this, the researchers added surfactants – molecules that help keep the gold atoms spread out in a single layer.

Two key surfactants were used: cetrimonium bromide (CTAB) and cysteine. These molecules attach to the surface of the gold, creating a protective barrier that prevents the atoms from coalescing. The entire process took about a week, with the researchers carefully controlling the concentration of the etching solution and surfactants to achieve the desired result.

For the first time, scientists have managed to create sheets of gold only a single atom layer thick. (Credit: Olov Planthaber)

Results: A New Form of Gold Emerges

The team’s efforts resulted in sheets of gold just one atom thick, confirmed through high-resolution electron microscopy. These goldene sheets showed several interesting properties:

  1. Compressed structure: The gold atoms in goldene are packed about 9% closer together than in bulk gold. This compression changes how the electrons in the material behave, potentially leading to new electronic and optical properties.
  2. Increased binding energy: X-ray photoelectron spectroscopy revealed that the electrons in goldene are more tightly bound to their atoms compared to bulk gold. This shift in binding energy could affect the material’s chemical reactivity.
  3. Rippling and curling: Unlike perfectly flat sheets, the goldene layers showed some rippling and curling, especially at the edges. This behavior is common in 2D materials and can influence their properties.
  4. Stability: Computer simulations suggested that goldene should be stable at room temperature, although the experimental samples showed some tendency to form blobs or clump together over time.

The researchers also found that they could control the thickness of the gold sheets by adjusting their process. Using slightly different conditions, they were able to create two- and three-atom-thick sheets of gold as well.

Limitations and Challenges

  1. Scale: The current process produces relatively small sheets of goldene, typically less than 100 nanometers across. Scaling up production to create larger sheets will be crucial for many potential applications.
  2. Stability: Although computer simulations suggest goldene should be stable, the experimental samples showed some tendency to curl and form blobs, especially at the edges. Finding ways to keep the sheets flat and prevent them from clumping together over time will be important.
  3. Substrate dependence: The goldene sheets were most stable when still partially attached to the original Ti3AuC2 material or when supported on a substrate. Creating large, free-standing sheets of goldene remains a challenge.
  4. Purity: The etching process leaves some residual titanium and carbon atoms mixed in with the gold. While these impurities are minimal, they could affect the material’s properties in some applications.
  5. Reproducibility: The process of creating goldene is quite sensitive to the exact conditions used. Ensuring consistent results across different batches and scaling up production will require further refinement of the technique.

The surprising cure for chronic back pain? Just take a walk

(© glisic_albina – stock.adobe.com)

For anyone who has experienced the debilitating effects of low back pain, the results of an eye-opening new study may be a game-changer. Researchers have found that a simple, accessible program of progressive walking and education can significantly reduce the risk of constant low back pain flare-ups in adults. The implications are profound — no longer does managing this pervasive condition require costly equipment or specialized rehab facilities. Instead, putting on a pair of sneakers and taking a daily stroll could be one of the best preventative therapies available.

Australian researchers, publishing their work in The Lancet, recruited over 700 adults across the country who had recently recovered from an episode of non-specific low back pain, which lasted at least 24 hours and interfered with their daily activities. The participants were divided into two groups: one received an individualized walking and education program guided by a physiotherapist over six months, and the other received no treatment at all during the study.

Participants were then carefully followed for at least one year, up to a maximum of nearly three years for some participants. The researchers meticulously tracked any recurrences of low back pain that were severe enough to limit daily activities.

“Our study has shown that this effective and accessible means of exercise has the potential to be successfully implemented at a much larger scale than other forms of exercise,” says lead author Dr. Natasha Pocovi in a media release. “It not only improved people’s quality of life, but it reduced their need both to seek healthcare support and the amount of time taken off work by approximately half.”

Methodology: A Step-by-Step Approach

So, what did this potentially back-saving intervention involve? It utilized the principles of health coaching, where physiotherapists worked one-on-one with participants to design and progressively increase a customized walking plan based on the individual’s age, fitness level, and objectives.

The process began with a 45-minute consultation to understand each participant’s history, conduct an examination, and prescribe an initial “walking dose,” which was then gradually ramped up. The guiding target was to work up to walking at least 30 minutes per day, five times per week, by the six-month mark.

During this period, participants also participated in lessons to help overcome fears about back pain while learning easy strategies to self-manage any recurrences. They were provided with a pedometer and a walking diary to track their progress. After the first 12 weeks, they could choose whether to keep using those motivational tools. Follow-up sessions with the physiotherapist every few weeks, either in-person or via video calls, were focused on monitoring progress, adjusting walking plans when needed, and providing encouragement to keep participants engaged over the long haul.

Results: Dramatic Improvement & A Manageable Approach

The impact of this straightforward intervention was striking. Compared to the control group, participants in the walking program experienced a significantly lower risk of suffering a recurrence of low back pain that limited daily activities. Overall, the risk fell by 28%.

Even more impressive, the average time for a recurrence to strike was nearly double for those in the walking group (208 days) versus the control group (112 days). The results for any recurrence of low back pain, regardless of its impact on activities, and for recurrences requiring medical care showed similarly promising reductions in risk. Simply put, people engaging in the walking program stayed pain-free for nearly twice as long as those not treating their lower back pain.
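
As a quick check on how those two headline numbers relate – simple arithmetic on the figures above, nothing taken from the paper itself:

# Simple arithmetic on the figures quoted above.
control_days, walking_days = 112, 208
print(f"{walking_days / control_days:.2f}x")  # ~1.86x longer until recurrence
# A 28% lower risk of recurrence means the walking group's risk was
# about 0.72 times the control group's: 1 - 0.28 = 0.72
print(1 - 0.28)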

Source: https://studyfinds.org/back-pain-just-take-a-walk/

Intermittent fasting may supercharge ‘natural killer’ cells to destroy cancer

Could skipping a few meals each week help you fight cancer? It might sound far-fetched, but new research suggests that one type of intermittent fasting could actually boost your body’s natural ability to defeat cancer.

(Credit: MIA Studio/Shutterstock)

A team of scientists at Memorial Sloan Kettering Cancer Center (MSK) has uncovered an intriguing link between fasting and the body’s immune system. Their study, published in the journal Immunity, focuses on a particular type of immune cell called natural killer (NK) cells. These cells are like the special forces of your immune system, capable of taking out cancer cells and virus-infected cells without needing prior exposure.

So, what’s the big deal about these NK cells? Well, they’re pretty important when it comes to battling cancerous tumors. Generally speaking, the more NK cells you have in a tumor, the better your chances of beating the disease. However, there’s a catch: the environment inside and around tumors is incredibly harsh. It’s like a battlefield where resources are scarce, and many immune cells struggle to survive.

This is where fasting enters the picture. The researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment and more effective at fighting cancer.

“Tumors are very hungry,” says immunologist Joseph Sun, PhD, the study’s senior author, in a media release. “They take up essential nutrients, creating a hostile environment often rich in lipids that are detrimental to most immune cells. What we show here is that fasting reprograms these natural killer cells to better survive in this suppressive environment.”

Illustration of a group of cancer cells. Researchers found that periods of fasting actually “reprogrammed” these NK cells, making them better equipped to survive in the dangerous tumor environment. (© fotoyou – stock.adobe.com)

How exactly does intermittent fasting achieve this?

The study, which was conducted on mice, involved denying the animals food for 24 hours twice a week, with normal eating in between. This intermittent fasting approach had some pretty remarkable effects on the NK cells.

First off, fasting caused the mice’s glucose levels to drop and their levels of free fatty acids to rise. Free fatty acids are a type of lipid (fat) that can be used as an alternative energy source when other nutrients are scarce. The NK cells learned to use these fatty acids as fuel instead of glucose, which is typically their primary energy source.

“During each of these fasting cycles, NK cells learned to use these fatty acids as an alternative fuel source to glucose,” says Dr. Rebecca Delconte, the lead author of the study. “This really optimizes their anti-cancer response because the tumor microenvironment contains a high concentration of lipids, and now they’re able to enter the tumor and survive better because of this metabolic training.”

The fasting also caused the NK cells to move around the body in interesting ways. Many of them traveled to the bone marrow, where they were exposed to high levels of a protein called Interleukin-12. This exposure primed the NK cells to produce more of another protein called Interferon-gamma, which plays a crucial role in fighting tumors. Meanwhile, NK cells in the spleen were undergoing their own transformation, becoming even better at using lipids as fuel. The result? NK cells were pre-primed to produce more cancer-fighting substances and were better equipped to survive in the harsh tumor environment.

 

Source: https://studyfinds.org/intermittent-fasting-fight-cancer/

There are 6 different types of depression, brain pattern study shows

(Image by Feng Yu on Shutterstock)

Depression and anxiety disorders are among the most common mental health issues worldwide, yet current treatments often fail to provide relief for many sufferers. A major challenge has been the heterogeneity of these conditions. Patients with the same diagnosis can have vastly different symptoms and underlying brain dysfunctions. Now, a team of researchers at Stanford University has developed a novel approach to parse this heterogeneity, identifying six distinct “biotypes” of depression and anxiety based on specific patterns of brain circuit dysfunction.

The study, published in Nature Medicine, analyzed brain scans from over 800 patients with depression and anxiety disorders. By applying advanced computational techniques to these scans, the researchers were able to quantify the function of key brain circuits involved in cognitive and emotional processing at the individual level. This allowed them to group patients into biotypes defined by shared patterns of circuit dysfunction, rather than relying solely on symptoms.
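
The authors’ actual pipeline is far more sophisticated than anything that fits here, but the core move – representing each patient as a vector of brain-circuit scores and clustering those vectors into subgroups – can be sketched in a few lines. Everything below is illustrative: the data are synthetic stand-ins for the real scans, and the six-cluster choice simply mirrors the study’s result.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical stand-in data: one row per patient, one column per brain
# circuit (e.g., self-referential, salience, attention, cognitive control,
# sadness, reward), expressed as z-scores against healthy controls.
circuit_scores = rng.normal(size=(800, 6))

# Group patients by shared patterns of circuit dysfunction.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(circuit_scores)
print(np.bincount(kmeans.labels_))  # number of patients per candidate biotype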

Intriguingly, the six biotypes showed marked differences not just in their brain function, but also in their clinical profiles. Patients in each biotype exhibited distinct constellations of symptoms, cognitive impairments, and critically, responses to different treatments. For example, one biotype characterized by hyperconnectivity in circuits involved in self-referential thought and salience processing responded particularly well to behavioral therapy. Another, with heightened activity in circuits processing sadness and reward, was distinguished by prominent anhedonia (inability to feel pleasure).

These findings represent a significant step towards a more personalized, brain-based approach to diagnosing and treating depression and anxiety. By moving beyond one-size-fits-all categories to identify subgroups with shared neural mechanisms, this work opens the door to matching patients with the therapies most likely to help them based on the specific way their brain is wired. It suggests that brain circuit dysfunction may be a more meaningful way to stratify patients than symptoms alone. In the future, brain scans could be used to match individual patients with the treatments most likely to work for them, based on their specific neural profile.

More broadly, this study highlights the power of a transdiagnostic, dimensional approach to understanding mental illness. By focusing on neural circuits that cut across traditional diagnostic boundaries, we may be able to develop a more precise, mechanistic framework for classifying these conditions.

“To our knowledge, this is the first time we’ve been able to demonstrate that depression can be explained by different disruptions to the functioning of the brain,” says the study’s senior author, Dr. Leanne Williams, a professor of psychiatry and behavioral sciences, and the director of Stanford Medicine’s Center for Precision Mental Health and Wellness. “In essence, it’s a demonstration of a personalized medicine approach for mental health based on objective measures of brain function.”

The 6 Biotypes Of Depression

  1. The Overwhelmed Ruminator: This biotype has overactive brain circuits involved in self-reflection, detecting important information, and controlling attention. People in this group tend to have slowed-down emotional reactions and attention, but respond well to talk therapy.
  2. The Distracted Impulsive: This biotype has underactive brain circuits that control attention. They tend to have trouble concentrating and controlling impulses, and don’t respond as well to talk therapy.
  3. The Sensitive Worrier: This biotype has overactive brain circuits that process sadness and reward. They tend to have trouble experiencing pleasure and positive emotions.
  4. The Overcontrolled Perfectionist: This biotype has overactive brain circuits involved in regulating behavior and thoughts. They tend to have excessive negative emotions and threat sensitivity, trouble with working memory, but respond well to certain antidepressant medications.
  5. The Disconnected Avoider: This biotype has reduced connectivity in emotion circuits when viewing threatening faces, and reduced activity in behavior control circuits. They tend to have less rumination and faster reaction times to sad faces.
  6. The Balanced Coper: This biotype doesn’t show any major overactivity or underactivity in the brain circuits studied compared to healthy people. Their symptoms are likely due to other factors not captured by this analysis.

Of course, much work remains to translate these findings into clinical practice. The biotypes need to be replicated in independent samples and their stability over time needs to be established. We need to develop more efficient and scalable ways to assess circuit function that could be deployed in routine care. And ultimately, we will need prospective clinical trials that assign patients to treatments based on their biotype.

Nevertheless, this study represents a crucial proof of concept. It brings us one step closer to a future where psychiatric diagnosis is based not just on symptoms, but on an integrated understanding of brain, behavior, and response to interventions. As we continue to map the neural roots of mental illness, studies like this light the way towards more personalized and effective care for the millions of individuals struggling with these conditions.

“To really move the field toward precision psychiatry, we need to identify treatments most likely to be effective for patients and get them on that treatment as soon as possible,” says Dr. Jun Ma, the Beth and George Vitoux Professor of Medicine at the University of Illinois Chicago. “Having information on their brain function, in particular the validated signatures we evaluated in this study, would help inform more precise treatment and prescriptions for individuals.”

Source: https://studyfinds.org/there-are-6-different-types-of-depression-brain-pattern-study-shows/

Super dads, super kids: Science uncovers how the magic of fatherly care boosts child development

(Photo by Ketut Subiyanto from Pexels)

The crucial early years of a child’s life lay the foundation for their lifelong growth and happiness. Spending quality time with parents during these formative stages can lead to substantial positive changes in children. With that in mind, researchers have found an important link between a father’s involvement and their child’s successful development, both mentally and physically. Simply put, being a “super dad” results in raising super kids.

However, in Japan, where this study took place, a historical gender-based division of labor has limited fathers’ participation in childcare-related activities, impacting the development of children. Traditionally, Japanese fathers, especially those in their 20s to 40s, have been expected to prioritize work commitments over family responsibilities.

This cultural norm has resulted in limited paternal engagement in childcare, regardless of individual inclinations. The increasing number of mothers entering full-time employment further exacerbates the issue, leaving a void in familial support for childcare. With the central government advocating for paternal involvement in response to low fertility rates, Japanese fathers are now urged to become co-caregivers, shifting away from their traditional role as primary breadwinners.

While recent trends have found a rise in paternal childcare involvement, the true impact of this active participation on a child’s developmental outcomes has remained largely unexplored. This groundbreaking study published in Pediatric Research, utilizing data from the largest birth cohort in Japan, set out to uncover the link between paternal engagement and infant developmental milestones. Led by Dr. Tsuguhiko Kato from the National Center for Child Health and Development and Doshisha University Center for Baby Science, the study delved into this critical aspect of modern parenting.

“In developed countries, the time fathers spend on childcare has increased steadily in recent decades. However, studies on the relationship between paternal care and child outcomes remain scarce. In this study, we examined the association between paternal involvement in childcare and children’s developmental outcomes,” explains Dr. Kato in a media release.

Leveraging data from the Japan Environment and Children’s Study, the research team assessed developmental milestones in 28,050 Japanese children. These children received paternal childcare at six months of age and were evaluated for various developmental markers at three years. Additionally, the study explored whether maternal parenting stress mediates these outcomes at 18 months.

“The prevalence of employed mothers has been on the rise in Japan. As a result, Japan is witnessing a paradigm shift in its parenting culture. Fathers are increasingly getting involved in childcare-related parental activities,” Dr. Kato says.

The study measured paternal childcare involvement through seven key questions, gauging tasks like feeding, diaper changes, bathing, playtime, outdoor activities, and dressing. Each father’s level of engagement was scored accordingly. The research findings were then correlated with the extent of developmental delay in infants, as evaluated using the Ages and Stages questionnaire.

Source: https://studyfinds.org/super-dads-super-kids/

Women are losing their X chromosomes — What’s causing it?

(Credit: ustas7777777/Shutterstock)

A groundbreaking new study has uncovered genetic factors that may help explain why some women experience a phenomenon called mosaic loss of the X chromosome (mLOX) as they age. With mLOX, some of a woman’s blood cells randomly lose one of their two X chromosomes over time. Concerningly, scientists believe this genetic oddity may lead to the development of several diseases, including cancer.

Researchers with the National Institutes of Health found that certain inherited gene variants make some women more susceptible to developing mLOX in the first place. Other genetic variations they identified seem to give a selective growth advantage to the blood cells that retain one X chromosome over the other after mLOX occurs.

Importantly, the study published in the journal Nature confirmed that women with mLOX have an elevated risk of developing blood cancers like leukemia and increased susceptibility to infections like pneumonia. This underscores the potential health implications of this chromosomal abnormality.

As some women age, their white blood cells can lose a copy of chromosome X. A new study sheds light on the potential causes and consequences of this phenomenon. (Credit: Created by Linda Wang with Biorender.com)

Paper Summary

Methodology

To uncover the genetic underpinnings of mLOX, the researchers conducted a massive analysis of nearly 900,000 women’s blood samples from eight different biobanks around the world. About 12% of these women showed signs of mLOX in their blood cells.

Results

By comparing the DNA of women with and without mLOX, the team pinpointed 56 common gene variants associated with developing the condition. Many of these genes are known to influence processes like abnormal cell division and cancer susceptibility. The researchers also found that rare mutations in a gene called FBXO10 could double a woman’s risk of mLOX. This gene likely plays an important role in the cellular processes that lead to randomly losing an X chromosome.

Source: https://studyfinds.org/women-losing-x-chromosomes/

Facially expressive people are more well-liked, socially successful

(Photo by airdone on Shutterstock)

Are you an open book, your face broadcasting every passing emotion, or more of a stoic poker face? Scientists at Nottingham Trent University say that wearing your heart on your sleeve (or rather, your face) could actually give you a significant social advantage. Their research shows that people who are more facially expressive are more well-liked by others, considered more agreeable and extraverted, and even fare better in negotiations if they have an amenable personality.

The study, led by Eithne Kavanagh, a research fellow at NTU’s School of Social Sciences, is the first large-scale systematic exploration of individual differences in facial expressivity in real-world social interactions. Across two studies involving over 1,300 participants, Kavanagh and her team found striking variations in how much people moved their faces during conversations. Importantly, this expressivity emerged as a stable individual trait. People displayed similar levels of facial expressiveness across different contexts, with different social partners, and even over time periods up to four months.

Connecting facial expressions with social success

So what drives these differences in facial communication styles and why do they matter? The researchers say that facial expressivity is linked to personality, with more agreeable, extraverted and neurotic individuals displaying more animated faces. But facial expressiveness also translated into concrete social benefits above and beyond the effects of personality.

In a negotiation task, more expressive individuals were more likely to secure a larger slice of a reward, but only if they were also agreeable. The researchers suggest that for agreeable folks, dynamic facial expressions may serve as a tool for building rapport and smoothing over conflicts.

Across the board, the results point to facial expressivity serving an “affiliative function,” or a social glue that fosters liking, cooperation and smoother interactions. Third-party observers and actual conversation partners consistently rated more expressive people as more likable.

Expressivity was also linked to being seen as more “readable,” suggesting that an animated face makes one’s intentions and mental states easier for others to decipher. Beyond frequency of facial movements, people who deployed facial expressions more strategically to suit social goals, such as looking friendly in a greeting, were also more well-liked.

“This is the first large scale study to examine facial expression in real-world interactions,” Kavanagh says in a media release. “Our evidence shows that facial expressivity is related to positive social outcomes. It suggests that more expressive people are more successful at attracting social partners and in building relationships. It also could be important in conflict resolution.”

Taking our faces at face value
The study, published in Scientific Reports, represents a major step forward in understanding the dynamics and social significance of facial expressions in everyday life. Moving beyond the traditional focus on static, stylized emotional expressions, it highlights facial expressivity as a consequential and stable individual difference.

The findings challenge the “poker face” intuition that a still, stoic demeanor is always most advantageous. Instead, they suggest that for most people, allowing one’s face to mirror inner states and intentions can invite warmer reactions and reap social rewards. The authors propose that human facial expressions evolved largely for affiliative functions, greasing the wheels of social cohesion and cooperation.

The results also underscore the importance of studying facial behavior situated in real-world interactions to unveil its true colors and consequences. Emergent technologies like automated facial coding now make it feasible to track the face’s mercurial movements in the wild, opening up new horizons for unpacking how this ancient communication channel shapes human social life.

Far from mere emotional readouts, our facial expressions appear to be powerful tools in the quest for interpersonal connection and social success. As the researchers conclude, “Being facially expressive is socially advantageous.” So the next time you catch yourself furrowing your brow or flashing a smile, know that your face just might be working overtime on your behalf to help you get ahead.

Source: https://studyfinds.org/facially-expressive-people-well-liked-socially-successful/

Can indie games inspire a creative boom from Indian developers?

Visai Games’ Venba won a Bafta Games Award this year

India might not be the first country that springs to mind when someone mentions video games, but it’s one of the fastest-growing markets in the world.
Analysts believe there could be more than half a billion players there by the end of this year.
Most of them are playing on mobile phones and tablets, and fans will tell you the industry is mostly known for fantasy sports games that let you assemble imaginary teams based on real players.
Despite concerns over gambling and possible addiction, they’re big business.
The country’s three largest video game startups – Game 24X7, Dream11 and Mobile Premier League – all provide some kind of fantasy sport experience and are valued at over $1bn.
But there’s hope that a crop of story-driven games making a splash worldwide could inspire a new wave of creativity and investment.
During the recent Summer Game Fest (SGF) – an annual showcase of new and upcoming titles held in Los Angeles and watched by millions – audiences saw previews of a number of story-rich titles from South Asian teams.

Detective Dotson will also have a companion TV series produced

One of those was Detective Dotson by Masala Games, based in Gujarat, about a failed Bollywood actor turned detective.
Industry veteran Shalin Shodhan is behind the game and tells BBC Asian Network this focus on unique stories is “bucking the trend” in India’s games industry.
He wants video games to become an “interactive cultural export” but says he’s found creating new intellectual property difficult.
“There really isn’t anything in the marketplace to make stories about India,” he says, despite the strength of some of the country’s other cultural industries.
“If you think about how much intellectual property there is in film in India, it is really surprising to think nothing indigenous exists as an original entertainment property in games,” he says.
“It’s almost like the Indian audience accepted that we’re just going to play games from outside.”
Another game shown during SGF was The Palace on the Hill – a “slice-of-life” farming sim set in rural India.
Mala Sen, from developer Niku Games, says games like this and Detective Dotson are what “India needed”.
“We know that there are a lot of people in India who want games where characters and setting are relatable to them,” she says.

Games developed by South Asian teams based in western countries have been finding critical praise and commercial success in recent years.

Venba, a cooking sim that told the story of a migrant family reconnecting with their heritage through food, became the first game of its kind to take home a Bafta Games Award this year.

Canada-based Visai Games, which developed the title, was revealed during SGF as one of the first beneficiaries of a new fund set up by Among Us developer Innersloth to boost fellow indie developers.

That will go towards their new, unnamed project based on ancient Tamil legends.

Another title awarded funding by the scheme was Project Dosa, from developer Outerloop, which sees players pilot giant robots, cook Indian food and fight lawyers.

Its previous game, Thirsty Suitors, was also highly praised and nominated for a Bafta award this year.

When games such as these resonate with players worldwide, it helps shift perceptions across the wider industry, says Mumbai-based Indrani Ganguly, of Duronto Games.

“Finally, people are starting to see we’re not just a place for outsource work,” she says.

“We’re moving from India being a technical space to more of a creative hub.

“I’m not 100% seeing a shift but that’s more of a mindset thing.

“People who are able to make these kinds of games have always existed but now there is funding and resource opportunities available to be able to act on these creative visions.”

Earth’s inner core rotation slows down and reverses direction. What does this mean for the planet?

(Image by DestinaDesign on Shutterstock)

Earth’s inner core, a solid iron sphere nestled deep within our planet, has slowed its rotation, according to new research. Scientists from the University of Southern California say their discovery challenges previous notions about the inner core’s behavior and raises intriguing questions about its influence on Earth’s dynamics.

The inner core, a mysterious realm located nearly 3,000 miles beneath our feet, has long been known to rotate independently of the Earth’s surface. Scientists have spent decades studying this phenomenon, believing it to play a crucial role in generating our planet’s magnetic field and shaping the convection patterns in the liquid outer core. Until now, it was widely accepted that the inner core was gradually spinning faster than the rest of the Earth, a process known as super-rotation. However, this latest study, published in the journal Nature, reveals a surprising twist in this narrative.

“When I first saw the seismograms that hinted at this change, I was stumped,” says John Vidale, Dean’s Professor of Earth Sciences at the USC Dornsife College of Letters, Arts and Sciences, in a statement. “But when we found two dozen more observations signaling the same pattern, the result was inescapable. The inner core had slowed down for the first time in many decades. Other scientists have recently argued for similar and different models, but our latest study provides the most convincing resolution.”

Slowing Spin, Reversing Rhythm
By analyzing seismic waves generated by repeating earthquakes in the South Sandwich Islands from 1991 to 2023, the researchers discovered that the inner core’s rotation had not only slowed down but had actually reversed direction. The team focused on a specific type of seismic wave called PKIKP, which traverses the inner core and is recorded by seismic arrays in northern North America. By comparing the waveforms of these waves from 143 pairs of repeating earthquakes, they noticed a peculiar pattern.

Many of the earthquake pairs exhibited seismic waveforms that changed over time, but remarkably, they later reverted to match their earlier counterparts. This observation suggests that the inner core, after a period of super-rotation from 2003 to 2008, had begun to sub-rotate, or spin more slowly than the Earth’s surface, essentially retracing its previous path. The researchers found that from 2008 to 2023, the inner core sub-rotated two to three times more slowly than its prior super-rotation.
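The key trick is waveform similarity: if two earthquakes rupture the same patch of fault, any difference in their inner-core-traversing waves should reflect a change inside the Earth rather than at the source. The Python sketch below illustrates that logic with synthetic pulses and a simple correlation coefficient; the study’s actual waveform processing is far more sophisticated, and every number here is made up.

```python
# Toy illustration of comparing PKIKP-style waveforms from repeating
# earthquakes. Matching waveforms imply the inner core occupied the same
# position, relative to the mantle, at both event times.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)

def pkikp_like(distortion: float) -> np.ndarray:
    """Toy pulse; `distortion` mimics a changed path through the inner core."""
    pulse = np.exp(-200 * (t - 0.5) ** 2) * np.sin(60 * t)
    coda = distortion * np.exp(-200 * (t - 0.6) ** 2) * np.sin(80 * t)
    return pulse + coda + 0.01 * rng.standard_normal(t.size)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Correlation coefficient between two aligned waveforms (1.0 = identical)."""
    return float(np.corrcoef(a, b)[0, 1])

earlier = pkikp_like(0.0)  # recorded before the apparent reversal
midway = pkikp_like(0.5)   # shape altered while the core was displaced
recent = pkikp_like(0.0)   # shape reverts as the core retraces its path

print("earlier vs midway:", round(similarity(earlier, midway), 3))
print("earlier vs recent:", round(similarity(earlier, recent), 3))
```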

The inner core began to decrease its speed around 2010, moving slower than the Earth’s surface. (Credit: USC Graphic/Edward Sotelo)

The study’s findings paint a captivating picture of the inner core’s rotational dynamics. The matching waveforms observed in numerous earthquake pairs indicate moments when the inner core returned to positions it had occupied in the past, relative to the mantle. This pattern, combined with insights from previous studies, reveals that the inner core’s rotation is far more complex than a simple, steady super-rotation.

The researchers discovered that the inner core’s super-rotation from 2003 to 2008 was faster than its subsequent sub-rotation, suggesting an asymmetry in its behavior. This difference in rotational rates implies that the interactions between the inner core, outer core, and mantle are more intricate than previously thought.

Limitations: Pieces Of The Core Puzzle
While the study offers compelling evidence for the inner core’s slowing and reversing rotation, it does have limitations. The spatial coverage of the seismic data is relatively sparse, since the analysis relies on repeating earthquakes from a single source region recorded by arrays in northern North America. Furthermore, the models used to interpret the data, despite their sophistication, are still simplified representations of the complex dynamics at play.

The authors emphasize the need for additional high-resolution data from a broader range of locations to strengthen their findings. They also call for ongoing refinement of these models to better capture the intricacies of the inner core’s behavior and its interactions with the outer core and mantle.

Source: https://studyfinds.org/earth-inner-core-rotation-slows/

Mars missions likely impossible for astronauts without kidney dialysis

Photo by Mike Kiev from Unsplash

New study shows damage from cosmic radiation, microgravity could be ‘catastrophic’ for human body
LONDON — As humanity sets its sights on deep space missions to the Moon, Mars, and beyond, a team of international researchers has uncovered a potential problem lurking in the shadows of these ambitious plans: spaceflight-induced kidney damage.

The findings, in a nutshell
In a new study that integrated a dizzying array of cutting-edge scientific techniques, researchers from University College London found that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts.

This sobering discovery, published in Nature Communications, not only highlights the immense challenges of long-duration space travel but also underscores the urgent need for effective countermeasures to protect the health of future space explorers.

“If we don’t develop new ways to protect the kidneys, I’d say that while an astronaut could make it to Mars they might need dialysis on the way back,” says the study’s first author, Dr. Keith Siew, from the London Tubular Centre, based at the UCL Department of Renal Medicine, in a media release. “We know that the kidneys are late to show signs of radiation damage; by the time this becomes apparent it’s probably too late to prevent failure, which would be catastrophic for the mission’s chances of success.”

New research shows that exposure to the unique stressors of spaceflight — such as microgravity and galactic cosmic radiation — can lead to serious, potentially irreversible kidney problems in astronauts. (© alonesdj – stock.adobe.com)

Methodology

To unravel the complex effects of spaceflight on the kidneys, the researchers analyzed a treasure trove of biological samples and data from 11 different mouse missions, five human spaceflights, one simulated microgravity experiment in rats, and four studies exposing mice to simulated galactic cosmic radiation on Earth.

The team left no stone unturned, employing a comprehensive “pan-omics” approach that included epigenomics (studying changes in gene regulation), transcriptomics (examining gene expression), proteomics (analyzing protein levels), epiproteomics (investigating protein modifications), metabolomics (measuring metabolite profiles), and metagenomics (exploring the microbiome). They also pored over clinical chemistry data (electrolytes, hormones, biochemical markers), assessed kidney function, and scrutinized kidney structure and morphology using advanced histology, 3D imaging, and in situ hybridization techniques.

By integrating and cross-referencing these diverse datasets, the researchers were able to paint a remarkably detailed and coherent picture of how spaceflight stressors impact the kidneys at multiple biological levels, from individual molecules to whole organ structure and function.

Results
The study’s findings are as startling as they are sobering. Exposure to microgravity and simulated cosmic radiation induced a constellation of detrimental changes in the kidneys of both humans and animals.

First, the researchers discovered that spaceflight alters the phosphorylation state of key kidney transport proteins, suggesting that the increased kidney stone risk in astronauts is not solely a secondary consequence of bone demineralization but also a direct result of impaired kidney function.

Second, they found evidence of extensive remodeling of the nephron – the basic structural and functional unit of the kidney. This included the expansion of certain tubule segments but an overall loss of tubule density, hinting at a maladaptive response to the unique stressors of spaceflight.

Perhaps most alarmingly, exposing mice to a simulated galactic cosmic radiation dose equivalent to a round trip to Mars led to overt signs of kidney damage and dysfunction, including vascular injury, tubular damage, and impaired filtration and reabsorption.

Piecing together the diverse “omics” datasets, the researchers identified several convergent molecular pathways and biological processes that were consistently disrupted by spaceflight, causing mitochondrial dysfunction, oxidative stress, inflammation, fibrosis, and senescence (cells permanently losing the ability to divide) — all hallmarks of chronic kidney disease.

Source: https://studyfinds.org/mars-missions-catastrophic-astronauts-kidneys/

Being more optimistic can keep you from procrastinating

(© chinnarach – stock.adobe.com)

We’ve all been there — a big task is looming over our heads, but we choose to put it off for another day. Procrastination is so common that researchers have spent years trying to understand what drives some people to chronically postpone important chores until the last possible moment. Now, researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future.

The findings, in a nutshell
Researchers found evidence that having a pessimistic view about how stressful the future will be could increase the likelihood of falling into a pattern of severe procrastination. Moreover, the study published in Scientific Reports reveals that having an optimistic view of the future wards off the urge to procrastinate.

“Our research showed that optimistic people — those who believe that stress does not increase as we move into the future — are less likely to have severe procrastination habits,” explains Saya Kashiwakura from the Graduate School of Arts and Sciences at the University of Tokyo, in a media release. “This finding helped me adopt a more light-hearted perspective on the future, leading to a more direct view and reduced procrastination.”

Researchers from the University of Tokyo have found a fascinating factor that may be the cause of procrastination: people’s view of the future. (Credit: Ground Picture/Shutterstock)

Methodology
To examine procrastination through the lens of people’s perspectives on the past, present, and future, the researchers introduced new measures they dubbed the “chronological stress view” and “chronological well-being view.” Study participants were asked to rate their levels of stress and well-being across nine different timeframes: the past 10 years, past year, past month, yesterday, now, tomorrow, next month, next year, and the next 10 years.

The researchers then used clustering analysis to group participants based on the patterns in their responses over time – for instance, whether their stress increased, decreased or stayed flat as they projected into the future. Participants were also scored on a procrastination scale, allowing the researchers to investigate whether certain patterns of future perspective were associated with more or less severe procrastination tendencies.
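As a rough illustration of what a clustering analysis like this involves, the Python sketch below groups synthetic participants by the shape of their nine stress ratings using k-means, then compares procrastination scores across the resulting clusters. The algorithm choice, the two-cluster setup, and all the numbers are assumptions for demonstration; the study itself identified four clusters.

```python
# Minimal sketch: cluster participants by their stress-over-time ratings,
# then compare procrastination across clusters. Synthetic data throughout.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
timepoints = 9  # past 10 years ... now ... next 10 years

# Two synthetic trajectory shapes (the study found four).
descending = np.linspace(8, 3, timepoints) + rng.normal(0, 1, (80, timepoints))
ascending = np.linspace(3, 8, timepoints) + rng.normal(0, 1, (80, timepoints))
ratings = np.vstack([descending, ascending])

# Group participants by the pattern of their ratings over time.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ratings)

# Compare procrastination scores across clusters (scores are synthetic too).
procrastination = rng.normal(3.0, 1.0, ratings.shape[0])
for k in range(2):
    mean_score = procrastination[labels == k].mean()
    print(f"cluster {k}: mean procrastination score {mean_score:.2f}")
```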

Results: Procrastination is All About Mindset
When examining the chronological stress view patterns, the analysis revealed four distinct clusters: “descending” (stress decreases over time), “ascending” (stress increases), “V-shaped” (stress is lowest in the present), and a “skewed mountain” shape where stress peaked in the past and declined toward the future.

Intriguingly, the researchers found a significant relationship between cluster membership and level of procrastination. The percentage of severe procrastinators was significantly lower in the “descending” cluster – those who believed their stress levels would decline as they projected into the future.

Source: https://studyfinds.org/being-more-optimistic-can-keep-you-from-procrastinating/

Who’s most vulnerable to scams? Psychologists reveal who criminals target and why

(Credit: fizkes/Shutterstock)

About 1 in 6 Americans are age 65 or older, and that percentage is projected to grow. Older adults often hold positions of power, have retirement savings accumulated over the course of their lifetimes, and make important financial and health-related decisions – all of which makes them attractive targets for financial exploitation.

In 2021, there were more than 90,000 older victims of fraud, according to the FBI. These cases resulted in US$1.7 billion in losses, a 74% increase compared with 2020. Even so, that may be a significant undercount since embarrassment or lack of awareness keeps some victims from reporting.

Financial exploitation represents one of the most common forms of elder abuse. Perpetrators are often individuals in the victims’ inner social circles – family members, caregivers, or friends – but can also be strangers.

When older adults experience financial fraud, they typically lose more money than younger victims. Those losses can have devastating consequences, especially since older adults have limited time to recoup – dramatically reducing their independence, health, and well-being.

But older adults have been largely neglected in research on this burgeoning type of crime. We are psychologists who study social cognition and decision-making, and our research lab at the University of Florida is aimed at understanding the factors that shape vulnerability to deception in adulthood and aging.

Defining vulnerability
Financial exploitation involves a variety of exploitative tactics, such as coercion, manipulation, undue influence, and, frequently, some sort of deception.

The majority of current research focuses on people’s ability to distinguish between truth and lies during interpersonal communication. However, deception occurs in many contexts – increasingly, over the internet.

Our lab conducts laboratory experiments and real-world studies to measure susceptibility under various conditions: investment games, lie/truth scenarios, phishing emails, text messages, fake news and deepfakes – fabricated videos or images that are created by artificial intelligence technology.

To study how people respond to deception, we use measures like surveys, brain imaging, behavior, eye movement, and heart rate. We also collect health-related biomarkers, such as being a carrier of gene variants that increase risk for Alzheimer’s disease, to identify individuals with particular vulnerability.

And our work shows that an older adult’s ability to detect deception is not just about their individual characteristics. It also depends on how they are being targeted.

Individual risk factors
Better cognition, social and emotional capacities, and brain health are all associated with less susceptibility to deception.

Cognitive functions, such as how quickly our brain processes information and how well we remember it, decline with age and impact decision-making. For example, among people around 70 years of age or older, declines in analytical thinking are associated with reduced ability to detect false news stories.

Additionally, low memory function in aging is associated with greater susceptibility to email phishing. Further, according to recent research, this correlation is specifically pronounced among older adults who carry a gene variant that is a genetic risk factor for developing Alzheimer’s disease later in life. Indeed, some research suggests that greater financial exploitability may serve as an early marker of disease-related cognitive decline.

Social and emotional influences are also crucial. Negative mood can enhance somebody’s ability to detect lies, while positive mood in very old age can impair a person’s ability to detect fake news.

Lack of support and loneliness exacerbate susceptibility to deception. Social isolation during the COVID-19 pandemic has led to increased reliance on online platforms, and older adults with lower digital literacy are more vulnerable to fraudulent emails and robocalls.

Isolation during the COVID-19 pandemic has increased aging individuals’ vulnerability to online scams. (© Andrey Popov – stock.adobe.com)

Finally, an individual’s brain and body responses play a critical role in susceptibility to deception. One important factor is interoceptive awareness: the ability to accurately read our own body’s signals, like a “gut feeling.” This awareness is correlated with better lie detection in older adults.

In one of the first studies on this question, financially exploited older adults had a significantly smaller insula – a brain region key to integrating bodily signals with environmental cues – than older adults who had been exposed to the same threat but avoided it. Reduced insula activity is also related to greater difficulty picking up on cues that make someone appear less trustworthy.

Types of effective fraud
Not all deception is equally effective on everyone.

Our findings show that email phishing that relies on reciprocation – people’s tendency to repay what another person has provided them – was more effective on older adults. Younger adults, on the other hand, were more likely to fall for phishing emails that employed scarcity: people’s tendency to perceive an opportunity as more valuable if they are told its availability is limited. For example, an email might alert you that a coin collection from the 1950s has become available for a special reduced price if purchased within the next 24 hours.

There is also evidence that as we age, we have greater difficulty detecting the “wolf in sheep’s clothing”: someone who appears trustworthy, but is not acting in a trustworthy way. In a card-based gambling game, we found that compared with their younger counterparts, older adults are more likely to select decks presented with trustworthy-looking faces, even though those decks consistently resulted in negative payouts. Even after learning about untrustworthy behavior, older adults showed greater difficulty overcoming their initial impressions.

Reducing vulnerability
Identifying who is especially at risk for financial exploitation in aging is crucial for preventing victimization.

We believe interventions should be tailored to the individual rather than taking a one-size-fits-all approach. For example, perhaps machine learning algorithms could someday determine the most dangerous types of deceptive messages that certain groups encounter – such as in text messages, emails, or social media platforms – and provide on-the-spot warnings. Black and Hispanic consumers are more likely to be victimized, so there is also a dire need for interventions that resonate with their communities.

Prevention efforts would benefit from taking a holistic approach to help older adults reduce their vulnerability to scams. Training in financial, health, and digital literacy are important, but so are programs to address loneliness.

People of all ages need to keep these lessons in mind when interacting with online content or strangers – but not only then. Unfortunately, financial exploitation often comes from individuals close to the victim.

Source: https://studyfinds.org/whos-most-vulnerable-to-scams/

Mushroom-infused ‘microdosing’ chocolate bars are sending people to the hospital, prompting investigation: FDA

The Food and Drug Administration (FDA) is warning consumers about a mushroom-infused chocolate bar that has reportedly sent some people to the hospital.

The FDA released an advisory message about Diamond Shruumz “microdosing” chocolate bars on June 7. The chocolate bars contain a “proprietary nootropics blend” that is said to give a “relaxed euphoric experience without psilocybin,” according to its website.

“The FDA and CDC, in collaboration with America’s Poison Centers and state and local partners, are investigating a series of illnesses associated with eating Diamond Shruumz-brand Microdosing Chocolate Bars,” the FDA’s website reads.

“Do not eat, sell, or serve Diamond Shruumz-Brand Microdosing Chocolate Bars,” the site warns. “FDA’s investigation is ongoing.”

The FDA is warning consumers against Diamond Shruumz chocolate bars. (FDA | iStock)

“Microdosing” is a practice where one takes a very small amount of psychedelic drugs with the intent of increasing productivity, inspiring creativity and boosting mood. According to Diamond Shruumz’s website, the brand said its products help achieve “a subtle, sumptuous experience and a more creative state of mind.”

“We’re talkin’ confections with a kick,” the brand said. “So if you like mushroom chocolate bars and want to mingle with some microdosing, check us out. We just might change how you see the world.”

But government officials warn that the products have caused seizures in some consumers and vomiting in others.

“People who became ill after eating Diamond Shruumz-brand Microdosing Chocolate Bars reported a variety of severe symptoms including seizures, central nervous system depression (loss of consciousness, confusion, sleepiness), agitation, abnormal heart rates, hyper/hypotension, nausea, and vomiting,” the FDA reported.

Six people reportedly experienced such severe reactions that they had to be hospitalized.

At least eight people have suffered a variety of medical symptoms from the chocolates, including nausea. (iStock)

“All eight people have reported seeking medical care; six have been hospitalized,” the FDA’s press release said. “No deaths have been reported.”

Diamond Shruumz maintains on its website that its products contain no psychedelic substances. Although the chocolate is marketed as promising a psilocybin-like experience, there is no psilocybin in it.

“There is no presence of psilocybin, amanita or any scheduled drugs, ensuring a safe and enjoyable experience,” the website claims. “Rest assured, our treats are not only free from psychedelic substances but our carefully crafted ingredients still offer an experience.”

“This allows you to indulge in a uniquely crafted blend designed for your pleasure and peace of mind.”

Officials warn consumers to keep the products out of the reach of minors, as kids and teens may be tempted to eat the chocolate bars.

Source: https://www.foxnews.com/health/mushroom-infused-microdosing-chocolate-bars-sending-people-hospital-prompting-investigation-fda

Elephants give each other ‘names,’ just like humans

(Photo by Unsplash+ in collaboration with Getty Images)

They say elephants never forget a face, and now, as it turns out, they seem to remember names too. That is, the “names” they have for one another. Yes, believe it or not, a new study shows that elephants actually have the rare ability to identify one another through unique calls, essentially giving one another human-like names when they converse.

Scientists from Colorado State University, along with a team of researchers from Save the Elephants and ElephantVoices, used machine learning to make this fascinating discovery. Their work suggests that elephants possess a level of communication and abstract thought that is more similar to ours than previously believed.

In the study, published in Nature Ecology and Evolution, the researchers analyzed hundreds of recorded elephant calls from Kenya’s Samburu National Reserve and Amboseli National Park. By training a sophisticated model to identify the intended recipient of each call based on its unique acoustic features, they could confirm that elephant calls contain a name-like component, a behavior they had suspected based on observation.

“Dolphins and parrots call one another by ‘name’ by imitating the signature call of the addressee. By contrast, our data suggest that elephants do not rely on imitation of the receiver’s calls to address one another, which is more similar to the way in which human names work,” says lead author Michael Pardo, who conducted the study as a postdoctoral researcher at CSU and Save the Elephants, in a statement.

Once the team pinpointed the specific calls to the corresponding elephants, the scientists played back the recordings and observed their reactions. When the calls were addressed to them, the elephants responded positively by calling back or approaching the speaker. In contrast, calls meant for other elephants elicited less enthusiasm, demonstrating that the elephants recognized their own “names.”

Two juvenile elephants greet each other in Samburu National Reserve in Kenya. (Credit: George Wittemyer)

Elephants’ Brains Even More Complex Than Realized

The ability to learn and produce new sounds, a prerequisite for naming individuals, is uncommon in the animal kingdom. This form of arbitrary communication, where a sound represents an idea without imitating it, is considered a higher-level cognitive skill that greatly expands an animal’s capacity to communicate.

Co-author George Wittemyer, a professor at CSU’s Warner College of Natural Resources and chairman of the scientific board of Save the Elephants, elaborated on the implications of this finding: “If all we could do was make noises that sounded like what we were talking about, it would vastly limit our ability to communicate.” He adds that the use of arbitrary vocal labels suggests that elephants may be capable of abstract thought.

To arrive at these conclusions, the researchers embarked on a four-year study that included 14 months of intensive fieldwork in Kenya. They followed elephants in vehicles, recording their vocalizations and capturing approximately 470 distinct calls from 101 unique callers and 117 unique receivers.

Kurt Fristrup, a research scientist in CSU’s Walter Scott, Jr. College of Engineering, developed a novel signal processing technique to detect subtle differences in call structure. Together with Pardo, he trained a machine-learning model to correctly identify the intended recipient of each call based solely on its acoustic features. This innovative approach allowed the researchers to uncover the hidden “names” within the elephant calls.
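The article doesn’t specify the model, so the Python sketch below uses a generic classifier to convey the underlying logic: if a model can predict a call’s intended recipient from acoustic features better than chance, the calls must carry recipient-specific, name-like information. The features, the model choice, and the numbers are all illustrative stand-ins built on synthetic data.

```python
# Hedged sketch of the recipient-prediction idea on synthetic "calls".
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_calls, n_features, n_receivers = 470, 12, 20

# Pretend each receiver's "name" nudges the call's acoustic features slightly.
receiver = rng.integers(0, n_receivers, n_calls)
signatures = rng.normal(0, 1, (n_receivers, n_features))
features = signatures[receiver] + rng.normal(0, 1.5, (n_calls, n_features))

# If acoustics predict the recipient better than chance, the calls carry
# recipient-specific (name-like) information.
model = RandomForestClassifier(n_estimators=300, random_state=0)
accuracy = cross_val_score(model, features, receiver, cv=5).mean()
print(f"recipient accuracy: {accuracy:.1%} vs. chance {1 / n_receivers:.1%}")
```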

Source: https://studyfinds.org/elephants-give-each-other-names/

Baby talk explained! All those sounds mean more than you think

Mother and baby lying down together (Photo by Ana Tablas on Unsplash)

From gurgling “goos” to squealing “wheees!”, the delightful symphony of sounds emanating from a baby’s crib may seem like charming gibberish to the untrained ear. However, a new study suggests that these adorable vocalizations are far more than just random noise — they’re actually a crucial stepping stone on the path to language development.

The research, published in PLOS One, took a deep dive into the vocal patterns of 130 typically developing infants over the course of their first year of life. Their discoveries challenge long-held assumptions about how babies learn to communicate.

Traditionally, many experts believed that infants start out making haphazard sounds, gradually progressing to more structured “baby talk” as they listen to and imitate the adults around them. This new study paints a different picture, one where babies are actively exploring and practicing different categories of sounds in what might be thought of as a precursor to speech.

Think of it like a baby’s very first music lesson. Just as a budding pianist might spend time practicing scales and chords, it seems infants devote chunks of their day to making specific types of sounds, almost as if they’re trying to perfect their technique.

The researchers reached this conclusion after sifting through an enormous trove of audio data captured by small recording devices worn by the babies as they went about their daily lives. In total, they analyzed over 1,100 daylong recordings, adding up to nearly 14,500 hours – or about 1.6 years – of audio.

Using special software to isolate the infant vocalizations, the research team categorized the sounds into three main types: squeals (high-pitched, often excited-sounding noises), growls (low-pitched, often “rumbly” sounds), and vowel-like utterances (which the researchers dubbed “vocants”).

Next, they zoomed in on five-minute segments from each recording, hunting for patterns in how these sound categories were distributed. The results were striking: 40% of the recordings showed significant “clustering” of squeals, with a similar percentage showing clustering of growls. In other words, the babies weren’t randomly mixing their sounds, but rather, they seemed to focus on one type at a time, practicing it intensively.
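One simple way to formalize “clustering” of a sound type is to ask whether the counts per segment are more uneven than chance would produce. The Python sketch below runs such a permutation test on synthetic data; it is an illustrative stand-in, not the statistic the researchers actually used, and all counts are invented.

```python
# Permutation test: are squeals more concentrated in some segments than a
# uniform scattering would predict? Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_segments, n_squeals = 100, 400

# Synthetic day of recordings: half the squeals land in ten "practice"
# segments, the rest are spread across the whole day.
segment_of = np.where(
    rng.random(n_squeals) < 0.5,
    rng.integers(0, 10, n_squeals),          # concentrated bursts
    rng.integers(0, n_segments, n_squeals),  # uniform background
)

def concentration(seg_ids: np.ndarray) -> float:
    """Variance of per-segment counts; higher means more clustered."""
    return float(np.bincount(seg_ids, minlength=n_segments).var())

observed = concentration(segment_of)
null = [concentration(rng.integers(0, n_segments, n_squeals))
        for _ in range(2_000)]
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print(f"observed concentration {observed:.1f}, permutation p = {p_value:.4f}")
```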

Source: https://studyfinds.org/baby-talk-explained/
