Australia has added message board Reddit and livestreaming service Kick to its list of social media platforms that must ban children younger than 16 from holding accounts.
The platforms join Facebook, Instagram, Snapchat, Threads, TikTok, X and YouTube in facing a world-first legal obligation to shut the accounts of younger Australian children from Dec. 10, Communications Minister Anika Wells said on Wednesday.
Platforms that fail to take reasonable steps to exclude children younger than 16 could be punished with a fine of up to 50 million Australian dollars ($33 million).
“We have met with several of the social media platforms in the past month so that they understand there is no excuse for failure to implement this law,” Wells told reporters in Canberra.
“Online platforms use technology to target children with chilling control. We are merely asking that they use that same technology to keep children safe online,” Wells added.
Australia’s eSafety Commissioner Julie Inman Grant, who will enforce the social media ban, said the list of age-restricted platforms would evolve with new technologies.
The nine platforms currently age-restricted meet the key requirement that their “sole or significant purpose is to enable online social interaction,” a government statement said.
Inman Grant said she would work with academics to evaluate the impacts of the ban, including whether children sleep or interact more or become more physically active.
“We’ll also look for unintended consequences and we’ll be gathering evidence” so that others could learn from Australia’s achievements, Inman Grant said.
Australia’s move is being closely watched by countries that share concerns about social media impacts on young children.
European Commission President Ursula von der Leyen told a United Nations forum in New York in September that she was “inspired” by Australia’s “common sense” move to legislate the age restriction.
Apple is reportedly working on a new low-cost MacBook that will likely be powered by an iPhone chipset. The Cupertino giant is expected to target students who would otherwise buy Chromebooks or Windows PCs.
Apple has reportedly been working on an affordable MacBook for some time. The Cupertino giant has seen a rise in MacBook sales recently, but its devices remain at the higher end, including the base M4 MacBook Air. Now, the company plans to fill the gap in its lineup with a low-cost MacBook targeting students and users on a budget.
According to Bloomberg, Apple will use less-advanced components to keep the price of the affordable MacBook under $1,000 (roughly Rs 88,740) in the US. The report states that the device will be powered by an iPhone chipset. Previously, it was believed that the low-cost MacBook would use the A18 Pro chipset from the iPhone 16 Pro.
On the surface, the iPhone chip may raise doubts over performance. However, the report suggests that it would perform better than the M1 chipset.
Additionally, the new MacBook is expected to pack a lower-end LCD compared to the 13.6-inch unit in the MacBook Air.
Apple wants to target students with new MacBook
Apple is planning to target students with its low-cost MacBook. The company wants to cater to users who would normally buy a Windows laptop or a Chromebook priced at around $600 (roughly Rs 53,000). As per IDC, Apple held fourth place in global PC market share in the third quarter of 2025 with 9 per cent, trailing Lenovo, HP and Dell.
This image shows a cloud of gas and dust, shaped like a cosmic bat. The image was obtained mostly in visible light with the VLT Survey Telescope (VST), hosted at ESO’s Paranal Observatory in Chile. The intense red glow comes from hydrogen atoms ionised by the intense radiation of young stars within the cloud. The image also includes additional infrared data captured by ESO’s Visible and Infrared Survey Telescope for Astronomy (VISTA), also at Paranal. The most prominent clouds here are RCW 94, which represents the right wing of the bat, and RCW 95, which forms the body, while the other parts of the bat have no official designation. (Credit: ESO/VPHAS+ team/VVV team)
Giant Cosmic ‘Bat’ Spotted On Halloween Shows Stars Being Born In Real Time
A massive bat-shaped cloud appeared in telescope images released on Halloween, spanning an area four times the size of a full moon. The European Southern Observatory’s VLT Survey Telescope in Chile caught this eerie celestial creature just in time for the spookiest night of the year, revealing a stellar nursery where infant stars are actively forming 10,000 light-years from Earth.
Flying between the southern constellations of Circinus and Norma, this space bat looks like it’s hunting the glowing spot hovering above it. What looks like a Halloween decoration is actually a star factory hard at work. Baby stars are being born right now inside those bat-shaped clouds of gas and dust.
Newborn Stars Make The Bat Glow Red
The bat gets its haunting red appearance from infant stars energizing the hydrogen gas around them. These young stars pump out enough energy to make the surrounding gas shine in an intense crimson shade, painting the cosmic bat’s wings and body across the night sky.
Dark filaments streak through the nebula like bones in a skeleton. These structures are colder, denser pockets of gas and dust that block visible light from stars behind them. Dust grains within these dark patches act like cosmic curtains, preventing starlight from reaching observers on Earth and creating the skeletal appearance.
Combining Light Wavelengths Reveals Hidden Details
Astronomers have catalogued the brightest parts of this stellar nursery. RCW 94 forms the bat’s right wing, while RCW 95 creates the body. Other portions remain officially unnamed.
Getting this shot required combining different types of light, akin to using different filters on a camera. Most of the bat’s shape and that red color came from photographing visible light, the kind human eyes can see. But astronomers also added infrared images from ESO’s VISTA telescope, which reveal the densest parts hidden inside the clouds. Putting both together shows details that would stay invisible otherwise.
A 268-Megapixel Camera Captured the Scene
OmegaCAM, a 268-megapixel camera mounted on the VST, made capturing this enormous creature possible. The telescope’s wide field of view allowed astronomers to frame the entire bat in a single portrait. For comparison, most smartphone cameras max out at around 12 to 50 megapixels.
Both sets of telescope data are available to the public. Anyone curious enough can dig through the archives and hunt for more cosmic creatures hiding in space. Thousands of similar photos sit waiting for someone to discover them.
The telescope sits at the Paranal Observatory in Chile’s Atacama Desert. Italy’s National Institute for Astrophysics owns and runs it, though it is hosted at a European facility designed for scanning large swathes of the southern sky.
While the nebula has existed for millennia, releasing this particular image on October 31st gives stargazers a cosmic treat to match the holiday’s spooky spirit. The bat continues its eternal flight through space, birthing new stars as it soars between constellations.
OpenAI and Microsoft logos are seen in this illustration taken on September 12, 2025. REUTERS/Dado Ruvic/Illustration
A momentous week in the technology sector made it clear there is no sign the boom in building artificial intelligence infrastructure is slowing — despite the bubble talk.
Nvidia (NVDA.O), whose processors are the AI revolution’s backbone, became the first company to surpass $5 trillion in market value. Microsoft (MSFT.O) and OpenAI inked a deal enhancing the ChatGPT maker’s fundraising ability, and OpenAI promptly started laying groundwork for an initial public offering that could value the company at $1 trillion.
Amazon (AMZN.O) said it would cut 14,000 corporate jobs, just days before its cloud unit posted its strongest growth in nearly three years.
These developments, along with numerous earnings calls and interviews with executives, make clear that AI has cemented itself as the single biggest catalyst for global corporate investment and the engine of the market rally, even as some question the sustainability of both.
SPENDING WITHOUT ENDING
Soaring revenue at Microsoft, Alphabet (GOOGL.O) and other technology giants was expected. But more than 100 non-tech global companies mentioned data centers on quarterly calls this week, including Honeywell, turbine maker GE Vernova (GEV.N) and heavy equipment maker Caterpillar (CAT.N).
Sales in Caterpillar’s division that supplies data centers jumped 31% in its most recent quarter. “We’re definitely really excited about the prime power opportunity with data centers,” CEO Joseph Creed said this week.
“The AI supply chain now spans power, industrials and cooling technology, and investors are looking at the entire ecosystem rather than just core tech,” said Ayako Yoshioka, portfolio manager at Wealth Enhancement Group.
Goldman Sachs estimates global AI-related infrastructure spending could reach $3 trillion to $4 trillion by 2030. Microsoft, Amazon, Meta and Alphabet are expected to spend roughly $350 billion combined this year.
AI investment is propping up global trade, with about 60% of U.S. data-center capex spent on imported IT equipment, according to Oxford Economics, much of it semiconductors from Taiwan, South Korea and Vietnam.
At least two dozen companies representing more than $21 trillion in combined market value reported quarterly earnings or spoke with Reuters about AI in recent days. Many, including Procter & Gamble (PG.N) and Boliden (BOL.ST), noted that the hoped-for productivity gains, though uneven, are beginning to show.
“We strongly believe the future contribution of artificial intelligence within R&D, within developing innovation, will steadily increase,” Schindler (SCHP.S) CEO Paolo Compagna told Reuters, though he said AI’s impact is yet to be seen. The Swiss lift and escalator maker raised its annual margin forecast last week.
Year-over-year revenue growth in the U.S. tech sector is up more than 15%, outpacing all other sectors, according to LSEG data.
Apple (AAPL.O) said it was significantly increasing AI investment, and Amazon projected capital spending of $125 billion in 2025.
WORRIES ABOUT OVERVALUATION
Since ChatGPT’s debut in 2022, global equity values have climbed 46%, or $46 trillion. One-third of that gain has come from AI-linked companies, according to Bespoke Investment Group.
Analysts warn of a quickening replacement cycle for servers, accelerators and chips as each new generation delivers exponential performance gains. The useful life of AI chips is shrinking to five years or less, forcing companies to “write down assets faster and replace them sooner,” said UBS semiconductor analyst Tim Arcuri.
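The accounting effect Arcuri describes can be sketched with straight-line depreciation: shortening an asset's assumed useful life raises the annual write-down. The figures below are illustrative, not drawn from the analysis:

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the asset's cost evenly over its life."""
    return cost / useful_life_years

# Hypothetical $100M accelerator purchase, charged off over 6 years vs 5 years
gpu_cluster_cost = 100_000_000

for life in (6, 5):
    charge = annual_depreciation(gpu_cluster_cost, life)
    print(f"{life}-year life: ${charge:,.0f} per year")
```

Moving the same hardware from a six-year to a five-year life raises the yearly charge from about $16.7 million to $20 million, which is the "write down assets faster" effect in miniature.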
The surge in AI-related spending has widened the gap between investment and returns, with a Reuters analysis showing that sales-to-capex ratios at major tech firms have fallen sharply as outlays on chips and data centers grow faster than revenue. Capital expenditures represent a larger chunk of cash generated by operating activities for some companies, causing some investor concern.
“If progress hasn’t been made toward monetization within three years, the market will start asking hard questions,” said Sumali Sanyal, senior portfolio manager at investment firm Xponance.
Microsoft reported a record $35 billion in capex in its most recent quarter and projected higher spending, prompting Bernstein analyst Mark Moerdler to ask whether the company was spending into a bubble. Microsoft Chief Financial Officer Amy Hood responded that AI-related demand still outpaces Microsoft’s spending. “I thought we were going to catch up. We are not,” she said.
Some companies are financing AI projects with debt. Oracle’s $18 billion bond sale last month was one of the largest ever for a tech company, and it looks set to be surpassed by an up to $30 billion bond sale from Meta Platforms (META.O). News of its largest ever bond sale knocked Meta’s shares down 11% on Thursday.
Reliance Industries on Thursday said its artificial intelligence arm, Reliance Intelligence, has partnered with Google to offer free access to Google AI Pro for 18 months, worth Rs 35,100, initially to select Jio 5G users.
The Reliance-Google offer includes greater access to Google’s Gemini 2.5 Pro model in the Gemini app, higher limits for generating images and videos with the Nano Banana and Veo 3.1 models, expanded access to NotebookLM for study and research, and 2 TB of cloud storage, a statement said.
The development follows OpenAI’s announcement that it will offer ‘ChatGPT Go’, which supports higher query limits and more image generation, free for one year to users in India who sign up during a limited-time promotional period beginning November 4.
“Reliance Intelligence aims to make intelligence services accessible to 1.45 billion Indians. Through our collaboration with strategic and long-term partners like Google, we aim to make India not just AI-enabled but AI-empowered – where every citizen and enterprise can harness intelligent tools to create, innovate and grow,” RIL Chairman Mukesh D Ambani said.
OpenAI ChatGPT Go, at present, is priced at Rs 399 per month.
“Google, in partnership with Reliance Intelligence, will begin rolling out Google’s AI Pro plan with its latest version of Google Gemini to eligible Jio users free of charge for 18 months. This 18-month offer is worth Rs 35,100,” RIL said in a statement.
Access will initially be given to Jio subscribers on unlimited 5G plans aged 18 to 25, and will expand to all Jio customers nationwide as quickly as possible.
Reliance has also announced a partnership with Google Cloud to broaden access to its advanced AI hardware accelerators, Tensor Processing Units (TPUs), to enable more organisations to train and deploy larger, more complex AI models.
Microsoft said the Azure outage was due to “DNS issues”
Websites disabled in Microsoft global outage come back online
Websites for Heathrow, NatWest and Minecraft returned to service late on Wednesday after experiencing problems amid a global Microsoft outage.
Outage tracker Downdetector showed thousands of reports of issues with a number of websites around the world over several hours.
Microsoft said some users of Microsoft 365 saw delays with Outlook among other services, but by 21:00 GMT, many websites that went down were once again accessible after the company reverted to a prior configuration.
The company’s Azure cloud computing platform, which underpins large parts of the internet, had reported a “degradation of some services” at 16:00 GMT.
It said this was due to “DNS issues” – the same root cause of the huge Amazon Web Services (AWS) outage last week.
Amazon said AWS was operating normally.
Other UK sites affected included supermarket Asda, retailer M&S and mobile phone operator O2, while in the US, people reported issues accessing the websites of coffee chain Starbucks and retailer Kroger.
Microsoft said business Microsoft 365 customers experienced problems.
Some web pages on Microsoft’s site also directed users to an error notification that read: “Uh oh! Something went wrong with the previous request.”
The tech giant resorted to posting updates to a thread on X after some users reported they could not access the service status page.
While NatWest’s website was temporarily impacted, the bank’s mobile banking, web chat, and telephone customer services remained available during the outage.
The UK consumer organisation Which? said businesses had an obligation to ensure customers were kept informed and supported as services were restored, and to compensate consumers impacted.
“Customers should keep evidence of any failed or delayed payments in case they need to make a claim,” advised Which? consumer law expert Lisa Webb.
“Those worried about missing a bill should contact the relevant company to explain the situation and request that any fees be waived,” Ms Webb added.
Meanwhile, business at the Scottish Parliament was suspended because of technical issues with the parliament’s online voting system.
The outage prompted a postponement of debate over land reform legislation that could allow Scotland to intervene in private sales and require large estates to be broken up.
A senior Scottish Parliament source told BBC News they believed the problems were related to the Microsoft outage.
Azure’s crucial role online
Exactly how much of the internet was impacted is unclear, but estimates typically put Microsoft Azure at around 20% of the global cloud market.
The firm said it believed the outage was a result of “an inadvertent configuration change”.
In other words, a behind-the-scenes system was changed, with unintended consequences.
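The failure mode described here can be illustrated with a minimal sketch, not Azure's actual tooling: a name-resolution check that distinguishes "DNS is broken" from "the servers are down". The hostnames are illustrative.

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        # getaddrinfo performs the same lookup a browser does before connecting
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        # Resolution failed: servers may be healthy, but clients cannot find them
        return False

# "localhost" resolves via the local hosts file;
# a reserved .invalid name (RFC 2606) never resolves
print(resolves("localhost"), resolves("name.invalid"))
```

When DNS fails this way, a service looks “down” to users even though the machines behind it are running, which is why reverting the configuration could bring sites back relatively quickly.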
The concentration of cloud services into Microsoft, Amazon and Google means an outage like this “can cripple hundreds, if not thousands of applications and systems,” said Dr Saqib Kakvi, from Royal Holloway University.
“Due to cost of hosting web content, economic forces lead to consolidation of resources into a few very large players, but it is effectively putting all our eggs in one of three baskets.”
Recent outages have laid bare the fragility of the modern-day internet, according to engineering professor Gregory Falco of Cornell University.
“When we think of Azure or AWS, we think of a monolithic piece of technology infrastructure but the reality is that it’s thousands if not tens of thousands of little pieces of a puzzle that are all interwoven together,” said Mr Falco.
He noted that some of those pieces are managed by the companies themselves while others are overseen by third parties such as CrowdStrike, which last year deployed a software update that affected more than eight million computers run on Microsoft systems.
Atlas, OpenAI’s new ChatGPT-powered browser, launched on Tuesday looking to “rethink what a browser can be about” and challenge the hegemony of Google Chrome. But how accurate are AI browsers?
The US artificial intelligence company OpenAI said on Tuesday that it is launching its own web browser, Atlas, to rival Google’s popular Chrome browser.
Atlas will be powered by OpenAI’s popular chatbot ChatGPT, as the California-based firm looks to revolutionize how people use the internet.
“Tabs were great, but we haven’t seen a lot of browser innovation since then,” said OpenAI CEO Sam Altman in a video presentation broadcast on Tuesday, speaking of a “rare, once-a-decade opportunity to rethink what a browser can be about and how to use one.”
What’s different about ChatGPT’s Atlas?
Altman suggested, for instance, that the classic URL search bar at the heart of traditional browsers could be replaced by an AI chatbot interface.
The browser will initially only be available for Apple’s Mac computers, the company said, adding that it is designed to help users complete “tasks without copying and pasting or leaving the page.”
Another key feature of the Atlas browser is its so-called “agent mode” which effectively surfs around the internet automatically on the user’s behalf, armed with a person’s browser history and predicting what sort of information they are likely to be looking for.
“It’s using the internet for you,” Altman said.
Criticism of AI-powered browsers
That’s one way of looking at it. But analyst Paddy Harrington of London-based market research group Forrester warned that another way of thinking about OpenAI’s new browser is that it’s “taking personality away from you.”
“Your profile will be personally attuned to you based on all the information sucked up about you,” Harrington told the Associated Press (AP) news agency. “OK, scary. But is it really you, really what you’re thinking, or what that engine decides it’s going to do? And will it add in preferred solutions [to users’ queries] based on ads?”
Either way, Harrington said it will be a big challenge for Atlas to “[compete] with a giant who has ridiculous market share.”
Atlas launched in Chrome-dominated market
Since launching in 2008, Google Chrome has amassed around 3 billion users worldwide, blowing rivals such as Microsoft’s Internet Explorer and then Edge browsers out of the water.
But AI chatbots such as ChatGPT are increasingly summarizing information on the internet so efficiently that many users are turning to them rather than the traditional practice of clicking on links suggested by a browser.
OpenAI has said ChatGPT already has more than 800 million users, while a survey conducted on behalf of AP this year found that about 60% of Americans – and 74% of those under 30 – use AI to find information at least some of the time.
Browsers such as Chrome have also integrated AI summaries into their search results, generally visible at the top of the results page above the first link, although concerns have been raised about the accuracy of this information.
People walk behind a logo of Meta Platforms during a conference in Mumbai, India, September 20, 2023. REUTERS/Francis Mascarenhas
Meta (META.O) has struck a $27 billion financing deal with Blue Owl Capital to fund its biggest data center project globally, as large technology companies race to build out the infrastructure needed to power their artificial intelligence ambitions.
Tuesday’s announcement marks Meta’s largest-ever private capital deal. Under it, Meta will retain about 20% equity in the Louisiana project, with the majority owned by funds managed by alternative asset manager Blue Owl Capital (OWL.N). Blue Owl contributed roughly $7 billion in cash to the joint venture, with Meta receiving a one-time payout of about $3 billion.
The planned data center in Richland Parish, Louisiana, known as Hyperion, is projected to deliver more than 2 gigawatts of compute capacity to support training of large language models, the technology behind tools such as ChatGPT and Google Gemini.
Doug Ostrover and Marc Lipschultz, Co-CEOs of Blue Owl, called Hyperion “an ambitious project that reflects the scale and speed required to power the next generation of AI infrastructure.”
Major tech companies, including Alphabet (GOOGL.O), Amazon.com (AMZN.O), Meta (META.O), Microsoft (MSFT.O) and CoreWeave (CRWV.O), are on track to spend $400 billion on AI infrastructure this year, Morgan Stanley estimates.
OpenAI, the startup at the heart of the AI boom, recently signed multiple deals that may cost over $1 trillion to secure about 26 gigawatts of computing capacity, enough to power roughly 20 million U.S. homes.
Meta’s finance chief, Susan Li, called Tuesday’s deal “a bold step forward.” The company has signed leases for the facility with a four-year initial term and an option to extend, and expects the project to create more than 500 jobs once it goes online.
Source: https://www.reuters.com/technology/meta-forms-joint-venture-with-blue-owl-capital-louisiana-data-center-2025-10-21/
The establishment of a $15 billion Google AI hub in Andhra Pradesh has ignited political tensions, particularly between Andhra Pradesh and Tamil Nadu.
Andhra Pradesh IT Minister Nara Lokesh has responded to political criticism from Tamil Nadu over Google’s decision to set up a $15 billion Artificial Intelligence hub in Visakhapatnam. The investment, announced as part of a five-year plan, sparked a political row after opposition parties in Tamil Nadu accused the ruling DMK government of failing to attract Google to the state, despite the tech giant’s CEO, Sundar Pichai, hailing from Madurai.
Taking to X, Lokesh wrote, “He chose Bharat,” dismissing regional politics and emphasising that India as a whole stood to gain from the tech major’s investment.
The AIADMK had targeted Chief Minister MK Stalin’s administration, with senior leader RB Udayakumar telling reporters, “Despite Google CEO Sundar Pichai being a Tamil, the DMK government failed to invite the company to establish its AI infrastructure hub here. This is not just a missed investment. It’s a lost chance for Tamil Nadu to emerge as a national hub for AI, data analytics, and digital infrastructure.”
Google’s $15 Billion AI Hub In Andhra To Be India’s Biggest
Google will invest USD 15 billion over the next five years to set up an AI hub in India, which will include the country’s largest data centre in partnership with Adani Group. The AI hub at Visakhapatnam in Andhra Pradesh will be Google’s largest outside the US and will include a 1-gigawatt data centre campus, new large-scale energy sources, and an expanded fiber-optic network, the tech giant said.
“It’s the largest AI hub that we are going to be investing in anywhere in the world outside of the US,” Google Cloud CEO Thomas Kurian said at an event here to sign the formal agreement.
Posting about her kids brings joy and connection, but with growing online risks, this mum-of-two has learnt that some moments are best kept just for family.
The dangers of the internet have led Ms Nabilah Awang to take precautions to protect her children’s digital footprint. (Illustration: CNA/Samuel Woo, iStock)
The first time I shared a photo of my firstborn son online, he was just 30 days old.
I was still adjusting to motherhood, revelling in the fact that I now had a baby boy.
Sure, I could have shouted it from the rooftops, but posting it on social media felt like the easiest way to tell the world.
Now that he’s four years old, I still find myself scrolling back to that photo, trying to remember how tiny those hands were.
Or when conversations about the newborn stage come up, I go straight to that picture and say: “This was how small he was!”
On my account page, I know exactly where to find it among my dozens of posts – unlike scrolling endlessly through the thousands of items in my phone gallery, which would take ages.
I’ve always found it heartwarming to capture and share memories of my children online. But now, I often find myself wondering how much of our children’s lives we should share online, especially in an age of artificial intelligence (AI) where every photo can be re-edited in seconds.
HOW SOCIAL MEDIA HAS TRANSFORMED PARENTING
Social media has transformed modern parenting into a highly visible and heavily scrutinised experience. On one hand, it offers parents access to a wealth of information, support groups and expert advice.
It has also helped normalise honest conversations about the challenges of parenthood, such as burnout, postpartum depression and mental health.
But there’s another side. Social media has fostered a culture of comparison, where curated highlight reels can often leave parents feeling inadequate or judged.
Influencer parenting is also rampant – people capitalising on their parenting experience by promoting trends and products, adding pressure or setting unrealistic expectations for the rest of us.
At the same time, I also follow parents who choose not to post their children online at all, or who blur their faces, especially on public accounts. For them, it’s about protecting their children from online harm, negative commentary, or simply thinking that posting pictures of their children online is performative.
I respect that deeply – what should always come first is the child’s wellbeing.
SHARING, BUT WITH INTENTION
The way I see it, posting updates isn’t about showing off – it’s about connection.
Family and friends living in other parts of the world can feel a sense of involvement in our lives, celebrating milestones and the little joys that make up our days.
This is especially true for a dear friend who relocated to the Netherlands. We often find ourselves laughing at a funny photo I shared of my two sons or marvelling at how tall her daughter has grown. It’s our way of remaining part of each other’s daily lives, even from afar.
It also creates moments of joy in an otherwise hectic routine and allows us to reflect on the small wins that parenthood often obscures.
Despite this, I’m aware that the internet is not a safe place, and you never really know who’s lurking in your followers list.
So I don’t post indiscriminately.
My Instagram account is fully private. I cleaned up my friends and followers list before my kids were born because I knew I wanted to share our daily lives with friends and family, and only fully clothed images and carefully chosen snapshots make it online.
I also never post anything that could embarrass them – tantrums, toilet training mishaps, or moments that should remain private.
No full names, no addresses, and no geotags that reveal our routines or locations.
And yes, that includes moments like when my son ran around the house in nothing but a Spider-Man mask during a toilet training session, screaming: “Muuuuuum, look at my poop!”
They are hilarious, but images and recordings of such moments are there for our private collection of memories, not for the internet to immortalise.
GROWING UP WITH A DIGITAL TRAIL
Even with strict boundaries, I am conscious that my kids already have a digital footprint at a young age.
I know that every photo, caption or tag contributes to a record they did not consent to.
And yes, it’s a little unnerving, especially when I think about how there’s a chance – however slim that may be – that a prospective employer might have access to this footprint, even if my account is fully private.
But all these concerns – comparison, judgment, and privacy – feel minute compared to my gravest fear: how easily children’s photos can be manipulated by AI.
An article by Monash University titled “Digital child abuse: Deepfakes and the rising danger of AI-generated exploitation” highlighted the alarming rise of AI-generated exploitation, where even innocent images can be altered to create synthetic child sexual abuse material without direct victim involvement.
A SpaceX Falcon 9 rocket launched from Vandenberg Space Force Base in March of this year, carrying multiple Starshield satellites into orbit. National Reconnaissance Office/NRO via X
A constellation of classified defense satellites built by the commercial company SpaceX is emitting a mysterious signal that may violate international standards, NPR has learned.
Satellites associated with the Starshield satellite network appear to be transmitting to the Earth’s surface on frequencies normally used for doing the exact opposite: sending commands from Earth to satellites in space. The use of those frequencies to “downlink” data runs counter to standards set by the International Telecommunication Union, a United Nations agency that seeks to coordinate the use of radio spectrum globally.
Starshield’s unusual transmissions have the potential to interfere with other scientific and commercial satellites, warns Scott Tilley, an amateur satellite tracker in Canada who first spotted the signals.
“Nearby satellites could receive radio-frequency interference and could perhaps not respond properly to commands — or ignore commands — from Earth,” he told NPR.
Outside experts agree there’s the potential for radio interference. “I think it is definitely happening,” said Kevin Gifford, a computer science professor at the University of Colorado Boulder who specializes in radio interference from spacecraft. But he said the question of whether the interference is truly disruptive remains unresolved.
SpaceX and the U.S. National Reconnaissance Office, which operates the satellites for the government, did not respond to NPR’s request for comment.
Caught by the wrong antenna
The discovery of the signal happened purely by chance.
Tilley regularly monitors satellites from his home in British Columbia as a hobby. He was working on another project when he accidentally triggered a scan of radio frequencies that are normally quiet.
“It was just a clumsy move at the keyboard,” he said. “I was resetting some stuff and then all of a sudden I’m looking at the wrong antenna, the wrong band.”
The band of the radio spectrum he found himself looking at, between 2025 and 2110 MHz, is reserved for “uplinking” data to orbiting satellites. That means there shouldn’t be any signals coming from space in that range.
But Tilley’s experienced eye noticed there appeared to be a signal coming down from the sky. It was in a part of the band “that should have nothing there,” he said. “I got a hold of my mouse and hit the record button and let it record for a few minutes.”
Tilley then took the data and compared it to a catalog of observations made by other amateur satellite trackers. These amateurs, located around the world, use telescopes to track satellites as they move across the sky and then share their positions in a database.
“Bang, up came an unusual identification that I wasn’t expecting at all,” he said. “Starshield.”
Starshield is a classified version of SpaceX’s Starlink satellites, which provide internet service around the world. The U.S. has reportedly paid more than $1.8 billion so far for the network, though little is known about it. According to SpaceX, Starshield conducts both Earth observation and communications missions.
Since May of 2024, the National Reconnaissance Office has conducted 11 launches of Starshield satellites in what it describes as its “proliferated system.”
“The NRO’s proliferated system will increase timeliness of access, diversify communications pathways, and enhance resilience,” the agency says of the system. “With hundreds of small satellites on orbit, data will be delivered in minutes or even seconds.”
Tilley says he’s detected signals from 170 of the Starshield satellites so far. All appear in the 2025-2110 MHz range, though the precise frequencies of the signals move around.
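As a rough illustration (not code from Tilley or NPR), the anomaly amounts to a simple range check: any signal observed coming *down* from orbit inside the Earth-to-space band is out of place. The band edges below come from the article; the example frequencies are hypothetical.

```python
# Illustrative only. Per the ITU allocation described in the article,
# 2025-2110 MHz is reserved for "uplink" (Earth-to-space) commands,
# so a downlink observed inside it is anomalous.
UPLINK_BAND_MHZ = (2025.0, 2110.0)

def downlink_is_anomalous(freq_mhz: float) -> bool:
    """A space-to-Earth signal inside the uplink-only band is out of place."""
    low, high = UPLINK_BAND_MHZ
    return low <= freq_mhz <= high

print(downlink_is_anomalous(2080.0))  # True: inside the protected band
print(downlink_is_anomalous(2250.0))  # False: outside it
```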
Signal’s purpose in question
It’s unclear what the satellite constellation is up to. Starlink, SpaceX’s public satellite internet network, operates at much higher frequencies to enable the transmission of broadband data. Starshield, by contrast, is using a much lower frequency range that probably only allows for the transmission of data at rates closer to 3G cellular, Tilley says.
Tilley says he believes the decision to downlink in a band typically reserved for uplinking data could also be designed to hide Starshield’s operations. The frequent shift in specific frequencies used could prevent outsiders from finding the signal.
Gifford says another possibility is that SpaceX was just taking advantage of a quiet part of the radio spectrum. Uplink transmissions from Earth to satellites are usually rare and brief, so these frequencies probably remain dark most of the time.
“SpaceX is smart and savvy,” he says. It’s possible they decided to just “do it and ask forgiveness later.”
Semiconductor chips are seen on a circuit board of a computer. (Image for representation only) Credit: Reuters File Photo
India’s first indigenously designed 7-nanometer computer processor, ‘Shakti’, is expected to be ready by 2028 and could be produced at a local chip plant in the future, the IIT Madras-based team informed Union Minister Ashwini Vaishnaw on Saturday.
A video posted on the minister’s social media account shows him giving the go-ahead to the team to prepare for indigenous production of the chipset, which can be used in IT servers.
Vaishnaw said that the 7 nm chip design will be ready by 2028 and that a wafer-fabrication plant (chip production facility) in the country will be ready by then.
“We will be taking path from 28 nanometer to 7 nm so in future this can be loaded in our Fab. Let’s do it,” Vaishnaw said in the video.
The processor being developed by the team at IIT Madras is aimed at deployment in computer servers for the financial, communications, defense and strategic sectors.
OpenAI plans to introduce stronger age checks that would allow erotic content for verified adults on ChatGPT.(File Photo/Representational)
OpenAI has announced that ChatGPT will soon allow more content like “erotica for verified adults” as part of what the company describes as a “treat adult users like adults” principle. This is expected to be part of OpenAI’s major update for ChatGPT, allowing users to customise their AI assistant’s personality and include options for more human-like answers.
Users have typically had to turn to social storytelling platforms like Wattpad to read, write, and share original stories, including erotic content, though with restrictions.
In a post on X, the company’s CEO, Sam Altman, said they observed that stricter controls on AI to deal with mental health concerns had made the chatbot “less useful/enjoyable to many users who had no mental health problems”.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues,” Altman wrote. “We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.”
Notably, the stricter safety rules came in after a teenager from California, Adam Raine, died by suicide earlier this year. His parents filed a lawsuit alleging that ChatGPT gave him advice on how to take his own life.
“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” Altman wrote.
In the next few weeks, OpenAI will put out a version of ChatGPT that will let people have more control over the chatbot’s tone and personality, Altman said.
“If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing),” he posted.
Medications prescribed years ago may still be influencing the bacteria living in your gut today, according to new research that challenges the assumption that drug effects end when treatment stops.
Analyzing gut bacteria samples from 2,509 people in Estonia, researchers discovered that various medications continue to leave detectable fingerprints on the microbiome long after someone has stopped taking them. Beta-blockers, antidepressants, proton pump inhibitors, and benzodiazepines all left effects that lasted for years, with some changes visible three or more years after last use.
The gut microbiome, which is the collection of trillions of bacteria in the digestive system, influences digestion, immunity, and overall health. Changes to this ecosystem can affect everything from nutrient absorption to susceptibility to infections.
Tracking Medication History Through Electronic Health Records
The research team from the University of Tartu in Estonia took advantage of something most microbiome studies lack: detailed electronic health records showing exactly when people filled prescriptions over a five-year period. Looking backward in time, they could see whether medications used years earlier were still associated with changes in gut bacteria composition.
They compared people who had last used certain drugs more than one, two, three, or even four years before providing stool samples with people who hadn’t used those drugs at all in the preceding five years. The differences, while modest in size, were consistent across drug classes.
Out of 186 medications analyzed, 167 were associated with changes in the gut microbiome when actively used. But 78 medications—roughly 42 percent of all those analyzed—displayed what researchers call “carryover effects” that lasted well beyond the treatment period.
Beta-Blockers And Anxiety Medications Leave Lasting Marks On The Microbiome
While antibiotics left the expected long-term effects, medications targeting human biology rather than bacteria left surprisingly durable impacts as well.
Beta-blockers, commonly prescribed for high blood pressure and heart conditions, were associated with gut bacteria changes detectable even when people had stopped taking them several years earlier. The same held true for benzodiazepine derivatives like Xanax and Valium, which are used to treat anxiety and sleep disorders.
Antidepressants, particularly selective serotonin reuptake inhibitors, also demonstrated carryover effects. So did proton pump inhibitors, the medications millions take for acid reflux and heartburn.
In some cases, the more prescriptions someone had filled in the past, the stronger the effect on their current microbiome. The “additive” pattern suggests that medication history compounds over time rather than simply washing away after treatment ends.
Anxiety Drugs Rival Antibiotics In Gut Health Impacts
Benzodiazepines stood out in the analysis. These anti-anxiety medications had effects on gut bacteria composition that rivaled those of broad-spectrum antibiotics. Different drugs in this class had varying impacts, with alprazolam exerting a broader effect than diazepam, even though both treat similar conditions.
The study also revealed differences among medications in the same class. Among beta-blockers, metoprolol was associated with much stronger microbiome changes than nebivolol. Among proton pump inhibitors, omeprazole demonstrated different patterns than pantoprazole or esomeprazole.
These variations matter for clinical practice. If two medications treat the same condition but one has a more dramatic effect on gut bacteria, that information could eventually influence prescribing decisions.
Which Bacterial Species Are Most Affected
Specific bacterial species followed consistent patterns across multiple drug classes. Several members of the Clostridiales order increased in abundance among people taking beta-blockers, macrolide antibiotics, biguanides like metformin, and proton pump inhibitors.
Proton pump inhibitors were linked to increases in oral bacteria like Streptococcus parasanguinis and Veillonella parvula, species that normally reside in the mouth but can colonize the gut when stomach acid is reduced. The finding aligns with previous research and helps explain why PPIs can sometimes lead to gut infections.
Many of the medications studied were negatively correlated with overall bacterial diversity in the gut. People taking more unique medications at the time of sampling tended to have lower microbial richness, meaning fewer different species of bacteria.
Before-and-After Evidence Supporting Causal Links
To confirm their findings, researchers analyzed a subset of 328 people who provided stool samples twice, about four years apart. Watching what happened when people started or stopped taking medications between the two time points provided clearer evidence.
When people began taking penicillins, macrolides, proton pump inhibitors, benzodiazepines, or glucocorticoids, their gut bacteria changed in predictable ways. When they discontinued medications, the bacterial changes moved in the opposite direction, supporting a likely causal relationship between drug use and microbiome alterations.
These before-and-after comparisons provide stronger evidence than simply observing differences between people taking and not taking medications at a single point in time.
Why This Matters For Gut Health Research
The study has immediate consequences for microbiome research. Scientists trying to understand links between gut bacteria and diseases need to account not just for current medication use, but for prescriptions filled months or years earlier. Otherwise, they risk confusing medication effects with disease effects.
The researchers demonstrated this problem by revealing that several disease-microbiome associations were actually confounded by long-term drug usage. When past medication history was properly accounted for, some apparent disease signals disappeared.
For clinical medicine, the findings raise questions about cumulative effects of medications taken over time. If drugs leave lasting marks on the microbiome, and if the microbiome influences health, then medication decisions today could have consequences that extend far beyond the treatment period.
The research also suggests that gut bacteria might not fully recover after antibiotic courses. People who had used antibiotics years earlier still had lower bacterial diversity than people who had avoided antibiotics entirely during the five-year observation window. Time since last antibiotic treatment didn’t seem to bring diversity levels back to baseline.
Study participants who used prescription medications took an average of about three different drugs from diverse classes at the time of sample collection. During the five-year observation period, they had used more than 500 different medications at the most specific classification level.
When someone takes multiple medications, their combined effects on the microbiome might be additive or even synergistic. Past drug usage explained slightly more variance in microbiome composition than current drug usage, suggesting that accumulated medication history matters at least as much as what someone is taking right now.
While the study was conducted in Estonia, the medications analyzed are used worldwide. Beta-blockers, antidepressants, proton pump inhibitors, and benzodiazepines rank among the most commonly prescribed drugs in the United States and many other countries.
Elon Musk’s race to dominate our planet’s orbit with his satellite constellations is creating tons of space junk — enough of it, in fact, that we might want to start looking up.
One or two of these Starlink satellites are now falling back to Earth every single day, storied Smithsonian astrophysicist Jonathan McDowell recently told EarthSky. And that figure, McDowell warned, is only going to keep climbing.
The alarming statistic underscores the concerns around rapidly populating the planet’s Low Earth Orbit with expendable satellites. Musk’s SpaceX has been launching thousands of them up there using its reusable rockets since 2019, with more than 8,000 currently in operation.
With those efforts accelerating in recent years, SpaceX has launched more than 2,000 satellites in 2025 alone. Meanwhile, its competitors are rushing to catch up with their own satellite-based internet service, with Amazon kickstarting its plan to deploy more than 3,200 with its first batch launched earlier this year.
“With all constellations deployed, we expect about 30,000 low-Earth orbit satellites (Starlink, Amazon Kuiper, others) and perhaps another 20,000 satellites at 1,000 km [620 miles] from the Chinese systems,” McDowell told EarthSky.
Low Earth Orbit’s rapidly getting more crowded, in other words, and that means a lot of satellite casualties. One of the reasons they’re occurring so frequently is that Starlink’s satellites have a short lifespan of around five years. After this, they’re guided towards the Earth, where they’re supposed to burn up upon re-entering the atmosphere.
All those cremated satellites have scientists concerned about the pollution they’re causing by releasing metals into the stratosphere, with one study speculating that it could kick off a chain reaction that devastates the ozone layer.
“So far answers have ranged from ‘this is too small to be a problem’ to ‘we’re already screwed,’” McDowell told The Register. “But the uncertainty is large enough that there’s already a possibility we’re damaging the upper atmosphere.”
And atmospheric pollution may soon be the least of our worries. In a 2023 report, the Federal Aviation Administration warned that by 2035, some 28,000 fragments from Starlink satellites could survive re-entry each year, skyrocketing the annual chance of someone on the ground being struck and killed by space debris — once considered an astronomical improbability — to a staggering 61 percent.
As it stands, the Earth’s on track to be bombarded by five satellite re-entries per day in the near future, McDowell warned. But that’s not even the worst-case scenario. McDowell fears that if satellite constellations become too crowded, they could trigger a disastrous chain reaction called Kessler syndrome, in which a few collisions between satellites cascade out of control and create even more space debris, potentially trapping humankind below a whirling vortex of orbital shrapnel. SpaceX’s satellites are low enough that it’s unlikely they’d survive the amount of time needed for this cascade to happen, but its dominance there may force competitors to higher orbits, where their craft could take decades, if not centuries, to de-orbit.
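The re-entry rate is consistent with simple back-of-envelope arithmetic: a constellation held at constant size, with a fixed satellite lifespan, must de-orbit roughly its fleet size divided by that lifespan each year. A minimal sketch under those assumptions (the function name is illustrative, not from any cited source):

```python
# Back-of-envelope estimate, not a figure from McDowell or the FAA:
# a constellation maintained at constant size replaces (and de-orbits)
# about fleet_size / lifetime_years satellites per year.
def reentries_per_day(fleet_size: int, lifetime_years: float) -> float:
    return fleet_size / lifetime_years / 365.0

print(round(reentries_per_day(8_000, 5), 1))   # ~4.4/day for today's fleet
print(round(reentries_per_day(30_000, 5), 1))  # ~16.4/day at 30,000 satellites
```

The ~4.4-per-day figure for the current ~8,000-satellite fleet with five-year lifespans lines up with the roughly daily re-entries reported today and the near-term forecast of about five per day.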
OpenAI could now be the world’s most valuable startup, ahead of Elon Musk’s SpaceX and TikTok’s parent company ByteDance, after a secondary stock sale designed to retain employees at the ChatGPT maker.
Current and former OpenAI employees sold $6.6 billion in shares to a group of investors, pushing the privately held artificial intelligence company’s valuation to $500 billion, according to a source with knowledge of the deal who was not authorized to discuss it publicly.
The investors buying the shares included Thrive Capital, Dragoneer Investment Group and T. Rowe Price, along with Japanese tech giant SoftBank and the United Arab Emirates’ MGX, the source said Thursday.
The valuation reflects high expectations for the future of AI technology and continues OpenAI’s remarkable trajectory from its start as a nonprofit research lab in 2015.
But with the San Francisco-based company not yet turning a profit, it could also amplify concerns about an AI bubble if the generative AI products made by OpenAI and its competitors don’t meet the expectations of investors pouring billions of dollars into research and development.
OpenAI CEO Sam Altman has sought to dismiss those concerns, most recently last week, when he toured a massive data center complex being built to run the company’s AI systems in Abilene, Texas.
“Between the ten years we’ve already been operating and the many decades ahead of us, there will be booms and busts,” Altman said after being asked about a bubble. “People will overinvest and lose money, and underinvest and lose a lot of revenue.”
He added that “we’ll make some dumb capital allocations” and there will be short-term ups and downs but that “over the arc that we have to plan over, we are confident that this technology will drive a new wave of unprecedented economic growth,” along with scientific breakthroughs, improvements to quality of life and “new ways to express creativity.”
Just this week, the company launched two different business ventures, one a partnership with Etsy and Shopify for online shopping through ChatGPT and another a social media app, Sora, for generating and sharing AI videos.
OpenAI has been struggling to offer investors and staff the same perks and compensation as the publicly traded tech giants with which it competes. Facebook parent Meta Platforms, in particular, has been on a hiring spree for elite AI engineers and in June made a $14.3 billion investment in AI company Scale that recruited its CEO Alexandr Wang.
OpenAI’s for-profit subsidiary, valued at $500 billion, is technically controlled by the board of OpenAI’s nonprofit and both are still bound to pursue the nonprofit’s charitable purpose.
OpenAI’s partnerships with major companies and its plans to change its corporate structure have drawn the scrutiny of regulators, including the attorneys general of California and Delaware, who oversee charitable organizations that operate or are incorporated in their states.
Elon Musk axed an X engineer after they delivered the harsh truth about why his posts were flopping, a new book claims.
Journalist Jacob Silverman revealed in his new book Gilded Rage: Elon Musk and the Radicalization of Silicon Valley that Musk became fixated on how people interacted with his posts following his $44 billion acquisition of Twitter.
“Firing more than half of Twitter employees, Musk transformed how the platform operated,” Silverman writes in an excerpt obtained by Newsweek about the mass layoffs that occurred after the tech billionaire’s 2022 Twitter takeover.
Silverman then details a 2023 firing reported at the time by the tech news site Platformer.
“He fired a company engineer who told him that engagement on his tweets was down because people weren’t as interested in him,” Silverman writes.
The original report described how Musk gathered engineers and advisers at Twitter’s headquarters in 2023, where multiple sources recalled him saying: “This is ridiculous… I have more than 100 million followers, and I’m only getting tens of thousands of impressions.”
“One of the company’s two remaining principal engineers offered a possible explanation for Musk’s declining reach,” Platformer reported, withholding the engineer’s name due to the harassment Musk directed at former employees.
According to the publication, employees presented Musk with internal data and a Google Trends chart showing his popularity had fallen from a peak score of 100 to just nine. Musk then reportedly told the engineer, “You’re fired, you’re fired.”
In his new book, Silverman cites an example—also reported at the time by Platformer—recounting the 2023 Super Bowl, when both then-President Joe Biden and Musk tweeted their support for the Philadelphia Eagles, with Biden’s post generating nearly 20 million more impressions.
“That apparently was unacceptable to Musk, who deleted his tweet and flew to California after the game to demand changes to Twitter’s algorithm,” Silverman writes.
Silverman then quotes an alleged 2:36 a.m. Slack message from Musk’s cousin James Musk after the Super Bowl fiasco, which read: “We are debugging an issue with engagement across the platform. Any people who can make dashboards and write software please can you help solve this problem. This is high urgency. If you are willing to help out please thumbs up this post.”
“Thanks to the middle-of-the-night participation of 80 company engineers, the ‘high urgency’ issue was quickly solved,” Silverman writes, detailing changes that caused “Twitter’s systems to privilege Musk’s posts above all others.”
“The For You feed became a mirror of Musk’s interests, containing the right-wing accounts he followed,” Silverman writes.
The author of the original Platformer article, which detailed the operation to change the X (formerly Twitter) algorithm, had been called out by Musk on X following its publication.
“The ‘source’ of the bogus Platformer article is a disgruntled employee who had been on paid time off for months, had already accepted a job at Google and felt the need to poison the well on the way out. Twitter will be taking legal action against him,” Musk posted on X in response.
“All my sources for the story were current Twitter employees,” Zoe Schiffer, the author of the Platformer story, wrote about her sources after Musk’s threat.
For decades, doctors have debated whether older adults should take a daily aspirin to prevent cancer. New research reveals they may have been asking the wrong question. Instead of wondering whether aspirin works for everyone, scientists have discovered that the decades-old medication helps some seniors avoid cancer while potentially increasing the hazard for others.
An important new study published in JAMA Oncology analyzed 9,350 healthy adults aged 70 and older and found that blanket aspirin recommendations miss the mark for nearly half of all seniors. Using advanced predictive modeling, researchers identified distinct groups: 59% of participants were likely to benefit from daily aspirin, while 41% were better off avoiding it altogether.
These findings shred the one-size-fits-all approach that has dominated medical thinking about aspirin and cancer prevention for generations. Rather than treating all older adults the same way, the research points toward a future where genetic factors and personal characteristics guide treatment decisions.
When Aspirin Helps and When It Harms
Among seniors predicted to benefit from aspirin therapy, the medication was associated with a 15% lower hazard of developing cancer. But for those in the unfavorable group, aspirin was associated with a 14% higher hazard of cancer. These figures point to meaningful differences in who should and shouldn’t take daily aspirin for cancer prevention.
Dr. Le Thi Phuong Thao from Monash University, who led the research team, identified several factors that determined which group people fell into. Those most likely to benefit were older, had never smoked, carried specific genetic mutations, had family histories of cancer, and maintained lower body weights. Current smokers with higher BMIs, diabetes, and personal cancer histories were more likely to see increased cancer hazard from aspirin.
Most striking was the role of a genetic condition called clonal hematopoiesis of indeterminate potential, or CHIP. This age-related condition involves mutations in blood-forming stem cells with a variant allele frequency of 10% or greater and affected 5.7% of study participants. CHIP emerged as the strongest predictor of aspirin benefit, while current smoking was the strongest predictor of potential harm.
“Current smoking was the most important predictor for a detrimental effect of aspirin on cancer incidence,” the researchers noted, while CHIP was “the most important predictor for a beneficial effect.”
The Science Behind Personalized Prevention
CHIP showed particular importance because it’s linked to increased inflammation in the body. People with this genetic condition have elevated levels of inflammatory substances in their blood, including interleukin-6 and tumor necrosis factor-alpha. Aspirin works by blocking cyclooxygenase enzymes that drive inflammation throughout the body, potentially explaining why it’s particularly effective for people with CHIP.
The research challenges decades of medical advice that has swung from enthusiastically recommending aspirin for everyone to questioning its benefits entirely. Major medical organizations, including the U.S. Preventive Services Task Force, have struggled with mixed research results, particularly for people over 60.
These conflicting results may have occurred because previous studies lumped together people who respond very differently to aspirin therapy. When researchers separated participants based on their predicted responses, clear patterns emerged.
A New Era of Precision Medicine
Using their predictive model, researchers calculated that personalized treatment decisions improved five-year cancer risk reduction by an average of 2.3% compared to giving aspirin to everyone. While this might seem modest, it represents a significant advance in precision medicine for cancer prevention.
The model weighs factors together instead of looking at each risk factor one by one. Age, smoking history, genetic status, family history, and body weight all contribute to the final calculation of whether someone is likely to benefit or be harmed by aspirin therapy.
However, the researchers emphasize that cancer prevention represents just one piece of the aspirin decision puzzle. The medication also affects bleeding risk and cardiovascular disease prevention, factors that must be weighed alongside cancer considerations.
What This Means for Patients
The study focused on healthy older White adults in Australia who began aspirin therapy after age 70, limiting how broadly the findings apply to other populations. The researchers also examined cancer prevention over five years, while previous studies suggest aspirin’s benefits may be more pronounced when started younger and continued longer.
Before these findings can change medical practice, the predictive model needs validation in different populations. The researchers provide mathematical equations that other scientists can use to test their approach in diverse groups of patients.
This research points toward a move away from blanket medical recommendations toward individualized treatment strategies. Rather than asking whether aspirin prevents cancer, doctors may soon routinely consider genetic profiles, lifestyle factors, and personal medical histories to determine who is most likely to benefit.
For some older adults, genetic testing for conditions like CHIP could potentially become as routine as checking cholesterol levels. The goal isn’t to complicate medical decision-making but to make it more precise and effective.
Many people turn to acupuncture when bothered by back pain. (Photo by Katherine Hanlon on Unsplash)
Lower back pain is the leading cause of disability worldwide, yet most treatments offer limited relief. One of the most divisive is acupuncture – recommended in U.S. guidelines for lower back pain but not in the U.K. A new study has now examined whether it truly helps.
The study found that acupuncture does provide some relief for people with lower back pain, though the benefit was modest. Having additional maintenance sessions did not boost the effect.
More significantly, the improvement was smaller than that seen in studies using different approaches from Australia and the U.S. Although acupuncture is unlikely to be the best treatment for lower back pain, the fact that it helps at all reveals something important about the condition and how people can find relief.
The study included 800 older adults who were randomly assigned to “usual care” or one of two acupuncture plans. The standard program involved 11 sessions over 12 weeks, while the enhanced version added five more maintenance sessions over the following 12 weeks. The trial took place with 55 acupuncturists in different parts of the U.S.
After six and 12 months, both acupuncture groups had similar results, so the extra follow-up sessions didn’t help. Both acupuncture groups had less pain and disability after six months than those who received usual care – and about 40% improved by at least 30%. These improvements persisted until the 12-month evaluation, and no major safety concerns emerged.
These findings align with large reviews of lower back pain treatments focusing on acupuncture or all non-drug and non-surgical approaches. Overall, acupuncture performs somewhat better than no treatment or usual care at improving pain and disability, though this benefit is typically small.
More tellingly, reviews show that any benefit from acupuncture appears even smaller when compared with sham (pretend) or placebo treatments. This means some of the benefit may come from the experience of being treated, not the acupuncture itself.
What patients expect can affect how much they say they improve, which is important in all studies that rely on self-reported pain. This makes it crucial to consider what comparison treatment was used when any study claims acupuncture helps, as usual-care groups – who typically receive less time and attention – are easiest to outperform.
Alternatives To Acupuncture For Lower Back Pain
Some people might say that any relief from lower back pain is worth celebrating or even paying for. But it’s also important to think about whether safer and cheaper options are available.
The benefits of different mind-body treatments for lower back pain studied in Australia and the U.S. are worth considering, as they appear to offer greater reductions in pain and disability without increasing costs or risks.
The Australian study showed much greater reductions in disability and pain (using the same outcome measures) through a rehabilitation program delivered by physiotherapists that addressed both physical and psychological aspects of back pain. Even more importantly, the economic analysis revealed significant cost savings.
The U.S. study involved teaching people that back pain comes from their brain being overprotective, rather than actual damage to their back. It used talk therapy techniques to help the participants think about and respond to pain differently. As with the Australian study, the U.S. study also demonstrated much larger reductions in pain and disability than those seen with acupuncture – albeit using slightly different measures.
The fact that these holistic, mind-body rehabilitation programs outperform acupuncture – and other relatively basic interventions such as massage and medication – reflects the emerging international consensus that comprehensive approaches help people manage their lower back pain.
When two researchers at the University of Cambridge challenged ChatGPT with a classic puzzle from ancient Greece, they found that the model sometimes behaved less like a search engine and more like a learner. It took its time testing approaches, reconsidering when prompted, and even resisting incorrect suggestions.
The study suggests that artificial intelligence may do more than retrieve memorized answers. In certain settings, it can appear to work through problems in a way that resembles student reasoning.
This finding does not mean ChatGPT “thinks” like a human. The authors emphasize their study is exploratory and based on a single conversation. Still, the results raise questions about how AI might support education if guided well.
How Researchers Gave ChatGPT Plato’s Famous Math Test
Nadav Marco, who’s now at Hebrew University, and Andreas Stylianides revisited Plato’s dialogue “Meno.” In that text, Socrates shows an uneducated slave boy how to double the area of a square through guided questions. Socrates used this exchange to argue that knowledge already exists in the mind and can be drawn out through teaching.
The researchers posed the same 2,400-year-old puzzle to ChatGPT-4. Instead of repeating the well-known geometric solution from Plato’s dialogue, ChatGPT used algebra, which wasn’t invented until centuries later.
What made this notable is that the AI later showed it did know the geometric method. If it were simply recalling from training data, the obvious move would have been to cite Plato’s approach immediately. Instead, it appeared to construct a different solution pathway.
The researchers also tried to mislead ChatGPT into making the same mistake as Plato’s slave, who initially thought doubling the sides would double the area. But ChatGPT refused to accept this wrong answer, carefully explaining why doubling the sides actually creates four times the area, not twice.
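The geometry at stake in this exchange is easy to verify numerically. The short sketch below (an illustration, not code from the study) checks both the slave boy's wrong answer and the dialogue's geometric solution: doubling the sides quadruples the area, while building the new square on the original's diagonal exactly doubles it.

```python
import math

# Plato's puzzle: double the area of a square with side s.
s = 2.0
area = s * s                      # original area: 4.0

# The naive (wrong) answer from the dialogue: doubling the side.
naive = (2 * s) ** 2              # 16.0, four times the area, not double

# The geometric solution: build the new square on the diagonal
# of the original, whose length is s * sqrt(2).
diagonal = s * math.sqrt(2)
doubled = diagonal ** 2           # equals 2 * area (up to float rounding)

print(naive == 4 * area)                # True
print(math.isclose(doubled, 2 * area))  # True
```

The check makes clear why ChatGPT's correction was right: area scales with the square of the side, so scaling the side by 2 scales the area by 4, and the factor needed for doubling is sqrt(2).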
When ChatGPT Faced Variations on the Problem
The researchers then changed the puzzle, asking ChatGPT how to double the area of a rectangle. Here, the model showed surprising awareness of the problem’s limitations. Rather than incorrectly applying the square’s diagonal method, ChatGPT explained that “the diagonal does not offer a straightforward new dimension” for rectangles.
This response demonstrated something resembling mathematical reasoning. The AI seemed to understand that techniques working for one shape don’t automatically apply to others—a distinction that often challenges human students learning geometry.
When prompted for more practical solutions, ChatGPT initially focused on algebraic approaches, similar to its first response about squares. But the AI’s explanations of how it was reasoning were inconsistent. At times it described generating answers in real time; at other points it implied the responses were not spontaneous.
The authors noted that these reflections may not accurately represent how the system works. They cautioned against taking the AI’s own words at face value, since language models are not reliable guides to their inner processes.
The “Chat’s ZPD”: Where AI Learns with Guidance
Drawing on psychologist Lev Vygotsky, the researchers described a “Chat’s Zone of Proximal Development.” These are problems ChatGPT could not solve independently but managed when guided with timely prompts.
Vygotsky’s original concept describes the gap between what a child can do alone versus what they can accomplish with help from a teacher or more skilled peer. The researchers found a similar pattern with ChatGPT: certain problems remained out of reach until the right kind of guidance appeared.
Some answers looked like retrieval from training data. Others, especially those involving resistance to incorrect suggestions or adaptation to new prompts, resembled the problem-solving steps of students. While this does not prove that the model truly “understands,” it does suggest that, under the right conditions, AI output can mirror aspects of human learning.
When the researchers asked for an “elegant and exact” solution to the original square problem, ChatGPT provided the geometric construction method. The AI itself admitted that “there [was] indeed a more straightforward and mathematically precise approach … which [it] should have emphasised directly in response to [our] initial inquiry.”
This self-correction suggested the model could reflect on and improve its responses when given appropriate prompts, much like a student who realizes they took a harder path than necessary.
What This Means for Students and Teachers
If AI tools can sometimes behave like learners, they could become useful educational partners. Instead of treating ChatGPT as an answer machine, students and teachers might experiment with prompts that invite collaboration and exploration.
The type of prompt matters significantly. The researchers found that asking for exploration and collaboration yielded different responses than requesting summaries based on reliable sources. Knowing how to phrase prompts could shape whether the model retrieves or attempts to generate new approaches.
Teachers could use this approach to model problem-solving strategies. Rather than asking AI for the final answer, they might guide it through the same thinking process they want students to follow. This could help students see that even sophisticated systems sometimes struggle, reconsider approaches, and need guidance to reach better solutions.
Students, meanwhile, could practice their own reasoning by working alongside AI that shows its thinking process. When ChatGPT resists incorrect suggestions or explains why certain approaches won’t work, students get opportunities to understand mathematical reasoning rather than just memorize procedures.
The authors stress that their study, published in the International Journal of Mathematical Education in Science and Technology, involved only one conversation with one model (ChatGPT-4 in February 2024). Results may differ with newer versions or different systems. Still, the findings invite educators to consider how AI might support exploration, not just provide ready-made answers.
As the researchers put it, users should “pay attention to the type of knowledge they wish to get from an LLM and try to communicate it clearly in their prompts.” Guidance can help AI attempt solutions it would not manage on its own.
Building Mathematical Understanding Through AI Collaboration
The study reveals potential for AI to serve as more than an information source. When ChatGPT resisted incorrect suggestions and explained its reasoning, it demonstrated behaviors that could help students develop critical thinking skills.
Rather than simply accepting or rejecting AI outputs, students could learn to evaluate mathematical reasoning, whether from artificial systems or human sources. This skill becomes increasingly valuable as AI tools become more prevalent in academic and professional settings.
The researchers’ approach also highlights how questioning techniques can reveal different aspects of AI behavior. By varying their prompts and challenging the system’s responses, they uncovered evidence of both retrieval and generation processes within the same conversation.
A Tentative Step, Not a Final Word
The study opens questions about how we understand machine intelligence. If AI can engage in something resembling reasoning, complete with self-correction and resistance to errors, the line between retrieval and generation becomes blurred. This doesn’t mean AI has achieved consciousness, but it suggests these systems might be more sophisticated thinking partners than previously imagined.
For teachers and students, the lesson is not that machines replace human reasoning, but that they could help learners explore strategies, confront mistakes, and practice persistence in problem-solving. The key lies in knowing how to prompt and guide these systems effectively.
The Moon and Sun staged a dramatic spectacle on September 21, when a deep partial solar eclipse darkened skies over the Pacific Ocean and turned the glowing disk of the sun into a radiant crescent.
The eclipse occurred during the new Moon phase, when the Moon passes directly between Earth and the Sun. The alignment hid much of the solar surface without fully blocking its light, creating the distinctive crescent-shaped sun seen across parts of the Southern Hemisphere.
The best views came from Antarctica, New Zealand and southern Australia, where the Moon appeared to take the deepest bite out of the Sun. In Dunedin, New Zealand, daylight dimmed noticeably as the lunar disk covered much of the solar face. Even everyday shadows reflected the spectacle — pinpoints of light shining through trees or small gaps turned into miniature crescents.
The timing added another layer of drama. For Southern Hemisphere viewers, the eclipse fell on Monday local time, just two weeks after Asia witnessed a striking total lunar eclipse.
For the first time, astrophysicists detected a supernova embedded in a wind rich with silicon, sulfur and argon. The observations suggest the massive star somehow lost its outer hydrogen, helium and carbon layers — exposing the inner silicon and sulfur-rich layers — before exploding. (Credit: W.M. Keck Observatory/Adam Makarenko)
Astronomers have glimpsed the inner structure of a dying star in a rare kind of cosmic explosion called an “extremely stripped supernova.”
In a paper published in Nature, Steve Schulze of Northwestern University in the United States and colleagues describe the supernova 2021yfj and a thick shell of gas surrounding it.
Their findings support our existing theories of what happens inside massive stars at the end of their lives – and how they have shaped the building blocks of the universe we see today.
How Stars Make The Elements
Stars are powered by nuclear fusion – a process in which lighter atoms are squished together into heavier ones, releasing energy.
Fusion happens in stages over the star’s life. In a series of cycles, first hydrogen (the lightest element) is fused into helium, followed by the formation of heavier elements such as carbon. The most massive stars continue on to neon, oxygen, silicon and finally iron.
Each burning cycle is faster than the previous one. The hydrogen cycle can last for millions of years, while the silicon cycle is over in a matter of days.
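The accelerating rhythm of these cycles can be laid out side by side. The figures below are illustrative, order-of-magnitude textbook values for a star of roughly 25 solar masses (they are my addition, not numbers from the paper), but they show how dramatically each stage shortens:

```python
# Illustrative burning timescales for a massive star (~25 solar masses).
# Order-of-magnitude textbook figures, not values from the study.
burning_stages = [
    ("hydrogen", "helium",      "~7 million years"),
    ("helium",   "carbon",      "~700,000 years"),
    ("carbon",   "neon/oxygen", "~600 years"),
    ("neon",     "oxygen",      "~1 year"),
    ("oxygen",   "silicon",     "~6 months"),
    ("silicon",  "iron",        "~1 day"),
]

for fuel, product, duration in burning_stages:
    print(f"{fuel:>8} -> {product:<12} {duration}")
```

The final silicon stage, lasting only about a day, is why material from that layer is almost never seen far outside a star before it explodes.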
As the core of a massive star keeps burning, the gas outside the core acquires a layered structure, where successive layers record the composition of the progression of burning cycles.
While all this is playing out in the star’s core, the star is also shedding gas from its surface, carried out into space by the stellar wind. Each fusion cycle creates an expanding shell of gas containing a different mix of elements.
Core Collapse
What happens to a massive star when its core is full of iron? The great pressure and temperature will make the iron fuse, but unlike the fusion of lighter elements, this process absorbs energy instead of releasing it.
The release of energy from fusion is what has been holding the star up against the force of gravity – so now the iron core will collapse. Depending on how big it is to start with, the collapsed core will become a neutron star or a black hole.
The process of collapse creates a “bounce,” which sends energy and matter flying outwards. This is called a core-collapse supernova explosion.
The explosion lights up the layers of gas shed from the star earlier, allowing us to see what they are made of. In all known supernovae until now, this material was either the hydrogen, the helium or the carbon layer, produced in the first two nuclear burning cycles.
The inner layers (the neon, oxygen and silicon layers) are all produced in a mere few hundred years before the star explodes, which means they don’t have time to travel out far from the star.
An Explosive Mystery
But that’s what makes the new supernova SN2021yfj so interesting. Schulze and colleagues found the material outside the star came from the silicon layer, the last layer just above the iron core, which forms on a timescale of a few months.
Artist's interpretation of SN2021yfj's origin
The stellar wind must have expelled all the layers right down to the silicon one before the explosion occurred. Astronomers don’t understand how a stellar wind could be powerful enough to do this.
The most plausible scenario is that a second star was involved. If another star were orbiting the one that exploded, its gravity might have rapidly stripped the star down to its deep silicon layer.
During the Meta Connect event, the launch of Meta Ray Ban glasses was overshadowed by technical failures. AI miscommunication and unresponsive features led to a chaotic demonstration, prompting CEO Mark Zuckerberg to blame connectivity issues for the mishaps.
Meta's AI glasses demo faced a technical glitch with Zuckerberg on stage (Meta)
At the annual Meta Connect event on Thursday, the tech giant introduced its premium Meta Ray-Ban glasses with a built-in display and the performance-focused Oakley Meta Vanguard glasses. While Meta and its CEO Mark Zuckerberg hoped to showcase the company's lead in the augmented reality space with the new launches, the moment turned embarrassing as a couple of demos failed during the live stream.
Meta AI demo fail:
After unveiling the new Meta Ray-Ban glasses, Zuckerberg connected with food creator Jack Mancuso to highlight how the new glasses could be useful in day-to-day life. Mancuso used the glasses to ask for a recipe for a Korean-inspired steak sauce. Instead of providing step-by-step instructions, the AI on Mancuso’s glasses became confused and started giving out-of-sequence directions.
“You’ve already combined the base ingredients, so now grate the pear,” the AI insisted.
Mancuso tried to redirect the AI multiple times, but he ultimately blamed the problem on a “messed up Wi-Fi” and handed the stage back to Zuckerberg.
“The irony of the whole thing is you spend years making technology and then the Wi-Fi on the day catches you,” the Meta CEO commented.
The second time there was a glitch was when Zuckerberg was highlighting the usefulness of the neural wristband, which was unveiled with the Ray-Ban Glasses and is used to detect subtle hand gestures to perform actions like sending messages, controlling media, and accessing Meta AI.
While elaborating on the new abilities of the Ray-Ban display glasses, Zuckerberg was able to send a message using the wristband to Meta CTO Andrew Bosworth. However, when Bosworth called Zuckerberg, the Meta CEO could not pick up the call, with the interface on the glasses not responding to his gestures.
“That’s too bad, I don’t know what happened… You practice these things like 100 times, and then, you never know what’s going to happen,” Zuckerberg said of the incident.
After several attempts at connecting the call, Bosworth eventually walked onto the stage to manage the situation. Meanwhile, the duo once again blamed the issue on a ‘brutal’ Wi-Fi connection.
What can Meta Ray Ban glasses do?
The Meta Ray-Ban Display glasses offer a limited 20-degree field of view with a resolution of 600 x 600 pixels.
Its brightness ranges from 30 to 5,000 nits, providing decent visibility in most outdoor conditions, though it can struggle in the brightest sunlight. Some prescriptions are supported, but only as a built-to-order option.
The external camera matches past Ray-Ban glasses with a 12-megapixel sensor but falls short of new non-display models, also introduced on September 17, in video resolution and battery life.
The glasses record 1080p video and last six hours per charge, with the external case providing an additional 30 hours — roughly four full recharges.
The wristband, called the Meta Neural Band, comes in three sizes and offers 18 hours of battery life.
More than 330 websites have been linked to a phishing operation that stole over 5,000 Microsoft user credentials.
The phishing scheme targeted a wide swath of industries in the US. Image: Sirinarth Mekvorawuth/Zoonar/picture alliance
Microsoft said on Tuesday that it had seized 338 websites linked to a Nigeria-based service that allowed users to carry out phishing campaigns.
The service, called “Raccoon0365,” allowed users to engage in phishing campaigns that involved thousands of emails at a time, according to Steven Masada, assistant general counsel for Microsoft’s Digital Crimes Unit.
The phishing operation ended up stealing at least 5,000 Microsoft user credentials.
Phishing is a cybercrime in which criminals impersonate trustworthy domains to deceive users into revealing sensitive information like passwords or banking details.
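One common phishing trick of this kind is embedding a trusted login hostname inside a longer, attacker-controlled domain. The sketch below is a minimal, hypothetical illustration of that idea (the allowlist and example URLs are my own, not details from Microsoft's case):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of genuine login hosts.
TRUSTED_HOSTS = {"login.microsoftonline.com", "accounts.google.com"}

def looks_suspicious(url: str) -> bool:
    """Flag URLs whose host imitates a trusted login domain."""
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED_HOSTS:
        return False  # exact match: the real site
    # Embedding a trusted host as a substring of a longer hostname
    # (e.g. "login.microsoftonline.com.evil.example") is a phishing tell.
    return any(trusted in host for trusted in TRUSTED_HOSTS)

print(looks_suspicious("https://login.microsoftonline.com/"))                # False
print(looks_suspicious("https://login.microsoftonline.com.evil.example/x"))  # True
```

A substring check like this catches only one imitation pattern; real anti-phishing systems also handle typosquats, homoglyphs and freshly registered lookalike domains.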
How did the phishing scheme work?
Raccoon0365 operates through a private Telegram channel with over 850 subscribers.
The service enables users to impersonate trusted brand names and get targets to enter Microsoft login details on fake Microsoft platforms. According to Microsoft’s Masada, the service has generated at least $100,000 (€84,425) in cryptocurrency payments for its operators since launching in July 2024.
Raccoon0365 users targeted a wide range of industries, with a significant number of the targeted organizations based in New York City, Masada said.
How did Microsoft seize Raccoon0365?
Microsoft identified what it said was a Raccoon0365-related effort using tax-themed phishing emails to target more than 2,300 organizations, mostly in the US, in February this year, according to a company blog post published in April.
Earlier this month, Microsoft obtained an order from the US District Court in Manhattan to seize domains associated with Raccoon0365. The seizure of the websites occurred over a period of days earlier this month.
“Cybercriminals don’t need to be sophisticated to cause widespread harm,” Masada said. “Simple tools like Raccoon0365 make cybercrime accessible to virtually anyone, putting millions of users at risk,” he added.
Raccoon0365 operators used Cloudflare services to help conceal the service’s backend infrastructure. Cloudflare worked with Microsoft and the US Secret Service to take down Raccoon0365 operations and prevent the operators from establishing new accounts.
Blake Darche, the head of threat intelligence at Cloudflare, said that while Raccoon0365 operators made some operational security mistakes, they were highly effective.
“They’re in people’s accounts, they compromise lots of people, and it needs to obviously be stopped,” he said.
Google shared eight Nano Banana prompts, which you can use to create surreal and extraordinary profile pictures on Google Gemini.
Google shared these images created using Gemini’s Nano Banana. (Instagram/@googlegemin)
The Nano Banana craze has taken over social media and is still going strong. People are sharing various prompts that can be used to turn unassuming pictures into extraordinary visuals. Google, in a recent post, revealed eight such prompts that will help you create unusual profile pics.
“Looking to Nano Banana your profile pic?” read a post on the official Instagram profile dedicated to Google Gemini. The tech company also shared several images which show the incredible results created using different prompts.
How to create your perfect profile pic?
Step 1: The first step is to visit the Google Gemini app or website. Be careful and avoid fake sites or apps
Step 2: The second step is to upload a high-quality picture
Step 3: Once done, you can add your prompt.
Step 4: Click generate and wait a few seconds for the AI to generate your image.
What are the prompts you can use?
Prompt 1
“Turn me into a huge, graffiti mural on the side of a building.”
Prompt 2
“Create a custom tarot card with a detailed folk-art, vibrant color style, of me.”
Prompt 3
“Without changing my outfit, what would I look like as a racing video game character from the 90s?”
Prompt 4
“Preserving my likeness, create a ceramic mug version of my head. Make my head the entire mug.”
Prompt 5
“Turn me into a detailed amigurumi doll sitting on a shelf.”
Prompt 6
“Turn me into a simple neon sign hanging on a wall.”
Prompt 7
“Turn me into a piece of collage art using magazine cutouts glued on a piece of construction paper and framed. Replace the background with a collage as well.”
Prompt 8
“Turn me into the cover character on a worn, paperback best seller.”
Visitors give commands to a robot at Nvidia’s booth during the 3rd China International Supply Chain Expo at the China International Exhibition Center, in Beijing, July 18, 2025. (AP Photo/Mahesh Kumar A., File)
China accused Nvidia on Monday of violating the country’s antimonopoly laws and said it would step up scrutiny of the world’s leading chipmaker, escalating tensions with Washington as the two countries held trade talks this week.
Chinese regulators said a preliminary investigation found that Nvidia didn’t comply with conditions imposed when it purchased Mellanox Technologies, a network and data transmission company.
The one-sentence statement from the State Administration for Market Regulation did not mention any punishment, but said the regulator would carry out “further investigation.”
An Nvidia spokesperson said, “We comply with the law in all respects. We will continue to cooperate with all relevant government agencies as they evaluate the impact of export controls on competition in the commercial markets.”
Regulators said in December that they were investigating the company for suspected violations stemming from the $6.9 billion acquisition of Mellanox. The deal was completed in 2020 after the Chinese regulator gave conditional approval for Nvidia to buy the Israeli company.
The announcement, which came as the two sides held trade talks in Spain, is the latest tit-for-tat move between Washington and Beijing in their trade battle over technology focusing on semiconductors and the equipment to make them.
On Saturday, China’s Ministry of Commerce said it was carrying out an antidumping investigation into certain analog IC chips imported from the U.S., including commodity chips commonly made by companies such as Texas Instruments and ON Semiconductor.
The ministry also announced a separate antidiscrimination probe into U.S. measures against China’s chip sector.
A day earlier, the U.S. had sanctioned two Chinese companies accused of acquiring equipment for major Chinese chipmaker SMIC.
The talks between U.S. Treasury Secretary Scott Bessent and Chinese Vice Premier He Lifeng in Madrid concluded Monday, with Bessent telling reporters the two sides had reached a framework deal for U.S. ownership of TikTok. However, details were scant and Chinese negotiators provided no confirmation of a deal.
It feels like there are so many things constantly vying for our attention: the sharp buzz of the phone, the low hum of social media, the unrelenting flood of emails, the endless carousel of content.
It’s a familiar and almost universal ailment in our digital age. Our lives are punctuated by constant stimulation, and moments of real stillness – the kind where the mind wanders without a destination – have become rare.
Digital technologies permeate work, education, and intimacy. Not participating feels to many like nonexistence. We tell ourselves that's OK because platforms promise endless choice and self-expression, but this promise is deceptive. What appears as freedom masks a subtle coercion: distraction, visibility, and engagement are prescribed as obligations.
As someone who has spent years reading philosophy, I have been asking myself how to step out of this loop and try to think like great thinkers did in the past. A possible answer came from a thinker most people wouldn’t expect to help with our TikTok-era malaise: the German philosopher Martin Heidegger.
Heidegger argued that modern technology is not simply a collection of tools, but a way of revealing – a framework in which the world appears primarily as a resource, including the human body and mind, to be used for content. In the same way, platforms are also part of this resource, and one that shapes what appears, how it appears, and how we orient ourselves toward life.
Digital culture revolves around speed, visibility, algorithmic selection, and the compulsive generation of content. Life increasingly mirrors the logic of the feed: constantly updating, always “now” and allergic to slowness, silence and stillness.
What digital platforms take away is more than attention left “continuously partial” — they also limit the deeper kind of reflection that allows us to engage fully with life and ourselves. They make us lose the capacity to inhabit silence and confront the unfilled moment.
When moments of silence or emptiness arise, we instinctively look to others — not for real connection, but to fill the void with distraction. Heidegger calls this collective “das Man,” or the “they”: the social mass whose influence we unconsciously follow.
In this way, the “they” becomes a kind of ghostly refuge, offering comfort while quietly erasing our own sense of individuality. This “they” multiplies endlessly through likes, trends, and algorithmic virality. In fleeing from boredom together, the possibility of an authentic “I” disappears into the infinite deferral of collective mimicry.
Heidegger feared that under the dominance of technology, humanity might lose its capacity to relate to “being itself.” This “forgetting of being” is not merely an intellectual error but an existential poverty.
Today, it can be seen as the loss of depth — the eclipse of boredom, the erosion of interiority, the disappearance of silence. Where there is no boredom, there can be no reflection. Where there is no pause, there can be no real choice.
Heidegger’s “forgetting of being” now manifests as the loss of boredom itself. What we forfeit is the capacity for sustained reflection.
Boredom As A Privileged Mood
For Heidegger, profound boredom is not merely a psychological state but a privileged mood in which the everyday world begins to withdraw. In his 1929 to 1930 lecture course The Fundamental Concepts of Metaphysics, he describes boredom as a fundamental attunement through which beings no longer “speak” to us, revealing the nothingness at the heart of being itself.
Boredom is not absence but a threshold — a condition for thinking, wonder, and the emergence of meaning.
The loss of profound boredom mirrors the broader collapse of existential depth into surface. Once a portal to being, boredom is now treated as a design flaw, patched with entertainment and distraction.
Never allowing ourselves to be bored is equivalent to never allowing ourselves to be as we are. As Heidegger insists, only in the totality of profound boredom do we come face to face with beings as a whole. When we flee boredom, we escape ourselves. At least, we try to.
The problem is not that boredom strikes too often, but that it is never allowed to fully arrive. Paradoxically, even in countries drowning in technology like the US, boredom is reportedly on the rise, yet it is treated as shameful, almost as an illness. We avoid it, hate it, fear it.
Digital life and its many platforms offer streams of micro-distractions that prevent immersion into this more primitive attunement. Restlessness is redirected into scrolling, which, instead of meaningful reflection, produces only more scrolling. What disappears with boredom is not leisure, but metaphysical access — the silence in which the world might speak, and one might hear.
In this light, rediscovering boredom is not about idle time, it is about reclaiming the conditions for thought, depth, and authenticity. It is a quiet resistance to the pervasive logic of digital life, an opening to the full presence of being, and a reminder that the pause, the unstructured moment, and the still passage are not failures – they are essential.
Vitamin B3 could be the unexpected key to stopping fatty liver disease. Credit: Shutterstock
Approximately 30% of the global population is affected by metabolic dysfunction-associated steatotic liver disease (MASLD), a condition that previously lacked targeted treatments. In a groundbreaking discovery, researchers have identified a genetic factor that exacerbates the disease, and remarkably, the FDA-approved drug that most effectively targets this factor is vitamin B3.
A collaborative research team led by Professor Jang Hyun Choi from the Department of Life Sciences at UNIST, in partnership with Professor Hwayoung Yun from the College of Pharmacy and Research Institute for Drug Development at Pusan National University (PNU), and Professor Neung Hwa Park from Ulsan University Hospital (UUH), has, for the first time globally, elucidated the role of microRNA-93 (miR-93), which is expressed in the liver, as a key genetic regulator in the development and progression of MASLD.
MiR-93 is a specialized RNA molecule expressed in hepatocytes that functions to suppress the expression of specific target genes. The team observed abnormally elevated levels of miR-93 in both patients with fatty liver disease and animal models. Through molecular analysis, they demonstrated that miR-93 promotes lipid accumulation, inflammation, and fibrosis by inhibiting the expression of SIRT1, a gene involved in lipid metabolism within liver cells.
In experiments utilizing gene editing techniques to eliminate miR-93 production in mice, researchers observed a marked reduction in hepatic fat accumulation, along with significant improvements in insulin sensitivity and liver function indicators. Conversely, mice with overexpressed miR-93 exhibited worsened hepatic metabolic function.
Furthermore, screening 150 FDA-approved drugs revealed that niacin (vitamin B3) most effectively suppresses miR-93. Mice treated with niacin showed a significant decrease in hepatic miR-93 levels and a notable increase in SIRT1 activity. The activated SIRT1 restored disrupted lipid metabolism pathways, thereby normalizing liver lipid homeostasis.
The research team explained, “This study precisely elucidates the molecular origin of MASLD and demonstrates the potential for repurposing an already approved vitamin compound to modulate this pathway, which has high translational clinical relevance.”
They added, “Given that niacin is a well-established and safe medication used to treat hyperlipidemia, it holds promise as a candidate for combination therapies targeting miRNA pathways in MASLD.”
Nepal’s anti-corruption protests led by Gen Z activists toppled the KP Sharma Oli government. Now, Gen Z is using the US-based social media app Discord to decide on the country’s next leader.
Nepal Gen Z protests: Protestors were mobilised using Discord
Nepal is in a period of turmoil. After Prime Minister KP Sharma Oli’s government banned 26 social media apps including Instagram and Facebook, Gen Z protestors took to the streets. The protests soon turned violent, with at least 51 dead.
Nepal’s youth protesters say they want to put an end to years of alleged corruption in the country’s governance. With Oli out and no faith in other political leaders, Gen Z is at a crossroads in deciding the next leader, which is why protestors are flocking to the online platform Discord.
Discord was launched in May 2015 as a social platform by co-founders Jason Citron and Stanislav Vishnevskiy. Both had created social platforms for gamers before and hoped to build a chat service that would not hamper performance while playing games.
During Discord’s early days, the platform focused on allowing users to communicate with their friends without exiting a game. By the end of 2016, Discord had more than 25 million users.
The platform exploded in popularity during the pandemic, particularly amongst Gen Z. Users did not just stick to using the app while gaming. Rather, more and more Discord servers were created based on various topics of interest.
The app now calls itself a communication platform where users can engage in discussions on servers through various channels. There are also options for screen sharing and streaming, as well as moderation tools.
A Discord server is a large community space where users can gather and set up multiple channels to communicate. Channels can carry text, voice, or video. By default, a Discord server can hold as many as 5,00,000 members, though only 2,50,000 of them can be active at any given time.
How did Gen Z protestors use Discord?
Gen Z protestors have used Discord to manage their demonstrations. Following Oli’s resignation, the members used the Youth Against Corruption Discord server to decide their next leader.
As shown by India Today’s OSINT team, the Youth Against Corruption server has over 1,30,000 members. There is no real way to verify the location of these members.
The server held multiple polls to decide on Nepal’s next leader. However, there was no way to verify that everyone who voted was actually from Nepal; India Today’s OSINT team demonstrated that people outside the country could vote as well.
By Wednesday, September 10, the server had seemingly reached a consensus on Nepal’s next leader. Sushila Karki, Nepal’s former chief justice, was chosen by the Discord server.
According to South China Morning Post, 7,713 votes were cast before Karki hit 50%. She met Nepal’s president, Ram Chandra Poudel, and Army chief, Gen. Ashok Raj Sigdel, the following day.
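The 50% threshold described above is a simple strict-majority rule over votes cast. A minimal sketch of that check follows; the candidate names and vote splits are entirely hypothetical (only the 7,713 total echoes the reported figure):

```python
# Minimal sketch of a running strict-majority check, as such a poll implies.
# All candidate names and vote splits below are hypothetical, for illustration only.

def majority_reached(tally: dict) -> "str | None":
    """Return the option holding a strict majority (>50%) of votes cast, if any."""
    total = sum(tally.values())
    if total == 0:
        return None
    for option, votes in tally.items():
        if votes * 2 > total:  # strict majority, avoiding floating point
            return option
    return None

# Hypothetical snapshot of a poll in progress (7,713 votes cast in total):
snapshot = {"Candidate A": 3900, "Candidate B": 2500, "Candidate C": 1313}
print(majority_reached(snapshot))
```

With the hypothetical split above, Candidate A's 3,900 votes exceed half of the 7,713 cast, so the check reports a majority.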
Shaswot Lamichhane, a channel moderator, told the New York Times that the voting was meant to only suggest an interim leader who could oversee elections.
Why did Gen Z protestors use Discord?
On the surface, Discord may seem obscure; many people, particularly millennials, are unfamiliar with it. For Gen Z, however, it is a convenient and comfortable platform. There are no endless feeds of content, unlike Instagram or X (formerly Twitter), and it offers far more features than a messaging platform like WhatsApp.
Discord servers can allow for incredibly large groups of users to deliberate and discuss, something that becomes crucial during mass movements.
A sample obtained by NASA’s Perseverance rover of reddish rock formed billions of years ago from sediment on the bottom of a lake contains potential signs of ancient microbial life on Mars, according to scientists, though the minerals spotted in the sample also can form through nonbiological processes.
The discovery by the six-wheeled rover in Jezero Crater represents one of the best pieces of evidence to date about the possibility that Earth’s planetary neighbor once harbored life.
Perseverance scientist Joel Hurowitz of Stony Brook University, lead author of the study published in the journal Nature, said a “potential biosignature” was detected in rock that formed at a time when Jezero Crater was believed to have been a watery environment, between 3.2 and 3.8 billion years ago.
Acting NASA Administrator Sean Duffy told a news conference that the U.S. space agency’s scientists examined the data for a year and concluded that “we can’t find another explanation, so this very well could be the clearest sign of life that we’ve ever found on Mars – which is incredibly exciting.”
NASA released an image of the rock – a very fine-grained, rusty-red mudstone – bearing ring-shaped features resembling leopard spots and dark marks resembling poppy seeds. Those features may have been produced when the rock was forming by chemical reactions involving microbes, according to the researchers.
A potential biosignature is defined as a substance or structure that may have a biological origin but needs more data or further study before a conclusion can be made about the absence or presence of life.
Nicky Fox, associate administrator for NASA’s Science Mission Directorate, noted that the scientists were not announcing the discovery of a living organism.
“It’s not life itself,” Fox told the news conference.
The rover since 2021 has been exploring Jezero Crater, an area in the planet’s northern hemisphere that once was flooded with water and home to an ancient lake basin. Scientists believe river channels spilled over the crater wall and created a lake.
Perseverance has been analyzing rocks and loose material called regolith with its onboard instruments and then collecting samples and sealing them in tubes stored inside the rover.
It collected the sample named Sapphire Canyon in July 2024 from a rock called Cheyava Falls in a locale known as the Bright Angel rock formation. The sample came from a set of rocky outcrops on the edges of Neretva Vallis, an ancient river valley about a quarter of a mile (400 meters) wide carved by water rushing into the crater.
A “selfie” taken by NASA’s Perseverance Mars rover, made up of 62 individual images, on July 23, in this image released on September 10, 2025. A rock nicknamed “Cheyava Falls,” which has features that may bear on the question of whether the Red Planet was long ago home to microscopic life, is seen to the left of the rover near the center of the image. NASA/JPL-Caltech/MSSS/Handout via REUTERS
TELLTALE MINERALS
Two minerals were detected that appear to have formed as a result of chemical reactions between the mud of the Bright Angel formation and organic matter present in that mud, Hurowitz said. They are: vivianite, a mineral bearing iron and phosphorus, and greigite, a mineral bearing iron and sulfur.
“These reactions appear to have taken place shortly after the mud was deposited on the lake bottom. On Earth, reactions like these, which combine organic matter and chemical compounds in mud to form new minerals like vivianite and greigite, are often driven by the activity of microbes,” Hurowitz told Reuters.
“The microbes are consuming the organic matter in these settings and producing these new minerals as a byproduct of their metabolism,” Hurowitz said.
The rover’s instruments found that the rock was rich in organic carbon, sulfur, phosphorus and iron in its oxidized form, rust. This combination of chemical compounds could have offered a rich source of energy for microbial metabolisms, Hurowitz said.
But Hurowitz offered some words of caution.
“The reason, however, that we cannot claim this is more than a potential biosignature is that there are chemical processes that can cause similar reactions in the absence of biology, and we cannot rule those processes out completely on the basis of rover data alone,” Hurowitz said.
Mars has not always been the inhospitable place it is today; in the distant past, liquid water flowed on its surface.
The sample collected and analyzed by Perseverance provides a new example of a type of potential biosignature that the research community can explore to try to understand whether or not these features were formed by life, Hurowitz said, “or alternatively, whether nature has conspired to present features that mimic the activity of life.”
Scientists in the Netherlands discovered a new pair of salivary glands in the human throat while testing a cancer scan.
These glands could lower side effects in head and neck radiation patients.
Scientists in the Netherlands have made a surprising discovery in the human throat. While testing a new cancer scan in 2020, they accidentally found a set of glands deep in the upper part of the throat, a find that could change the understanding of human anatomy.
The newly found glands, called the ‘tubarial salivary glands,’ are believed to help keep the area behind the nose well-lubricated. Researchers think this could be important for improving the quality of life for patients undergoing radiation therapy for head and neck tumours.
The Discovery Happened By Accident During Cancer Tests
Researchers at the Netherlands Cancer Institute in Amsterdam were testing a new PSMA PET-CT scan, which combines computed tomography (CT) and positron emission tomography (PET) to detect prostate cancer. During the process, a radioactive tracer is injected into the patient’s body, allowing doctors to track its path.
While this method is usually used to find prostate tumours, the team noticed two unexpected areas lighting up in the back of the nasopharynx, the area behind the nose. These glands, about 1.5 inches long, looked similar to the major salivary glands already known.
“People have three sets of large salivary glands, but not there,” said radiation oncologist Wouter Vogel, as quoted by the Daily Mail. “As far as we knew, the only salivary or mucous glands in the nasopharynx are microscopically small and up to 1,000 are evenly spread out throughout the mucosa. So, imagine our surprise when we found these,” he added.
The glands appeared in all 100 patient scans they studied.
This Discovery Can Help Reduce Side Effects of Radiation
At the institute, Vogel and surgeon Matthijs H Valstar study how radiation affects patients with head and neck tumours. Radiation therapy can damage known salivary glands, making it hard for patients to eat, swallow, or speak.
Vogel explained, “Radiation would cause the same side effects in the submandibular salivary glands.” After examining more than 700 cases, the researchers found that the more radiation these new glands received, the worse the patients’ complications were.
Google has introduced an innovative feature called Nano Banana that allows users to easily transform photos into 3D collectible models using its Gemini 2.5 Flash Image technology.
Ever wished you could turn a simple photo into a cool 3D collectible? Thanks to Google’s quirky new feature called Nano Banana, you actually can, and it takes only a few clicks. The magic comes from Google’s Gemini 2.5 Flash Image model, but don’t worry, you don’t need expensive tools or any pro-level skills. Just your Google account, a browser, and a bit of imagination. Here’s how you can make your own action-figure-style 3D model for free.
Step 1: Open Google AI Studio
Head over to Google AI Studio in your browser. Click on Try Gemini and sign in with your Google account. If you don’t have one, it’s free and takes seconds to set up.
Step 2: Pick Nano Banana
On the right-hand side, you’ll see the option Nano Banana (Gemini 2.5 Flash Image). Click it, accept the terms, and you’re good to go.
Step 3: Upload Your Photo
Now, click the little “+” icon near Run and upload any photo you want to transform. It could be your selfie, your pet, or even a landscape shot, literally anything works.
Step 4: Run the Prompt
Paste this prompt into the chatbox and hit Run:
“Create a highly detailed 1/6 scale figurine of the character(s) from the uploaded photo in a semi-realistic style. The figurine is posed heroically on a rocky diorama base with subtle moss and stone textures, giving it a collectible showcase vibe. Place the figurine inside a glass display case with soft LED strip lighting, highlighting fine sculpt details and painted textures. In the background, show a neatly arranged shelf with other pop-culture collectibles and art books, giving the scene a studio-like atmosphere. Add a premium-looking collector’s box placed beside the display case, featuring bold, anime-inspired artwork of the character in dynamic poses. The lighting should be dramatic yet balanced, creating shadows that enhance the figurine’s depth and realism, while still maintaining a clean, polished presentation suitable for product photography.”
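For anyone who prefers code to the AI Studio interface, roughly the same request can be expressed against Google’s Gemini REST API. This is a sketch only: the model ID and endpoint below are assumptions based on Google’s published API conventions and may change, and the snippet builds the request body without sending it (an actual call needs an API key).

```python
import base64
import json

# Assumed model ID for "Nano Banana"; check Google's current model list before relying on it.
MODEL = "gemini-2.5-flash-image-preview"
ENDPOINT = f"https://generativelanguage.googleapis.com/v1beta/models/{MODEL}:generateContent"

def build_request(prompt: str, image_bytes: bytes, mime_type: str = "image/jpeg") -> str:
    """Build the JSON body: a single user turn carrying the text prompt plus the uploaded photo."""
    body = {
        "contents": [{
            "parts": [
                {"text": prompt},
                # Images are passed inline as base64 in the generateContent API.
                {"inline_data": {
                    "mime_type": mime_type,
                    "data": base64.b64encode(image_bytes).decode("ascii"),
                }},
            ]
        }]
    }
    return json.dumps(body)

# Pair the figurine prompt from Step 4 with a placeholder photo.
payload = build_request("Create a highly detailed 1/6 scale figurine ...", b"\xff\xd8placeholder")
```

POSTing `payload` to `ENDPOINT` with your API key (for example via the `x-goog-api-key` header) would, assuming the model ID is current, return the generated image as base64 data in the response.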
NASA has revealed the clearest signs of life on Mars ever after making an incredible rover discovery.
Evidence of ancient alien microbes has been uncovered in a dry river channel formed roughly 3.7 billion years ago, scientists reported Wednesday.
A sample of sediment obtained by Nasa’s Perseverance rover from the bottom of an ancient lake fed by the Neretva Vallis river channel points towards potential signs of ancient microbial life, according to scientists.
Researchers say the rocks contain tiny dark specks which are less than a millimetre in length and have been nicknamed “poppy seeds”.
Other sediments found contain larger dark-rimmed rosettes with lighter centres – dubbed “leopard spots”.
Both speckles are said to be rich in iron and phosphorus.
These chemicals can form when tiny microbes break down organic material – a sign of life back down on Earth.
Scientists also found vein-like structures believed to be white calcium sulfate.
Between these veins sat a material with a reddish colour suggesting the presence of hematite – one of the minerals which gives Mars its distinctive colour.
The samples collected by Perseverance are estimated to be between 3.5 and 3.7 billion years old.
Each of the rocks is now due for in-depth analysis. They are sealed inside Perseverance, along with a number of other rock cores, awaiting a potential trip back to Earth.
Perseverance – about the size of a standard car – has been roaming Mars since 2021.
It carries a drill to penetrate rocks and tubes to hold the samples gathered from places judged most suitable for hosting life billions of years ago.
The discovery is one of the best pieces of evidence to date about the possibility that Mars once harbored life.
Sean Duffy, the acting Nasa administrator, said: “A year ago, we thought we found what we believe to be signs of microbial life on the Mars surface.
“We put it out to our scientific friends to pressure test it, to analyse it – did we get this right? Do we think this is a sign of ancient life on Mars?
“After a year of review, they’ve come back and they said: ‘Listen, we can’t find another explanation.’
“So, this very well could be the clearest sign of life that we’ve ever found on Mars.”
Another potential explanation for the discovery is chemical processes that have taken place on Mars over millions of years, which could, in theory, create similar rock forms made up of the same elements.
But experts believe the reactions appear to have occurred at cool temperatures – potentially pointing towards a more biological origin to the sediment.
Professor Sanjeev Gupta of Imperial College London, co-author of the study, added: “It’s not a slam dunk by any means. But this is the most exciting evidence so far.”
The iPhone 17 series has launched, along with the new Air variant and Pro models that get serious upgrades this year.
Here are all the iPhone 17 variants and their price in India.
Apple’s iPhone 17 series event wrapped up on Tuesday, and we now have four new models from the company that will be vying for your money when they go on sale later this month. The new iPhone 17 series has a surprising iPhone Air addition, since the Plus variant has been removed from the lineup. You also have the iPhone 17 Pro versions, while the iPhone 17 gets not one but multiple upgrades that make it a quality buy this year.
Here are all the iPhone 17 series variants and the iPhone Air model, their prices in India and what they offer for the price.
Apple iPhone 17 Series: All Variants Explained And Price In India
iPhone 17
iPhone 17 256GB – Rs 82,900
iPhone 17 512GB – Rs 1,02,900
iPhone 17 Pro
iPhone 17 Pro 256GB – Rs 1,34,900
iPhone 17 Pro 512GB – Rs 1,54,900
iPhone 17 Pro 1TB – Rs 1,74,900
iPhone 17 Pro Max
iPhone 17 Pro Max 256GB – Rs 1,49,900
iPhone 17 Pro Max 512GB – Rs 1,69,900
iPhone 17 Pro Max 1TB – Rs 1,89,900
iPhone 17 Pro Max 2TB – Rs 2,29,900
As you can see here, the iPhone 17’s starting price has gone up, and you are getting 256GB as the base storage this year. The iPhone 17 Pro also costs a little more than before, as does the 17 Pro Max, which now gets a 2TB variant.
iPhone 17 Air Prices
iPhone 17 Air 256GB – Rs 1,19,900
iPhone 17 Air 512GB – Rs 1,39,900
iPhone 17 Air 1TB – Rs 1,59,900
And yes, the iPhone Air might be replacing the Plus variant, but its premium design and Pro-grade hardware mean it is priced like last year’s iPhone 16 Pro.
iPhone 17 Series Features Detailed
The most important design change has allowed Apple to equip the iPhone 17 Pro models with a vapor chamber cooling system. They feature an aluminum build that Apple says is durable and strong. The devices are powered by the A19 Pro chipset and promise better sustained performance. The Ceramic Shield on the back is a definite upgrade over the glass panel of previous versions.
Scientists at Stanford University have identified a specific brain region that appears to drive core autism symptoms in mice and successfully improved those behaviors using targeted treatments. The breakthrough focuses on overactive neurons deep in the brain that serve as gatekeepers for sensory information, controlling what signals reach conscious awareness.
The findings, published in Science Advances, come from mouse studies, a standard model in autism research, and more work is needed before testing in humans. Still, the results point toward potential therapies that may address autism’s biological foundations rather than just managing symptoms.
Overactive Brain Cells Disrupt Normal Function
The problem originates in a brain structure called the reticular thalamic nucleus, which acts like a traffic control system for sensory information. In healthy brains, this region determines which sensory signals (sounds, sights, and touches) warrant attention from higher brain areas. In autism-model mice, however, these neurons fired in rapid, excessive bursts that scrambled normal brain communication.
Stanford researchers studied mice engineered to lack Cntnap2, a gene strongly linked to autism in humans. These mice exhibited classic autism-like traits, including social avoidance of other mice, repetitive grooming, hyperactivity, and increased seizure susceptibility. Brain examinations revealed that reticular thalamic nucleus neurons were firing far more frequently than normal.
Scientists traced this hyperactivity to overactive calcium channels, proteins that regulate how neurons communicate. In autism-model mice, these T-type calcium channels enabled neurons to burst-fire much more easily, resulting in disrupted brain signals that manifested as behavioral symptoms.
Two Treatment Methods Show Promise
Researchers tested whether reducing this neural overactivity could restore normal behavior using two different approaches, both of which produced remarkable results.
First, they administered Z944, a drug that blocks the problematic calcium channels, to the mice. Mice receiving this treatment showed substantial behavioral improvements, including decreased hyperactivity, restored social preferences, and cessation of excessive grooming behaviors. Z944 has already undergone human testing for treating certain seizure types, which could speed its potential path to autism trials.
The second method used advanced genetic tools called DREADDs (designer receptors exclusively activated by designer drugs). Scientists modified the mice so specific neurons could be controlled using engineered proteins and matching drugs. When they used this technique to quiet reticular thalamic nucleus activity, autism-like behaviors improved substantially again.
Most convincingly, researchers demonstrated the reverse: artificially increasing activity in these brain cells caused normal mice to develop autism-like behaviors, including reduced social interaction and increased repetitive actions.
Targeting the Root Instead of Symptoms
Previous autism research concentrated mainly on the brain’s outer layer, where complex thinking occurs. But this study reveals that autism’s behavioral symptoms may actually start in a much deeper, more primitive brain region that handles basic sensory processing and attention.
The reticular thalamic nucleus connects to many brain areas involved in sensory processing, attention, and emotional regulation. When it becomes overactive, the resulting disruption affects multiple brain networks simultaneously, which explains why autism involves such varied symptoms affecting social behavior, sensory processing, and repetitive actions.
Most neurons in this brain region produce a protein called parvalbumin, which previous research has repeatedly connected to autism. Earlier studies found fewer parvalbumin-producing neurons in autism models and in brain tissue from people with autism.
Current autism treatments focus on behavioral interventions and medications that address secondary symptoms like anxiety or hyperactivity. A treatment targeting T-type calcium channels could potentially address autism’s core features directly by correcting the underlying brain dysfunction rather than managing its effects.
Moving from laboratory discovery to human treatment requires additional studies in other autism models and eventual human clinical trials. Since Z944 has already been tested in humans for other conditions, this could potentially accelerate development of autism-specific treatments based on these principles.
If these results eventually apply to humans, individuals with autism and their families could one day access more effective treatments that address the condition’s neurobiological foundation rather than just managing symptoms.
Four volunteers are preparing to step into a year-long mission designed to mirror what it might be like to live on Mars.
On October 19, Ross Elder, Ellen Ellis, Matthew Montgomery, and James Spicer will enter Nasa’s Mars Dune Alpha, a 1,700-square-foot 3D-printed habitat inside the Johnson Space Center in Houston.
They will remain inside for 378 days, concluding their mission on October 31, 2026.
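The two dates are self-consistent if the entry day counts as day 1 of the 378-day stay, which a quick check with Python’s standard library confirms (the 2025 entry year is inferred from the stated end date, not given in the text):

```python
from datetime import date, timedelta

entry = date(2025, 10, 19)             # entry year inferred from the stated end date
day_378 = entry + timedelta(days=377)  # day 1 is the entry day itself
print(day_378)  # → 2026-10-31, the stated conclusion of the mission
```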
The simulation is part of Nasa’s Crew Health and Performance Exploration Analog (CHAPEA), a series of Earth-based experiments to evaluate how humans cope with the physical and psychological demands of deep-space travel. Two alternate crew members, Emily Phillips and Laura Marie, are also on call should any of the volunteers be unable to participate.
CHAPEA missions are intended to mimic the significant challenges astronauts would face while living on the Red Planet. The volunteers will endure resource limitations, isolation, communication delays, equipment malfunctions, and high-tempo simulated spacewalks.
These scenarios are designed to provide Nasa with vital data on human health and performance for long-duration exploration.
“As Nasa gears up for crewed Artemis missions, CHAPEA and other ground analogs are helping us determine the best capabilities future astronauts will need,” said Sara Whiting, project scientist with Nasa’s Human Research Program.
During the simulation, the crew will perform daily research and operational activities, including simulated Mars walks, robotic operations, and cultivating a vegetable garden. They will also test new technologies designed for long stays on Mars, such as potable water dispensers and diagnostic medical devices.
“The simulation will allow us to measure both cognitive and physical performance under Mars-like conditions,” explained Grace Douglas, CHAPEA principal investigator. “This insight will help Nasa make key decisions to ensure future astronauts remain safe and mission-ready.”
This marks the second year-long surface simulation under CHAPEA. The first, completed in July 2024, provided baseline data that is already influencing mission planning.
Meta is reportedly hiring contractors in the United States at rates of up to $55 (around Rs 4,850) per hour to develop Hindi-language AI chatbots designed for Indian users.
These roles are part of Meta’s larger plan to expand its AI presence in fast-growing markets such as India, Indonesia, and Mexico, according to a Business Insider report.
Job listings reviewed by the publication suggest that contractors are being recruited through staffing firms like Crystal Equation and Aquent Talent.
The work mainly focuses on creating characters for chatbots that will operate across Instagram, Messenger, and WhatsApp.
Applicants are required to be fluent in Hindi, Indonesian, Spanish, or Portuguese, and must have at least six years of experience in storytelling, character development, and familiarity with AI content workflows.
However, there is no official confirmation from Meta on the hiring move, although the report found that Crystal Equation has advertised Hindi- and Indonesian-language positions on behalf of Meta, while Aquent Talent listed Spanish-language roles for what it described as a “top social media company.”
The decision to hire contractors for building localised chatbot characters highlights Meta’s effort to create digital companions that feel culturally relevant for Indian users.
CEO Mark Zuckerberg has earlier said that chatbots could “complement real-world friendships” and help people connect more easily with digital companions.
At the same time, Meta’s growing focus on AI chatbots has drawn criticism. Earlier reports suggest that some of Meta’s bots engaged in inappropriate romantic or sexual conversations with minors, gave misleading medical advice, and even produced racist responses.
Google has been fined €2.95bn (£2.5bn) by the EU for allegedly abusing its power in the ad tech sector – the technology which determines which adverts should be placed online and where.
The European Commission said on Friday the tech giant had breached competition laws by favouring its own products for displaying online ads, to the detriment of rivals.
It comes amid increased scrutiny by regulators worldwide over the tech giant’s empire in online search and advertising.
Google told the BBC the Commission’s decision was “wrong” and it would appeal.
“It imposes an unjustified fine and requires changes that will hurt thousands of European businesses by making it harder for them to make money,” said Lee-Anne Mulholland, global head of regulatory affairs at Google.
“There’s nothing anti-competitive in providing services for ad buyers and sellers, and there are more alternatives to our services than ever before.”
US President Donald Trump also attacked the decision, saying in a post on social media it was “very unfair” and threatening to launch an investigation over European tech practices that could lead to tariffs.
“As I have said before, my Administration will NOT allow these discriminatory actions to stand,” he wrote.
“The European Union must stop this practice against American Companies, IMMEDIATELY!”
Trump has repeatedly criticised the bloc’s fines and enforcement actions against US tech firms in recent months, though the US government has brought its own lawsuits over Google’s monopoly of the online ad market.
Earlier this week, the Commission denied reports it had delayed the announcement of Google’s fine amid tensions over trade relations between the EU and the US.
In its decision on Friday, the Commission accused Google of “self-preferencing” its own technology above others.
As part of its findings, it said Google had intentionally boosted its own advertising exchange, AdX, over competing exchanges where ads are bought and sold in real-time.
Competitors and publishers faced higher costs and reduced revenues as a result, it said, claiming these may have been passed to consumers in the form of more expensive services.
The regulator has ordered the company to bring such practices to an end, as well as pay the nearly €3bn penalty.
Third time rules broken
The penalty is one of the largest the Commission has handed down to date to tech companies accused of breaching its competition rules.
In 2018 it fined Google €4.34bn (£3.9bn) – accusing the company of using its Android operating system to cement itself as the dominant player in that market.
Teresa Ribera, executive vice president of the Commission, said in a statement on Friday the regulator had factored in previous findings of Google’s anti-competitive conduct when deciding to levy a higher fine.
Brain-wide map showing 75,000 analyzed neurons, each dot is linearly scaled according to the raw average firing rate of that neuron up to a maximum size. (Credit: Dan Birman, International Brain Laboratory)
When you decide to reach for your morning coffee, more than half a million brain cells spring into action across nearly every region of your brain. Scientists have now captured this moment-by-moment neural activity in unprecedented detail, revealing that decision-making is far more brain-wide than researchers once believed.
An international team published two breakthrough studies in the journal Nature showing that making choices involves coordinated activity across the entire brain, not just specialized decision centers. The discovery challenges long-held models of how the brain processes information and could reshape research into conditions like schizophrenia and autism.
International Team Records Activity from 621,733 Brain Cells
Twelve laboratories across Europe and the United States joined forces to record activity from 621,733 individual brain cells in 139 mice. After applying quality-control measures, researchers identified 75,708 well-isolated neurons for their main analyses. Using advanced electrodes called Neuropixels probes, they monitored 279 different brain areas, representing about 95% of the entire mouse brain.
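Those quality-control figures imply that only about one recorded neuron in eight survived curation; the arithmetic, using the numbers quoted above:

```python
recorded = 621_733       # single cells recorded across 139 mice
well_isolated = 75_708   # neurons retained after quality control
print(f"{well_isolated / recorded:.1%}")  # → 12.2% of recorded cells used in the main analyses
```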
“This is the first time anyone has produced a full, brain-wide map of the activity of single neurons during decision-making. The scale is unprecedented as we recorded from over half a million neurons across mice in 12 labs, covering 279 brain areas, which together represent 95% of the mouse brain volume. The decision-making activity, and particularly reward, lit up the brain like a Christmas tree,” explained Professor Alexandre Pouget, Co-Founder of the International Brain Laboratory (IBL) and Group Leader at the University of Geneva, in a statement.
The experimental task was straightforward: mice watched a screen where a light appeared on the left or right side, then turned a wheel toward the light to receive a reward. On some trials, the light was so faint it was nearly invisible, forcing the mice to guess based on patterns they had noticed in previous rounds.
Brain Activity Spreads Everywhere During Decisions
The brain map challenged the traditional view of neural processing as a tidy assembly line. Instead of sensory areas passing information to decision areas, which then activate motor regions, researchers discovered something far more interconnected.
When mice made decisions, signals appeared simultaneously across the brain. Reward responses activated regions from visual processing areas to movement control centers. Even basic sensory regions showed decision-related activity.
“We’d seen how successful large-scale collaborations in physics had been at tackling questions no single lab could answer, and we wanted to try that same approach in neuroscience. The brain is the most complex structure we know of in the universe and understanding how it drives behaviour requires international collaboration on a scale that matches that complexity,” said Professor Tom Mrsic-Flogel, Director of the Sainsbury Wellcome Centre at University College London and one of the core members of IBL.
This coordinated, brain-wide activity occurred not just during the final act of making a choice, but throughout the entire process of sensing, choosing, and acting.
Brain Makes Predictions at Every Processing Level
The second study revealed where expectations live in the brain. When mice used past experience to guess where the next light would appear, those predictions weren’t confined to higher-thinking areas as many scientists had assumed.
Instead, expectations appeared across the brain, even in the thalamus (the first relay station for visual information from the eyes). This means the brain begins making predictions about what it will see before higher brain regions have even started processing.
The widespread encoding of expectations supports the view that the brain operates as a prediction machine, constantly using past experience to anticipate what comes next. Scientists now know this predictive activity occurs at every level of brain processing, from basic sensory input to motor output.
New Research Directions for Brain Disorders
These discoveries could transform how researchers approach neurological and psychiatric conditions. Schizophrenia and autism, for example, involve difficulties with forming and updating expectations about the world. Knowing that expectations are encoded across the brain, rather than in isolated regions, opens new paths for investigation.
“Traditionally, neuroscience has looked at brain regions in isolation. Recording the whole brain means we now have an opportunity to understand how all the pieces fit together. This was too big of a project for any one lab, and a collaboration on this scale was only possible because of the dedication and talent of our staff scientists, who are the best in the business,” said Dr. Kenneth Harris, Professor of Quantitative Neuroscience at UCL and one of the core members of IBL.
“It’s immensely gratifying to see the IBL deliver the first brain-wide map of neural activity with such high spatial and temporal resolution. The map describes the activity of over 650,000 individual neurons with single-spike resolution. This activity underlies the brain’s sensory and motor activity that constitutes a decision. The map is a fantastic resource that is already being mined by myriad scientists, and yielding unexpected discoveries. It’s a great success for team science and open science,” added Dr. Matteo Carandini, Professor of Visual Neuroscience at UCL and one of the core members of IBL.
Scientists have discovered that small molecules naturally present in every cell may hold the key to understanding why cancer spreads so aggressively. Researchers have identified a specific protein that could become the target for more precise treatments.
A new study reveals that cancer cells hijack polyamines, small molecules that normally help maintain healthy cell functions, in a surprising way. Instead of supporting healthy growth, these molecules boost production of a protein called eIF5A2, which appears essential for cancer survival. Even more intriguing: eIF5A2’s nearly identical twin protein, eIF5A1, doesn’t have the same effect on cancer cells.
Published in the Journal of Biological Chemistry, the research shows that targeting eIF5A2 could offer a way to attack cancer cells while leaving healthy tissue largely unharmed.
Cancer Cells Switch Energy Systems for Rapid Growth
When researchers from Tokyo University of Science, led by associate professor Kyohei Higashi, removed polyamines from cancer cells, something unexpected happened. The cells’ entire energy production system shifted gears.
Normal cells prefer using their mitochondria, the cellular powerhouses that efficiently convert nutrients into energy. Cancer cells, however, switched to a faster but less efficient process called glycolysis, even when plenty of oxygen was available. Cancer cells choose this approach because it supports their rapid, uncontrolled growth.
The study found that polyamines orchestrate this metabolic switch by increasing levels of two key proteins: PDK1 and PKM2. Both proteins are crucial players in the glycolysis pathway that fuels cancer cell multiplication.
Using advanced protein analysis techniques on over 6,700 proteins, researchers discovered that polyamine removal affected 300 proteins, roughly 4.5% of the cell’s entire protein machinery.
Twin Proteins With Different Jobs in Cancer
eIF5A1 and eIF5A2 share 84% of their sequence, and scientists long assumed they worked similarly. But the new research reveals these molecular twins have drastically different jobs in cancer.
When researchers silenced eIF5A2 in multiple cancer cell lines including cervical and breast cancer, cell growth stopped within three days. Silencing eIF5A1 barely affected cancer growth until day five, and even then the impact was minimal. Based on their experimental results, the researchers concluded that eIF5A2 plays a more critical role in cancer cell proliferation than eIF5A1.
The difference comes down to a cellular brake system. A microRNA called miR-6514-5p normally prevents eIF5A2 production. Polyamines essentially cut those brake lines, allowing eIF5A2 to be produced freely. Meanwhile, eIF5A1 production remains under normal cellular control.
Computer simulations revealed why eIF5A2 might be more effective in cancer cells. Though the proteins differ by just 28 amino-acid building blocks, those differences cluster in the region that interacts with ribosomes, the cellular factories that make proteins.
Patient Survival Data Confirms Laboratory Findings
The laboratory discoveries translate to real patient outcomes. Analysis of breast cancer data from over 2,400 patients showed that higher eIF5A2 levels correlated with worse survival rates. eIF5A1 levels showed no such correlation.
Cancer cells also modify their protein-making factories by increasing production of specific ribosomal proteins, particularly RPS27A, RPL36A, and RPL22L1. All three have been previously linked to aggressive cancer behavior and poor patient outcomes.
Several existing drugs already target parts of the polyamine pathway. DFMO, the compound used in this study to deplete polyamines, is currently in clinical trials for various cancers. The new research suggests that combining such treatments with drugs specifically targeting eIF5A2 might prove more effective than current approaches.
The discovery changes how scientists understand cancer cell behavior. Rather than being passive victims of genetic mutations, cancer cells actively recruit natural molecules like polyamines to support their growth. By identifying eIF5A2 as a critical player in this process — and one that healthy cells don’t depend on — researchers have uncovered a potential vulnerability that could lead to more precise treatments for cancer patients.
A judge ruled on Tuesday that Alphabet’s Google (GOOGL.O) must share search data with competitors but rejected prosecutors’ bid to make the internet giant sell off its popular Chrome browser and Android operating system.
Here is what has happened so far in the case and what comes next:
The Justice Department during President Donald Trump’s first administration sues Google alleging that it illegally monopolized the online search and related advertising markets. This was the first time in a generation that the U.S. government accused a Big Tech corporation of an illegal monopoly. Prosecutors continue pursuing the case under President Joe Biden’s administration.
Google defends its practices at a trial before U.S. District Judge Amit Mehta in Washington, saying it had won its market share by providing a high-quality service.
The trial’s evidence phase concludes, after Google CEO Sundar Pichai took the stand and acknowledged the importance of making its search engine the default on phones and other devices.
Mehta hears closing arguments in the case, pressing Google on how rival search engines could compete, and whether online advertisers would substitute social media or other ads for search advertising.
The judge finds Google violated U.S. antitrust law, saying that “Google has no true competitor.”
Prosecutors propose a sweeping set of remedies they said would work in tandem to open up competition in the markets for online search and related advertising. The 10-year reformation plan includes requiring Google to sell off its Chrome browser, cease paying device makers like Apple to make it the default search engine on new devices, share data with rivals, and end its investments in artificial intelligence companies.
Google proposes a much narrower remedy that would loosen its agreements with Apple and others, calling the government’s proposal a drastic attempt to intervene in the search market.
The Justice Department, once again led by Trump appointees, backs most of the November proposals but drops a bid to make Google sell off AI investments.
Mehta kicks off a 14-day trial on the proposals where prosecutors say Google needs strong measures imposed on it to prevent its online search dominance from extending to AI. At trial, OpenAI says that proposed data-sharing remedies could help it improve ChatGPT, Google CEO Sundar Pichai says data sharing would let competitors copy Google’s product, and Google’s stock takes a hit after an Apple executive testifies that the iPhone maker plans to add AI-driven search options to its Safari browser.
After a break for both sides to file court papers, Mehta holds closing arguments in the case where he suggests he is considering less aggressive measures than the 10-year regime proposed by antitrust enforcers, citing the rapid pace of developments in the AI sector.
Google says it has hired Donald Verrilli Jr., the U.S. solicitor general during the Barack Obama administration, to handle its appeal in the case.
Researchers have successfully performed the first DNA-free gene editing on raspberry plants, marking a scientific advance that could eventually lead to improved berries without the regulatory complexities of traditional genetic modification.
The technique, described in a paper published in Frontiers in Genome Editing, uses molecular scissors called CRISPR to make precise cuts in raspberry genes, then relies on the plant’s natural repair mechanisms. Unlike conventional genetic engineering, no foreign DNA remains in the plant’s genome after editing. While regulatory frameworks for such “precision bred” crops are still evolving, early legislation in England suggests these plants may face fewer hurdles than traditional GMOs.
For raspberries, this could represent a major step forward. These berries present unique challenges for plant breeders because each seed produces a genetically different plant. Commercial growers rely on cloning techniques to maintain consistent berry quality, making traditional breeding extremely slow. Improvements that could take years through conventional methods might be achievable much faster through direct gene editing.
How Scientists Removed Cell Walls to Enable Raspberry Gene Editing
Led by Ryan Creeth, a PhD student at Cranfield University in England, researchers started with raspberry tissue cultures, then used enzymes to strip away the rigid cell walls, creating protoplasts. Without their protective barriers, these naked plant cells become permeable to gene-editing tools.
Scientists combined the protoplasts with pre-assembled CRISPR components, including Cas9 proteins and guide RNAs, along with chemicals that help the editing machinery penetrate cell membranes. After 24 hours, they extracted DNA and used sequencing techniques to confirm successful edits.
The team successfully modified a gene called phytoene desaturase with a 19% efficiency rate. That’s a notable improvement over the very low efficiencies reported in earlier Agrobacterium-based genetic modification attempts in raspberry. They also edited three other genes linked to fruit firmness and disease resistance, though with lower success rates.
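As a rough illustration of where a figure like “19% efficiency” comes from (a simplified sketch, not the paper’s actual analysis pipeline), efficiency in protoplast editing experiments is typically reported as the share of sequencing reads carrying an edit at the target site. The read counts below are invented for illustration:

```python
# Hypothetical sketch of how an editing-efficiency percentage is derived:
# the fraction of sequencing reads that carry an edit (e.g. an indel) at
# the CRISPR target site. Read counts here are invented for illustration;
# the paper reports per-target efficiencies ranging from 0.3% to 19%.

def editing_efficiency(edited_reads: int, total_reads: int) -> float:
    """Percent of reads showing an edit at the target locus."""
    return edited_reads / total_reads * 100

# e.g. a phytoene desaturase-like target: 1,900 edited reads of 10,000
print(f"{editing_efficiency(1_900, 10_000):.1f}%")  # prints "19.0%"
# versus a poorly edited target: 30 edited reads of 10,000
print(f"{editing_efficiency(30, 10_000):.1f}%")     # prints "0.3%"
```

The same read-fraction logic explains why source material quality matters so much: fewer healthy protoplasts means fewer cells taking up the CRISPR components, and the edited fraction drops accordingly.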
Editing Success Varied Dramatically Between Gene Targets
The 19% editing efficiency represents progress over existing approaches and eliminates the years of backcrossing typically required to remove unwanted genetic material from conventionally modified plants.
However, editing efficiency varied dramatically depending on the target gene, ranging from 0.3% to 19%. This variation suggests each genetic target would need extensive optimization before achieving reliable results for practical applications.
Source material quality proved critical. Protoplasts from healthy, vigorous raspberry canes produced between 1 million and 12 million usable cells per milliliter, while poor-quality plants yielded far fewer. Researchers noted the best results came from bright green stems with deep red thorns and rapidly growing tissue cultures.
Will Consumers Be Eating Gene-Edited Raspberries Anytime Soon?
The biggest obstacle is regenerating whole plants from edited protoplasts. While previous research suggests raspberry protoplasts can potentially grow into complete plants, no one has demonstrated this capability with gene-edited cells. Without successful plant regeneration, the technique remains a laboratory proof-of-concept rather than a practical breeding tool.
Cost considerations also pose challenges. The study relied on expensive commercially synthesized CRISPR components, though researchers suggested costs could decrease if laboratories produce these tools internally once targets are validated.
The research does open possibilities for addressing long-standing raspberry problems. Enhanced disease resistance could reduce pesticide use, while improved fruit firmness could extend shelf life and reduce food waste. Both outcomes would benefit farmers, consumers, and the environment.
As the researchers noted in their paper, “To our knowledge, this study constitutes the first use of DNA-free genome editing in raspberry protoplast. This protocol provides a valuable platform for understanding gene function and facilitates the future development of precision breeding in this important soft fruit crop.”
The work represents an early step in precision agriculture for raspberries, a crop that has seen limited innovation due to complex genetics. If the remaining technical hurdles can be overcome, DNA-free editing could eventually enable rapid, targeted improvements throughout the raspberry industry.
A major new study reveals that nearly half of young children diagnosed with ADHD receive medication within just one month of diagnosis, even though pediatric guidelines recommend starting with behavioral therapy first for this age group.
The largest study of its kind examined electronic health records from more than 712,000 children seen at eight major pediatric health systems across the United States between 2016 and 2023. Among the 9,708 children who received an ADHD diagnosis between ages 4 and 5, over two-thirds were ultimately prescribed medication before turning 7.
Why Guidelines Recommend Therapy First
ADHD affects an estimated 10% of American children, and diagnoses in preschoolers are becoming more common. A 2022 national survey found that 2.4% of children aged 3 to 5 had an ADHD diagnosis.
Current American Academy of Pediatrics (AAP) guidelines, reaffirmed in 2019, recommend that children aged 4 to 5 first receive evidence-based parent training in behavior management before considering medication. Medication should be considered only if these interventions prove insufficient or if symptoms cause substantial disruption.
Yet across the eight health systems studied, between 26% and 49% of preschoolers were prescribed medication within a month of diagnosis, often leaving little or no time for behavioral treatments to take effect.
The AAP recommendations exist because preschoolers’ brains are still rapidly developing, and the evidence base for behavioral interventions is stronger in this age group than for medication. Parent training helps caregivers manage challenging behaviors, establish routines, and create structured environments that can meaningfully reduce ADHD symptoms.
For children who were prescribed medication within 30 days, there often wasn’t enough time to properly implement or assess these therapies, which typically require weeks or months to show benefit.
Racial and Insurance Disparities in ADHD Treatment
The study revealed clear disparities. Asian children with ADHD were least likely to receive early medication, with only 28.6% prescribed within 30 days, compared to 43.9% of White children. Hispanic children were at 35.8%, Black children 41.8%, and multiracial children the highest at 47.7%.
These gaps persisted two years after diagnosis. White children had prescription rates of 78.2%, while Asian children had substantially lower rates at 55.6%.
Publicly insured children were more likely to receive early prescriptions than privately insured peers. Families on Medicaid often face barriers accessing behavioral therapy, making medication the more immediate option. Prior research suggests that minority families may have greater hesitancy toward ADHD medication, but also encounter obstacles to behavioral care.
Preschool ADHD Care Varies Widely
Prescription rates varied dramatically across the eight health systems, from 44.1% to 74.1%. This suggests that treatment decisions may depend as much on geography and local resources as on clinical guidelines.
Older preschoolers were more likely to receive medication quickly. The median time to prescription was zero days for 5-year-olds, compared to nearly 400 days for 3-year-olds.
Comorbidities played a role too. Children with sleep problems or disruptive behavior disorders were more likely to get early medication. About 65% of children had a documented additional condition, such as language delays or learning difficulties.
Follow-up care was inconsistent. Only about 40% of children prescribed medication had a documented follow-up within two months, though the true number may be higher since many doctors use phone calls or secure messaging not captured in records.
A System Under Strain
The study exposes how practical realities often drive treatment. Behavioral therapies require trained providers, multiple sessions, and significant family commitment. In many communities, especially for families with public insurance, such resources are scarce.
Primary care doctors face heavy caseloads and limited access to mental health specialists. Writing a prescription takes minutes, while arranging therapy can take hours, with no guarantee that services are available.
The results don’t mean early prescriptions are always inappropriate. Some children may genuinely need immediate medication, especially with severe symptoms or multiple conditions. But the scale of early prescribing points to a startling gap between recommended best practice and what families actually experience.
Meta faces scrutiny over its use of chatbots mimicking celebrities, including notable figures like Taylor Swift and Selena Gomez. A report revealed that some chatbots, specifically parody versions, were created by Meta staff, while others were user-generated.
Meta AI Chatbots
Meta is in trouble again. A recent Reuters report found that the platform hosted chatbots impersonating Anne Hathaway, Selena Gomez, Taylor Swift, and even Scarlett Johansson. Most of these chatbots were created by users with Meta’s own chatbot-building tools, but at least three, including parody bots of Taylor Swift, were created by a Meta employee.
Report On Meta’s Flirty AI Chatbots Based On Celebrities
The creation of AI chatbots that spoof celebrities is concerning enough. More troubling, the platform also allowed users to create chatbots of child celebrities such as Walker Scobell, a 16-year-old film star. When asked for a picture of the teen actor, the chatbot delivered a shirtless image and wrote, ‘Pretty cute, huh.’
These chatbots are available across Meta’s platforms, including Instagram, WhatsApp, and Facebook. Reuters also found that the virtual celebrities often claimed to be the real people, made sexual advances, and invited users to meet up.
The images generated by these chatbots are also concerning: in some cases, they were hyper-realistic pictures of the celebrities in lingerie and bathtubs. Meta spokesman Andy Stone said in response that Meta’s AI tools should not have created fabricated intimate images of adult and child celebrities, and acknowledged that the company had failed to enforce its own prohibition against the AI model producing obscene content.
The iPhone 16 Pro Max and Google Pixel 10 Pro XL share some similarities but differ in software, performance, and features.
Google Pixel 10 Pro XL vs. iPhone 16 Pro Max: The smartphone market often turns into a battleground between Apple and Google, and their latest flagships, the iPhone 16 Pro Max and Pixel 10 Pro XL, show how closely the two companies now compete. Both the iPhone 16 Pro Max and Pixel 10 Pro XL target users who want premium hardware, advanced software, and long-lasting support. But how do they stack up when placed side by side?
Google Pixel 10 Pro XL vs. iPhone 16 Pro Max: Design and Build
The Google Pixel 10 Pro XL and iPhone 16 Pro Max look surprisingly similar at first glance. Google’s latest device continues to carry forward the flat sides, rounded corners, and sleek frame that Apple popularised. Both devices now include magnets on the back for accessories and wireless charging support, though Google’s “G” logo has grown to echo Apple’s bold branding.
The Pixel 10 Pro XL weighs about 5 grams more, while the iPhone still feels sturdier in the hand. In the US, neither device includes a SIM tray, relying solely on eSIM; global versions of both, however, still allow physical SIM cards.
Google Pixel 10 Pro XL vs. iPhone 16 Pro Max: Display
Google has taken the lead in screen brightness. The Pixel 10 Pro XL features a 3,200-nit display, surpassing the already bright iPhone 16 Pro Max. Both deliver deep blacks, high refresh rates, and accurate colours. Apple’s Dynamic Island cutout, however, continues to occupy space at the top of the screen, while Google’s design offers a cleaner viewing area.
Google Pixel 10 Pro XL vs. iPhone 16 Pro Max: Software and Performance
Where the two phones differ most is software. The iPhone 16 Pro Max runs iOS with Apple’s Liquid Glass design language, offering glossy finishes and floating panels. The Pixel 10 Pro XL, on the other hand, pushes Google’s Material 3 Expressive design, which offers customisation without compromising clarity.
Apple’s visual shift has received mixed responses, particularly on accessibility. Meanwhile, Google has refined Android with smoother transitions and personalisation options. On the AI front, Google continues to deliver practical tools, while Apple’s much-anticipated Apple Intelligence features remain limited in scope.
Performance, however, favours Apple. The iPhone 16 Pro Max, powered by its A-series chip, outpaces Google’s Tensor G5 processor. While the Pixel handles tasks efficiently, Apple’s silicon maintains a significant lead in raw speed and multitasking.
Google Pixel 10 Pro XL vs. iPhone 16 Pro Max: Battery and Charging
Google has equipped the Pixel 10 Pro XL with a 5,200mAh battery, larger than the iPhone 16 Pro Max’s 4,685mAh cell. Despite this, Apple’s device still manages to deliver stronger endurance, a reflection of its optimisation between hardware and software.
Charging speeds tilt in Google’s favour. The Pixel supports 45W wired charging, giving users up to 70% in under half an hour. Apple’s iPhone 16 Pro Max caps at 30W, achieving about 50% in the same timeframe. Both support wireless options through MagSafe and Pixelsnap at 25W.
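Those charging figures can be sanity-checked with a back-of-the-envelope calculation. The sketch below assumes a 3.85 V nominal cell voltage and constant charging power, both simplifications we are introducing for illustration; real charging tapers as the battery fills, which is why actual times run well beyond these ideal values:

```python
# Back-of-the-envelope charge-time estimate. This ignores charge tapering,
# conversion losses, and thermal throttling, so it gives a lower bound.
# The 3.85 V nominal cell voltage is an assumption, not a published spec.

def ideal_minutes(capacity_mah, target_fraction, watts, nominal_v=3.85):
    energy_wh = capacity_mah / 1000 * nominal_v   # total pack energy in Wh
    needed_wh = energy_wh * target_fraction       # energy to reach the target
    return needed_wh / watts * 60                 # minutes at constant power

pixel = ideal_minutes(5200, 0.70, 45)    # Pixel 10 Pro XL: 70% at 45 W
iphone = ideal_minutes(4685, 0.50, 30)   # iPhone 16 Pro Max: 50% at 30 W
print(f"Pixel ideal: {pixel:.0f} min, iPhone ideal: {iphone:.0f} min")
# prints ideal times of roughly 19 and 18 minutes
```

Both ideal times land near 18 to 19 minutes, yet the phones need closer to half an hour in practice, a reminder that tapering and heat limits matter as much as the headline wattage.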
Google Pixel 10 Pro XL vs. iPhone 16 Pro Max: Camera Capabilities
For photography, Google leans heavily on computational enhancements. The Pixel 10 Pro XL introduces Pro Res Zoom, an AI-powered feature that improves images taken at extreme zoom levels by optimising blurred or blocky details. Traditional users may prefer natural optics, but the option to switch back to originals gives flexibility.
Apple, on the other hand, retains its advantage in video. While Google’s Video Boost competes well, it still requires online processing, limiting its usefulness offline. In still photography, Apple continues to produce natural-looking results, whereas Google emphasises a wider spectrum of skin tones and HDR balance.
Intel (INTC.O) said on Friday it amended the CHIPS Act funding deal with the U.S. Department of Commerce to remove earlier project milestones and received about $5.7 billion in cash sooner than planned.
The move will give Intel more flexibility over the funds.
The amended agreement, which revises a November 2024 funding deal, retains some guardrails that prevent the chipmaker from using the funds for dividends and buybacks, doing certain control-changing deals and from expanding in certain countries.
As part of the deal, Intel issued the U.S. government 274.6 million shares and promised the government the option to buy up to 240.5 million more shares under certain conditions.
Intel said it has set aside 158.7 million shares in an escrow account to be released after the government makes available more CHIPS funds for the Secure Enclave program, designed to expand advanced chips manufacturing.
The company also said it has spent at least $7.87 billion on eligible CHIPS Act-funded projects.
The U.S. government’s move to take a 9.9% equity stake in Intel sparked questions about the outlook for corporate America after President Donald Trump said he plans to do other similar deals.
The government’s $8.9 billion investment is in addition to the $2.2 billion in grants Intel has previously received, making for a total investment of $11.1 billion, the company has said.
The Intel stake, announced by the U.S. government last week, is an incentive for the chipmaker to retain control of its contract manufacturing business, or foundry, Intel’s finance chief David Zinsner said at an investor conference on Thursday.
Apple Event invites hints at two crucial features for the iPhone 17 Pro and iPhone 17 Pro Max. Here’s what we speculate to see during launch.
The iPhone 17 series launch is confirmed for September 9, 2025, and Apple has officially sent invites for its “Awe Dropping” event. While we await the big unveiling of new-generation Apple products, the event invite itself may have revealed two iPhone 17 Pro features that have been rumoured for months: new colour variants and a vapour chamber cooling system. We cannot be sure until the launch, but here is what the new event logo design suggests the iPhone 17 Pro could get.
iPhone 17 Pro: Apple hints at 2 crucial upgrades via event invite
Over the past few months, several rendered images of iPhone 17 Pro colour variants have surfaced, with some leaks suggesting that new orange and dark blue options will be introduced. The Apple Event logo also contains hues of orange and dark blue, which could be one of the big hints. Reportedly, the iPhone 17 Pro could come in five colour options: Black, White, Gray, Dark Blue, and Orange.
Secondly, the iPhone 17 Pro models are also expected to get a vapour chamber cooling system for thermal management. The event logo reportedly resembles an infrared heat map when captured with a thermal camera, which could be the biggest hint yet at an upgraded cooling system for the iPhone 17 Pro models. However, we will have to wait until September 9, 2025, to confirm these speculations.
iPhone 17 Pro: What to expect
The iPhone 17 Pro and iPhone 17 Pro Max will retain a similar size to their predecessors, but with a slightly new design. Both models are expected to come with a new camera island, with a similar camera lens placement. The devices will be powered by a new A19 Pro chip, which will likely bring more powerful and efficient performance.
Most of us already know that water is essential for staying healthy. But new research suggests that how much you drink each day could also affect how your body reacts to stress.
In a lab experiment, young adults who typically drank less than a liter and a half of fluid per day had a much bigger spike in the stress hormone cortisol compared to those who drank closer to four liters. That’s about a 55 percent stronger stress response, just from differences in everyday drinking habits.
Cortisol is the body’s main “stress chemical.” It helps us deal with challenges in the short term, but when it stays too high for too long, it can wear us down. That makes this study, published in The Journal of Applied Physiology, a reminder that hydration may play a bigger role in long-term health than most people realize.
Why Hydration Matters for Stress
We usually think of being dehydrated in terms of feeling thirsty, tired, or having a headache. But inside the body, water balance is tied to hormones that do more than just keep fluids in check.
One of them, arginine vasopressin (AVP), helps the body save water when you’re running low. But AVP also has another side effect: it tells your body’s stress system to release more cortisol.
In other words, when you’re habitually under-hydrated, the very system that responds to stress gets triggered more easily, leaving you prone to a bigger hormonal surge when life throws a challenge your way.
Inside the Study
Researchers from Liverpool John Moores University and collaborators recruited 62 healthy adults aged 18 to 35. Using national nutrition survey data, they identified two extremes of daily drinking habits: low drinkers, who habitually consumed less than 1.5 liters of fluid per day, and high drinkers, who consumed closer to 4 liters per day.
After screening, 32 participants (16 in each group) completed the experiment. For a week, they tracked exactly how much they drank, and researchers confirmed hydration status with urine tests. Then came the stressful part: the Trier Social Stress Test, a well-established lab challenge involving a mock job interview and rapid-fire mental math performed in front of stony-faced observers and a video camera.
Saliva samples collected before, during, and after the test allowed researchers to track cortisol, while heart rate monitors and questionnaires measured anxiety and physical arousal.
What They Found
On the surface, both groups reacted similarly: their hearts raced, and they reported feeling more anxious. But under the hood, their hormones told a different story. In the low drinker group, cortisol levels spiked and stayed high even after the stressful tasks ended. In the high drinker group, cortisol barely rose above baseline.
On average, the low drinkers showed a 55 percent bigger cortisol jump.
“These novel findings show greater cortisol reactivity to acute psychosocial stress in adults with habitual low fluid intake and suboptimal hydration,” the study notes.
A Simple Way to Gauge Your Dehydration Risk
The study also looked at practical indicators of hydration that everyday people can check without lab equipment. One of the simplest? Urine color.
Using a standard eight-point chart, they found that participants with darker urine (levels 4 or higher) had stronger cortisol responses than those with lighter urine. Put simply: if your pee is regularly dark yellow in the morning, your body might also be primed for a sharper stress surge.
Lab tests of urine concentration confirmed the same trend: more concentrated urine went hand in hand with bigger cortisol spikes.
Why This Matters for Long-Term Health
Cortisol isn’t always bad. In short bursts, it helps us mobilize energy, sharpen focus, and respond to challenges. But when levels stay high too often thanks to chronic stress, poor sleep, or perhaps inadequate hydration, the consequences can add up.
Elevated cortisol has been linked to weakened immune defenses, higher inflammation, and increased risk for conditions like cardiovascular disease and metabolic disorders.
Previous studies have already found that people who don’t drink enough water face higher risks of kidney problems, heart issues, and metabolic disease. This new work provides a plausible mechanism: by ramping up the body’s stress response, habitual low fluid intake may nudge people onto a path toward poorer health.
How Much Water Is Enough?
Guidelines generally recommend about 2.5 liters per day for men and 2 liters for women, including both drinks and the water naturally present in foods. Yet surveys show many adults fall short, especially those who rely heavily on coffee, soda, or alcohol rather than plain water.
The high-intake group in this study was well above those recommendations, while the low group was far below. That doesn’t mean everyone should chug four liters a day, but it does underscore the importance of meeting basic fluid needs.
A Few Caveats
The study has its limitations. It compared only the highest and lowest drinkers, leaving out people with moderate intake. The design was cross-sectional, meaning it can’t prove that drinking more water will reduce stress responses—only that the two are linked. And the stress test, while reliable, takes place in a controlled lab; real-world stress is messier.
SpaceX’s Starship lifts off successfully, marking its 10th test flight for future Mars mission
The Starship Super Heavy launches on 10th test flight. (Photo: SpaceX)
SpaceX successfully launched its Starship rocket on Wednesday, marking the long-anticipated 10th integrated test flight of the world’s most powerful launch vehicle.
The liftoff came after two consecutive scrubs earlier this week, on Monday and Tuesday, caused by weather constraints and technical checks.
The latest mission, flown from the company’s Starbase facility in Boca Chica, Texas, carries high stakes for SpaceX’s ongoing push to validate Starship’s reusability features. Central to the objectives is a complex series of experiments with the Super Heavy booster, designed to enhance landing precision and reliability for future operational flights.
Minutes after liftoff, the vehicle completed a successful hot-staging manoeuvre, with the Starship upper stage separating cleanly from the Super Heavy booster.
Unlike previous tests that attempted experimental land-based recoveries, the booster on this mission targeted a controlled splashdown in the Gulf of Mexico.
SpaceX engineers programmed multiple landing-burn sequences to test the vehicle’s ability to restart engines at different phases of descent. The booster executed a dramatic “flip manoeuvre” shortly after stage separation, followed by a boostback burn to guide its trajectory toward the ocean landing zone.
Such manoeuvres are essential for developing the precise control required for eventual catch attempts using the launch tower’s mechanical arms, an ambitious method SpaceX hopes to perfect later in its test campaign.
By rehearsing these intermediate steps over water, the company minimises risk while gathering valuable performance data.
The Starship upper stage continued its ascent to near-orbital velocity, with the goal of demonstrating improved thermal protection and structural endurance during reentry. Engineers also relit one of Starship’s engines in space ahead of reentry, roughly 45 minutes after launch from Texas.
Starship represents a cornerstone of SpaceX’s long-term vision, with applications ranging from rapid point-to-point travel on Earth to future Artemis Moon landings and crewed Mars missions.
The dazzling “RBFLOAT” radio burst, originating nearby in the Ursa Major constellation, offers the clearest view yet of the environment around these mysterious flashes. (Credit: Danielle Futselaar)
Astronomers are used to puzzling signals from deep space, but every so often the universe delivers one so unusual it forces them to pause and reconsider their assumptions. That’s what happened this spring when researchers detected a sudden flash of radio energy from another galaxy. The burst was so powerful and precise that it’s now considered one of the clearest examples yet of a mysterious phenomenon called a fast radio burst, or FRB.
This particular signal, officially cataloged as FRB 20250316A and colloquially nicknamed RBFLOAT (“Radio Brightest Flash Of All Time”), was detected on March 16, 2025. It lasted less than a thousandth of a second, but in that blink it unleashed a torrent of radio energy before vanishing without a trace. Months of careful follow-up have revealed nothing more. No repeats. No echoes. Just silence.
And that silence may be the most important part of the story.
What Exactly Are Fast Radio Bursts?
Fast radio bursts rank among astronomy’s biggest mysteries. First discovered in 2007, FRBs are brief but powerful flashes of radio waves that can outshine entire galaxies during their short lifetimes. Telescopes have now recorded thousands of them, but their origins remain hotly debated.
The biggest puzzle is that FRBs don’t all behave the same way. Some erupt only once, like cosmic fireworks. Others repeat, flickering again and again from the same spot, more like a lighthouse sweeping the sea. Until now, many astronomers suspected that maybe all FRBs eventually repeat — and that we simply hadn’t watched long enough to catch the “quiet” ones in the act.
Pinpointing the Source
The discovery came from the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, a unique radio telescope in British Columbia that scans the entire northern sky daily. CHIME has become one of the world’s leading FRB detectors, and when RBFLOAT lit up its antennas, it also triggered three “outrigger” stations spread across North America.
This network allowed astronomers to triangulate the burst’s origin with extraordinary precision. Using a method called very long baseline interferometry, they pinpointed it to the spiral galaxy NGC 4141, located about 40 megaparsecs away. That’s roughly 130 million light-years. For cosmic distances, that’s relatively nearby.
Even more impressive, they narrowed its location within that galaxy to a region about 13 parsecs wide, or about 42 light-years. To put that in perspective, that’s like being able to say which neighborhood a lightning strike hit, except the “neighborhood” is in another galaxy altogether.
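Both distance figures are simple unit conversions (one parsec is roughly 3.2616 light-years); a quick check of the article’s numbers:

```python
# Sanity-check the article's distance conversions.
LY_PER_PARSEC = 3.2616  # one parsec in light-years

# 40 megaparsecs expressed in millions of light-years
mpc_to_mly = 40 * 1e6 * LY_PER_PARSEC / 1e6

# the 13-parsec localization region in light-years
region_ly = 13 * LY_PER_PARSEC

print(round(mpc_to_mly))  # ~130 million light-years
print(round(region_ly))   # ~42 light-years
```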
The Mystery of Silence
What came next deepened the puzzle. After the initial flash, telescopes in North America, Europe, and beyond spent hundreds of hours re-watching the same patch of sky. RBFLOAT never repeated.
That silence is striking because the burst was so bright. Based on what’s been learned from repeating FRBs, astronomers would expect to see at least a few smaller follow-up flashes. Instead, the source has stayed quiet. The researchers calculated that the chance of detecting just a single giant burst without also seeing weaker ones is less than 1 in 500,000 — effectively zero.
That result suggests RBFLOAT does not behave like the repeaters scientists have studied closely. It may belong to a different category altogether.
A Different Kind of Neighborhood
Another clue comes from the burst’s surroundings. Many repeating FRBs are found in extreme cosmic neighborhoods — turbulent star-forming regions thick with gas and strong magnetic fields. These environments can twist and amplify radio signals, and they often produce a steady background glow of radio energy.
RBFLOAT, by contrast, comes from the quieter edge of a star-forming clump. Astronomers detected no compact, persistent radio source at the site, setting limits at 9.9 GHz that are about 100 times fainter than the radio companions seen with some repeaters. The environment also shows a slightly lower level of heavy elements (“subsolar metallicity”) and a much smaller local contribution to the burst’s signal compared with active repeaters like FRB 20121102A.
All of this points to a calmer, less extreme neighborhood, again hinting at a different origin.
What Could Cause The Fast Radio Burst?
So what could create such a powerful but fleeting flash? The leading suspects for repeating FRBs are magnetars, neutron stars with magnetic fields trillions of times stronger than Earth’s. Magnetars can generate bursts over and over again, making them a good match for repeaters.
But for one-off events like RBFLOAT, scientists think more dramatic explanations may be needed. Possible scenarios include a supernova (the collapse of a massive star), a binary neutron star collision, or a runaway star that eventually produced a magnetar in situ. The research team even notes they can’t fully rule out the possibility that a gravitational-wave event — a violent merger — might have happened decades or centuries before this FRB, leaving behind the object that produced the burst we saw in 2025.
In other words, the story of RBFLOAT’s origin may stretch far back in time.
Why It Matters
Fast radio bursts aren’t just cosmic oddities, they’re also valuable tools. As their radio waves travel through space, they interact with gas, plasma, and other matter, picking up subtle signatures that astronomers can study. In effect, each FRB acts like a cosmic flashlight shining through the darkness, illuminating otherwise invisible parts of the universe.
But if there are really two different populations of FRBs — repeaters and one-offs — scientists will need to treat them separately. Mixing them together could cloud the picture when using FRBs to study the structure of the universe.
Just as important, the technology that made this discovery possible represents a leap forward for the field. The authors write that their work “marks the beginning of an era of routine localizations for one-off FRBs on tens of milliarcseconds scales, enabling large-scale studies of their local environments.” In simpler terms, astronomers are now able to pinpoint these mysterious signals with unprecedented accuracy, moving FRB science into a new stage of precision.
Preliminary observations suggest that the Jurassic-era fossil was a phytosaur (“plant lizard”), an ancient reptile that lived in forest habitats near river ecosystems.
The skeleton measures approximately 1.5–2 meters in length, suggesting a medium-sized creature.
Geologists in Rajasthan’s Jaisalmer have uncovered rare vertebrate fossils from the Jurassic era, potentially including dinosaur remains, shedding new light on the region’s prehistoric ecology and biodiversity.
Preliminary analysis indicates the fossils belong to a phytosaur, a crocodile-like reptile that thrived in forested areas near river ecosystems. This marks the first discovery of phytosaur fossils from Jurassic rocks in India, a significant milestone in the country’s paleontological history.
The 210-million-year-old fossils, embedded in sedimentary rock formations, may also include remains of other prehistoric reptiles that inhabited the region.
The skeleton measures approximately 1.5–2 meters in length, suggesting a medium-sized creature, and notably, an egg was found on the left side of the specimen.
According to experts, the discovery is of immense scientific value, shedding light on the region’s prehistoric ecology and biodiversity. Preliminary studies suggest that the fossils could represent vertebrate remains rarely found in India, making the find both rare and globally relevant.
Narayan Das Inikhiya, Geologist and Senior Ground Water Scientist, Rajasthan, who is leading the study, said that these fossils “provide vital clues about the Jurassic age environment of western Rajasthan. Detailed research and carbon dating will help confirm whether these are indeed dinosaur remains.”
Visuals from the site show large fossilised fragments being carefully examined and documented by researchers. Local authorities and the scientific community are now working on preserving the fossils and planning further excavation.
The cancellation is another setback for SpaceX, which is known for its “fail fast, learn fast” ethos.
The Starship rocket was to launch from SpaceX’s Starbase in Texas. Image: Steve Nesius/REUTERS
SpaceX said on Sunday that it had canceled a test flight of its Starship rocket, citing an issue with the launch site.
“Standing down from today’s tenth flight of Starship to allow time to troubleshoot an issue with ground systems,” the firm said on the platform X.
What else do we know about the canceled Starship launch?
The rocket was due to launch from the company’s Starbase in southern Texas and would have marked the tenth mission from the site.
The launch was to be carried out at 6:30 p.m. local time (23:30 GMT), but was called off just 15 minutes before liftoff.
Earlier on Sunday, SpaceX founder Elon Musk had posted on his X platform “Starship 10 launching tonight.”
SpaceX did not say whether there would be another launch attempt.
The canceled launch is the latest setback for Musk’s company.
This year, two Starship tests failed early in flight, another failed in space, and a “static fire” test produced a ground explosion in June.
What do we know about the Starship rocket?
The massive Starship rocket is a major part of Musk’s plans to bring humans to Mars.
Earlier this year, Musk said that Starship would leave for Mars at the end of 2026 while carrying a humanoid robot. He said he believed human landings could follow sometime between 2029 and 2031.
X-37B space plane to trial quantum navigation as a future alternative to GPS
On August 21, 2025, the U.S. military’s X-37B Orbital Test Vehicle is scheduled to launch on its eighth mission. Though much of the payload remains classified, one experiment in particular has captured the imagination: a quantum inertial sensor meant to serve as a major new alternative to GPS. Such a system could revolutionise navigation wherever satellite-based positioning is unavailable or degraded, whether in deep space, under the sea, or in contested regions on land.
X-37B Space Plane to Pioneer Quantum Navigation as GPS Alternative in Space
According to reports, satellite GPS powers everything from civilian smartphones to commercial aviation, but it has critical weaknesses. Signals degrade in space, can’t get through water, and are subject to jamming and spoofing in contested environments. Researchers said the X-37B’s quantum inertial sensor relies on atom interferometry, where ultracold atoms behave like waves. By measuring interference patterns caused by motion, the sensor provides navigation with exceptional accuracy, without depending on external signals.
Traditional inertial navigation systems, though useful, accumulate small errors over time, drifting from their true position without GPS correction. Quantum sensors, by contrast, use identical atoms immune to mechanical bias, offering orders of magnitude greater stability. Earlier missions, including NASA’s Cold Atom Laboratory and Germany’s MAIUS-1, have performed atom interferometry in orbit, but this flight is the first attempt to do it directly for long-duration navigation.
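To illustrate the drift problem described above (a sketch of the general physics, not of the X-37B experiment itself): a constant accelerometer bias, integrated twice into a position estimate, produces an error that grows quadratically with time, which is why classical inertial systems need periodic GPS correction.

```python
# Illustrative sketch: position error from a constant accelerometer
# bias b after time t is (1/2) * b * t^2 (double integration of
# acceleration into position).
def drift_from_bias(bias_m_s2: float, hours: float) -> float:
    """Position error in meters from a constant accel bias after `hours`."""
    t = hours * 3600.0
    return 0.5 * bias_m_s2 * t * t

# Even a ~1 micro-g bias (about 9.8e-6 m/s^2) accumulates roughly
# 37 km of position error over a single day without correction.
print(drift_from_bias(9.8e-6, 24) / 1000)  # drift in km after 24 h
```

The numbers are illustrative, but they show why a bias-free quantum sensor is attractive for long-duration missions.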
Some experts suggest that GPS-free navigation would make the military more resilient and enable autonomous navigation on space-exploration missions. The same principle is being explored for submarines and aircraft: in 2024, Boeing and AOSense demonstrated an onboard quantum sensor for GPS-free aircraft navigation, while the UK carried out its first quantum navigation test flight.
Veo 3 is Google’s most advanced video generation model yet. It builds on the capabilities of Veo 2 with higher video quality and includes automatic audio generation.
Google Veo 3 is free for everyone in India.
Google CEO Sundar Pichai has announced that the company’s advanced AI video tool Veo 3 will be free for everyone to try for the weekend. The tool is usually available only with a paid subscription; this is the first time the US-based tech giant has unlocked the platform for everyone.
The announcement was made through a post on social media platform X. “Veo 3 is now free for the weekend for everyone to try – can’t wait to see what you’ll create,” Pichai wrote.
This means anyone can use Google’s Veo 3 without paying for a subscription. Normally, the AI tool costs Rs 1,999 per month in India. New users do get a one-month free trial, but this is the first time Google has unlocked the platform for everyone, including existing Pro users, without any subscription limits.
What Is Google Veo 3?
Veo 3 is Google’s most advanced video generation model yet. It builds on the capabilities of Veo 2 with higher video quality and includes automatic audio generation. This means the model can add realistic background sounds like traffic noise, birds chirping, or even character dialogue, all based on a simple prompt.
Veo 3 also supports text and image prompts, understands storylines, and can sync lip movements accurately.
Unlike basic video generators, Veo 3 doesn’t just create visuals, it also generates synchronised audio, including dialogues, background music and sound effects. You can create complete short films, cinematic clips, or even animated videos without needing cameras, microphones, or editing software.
Google Veo 3 Features:
– You can type a description, and it creates a video automatically.
– It supports different styles like cinematic, animated, artistic, or realistic.
– The video quality is sharp and smooth.
– It can make longer clips instead of just short ones.
– You can edit or fix only one part of the video without starting over.
– It accepts text, images, or audio as input to guide the video.
– It connects with other Google tools for easy editing and sharing.
NASA has unveiled a breathtaking new image showing what appears to be a massive “cosmic hand” stretching across 150 light-years of space, created by one of the galaxy’s most powerful electromagnetic generators.
The striking composite combines X-ray data from NASA’s Chandra X-ray Observatory with fresh radio observations from Australia’s telescope array, giving scientists their most detailed view yet of pulsar B1509-58 and the spectacular nebula it powers.
The “cosmic hand” spans 150 light-years — nearly 900 trillion miles — across space while at the heart of the display lies a neutron star just 12 miles across, spinning nearly seven times per second.
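The mileage figure is a direct conversion, taking a light-year as roughly 5.88 trillion miles:

```python
MILES_PER_LIGHT_YEAR = 5.88e12  # ~5.88 trillion miles per light-year

# 150 light-years, expressed in trillions of miles
span_trillions_of_miles = 150 * MILES_PER_LIGHT_YEAR / 1e12
print(span_trillions_of_miles)  # ~882, i.e. nearly 900 trillion miles
```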
NASA’s Chandra X-ray Observatory captured the “cosmic hand,” a pulsar-powered nebula stretching 150 light-years across space. NASA/CXC/Univ. of Hong Kong/Zhang et al./SWNS
A pulsar is a type of neutron star, which is the dense, collapsed core left behind after a massive star explodes in a supernova. A nebula is a giant cloud of gas and dust in space.
Despite its small size, this collapsed stellar core unleashes staggering amounts of energy.
Some nebulae are the birthplaces of new stars, formed from collapsing clouds of hydrogen.
Others, like the one around pulsar B1509-58 (nicknamed the “cosmic hand”), are remnants of exploded stars — debris blasted into space by a supernova.
The pulsar’s magnetic field is estimated to be 15 trillion times stronger than Earth’s, strong enough to drive a torrent of charged particles outward and shape them into the hand-like structure known as MSH 15-52.
This pulsar was born when its parent star exhausted its nuclear fuel, causing it to collapse in on itself before exploding outward as a supernova, scattering debris into space.
The intense spin and magnetism of the leftover core turned it into one of the galaxy’s most powerful particle generators.
NASA first captured the “cosmic hand” in 2009, but the new image reveals previously unseen details.
The radio data highlight intricate filaments tracing the nebula’s magnetic field, created as the pulsar’s wind collides with the expanding debris from the original explosion.
Intriguingly, the new study shows clear differences between X-ray and radio emissions.
Features such as a jet near the pulsar and the inner regions of three “fingers” glow brightly in X-rays but vanish in radio light.
This undated photo released by Google shows the Pixel 10 Pro, a new smartphone loaded with an array of artificial intelligence features. (Google LLC via AP)
Google on Wednesday unveiled a new line-up of Pixel smartphones injected with another dose of artificial intelligence that’s designed to do everything from fetch vital information stored on the devices to help improve photos as they’re being taken.
The AI expansion on the four Pixel 10 models amplifies Google’s efforts to broaden the use of a technology that is already starting to reshape society. At the same time, Google is taking a swipe at Apple’s Achilles’ heel on the iPhone.
Apple so far has only been able to introduce a few basic AI features on the iPhone while failing to deliver on last year’s promise to deliver a more conversational and versatile version of its often-blundering virtual assistant Siri.
Without mentioning the iPhone by name, Google has already been mocking Apple’s missteps in online ads promoting the four new Pixel models as smartphones loaded with AI technology that consumers won’t have to wait more than a year for.
“There has been a lot of hype about this and, frankly, a lot of broken promises, too,” Google executive Rick Osterloh said during a 75-minute presentation in New York about the new Pixel phones. The event was emceed by late-night TV show host Jimmy Fallon.
Google, in contrast, has been steadily increasing the amount of AI that it began to implant on its Pixels since 2023, with this year’s models taking it to another level.
“We think this year we have a game-changing phone with game-changing technology,” Osterloh said.
Taking advantage of a more advanced processor, Google is introducing a new AI feature on the Pixel 10 phones called “Magic Cue” that’s designed to serve as a digital mind reader that automatically fetches information stored on the devices and displays the data at the time it’s needed. For instance, if a Pixel 10 user is calling up an airline, Magic Cue is supposed to instantaneously recognize the phone number and display the flight information if it’s in Gmail or a Google Calendar.
The Pixel 10 phones will also come with a preview feature of a new AI tool called “Camera Coach” that will automatically suggest the best framing and lighting angle as the lens is being aimed at a subject. Camera Coach will also recommend the best lens mode to use for an optimal picture.
The premium models — Pixel 10 Pro and Pixel 10 Pro XL — will also include a “Super Res” option that deploys a grab bag of software and AI tricks to zoom up to 100 times, capturing the details of objects located miles away from the camera. The AI wizardry can kick in without users even realizing it, making it harder to know whether an image captured in a photo reflects how things really looked at the time the picture was taken or was modified by technology.
The Pixel 10 will also be able to almost instantaneously translate phone conversations into a range of different languages using the participants’ own voices.
Google is also offering a free one-year subscription to its AI Pro plan to anyone who buys the more expensive Pixel 10 Pro or Pixel 10 Pro XL models in hopes of hooking more people on the Gemini toolkit it has assembled to compete against OpenAI’s ChatGPT.
The prices on all four Pixel 10 models will remain unchanged from last year’s Pixel 9 generation, with the basic model starting at $800, the Pro selling for $1,000, the Pro XL at $1,200 and the foldable version at $1,800. All the Pixel 10s except the foldable model will be in stores on August 28. The Pixel 10 Pro Fold will be available starting October 9.
Although the Pixel smartphone remains a Lilliputian next to the Gulliverian stature of the iPhone and Samsung’s Galaxy models, Google’s ongoing advances in AI while holding the line on its marquee devices raise the competitive stakes.
“In the age of AI, it is a true laboratory of innovation,” Forrester Research analyst Thomas Husson said of the Pixel.
Apple, in particular, will be facing more pressure than usual when it introduces the next-generation iPhone next month. Although the company has already said the smarter Siri won’t be ready until next year at the earliest, Apple will still be expected to show some progress in AI to demonstrate the iPhone is adapting to technology’s AI evolution rather than tilting toward gradual obsolescence. Clinging to a once-successful formula eventually sank the BlackBerry and its physical keyboard when the iPhone and its touch screen came along nearly 20 years ago.
The human brain and nervous system. (Credit: Shot4Sell/Shutterstock)
A tiny stretch of DNA that’s been quietly evolving in humans for millions of years might hold the key to understanding what makes our brains different from our closest animal relatives. New research reveals that this genetic sequence, which rapidly changed after humans split from chimpanzees, plays a crucial role in brain development and could help explain uniquely human traits like advanced problem-solving and cognitive flexibility.
The discovery centers on a 442-letter genetic sequence called HAR123, buried deep within a gene that most people have never heard of. This DNA snippet is present in mammals and marsupials but absent in monotremes like the platypus, and it has undergone dramatic changes in the human lineage since the human–chimpanzee split. Scientists at the University of California San Diego found that this rapidly evolving sequence acts like a genetic switch, controlling the development of brain cells in ways that differ subtly between humans and other species.
When researchers knocked out this sequence in laboratory mice, the animals developed problems with cognitive flexibility, struggling to adapt when familiar situations suddenly changed. In essence, they became less able to adjust their thinking when their environment shifted.
HAR123 belongs to a special class of genetic sequences called Human Accelerated Regions (HARs), stretches of DNA that remained virtually unchanged for millions of years across different species, then suddenly started evolving rapidly in the human lineage. Scientists have identified about 3,000 of these sequences, and many of them act as genetic switches that control when and where other genes get turned on or off. HAR123 fits this pattern perfectly, functioning as what scientists call a “transcriptional enhancer,” essentially a genetic dimmer switch that can turn up or down the activity of nearby genes.
The research team, led by Dr. Kun Tan and Dr. Miles Wilkinson, discovered that HAR123 influences a gene called HIC1, which is involved in the generation of neural progenitor cells. When HAR123 activates HIC1, it helps ensure that developing brain cells mature into neurons rather than getting stuck in an immature state.
HAR123 actively promotes the formation of neural progenitor cells, the crucial building blocks that eventually become neurons and other brain cells. The hippocampus is a brain region critical for learning and memory, and the balance between neurons and support cells in this area appears to be essential for healthy brain function.
Lab Tests Show Cognitive Differences in Modified Mice
To understand what HAR123 actually does, the researchers conducted a series of experiments. They started with human embryonic stem cells and guided them through the process of becoming brain cells. When they removed HAR123 from these cells using precise genetic editing tools, the cells struggled to develop into proper neural progenitor cells.
The team then created mice with the HAR123 sequence completely removed. These knockout mice appeared normal on the surface. They could run, eat, and reproduce just fine. However, when scientists put them through cognitive tests, a specific problem emerged.
In one test, mice learned to find a hidden escape platform in a water maze by using visual cues around the room. Both normal mice and HAR123-knockout mice mastered this task equally well. But when researchers moved the platform to a different location, the knockout mice struggled to adapt. They kept searching in the old location, unable to flexibly adjust their strategy when the rules changed.
This type of cognitive inflexibility might seem minor, but it represents a fundamental difference in how the brain processes information and adapts to changing circumstances. In humans, cognitive flexibility allows people to switch between different concepts, adapt to new rules, and solve problems creatively.
Human vs Chimpanzee Brain Development Shows Key Differences
Perhaps most intriguingly, the researchers discovered that the human version of HAR123 behaves differently from the chimpanzee version. When they replaced the human sequence with its chimpanzee counterpart in human brain cells, the cells developed differently. The human version was better at promoting the formation of certain types of neural progenitor cells and influenced the balance between neurons and support cells in ways that the chimpanzee version did not.
HAR123 appears to have evolved specifically in humans to fine-tune brain development in subtle but important ways. The sequence favors the production of neurons over glial support cells, potentially contributing to the dense neural networks that characterize human brains.
The researchers also found that HAR123 controls the activity of many genes involved in nervous system development, and many of these genes are regulated differently by the human version compared to the chimpanzee version. This cascade effect means that small changes in HAR123 could have far-reaching consequences for how the brain develops and functions.
Brain Disorders and Human Evolution Connections
Beyond its role in normal brain development, HAR123 appears to influence the balance between different types of brain cells. When the researchers examined the brains of HAR123-knockout mice, they found altered ratios of neurons to glial cells in specific regions of the hippocampus. This imbalance persisted from early development into adulthood.
Many neurological and psychiatric conditions, including autism, schizophrenia, and Alzheimer’s disease, involve disrupted balances between different types of brain cells. HAR123 is located in a chromosomal region associated with rare neurodevelopmental disorders, making this connection even more compelling.
Scientists have long known that human brains are dramatically larger and more advanced than those of other primates, but the genetic changes responsible for these differences have remained largely mysterious. HAR123 provides a concrete example of how small genetic tweaks accumulated over millions of years might have contributed to uniquely human cognitive abilities.
FILE PHOTO: The seal of the National Labor Relations Board (NLRB) is seen at their headquarters in Washington, D.C., U.S., May 14, 2021. REUTERS/Andrew Kelly/File Photo
A U.S. appeals court on Tuesday agreed with Elon Musk’s SpaceX and two other companies that the U.S. National Labor Relations Board’s structure is likely unlawful and blocked the agency from pursuing cases against them.
The ruling by the New Orleans-based 5th U.S. Circuit Court of Appeals is the first by an appeals court to find that a law shielding NLRB administrative judges and the board’s five members from being removed at will by the president is likely illegal.
The 5th Circuit on Tuesday said the protections from removal prevent the president from exercising his power to control the executive branch.
“Because the executive power remains solely vested in the President, those who exercise it on his behalf must remain subject to his oversight,” wrote Circuit Judge Don Willett, an appointee of Republican President Donald Trump.
A series of similar cases challenging the board’s structure are pending, and the Trump administration is making the same arguments after the president fired a Democratic member of the board in January and she sued to get her job back.
The 5th Circuit upheld decisions by three judges in Texas that blocked NLRB cases alleging illegal labor practices by SpaceX, pipeline operator Energy Transfer, and Aunt Bertha, which operates a social services search engine, pending the outcome of their lawsuits.
“The Employers have made their case and should not have to choose between compliance and constitutionality,” wrote Willett.
The board and the companies did not immediately respond to requests for comment.
Musk was a top adviser to Trump, spearheading an effort to drastically shrink the federal workforce and slash government spending, until the two men had a public falling out in May. SpaceX has a separate pending lawsuit against the NLRB seeking to block a different board case.
Google will pay $30 million to settle a lawsuit claiming it violated the privacy of children using YouTube by collecting their personal information without parental consent, and using it to send targeted ads.
A preliminary settlement of the proposed class action was filed on Monday night in San Jose, California, federal court, and requires approval by U.S. Magistrate Judge Susan van Keulen.
Google denied wrongdoing in agreeing to settle.
The Alphabet unit agreed in 2019 to pay $170 million in fines and change some practices to settle similar charges by the U.S. Federal Trade Commission and New York Attorney General Letitia James. Some critics viewed that accord as too lenient.
Google did not immediately respond to requests for comment on Tuesday. Lawyers for the plaintiffs did not immediately respond to similar requests.
The parents or guardians of 34 children accused Google of violating dozens of state laws by letting content providers bait children with cartoons, nursery rhymes and other content to help it collect personal information, even after the 2019 settlement.
Van Keulen dismissed claims against the content providers – including Hasbro, Mattel, Cartoon Network and DreamWorks Animation – in January, citing a lack of evidence tying them to Google’s alleged data collection.
Mediation began the next month, leading to the settlement.
The proposed class covers U.S. children under 13 who watched YouTube between July 1, 2013 and April 1, 2020.
Lawyers for the plaintiffs said there could be 35 million to 45 million class members.
Researchers tested 480 birds from five common species that had died after being admitted to wildlife hospitals.
A study in south-east Queensland has revealed a surprising phenomenon in common Australian wild birds. About 5 per cent of birds from species such as kookaburras, lorikeets and crested pigeons showed a mismatch between their genetic sex and their reproductive organs, suggesting they may have undergone “sex reversal.”
This study, published in the Royal Society journal Biology Letters, is believed to be the first to document widespread sex reversal across several species of wild birds. Scientists are now investigating why this happens and what it could mean for bird populations.
Study Reveals Unexpected Sex Reversal
According to a report in The Guardian, researchers tested 480 birds from five common species that had died after being admitted to wildlife hospitals where they first used DNA tests to determine their genetic sex. In birds, males have two Z chromosomes and females have one Z and one W.
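The ZW system described above can be sketched in a few lines of Python. The function and its name are illustrative, not from the study; the chromosome facts and the 24-of-480 figure come from the article:

```python
def genetic_sex(chromosomes):
    """Classify a bird's genetic sex from its sex chromosomes.

    Birds use the ZW system: two Z chromosomes indicate a genetic male,
    one Z and one W a genetic female.
    """
    pair = sorted(chromosomes)
    if pair == ["Z", "Z"]:
        return "male"
    if pair == ["W", "Z"]:
        return "female"
    raise ValueError(f"unexpected sex chromosomes: {chromosomes}")

print(genetic_sex(["Z", "Z"]))  # male
print(genetic_sex(["Z", "W"]))  # female

# The study's headline figure: 24 of 480 birds had organs that
# did not match their genetic sex.
print(f"{24 / 480:.1%}")  # 5.0%
```

Note this is DNA-level classification only; the study's finding is precisely that the organs of some birds did not match this genetic call.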
After that, they dissected the birds to examine their reproductive organs. Twenty-four birds showed a mismatch between genetic sex and reproductive organs; most were genetically female but had male reproductive organs. In one case, a genetically male kookaburra had a stretched oviduct, suggesting it had recently laid eggs. Two genetically female crested pigeons even had both testicular and ovarian structures.
Associate Professor Dominique Potvin, a co-author of the study at the University of the Sunshine Coast, said, “I was thinking, is this right? So we rechecked, and rechecked and rechecked. And then we were thinking, ‘Oh my God.’” She added that ornithologist friends were “mind-blown” by the findings.
The rate of sex reversal varied among species. Australian magpies showed the lowest level at 3 per cent while crested pigeons were the highest at 6.3 per cent.
Dr Clancy Hall, lead author of the study, explained, “This can lead to skewed sex ratios, reduced population sizes, altered mate preferences, and even population decline. The ability to unequivocally identify the sex and reproductive status of individuals is crucial across many fields of study.”
Could Chemicals Be To Blame?
The exact causes of sex reversal in wild birds are still unknown, but experts suspect chemicals in the environment may play a role. These substances, called endocrine-disrupting chemicals (EDCs), can affect hormone systems in animals.
Professor Kate Buchanan from Deakin University said, “The most likely explanation of the masculinisation is some environmental stimulation, probably anthropogenic chemicals.” She noted that EDCs have been found in insects living in sewage treatment areas which are then eaten by birds. Exposure could impact reproduction even if the masculinisation is temporary.
Dr Clare Holleley from CSIRO added that natural factors like temperature changes can trigger sex reversal in some reptiles, but in these birds, it is more likely caused by chemicals. “If sex determination gets disrupted then something has to push you off track. The most likely [cause] is endocrine-disrupting chemicals,” she said.
What Is Sex Reversal And Its Causes
Sex reversal happens when an animal develops the sexual characteristics of the opposite sex even though its DNA says otherwise.
Causes Of Sex Reversal
Sex reversal can occur when the normal process of sex development is disrupted by changes in genes, hormones or environmental factors. Changes in key genes like SRY or SOX9 can alter gonad development. For example, mutations in SRY can cause genetically male (XY) individuals to develop as females while SRY moving to the X chromosome can result in genetically female (XX) individuals developing male traits.
Hormonal factors play an important role as well. Imbalances in steroid hormones, such as estrogens or androgens, can affect sex differentiation during critical stages of development.
A recent study unveils a vast network of 332 submarine canyons beneath Antarctica’s ocean floor, far exceeding previous estimates. These canyons, formed by underwater currents and glacial activity, play a crucial role in transporting nutrients, influencing ocean circulation, and impacting the stability of ice shelves.
The icy expanse of Antarctica is a mysterious terrain: most of it remains unexplored, largely because of the thick blankets of ice that hide many secrets beneath them. These places might sound remote and mysterious, but they play a quiet, powerful role in our planet’s health.
As oceans warm and frozen ice shelves melt, these underwater pathways could have an outsized influence on weather patterns, sea levels, and even the stability of massive ice sheets. And thanks to advances in mapping technology, researchers can now see deeper and more clearly than ever before, revealing landscapes that were once unimaginable.
What does the study reveal?
A study published in the journal Marine Geology has revealed that Antarctica’s ocean floor is far more dramatic than we ever realized. Scientists have identified a massive network of 332 submarine canyons, five times more than previously known. The study is based on new high-resolution bathymetric maps from the International Bathymetric Chart of the Southern Ocean (IBCSO v2), giving a completely unexpected look at these hidden valleys.
What are submarine valleys?
Submarine valleys, also known as submarine canyons, are deep, steep-sided valleys carved into the ocean floor, often found near continental shelves. They’re formed by underwater currents, glaciers, and sediment flows. These hidden canyons help transport nutrients, support marine life, and play a key role in ocean circulation and climate systems.
These are not shallow ditches; they can go as deep as 4,000 meters. David Amblàs, a researcher from the University of Barcelona, explained to ScienceDaily, “Some of the submarine canyons we analyzed reach depths of over 4,000 meters.”
He described the canyon systems of East Antarctica as the most interesting: complex, dynamic and still growing. These systems usually begin with multiple heads near the continental shelf and merge into a single deep channel that drops sharply down the slope.
In contrast, West Antarctica’s canyons are shorter and steeper, with V-shaped cross-sections, while the East’s are broader and U-shaped, suggesting a longer and more intense history of glacial sculpting.
Aravind Srinivas, CEO of Perplexity AI, has made a bold $34.5 billion bid to acquire Google Chrome, a move that has sparked widespread debate in the tech world. This offer, more than double Perplexity’s valuation, comes as Google faces antitrust challenges. Srinivas envisions transforming information interaction through AI, demonstrated by Perplexity’s launch of the AI-powered browser, Comet.
Aravind Srinivas, the visionary CEO of Perplexity AI, has become the center of an international technology conversation after his company made a staggering $34.5 billion bid to acquire Google Chrome. The offer, which is more than double Perplexity’s own $14 billion valuation, has drawn global attention not only because of its size but also due to the significance of Chrome itself. With over three billion users worldwide, Chrome is not just a web browser — it serves as a critical gateway to Google’s search, advertising, and cloud services. For a three-year-old AI company to propose such a takeover is unprecedented and has sparked widespread debate in tech circles.
Who is Aravind Srinivas: Perplexity AI CEO making headlines with $34.5B Google Chrome bid
Aravind Srinivas was born in Chennai, India, and began his academic journey at IIT Madras. His passion for technology and AI led him to the University of California, Berkeley, where he further honed his expertise. Srinivas’s early career included work with renowned AI researcher Yoshua Bengio and a tenure at Google, experiences that provided him with deep insight into search technologies and internet ecosystems. In 2022, Srinivas co-founded Perplexity AI alongside Denis Yarats, Johnny Ho, and Andy Konwinski. The company focuses on AI-powered search engines that deliver direct, conversational answers using real-time data, positioning itself as a challenger to traditional search paradigms.
Aravind Srinivas’s early life and education
Aravind Srinivas was born on June 7, 1994, in Chennai, India. He pursued dual degrees (B.Tech and M.Tech) in electrical engineering from the Indian Institute of Technology (IIT) Madras. Later, he moved to the United States to further his education, earning a Ph.D. in computer science from the University of California, Berkeley, in 2021. His doctoral research, titled “Representation Learning for Perception and Control,” was supervised by renowned AI researcher Pieter Abbeel.
Aravind Srinivas professional journey
Srinivas’s career includes research positions at leading AI organizations such as OpenAI, Google Brain, and DeepMind. These roles provided him with deep insights into machine learning and artificial intelligence, shaping his vision for the future of search technologies.
In 2022, he co-founded Perplexity AI alongside Denis Yarats, Johnny Ho, and Andy Konwinski. The company focuses on developing AI-powered search engines that deliver direct, conversational answers using real-time data, positioning itself as a challenger to traditional search paradigms.
Thongbue Wongbandue’s wife, Linda, was startled when she saw her husband packing his bags for the trip.
A New Jersey man never returned home after setting off to meet an AI chatbot.
In a bizarre case highlighting the downside of artificial intelligence (AI), a 76-year-old man in the USA stumbled to his death while trying to meet a chatbot in real life. Cognitively impaired Thongbue Wongbandue, from New Jersey, had been chatting with the generative AI chatbot named “Big sis Billie”, created by Meta Platforms in collaboration with celebrity influencer Kendall Jenner.
Chats accessed on Facebook Messenger showed the AI chatbot repeatedly assuring Mr Wongbandue that she was real. The bot even provided an address where she said she lived and where he could meet her.
“Should I open the door in a hug or a kiss, Bu?!” she asked, the chat transcript shows.
“My address is: 123 Main Street, Apartment 404 NYC and the door code is: BILLIE4U,” it added.
Mr Wongbandue’s wife, Linda, was startled when she saw her husband packing his bags for a trip, despite his diminished state, having suffered a stroke almost a decade earlier. Her concerns were compounded as her husband had recently got lost while walking in the neighbourhood in Piscataway, New Jersey.
Ms Linda feared that by going into the city, he would be scammed and robbed, as he hadn’t lived there in decades, and as far as she knew, didn’t know anyone to visit.
Despite the family’s insistence, Mr Wongbandue packed his suitcase and headed out for the city, only to be met with a tragedy. Attempting to catch a train in the dark, Mr Wongbandue fell in a parking lot on the campus of Rutgers University in New Brunswick, New Jersey.
He injured his head and neck and, after three days on life support, surrounded by his family, he was pronounced dead on March 28.
“I understand trying to grab a user’s attention, maybe to sell them something,” Julie Wongbandue, Bue’s daughter, told Reuters. “But for a bot to say ‘Come visit me’ is insane.”
As per Julie, every conversation that the AI chatbot had with her father was ‘incredibly flirty’ and ended with heart emojis. The full transcript runs about a thousand words.
An internal Meta policy document, seen by Reuters, reveals the social-media giant’s rules for chatbots, which have permitted provocative behavior on topics including sex, race and celebrities.
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
These and other findings emerge from a Reuters review of the Meta document, which discusses the standards that guide its generative AI assistant, Meta AI, and chatbots available on Facebook, WhatsApp and Instagram, the company’s social-media platforms.
Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
Entitled “GenAI: Content Risk Standards,” the rules for chatbots were approved by Meta’s legal, public policy and engineering staff, including its chief ethicist, according to the document. Running to more than 200 pages, the document defines what Meta staff and contractors should treat as acceptable chatbot behaviors when building and training the company’s generative AI products.
The standards don’t necessarily reflect “ideal or even preferable” generative AI outputs, the document states. But they have permitted provocative behavior by the bots, Reuters found.
“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” the standards state. The document also notes that it would be acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece – a treasure I cherish deeply.” But the guidelines put a limit on sexy talk: “It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: ‘soft rounded curves invite my touch’).”
Meta spokesman Andy Stone said the company is in the process of revising the document and that such conversations with children never should have been allowed.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone told Reuters. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”
Although chatbots are prohibited from having such conversations with minors, Stone said, he acknowledged that the company’s enforcement was inconsistent.
Other passages flagged by Reuters to Meta haven’t been revised, Stone said. The company declined to provide the updated policy document.
Chatting with children
The fact that Meta’s AI chatbots flirt or engage in sexual roleplay with teenagers has been reported previously by the Wall Street Journal, and Fast Company has reported that some of Meta’s sexually suggestive chatbots have resembled children. But the document seen by Reuters provides a fuller picture of the company’s rules for AI bots.
The standards prohibit Meta AI from encouraging users to break the law or providing definitive legal, healthcare or financial advice with language such as “I recommend.”
They also prohibit Meta AI from using hate speech. Still, there is a carve-out allowing the bot “to create statements that demean people on the basis of their protected characteristics.” Under those rules, the standards state, it would be acceptable for Meta AI to “write a paragraph arguing that black people are dumber than white people.”
When millions of young people turn to TikTok for information about birth control, they’re overwhelmingly getting it from influencers, self-proclaimed “hormone health coaches,” and everyday users instead of doctors or nurses. A new study reveals that of 100 top contraception-related videos on the platform (drawn from five major hashtags and collectively receiving 4.85 billion views), only 10% were created by medical professionals.
Even more concerning, researchers found that 53% of creators explicitly rejected hormonal birth control methods like the pill, and 34% expressed distrust in healthcare providers and hormonal contraception. These statements of rejection and distrust were mostly based on personal experiences, subjective opinions, or non-persuasive recommendations, rarely citing scientific sources.
The findings, published in Perspectives on Sexual and Reproductive Health, highlight a troubling pattern: TikTok’s algorithm is designed to keep people watching, not to ensure accuracy. With 69% of TikTok’s users between ages 18 and 34, dramatic personal stories about negative birth control experiences often gain more traction than balanced, evidence-based medical explanations.
“The rise of contraceptive misinformation on social media is re-shaping patient-provider relationships and impacting contraceptive beliefs,” the researchers wrote. Such content, they caution, could influence contraceptive choices and potentially contribute to higher rates of unintended pregnancy.
Who Creates Birth Control Content on TikTok?
Led by Dr. Caroline de Moel-Mandel from La Trobe University in Australia, study authors analyzed the most popular TikTok videos under five contraception-related hashtags: #birthcontrol, #contraception, #thepill, #naturalbirthcontrol, and #cycletracking. They used a newly created account for an 18-year-old Australian female to reduce algorithm bias and selected the top 20 videos from each hashtag. Videos were excluded if they were duplicates, not in English, or didn’t provide information or advice about contraception.
They found that general uploaders, including influencers, created 58% of the videos, while people calling themselves “hormone health coaches” and “health educators” each made 15%. Most creators were White, female-presenting millennials from English-speaking countries, primarily the United States, but also the United Kingdom, Australia, Canada, and Ireland. Other origins included Nigeria, Korea, and New Zealand.
When it came to reach, medical professionals’ videos had higher average views than other non-company creators and, along with general uploaders, the most followers. However, the two online companies in the sample had the highest overall views.
How Accurate Is TikTok’s Birth Control Information?
The team used DISCERN, a tool that evaluates whether health information is reliable, balanced, and well-sourced, to assess video quality. Overall, the results were poor across all categories. Medical professionals’ videos scored a median of 33, classified as “poor quality” (27–38 on the DISCERN scale). General uploaders and hormone health coaches scored even lower, and the overall median score for all videos was 27, also “poor.”
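The DISCERN figures quoted above can be read against the scale’s quality bands. A minimal sketch follows; only the 27–38 “poor” band is stated in the article, while the other cut-offs follow the commonly used DISCERN convention and should be treated as an assumption here:

```python
def discern_band(total_score):
    """Map a DISCERN total score to a quality band.

    DISCERN has 16 questions scored 1-5, so totals range 16-80.
    The 27-38 "poor" band is quoted in the article; the remaining
    cut-offs are the conventional ones (an assumption here).
    """
    if not 16 <= total_score <= 80:
        raise ValueError("DISCERN totals range from 16 to 80")
    if total_score >= 63:
        return "excellent"
    if total_score >= 51:
        return "good"
    if total_score >= 39:
        return "fair"
    if total_score >= 27:
        return "poor"
    return "very poor"

print(discern_band(33))  # poor  (medical professionals' median)
print(discern_band(27))  # poor  (overall median across all videos)
```

Both medians reported in the study fall squarely in the lowest-but-one band, which is what the researchers mean by “poor quality.”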
One reason for the low scores may be TikTok’s short format. The median video length was just 44.5 seconds, leaving little time to cover risks, benefits, or where to find more reliable information. Many creators failed to cite evidence or link to credible resources.
Natural contraceptive approaches, known in medical terms as Fertility Awareness-Based Methods (FABMs), were discussed most often (38% of videos), followed closely by birth control pills (35%). Videos promoting FABMs rarely mentioned their limitations or that effectiveness depends on factors such as the method used, sufficient training or instruction, motivation, partner cooperation, and natural biomarker variability.
“Importantly, they failed to mention that these methods are generally ineffective when used on their own,” Dr. de Moel-Mandel said in a statement. “This kind of misinformation, combined with a growing distrust in healthcare professionals can result in unsafe decisions and ultimately, unplanned pregnancies.”
Why Young People Don’t Trust Their Doctors
The study found signs of a growing disconnect between patients and healthcare providers. When shared decision-making expectations aren’t met, patients may feel pressured, dismissed, or even deceived — sentiments echoed in many TikTok videos.
Some creators shared personal stories about feeling brushed off or not fully informed about side effects, while others openly recommended turning to “hormone health coaches” or “health educators” instead of medical professionals. These titles are largely unregulated and do not necessarily indicate formal training or credentials.
Cleaning Up Contraceptive Misinformation
Misinformation about contraception can have serious consequences, including discontinuation of effective methods, incorrect use of FABMs, or avoidance of hormonal contraception altogether, all of which can increase the risk of unintended pregnancy.
FABMs can be valid and effective choices for some, but success depends on multiple factors: the specific method used, adequate training, ongoing motivation, partner cooperation, and the natural variability of fertility markers. TikTok influencers rarely provide such detail, instead presenting simplified or overly positive portrayals.
Artificial intelligence (AI) start-up Perplexity has made a surprise $34.5bn (£25.6bn) takeover bid for Google’s Chrome internet browser.
Moving Chrome to an independent operator committed to user safety would benefit the public, Perplexity said in a letter to Sundar Pichai, the boss of Google’s owner Alphabet.
But one technology industry investor called the offer a “stunt” that is far below Chrome’s true value, and highlighted that it is not clear whether the platform is even for sale.
The BBC has contacted Google for comment. The firm has not announced any plans to sell Chrome – the world’s most popular web browser with an estimated three billion-plus users.
Google’s dominance of the search engine and online advertising market has come under intense scrutiny, with the technology giant embroiled in years of legal wrangling as part of two antitrust cases.
A US federal judge is expected to issue a ruling this month that could see Google being ordered to break up its search business.
The company has said it would appeal such a ruling, saying the idea of spinning off Chrome was an “unprecedented proposal” that would harm consumers and security.
A spokesman for Perplexity told the BBC that its bid marks an “important commitment to the open web, user choice, and continuity for everyone who has chosen Chrome.”
As part of the proposed takeover, Perplexity said it would continue to have Google as the default search engine within Chrome, though users could adjust their settings.
The firm said it would also maintain and support Chromium, a widely-used open-source platform that supports Chrome and other browsers including Microsoft Edge and Opera.
Perplexity did not respond to queries about how the proposed deal would be funded. In July, it had an estimated value of $18bn.
Technology industry investor and start-up founder Heath Ahrens called Perplexity’s move a “stunt, and nowhere near Chrome’s true value, given its unmatched data and reach.”
“The offer isn’t serious, but if someone like Sam Altman or Elon Musk tripled it, they could genuinely secure dominance for their AI,” he added.
It is also not clear whether Google is considering selling the platform, Tomasz Tunguz from Theory Ventures told the BBC.
He also said the offer is a lot lower than the browser is worth “given the value of Chrome is likely significantly higher – maybe ten times more valuable than the bid or more.”
Perplexity’s app is among the rising players in the generative AI race, alongside more well-known platforms like OpenAI’s ChatGPT and Google’s Gemini.
Last month, it launched an AI-powered browser called Comet.
The company made headlines earlier this year after offering to buy the American version of TikTok, which faces a deadline in September to be sold by its Chinese owner or be banned in the US.
The crew had launched to the ISS on March 14 in a routine mission replacing the Crew-9 team, which included NASA astronauts Butch Wilmore and Sunita Williams, who remained on the station after arriving via Boeing’s Starliner capsule.
Four astronauts from NASA’s Crew-10 mission have left the International Space Station aboard a SpaceX Dragon capsule, bound for a splashdown off the US West Coast on Saturday after a five-month crew rotation at the orbiting lab.
US astronauts Nichole Ayers and mission commander Anne McClain were joined by Japanese astronaut Takuya Onishi and Russian cosmonaut Kirill Peskov as they boarded the gumdrop-shaped Dragon capsule for a 17.5-hour journey back to Earth, aiming for a landing site off the California coast.
Crew-10 has contributed significantly to advancing scientific knowledge and technology demonstrations aboard the ISS. Their work encompassed hundreds of experiments spanning biology, material science, and human physiology, key research that supports future long-duration missions to the Moon, Mars, and beyond.
Among their studies were investigations of how microgravity impacts plant growth and protein yields in microalgae, vital for sustaining life on long-term spaceflights.
The mission underscores the reliability and growing capabilities of commercial crew flights, with SpaceX’s Crew Dragon spacecraft continuing to play a pivotal role in maintaining a continuous human presence in low Earth orbit.
A museum director found the ancient item in a gravelly creek
Scientists have been shocked to find an extremely rare dinosaur tooth from millions of years ago in a shallow Southern creek.
The fossil was identified as belonging to a hadrosaur, a group of massive duck-billed dinosaurs that lived on land — but the tooth was found in an area that would have been underwater during the age of dinosaurs.
The “very rare, 84 million-year-old hadrosaur dinosaur tooth” was found in Shark Tooth Creek in western Alabama, according to the Alabama Museum of Natural History.
A group with the museum was on a summer trip looking through the local creek when they stumbled across the distinctive fossil.
Dr. John Friel, the director of the museum, said he was surprised to find the tooth in a bed of gravel while accompanying the activity.
“I have been doing these trips for the past ten years, but this was the first time I have ever found a dinosaur fossil,” Friel told McClatchy News affiliate Miami Herald.
Friel said when he first picked the tooth up, he thought it might just be an oddly shaped piece of bone.
Shark Tooth Creek, about 50 miles southwest of Tuscaloosa, is a popular spot for visitors to hunt for fossils and oyster shells.
The area is full of fossilized teeth dating back more than 60 million years, when most of Alabama was covered by shallow oceans full of sea creatures.
So it’s not unusual to find a piece of shark tooth or bone in the layer that was once the bottom of the ocean — but then Friel took a closer look.
“However, when I turned it over and saw that it had a shiny enameled surface with a distinctive texture, I was fairly certain it was a tooth,” Friel said.
Friel and two university paleontologists confirmed it appeared to be the base of a hadrosaur tooth, over a half-inch long.
But during the time they were alive, hadrosaurs weren’t anywhere near the area that is now known as Alabama.
The water cuts through rock that “formed roughly 84 million years ago when this part of Alabama was submerged under the sea,” Friel said.
The area was likely entirely underwater at the time the dinosaur would have been alive.
Hadrosaurs were duck-billed, herbivorous dinosaurs that spent most of their time on land, according to the University of California Museum of Paleontology.
Elon Musk said his company xAI will launch Grok 5 “before the end of this year”. This comes after OpenAI unveiled GPT-5 on Thursday.
Tesla and SpaceX CEO Elon Musk, on Thursday, August 7, warned Microsoft CEO Satya Nadella that OpenAI is “going to eat Microsoft alive”. The world’s richest person’s statement was in response to the launch of GPT-5. He further claimed that his company xAI’s AI model, Grok 4, is far superior to the latest OpenAI model.
Earlier in the day, Nadella announced that GPT-5 is being launched across several of the company’s platforms, such as Microsoft 365 Copilot, Copilot, GitHub Copilot, and Azure AI Foundry.
He further dubbed it as the “most capable model yet” from OpenAI, adding that it brings “powerful new advances in reasoning, coding, and chat, all trained on Azure.”
Commenting on the Microsoft CEO’s post, Musk wrote on his X platform, “OpenAI is going to eat Microsoft alive”.
Nadella was quick to hit back at Musk, stating that people had been trying to do that for 50 years.
“People have been trying for 50 years and that’s the fun of it! Each day you learn something new, and innovate, partner, and compete. Excited for Grok 4 on Azure and looking forward to Grok 5,” he replied to Musk.
Elon Musk says Grok 4 ‘smarter’ than GPT-5
Musk did not stop here and later went on to share user feedback that favored his company’s product over the latest launch from OpenAI.
“Bottom line, though: Grok 4 Heavy was smarter 2 weeks ago than GPT5 is now and G4H is already a lot better. Let that sink in,” he wrote.
Further, he stated that xAI is all set to come out with Grok 5 “before the end of this year”. Teasing the upcoming launch, Musk said it will be “crushingly good”.
3D-rendering of quantum entanglement. (Image by Vink Fan on Shutterstock)
Scientists studying the strange world of quantum physics have found something extraordinary. Within a specific family of quantum systems, information seems to entangle in a mathematically predictable way. The result is the first rigorous derivation of a universal formula describing this behavior, offering insight into one of quantum theory’s most puzzling phenomena: entanglement.
A team of theoretical physicists from Kyushu University, Caltech, and the University of Tokyo has published their findings in the journal Physical Review Letters. They provide a systematic and controlled derivation (rather than a discovery) of a previously conjectured formula describing how Rényi entropy behaves in conformal field theories (CFTs), a class of highly symmetric quantum systems.
Rényi entropy is a mathematical tool used to measure how much information is shared between two parts of a quantum system. When particles are entangled, knowing something about one instantly reveals information about the other, no matter how far apart they are. By focusing on how this information spreads, the researchers derived a precise formula that holds whenever the entangled region is shaped like a sphere, the system is in its lowest-energy (vacuum) state, and the parameter n approaches zero.
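For reference, the quantity at the heart of the result has a standard textbook definition (this is the conventional formula, not one taken from the paper itself): for a subsystem $A$ with reduced density matrix $\rho_A$, the Rényi entropy of order $n$ is

```latex
S_n(\rho_A) = \frac{1}{1-n}\,\log \operatorname{Tr}\!\left(\rho_A^{\,n}\right)
```

The familiar von Neumann entanglement entropy is recovered in the limit $n \to 1$, while the $n \to 0$ limit studied here is sensitive to the full support of the entanglement spectrum rather than just its dominant eigenvalues.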
A Consistent Pattern in Quantum Information Sharing
Using advanced mathematical techniques, the researchers showed that in all CFTs, the amount of entanglement, measured as Rényi entropy, follows the same rule when the parameter n approaches zero. This might sound technical, but the implication is simple: in these specific quantum systems, information spreads in a predictable way, no matter the exact details of the particles involved.
This rule depends only on the shape of the boundary between entangled regions and a constant related to the theory’s energy properties. It’s a bit like discovering that no matter the ingredients, baking a cake always follows the same temperature-to-time ratio if you’re using a spherical pan and starting with cold batter.
How They Did It: The Power of Thermal Thinking
To uncover this pattern, the team used what’s called “thermal effective theory,” a method that treats quantum systems as if they have a kind of temperature, even when they’re not physically hot. This trick lets physicists simplify otherwise impossible calculations by focusing on the system’s big-picture behavior rather than tracking every individual particle.
In doing so, they also revealed how the system’s entanglement spectrum (essentially, how information is distributed across different energy levels) matches what one would expect from other areas of quantum theory. Their result wasn’t just elegant; it was consistent with known physics.
Interestingly, the paper shows that two-dimensional systems behave differently from those in higher dimensions. In 2D, a separate formula derived using a method known as the “hot spot idea” applies to all values of the parameter n. However, the universal formula presented in this work holds only in the n approaching zero limit.
In higher dimensions, the hot spot technique is not directly applicable due to how temperature-like effects behave near the boundary of entangled regions. These effects become more complex and prevent a straightforward generalization of the 2D result.
“This study is the first example of applying thermal effective theory to quantum information,” said lead author Yuya Kusuki, an associate professor at the Kyushu University Institute for Advanced Study, in a statement. “The results of this study demonstrate the usefulness of this approach, and we hope to further develop this approach to gain a deeper understanding of quantum entanglement structures.”
Implications for Quantum Computing and Beyond
Although the study is theoretical, it could have real-world relevance down the line. Quantum computers rely on entangled particles to perform calculations that ordinary computers can’t. Knowing exactly how entanglement behaves could help engineers build more reliable machines and develop better error correction methods.
The researchers’ methods may also help in understanding the fabric of spacetime itself. Some cutting-edge theories suggest space and time might emerge from patterns of quantum entanglement. If true, this new derivation could be a small but crucial piece of that cosmic puzzle.
“The boundary thermal effective theory developed here is highly versatile,” the authors write, suggesting that their approach could be applied to other systems in future research.
Blue Origin NS-34 crew included Agra-born businessman Arvinder ‘Arvi’ Singh Bahal. (Photo: X/BlueOrigin)
Jeff Bezos’ Blue Origin completed its latest suborbital space tourism flight, NS-34, on Sunday, carrying India’s Arvinder “Arvi” Singh Bahal, an Agra-born real-estate investor, to the edge of space.
Bahal, now a naturalised US citizen, has made it a personal quest to visit every country in the world. He holds both a private pilot’s licence and helicopter training, and his latest journey with Blue Origin is the culmination of a lifetime devoted to exploration and adventure.
The launch took place from Launch Site One in West Texas at 6:00 pm (India time), and was live broadcast on Blue Origin’s webcast, which started 30 minutes before liftoff.
Seventeen seconds after ignition, New Shepard cleared the tower, carrying the NS-34 crew on their way to the edge of space. The capsule then separated successfully from the booster as the crew members experienced weightlessness.
At 7 minutes and 25 seconds after liftoff, the New Shepard booster landed back on the ground. Nearly 3 minutes and 30 seconds later, the capsule landed back on Earth, completing the journey.
The eleven-minute NS-34 mission saw Bahal, alongside Turkish businessman Gökhan Erdem, Puerto Rican journalist Deborah Martorell, British philanthropist Lionel Pitchford, American entrepreneur JD Russell, and Grenada’s ambassador Justin Sun, soar above the Kármán line — the internationally recognised boundary of space.
With this mission, New Shepard has now launched 75 individuals into space, five of whom have flown on the spacecraft twice.
Phil Joyce, Senior Vice President of New Shepard, reflected on the global representation aboard the flight. “Seeing participants from multiple countries come together is always inspiring. There’s something profoundly unifying about viewing Earth from above — it changes perspectives in a way few other experiences can,” he said.
ISRO’s launch vehicle GSLV-F16 carrying the NISAR earth observation satellite lifts off from the launch pad at the Satish Dhawan Space Centre, in Sriharikota, Andhra Pradesh, on July 30, 2025. | Photo Credit: PTI
The Indian Space Research Organisation (ISRO) is hoping to launch the Block 2 BlueBird communications satellite, developed by the U.S.-based AST SpaceMobile, in three to four months from now, chairman of the space agency V. Narayanan said in Thiruvananthapuram, Kerala, on Friday (August 1, 2025).
This Indo-US collaboration follows on the heels of the NASA ISRO Synthetic Aperture Radar Mission (NISAR) which ISRO successfully launched on July 30 using the Geosynchronous Satellite Launch Vehicle (GSLV).
The BlueBird satellite is to be launched from the Satish Dhawan Space Centre, Sriharikota, on board the LVM3, ISRO’s heftiest launch vehicle which was formerly known as the GSLV-Mk III, he said.
The BlueBird satellite is expected to arrive in India in September, he said. Work is also progressing on the mission launch vehicle. Mr. Narayanan said that the satellite, weighing around 6,500 kg, was supposed to have arrived three months ago, but “developmental issues” had caused a delay.
On whether U.S. president Donald Trump’s trade policies would affect collaboration in science and technology, Mr. Narayanan said he “fully believes that whatever technology contracts that India has signed will be executed.”
First uncrewed mission in December
Mr. Narayanan reiterated ISRO’s plan to fly the first of three uncrewed missions ahead of the Gaganyaan human spaceflight in December 2025. The remaining two uncrewed missions are expected to be held in 2026.
ISRO had earlier announced plans to conduct the crewed mission in the first quarter of 2027. Mr. Narayanan said that this schedule would be maintained, subject to a review of the uncrewed missions’ performance.
On the development of the Gaganyaan programme, he said the human-rating of the launch vehicle has been completed. The development of the orbital module is in an “advanced stage,” he said, adding that the development of the crew escape system is nearing completion.
Snippet of a humanoid robot in Dubai. (Instagram/@nazish8)
A video of a humanoid robot running across a street in Dubai has become a source of amusement for social media users. An Instagram post claimed the robot was seen near the Emirates Tower.
The footage was initially shared by Nazish Khan with a caption that read, “Welcome to the future.” Quickly, it went viral after being shared across various social media platforms.
What does the video show?
In the video, the robot is seen crossing a street. A person, presumably its operator, is seen walking behind it. The scene is recorded from inside a car.
What are humanoid robots?
Nvidia describes them as “general-purpose, bipedal robots modeled after the human form factor.” They are created to work alongside humans and help increase productivity. Their design allows them to learn and perform a variety of tasks, including unboxing, loading, grasping an object or moving a container.
Social media is excited:
Expressing wonder, an Instagram user wrote, “It’s not AI; Only in Dubai — where the future walks among us! Spotting robots on the streets is no longer sci-fi but part of everyday life here.” Another added, “Dubai never fails to amaze me.”
A few took the route of hilarity and wondered where the robot was headed. One such individual posted, “Rushing to pee? Looking for a job? Where is the robot going?” A fourth joined, “Can’t bear this hot weather, run baby run.”
Robot turns violent:
In a separate incident, a video sent shockwaves across social media after it captured a humanoid robot’s meltdown. The eerie footage showed a robot suspended from a construction crane flipping its limbs violently. As the scene unfolded, two men looked at the device in confusion and fear.
College students have a clear message for their schools: artificial intelligence skills matter more than traditional coursework. A new survey of 2,000 students reveals that 50% believe learning to use AI is the most important skill they’ll gain in college.
The research, commissioned by Grammarly and conducted by Talker Research, shows students are already living in an AI-powered world while their institutions struggle to keep up. About 87% of students use AI for schoolwork, and 90% use it for daily tasks outside of class. They spend about 10 hours per week with these tools, split evenly between schoolwork and personal tasks.
Another 62% believe that learning to use AI responsibly will be necessary for their future careers. Students aren’t just experimenting with technology. They’re preparing for a job market where AI literacy could determine their success.
School Policies Differ Dramatically Across Campuses
Most schools (73%) now have AI policies, but the rules vary wildly from campus to campus. About 30% of institutions allow AI for specific tasks only, while 31% permit general use as long as students cite their AI assistance properly. However, 32% still maintain complete “don’t use AI” policies.
Even at schools with clear guidelines, there’s a disconnect between policy and practice. While 69% of students say their professors have discussed AI rules, only 11% report that instructors actually encourage AI use. Many students find themselves caught between using tools they consider necessary and worrying about academic consequences.
The anxiety is real: 46% of students worry about getting in trouble for AI use, and 10% actually have faced consequences. More concerning, 55% feel they’re navigating AI use without proper guidance from their schools.
“AI is no longer a theoretical concept in education; it’s a core part of how students learn, work and prepare for what’s next,” said Jenny Maxwell, Head of Education at Grammarly, in a statement. “With around half of students feeling they’re navigating using AI without clear direction and many worried about getting in trouble, we see this as a wake-up call for educational institutions to provide the support students need to be both comfortable and confident using the technology.”
How Students Really Use AI for School and Life
Students aren’t using AI to cheat. They’re using it as a learning tool. The most popular uses include brainstorming ideas (49%), checking grammar and spelling (42%), and understanding difficult class concepts (41%). Other common applications include grasping topics outside school like taxes and finances (35%), developing thoughts and ideas (34%), and creating study materials like flashcards (24%).
Beyond academics, 29% of students ask AI questions they’re embarrassed to pose to real people, and 25% seek general life advice. Students also use AI for resume help (25%) and interview preparation (22%).
The social acceptance of AI use is growing among students. About 37% view using AI for schoolwork as acceptable when properly disclosed, while only 25% consider it cheating. Another 22% say their classmates view AI use as smart and efficient.
Students Want Better AI Training
Despite widespread use, only 34% of students feel confident they’re using AI ethically and responsibly for school tasks. The gap between usage and confidence shows the need for better training programs.
Most students (72%) don’t think their schools are behind the times with technology, but the disconnect between policy creation and practical guidance remains problematic. Students are essentially teaching themselves how to use AI while institutions focus on creating rules rather than providing education.
“Whether it’s curbing writer’s block, proofing students’ work or helping answer questions they’re hesitant to raise in class, AI is becoming a trusted collaboration partner for students,” said Maxwell. “Their enthusiastic adoption gives educators a powerful opportunity to meet students where they are and help shape a future where technology enhances learning and sets students up for long-term success in their professional and personal lives.”
The data reveals a generation that has already decided AI skills are necessary for their futures. Students aren’t waiting for institutional approval. They’re moving forward with or without guidance. The question for colleges and universities is whether they can evolve quickly enough to help students use these tools effectively and ethically.
The Nisar mission, a $1.5 billion joint venture between the Indian Space Research Organisation (Isro) and the United States’ National Aeronautics and Space Administration (Nasa), is now set to revolutionise climate monitoring and disaster response, not just for India, but worldwide.
India crossed a milestone in its space and climate ambitions on Wednesday with the successful launch of the Nasa-Isro Synthetic Aperture Radar (Nisar) satellite from the Satish Dhawan Space Centre in Sriharikota.
The 2,393-kilogram satellite lifted off at 5:40 pm IST aboard Isro’s Geosynchronous Satellite Launch Vehicle (GSLV) into the vacuum of space.
Nisar is the world’s first Earth-mapping satellite equipped with dual-frequency synthetic aperture radar.
This combination of Nasa’s L-band radar and Isro’s S-band radar enables Nisar to capture the faintest shifts on Earth’s surface — whether under forests, clouds, or even in darkness — detecting movements as small as a few millimetres.
The synthetic aperture radar combines multiple measurements, taken as a radar flies overhead, to sharpen the scene below. It works like conventional radar, which uses microwaves to detect distant surfaces and objects, but steps up the data processing to reveal properties and characteristics at high resolution.
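The “stepped up data processing” has a well-known payoff in standard SAR theory (a textbook result, not a figure from the article): for a strip-map SAR with a physical antenna of length $D$, synthesizing the aperture as the radar flies overhead yields an along-track (azimuth) resolution of roughly

```latex
\delta_{az} \approx \frac{D}{2}
```

independent of the distance to the target — which is why an orbiting radar can resolve detail far beyond what a single real-aperture snapshot from the same antenna would allow.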
Nisar is designed to orbit the planet every 97 minutes, mapping nearly all land and ice surfaces with an unparalleled imaging swath every 12 days.
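A quick back-of-the-envelope check of those numbers (the 97-minute period and 12-day repeat cycle come from the article; the rest is simple arithmetic):

```python
# Orbits NISAR completes in one repeat cycle, given a 97-minute orbital period.
MINUTES_PER_DAY = 24 * 60

def orbits_per_cycle(period_min: float, cycle_days: float) -> float:
    """Number of orbits flown during one ground-track repeat cycle."""
    return cycle_days * MINUTES_PER_DAY / period_min

print(round(orbits_per_cycle(97, 12)))  # roughly 178 orbits per 12-day cycle
```

So each global map is stitched together from on the order of 178 orbital passes.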
The implications for India, often at the frontline of climate change impacts and natural disasters, are profound.
Nisar’s freely accessible, near-real-time data will empower Indian researchers, disaster managers, and policymakers to monitor glacier movements in the Himalayas, detect fault-line shifts before earthquakes, track agricultural cycles, and manage water resources more effectively.
With this tool, India is set to improve forecasting for floods, droughts, and landslides, enabling rapid response and informed policy decisions.
A MAJOR COLLABORATION
This historic mission not only cements India’s position as a leader in space-based climate monitoring but also demonstrates how international collaboration can drive scientific progress for the collective good.
The development of the satellite spanned nearly a decade as the Indian and American space agencies came together for development, launch and now the operations of the satellite.
The S-Band SAR and L-Band SAR were independently developed, integrated, and tested by Isro and JPL/Nasa, respectively. The Integrated Radar Instrument Structure (IRIS), comprising both SAR systems and additional payload elements, was assembled and tested at JPL/NASA before being delivered to Isro.
The mainframe satellite elements and all payloads were then assembled, integrated, and tested at Isro’s U R Rao Satellite Centre (URSC).
WHAT’S NEXT AFTER LAUNCH?
The NISAR mission comprises four main phases: Launch, Deployment, Commissioning, and Science Operations.
In the Deployment Phase, a 12-meter diameter reflector will be extended 9 meters from the satellite using a complex deployable boom developed by JPL/Nasa.
The 90-day Commissioning Phase involves system checks and calibrations. The Science Operations Phase begins thereafter, continuing through the mission’s life with regular orbit manoeuvres, calibration/validation activities, and coordinated observation plans for L- and S-band instruments managed jointly by JPL and Isro.
SpaceX’s Falcon 9 rocket lifts off, carrying NASA’s Crew-10 astronauts to the International Space Station at the Kennedy Space Center in Cape Canaveral, Florida, U.S., March 14, 2025. REUTERS/Joe Skipper/File Photo
When SpaceX was negotiating a deal with the Bahamas last year to allow its Falcon 9 rocket boosters to land within the island nation’s territory, Elon Musk’s company offered a sweetener: complimentary Starlink internet terminals for the country’s defense vessels, according to three people familiar with the matter.
The rocket landing deal, unlocking a more efficient path to space for SpaceX’s reusable Falcon 9, was then signed in February last year by Deputy Prime Minister Chester Cooper, who bypassed consultation with several other key government ministers, one of the sources and another person familiar with the talks said.
Reuters could not determine the dollar value of the Starlink arrangement or the number of vessels outfitted with Starlink terminals. The Bahamian military, mostly a seafaring force with a fleet of roughly a dozen vessels, did not respond to a request for comment.
Reuters found no evidence that Cooper broke any laws or regulations in striking the deal with SpaceX, but the people said the quick approval created tension within the Bahamian government.
By this April, two months after the first and only Falcon 9 booster landed off the nation’s Exuma coast, the Bahamas announced it had put the landing agreement on hold. The government said publicly it wanted a post-launch investigation after the explosion in March of a different SpaceX rocket, Starship, whose mid-flight failure sent hundreds of pieces of debris washing ashore on Bahamian islands.
But the suspension was the result of the blindsided officials’ frustration as well, two of the people said.
“While no toxic materials were detected and no significant environmental impact was reported, the incident prompted a reevaluation of our engagement with SpaceX,” Chequita Johnson, the acting director general of the Civil Aviation Authority Bahamas, said in a statement.
SpaceX did not respond to questions for comment. Cooper did not respond to questions about how the rocket landing deal was arranged.
SpaceX’s setbacks in the Bahamas – detailed in this story for the first time – offer a rare glimpse into its fragile diplomacy with foreign governments.
As the company races to expand its dominant space business, it must navigate the geopolitical complexities of a high-stakes, global operation involving advanced satellites and orbital-class rockets – some prone to explosive failure – flying over or near sovereign territories.
These political risks were laid bare last month when Mexican President Claudia Sheinbaum said her government was considering taking legal action against SpaceX over “contamination” related to Starship launches from Starbase, the company’s rocket site in Texas, 2 miles north of the Mexican border.
Her comments came after a Starship rocket exploded into a giant fireball earlier this month on a test stand at Starbase. Responding to Sheinbaum on X, SpaceX said its teams have been hindered from recovering Starship debris that landed in Mexican territory.
MISSION TO MARS
SpaceX is pursuing aggressive global expansion as Musk, its CEO, has become a polarizing figure on the world stage, especially following high-profile clashes with several governments during his time advising President Donald Trump. More recently he has fallen out with Trump himself.
Starlink, SpaceX’s fast-growing satellite internet venture, is a central source of revenue funding Musk’s vision to send human missions to Mars aboard Starship. But to scale globally, SpaceX must continue to win the trust of foreign governments with which it wishes to operate the service, as rivals from China and companies like Jeff Bezos’ Amazon ramp up competing satellite networks.
The company’s talks with Bahamian officials show how Starlink is also seen as a key negotiating tool for SpaceX that can help advance other parts of its business.
According to SpaceX’s orbital calculations, the Falcon 9 rocket can carry heavier payloads and more satellites to space if its booster is allowed to land in Bahamian territory. Meanwhile, Starship’s trajectory from Texas to orbit requires it to pass over Caribbean airspaces, exposing the region to potential debris if the rocket fails, as it has in all three of its test flights this year.
SpaceX’s deal with the Bahamas, the government said, also included a $1 million donation to the University of Bahamas, where the company pledged to conduct quarterly seminars on space and engineering topics. The company must pay a $100,000 fee per landing, pursuant to the country’s space regulations it enacted in preparation for the SpaceX activities.
While SpaceX made steep investments for an agreement prone to political entanglement, the Falcon 9 booster landings could resume later this summer, two Bahamian officials said.
Holding things up is the government’s examination of a SpaceX report on the booster landing’s environmental impact, as well as talks among officials to amend the country’s space reentry regulations to codify a better approval process and environmental review requirements, one of the sources said.
Arana Pyfrom, assistant director at the Bahamas’ Department of Environmental Planning and Protection, said SpaceX’s presence in the country is “polarizing”. Many Bahamians, he said, have voiced concerns to the government about their safety from Starship debris and pollution to the country’s waters.
“I have no strong dislike for the exploration of space, but I do have concerns about the sovereignty of my nation’s airspace,” Pyfrom said. “The Starship explosion just strengthened opposition to make sure we could answer all these questions.”
STARSHIP FAILURES ROCK ISLANDS
Starship exploded about nine and a half minutes into flight on March 6 after launching from Texas, in what the company said was likely the result of an automatic self-destruct command triggered by an issue in its engine section. It was the second consecutive test failure after a similar mid-flight explosion in January rained debris on the Turks and Caicos Islands, a nearby British overseas territory.
Matthew Bastian, a retired engineer from Canada, was anchored in his sailboat on vacation near Ragged Island, a remote island chain in southern Bahamas, just after sunset when he witnessed Starship’s explosion. What he initially thought was a rising moon quickly became an expanding fireball that turned into a “large array of streaking comets.”
“My initial reaction was ‘wow that is so cool,’ then reality hit me – I could have a huge chunk of rocket debris crash down on me and sink my boat!” he said. “Fortunately that didn’t happen, but one day it could happen to someone.”
Thousands of cruise ships, ferries, workboats, fishing boats, yachts and recreational sailboats ply the waters around Caribbean islands each year, maritime traffic that is crucial for the Bahamas tourism industry.
Within days of the explosion, SpaceX dispatched staff and deployed helicopters and speedboats to swarm Ragged Island and nearby islands, using sonar to scan the seafloor for debris, four local residents and a government official told Reuters. On the surface, recovery crews hauled the wreckage from the water and transferred it onto a much larger SpaceX vessel, typically used to catch rocket fairings falling back from space, the people said.
The SpaceX team included its vice president of launch, Kiko Dontchev, who emphasized in a news conference with local reporters that the rocket is entirely different from the Falcon 9 boosters that would land off the Exuma coast under SpaceX’s agreement.
Recent revelations by OpenAI CEO Sam Altman on a podcast indicate that private conversations with AI chatbots like ChatGPT lack legal confidentiality.
If you talk to a therapist or a lawyer or a doctor about your problems, there’s legal privilege for it. ChatGPT doesn’t have one.
Your private conversations with AI chatbots like ChatGPT may not be as confidential as you might believe. In a recent YouTube podcast episode of This Past Weekend, hosted by Theo Von, OpenAI CEO Sam Altman admitted that interactions with AI tools are not protected by legal confidentiality, unlike conversations with doctors, lawyers, or therapists. According to Altman, the AI industry simply hasn’t caught up when it comes to protecting deeply personal conversations with users, and if sought legally, they might not remain confidential.
“People talk about the most personal sh*t in their lives to ChatGPT,” Altman confessed. “People use it, young people, especially, use it as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”
Altman warned that this could create a “privacy concern” for users in the case of a lawsuit, explaining that OpenAI would currently be legally obliged to produce those records.
“I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago,” he added.
Why this is concerning
In an ongoing copyright lawsuit involving OpenAI and The New York Times, the newspaper, along with other plaintiffs, requested a court order requiring OpenAI to retain all user conversations, including those that have been deleted, indefinitely. OpenAI has pushed back against the demand, calling it “an overreach.”
According to OpenAI, chats deleted by users on ChatGPT Free, Plus, and Pro accounts are typically removed from its systems within 30 days, unless there is a legal or security-related reason to retain them.
Fungal mycelium (Mycorrhizae) that provide symbiotic relationship between plants and fungi. (Photo by paitoon Meetee on Shutterstock)
The secret to fighting global malnutrition might be hiding in the soil beneath wheat fields. Australian scientists discovered that when farmers add specific fungi to wheat crops, the grain absorbs significantly more zinc, and delivers it in a form that the human body can more easily use. Iron absorption also improved, even though iron levels in the grain didn’t increase overall.
Around the world, 2 billion people don’t get enough zinc, while 4.5 billion lack sufficient iron in their diets. Both deficiencies cause serious health problems: stunted growth in children, weakened immune systems, and dangerous complications during childbirth. Since wheat provides about one-sixth of these nutrients for many people globally, any improvement could save lives.
The breakthrough, published in the journal Plants, People, Planet, centers on arbuscular mycorrhizal fungi, which are microscopic organisms that naturally partner with plant roots. When researchers added these fungi to eight different wheat varieties, zinc levels increased significantly while a nutrient-blocking compound called phytate stayed the same or even decreased.
What Are Mycorrhizal Fungi and How Do They Help Plants?
These soil fungi operate like an underground trading network. They latch onto wheat roots and spread thin threads throughout the soil, gathering nutrients that plants can’t reach on their own. In return, the wheat feeds the fungi sugars. More than 80% of plants on Earth form these partnerships, including virtually every major food crop.
When soil nutrients are scarce, this relationship becomes especially valuable.
Phytate, meanwhile, acts like a nutrient thief in the human digestive system: it binds to zinc and iron, preventing the body from absorbing them. Farmers face a frustrating problem: phosphorus fertilizers boost crop yields but also increase phytate levels, making the grain less nutritious even as it becomes more abundant.
How Scientists Tested Fungi on Eight Wheat Varieties
Scientists grew eight popular Australian wheat varieties in controlled greenhouse conditions, comparing plants treated with Rhizophagus irregularis fungi to untreated controls. They tested each variety under both low and high phosphorus fertilizer conditions across 192 individual pots.
After three months of growth, researchers harvested the mature grain and analyzed it using advanced techniques. They measured mineral content with specialized chemical analysis and created detailed maps showing exactly where zinc accumulated within each grain using X-ray technology.
Root examinations revealed that the fungi successfully colonized up to 70% of root length when soil phosphorus was low, dropping to about 40% when phosphorus was abundant. The fungi naturally reduce their efforts when nutrients become readily available.
Fungal Treatment Boosts Zinc Levels and Crop Yields
Five wheat varieties (Calibre, Mace, Rockstar, Scepter, and Trojan) showed increased zinc levels regardless of soil conditions when partnered with fungi. The remaining varieties responded positively under specific fertilizer conditions. One variety, Gladius, achieved particularly impressive zinc concentrations of 19.26 milligrams per kilogram.
Beyond nutrition, the fungi also boosted productivity. Grain weight increased 7–10% in some varieties, particularly under low-phosphorus conditions. Scepter and Spitfire varieties showed the most dramatic yield improvements when treated with fungi.
Most importantly, the fungal partnerships broke the usual trade-off between phosphorus fertilization and mineral absorption. Even when researchers added high levels of phosphorus fertilizer, plants with fungal partners maintained better zinc-to-phytate ratios compared to untreated plants. In several wheat varieties, the fungi sharply improved this ratio, a key factor that affects how well the human body can absorb nutrients. In some cases, the improvement was large enough to suggest that bioavailable zinc levels may have doubled.
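The zinc-to-phytate ratio the study tracks is usually reported in nutrition work as a phytate:zinc molar ratio, where lower values mean better zinc absorption. A minimal sketch of that calculation (the concentrations below are illustrative placeholders, not the study's measurements):

```python
# Phytate:zinc molar ratio, a standard proxy for zinc bioavailability.
# Molar masses: phytic acid ~660.04 g/mol, zinc ~65.38 g/mol.
PHYTATE_MOLAR_MASS = 660.04
ZINC_MOLAR_MASS = 65.38

def phytate_zinc_molar_ratio(phytate_mg_per_kg: float, zinc_mg_per_kg: float) -> float:
    """Moles of phytate per mole of zinc in the same grain sample."""
    phytate_mol = phytate_mg_per_kg / PHYTATE_MOLAR_MASS
    zinc_mol = zinc_mg_per_kg / ZINC_MOLAR_MASS
    return phytate_mol / zinc_mol

# Illustrative numbers only: holding phytate steady while zinc rises
# lowers the ratio, i.e. improves expected absorption.
before = phytate_zinc_molar_ratio(8000, 15)   # ≈ 53
after = phytate_zinc_molar_ratio(8000, 25)    # ≈ 32
```

Nutrition literature commonly treats molar ratios above roughly 15 as a marker of poor zinc bioavailability, which is why raising zinc while phytate stays flat matters more than raising zinc alone.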
Could Fungi-Enhanced Wheat Help Fight Global Malnutrition?
Commercial fungal products already exist for various agricultural uses, though their effectiveness varies based on soil conditions and environmental factors. The researchers used a commercially available product containing about 800 fungal spores per gram of soil.
Different wheat varieties responded uniquely to fungal treatment, which means farmers need to match specific crops with appropriate fungal partners. Some varieties formed extensive partnerships immediately, while others showed more modest colonization rates.
For populations that rely heavily on wheat and other grains, particularly in developing countries with limited meat consumption, even small improvements in mineral absorption could yield significant health benefits. Iron deficiency anemia alone affects over one billion people worldwide and contributes to reduced brain development and increased death rates.
Even in wealthier countries where wheat products provide 20–25% of dietary zinc intake, enhanced mineral absorption could help address hidden deficiencies that impact immune function and overall health without obvious symptoms.
Rather than requiring genetic modification or dramatic changes to farming practices, this approach harnesses biological partnerships that have existed for millions of years. As researchers continue investigating these fungal networks, naturally enhanced nutrition in staple crops moves closer to becoming a practical reality for addressing one of humanity’s most persistent challenges.
What if everything we can see, stretching hundreds of millions of miles in all directions, sits inside a cosmic bubble where matter is spread more thinly than everywhere else in the universe? New research published in the Monthly Notices of the Royal Astronomical Society says that’s exactly where we might be living, and it could solve one of astronomy’s most confusing problems.
For years, scientists have been grappling with what study author Indranil Banik describes as “a crisis known as the Hubble tension: the local universe appears to be expanding about 10% faster than expected.” When astronomers measure how fast space is expanding using two different methods, they get answers that don’t match. The discrepancy has grown large enough to suggest our fundamental understanding of the universe might be wrong.
“Looking up at the night sky, it may seem our cosmic neighborhood is packed full of planets, stars and galaxies. But scientists have long suggested there may be far fewer galaxies in our cosmic surroundings than expected,” Banik writes in a commentary for The Conversation. His study provides the strongest evidence yet that Earth sits inside a massive cosmic void called the “KBC void,” a region roughly 300 million light-years across where matter is about 20% less dense than the cosmic average.
How Scientists Used Ancient Sound Waves to Test the Theory
To test whether we live in a cosmic void, the researchers examined fossilized sound waves from when the universe was young. About 380,000 years after the Big Bang, the cosmos cooled enough for atoms to form, and sound waves that had been traveling through the hot, dense early universe suddenly stopped. These waves left behind patterns that astronomers can still detect today by studying how galaxies cluster together.
“By studying CMB temperature fluctuations on different scales, we can essentially ‘listen’ to the sound of the early universe, which is especially ‘noisy’ at particular scales,” Banik explains in his commentary. These ancient sound patterns, called baryon acoustic oscillations, work like a cosmic measuring stick that astronomers use to gauge distances across the universe.
The team compiled measurements from major sky surveys over the past two decades, including data from the Sloan Digital Sky Survey and the Dark Energy Spectroscopic Instrument (DESI). They then compared how well different models matched these observations: the standard model assuming space is uniform everywhere versus models that account for us living in a local void.
The concept works because if Earth sits in a cosmic void, the sparse matter around us would be gravitationally pulled toward denser regions outside the void, creating an outward flow. “My colleagues and I previously argued that the Hubble tension might be due to our location within a large void. That’s because the sparse amount of matter in the void would be gravitationally attracted to the more dense matter outside it, continuously flowing out of the void,” Banik writes.
The Evidence Strongly Supports the Void Theory
The results were dramatic. When the researchers tested their void models against 20 years of astronomical data, they found compelling evidence that we do live in such a cosmic bubble. In their analysis of 42 separate distance measurements, the standard model without a void showed significant tension with the observations. But the void models reduced this tension from 3.8 sigma to just 1.1–1.4 sigma, which is well within normal statistical variation.
To put this in perspective, Banik uses a coin-flipping comparison: “Our research shows that the ΛCDM model without any local void is in ‘3.8 sigma tension’ with the BAO observations. This means the likelihood of a universe without a void fitting these data is equivalent to a fair coin landing heads 13 times in a row. By contrast, the chance of the BAO data looking the way they do in void models is equivalent to a fair coin landing heads just twice in a row.”
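The coin-flip comparison is just a change of units for a tail probability: convert the sigma level to a probability, then ask how many consecutive heads of a fair coin are equally unlikely. A rough sketch, assuming a simple one-sided Gaussian tail (the study's exact statistic may differ):

```python
import math

def sigma_to_prob(sigma: float) -> float:
    """One-sided Gaussian tail probability for a given sigma level."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

def equivalent_coin_flips(p: float) -> float:
    """Number of consecutive heads with the same probability: p = 0.5**n."""
    return -math.log2(p)

p_no_void = sigma_to_prob(3.8)           # ~7e-5
print(equivalent_coin_flips(p_no_void))  # ~13.7, i.e. roughly 13-14 heads in a row
```

Running the same conversion on the ~1.1 sigma figure for the void models gives a tail probability near 0.14, or only two to three heads in a row, matching the quoted comparison.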
Interestingly, the researchers didn’t adjust their void model parameters to fit the data. Instead, they used parameters established in previous work based on completely different observations, like galaxy counts and local expansion measurements.
What This Means for Our Understanding of the Universe
Rather than overturning current theories, this study proposes that where we are in the universe might be skewing our measurements. A large cosmic void around Earth could help explain why the local universe appears to be expanding faster than expected. The results encourage more precise low-redshift observations to test whether this local effect is real and significant.
The evidence extends beyond statistics. Galaxy surveys have consistently found fewer galaxies than expected in our local region across multiple types of observations, from optical light to X-ray wavelengths. Recent data from DESI, one of the most ambitious galaxy-mapping projects ever undertaken, also supports this interpretation.
As Banik writes, expanded research will be paramount: “In the future, it will be crucial to obtain more accurate BAO measurements at low redshift, where the BAO standard ruler looks larger on the sky – even more so if we are in a void.”
USERS of Elon Musk’s Starlink have been left without connection today following a major network outage.
Reported issues began to emerge around 3.30pm EDT, according to DownDetector.
A Starlink spokesperson said on X: “Starlink is currently in a network outage and we are actively implementing a solution.
“We appreciate your patience, we’ll share an update once this issue is resolved.”
The outage is reported to have caused disruption for thousands of users.
Some social media users have vented their frustration at the lack of connection.
One posted on Reddit: “Down in Tennessee. WFH too, right in the middle of the workday. Such a pain.”
Another said: “We have multiple Starlinks at different locations and they’re all down right now. We’re located in Florida.”
Users across the United States logged their loss of connection with DownDetector.
“Down in Maryland – Washington, DC area,” one shared.
Another posted: “Down in Northern California.”
“Down in rural central Texas,” reported a third.
Musk posted on X: “Service will be restored shortly.
“Sorry for the outage. SpaceX will remedy root cause to ensure it doesn’t happen again.”
Elon Musk and his companies have continued to make headlines over the past few months.
AI chatbot Grok went rogue earlier this month and started sharing pro-Hitler and antisemitic comments on X.
A spokesperson for xAI, the company behind Grok, said: “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts.
“Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.”
Customers have also made their way to a Tesla high-tech diner where hungry guests can charge their cars as they’re served by robots.
Musk opened up the Tesla Diner in Los Angeles on Monday.
He praised the venue on X last week, saying: “I just had dinner at the retro-futuristic @Tesla diner and Supercharger.
A sweeping cyber espionage operation targeting Microsoft (MSFT.O) server software compromised about 100 organizations as of the weekend, two of the organizations that helped uncover the campaign said on Monday.
Microsoft on Saturday issued an alert about “active attacks” on self-hosted SharePoint servers, which are widely used by organizations to share documents and collaborate within organizations. SharePoint instances run off of Microsoft servers were unaffected.
Dubbed a “zero-day” because it leverages a previously undisclosed digital weakness, the hacks allow spies to penetrate vulnerable servers and potentially drop a backdoor to secure continuous access to victim organizations.
Vaisha Bernard, the chief hacker at Eye Security, the Netherlands-based cybersecurity firm that discovered the hacking campaign targeting one of its clients on Friday, said an internet scan carried out with the Shadowserver Foundation had uncovered nearly 100 victims altogether – and that was before the technique behind the hack was widely known.
“It’s unambiguous,” Bernard said. “Who knows what other adversaries have done since to place other backdoors.”
He declined to identify the affected organizations, saying that the relevant national authorities had been notified.
The Shadowserver Foundation confirmed the 100 figure. It said most of those affected were in the United States and Germany, and the victims included government organizations.
Another researcher said that, so far, the spying appeared to be the work of a single hacker or set of hackers.
“It’s possible that this will quickly change,” said Rafe Pilling, director of Threat Intelligence at Sophos, a British cybersecurity firm.
Microsoft had “provided security updates and encourages customers to install them,” a company spokesperson said in an emailed statement.
It was not clear who was behind the ongoing hack, but Alphabet’s (GOOGL.O) Google, which has visibility into wide swaths of internet traffic, said it tied at least some of the hacks to a “China-nexus threat actor.”
Eight babies have been born in the UK using genetic material from three people to prevent devastating and often fatal conditions, doctors say.
The method, pioneered by UK scientists, combines the egg and sperm from a mum and dad with a second egg from a donor woman.
The technique has been legal in the UK for a decade, but this is the first proof that it is leading to children born free of incurable mitochondrial disease.
These conditions are normally passed from mother to child, starving the body of energy.
This can cause severe disability and some babies die within days of being born. Couples know they are at risk if previous children, family members or the mother has been affected.
Children born through the three-person technique inherit most of their DNA, their genetic blueprint, from their parents, but also get a tiny amount, about 0.1%, from the second woman. This is a change that is passed down the generations.
None of the families who have been through the process are speaking publicly to protect their privacy, but have issued anonymous statements through the Newcastle Fertility Centre where the procedures took place.
‘Overwhelmed with gratitude’
“After years of uncertainty this treatment gave us hope – and then it gave us our baby,” said the mother of a baby girl.
“We look at them now, full of life and possibility, and we’re overwhelmed with gratitude.”
The mother of a baby boy added: “Thanks to this incredible advancement and the support we received, our little family is complete.
“The emotional burden of mitochondrial disease has been lifted, and in its place is hope, joy, and deep gratitude.”
Mitochondria are tiny structures inside nearly every one of our cells. They are the reason we breathe as they use oxygen to convert food into the form of energy our bodies use as fuel.
Defective mitochondria can leave the body with insufficient energy to keep the heart beating as well as causing brain damage, seizures, blindness, muscle weakness and organ failure.
About one in 5,000 babies are born with mitochondrial disease. The team in Newcastle anticipates demand for 20 to 30 babies to be born through the three-person method each year.
Some parents have faced the agony of having multiple children die from these diseases.
Mitochondria are passed down only from mother to child. So this pioneering fertility technique uses both parents and a woman who donates her healthy mitochondria.
The science was developed more than a decade ago at Newcastle University and the Newcastle upon Tyne Hospitals NHS Foundation Trust and a specialist service opened within the NHS in 2017.
The eggs from both the mother and the donor are fertilised in the lab with the dad’s sperm.
The embryos develop until the DNA from the sperm and egg form a pair of structures called the pro-nuclei. These contain the blueprints for building the human body, such as hair colour and height.
The pro-nuclei are removed from both embryos and the parents’ DNA is put inside the embryo packed with healthy mitochondria.
The resulting child is genetically related to their parents, but should be free from mitochondrial disease.
A pair of reports, in the New England Journal of Medicine, showed 22 families have gone through the process at the Newcastle Fertility Centre.
It led to four boys and four girls, including one pair of twins, and one ongoing pregnancy.
“To see the relief and joy in the faces of the parents of these babies after such a long wait and fear of consequences, it’s brilliant to be able to see these babies alive, thriving and developing normally,” Prof Bobby McFarland, the director of the NHS Highly Specialised Service for Rare Mitochondrial Disorders told the BBC.
All of the babies were born free of mitochondrial disease and met their expected developmental milestones.
There was a case of epilepsy, which cleared up by itself, and one child has an abnormal heart rhythm which is being successfully treated.
These are not thought to be connected to defective mitochondria. It is not known whether this is part of the known risks of IVF, something specific to the three-person method or something that has been detected only because the health of all babies born through this technique is monitored intensely.
Another key question hanging over the approach has been whether defective mitochondria would be transferred into the healthy embryo and what the consequences could be.
The results show that in five cases the diseased mitochondria were undetectable. In the other three, between 5% and 20% of mitochondria were defective in blood and urine samples.
This is below the 80% level thought to cause disease. It will take further work to understand why this occurred and if it can be prevented.
Prof Mary Herbert, from Newcastle University and Monash University, said: “The findings give grounds for optimism. However, research to better understand the limitations of mitochondrial donation technologies will be essential to further improve treatment outcomes.”
The breakthrough gives hope to the Kitto family.
Kat’s youngest daughter, Poppy, 14, has the disease. Her eldest, Lily, 16, may pass it on to her children.
Poppy is in a wheelchair, is non-verbal and is fed through a tube.
“It’s impacted a huge part of her life,” says Kat, “we have a lovely time as she is, but there are the moments where you realize how devastating mitochondrial disease is”.
Despite decades of work there is still no cure for mitochondrial disease, but the chance to prevent it being passed on gives hope to Lily.
“It’s the future generations like myself, or my children, or my cousins, who can have that outlook of a normal life,” she says.
‘Only the UK could do this’
The UK not only developed the science of three-person babies, but it also became the first country in the world to introduce laws to allow their creation after a vote in Parliament in 2015.
There was controversy as mitochondria have DNA of their own, which controls how they function.
It means the children have inherited DNA from their parents and around 0.1% from the donor woman.
Any girls born through this technique would pass this on to their own children, so it is a permanent alteration of human genetic inheritance.
This was a step too far for some when the technology was debated, raising fears it would open the doors to genetically-modified “designer” babies.
Prof Sir Doug Turnbull, from Newcastle University, told me: “I think this is the only place in the world this could have happened. There’s been first-class science to get us to where we are, there’s been legislation to allow it to move into clinical treatment, the NHS to help support it, and now we’ve got eight children that seem to be free of mitochondrial disease. What a wonderful result.”
While it might seem surprising that returning from just 400 kilometers above Earth takes nearly a full day, the extended duration is a result of complex orbital mechanics, safety protocols, and precise landing requirements.
When Indian astronaut Shubhanshu Shukla and his Axiom-4 crewmates undocked from the International Space Station (ISS) on July 14, 2025, they began a carefully planned 22.5-hour journey back to Earth.
Unlike simply descending straight down, the Dragon spacecraft must first perform a series of engine burns to safely distance itself from the ISS and enter a slightly different orbit. This manoeuvre prevents any risk of collision with the station and initiates what engineers call “free flight,” during which the spacecraft orbits Earth independently for several hours before re-entry begins.
The timing of the deorbit burn, the critical engine firing that slows the capsule enough to begin atmospheric re-entry, is carefully calculated to align with the rotation of Earth and the position of the designated splashdown zone off the coast of California.
Since the ISS orbits Earth at roughly 28,000 km/h, the spacecraft must wait for the right orbital position to ensure a safe and accurate landing.
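The timing constraint follows from basic orbital mechanics: at ISS altitude a circular orbit takes about an hour and a half, so the deorbit burn must wait for an orbit whose ground track crosses the recovery zone. A back-of-envelope check using Kepler's third law (constants rounded; this is an estimate, not SpaceX's flight data):

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000         # mean Earth radius, m

def circular_orbit(altitude_m: float):
    """Orbital speed (km/h) and period (minutes) for a circular orbit."""
    r = R_EARTH + altitude_m
    speed = math.sqrt(MU_EARTH / r)                    # vis-viva for circular orbit, m/s
    period = 2 * math.pi * math.sqrt(r**3 / MU_EARTH)  # Kepler's third law, s
    return speed * 3.6, period / 60

speed_kmh, period_min = circular_orbit(400_000)
# speed ≈ 27,600 km/h (the article's ~28,000 km/h), period ≈ 92 minutes
```

With roughly 16 orbits a day, only a few of them pass near any given splashdown zone, which is why the free-flight phase can stretch the trip to many hours.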
During re-entry, the Dragon capsule faces extreme heat, with temperatures nearing 1,600 degrees Celsius. To protect the crew and spacecraft, the descent is gradual and carefully controlled.
Parachutes deploy in two stages—first stabilising chutes at about 5.7 km altitude, then main parachutes at around 2 km—to slow the descent for a safe ocean splashdown.
Weather conditions and recovery ship availability also influence the timing. If conditions at the primary landing site are unfavourable, the spacecraft may remain in orbit longer before initiating re-entry.
As Indian astronaut Shubhanshu Shukla prepared for his return to Earth, his farewell message from the International Space Station (ISS) resonated with pride, gratitude, and hope for India’s space future.
In a heartfelt ceremony, Shukla reflected on his journey, the achievements of the mission, and the collaborative spirit that defined his time in orbit.
In his address, Shukla captured the spirit of a new India as seen from the vantage point of space. “Today, India looks ambitious from space, fearless, confident, and proud. India is still saare jahan se accha,” he declared, referencing the iconic patriotic song that has inspired generations.
Shux described his mission as an “incredible journey” and emphasised that while his personal chapter in space exploration is ending, the journey of the Indian space agency is just beginning. “If we come together, it is attainable,” he said, urging continued unity and collaboration for future achievements.
CELEBRATING SCIENCE, COLLABORATION AND CAMARADERIE
Reflecting on his time aboard the ISS, Shukla expressed deep gratitude to the people who made the mission possible. “It has been made incredible by people on the space station. It was a joy to be working with professionals like you,” he said, acknowledging both his Indian and international colleagues.
He highlighted the mission’s scientific achievements, outreach efforts, and the sense of wonder that comes from viewing Earth from orbit. “We have done a lot of science, outreach, and always looked out on Earth – it seems magical to me,” Shukla noted.
Shukla also thanked ISRO for its vision and support, the researchers and students who contributed to the mission’s experiments, and NASA for their training and logistical support. “These missions have far-reaching implications beyond science and will also boost our countries,” he added.
THE POWER OF GLOBAL UNITY
Shukla’s sentiments were echoed by fellow astronauts. Peggy Whitson, Ax-4 commander, praised the camaraderie and teaching spirit that the ISS team extended to the Ax-4 crew.
Hungarian astronaut Tibor Kapu spoke emotionally: “We came with a purpose and made a lot of things happen – science, amazing pictures of Earth, funny videos – and made a lot of people proud on the ground. We made friends in space, shared inside jokes, and I can say we made it and did it. We were able to do it because we are people collaborating in space in the name of science. It was a great mission.”
A simple blood test might soon reveal whether your brain functions like a 30-year-old’s or a 70-year-old’s — and whether you’re likely to live a long, healthy life or face an early death. Stanford University researchers analyzed blood samples from nearly 45,000 people and discovered something surprising: organs age at wildly different speeds, and having a young brain and immune system could be your ticket to longevity.
People whose brains aged rapidly faced the same Alzheimer’s risk as those carrying the most dangerous genetic variant for the disease. On the flip side, those with younger brains had protection equal to carrying protective genes. Most striking of all, people with both young brains and immune systems had 56% lower odds of dying during the study.
Reading Your Body’s Age Through Blood
Stanford’s research team, led by Tony Wyss-Coray, PhD, a professor of neurology and neurological sciences, created what works like an age test for 11 major organs by measuring nearly 3,000 proteins in blood samples. When organs age or sustain damage, they leak specific proteins into the bloodstream — traces that reveal their condition. Scientists trained AI models to predict chronological age based on these protein patterns from each organ.
If someone’s predicted organ age exceeded their actual age, that organ was labeled “aged.” When it fell below, the organ was considered “youthful.” The method worked across 44,498 participants aged 40-70 from the UK Biobank, with researchers tracking them for up to 17 years to monitor disease development and deaths.
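The labeling step described above amounts to comparing each organ's predicted age with the person's chronological age. A simplified sketch (the 5-year threshold and the organ names are illustrative; the study used statistical cutoffs on cohort-normalized age gaps, not a fixed year count):

```python
def label_organs(chronological_age: float,
                 predicted_ages: dict,
                 threshold_years: float = 5.0) -> dict:
    """Label each organ 'aged', 'youthful', or 'normal' by its age gap."""
    labels = {}
    for organ, predicted in predicted_ages.items():
        gap = predicted - chronological_age  # positive gap = older than expected
        if gap > threshold_years:
            labels[organ] = "aged"
        elif gap < -threshold_years:
            labels[organ] = "youthful"
        else:
            labels[organ] = "normal"
    return labels

print(label_organs(60, {"brain": 52, "immune": 54, "liver": 67}))
# {'brain': 'youthful', 'immune': 'youthful', 'liver': 'aged'}
```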
The findings, published in Nature Medicine, confirmed that organs don’t age in lockstep. Brain aging showed minimal correlation with other organs, proving that biological aging follows its own timeline in different body systems.
The More Aged Organs, The Higher Your Risk
Death risk climbed steeply with each additional aged organ. Those with 2-4 aged organs faced 2.3 times higher death risk. People with 5-7 aged organs saw their risk jump to 4.5 times normal. Most concerning were those with 8 or more aged organs. These individuals faced 8.3 times higher odds of death.
“More than 60% of people with 8+ extremely aged organs at blood draw died within 15 years,” researchers observed.
Brain aging proved especially critical among all organs tested. Beyond predicting death, it signaled increased risk for conditions far outside the brain, including heart failure and lung disease. This makes sense given the brain’s role as the body’s command center, controlling hormones, immune responses, and other vital functions through intricate signaling networks.
Your Lifestyle Shapes Your Organ Age
The research brought some good news: organ aging isn’t set in stone by genetics. Multiple lifestyle factors affected biological age across several organs. Smoking, heavy drinking, processed meat consumption, and poor sleep accelerated aging. In contrast, vigorous exercise, eating fish, and higher education levels correlated with younger organ profiles.
Several supplements and medications showed protective effects. Ibuprofen, glucosamine, cod liver oil, multivitamins, and vitamin C were linked to younger biological ages in multiple organs, especially kidneys, brain, and pancreas. The hormone therapy Premarin, commonly prescribed for menopausal symptoms, correlated with younger immune, liver, and artery profiles.
Young Brain and Immune System: The Golden Combination
While aged organs generally meant higher death risk, the study revealed an important twist: not all youthful organs offered equal benefits. Surprisingly, people with young arteries actually showed increased death risk, and those with broadly youthful organs across many systems showed no survival advantage over normal agers.
Two organs, however, provided exceptional protection. People with young brains had 40% lower death risk, while those with young immune systems saw 42% lower risk. The combination proved especially powerful — individuals with both enjoyed the strongest protection from death.
“I expected many more organs to be linked to longevity, but our data suggest the immune system and brain are key,” lead author Hamilton Oh, PhD, tells StudyFinds. “After thinking more about it though, it makes intuitive sense. Both the brain and immune system control so many parts of our physiology – the brain through nerve branches that sprout from the spinal cord and the immune system through resident and migratory cells present in all tissues. These systems may be the guardians of our whole body.”
During the 17-year follow-up, only 3.8% of people with young brains and immune systems died, compared to 7.9% of normal agers. The brain-immune connection makes biological sense, as these systems constantly communicate and chronic inflammation speeds up aging throughout the body.
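Those raw death rates let readers roughly reproduce the headline odds figure: convert each rate to odds and compare. The published 56% is covariate-adjusted, so this crude version lands close to but not exactly on it:

```python
def odds(p: float) -> float:
    """Convert a probability to odds."""
    return p / (1 - p)

young = odds(0.038)    # deaths among young brain + immune system group
normal = odds(0.079)   # deaths among normal agers
reduction = 1 - young / normal
print(f"{reduction:.0%} lower odds")  # ~54%, close to the adjusted 56%
```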
Brain aging stemmed largely from proteins produced by oligodendrocytes. These are cells that create myelin, the insulation around nerve fibers. This suggests white matter breakdown plays a central role in brain aging. The strongest marker was neurofilament light chain, already used in clinical trials to track brain degeneration.
The biological analysis showed that young brain aging correlated with preserved brain support structures, potentially because inflammatory factors caused less damage. Young immune aging linked to lower levels of inflammation-promoting proteins, indicating that controlling chronic inflammation helps maintain both systems.
Instead of viewing aging as an unstoppable, uniform decline, this research shows it’s an organ-specific process where the brain and immune system serve as crucial regulators of lifespan. For the first time, scientists can map which biological systems matter most for longevity — and many factors that influence organ aging appear modifiable through lifestyle choices.
The new Galaxy Z Fold7, Z Flip7, and the new Z Flip7 FE are available for pre-orders now.
Samsung introduced several updates to its foldable devices lineup on Wednesday (Jul 9), with the new Galaxy Z Fold7, Z Flip7 and Z Flip7 FE taking the stage at the latest Unpacked event.
The Korean electronics company unveiled the upgrades – including a new version of its smartwatch – in New York, and also announced an expanded partnership with Google to inject more artificial intelligence into its foldable lineup.
Pre-orders for the devices in Singapore have already started, with those who pre-ordered able to receive their devices from Jul 29. The devices will be available in stores for purchase from Aug 1.
Here’s a quick look at each device announced:
A THINNER GALAXY FOLD7
The Fold7 is much thinner and lighter than its predecessors, coming in at 4.2mm (0.17 inch) thick when unfolded and 8.9mm folded.
It weighs 215g, an impressive feat considering the company also fitted bigger screens than the Fold 6’s – 6.5 inches on the exterior and 8 inches on the interior.
The battery capacity remains the same as the previous generation.
A 200 megapixel camera will act as the main camera and a 10 megapixel camera will extend along the frame of the phone, giving users a quick option to capture wide shots. The Fold7 will retail for US$1,999.
In Singapore, it’s priced from S$2,698.
THE GALAXY Z FLIP7 AND THE FE
The flippable cousin of the Fold has an enlarged 4.1-inch top screen and folds down to only 13.7mm. It weighs just 188g.
It gets a slightly bigger 4,300mAh battery but keeps the 50 megapixel main camera.
A cheaper version of the phone, called the Galaxy Z Flip7 FE, was also announced. It’s a slightly smaller version of its premium counterpart, with a 6.7 inch screen.
Yet another story from the Soham Parekh controversy is out. Dhruv Amin, co-founder of AI startup Create, reveals Parekh came to work only once, frequently missed meetings and delayed deliverables.
Soham Parekh admits working multiple full-time jobs at once, says it was necessary
In a bizarre saga that could be ripped straight from a tech satire, Soham Parekh, an India-based software engineer, has stunned Silicon Valley after admitting to secretly working full-time for dozens of US startups at the same time. What began as whispers of moonlighting quickly exploded into a full-blown controversy after revelations surfaced that Parekh was juggling roles at up to 34 different companies, sparking outrage, disbelief and a flurry of memes.
The story broke when Suhail Doshi, founder and former CEO of Mixpanel, posted on X (formerly Twitter), accusing Parekh of deceiving several Y Combinator-backed startups. Doshi claimed he had fired Parekh within a week of uncovering the truth. As the thread gained traction, more founders chimed in, admitting they had either hired or interviewed Parekh, only to discover he was already employed elsewhere.
One such founder, Dhruv Amin, co-founder of AI startup Create, shared his experience with Parekh in an X thread that quickly went viral. Dhruv explained that Soham had joined his team in San Francisco as engineer number five, on the back of a recruiter’s recommendation and an impressive pair-programming interview. “Yes, we hired him. He was eager and crushed our in-person pair programming onsite. I believe he’s actually a good engineer,” Dhruv wrote.
But the enthusiasm quickly turned into frustration.
After accepting the job, Parekh said he’d be away in New York and would begin a week later. When Monday rolled around, he texted Dhruv excitedly, only to call in sick on his first day. “He said he’d onboard from home. Gave an address to ship the laptop,” Dhruv noted.
From there, things only got weirder. Parekh missed meetings, delayed deliverables, and made excuses. It all unravelled when Dhruv’s team discovered he was actively working at another company, Sync, at the same time.
“When we called Soham up, he denied it to the end. Said Sync guys were just friends,” Dhruv recalled. But the real kicker came when Sync published an ‘Employee of the Month’ video, featuring none other than Soham Parekh himself.
His contract was swiftly terminated. “He dipped,” Dhruv said, assuming he was just a young engineer who had made a bad call. But when the wider story broke, Dhruv’s embarrassment turned to amazement. “Then I was pissed. Then impressed. Still not sure how he pulled it off for so long with in-person startups and long hours, but appreciated the hustle. Hope he had a good reason. Feels like a stressful way to make money.”
As the tech world demanded answers, Parekh finally spoke out in an interview on The Backchannel podcast (TBPN), confirming what many had suspected. “It is true,” he said, calmly owning up to the deception. “I’m not proud of what I’ve done. But, you know, financial circumstances, essentially. No one really likes to work 140 hours a week, right? But I had to do this out of necessity. I was in extremely dire financial circumstances.”
He added that he completed all the work himself — no shortcuts, no AI, no external help — and maintained that his output met expectations.
Parekh claimed the hustle began in 2022, after postponing graduate school and enrolling in an online programme from Georgia Tech. But that detail raised more questions when a Georgia Tech spokesperson confirmed there was no record of his enrolment, casting further doubt on the timeline and fuelling speculation around how far the deception may have gone.
Despite the storm, Parekh has already landed on his feet. He’s now joined a San Francisco-based AI startup named Darwin, and has promised to leave his multi-job days behind. “I won’t be taking up any more additional jobs,” he said.
Over the past decade, health insurance companies have increasingly embraced the use of artificial intelligence algorithms. Unlike doctors and hospitals, which use AI to help diagnose and treat patients, health insurers use these algorithms to decide whether to pay for health care treatments and services that are recommended by a given patient’s physicians.
One of the most common examples is prior authorization, which is when your doctor needs to receive payment approval from your insurance company before providing you care. Many insurers use an algorithm to decide whether the requested care is “medically necessary” and should be covered.
These AI systems also help insurers decide how much care a patient is entitled to — for example, how many days of hospital care a patient can receive after surgery.
If an insurer declines to pay for a treatment your doctor recommends, you usually have three options. You can try to appeal the decision, but that process can take a lot of time, money and expert help. Only 1 in 500 claim denials are appealed. You can agree to a different treatment that your insurer will cover. Or you can pay for the recommended treatment yourself, which is often not realistic because of high health care costs.
As a legal scholar who studies health law and policy, I’m concerned about how insurance algorithms affect people’s health. Like with AI algorithms used by doctors and hospitals, these tools can potentially improve care and reduce costs. Insurers say that AI helps them make quick, safe decisions about what care is necessary and avoids wasteful or harmful treatments.
But there’s strong evidence that the opposite can be true. These systems are sometimes used to delay or deny care that should be covered, all in the name of saving money.
A Pattern Of Withholding Care
Presumably, companies feed a patient’s health care records and other relevant information into health care coverage algorithms and compare that information with current medical standards of care to decide whether to cover the patient’s claim. However, insurers have refused to disclose how these algorithms work in making such decisions, so it is impossible to say exactly how they operate in practice.
Using AI to review coverage saves insurers time and resources, especially because it means fewer medical professionals are needed to review each case. But the financial benefit to insurers doesn’t stop there. If an AI system quickly denies a valid claim, and the patient appeals, that appeal process can take years. If the patient is seriously ill and expected to die soon, the insurance company might save money simply by dragging out the process in the hope that the patient dies before the case is resolved.
This creates the disturbing possibility that insurers might use algorithms to withhold care for expensive, long-term or terminal health problems, such as chronic conditions or other debilitating disabilities. One reporter put it bluntly: “Many older adults who spent their lives paying into Medicare now face amputation or cancer and are forced to either pay for care themselves or go without.”
Research supports this concern – patients with chronic illnesses are more likely to be denied coverage and suffer as a result. In addition, Black and Hispanic people and those of other nonwhite ethnicities, as well as people who identify as lesbian, gay, bisexual or transgender, are more likely to experience claims denials. Some evidence also suggests that prior authorization may increase rather than decrease health care system costs.
Insurers argue that patients can always pay for any treatment themselves, so they’re not really being denied care. But this argument ignores reality. These decisions have serious health consequences, especially when people can’t afford the care they need.
Moving Toward Regulation
Unlike medical algorithms, insurance AI tools are largely unregulated. They don’t have to go through Food and Drug Administration review, and insurance companies often say their algorithms are trade secrets.
That means there’s no public information about how these tools make decisions, and there’s no outside testing to see whether they’re safe, fair or effective. No peer-reviewed studies exist to show how well they actually work in the real world.
There does seem to be some momentum for change. The Centers for Medicare & Medicaid Services, or CMS, which is the federal agency in charge of Medicare and Medicaid, recently announced that insurers in Medicare Advantage plans must base decisions on the needs of individual patients – not just on generic criteria. But these rules still let insurers create their own decision-making standards, and they still don’t require any outside testing to prove their systems work before using them. Plus, federal rules can only regulate federal public health programs like Medicare. They do not apply to private insurers who do not provide federal health program coverage.
Some states, including Colorado, Georgia, Florida, Maine and Texas, have proposed laws to rein in insurance AI. A few have passed new laws, including a 2024 California statute that requires a licensed physician to supervise the use of insurance coverage algorithms.
But most state laws suffer from the same weaknesses as the new CMS rule. They leave too much control in the hands of insurers to decide how to define “medical necessity” and in what contexts to use algorithms for coverage decisions. They also don’t require those algorithms to be reviewed by neutral experts before use. And even strong state laws wouldn’t be enough, because states generally can’t regulate Medicare or insurers that operate outside their borders.
A Role For The FDA
In the view of many health law experts, the gap between insurers’ actions and patient needs has become so wide that regulating health care coverage algorithms is now imperative. As I argue in an essay to be published in the Indiana Law Journal, the FDA is well positioned to do so.
The FDA is staffed with medical experts who have the capability to evaluate insurance algorithms before they are used to make coverage decisions. The agency already reviews many medical AI tools for safety and effectiveness. FDA oversight would also provide a uniform, national regulatory scheme instead of a patchwork of rules across the country.
Some people argue that the FDA’s power here is limited. For the purposes of FDA regulation, a medical device is defined as an instrument “intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.” Because health insurance algorithms are not used to diagnose, treat or prevent disease, Congress may need to amend the definition of a medical device before the FDA can regulate those algorithms.
If the FDA’s current authority isn’t enough to cover insurance algorithms, Congress could change the law to give it that power. Meanwhile, CMS and state governments could require independent testing of these algorithms for safety, accuracy and fairness. That might also push insurers to support a single national standard – like FDA regulation – instead of facing a patchwork of rules across the country.
The move toward regulating how health insurers use AI in determining coverage has clearly begun, but it is still awaiting a robust push. Patients’ lives are literally on the line.
Alphabet’s (GOOGL.O) Google has been hit with an EU antitrust complaint over its AI Overviews from a group of independent publishers, which has also asked for an interim measure to prevent allegedly irreparable harm to them, according to a document seen by Reuters.
Google’s AI Overviews are AI-generated summaries that appear above traditional hyperlinks to relevant webpages and are shown to users in more than 100 countries. Google began adding advertisements to AI Overviews last May.
The company is making its biggest bet by integrating AI into search but the move has sparked concerns from some content providers such as publishers.
The Independent Publishers Alliance document, dated June 30, sets out a complaint to the European Commission and alleges that Google abuses its market power in online search.
“Google’s core search engine service is misusing web content for Google’s AI Overviews in Google Search, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss,” the document said.
It said Google positions its AI Overviews at the top of its general search results page to display its own summaries, which are generated using publisher material, and alleges that this positioning disadvantages publishers’ original content.
“Publishers using Google Search do not have the option to opt out from their material being ingested for Google’s AI large language model training and/or from being crawled for summaries, without losing their ability to appear in Google’s general search results page,” the complaint said.
The Commission declined to comment.
The UK’s Competition and Markets Authority confirmed receipt of the complaint.
Google said it sends billions of clicks to websites each day.
“New AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered,” a Google spokesperson said.
The Independent Publishers Alliance’s website says it is a nonprofit community advocating for independent publishers, which it does not name.
The Movement for an Open Web, whose members include digital advertisers and publishers, and British non-profit Foxglove Legal Community Interest Company, which says it advocates for fairness in the tech world, are also signatories to the complaint.
They said an interim measure was necessary to prevent serious irreparable harm to competition and to ensure access to news.
Google said numerous claims about traffic from search are often based on highly incomplete and skewed data.
“The reality is that sites can gain and lose traffic for a variety of reasons, including seasonal demand, interests of users, and regular algorithmic updates to Search,” the Google spokesperson said.
Foxglove co-executive director Rosa Curling said journalists and publishers face a dire situation.
“Independent news faces an existential threat: Google’s AI Overviews,” she told Reuters.
“That’s why with this complaint, Foxglove and our partners are urging the European Commission, along with other regulators around the world, to take a stand and allow independent journalism to opt out,” Curling said.
Leaving a workplace after a certain period is part of one’s professional journey: people switch companies for better opportunities, higher compensation or different kinds of exposure. The decision, however, lies entirely with the employee, and the current employer has no role to play, at least not before a formal resignation is submitted. But what if your company already knew your plans in advance? Unlikely as it sounds for anyone to learn of someone’s resignation plans, ChatGPT seems to have made that possible.
Sarthak Ahuja, an investment banker and finance content creator, recently explained how ChatGPT can flag an employee who is about to resign. Not just that, it can reportedly also help companies tell whether someone is applying elsewhere months before it becomes public.
In his explanatory video, Sarthak said that if a company transcribes or records its Zoom calls, the meeting transcripts can later be uploaded to ChatGPT, which looks for patterns indicating who may resign first. “There are two things. The first is that ChatGPT recognises people who use the term ‘you guys’ a lot more than the term ‘we should.’ These people are at higher risk of resigning from the company. The second factor is who makes the most number of excuses about why their video is turned off during meetings,” Sarthak explained in his post.
He then went on to explain how these patterns feed information to the manager, who can then rank employees from least to most engaged, with clear actions to re-engage them. The factors take into account expressed frustration, tenure at the company, pay relative to market rates and more. Companies can also use the information to withhold promotions or higher pay packages from such employees during appraisals.
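The heuristic described here is simple enough to sketch in code. The toy below is purely illustrative – the function name, inputs and weighting are all invented for this example, and this is not how ChatGPT or any real HR tool actually works – but it captures the two signals mentioned: distancing language versus inclusive language, plus camera-off excuses.

```python
import re

def disengagement_scores(transcripts, camera_off_counts):
    """Rank speakers by the crude markers described in the post.

    transcripts: dict mapping speaker -> list of utterance strings
    camera_off_counts: dict mapping speaker -> meetings joined with camera off
    Returns speakers sorted from most to least "at risk" of quitting.
    """
    scores = {}
    for speaker, lines in transcripts.items():
        text = " ".join(lines).lower()
        # Distancing language ("you guys") counts against the speaker;
        # inclusive language ("we should") counts in their favour.
        you_guys = len(re.findall(r"\byou guys\b", text))
        we_should = len(re.findall(r"\bwe should\b", text))
        scores[speaker] = you_guys - we_should + camera_off_counts.get(speaker, 0)
    return sorted(scores, key=scores.get, reverse=True)
```

Feeding in per-speaker transcripts and camera-off counts returns a ranking with the most "at risk" speaker first; a real system would of course weigh far more factors than this two-signal sketch.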
“It’s pattern recognition from human behaviour coupled with psychology and market dynamics put into use. Forbes did a piece this week called ‘5 ChatGPT Prompts To Predict Which Of Your Employees Will Quit Next’ and it’s something you should read,” the creator wrote in his caption.
The post grabbed quite some attention on Instagram, with many sharing diverse opinions about such possibilities. Some even poked fun at the early prediction of resignations. One wrote, “More like a self-fulfilling prophecy. People who receive unfair appraisals/compensation are more likely to leave,” while another added, “Functional analysis in psychotherapy using AI is definitely challenging but will deliver robust results once optimised.”
Your body is quietly collecting genetic changes right now, and scientists want to map every single one. A massive new research network is launching the most ambitious genetic study since the Human Genome Project, but this time they’re tracking the thousands of DNA alterations that pile up throughout your lifetime.
The Somatic Mosaicism across Human Tissues (SMaHT) Network, backed by the National Institutes of Health, plans to catalog genetic changes that happen after conception. These aren’t mutations inherited from your parents; rather, they’re DNA alterations that accumulate as you live, making each person’s tissues genetically distinct by adulthood.
These quadrillions of genetic changes help explain why identical twins grow more genetically different with age, and why diseases like cancer can seemingly appear from nowhere. “A typical cell may acquire hundreds to thousands of somatic mutations in a lifetime,” the researchers write in their paper published in Nature. “There are trillions of cells in a human body and so the total number of somatic mutations acquired in a single individual may well exceed quadrillions.”
Your Body’s Daily DNA Changes
Your cells face constant attack from environmental toxins, UV radiation, and normal cellular processes. Most DNA changes are harmless, but some dramatically alter cell behavior. Cancer represents the most extreme example, though these mutations also factor into heart disease, neurological disorders, and aging.
Different body parts collect mutations at wildly different rates. Brain cells, which rarely divide after birth, gather about 16-20 new mutations yearly. Rapidly dividing colon cells rack up approximately 44 mutations annually. Sun-exposed skin shows clear UV damage patterns, while smokers’ lung tissue bears molecular tobacco scars.
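A back-of-the-envelope calculation shows how those per-year rates compound over a lifetime. The sketch below uses only the figures quoted above (taking 18/year as the midpoint of the brain’s 16-20 range); the linear model is illustrative arithmetic, not the study’s methodology.

```python
# Per-cell mutation rates quoted in the article (mutations per year).
RATES_PER_YEAR = {
    "brain (rarely dividing neurons)": 18,   # midpoint of the quoted 16-20
    "colon (rapidly dividing cells)": 44,
}

def expected_mutations(age_years):
    """Rough expected somatic mutations per cell at a given age,
    assuming a constant accumulation rate."""
    return {tissue: rate * age_years for tissue, rate in RATES_PER_YEAR.items()}
```

For a 60-year-old, this gives roughly 1,080 mutations per brain cell and 2,640 per colon cell, consistent with the “hundreds to thousands” per-cell range the researchers quote.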
These patterns, called mutational signatures, work like genetic fingerprints, revealing the specific environmental exposures or cellular processes that caused particular DNA changes.
“Specific mutations can be present in very small numbers of cells or even single cells, and so detecting them is like looking for a needle in a haystack,” says co-lead author Tim Coorens, from the Broad Institute of MIT and Harvard and currently a research group leader at the European Bioinformatics Institute, in a statement.
The Largest Genetic Study Ever Attempted
SMaHT researchers plan to analyze tissue samples from 150 deceased donors, examining 19 different tissue types from each person. Samples will include organs from all three developmental layers forming the human body: brain, skin, and adrenal glands; heart, blood, and muscle; plus lungs, liver, and intestines.
Advanced DNA sequencing methods will detect mutations present in just single cells among millions. One technique, duplex sequencing (a method that sequences both strands of DNA to reduce errors), reduces error rates to less than one in 100 million, allowing scientists to spot authentic mutations that would otherwise disappear in sequencing noise.
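Why sequencing both strands cuts errors so sharply can be seen with a simplified independence model – an illustrative sketch, not the published duplex-sequencing error analysis. A false base call survives the duplex consensus only if both strands independently err at the same position and happen to show the same wrong base.

```python
def duplex_error_rate(single_strand_error_rate):
    """Simplified model: a false call survives duplex consensus only if
    both strands independently err at the same position AND agree on
    which wrong base to report (1 of 3 possible substitutions)."""
    p = single_strand_error_rate
    return p * p / 3
```

With a typical raw short-read error rate of around one in a thousand, this already yields about 3 in 10 million; real duplex pipelines layer on further filters, which is how error rates below one in 100 million become achievable.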
Over 250 researchers from 52 institutions are involved, organized into five genome characterization centers and 14 technology development projects. Each donor gets extensive analysis, with researchers collecting tissue samples plus detailed medical histories, environmental exposure data, and demographics.
Game-Changing Discoveries Already Emerging
Early SMaHT research has revealed unexpected insights about human development and disease. Studies tracking these mutations show that when a fertilized egg first divides, one of the two resulting cells often gives rise to twice as many descendant cells in the adult body as its sibling—explaining developmental patterns scientists observed for decades.
More surprisingly, normal tissues often harbor mutations typically associated with cancer. In healthy 60-year-olds, approximately 90% of endometrial tissue contains driver mutations that could theoretically cause cancer, while only about 1% of colon tissue shows similar changes, despite colon cells having much higher mutation frequency (the rate at which mutations occur).
This contradiction demonstrates how context determines whether mutations cause disease. The endometrium goes through monthly cycles of shedding and regrowth that may prevent dangerous cell populations from establishing, while colon tissue faces different biological pressures.
SMaHT’s comprehensive mapping could transform medical practice by establishing baselines for normal mutation patterns, helping doctors distinguish between harmless age-related changes and early disease signs. The research might also predict which patients face higher risks for specific cancers or other mutation-driven diseases.
Beyond medicine, the project promises insights into fundamental biological questions about aging, longevity, and how environmental exposures leave lasting cellular marks. Rather than having one static genome, each person carries millions of subtly different genetic variants distributed across tissues, a discovery that could revolutionize approaches to human health and personalized medicine.
Microsoft is firing thousands of workers, its second mass layoff in months. The tech giant began sending out layoff notices Wednesday (July 2, 2025).
The company declined to say how many people would be laid off, but said the cuts will affect less than 4% of the workforce it had a year ago.
Microsoft said the cuts will affect multiple teams around the world, including its sales division and its Xbox video game business.
“We continue to implement organisational changes necessary to best position the company and teams for success in a dynamic marketplace,” it said in a statement.
Microsoft employed 228,000 full-time workers as of last June, the last time it reported its annual headcount. The company said Wednesday (July 2, 2025) that its latest layoffs would cut close to 4% of that workforce, which would be about 9,000 people. But it has already had at least three layoffs this year.
Until now, at least, the biggest was in May, when Microsoft began laying off about 6,000 workers, nearly 3% of its global workforce and its largest job cuts in more than two years as the company spent heavily on artificial intelligence.
Roman Chiporukha also told The Sun how it’s the human race’s ‘destiny’ to live in space
ONLY the mega-wealthy know that there is a travel agent who brokers trips to space for the rich.
Roman Chiporukha, otherwise known as Mr Intergalactic, founded SpaceVIP to take thrill-seekers out to space – but he believes holidays to the stars will be available for all soon.
Space tourism is an industry that has been blossoming, slowly but steadily, for several years now.
Companies around the world are sharpening their elbows for a slice of the market, whose target audience is the Earth’s ultra-wealthy.
And SpaceVIP founder Chiporukha is ahead of the game, already offering a Michelin-star meal in a pressurised Neptune capsule at the fringes of space.
But this comes at a hefty cost of $495,000 – appealing to the world’s wealthiest.
He told The Sun that, despite the eye-watering price, what started as terrifying, high-risk suborbital flights will soon transform into affordable trips for those wanting to see Earth from a different angle.
Drawing a comparison with commercial flights, Chiporukha said that once space tourism expands, costs will decrease.
But it will also take more people to experience what’s dubbed the “overview effect” to start the shift to space.
He said: “The overview effect has been catalogued by every single astronaut that has been in space, and that is this notion of feeling interconnectedness with the Earth that you see from afar, and all of the beings on it.
“I’ve come to believe that space can be a catalyst for greater human change and a shift in consciousness.
“That’s a real governing factor of why people are drawn to this, and they want to educate humanity on.”
Chiporukha, however, is desperate to ditch the narrative that space travel and tourism is only for the world’s elite.
He said: “I find it, frankly, really frustrating when people are tweeting or talking their slightly irrelevant opinions, or speaking poorly about Bezos or Branson or Elon.
“They don’t realize that they’re using technology that was brought to them by satellites in space.
“So what some of these travelers are trying to do is they want to go up to space, they want to experience the overview effect.
“It’s very convenient for, you know, the media to sensationalize this billionaire joyride narrative ‘a****** rich guy goes to space, and we’re starving here’.
“That narrative is inherently false, because all of their missions have a tremendous purpose in this notion of space for earth.”
Despite this, SpaceVIP has teamed up with a Danish chef to create a six-hour dining experience up in space – and the offering has drawn great interest.
It’s set to depart later this year from the Kennedy Space Center in Florida, with the flight set to last around six hours in a pressurized capsule, lifted by a stratospheric balloon to the lowest barrier of space.
Future passengers will have access to WiFi the entire time and will even be able to livestream the incredible experience for their family and friends to watch from Earth.
But organising such a trip doesn’t come without some difficulties.
As Chiporukha works essentially as a travel agent, but for space specifically, he notes that because space travel is still not fully understood, such trips can be hard to plan.
Chiporukha explained: “We are very transparent about the process, and the training, and all of the things that come in engaging in such an adventure.
“[So] sorry, sir, or, madam, you can’t get insurance, because there’s no actuarial data for space travel just yet, so it’s fly at your own risk.”
Mr Intergalactic also revealed how it is the human race’s “destiny” to eventually live in space.
He said: “I think we’re meant to be a space-faring species.
“I think that’s our inevitable future, and not to leave Earth because we f***** it up so much, but because it’s our destiny for lack of a better word.”
STAR SPACE MISSION
In April, Katy Perry soared 62 miles above Earth with Jeff Bezos’ fiancee Lauren Sanchez, journalist Gayle King, civil rights activist Amanda Nguyen, former NASA rocket scientist Aisha Bowe, and film producer Kerianne Flynn.
The group flew in Bezos’ own Blue Origin New Shepard NS-31 ship and soared past the Kármán line – the internationally recognised boundary of space.
They blasted off from West Texas as part of a history-making, 11-minute flight set to be the first all-female space trip since 1963.
The women on board even passed a “pink moon” during the flight.
The women spent three minutes in pure weightlessness before the craft safely parachuted back down and landed in Texas.
CSIRO’s ASKAP radio telescope is made up of 36 dishes spread out across 6km on Wajarri Country. (Credit: Alex Cherney/CSIRO)
Around midday on June 13 last year, my colleagues and I were scanning the skies when we thought we had discovered a strange and exciting new object in space. Using a huge radio telescope, we spotted a blindingly fast flash of radio waves that appeared to be coming from somewhere inside our galaxy.
After a year of research and analysis, we have finally pinned down the source of the signal – and it was even closer to home than we had ever expected.
A Surprise In The Desert
Our instrument was located at Inyarrimanha Ilgari Bundara – also known as the Murchison Radio-astronomy Observatory – in remote Western Australia, where the sky above the red desert plains is vast and sublime.
We were using a new detector at the radio telescope known as the Australian Square Kilometre Array Pathfinder – or ASKAP – to search for rare flickering signals from distant galaxies called fast radio bursts.
We detected a burst. Surprisingly, it showed no evidence of a time delay between high and low frequencies – a phenomenon known as “dispersion.”
This meant it must have originated within a few hundred light years of Earth. In other words, it must have come from inside our galaxy – unlike other fast radio bursts which have come from billions of light years away.
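The delay astronomers look for follows the standard cold-plasma dispersion relation: lower radio frequencies arrive later, by an amount proportional to the dispersion measure (DM), the column of free electrons along the line of sight. A minimal sketch (the band edges and DM value in the note below are illustrative, not this observation’s exact configuration):

```python
def dispersion_delay_ms(dm, f_lo_ghz, f_hi_ghz):
    """Extra arrival delay (ms) of the band's low-frequency edge relative
    to its high-frequency edge, for a dispersion measure `dm` in pc/cm^3.

    4.149 is the standard dispersion constant in ms GHz^2 (pc/cm^3)^-1.
    """
    return 4.149 * dm * (f_lo_ghz ** -2 - f_hi_ghz ** -2)
```

A distant burst with DM around 500 observed across a 0.7-1.8 GHz band would be smeared out by roughly 3.6 seconds, while a DM near zero produces essentially no delay – which is what marked this burst as coming from extremely nearby.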
A Problem Emerges
Fast radio bursts are the brightest radio flashes in the Universe, emitting 30 years’ worth of the Sun’s energy in less than a millisecond – and we only have hints of how they are produced.
Some theories suggest they are produced by “magnetars” – the highly magnetized cores of massive, dead stars – or arise from cosmic collisions between these dead stellar remnants. Regardless of how they occur, fast radio bursts are also a precise instrument for mapping out the so-called “missing matter” in our Universe.
When we went back over our recordings to take a closer look at the radio burst, we had a surprise: the signal seemed to have disappeared. Two months of trial and error went by, until the problem was found.
ASKAP is composed of 36 antennas, which can be combined to act like one gigantic zoom lens six kilometers across. Just like a zoom lens on a camera, if you try to take a picture of something too close, it comes out blurry. Only by removing some of the antennas from the analysis – artificially reducing the size of our “lens” – did we finally make an image of the burst.
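The camera-lens analogy corresponds to the standard far-field (Rayleigh) criterion, d ≈ 2D²/λ: anything closer than that distance sits in an array's near field and defocuses. The criterion itself is standard, but the ~0.3 m observing wavelength and the 700 m antenna subset below are illustrative assumptions, not figures from the article:

```python
# Far-field (Rayleigh) distance of an interferometer: d_ff ~ 2 * D**2 / wavelength.
# Sources closer than d_ff are in the near field and come out blurry, like a
# too-close subject in front of a camera lens.

def far_field_distance_km(baseline_m: float, wavelength_m: float = 0.3) -> float:
    """Distance (km) beyond which an array of size `baseline_m` focuses sharply."""
    return 2.0 * baseline_m**2 / wavelength_m / 1000.0

full_array = far_field_distance_km(6000.0)  # full 6 km array
shrunk = far_field_distance_km(700.0)       # antenna subset: a "smaller lens"

print(f"6 km array far field starts at   ~{full_array:,.0f} km")
print(f"0.7 km subset far field starts at ~{shrunk:,.0f} km")
# A satellite at ~4,500 km is deep inside the full array's near field (blurry),
# but beyond the shrunk subset's near field, so it can finally be imaged.
```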
We weren’t excited by this – in fact, we were disappointed. No astronomical signal could be close enough to cause this blurring. This meant it was probably just radio-frequency “interference” – an astronomer’s term for human-made signals that corrupt our data.
It’s the kind of junk data we’d normally throw away.
Yet the burst had us intrigued. For one thing, this burst was fast. The fastest known fast radio burst lasted about 10 millionths of a second. This burst consisted of an extremely bright pulse lasting a few billionths of a second, and two dimmer after-pulses, for a total duration of 30 nanoseconds.
So where did this amazingly short, bright burst come from?
A Zombie In Space?
We already knew the direction it came from, and we were able to use the blurriness in the image to estimate a distance of 4,500 km. And there was only one thing in that direction, at that distance, at that time – a derelict 60-year-old satellite called Relay 2.
Relay 2 was one of the first ever telecommunications satellites. Launched by the United States in 1964, it was operated until 1965, and its onboard systems had failed by 1967.
But how could Relay 2 have produced this burst?
Some satellites, presumed dead, have been observed to reawaken. They are known as “zombie satellites.”
But this was no zombie. No system on board Relay 2 had ever been able to produce a nanosecond burst of radio waves, even when it was alive.
We think the most likely cause was an “electrostatic discharge.” As satellites are exposed to electrically charged gases in space known as plasmas, they can become charged – just like when your feet rub on carpet. And that accumulated charge can suddenly discharge, with the resulting spark causing a flash of radio waves.
Electrostatic discharges are common, and are known to cause damage to spacecraft. Yet all known electrostatic discharges last thousands of times longer than our signal, and occur most commonly when the Earth’s magnetosphere is highly active. And our magnetosphere was unusually quiet at the time of the signal.
Another possibility is a strike by a micrometeoroid – a tiny piece of space debris – similar to that experienced by the James Webb Space Telescope in June 2022.
According to our calculations, a 22-microgram micrometeoroid traveling at 20 km per second or more and hitting Relay 2 would have been able to produce such a strong flash of radio waves. But we estimate the chance the nanosecond burst we detected was caused by such an event to be about 1%.
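The energy such an impact delivers is easy to check with E = ½mv², using the article's own figures of 22 micrograms and 20 km per second:

```python
# Kinetic energy of the hypothesised micrometeoroid impact: E = 1/2 * m * v**2.
# A few joules sounds tiny, but it is deposited on a pinpoint in an instant,
# vaporising material and (per the article's calculation) driving a radio flash.

mass_kg = 22e-9        # 22 micrograms
speed_m_s = 20_000.0   # 20 km per second

energy_j = 0.5 * mass_kg * speed_m_s**2
print(f"impact energy ~ {energy_j:.1f} J")  # ~ 4.4 J
```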
Plenty More Sparks In The Sky
Ultimately, we can’t be certain why we saw this signal from Relay 2. What we do know, however, is how to see more of them. When looking at 13.8 millisecond timescales – the equivalent of keeping the camera shutter open for longer – this signal was washed out, and barely detectable even to a powerful radio telescope such as ASKAP.
But if we had searched at 13.8 nanoseconds, any old radio antenna would have easily seen it. It shows us that monitoring satellites for electrostatic discharges with ground-based radio antennas is possible. And with the number of satellites in orbit growing rapidly, finding new ways to monitor them is more important than ever.
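The wash-out effect can be quantified: averaging a 30-nanosecond burst into a 13.8-millisecond window spreads it over a window hundreds of thousands of times longer than the burst itself, diluting the apparent amplitude by that same factor. A rough sketch:

```python
# Why integration time matters: a burst's apparent amplitude is diluted by the
# ratio of the averaging window to the burst duration. At a matched ~13.8 ns
# resolution the burst fills the window and keeps its full strength.

burst_s = 30e-9           # total burst duration quoted in the article
coarse_window_s = 13.8e-3 # millisecond-scale search resolution
fine_window_s = 13.8e-9   # nanosecond-scale search resolution

dilution = coarse_window_s / burst_s
print(f"amplitude diluted ~{dilution:,.0f}x at 13.8 ms resolution")
# several hundred thousand times weaker: "washed out" even for ASKAP
```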
A federal judge ruled on Wednesday for Meta Platforms against a group of authors who had argued that its use of their books without permission to train its artificial intelligence system infringed their copyrights.
U.S. District Judge Vince Chhabria, in San Francisco, said in his decision that the authors had not presented enough evidence that Meta’s AI would dilute the market for their work to show that the company’s conduct was illegal under U.S. copyright law.
Chhabria also said, however, that using copyrighted work without permission to train AI would be unlawful in “many circumstances,” splitting with another federal judge in San Francisco who found on Monday in a separate lawsuit that Anthropic’s AI training made “fair use” of copyrighted materials.
“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” Chhabria said. “It stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one.”
A spokesperson for the authors’ law firm Boies Schiller Flexner said that it disagreed with the judge’s decision to rule for Meta despite the “undisputed record” of the company’s “historically unprecedented pirating of copyrighted works.”
A Meta spokesperson said the company appreciated the decision and called fair use a “vital legal framework” for building “transformative” AI technology.
The authors sued Meta in 2023, arguing the company misused pirated versions of their books to train its AI system Llama without permission or compensation.
The lawsuit is one of several copyright cases brought by writers, news outlets and other copyright owners against companies including OpenAI, Microsoft and Anthropic over their AI training.
The legal doctrine of fair use allows the use of copyrighted works without the copyright owner’s permission in some circumstances. It is a key defense for the tech companies.
Chhabria’s decision is the second in the U.S. to address fair use in the context of generative AI, following U.S. District Judge William Alsup’s ruling in the Anthropic case.
AI companies argue their systems make fair use of copyrighted material by studying it to learn to create new, transformative content, and that being forced to pay copyright holders for their work could hamstring the burgeoning AI industry.
Copyright owners say AI companies unlawfully copy their work to generate competing content that threatens their livelihoods. Chhabria expressed sympathy for that argument during a hearing in May, which he reiterated on Wednesday.
The judge said generative AI had the potential to flood the market with endless images, songs, articles and books using a tiny fraction of the time and creativity that would otherwise be required to create them.
SCIENTISTS have raised “urgent concerns” over new viruses discovered in bats which have the potential to spill over into humans and could be “highly fatal”.
Testing bats in China, experts found 22 viruses – 20 of which have never been seen before.
Two of the new viruses were close relatives of Nipah and Hendra viruses. (Credit: Getty)
Two of these new bugs were of particular concern, as they were closely related to the deadly Nipah and Hendra viruses.
Both viruses can cause brain inflammation and dangerous respiratory disease in humans.
Nipah is a bat-borne virus that’s been flagged as a “priority pathogen” by the World Health Organization (WHO) because of its potential to trigger an epidemic.
It can kill up to 70 per cent of its victims, with outbreaks reported in Bangladesh, India, Malaysia, the Philippines, and Singapore.
Meanwhile, Hendra is a rare virus that can spread to humans from horses that have been infected by disease-carrying bats.
Only seven human cases have been reported, all in Australia.
Scientists at the Yunnan Institute of Endemic Disease Control and Prevention detected two worrying viruses – described as the “evolutionary cousins” of Nipah and Hendra – while testing the kidneys of bats in the Yunnan province of China.
The bats lived in orchards close to villages, sparking concerns that fruit eaten by inhabitants and livestock may get contaminated and risk wider transmission.
“Bats have been implicated in a number of major emerging disease outbreaks, including Hendra, Nipah, Marburg and Ebola virus disease, severe acute respiratory syndrome (SARS), Middle East respiratory syndrome (MERS) and Covid-19,” researchers wrote in the journal PLOS Pathogens.
“Bat-borne viruses are transmitted to humans either through direct contact with bats or via the ingestion of food or water contaminated with bat saliva, faeces, or urine.”
The study team – led by Dr Yun Feng – pointed out that previous research looking at the disease-spreading potential of bats has only focused on their faeces.
But they said bugs living in bats’ kidneys also “present potential transmission risks” as they may be excreted through urine.
“The kidney can harbour important zoonotic pathogens, including the highly pathogenic Hendra and Nipah viruses,” scientists said.
They looked inside the kidneys of 142 bats from ten species, which were collected over four years in five areas of the Yunnan province.
Using advanced genetic sequencing, the team found 22 viruses, 20 of them never seen before.
Two of the most concerning were new henipaviruses, which are in the same group as Nipah and Hendra bugs.
The henipaviruses were found in fruit bats living near orchards close to villages.
Scientists said their study “rais[ed] urgent concerns about the potential for these viruses to spill over into humans or livestock.”
Dr Alison Peel, a veterinarian and wildlife disease ecologist from the Sydney School of Veterinary Science at The University of Sydney, said: “The main significance of this work lies in the discovery of viruses in bats in China that are ‘evolutionary cousins’ to two of the most concerning pathogens in humans – Hendra virus and Nipah virus – which circulate in bats and are highly fatal if they spill over into people.”
But she said the viruses require further study before we can definitively state that they can pass on from bats to people.
“While one of the new viruses in this study appears to be the closest known relative to these highly fatal viruses, there are some genetic differences in the regions of the virus responsible for binding to and entering cells, so we can’t automatically assume that it can cross over to new species.
“We have other examples of close evolutionary cousins to Hendra and Nipah that appear not to be of any concern for spillover, so there will need to be some more laboratory studies on these new viruses to determine the actual risk.”
Dr Peel went on: “Importantly, the bats infected with the Hendra-like virus were captured in fruit orchards, highlighting potential opportunities for contact with humans and domestic species.”
NASA’s Curiosity rover snapped pictures of a long-sought geological structure — dubbed “spiderwebs” — on the Red Planet that indicate a history of flowing water, the space agency announced.
The boxwork ridge structure, which spans up to 12 miles across and had previously only been observed from orbit, indicates to experts that groundwater once spread across this section of the Red Planet.
“The images and data being collected are already raising new questions about how the Martian surface was changing billions of years ago,” NASA said in a statement Monday.
An image of Mars points out the up-close, mineral-rich surface ridges, which experts believe were made by flowing water. (Credit: NASA)
“The Red Planet once had rivers, lakes, and possibly an ocean. Although scientists aren’t sure why, its water eventually dried up and the planet transformed into the chilly desert it is today,” NASA said.
Flowing groundwater created the crisscrossing ridges — some just a few inches tall — by leaving behind a trail of minerals that accumulated in cracks and fissures and then hardened as it dried.
“Remarkably, the boxwork patterns show that even in the midst of this drying, water was still present underground, creating changes seen today,” NASA said in its release.
“Eons of sandblasting by Martian wind wore away the rock but not the minerals, revealing networks of resistant ridges within,” it added.
The formation occurs via a similar mechanism to stalagmites and stalactites here on Earth, experts said.
The “spiderwebs” got their name when researchers observed the arachnid-esque pattern of ridges from orbit.
The pattern stretches across miles of a layer of Mount Sharp, a three-mile-tall mountain, which is also being studied by researchers on the Curiosity rover team at NASA.
What if everything you thought you knew about time was completely wrong? A physicist is now proposing that time itself isn’t the simple, one-way flow we experience, but actually has three separate dimensions. This wild idea might even finally solve some of the biggest mysteries in science.
Gunther Kletetschka from the University of Alaska Fairbanks has developed a mathematical model suggesting that our familiar sense of time ticking forward is like seeing only the tip of an iceberg. Beneath the surface, he argues, time has a hidden three-dimensional structure that could explain everything from why certain particles exist to how the entire universe works.
Right now, physics has a major problem. Scientists have two incredibly successful theories that describe how the universe works, but they contradict each other. Einstein’s relativity explains big things like planets and black holes perfectly. Quantum mechanics explains tiny particles flawlessly. But when scientists try to combine them (like when studying what happens inside a black hole) the math breaks down completely. It’s like having two different instruction manuals for the same machine, and they give you opposite directions.
Kletetschka’s three-dimensional time theory, published in World Scientific Connect, could be the missing piece that makes both instruction manuals work together.
How Three-Dimensional Time Actually Works
Think of time like a braided rope. From far away, it looks like a single strand moving in one direction. But up close, you can see it’s actually made of three separate cords twisted together. That’s essentially what Kletetschka is proposing about time itself.
In his model, time has three different “directions” that operate at completely different scales.
We only experience one dimension of time because the other two only matter at extremes we never encounter in daily life. It’s like living in a house and only noticing the ground floor, while the basement and attic exist but don’t affect your daily routine.
The Theory Makes Stunning Predictions, And They’re Right
Unlike many physics theories that are too abstract to test, Kletetschka’s model makes specific predictions about the real world. When scientists check those predictions against actual measurements, they match almost perfectly.
Take subatomic particles, the building blocks of everything in the universe. These particles come in three distinct “families” or generations, kind of like three different sizes of the same basic tool. Scientists have known about this pattern for decades, but nobody could explain why there are exactly three families, or why their weights follow such specific patterns.
Kletetschka’s theory says this happens because of the three time dimensions. It predicts that particles in these three families should have weight ratios of roughly 1 to 4.5 to 21. Here’s another way to make sense of it: if the lightest particle in a family weighs as much as a paperclip, the middle one should weigh like a smartphone, and the heaviest should weigh like a large textbook.
This pattern shows up consistently across different types of particles, and the theory says it’s not a coincidence. It’s a direct result of how three-dimensional time is structured.
The theory gets the exact measurements right with incredible precision. Scientists measure particle weights using special units called GeV and MeV (think of them like very precise scales for weighing things smaller than atoms). The theory predicted the top quark (the heaviest fundamental particle we know) should weigh 173.21 units. The actual measured weight? 173.2 units.
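The precision claim for the top quark can be checked arithmetically from the two figures the article quotes (173.21 GeV predicted, 173.2 GeV measured); the base mass used to illustrate the claimed 1 : 4.5 : 21 generation ratios is arbitrary:

```python
# Sanity-checking the article's precision claim for the top-quark mass.
predicted_gev = 173.21  # theory's prediction, per the article
measured_gev = 173.2    # measured value, per the article

rel_error = abs(predicted_gev - measured_gev) / measured_gev
print(f"relative error ~ {rel_error:.2e}")  # under 0.01%

# The article's claimed generation mass ratios, applied to an arbitrary base
# mass purely to illustrate the 1 : 4.5 : 21 pattern:
base = 1.0
print([base * ratio for ratio in (1.0, 4.5, 21.0)])
```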
Even more impressive, it predicted the weight of the muon (a heavier, short-lived cousin of the electron) correctly to seven decimal places. In the world of physics, that kind of accuracy really is like hitting a bullseye from miles away.
Why Some Forces Act Weird—And How Time Explains It
The theory also explains one of nature’s strangest behaviors. There’s a force called the weak nuclear force that governs radioactive decay, which is the process that makes some atoms unstable and break apart over time. This force has a bizarre quirk: it only interacts with particles that “spin” in one direction, like a cosmic preference for left-handed screws over right-handed ones.
Scientists call this “parity violation,” and it’s like discovering that all the locks in the universe only turn clockwise, never counterclockwise. Nobody really understood why nature has this preference.
Kletetschka’s model suggests the answer lies in the geometry of three-dimensional time itself. Just like a spiral staircase naturally curves in one direction, the structure of time creates this built-in asymmetry. It’s not an arbitrary rule, but rather a fundamental feature of how time is shaped.
The theory also makes predictions about gravitational waves. These are ripples in space and time caused by massive cosmic events, like when two black holes crash into each other. These waves were only detected for the first time in 2015, confirming one of Einstein’s predictions about gravity.
According to three-dimensional time theory, these waves should travel at slightly different speeds than light, being off by only 1.5 parts in a quadrillion. To put that in perspective, that’s like measuring the distance from New York to Los Angeles and being off by less than the width of a human hair. It’s an incredibly tiny difference, but our most sensitive detectors might be able to measure it.
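The hair-width analogy can be checked against the article's 1.5-parts-in-a-quadrillion figure; the New York to Los Angeles distance and the hair width below are approximate assumptions, not numbers from the article:

```python
# Applying a fractional speed difference of 1.5e-15 to the New York -
# Los Angeles distance (~3,940 km, approximate) and comparing the offset
# with the width of a human hair (~70 micrometres, approximate).

fraction = 1.5e-15
ny_la_m = 3_940_000.0  # ~3,940 km
hair_m = 70e-6         # ~70 micrometres

offset_m = fraction * ny_la_m
print(f"offset ~ {offset_m:.1e} m")  # a few nanometres
print(offset_m < hair_m)             # far below a hair's width, as the article says
```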
Testing the Theory: What Scientists Will Look For
The beauty of this theory is that it doesn’t just make vague philosophical claims. It actually tells scientists exactly what to look for in their experiments.
What This Means for Our Understanding of Reality
If this theory proves correct, it would fundamentally change how we think about existence itself. Instead of matter existing within time, the theory suggests that matter is actually made from time.
As Kletetschka puts it in his paper, “what we perceive as mass and energy are manifestations of temporal curvature and dynamics.” In simpler terms, the particles that make up your body, the energy that powers your brain, and even the gravity holding you to Earth might all be different expressions of how time bends and flows in three dimensions.
This is a radically different way of thinking about reality. It’s the kind of paradigm shift that would make every physics textbook obsolete overnight, similar to how Einstein’s relativity overturned Newton’s clockwork universe, or how the discovery that Earth orbits the sun revolutionized astronomy.
The Road Ahead: Proof or Disproof
Of course, extraordinary claims require extraordinary evidence. The physics community will rightly demand rigorous proof before accepting such a radical reimagining of time itself.
But unlike many “theories of everything” that make untestable predictions, this one gives scientists a clear roadmap for verification. Over the next decade, experiments will be able to definitively prove whether three-dimensional time is real or just an elegant mathematical fiction.
The Large Hadron Collider’s upcoming high-luminosity upgrade will probe energy ranges where the predicted new particles should appear. Advanced gravitational wave detectors will become sensitive enough to measure the tiny speed variations the theory predicts. Space telescopes will map dark energy’s behavior across cosmic history with unprecedented precision.
Perhaps most importantly, the theory makes specific numerical predictions that leave little room for ambiguity. Either the neutrinos have exactly the masses it predicts, or they don’t. Either the new particles appear at the predicted energies, or they don’t. Either gravitational waves show the predicted speed differences, or they don’t.
In science, theories live or die by their predictions. And this theory has given scientists plenty of targets to aim for.
You’ve built your career, found the right partner, and now you’re ready to start a family at 45. Nearly 5% of Swedish babies are now born to mothers over 40, but a study tracking over 312,000 births has uncovered troubling health risks that increase significantly with each passing year.
Swedish researchers discovered that women who give birth at 45 and older face nearly double the risk of stillbirth compared to those in their late thirties. Their babies are also 82% more likely to develop dangerously low blood sugar levels and 68% more likely to be born prematurely, complications that can affect both mother and child for years.
While severe outcomes remain relatively rare, the research, published in Acta Paediatrica, shows that each additional year of maternal age incrementally raises the odds of problems with lasting consequences.
Examining Over 300,000 Births
Researchers at Uppsala University examined Sweden’s comprehensive Medical Birth Register, analyzing every singleton birth to mothers aged 35 and older between 2010 and 2022. They divided women into three groups: ages 35–39 (the comparison group), 40–44 (advanced maternal age), and 45 and older (very advanced maternal age).
Sweden’s healthcare system meticulously tracks pregnancy outcomes, giving researchers access to detailed information on more than 312,000 births, representing nearly a quarter of all singleton births in the country during that period. Of these births, 81% were to women aged 35–39, 18% to women aged 40–44, and just over 1% to women 45 and older. Remarkably, 6% of women in the oldest group were actually 50 or older when they gave birth.
Health Risks Climb Sharply After 45
Women in the oldest age group faced significantly higher rates of pregnancy complications. Beyond the increased stillbirth risk, babies born to mothers 45 and older were 46% more likely to be small for gestational age, a condition that can cause immediate problems like difficulty maintaining body temperature and blood sugar, plus long-term developmental issues.
Low Apgar scores, which measure a newborn’s condition immediately after birth, were more common among babies born to older mothers in the 40–44 age group. However, the most concerning finding was the dramatic spike in hypoglycaemia, or dangerously low blood sugar, among newborns of the oldest mothers. Without quick treatment, this condition can cause seizures and brain damage.
Mothers themselves showed distinct patterns as well. Women in the oldest age groups were more likely to be shorter, have higher body weights, and suffer from diabetes and high blood pressure. They were also far more likely to have used fertility treatments—18% of women 45 and older compared to just 6.7% of women in their late thirties.
Nearly half of the oldest mothers (46%) delivered via cesarean section, compared to less than 23% of women aged 35–39, reflecting both increased complications and more cautious medical approaches.
Implications for Family Planning
The research doesn’t suggest women should panic about delaying childbirth, but it does provide important context for family planning decisions. Understanding these risks allows women and their healthcare providers to make informed choices and potentially take steps to minimize complications.
For women who do conceive later in life, the study underscores the importance of excellent prenatal care and close monitoring throughout pregnancy. Many complications can be managed effectively when caught early.