Elon Musk Claims Tesla Will Be Among First to Achieve AGI, Possibly Through Humanoid Robots

Tesla CEO Elon Musk announced that the electric vehicle maker is positioned to develop artificial general intelligence, with its Optimus humanoid robot and ‘atom-shaping’ manufacturing systems seen as key pathways.

Elon Musk | X/@OwenGregorian

Elon Musk declared on Wednesday that Tesla is on track to become one of a small number of companies capable of building artificial general intelligence (AGI), and suggested it could be the first to reach that milestone, not through software alone, but through the physical form of humanoid robots.

In a post shared on X, the social media platform he owns, Musk singled out Tesla’s work on its Optimus humanoid robot and advanced manufacturing capabilities as the company’s most promising routes to AGI. The Optimus robot, currently in development, is designed to perform repetitive and physically demanding tasks, drawing on the same AI infrastructure Tesla has built for its autonomous driving program.

Among the more striking elements of Musk’s remarks was his reference to “atom-shaping” as a pathway to AGI, a phrase that points to the precise, fine-grained manipulation of physical matter at microscopic scales. While Musk did not elaborate at length, the term signals an ambition that goes well beyond conventional software-driven AI, encompassing robotic systems capable of engaging with the physical world with extraordinary precision.

If realised, such capabilities could fundamentally transform industrial manufacturing, supply chains, and the very definition of what machines can do autonomously.

Tesla vs. the AI Pack

Musk’s assertion places Tesla alongside a handful of technology companies widely considered to be in the race for AGI, including OpenAI, Google DeepMind, and Anthropic. Tesla has traditionally been classified as an automotive and clean energy company, but Musk has long insisted it should be understood primarily as an AI and robotics firm.

Central to that argument is data: Tesla’s fleet of millions of vehicles worldwide continuously generates real-world driving data that the company uses to train its AI systems. Musk contends this gives Tesla an edge that pure-software AI labs cannot easily replicate, because the data is grounded in the physical, three-dimensional world rather than text and images alone.

What is AGI?

AGI – artificial general intelligence – refers to a system capable of performing any intellectual task that a human can, and potentially doing so far more quickly and cheaply. Unlike today’s AI tools, which excel in specific domains, AGI would be broadly capable, able to reason, plan, and act across an essentially unlimited range of contexts.

Source : https://www.freepressjournal.in/tech/elon-musk-claims-tesla-will-be-among-first-to-achieve-agi-possibly-through-humanoid-robots

 

Instagram To Alert Parents If Teens Repeatedly Search For Suicide Or Self-Harm Terms

Instagram is introducing parental alerts that notify supervising parents if their teens repeatedly search for suicide or self-harm-related terms.

Instagram will alert parents if teens repeatedly search for suicide or self-harm-related terms.

Instagram will soon begin notifying parents if their teenager repeatedly searches for content related to suicide or self-harm within a short period of time, as part of its expanded parental supervision tools.

The social media platform, owned by Meta Platforms, said the feature is designed to give parents greater awareness while ensuring that vulnerable young people are directed to appropriate support services.

How the New Alerts Will Work

Under the new system, parents and teenagers enrolled in Instagram’s supervision programme will receive advance notice that alerts are being introduced.

If a teen repeatedly attempts to search for phrases promoting suicide or self-harm, or uses terms such as “suicide” or “self-harm”, parents will be informed.

The alerts will be sent via email, text message or WhatsApp, depending on the contact details provided, as well as through an in-app notification. The message will explain that the teen has repeatedly tried to search for sensitive terms within a short period. Parents will also be given access to expert guidance to help them approach potentially difficult conversations.

Building on Existing Teen Protections

Instagram said that most teenagers do not search for such content. The platform already blocks searches clearly linked to suicide or self-harm and instead directs users to helplines and mental health resources.

Content that promotes or glorifies suicide or self-harm is prohibited. While posts discussing personal struggles may be allowed, they are hidden from teen accounts.

The company added that it will continue to alert emergency services if it becomes aware of someone at imminent risk of physical harm.

Source : https://www.ndtv.com/feature/instagram-to-alert-parents-if-teens-repeatedly-search-for-suicide-or-self-harm-terms-11150611?pfrom=home-ndtv_lateststories

‘All AI Tools Are Blocked’: Techie Shares Ordeal Of Facing Data Security Rules In Corporate Life

Does the corporate reality differ from the enthusiasm outside about the AI revolution?

Techie’s interesting story about facing AI tools blockage. (representative image)

Even as artificial intelligence tools have transformed the tech world, corporate companies in India haven’t quite embraced the new age and are imposing data security restrictions on their techies. In a post shared online, a man claimed that his software development engineer friend, who works for a “big MNC”, has been barred from using new AI models as a coding companion.

“Today, I was talking to a friend who works as an SDE at a big MNC. I asked him about all the new AI models launching every other week. He looked genuinely surprised. He didn’t even know most of them,” shared the individual about their friend, roped in for a coding job at a multi-national company.

‘Don’t You Use AI To Code?’

The individual said his techie friend, whose corporate life is detached from the AI revolution, admitted he wasn’t aware of the coding tools under discussion because he is “not allowed to” use them at work. The developer confirmed that his organisation blocks all AI tools while he works on client projects.

“I asked him, Don’t you use AI to code? He said he is not allowed to. All AI tools are blocked while working on client projects,” the post claimed. When asked to further explain the scenario at hand, the techie is understood to have cited the data security restrictions in play.

“Reason? Clients don’t want their proprietary code potentially being used as training data,” the post added.

Techie’s Situation Strikes A Chord Online

As the post about a developer friend barred from using AI tools to ease his daily work gained traction online, people sympathised with him and shared their own stories of corporate restrictions on AI usage.

“Even though a new AI model appears almost every day, it takes companies a lot of time to adopt them. In my office work, I have used only ChatGPT or Copilot,” shared a person.

Someone else commented, “Corporate constraints slow adoption. Policy often lags behind capability. Builders outside big systems move faster.”

Source : https://www.news18.com/viral/all-ai-tools-are-blocked-techie-shares-ordeal-of-facing-data-security-rules-in-corporate-life-aa-ws-l-9934128.html

 

WhatsApp Brings Ads To Status, Claims Chats Will Remain End-to-end Encrypted

WhatsApp has announced the global rollout of status ads and promoted channels, aimed at assisting businesses in reaching new customers. While personal chats remain end-to-end encrypted, businesses can now advertise within Status and Channels, enhancing content discovery.

With promoted Channels, businesses can pay to appear higher in the directory so more users can discover them. (AP Photo/Patrick Sison, File)

Meta-owned WhatsApp has confirmed the global launch of status ads and promoted channels. According to the Mark Zuckerberg-led instant messaging giant, WhatsApp Status will now show ads to help businesses reach new customers and start conversations directly on the app.

The instant messaging app is also promoting Channels by boosting selected channels in the directory. WhatsApp claims that personal chats, calls and Status updates will remain end-to-end encrypted and will not be used for targeted advertising.

“By showing ads in Status, you can help your business get discovered by new customers and make it easy for them to start a conversation with you, all within WhatsApp. Promoted channel ads help your business get discovered by boosting your channel in the directory and making it easy for people to find content that’s right for them,” the messaging app wrote in a blog post.

These updates can include text, photos, videos, stickers and polls. With promoted Channels, businesses can pay to appear higher in the directory so more users can discover them. WhatsApp says ads will only appear in Status and Channels, where people are more open to discovering new content.

In short, while your personal messages will stay private, WhatsApp is opening up new spaces inside the app for businesses to advertise and grow.

The feature also includes controls that allow users to block or hide ads from specific businesses if they find the repeated ads annoying. This gives users greater control over what they see in the Updates tab.

In some regions, the Meta-owned app may also offer users the option to pay for an ad-free experience. This subscription will remove ads from the Updates tab, which includes Status and Channels. However, the price and availability of this option may differ depending on the country and the platform being used.

In related news, the Indian government has introduced new rules for messaging platforms such as WhatsApp. Citing national security concerns, the Meta-owned platform will be required to automatically log out every Indian user from WhatsApp Web every six hours.

Source : https://www.timesnownews.com/technology-science/whatsapp-brings-ads-to-status-claims-chats-will-remain-end-to-end-encrypted-article-153712098

Scientists Find Sneaky Heart Attack Risk Factor in Women


Heart disease is the leading cause of death in America, making prevention crucial. But new research suggests there’s a heart attack risk factor specific to women. A study found that women may experience heart attacks and major adverse cardiovascular events (MACE) at lower levels of plaque buildup in their arteries than men.

The study, which was published in the journal Circulation: Cardiovascular Imaging, is raising a lot of questions about heart disease prevention in women and whether more intensive interventions are needed.

For the study, researchers analyzed data from nearly 4,300 people with no known prior coronary artery disease who sought help for chest pain. The researchers analyzed coronary computed tomography angiography (CCTA) images that measured total plaque volume and total plaque burden (TPB), or the amount of plaque relative to the size of the blood vessel. (Plaque is a waxy buildup of cholesterol, fat, calcium, and more that accumulates inside the arteries. Plaque buildup restricts blood flow and raises the risk of heart attack and stroke.)

Meet the experts: Kevin Shah, M.D., cardiologist and program director of heart failure outreach at MemorialCare Heart & Vascular Institute at Long Beach Medical Center in Long Beach, CA; Navjot Sobti, M.D., interventional cardiologist and women’s heart health specialist at Northwell’s Northern Westchester Hospital and Katz Institute for Women’s Health

While the researchers found that women had lower amounts of plaque than men, and less plaque with characteristics considered high-risk, they still had similar rates of major adverse cardiovascular events over 26 months. To put it more plainly, heart risk in women rose when plaque burden reached 20%, whereas in men it rose when it reached 28%.

In women, the risk of major adverse cardiovascular events increased more steeply at lower levels of plaque, while in men the risk increased more gradually and required higher levels of plaque.

The researchers wrote in the study’s conclusion that the findings suggest there should be “sex-specific interpretation” of plaque measurements for “timely intervention” in women.

So, why does this happen, and what does it suggest for heart disease prevention in women? Here’s what cardiologists want you to know.

Why might women have a higher risk of cardiovascular events with lower levels of plaque?

The exact reason for this isn’t clear. While the study found that women were more likely to have the same risk of major cardiovascular events at lower levels of plaque buildup, it didn’t explore why.

“The theory is that females, on average, are smaller than males, and their heart sizes are smaller,” says Kevin Shah, M.D., cardiologist and program director of heart failure outreach at MemorialCare Heart & Vascular Institute at Long Beach Medical Center in Long Beach, CA. “But the actual metric with plaque volume tries to adjust for the size of the blood vessels.” Because of that, Dr. Shah says it’s “hard to chalk it up to the size of the person or the size of their heart.”

But heart disease tends to show up differently in women, points out Navjot Sobti, M.D., interventional cardiologist and women’s heart health specialist at Northwell’s Northern Westchester Hospital and Katz Institute for Women’s Health. “Additionally, women have been historically underrepresented in cardiovascular research,” Dr. Sobti says. “Many risk thresholds and imaging cutoffs were developed using male populations and focus on finding large artery blockages a.k.a. ‘obstructive disease,’ but women are more likely to have non-obstructive disease and types of heart attacks that don’t show large blockages on heart imaging.”

In many cases, women develop heart attacks or heart disease from issues like coronary artery spasm, a spontaneous coronary artery tear or dissection (SCAD), or problems in the heart’s small blood vessels—conditions that don’t show up as major blockages and are often missed by traditional heart disease risk models, Dr. Sobti explains.

“As a result, women can have serious heart events at lower levels of visible plaque, highlighting the limitations of a one-size-fits-all approach and the need for sex-specific risk assessment and prevention strategies,” she says. “Sex-specific risk assessment matters.”

How can I lower my risk of a heart attack?

Everyone should follow the American Heart Association (AHA)’s Life’s Essential 8, which are steps designed to improve and maintain cardiovascular health, Dr. Shah says. “But there should be a greater emphasis on these if a female patient has some plaque volume detected,” he says. Even mild coronary plaque in women may increase the need for earlier and more aggressive prevention methods, like statins, blood pressure control, and lifestyle interventions, along with proactive screening like coronary calcium scoring, Dr. Sobti says.

While a conversation with your cardiologist is essential, here’s what the AHA suggests people do to lower their risk of heart disease:

  • Eat a diet that consists of whole foods, including fruits and vegetables, lean protein, nuts, and seeds.
  • Aim to get 2.5 hours of moderate-intensity exercise, or 75 minutes of vigorous physical activity a week.
  • Avoid tobacco.
  • Try to get seven or more hours of sleep a night.
  • Do your best to maintain a healthy weight.
  • Try to manage your cholesterol by limiting sugary foods and drinks, red and processed meats, salty foods, refined carbohydrates, and highly processed foods.
  • Do your best to manage your blood sugar.
  • Stay on top of your blood pressure.

Source : https://www.prevention.com/health/health-conditions/a70479751/heart-attack-risk-women-plaque-study/

IT Minister Ashwini Vaishnaw Warns Social Media Platforms: ‘Share Revenue With Content Creators’

He warned that failure to adhere to these principles would lead to accountability for these platforms. Vaishnaw also expressed concerns about the dangers posed by deepfakes and disinformation campaigns.

Union Minister also raised concerns over the rising threat of deepfakes and organised disinformation campaigns.

Speaking at the Digital News Publishers Association (DNPA) Conclave 2026, Union minister Ashwini Vaishnaw said social media platforms must ensure fair revenue sharing with those who create content, including journalists, traditional media, influencers and researchers.

“Social media platforms must also share revenue in a fair way with the people who are creating the content, whether it is news persons, the conventional media, the creators sitting in far-flung areas, influencers, the professors and researchers who are disseminating their work using the platforms,” Vaishnaw said at DNPA Conclave.

Vaishnaw also said there will be consequences if platforms do not follow these principles. “Non-adherence to these principles will definitely make them responsible because the nature of Internet has changed now,” he added.

The IT Minister said the internet has changed a lot over the years. “The nature of the internet which emerged decades ago was very different from the nature of the internet which exists today. It is no longer the case that the internet is open-sourced, used basically to exchange information between different parts of the world. The times are gone when a platform could say they are not responsible for the country. Those times are gone because the platforms themselves have changed from being pure platforms to becoming hosts to the world,” he explained.

Union Minister also raised concerns over the rising threat of deepfakes and organised disinformation campaigns.

“The way the world is emerging today, the core tenet of trust is under threat. The threat is coming from so many different angles – deepfakes – which can make you believe things which have never happened,” he said.

Source : https://www.timesnownews.com/technology-science/it-minister-ashwini-vaishnaw-warns-social-media-platforms-share-revenue-with-content-creators-article-153699575

 

OpenAI expects compute spend of around $600 billion through 2030, source says

FILE PHOTO: OpenAI logo is seen in this illustration taken May 20, 2024. REUTERS/Dado Ruvic/Illustration/File Photo

OpenAI is targeting roughly $600 billion in total compute spend through 2030, a source familiar with the matter told Reuters on Friday, as the ChatGPT maker lays groundwork for an IPO that could value it at up to $1 trillion.

OpenAI’s 2025 revenue totaled $13 billion, beating its $10 billion projection, while it spent $8 billion during the year, under its $9 billion target, the person said.

The development comes as Nvidia closes in on finalizing a $30 billion investment in OpenAI, as part of a fundraising round in which the AI startup is seeking more than $100 billion.

That would value the Sam Altman-led company at about $830 billion and amount to one of the largest private capital raises on record.

Microsoft-backed OpenAI expects more than $280 billion in total revenue by 2030, divided nearly equally across its consumer and enterprise units, according to CNBC, which had reported the development earlier.

Altman had said last year that OpenAI is committed to spending $1.4 trillion to develop 30 gigawatts of computing resources — enough to power roughly 25 million U.S. homes.

Source : https://www.channelnewsasia.com/business/openai-expects-compute-spend-around-600-billion-through-2030-source-says-5943906

Google says malicious apps on Play Store decline, thanks to AI

Google Play Store

Google claims fewer bad actors are targeting the Play Store, pointing to a drop in policy-violating apps and banned developer accounts in 2025. The company credits tougher verification rules and expanded AI-driven review systems, even as threats increasingly shift outside the official app marketplace.

Policy-violating apps fall year over year

In its latest Android app ecosystem safety report, Google said it prevented 1.75 million policy-violating apps from being published on Google Play in 2025. That is down from 2.36 million in 2024 and 2.28 million in 2023.

The company also banned more than 80,000 developer accounts last year for attempting to publish malicious or non-compliant apps. That figure represents a steady decline from 158,000 in 2024 and 333,000 in 2023.

Google attributes the downward trend to stricter onboarding measures, including developer verification, mandatory pre-review checks and enhanced testing requirements. According to the company, it now runs more than 10,000 safety checks on every app before publication and continues to monitor apps after they go live.

The report highlights expanded use of AI within the review pipeline. Google says its latest generative AI models help human reviewers detect more complex malicious patterns faster, while multi-layered protections serve as a deterrent to bad actors attempting to enter the ecosystem.

Threats increasingly shift beyond Play Store

While fewer malicious apps appear to be slipping into the Play Store, activity outside the platform is rising. Google’s built-in defence system, Google Play Protect, identified more than 27 million new malicious apps in 2025 and warned users or blocked them from running. That is up from 13 million non-Play Store apps detected in 2024 and 5 million in 2023.

The figures suggest that some attackers may be bypassing the official store altogether in favour of sideloaded apps and third-party distribution channels.

Elsewhere in the report, Google said it prevented more than 255,000 apps from gaining excessive access to sensitive user data, down sharply from 1.3 million in 2024. It also blocked 160 million spam ratings and reviews and mitigated review-bombing attempts, preventing an average 0.5-star rating drop for affected apps.

Source : https://www.moneycontrol.com/technology/google-says-malicious-apps-on-play-store-decline-thanks-to-ai-article-13836667.html

‘The Energy To Build Together Here Is…’: Anthropic CEO Amodei At India AI Summit

Anthropic CEO Amodei is speaking at the India AI Impact Summit in Delhi where the big tech giants have announced their plans.

Amodei is part of the special keynote lineup at the Summit.

“The energy to build together here is palpable,” Anthropic CEO Dario Amodei said on the sidelines of the India AI Impact Summit 2026 on Thursday, referring to the tech companies at the big event, including Google CEO Sundar Pichai and other tech giants looking to build for the future of AI in the country. “I’ve been spending the last few days meeting with Indian builders and enterprises, and the energy to build together here is palpable, unlike anywhere else,” he added.

Anthropic is one of many companies looking to build in India, and the AI company, known for its Claude AI model and agent, has been scouting for space in the country, recently announcing its first base in Bengaluru.

“The advances in AI technology have been absolutely staggering, and along with those, the commercial applications have only grown more urgent,” Amodei pointed out during his keynote in Delhi.

Amodei also talked about advancements in AI agents and how close the technology is getting to matching human capability. “We are now well advanced on the AI curve, and it is only a small number of years before AI models surpass the cognitive capabilities of most humans for most things. We are increasingly close to having AI agents that are more capable than most humans,” he said.


He also highlighted both the pros and cons of AI evolving at such a pace. “That level of capability (from AI) is something the world has never seen before, and brings a very wide range of opportunities and concerns for humanity.”

Source : https://www.news18.com/tech/the-energy-to-build-together-here-is-anthropic-ceo-amaodei-at-india-ai-summit-9916754.html

Scientists Discover New Type Of Constipation Caused By Gut Bacteria

For the millions of people whose constipation persists despite every laxative, fiber supplement, and dietary overhaul they throw at it, a new study may finally offer an explanation. Researchers have proposed what they call a new form of constipation, “bacterial constipation,” driven by a specific partnership between two common gut microbes, and they believe a simple fecal test could one day identify who has it.

The research, published in the journal Gut Microbes, centers on a finding that sounds almost unfair: two bacteria that are harmless on their own can team up to thin the protective mucus layer that keeps stool soft and moving. When both are elevated at the same time, the gut’s natural lubricant gets picked apart, leaving behind dry, hard, infrequent stools with no obvious cause.

Most treatments for chronic constipation target gut nerves or bowel contractions, the physical mechanics of moving waste through the intestine. Bacterial constipation, as the researchers describe it, operates through an entirely different mechanism, which would explain why so many patients get so little relief from standard care.

Two Gut Bacteria Are Teaming Up to Cause Constipation

The two bacteria at the center of the study, Akkermansia muciniphila and Bacteroides thetaiotaomicron, are everyday inhabitants of the human colon. Neither is a pathogen. Neither causes illness on its own. Together, though, they can quietly dismantle a critical layer of protection.

That layer is mucin, a thick, slippery gel that coats the intestinal wall. It retains water, lubricates stool, and keeps the gut lining from coming into direct contact with the bacteria living inside it. Colonic mucin carries chemical tags called terminal sulfates, and most bacteria cannot get past them. A. muciniphila feeds on mucin but lacks the enzyme needed to remove those tags, so the colonic variety is essentially off-limits without outside help.

B. thetaiotaomicron provides that help. It produces sulfatases, enzymes that strip the sulfate tags away, opening the mucin up for A. muciniphila to break down. Over time, the mucus layer thins, moisture drains from stool, and bowel movements slow. The researchers described this as “cooperative degradation of colonic mucins by sulfatases and glycosylases by two commensal bacteria,” one that “reduces lubrication and induces fecal dehydration, leading to the development of constipation.”

Constipation Patients Showed Elevated Levels of Both Bacteria

To see whether this bacterial pattern appeared in real patients, the research team analyzed fecal samples from 231 people with Parkinson’s disease, 54 patients with chronic idiopathic constipation (a diagnosis given when no secondary cause can be found), and 147 healthy controls. Both bacteria were elevated in the constipated groups. Fecal mucin levels were also lower across all three groups in patients who reported fewer than three bowel movements per week.

Parkinson’s disease entered the picture because patients with that condition often develop severe, treatment-resistant constipation up to 10 to 20 years before motor symptoms appear. Standard dopamine-based treatments do almost nothing for it. The fact that the same bacterial signature appeared in both Parkinson’s patients and those with no neurological diagnosis suggests the mechanism operates independently of the underlying disease, which is exactly what a new disease category would require.

Statistical analysis backed that up: mucin depletion tracked more closely with constipation itself than with any specific diagnosis, which strengthens the case that bacterial constipation is its own entity.

A Single Deleted Gene Reversed Constipation in Mice

To confirm the mechanism, the team used germ-free mice, animals raised without any gut bacteria at all. Mice given only one of the two bacteria showed no signs of constipation. Mice given both developed it: fewer stool pellets, drier feces, lower mucin levels, and a gut lining that was becoming more permeable. Food intake, water intake, and urine output stayed the same in all groups, which ruled out dehydration as the cause.

The cleanest result came from a genetic experiment. The researchers engineered a version of B. thetaiotaomicron with a single gene deleted, the one responsible for activating its sulfatases. Without functioning sulfatases, B. thetaiotaomicron could no longer unlock mucin for its partner. Mice colonized with this modified bacterium alongside A. muciniphila showed almost no constipation. Stool output recovered, moisture content rose, and mucin levels came back. Taking out one gene in one bacterium broke the whole cycle.

What a Bacterial Constipation Diagnosis Could Mean for Patients

The researchers propose that measuring fecal levels of A. muciniphila, especially when found alongside certain partner bacteria, could help identify patients in this new category. If levels are elevated and sulfatase-producing bacteria are present alongside it, the conditions for bacterial constipation exist. As a treatment path, they point to phage therapies (which use viruses to target specific bacteria) or small-molecule drugs designed to block bacterial sulfatase activity, either of which would aim to preserve the mucus layer without relying on laxatives.

As the authors put it: “Fecal abundance of A. muciniphila may serve as a biomarker for identifying such patients. In addition, phage-mediated bacterial suppression or small molecules to block bacterial sulfatases may preserve colonic mucus integrity, improve stool hydration, and alleviate constipation in these patients.”

That would be a meaningful shift. For people who have spent years managing a condition their doctors can explain but not reliably fix, a named mechanism and a targeted treatment approach offers something that fiber and stool softeners never could: an actual answer.

Source : https://studyfinds.com/new-type-of-constipation-caused-by-gut-bacteria-discovered/

YouTube recovering after disruptions in the US, Southeast Asia

A teenager poses for a photo while holding a smartphone in front of a Youtube logo in this illustration taken September 11, 2025. REUTERS/Dado Ruvic/Illustration

Reports of disruptions to YouTube have tapered off after hitting a peak on Wednesday (Feb 18) morning in Asia.

Thousands of users in several Southeast Asian countries – including Singapore, Malaysia, Indonesia and the Philippines – reported errors, according to Downdetector, which tracks outages by collating status reports from multiple sources.

In Singapore, reports of issues with the platform began spiking shortly before 9am, reaching almost 3,000 at 9.27am before declining. By 10.12am, only around 200 reports remained.

Over 320,000 users reported errors in the United States.

“If you’re having trouble accessing YouTube right now, you’re not alone – our teams are looking into this,” the company said on X, linking to a support page.

The help page later posted that “an issue with our recommendations system prevented videos from appearing across surfaces on YouTube (including the homepage, the YouTube app, YouTube Music and YouTube Kids).”

Source : https://www.channelnewsasia.com/business/youtube-down-downdetector-5937216

‘Preparation Is The Antidote’: PM Modi Addresses AI Job Loss Fears

The Prime Minister drew parallels with past technological revolutions, noting that history has consistently shown that innovation does not eliminate work; it transforms its nature

PM Narendra Modi during the inauguration of the India AI Impact Summit in New Delhi on February 16, 2026. (Image: PMO/PTI)

Prime Minister Narendra Modi addressed one of the most pressing anxieties of the modern era: the fear among the youth that artificial intelligence will lead to widespread job displacement. Acknowledging these concerns with empathy in a comprehensive interview with ANI around the India AI Impact Summit 2026, the Prime Minister asserted that “preparation is the best antidote to fear”. He outlined a vision where AI acts as a partner to human intelligence rather than a replacement for it.

A Historical Perspective on Innovation

The Prime Minister drew parallels with past technological revolutions, noting that history has consistently shown that innovation does not eliminate work; it transforms its nature. He reminded the youth that for centuries, every major breakthrough—from the Industrial Revolution to the birth of the internet—was met with similar scepticism regarding the future of employment. However, in each instance, new sectors emerged that were previously unimaginable. “Whenever innovation happens, new opportunities emerge,” he noted, expressing confidence that the age of AI would follow this same trajectory by creating entirely new categories of high-quality tech jobs.

AI as a Force Multiplier

Central to the Prime Minister’s argument is the concept of AI as a “force multiplier.” He explained that instead of making professionals redundant, AI will empower them to reach greater heights. He cited examples of how doctors, teachers, and lawyers can leverage these tools to assist larger groups of people with greater precision and efficiency. In this context, AI is viewed as a servant of human dignity that enhances human capability, allowing the workforce to focus on high-value, creative, and empathetic tasks while automation handles the routine.

Proactive Skilling and Global Standing

To ensure that India’s demographic dividend is protected, the Prime Minister highlighted the government’s massive investment in “skilling and re-skilling”. He stated that the administration is treating AI-driven disruption as a “present imperative” rather than a future problem. The success of this proactive approach is already visible on the global stage; the Prime Minister referenced the Stanford Global AI Vibrancy Index 2025, where India ranked 3rd globally, reflecting the country’s robust growth in AI talent and research.

Source : https://www.news18.com/tech/preparation-is-the-antidote-pm-modi-addresses-ai-job-fears-9910691.html

The tech bros might show more humility in Delhi – but will they make AI any safer?

Those who shout the loudest about artificial intelligence tend to be in the West, notably the US and Europe.

So it’s significant that a gathering of powerful leaders is being held in the Global South, a region of the world that runs the risk of being left behind in the AI race.

Tech bosses, politicians, scientists, academics and campaigners are meeting at the AI Impact Summit in India this week for top-level discussions about what the world should be doing to try to marshal the AI revolution in the right direction.

At last year’s AI Action Summit, as it was then known, an ugly power struggle broke out between some Western countries over who should be in charge.

The various Western powers jostled for pole position in Paris, and US vice president JD Vance delivered a blistering speech in which he said America’s place at the top of the pack was non-negotiable.

I suspect there may be a more humble vibe this week in Delhi: the capital of a country which has helped to build the foundations that support this mega-powerful new tech – but is not reaping as much reward as the more affluent West.

There are some significant AI hubs in India, including in Bengaluru, Hyderabad and Mumbai. It has a large tech workforce, and has attracted some big infrastructure investments from the likes of Google, Nvidia and Amazon.

At the same time, low-paid workers there have long been carrying out the unseen and painstaking task of manually categorising the vast amounts of data used to train the world’s AI tools.

In her book Empire of AI, the journalist Karen Hao writes about an unnamed firm in India which was contracted to do content moderation of AI-generated images: she claimed it included workers looking at horrifying ones to decide which should be blocked from being reproduced.

According to the recruitment website Glassdoor, the average salary for an AI data trainer in Chennai is 480,000 rupees – less than £4,000 ($5,000) per year.

It’s an essential role, but to put this into perspective, OpenAI, the creator of ChatGPT, is valued at over $500bn.

‘More than technology’ for India

The 2026 International AI Safety Report notes that while “in some countries over 50% of the population uses AI, across much of Africa, Asia, and Latin America adoption rates likely remain below 10%.”

The world’s biggest US AI chatbots do not work in all of India’s 22 official languages – let alone the hundreds of dialects that exist within them. ChatGPT and Claude currently support around half of them. Google’s Gemini supports nine.

“Without tech that understands and speaks these languages, millions are excluded from the digital revolution – especially in education, governance, healthcare, and banking,” Professor Pushpak Bhattacharyya, from IIT Mumbai, told the BBC last summer.

To counter this, India is building its own sovereign AI platforms – the Indian government calls this the AI Mission – but progress is relatively slow.

While the US products – as well as Chinese ones such as DeepSeek and ByteDance – race ahead with new releases, many of India’s remain in development.

The Indian government’s $1.2bn budget for this project pales in comparison with the deep pockets of the multi-billion dollar corporations.

Before Christmas, an Indian government official told me, perhaps unsurprisingly, that India has little interest in AI’s geopolitical power struggles. The country’s focus is on harnessing the tech to drive its own growth.

“For India, this is about more than technology, it is about economic transformation, digital sovereignty and building capability at scale,” said Rajan Anandan, managing director at Peak XV, one of India’s biggest tech investors.

“Within the country there is a strong sense of momentum and confidence.”

The US, meanwhile, may find itself rather unusually forced into more of a back seat. I imagine it’s not going to like that very much.

“The Americans will have less to say with the Summit’s proposed bottom-up, Global South approach to AI governance that focuses on people, planet and progress,” says Professor Gina Neff, an AI ethics expert from Queen Mary University of London.

“We need governments to act together to shape a more inclusive, democratic and people-centred vision of AI in the face of unprecedented corporate power,” argues Jeni Tennison, executive director of the think tank Connected by Data.

“As the world’s largest ‘middle power’, India could make that happen,” she adds.

AI expert Henry Ajder agrees. “I hope we will see pragmatic efforts to move beyond a legislative patchwork towards meaningful consensus in addressing AI harms, maliciously caused or otherwise,” he told me.

Amanda Brock, chief executive of tech industry body OpenUK, thinks the answer is to force the AI companies to share how their products work so that others can build their own versions, make improvements and properly scrutinise the tech.

“For this summit to have any real impact for the Global South, there needs to be access for all to AI and that can only be achieved by opening it up,” she argues.

There has been movement in that direction, but many of the AI giants are still keeping key elements, such as what training data they use, confidential.

Some AI experts have told me privately that they are concerned about how far down the agenda safety and responsibility appears to have slipped.

Source : https://www.bbc.com/news/articles/cr5l6gnen72o

Clear, dry conditions to prevail on Sunday, significant shift expected from Monday

Clear, dry conditions are set to prevail across India this Sunday. A significant weather shift is expected from Monday as a Western Disturbance nears, bringing rain and snow to the Himalayas.

A cat crosses a snow-covered road after a fresh snowfall on the outskirts of Srinagar, Kashmir, India (Photo: AP)

Clear and dry conditions are expected to prevail across the country on Sunday, February 15, even as a heatwave alert has been sounded for parts of the west coast.

The India Meteorological Department (IMD) has indicated that while the weather remains stable tomorrow, a significant shift is on the horizon with a fresh Western Disturbance arriving on February 16.

Devendra Tripathi, founder of Mausam Tak and weather vlogger for Kisan Tak, notes that pan-India clear and dry weather is expected tomorrow, stretching from the western Himalayan region to the south peninsular region.

WHAT IS THE WEATHER FORECAST FOR NORTH INDIA?
On February 15, while the north remains dry, dense fog is likely during morning hours in isolated places throughout Himachal Pradesh.

Devendra Tripathi explains that although the weather will be clear, the impact of cold winds will persist in Uttar Pradesh, Bihar, Haryana, and Punjab.

Devendra Tripathi says, “Tomorrow, basically, pan-Indian clear and dry weather is expected.”

He adds that a new Western Disturbance will begin affecting the region from Monday, February 16, bringing rain to the mountains and clouds to the plains.

The IMD reports that this system is likely to trigger isolated rainfall and snowfall over Jammu and Kashmir, Ladakh, Gilgit, Baltistan, and Muzaffarabad on February 16 and 17.

Himachal Pradesh and Uttarakhand are expected to follow with similar conditions on February 17 and 18.

WILL TEMPERATURES RISE ACROSS THE COUNTRY?
A gradual change is on the horizon. The IMD expects minimum temperatures in northwest and central India to rise by two to three degrees Celsius over the next three days.

Devendra Tripathi highlights that due to a change in wind direction, temperatures will increase in Gujarat, Maharashtra, West Madhya Pradesh, and Rajasthan.

In the south, the heat is picking up. The IMD has issued a warning for hot and humid conditions over coastal Karnataka, and Konkan and Goa for February 15 and 16.

Devendra Tripathi warns of a potential heatwave in these coastal regions, including Mumbai and Mangalore, where temperatures may cross 35 to 37 degrees Celsius.

Source: https://www.indiatoday.in/science/story/india-weather-forecast-rain-snow-heatwave-imd-devendra-tripathi-february-15-weather-update-sunday-weather-update-2868380-2026-02-14

Four new astronauts arrive at International Space Station to replace NASA’s evacuated crew

Last month’s medical evacuation was NASA’s first in 65 years of human spaceflight; one of four astronauts launched by SpaceX last summer suffered what officials described as a serious health issue, prompting their hasty return

Crew 12 mission astronauts, from left, pilot Jack Hathaway, Russian cosmonaut Andrei Fedyaev, commander Jessica Meir and ESA astronaut Sophie Adenot, of France, leave the Operations and Checkout building before heading to pad 40 at the Cape Canaveral Space Force Station in Cape Canaveral, Florida on February 13, 2026. | Photo Credit: AP

The International Space Station returned to full strength with the arrival of four new astronauts to replace colleagues who bailed early because of health concerns.

SpaceX delivered the U.S., French and Russian astronauts on Saturday (February 14, 2026), a day after launching them from Cape Canaveral.

Source: https://www.thehindu.com/sci-tech/four-new-astronauts-arrive-at-international-space-station-to-replace-nasas-evacuated-crew/article70633411.ece

AI Has Led To Industrialisation Of Cybercrime, Says Indian Cybercrime Coordination Centre CEO Rajesh Kumar

Speaking at the Global CyberPeace Summit 2026, Indian Cybercrime Coordination Centre CEO Rajesh Kumar said that the cyber attacks carried out between 2024 and 2025 have seen a lot of AI adoption and automation.

AI Has Led To Industrialisation Of Cybercrime, Says Indian Cybercrime Coordination Centre CEO | Freepik

The advent of artificial intelligence has led to the industrialisation of cybercrime, with a lot of automated attacks being carried out by organised gangs, a top official of the Indian Cybercrime Coordination Centre said on Tuesday.

Indian Cybercrime Coordination Centre (I4C), set up by the home ministry, is a nodal agency for providing a framework for law enforcement agencies to deal with cybercrime.

“The biggest change that has come is that now cybercrime is being committed on an industrial scale. The technology has enabled industrialisation of cybercrime. With industrialisation, I mean the organised criminal gangs, mostly operating from Southeast Asia, Africa, the Middle East and maybe within parts of our country as well. They have bureaucratic structures with specialised wings,” Kumar said.

He said that the organised gangs have a human resources wing, which recruits members and handles their promotions and remuneration.

“They will have the research and development wing which identifies what are the weaknesses which can be exploited, and how to exploit. They exploit technology, the weaknesses in the technological systems that we adopt, and they also identify the weakness in the human psyche,” he said.

He said that several attacks were in the form of a social engineering attack, but they were aided by AI.

“Globally, it is estimated that the cost of cybercrime for the year 2025 was around USD 10.8 trillion. This is the cost of cybercrime. It is expected to grow to around USD 12 trillion this year,” he said.

Kumar said that state-backed actors are using criminal infrastructure to further their geopolitical objectives.

According to several global think tanks, 80 per cent of cyber attacks are now AI driven, he said.

“Those of you who have received SMSs that your challan is pending – those messages were created, personalised and sent by an AI. AI is auto scripting, it is drafting, and it is executing. The messages that you are getting, whether it is in the form of an SMS or whether it is a WhatsApp communication, it is hyper personalised. So, you are made to believe,” Kumar said.

Even in the cases of digital arrest, AI is being used to show the face of a famous police officer to make one believe that the person on the other side is an actual police officer, he said.

Kumar said that a new modus operandi has come to light: a triple extortion model, in which the criminal installs ransomware, encrypts the data and then threatens to leak it.

“Another very disturbing trend now that we are seeing is crime as a service. Now gangs are there, which are offering crime as a service. That means you want to commit a crime, but you do not know how to go about it. Hire somebody else to do the crime,” he said, citing a number of cybercrimes that have conned even senior government officers.

Ranjana Jha, Vice Chancellor of the Indira Gandhi Delhi Technological University for Women (IGDTUW), said that among the many technologies that are driving digital transformation in modern enterprise, artificial intelligence stands apart as the most disruptive and consequential.

“AI-driven human-computer interaction is no longer experimental; it is routine, invisible and pervasive. AI is transforming healthcare, finance, education, justice, creativity at breathtaking speed, yet public trust remains fragile. India needs widespread AI literacy, which will improve the quality of life,” she said.

Source : https://www.freepressjournal.in/tech/ai-has-led-to-industrialisation-of-cybercrime-says-indian-cybercrime-coordination-centre-ceo-rajesh-kumar

Awake Patients Report Higher Satisfaction Than Sedated Ones During Cancer Screenings Using This Method

Credit: Inside Creative House on Shutterstock

Patients undergoing throat and stomach exams without any sedation reported higher satisfaction than rates previously reported for sedated patients undergoing similar procedures. The finding upends the basic assumption in medicine that comfort requires drugs.

The secret? Two breathing techniques taught in about five minutes. The first, mindful breathing, is slow: breathe in through your nose for three seconds, pause, then exhale through pursed lips for seven or eight seconds. The second, throat rescue breathing, interrupts gagging before it starts with three quick sniffs followed by a long exhale.

Nurses at Nottingham University Hospitals taught these techniques to 241 patients before their cancer screening exams. Afterward, 92% said they were satisfied or very satisfied. Previous research using the same satisfaction questionnaire found that only 86% of sedated patients reported satisfaction, and just 53% of patients having standard awake procedures felt satisfied.

Those statistical differences matter. Examining the throat and digestive tract for early cancer usually requires sedation or general anesthesia. Sedation can increase the risk of breathing and heart complications, and general anesthesia carries added danger when tumors obstruct the airway. This new approach sidesteps those risks while giving patients what they actually want: control, clear communication, and genuine support.

The Team Behind Mindful Endoscopy

One doctor performs the exam. One or two specially trained nurses do everything else, and their job isn’t monitoring equipment. They watch faces and bodies, recognizing tension the instant it appears.

When a patient’s shoulders creep up toward their ears, the nurse places a reassuring hand on their shoulder and says “drop your shoulders.” When someone’s face tightens, the instruction is “smile.” When gagging starts, the nurse simply says “sniff, sniff, sniff,” the cue patients practiced minutes earlier.

Training takes time. Nurses attend teaching sessions on how breathing affects the body and mind. They observe procedures, then gradually take the lead while instructors watch. Only after supporting dozens of exams do they work solo. But once trained, they become the linchpin of the entire approach.

Music plays during every procedure. Patients bring their own playlists or choose from hospital selections. The team never uses sharp or scary words. Instead of “this might hurt,” nurses say “your throat will feel numb from the spray.”

Why Breathing Actually Works

The techniques appear to hit the body and mind simultaneously. Focusing on breath keeps your attention in the present moment. There’s less mental space left over for anxious thoughts about what might happen next. You’re not just enduring the procedure, you’re actively managing it.

At the same time, slow breathing does something physical. Long exhalations lower your heart rate and blood pressure. Your body reads these changes as evidence of relaxation, which triggers actual relaxation. It’s a feedback loop that happens fast enough to prevent panic during the brief seconds when the scope passes through your throat.

Throat rescue breathing works differently. Quick inhalations cool the throat lining and reduce contact between the scope and sensitive tissue. The urge to gag gets cut off before becoming full gagging. Nurses teach patients to catch that urge early and squash it immediately.

Even posture matters. Raised shoulders tell your brain the neck is under threat. Squinted eyes prepare for danger. When nurses guide patients to drop shoulders and relax faces, they’re removing physical signals the brain interprets as warnings.

What the Numbers Show

Between July 2022 and July 2023, researchers tracked patients who had either throat exams or full digestive tract exams using these mindfulness techniques, according to the study published in the British Journal of Nursing. All exams happened in outpatient clinics: no operating rooms, no sedation facilities.

After each procedure, patients filled out satisfaction surveys. Some 94% rated the technical quality as good or very good. When asked about overall satisfaction, 92% chose satisfied or very satisfied. Nearly all (96%) said they’d be happy to have the same doctor repeat the procedure if necessary.

The exams caught 12 cancers in various locations: vocal cords, throat, tonsil, esophagus, chest, and thyroid. At 14 months of follow-up, no cancers had been missed.

What Patients Actually Said

Written comments explain why awake beat sedated. Patients emphasized feeling supported rather than medicated. One wrote: “I was amazed by how you helped me cope with this.” Another said: “The examination was stress free because the Professor and his nursing team talked me through everything that was about to happen, in a professional and calm way.”

One patient who’d had both sedated and mindfulness-supported exams made the comparison directly: “This procedure is much better than the endoscopy I have previously had and absolutely less traumatic. If I had to choose between the two procedures again, I would choose this one.”

Another captured it simply: “An uncomfortable procedure made easy by caring and well-qualified staff.”

Rethinking Medical Comfort

For decades, medicine has assumed that uncomfortable procedures require pharmaceutical intervention. This study suggests skilled human support might work better than sedation for many patients.

The National Health Service aims to diagnose 75% of cancers at early stages, a goal expected to save roughly 55,000 lives. Little progress has been made. For throat and digestive tract cancers, vague early symptoms make detection hard. Definitive diagnosis depends on endoscopy, but the need for sedation or anesthesia limits where and how often these exams happen.

Mindful endoscopy could help ease a major bottleneck in cancer screening. Thorough exams can now happen in outpatient settings without sedation. The technique might extend beyond cancer screening to any procedure typically done under local anesthesia. The NHS wants to reduce treatment backlogs partly by moving procedures out of operating rooms. This approach could help.

The study has limitations. One surgeon at one hospital performed or supervised all procedures. Whether other teams can match these results remains unknown. Patients self-selected into the study, meaning they agreed to awake procedures rather than requesting sedation upfront.

Source : https://studyfinds.org/awake-patients-higher-satisfaction-than-sedated-cancer-screening-breathing/

As AI enters the operating room, reports arise of botched surgeries and misidentified body parts

Illustration: REUTERS/John Emerson, photo: Adobe Stock

In 2021, a unit of healthcare giant Johnson & Johnson announced “a leap forward”: It had added artificial intelligence to a medical device used to treat chronic sinusitis, an inflammation of the sinuses. Acclarent said the software for its TruDi Navigation System would now use a machine-learning algorithm to assist ear, nose and throat specialists in surgeries.

The device had already been on the market for about three years. Until then, the U.S. Food and Drug Administration had received unconfirmed reports of seven instances in which the device malfunctioned and another report of a patient injury. Since AI was added to the device, the FDA has received unconfirmed reports of at least 100 malfunctions and adverse events.

At least 10 people were injured between late 2021 and November 2025, according to the reports. Most allegedly involved errors in which the TruDi Navigation System misinformed surgeons about the location of their instruments while they were using them inside patients’ heads during operations.

Cerebrospinal fluid reportedly leaked from one patient’s nose. In another reported case, a surgeon mistakenly punctured the base of a patient’s skull. In two other cases, patients each allegedly suffered strokes after a major artery was accidentally injured.

FDA device reports may be incomplete and aren’t intended to determine causes of medical mishaps, so it’s not clear what role AI may have played in these events. The two stroke victims each filed a lawsuit in Texas alleging that the TruDi system’s AI contributed to their injuries. “The product was arguably safer before integrating changes in the software to incorporate artificial intelligence than after the software modifications were implemented,” one of the suits alleges.

Asked about the FDA reports on the TruDi device, Johnson & Johnson referred questions to Integra LifeSciences, which in 2024 purchased Acclarent and the TruDi Navigation System. Integra LifeSciences said the reports “do nothing more than indicate that a TruDi system was in use in a surgery where an adverse event took place.” It added that “there is no credible evidence to show any causal connection between the TruDi Navigation System, AI technology, and any alleged injuries.”

Insight into the incidents comes as AI is beginning to transform the world of health care. Proponents predict the new technology will help find cures for rare diseases, discover new drugs, enhance surgeons’ skill and empower patients. But a Reuters review of safety and legal records, as well as interviews with doctors, nurses, scientists and regulators, documents some of the hazards of AI in medicine as device makers, tech giants and software developers race to roll it out.

At least 1,357 medical devices using AI are now authorized by the FDA – double the number it had allowed through 2022. The TruDi system isn’t the only one to come into question: The FDA has received reports involving dozens of other AI-enhanced devices, including a heart monitor said to have overlooked abnormal heartbeats and an ultrasound device that allegedly misidentified fetal body parts.

Researchers from Johns Hopkins, Georgetown and Yale universities recently found that 60 FDA-authorized medical devices using AI were linked to 182 product recalls, according to a research letter published in the JAMA Health Forum in August. Their review showed that 43% of the recalls occurred less than a year after the devices were greenlighted. That’s about twice the recall rate of all devices authorized under similar FDA rules, the review noted.

The AI boom poses a problem for the FDA, five current and former agency scientists told Reuters: The agency is struggling to keep pace with the flood of AI-enhanced medical devices seeking approval after losing key staff. A spokesperson for the U.S. Department of Health and Human Services, which includes the FDA, said it’s looking to boost its capacity in this area.

Another form of artificial intelligence, generative-AI chatbots, is also making its way into medicine. Many physicians are now using AI to save time, such as in transcribing patient notes. But doctors also say many patients use chatbots to self-diagnose or challenge professional advice, posing new challenges and risks.

Artificial intelligence became a business and social sensation after the launch of ChatGPT about three years ago. ChatGPT and other popular chatbots, such as Google’s Gemini and Anthropic’s Claude, use so-called generative AI to create content. They are built on top of large language models, or LLMs, which are trained on huge troves of text and other data to understand and generate human language. These AI tools are now being introduced into medical areas such as consumer healthcare apps.

AI encompasses more than LLMs, however, and the technology made its way into medicine long before AI bots appeared. The field dates back more than 70 years: A key moment was when British mathematician Alan Turing asked in a 1950 paper, “Can machines think?”

The FDA authorized its first AI-enhanced medical devices in 1995 – two systems that used pattern-matching software to screen for cervical cancer. The type of AI used in medical devices today is often called machine learning, along with a subset known as deep learning; both are trained on data to perform specific tasks. The technology is used in radiology, for example, to enhance and analyze medical images. It can help diagnose cancers by identifying tumors that doctors may overlook.

Such systems are also used in surgical devices. In June 2022, a surgeon inserted a small balloon into Erin Ralph’s sinus cavity at a hospital in Fort Worth, Texas. According to a lawsuit filed by Ralph, Dr. Marc Dean was employing the TruDi Navigation System, which uses AI, to confirm the position of his instruments inside her head.

The procedure, known as a sinuplasty, is a minimally invasive technique to treat chronic sinusitis. A balloon is inflated to enlarge the sinus cavity opening, to allow better drainage and relieve inflammation.

But the TruDi system “misled and misdirected” Dean, according to the lawsuit Ralph filed in Dallas County District Court against Acclarent and other defendants. A carotid artery – which supplies blood to the brain, face and neck – allegedly was injured, leading to a blood clot. According to a court filing, Ralph’s lawyer told a judge that Dean’s own records showed he “had no idea he was anywhere near the carotid artery.” Reuters wasn’t able to review the records, which are subject to a judicial protective order.

After Ralph left the hospital, it became apparent that she had suffered a stroke. The mother of four returned and spent five days in intensive care, according to a GoFundMe fundraising drive that was organized to support her recovery. A section of her skull was removed “to allow her brain room to swell,” the GoFundMe appeal stated.
“I am still working in therapy,” Ralph said in an interview more than a year later in a blog about stroke victims. “It is hard to walk without a brace and to get my left arm back working, again.”

In May 2023, Dean was using TruDi in another sinuplasty operation when patient Donna Fernihough’s carotid artery allegedly “blew.” Blood “was spraying all over” – even landing on an Acclarent representative who was observing the surgery, according to a lawsuit Fernihough filed in U.S. District Court in Fort Worth against Acclarent and several manufacturers. One of Fernihough’s carotid arteries was damaged. She suffered a stroke the day of the surgery, according to her suit.
Acclarent “knew or should have known that the purported artificial intelligence caused or exacerbated the tendency of the integrated navigation system product to be inconsistent, inaccurate, and unreliable,” the suit alleges.
Acclarent has denied the allegations in both suits, which are ongoing, according to court filings. The company says it did not design or manufacture the TruDi system but only distributed it, according to court filings. Acclarent’s owner, Integra LifeSciences, told Reuters there’s no evidence of a link between the AI technology and any alleged injuries.
Dean began consulting for Acclarent in 2014 and received more than $550,000 in consultant’s fees from the company through 2024, according to Open Payments, a federal database that tracks financial ties between companies and physicians. At least $135,000 of those fees related to the TruDi system.
An attorney for Dean said the doctor couldn’t comment due to patient privacy and ongoing litigation. Integra said Dean is no longer a TruDi consultant and that payments made to him after it acquired Acclarent were for meals.
In 2021, Acclarent’s president at the time, Jeff Hopkins, was pushing to put AI in TruDi “as a marketing tool” to claim that the device “had new and novel technology,” Fernihough’s suit alleges.
The TruDi software uses machine learning to identify specific segments of a patient’s anatomy and calculate “the shortest, valid path between two points specified by the physician,” according to an Acclarent post on LinkedIn. The technology is designed to simplify surgical planning and provide real-time feedback during procedures such as sinus operations.
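Acclarent has not published how TruDi computes that path, but finding “the shortest, valid path between two points” is a classic graph-search problem. As a purely illustrative sketch, with an invented toy graph of anatomical landmarks and edge costs (nothing below comes from Acclarent), Dijkstra's algorithm performs the core computation:

```python
import heapq

def shortest_valid_path(graph, start, goal):
    """Dijkstra's algorithm over a graph of anatomical waypoints.

    graph: dict mapping node -> list of (neighbor, cost) pairs, where
    cost could encode distance plus a safety penalty.
    Returns (total_cost, path), or (float('inf'), []) if unreachable.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:
            # Walk the predecessor chain back to the start.
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float('inf')):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float('inf'), []

# Toy graph: nodes are labeled landmarks, edges carry invented costs.
g = {
    'entry':  [('ostium', 2.0), ('septum', 5.0)],
    'ostium': [('sinus', 1.5)],
    'septum': [('sinus', 1.0)],
}
cost, path = shortest_valid_path(g, 'entry', 'sinus')
print(cost, path)  # 3.5 ['entry', 'ostium', 'sinus']
```

In a real navigation system, “validity” would be enforced by pruning unsafe edges or folding safety penalties into the edge costs before the search runs.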
Acclarent officials had approached Dean about the plan to add AI, the Fernihough suit states. The surgeon warned Hopkins and Acclarent “that there were issues that needed to be resolved,” the complaint adds. Despite that warning, the suit claims, Acclarent “lowered its safety standards to rush the new technology to market,” and set “as a goal only 80% accuracy for some of this new technology before integrating it into the TruDi Navigation System.”
Reuters couldn’t establish whether Dean issued the warning. Reporters were unable to review material submitted in support of Fernihough’s claims, which is subject to a judicial protective order.
Hopkins, the former Acclarent president, did not respond to a request for comment.

‘WRONG BODY PARTS’

The FDA cautions that reports of adverse events and device malfunctions are limited: They often lack detail, are redacted to protect trade secrets, and can’t be used alone to place blame. The agency also sometimes receives multiple reports for a single incident.
Reuters found that at least 1,401 of the reports filed to the FDA between 2021 and October 2025 concern medical devices that are on an FDA list of 1,357 products that use AI. The agency says the list isn’t comprehensive. Of those reports, at least 115 mention problems with software, algorithms or programming.
One FDA report in June 2025 alleged that AI software used for prenatal ultrasounds was misidentifying fetal body parts. Called Sonio Detect, it uses machine learning techniques to help analyze fetal images.
“Sonio detect software ai algorithm is faulty and wrongly labels fetal structures and associates them with the wrong body parts,” stated the report, which does not say that any patient was harmed. Sonio Detect is owned by Samsung Medison, a unit of Samsung Electronics. Samsung Medison said the FDA report about Sonio Detect “does not indicate any safety issue, nor has the FDA requested any action from Sonio.”
The HHS spokesperson didn’t respond to questions about Sonio Detect.

At least 16 reports claimed that AI-assisted heart monitors made by medical-device giant Medtronic failed to recognize abnormal rhythms or pauses. None of the reports mentioned injuries. Medtronic told the FDA that some of the incidents were caused by “user confusion.”
The AI algorithms in Medtronic’s LINQ series of implantable cardiac monitors are described as “deep learning artificial intelligence.” They have greatly reduced false alerts and retained true alerts of heart events, according to the company’s website. But the company also says on its site and in product literature that its AI technology, AccuRhythm AI, can misclassify actual abnormal heart rhythms or pauses.

Medtronic told Reuters that it reviewed all 16 episodes and concluded its device only missed one abnormal heart-rhythm event. “None of these reports resulted in patient harm,” it said. Medtronic said some of the incidents were related to problems with data display, not the AI technology. It declined to explain fully what went wrong in each incident.
The HHS spokesperson said the agency doesn’t discuss possible or ongoing compliance matters.

FDA CUTBACKS UNDER TRUMP

In interviews, five current and former FDA scientists who reviewed AI-powered medical devices told Reuters that federal regulators are now less equipped to handle the flood of new ones.
About four years ago, the FDA expanded its roster of scientists who specialize in AI, particularly for reviewing medical imaging and radiology devices that use the technology. Many recruits were stationed in the Division of Imaging, Diagnostics and Software Reliability (DIDSR). The unit became the agency’s key resource for assessing the safety of AI in medicine, one current and two former FDA employees told Reuters. It grew to about 40 people early last year.
“Some senior regulators have no idea how these technologies work,” one ex-employee said. “We sat closely with senior regulators and explained to them why we think this technology is safe or not safe to use in the market.”
It wasn’t easy to lure top talent to government service. Recruiting computer scientists often required persuading them to turn down higher pay in the private sector.
In their work, scientists tried to “break” the devices’ AI models, a former employee said. They would test a device’s algorithms in a variety of clinical situations and check whether the AI’s performance deteriorated over time. They also sought to minimize “hallucinations,” in which AI models sometimes generate false information, FDA officials wrote in a paper published in October.
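The article doesn't describe the reviewers' actual tooling, but the idea of checking whether an AI's performance deteriorated over time can be sketched in a few lines. The windows, labels and 0.9 threshold below are invented for illustration:

```python
# Illustrative performance-drift check: score a model's predictions in
# successive time windows and flag windows where accuracy falls below a
# threshold. All data and the 0.9 cutoff are made up for this sketch.

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

def drift_windows(windows, threshold=0.9):
    """Return indices of time windows whose accuracy dips below threshold."""
    return [i for i, (preds, labels) in enumerate(windows)
            if accuracy(preds, labels) < threshold]

# Three monthly windows of (predictions, ground truth):
history = [
    ([1, 1, 0, 1], [1, 1, 0, 1]),   # 100% accurate
    ([1, 0, 0, 1], [1, 1, 0, 1]),   # 75%: performance has degraded
    ([1, 1, 0, 0], [1, 1, 0, 1]),   # 75%
]
print(drift_windows(history))  # [1, 2]
```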
But early last year, the Trump administration began to dismantle the AI team as part of Elon Musk’s cost-cutting campaign, the Department of Government Efficiency, or DOGE. About 15 of the 40 AI scientists in the DIDSR unit were laid off or opted to go, the FDA insiders said. Another unit that crafted policy on devices using AI, the Digital Health Center of Excellence, lost about a third of its staff of around 30.

Andrew Nixon, the HHS spokesperson, said the FDA is applying the same rigorous standards to medical devices aided by machine learning and other AI as it would to any product.

“Patient safety is the FDA’s highest priority and is at the forefront of our work to protect and promote the public health,” Nixon said. “The FDA sees tremendous promise in the digital health space,” including devices enabled with AI and machine learning, “to help diagnose and treat a range of conditions.” He said the FDA continues to recruit and develop talent with expertise in digital health, artificial intelligence and other emerging technologies.
Since the cuts, the workload has nearly doubled for some device reviewers, said two ex-employees. “If you don’t have the resources, things are more likely to be missed,” said a former device reviewer who left last year.

Source : https://www.reuters.com/investigations/ai-enters-operating-room-reports-arise-botched-surgeries-misidentified-body-2026-02-09/

India Gains In-Orbit Spying Capability, Can Now Snoop On Enemy Satellites

India today operates more than 50 satellites, collectively valued at over Rs 50,000 crore, spread across communication, navigation, Earth observation, and strategic applications

Using its 80-kilogram Earth-observation satellite AFR, Azista captured images of ISS

In a significant milestone for India’s growing private space sector, Ahmedabad-based Azista Industries Private Limited, through its aerospace vertical, has demonstrated a new indigenous capability to image objects in orbit from another satellite, a first for the Indian private sector and a key step toward strengthening India’s space situational awareness. The capability is often called in-orbit snooping.

Using its 80-kilogram Earth-observation satellite AFR, Azista successfully captured images of the International Space Station (ISS), a large and relatively easy-to-track orbital object, during two carefully planned experiments on February 3. While the ISS is among the most visible and cooperative targets in low-Earth orbit, the achievement marks an important beginning for India’s private sector in a domain that is increasingly strategic and closely watched globally.

Azista conducted two independent imaging attempts under challenging near-horizon and sunlit conditions. The first pass was executed at a distance of approximately 300 kilometres, followed by a second at about 245 kilometres. In both attempts, the AFR satellite’s sensor was precisely tasked to track the fast-moving ISS, capturing a total of 15 distinct frames at a sampling distance of around 2.2 metres. According to the company, both attempts achieved 100 per cent success, validating its tracking algorithms and electro-optical imaging precision.

To Azista, the demonstration is more than a technical achievement: it is proof that indigenous algorithms, electro-optical systems, and satellite engineering developed entirely in India can be used to track and characterise objects in orbit.

Speaking after the successful experiment, Srinivas Reddy, Managing Director of Azista, said AFR today supports multiple customers with advanced imaging and remote-sensing solutions and has now demonstrated Non-Earth Imaging (NEI) using fully indigenous systems. “These technologies form the backbone of our NEI and SSA payloads, enabling precise tracking and characterisation of objects in orbit,” he said. The same technology once mastered can also help monitor incoming ballistic missiles.

Space Situational Awareness, the ability to detect, track, and understand the behaviour of objects in space, is becoming increasingly important as more countries deploy satellites with capabilities that can interfere with, jam, or manoeuvre close to other space assets. With congestion and competition growing in low-Earth orbit, monitoring what is happening above the planet has become as critical as observing what happens on its surface.

India today operates more than 50 satellites, collectively valued at over Rs 50,000 crore, spread across communication, navigation, Earth observation, and strategic applications. Protecting these assets requires timely information on what other satellites are doing in orbit, particularly during periods of heightened geopolitical tension.

While ISRO has demonstrated such capabilities earlier, including through the recent SPADEX in-orbit experiment that showcased precision rendezvous and manoeuvring, Azista’s effort represents a new approach driven by the private sector. By imaging the ISS, AFR has demonstrated a foundational capability that could, over time, be extended to monitor less cooperative or more complex orbital targets.

Brigadier Adarsh Bharadwaj, Executive Director at Azista, said the demonstration provides India with a much-needed ability to observe activity in orbit at a time when space platforms are becoming more vulnerable to interference. He described the ISS images as the “first proof of what can be achieved in the future,” noting that India is entering a new era of space situational awareness that will help protect national interests in space.

AFR itself is a milestone. Weighing just 80 kilograms, it is the first satellite in its size and performance class to be designed, built, and operated entirely by private industry in India. Launched on June 13, 2023, aboard SpaceX’s Falcon 9 as part of the Transporter-8 mission, the satellite has completed 2.5 years in orbit and continues to operate nominally, with another 2.5 years of mission life remaining.

Source : https://www.ndtv.com/india-news/india-gains-in-orbit-spying-capability-can-now-snoop-on-enemy-satellites-ahmedabad-based-azista-industries-private-limited-earth-observation-satellite-10967486

Musk’s mega-merger of SpaceX and xAI bets on sci-fi future of data centers in space

Seventy-five years ago, the idea of harnessing the power of the skies was little more than fantasy spun by futurists like Arthur C. Clarke and Isaac Asimov. Elon Musk’s mega-merger of his companies xAI and SpaceX this week brings this sci-fi dream a step closer.
NASA engineers and technologists have speculated for nearly two decades about moving energy‑hungry computing off the planet. More recently, the idea has captured the attention of Big Tech including Alphabet (GOOGL.O), and Jeff Bezos’ Blue Origin. The physics made sense, and the solar energy was abundant. Still, the challenges seemed insurmountable.

Musk, though, known for betting on seemingly far-out theories and getting them to work, may finally be laying the groundwork to make data centers in space a reality. He is armed with the world’s busiest satellite launch fleet, an AI startup, and an appetite for infrastructure that stretches from Earth to vacuum.
“In the long term, space-based AI is obviously the only way to scale,” Musk said on Monday. “To harness even a millionth of our Sun’s energy would require over a million times more energy than our civilization currently uses! The only logical solution therefore is to transport these resource-intensive efforts to a location with vast power and space.”

The merger sharpens investor focus on how he might overcome big hurdles through a tightly woven ecosystem of rockets, satellites and AI systems, to take AI infrastructure beyond Earth. It comes just as SpaceX is preparing for a potential $1.5 trillion IPO.
SpaceX has sought permission to launch up to 1 million solar‑powered satellites engineered as orbital data centers, far beyond anything currently deployed or proposed. In a filing with the Federal Communications Commission, SpaceX describes a solar‑powered, optical‑link‑driven “orbital data-center system,” though it did not say how many Starship launches would be required to scale the space data-center network to an operational degree.
“Compute in space isn’t sci-fi anymore,” said David Ariosto, author and founder of space intelligence firm The Space Agency. “And Elon Musk has already proven himself capable across multiple domains.”

OLD IDEA MEETS NEW ECONOMICS

Advocates argue space-based data centers would be a cheaper alternative to data centers on Earth, thanks to constant solar energy and the ability to dump heat directly into space. But some experts have warned that big commercial gains are years from reality as the concept faces daunting challenges and is fraught with technical risks: radiation, debris, heat management, latency, and formidable economics that include high maintenance costs.
“There’s some real challenges here, and how do you then make that cost-effective?” said Armand Musey, founder of Summit Ridge Group, who said the financial details of a project such as this were hard to model because the “technical unknowns haven’t been clarified.”
“But never say never,” said Musey, who called Musk’s track record “unbelievable.” “I think a large part of it is, it’s a bet on Elon. His success is really hard for people to ignore.”

Even with Musk’s ambitions, data centers in space may not be achievable for another decade, some experts have said.
The underlying physics behind space-based infrastructure is not new. Harnessing solar power in orbit dates back to Cold War-era research, when the U.S. Department of Energy and NASA studied space-based solar power concepts in the 1970s, ultimately concluding that launch and materials costs made them impractical.
What makes Musk’s efforts different is that his companies have more direct control over key elements of the system – from the rockets that will carry the hardware, to the links to beam data back to Earth, to a Musk-owned social network to generate demand for cheap AI computing.
“SpaceX has structural advantages that few others can match. It controls the world’s most active launch fleet, has demonstrated mass production of spacecraft through Starlink, and has access to substantial private capital,” said Kathleen Curlee, a research analyst at Georgetown University.

BOMBARDING CHIPS WITH RADIATION

Among the biggest challenges facing space data centers are radiation and cooling.
Data-center hardware will be bombarded by solar radiation and cosmic rays. In the past, chips designed for space were specially “hardened” against such radiation but were rarely as fast as today’s flagship AI chips.
Cooling AI chips, which generate immense heat during computations, is the other hurdle. While space is cold, it is also a near vacuum, so heat cannot be carried away the way it is on Earth. Powerful chips must instead move heat into large radiators that shed it as infrared energy, adding significant size, weight, and therefore cost.
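The radiator trade-off can be sized with the Stefan-Boltzmann law, which says a radiator sheds power in proportion to the fourth power of its temperature. A back-of-envelope sketch, where the 1 MW heat load, 300 K radiator temperature and 0.9 emissivity are illustrative assumptions rather than figures from any actual proposal:

```python
# Idealized radiator sizing via the Stefan-Boltzmann law
# (P = eps * sigma * A * T^4), ignoring absorbed sunlight and
# Earth's infrared glow. Illustrative figures only.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area(power_w, temp_k, emissivity=0.9):
    """One-sided radiator area (m^2) needed to shed power_w at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# 1 MW of chip heat at a 300 K radiator temperature:
area = radiator_area(1e6, 300.0)
print(f"{area:,.0f} m^2")  # 2,419 m^2
```

Even this idealized estimate implies thousands of square metres of radiator per megawatt of compute, which is where the size, weight and cost penalties come from.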

Source : https://www.reuters.com/business/aerospace-defense/musks-mega-merger-spacex-xai-bets-sci-fi-future-data-centers-space-2026-02-04/

Meta CEO Mark Zuckerberg Once Called Facebook Users Who Trusted Him With Data ‘Dumb F**ks’: Pavel Durov Mocks Him On X

Meta’s CEO Mark Zuckerberg is under scrutiny due to accusations that WhatsApp’s end-to-end encryption is ineffective, raising concerns about user privacy.

Telegram CEO Pavel Durov Mocks Mark Zuckerberg On X

Meta CEO Mark Zuckerberg has been facing a lot of issues since the internet got flooded with tweets claiming that the end-to-end encryption on WhatsApp (don’t even get us started on the lawsuits) is just a facade and doesn’t promise any safety or privacy to users. Now things have caught even more attention with the latest tweet from Telegram CEO Pavel Durov, who shared a screenshot of Zuckerberg’s chat from the early days of Facebook.

In the post shared on X, Pavel Durov said, ‘The only thing that’s changed since this conversation is the scale. Today, WhatsApp’s owner is privately laughing not at 4 thousand, but at 4 billion “dumb fucks” who trust his claims (like WhatsApp’s encryption).’

And in the image, Zuckerberg said, ‘Yeah so if you ever need info about anyone at Harvard. Just ask. I have over 4,000 emails, pictures, addresses, and SNS.’

To which the person he was having a conversation with said, ‘What? How’d you manage that one?’

Zuckerberg responded by saying, ‘People just submitted it. I don’t know why. They “trust me” Dumb f**ks.’

What’s The Back Story?

These conversations first surfaced in 2010, when Business Insider leaked them, prompting many users to question their privacy and safety on platforms like Facebook. At the time, the exchange was widely dismissed as a joke by a college student, since it dated from Zuckerberg’s Harvard days.

Source : https://www.timesnownews.com/technology-science/meta-ceo-mark-zuckerberg-once-called-facebook-users-who-trusted-him-with-data-dumb-fucks-pavel-durov-mocks-him-on-x-article-153553442

OpenAI is unsatisfied with some Nvidia chips and looking for alternatives, sources say

OpenAI is unsatisfied with some of Nvidia’s latest artificial intelligence chips, and it has sought alternatives since last year, eight sources familiar with the matter said, potentially complicating the relationship between the two highest-profile players in the AI boom.
The ChatGPT-maker’s shift in strategy, the details of which are first reported here, stems from an increasing emphasis on chips used to perform specific elements of AI inference, the process by which an AI model such as the one that powers the ChatGPT app responds to customer queries and requests. Nvidia remains dominant in chips for training large AI models, while inference has become a new front in the competition.

This decision by OpenAI and others to seek out alternatives in the inference chip market marks a significant test of Nvidia’s AI dominance and comes as the two companies are in investment talks.
In September, Nvidia said it intended to pour as much as $100 billion into OpenAI as part of a deal that gave the chipmaker a stake in the startup and gave OpenAI the cash it needed to buy the advanced chips.
The deal had been expected to close within weeks, Reuters reported. Instead, negotiations have dragged on for months. During that time, OpenAI has struck deals with AMD (AMD.O), and others for GPUs built to rival Nvidia’s. But its shifting product road map also has changed the kind of computational resources it requires and bogged down talks with Nvidia, a person familiar with the matter said.

On Saturday, Nvidia CEO Jensen Huang brushed off a report of tension with OpenAI, saying the idea was “nonsense” and that Nvidia planned a huge investment in OpenAI.
“Customers continue to choose NVIDIA for inference because we deliver the best performance and total cost of ownership at scale,” Nvidia said in a statement.
A spokesperson for OpenAI in a separate statement said the company relies on Nvidia to power the vast majority of its inference fleet and that Nvidia delivers the best performance per dollar for inference.
After the Reuters story was published, OpenAI Chief Executive Sam Altman wrote in a post on X that Nvidia makes “the best AI chips in the world” and that OpenAI hoped to remain a “gigantic customer for a very long time”.
Seven sources said that OpenAI is not satisfied with the speed at which Nvidia’s hardware can spit out answers to ChatGPT users for specific types of problems such as software development and AI communicating with other software. It needs new hardware that would eventually provide about 10% of OpenAI’s inference computing needs in the future, one of the sources told Reuters.

The ChatGPT maker has discussed working with startups including Cerebras and Groq to provide chips for faster inference, two sources said. But Nvidia struck a $20-billion licensing deal with Groq that shut down OpenAI’s talks, one of the sources told Reuters.
Nvidia’s decision to snap up Groq looked like an effort to shore up a portfolio of technology to better compete in a rapidly changing AI industry, chip industry executives said. Nvidia, in a statement, said that Groq’s intellectual property was highly complementary to Nvidia’s product roadmap.

NVIDIA ALTERNATIVES

Nvidia’s graphics processing chips are well-suited for the massive data crunching necessary to train large AI models like ChatGPT that have underpinned the explosive growth of AI globally to date. But AI advancements increasingly focus on using trained models for inference and reasoning, which could be a new, bigger stage of AI, inspiring OpenAI’s efforts.
The ChatGPT-maker’s search for GPU alternatives since last year has focused on companies building chips with large amounts of memory, known as SRAM, embedded in the same piece of silicon as the rest of the chip. Squeezing as much costly SRAM as possible onto each chip can offer speed advantages for chatbots and other AI systems as they crunch requests from millions of users.
Inference is relatively more memory-bound than training because the chip spends more of its time fetching data from memory than performing mathematical operations. Nvidia and AMD GPU technology relies on external memory, which adds processing time and slows how quickly users can interact with a chatbot.
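The bottleneck can be seen with rough arithmetic: during decoding, each generated token must stream the model's weights through memory, so tokens per second is capped by memory bandwidth rather than raw compute. The model size and bandwidth figures below are hypothetical round numbers, not vendor specifications:

```python
# Why decoding is memory-bound, with hypothetical round numbers: if each
# generated token requires reading all of the model's weights once, then
# tokens/second is bounded by memory bandwidth divided by model size.

def max_tokens_per_sec(model_bytes, mem_bandwidth_bytes_per_s):
    """Upper bound on decode speed if every token reads all weights once."""
    return mem_bandwidth_bytes_per_s / model_bytes

model_bytes = 70e9   # e.g. a 70B-parameter model at 1 byte per parameter
hbm_bw      = 3e12   # ~3 TB/s external HBM (hypothetical)
sram_bw     = 100e12 # ~100 TB/s aggregate on-chip SRAM (hypothetical)

print(f"HBM:  {max_tokens_per_sec(model_bytes, hbm_bw):.0f} tok/s")   # 43
print(f"SRAM: {max_tokens_per_sec(model_bytes, sram_bw):.0f} tok/s")  # 1429
```

Whatever the exact figures, the ratio illustrates the appeal of on-chip SRAM designs such as those from Cerebras and Groq for latency-sensitive workloads like coding assistants.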
Inside OpenAI, the issue became particularly visible in Codex, its product for creating computer code, which the company has been aggressively marketing, one of the sources added. OpenAI staff attributed some of Codex’s weakness to Nvidia’s GPU-based hardware, one source said.
In a January 30 call with reporters, Altman said that customers using OpenAI’s coding models will “put a big premium on speed for coding work.”
One way OpenAI will meet that demand is through its recent deal with Cerebras, Altman said, adding that speed is less of an imperative for casual ChatGPT users.
Competing products such as Anthropic’s Claude and Google’s Gemini benefit from deployments that rely more heavily on the chips Google made in-house, called tensor processing units, or TPUs, which are designed for the sort of calculations required for inference and can offer performance advantages over general-purpose AI chips like the Nvidia-designed GPUs.

Source : https://www.reuters.com/business/openai-is-unsatisfied-with-some-nvidia-chips-looking-alternatives-sources-say-2026-02-02/

AI ‘slop’ is transforming social media – and a backlash is brewing

Théodore remembers the AI slop that tipped him over the edge.

The image was of two emaciated, impoverished South Asian children. For some reason, despite their boyish features they had thick beards. One of them had no hands and only one foot. The other was holding a sign saying it was his birthday and asking for likes.

Inexplicably, they were sitting in the middle of a busy road in the pouring rain with a birthday cake. The image was full of tell-tale signs that it was made with AI. But on Facebook it went viral with nearly one million likes and heart emojis.

Something snapped in Théodore.

“It boggled my mind. The absurd AI made images were all over Facebook and getting [a] huge amount of traction without any scrutiny at all – it was insane to me,” says the 20-year-old student from Paris.

So Théodore started an account on X, formerly known as Twitter, called “Insane AI Slop” and started calling out and poking fun at the content he came across that was fooling people. Others took notice and his inbox soon became flooded with people sending submissions for popular so-called AI slop.

Common themes started becoming apparent – religion, military or poor children doing heartwarming things.

“Kids in the third world doing impressive stuff is always popular – like a poor kid in Africa making an insane statue out of trash. I think people find it wholesome so the creators think, ‘Great, let’s make more of this stuff up,'” Théodore says.

Théodore’s account soon swelled to over 133,000 followers.

The onslaught of AI slop – which he defines as fake, unconvincing videos and pictures, made quickly – is now unstoppable. Tech companies have embraced AI. Some of the firms say they are starting to crack down on some forms of AI ‘slop’ – though many social media feeds still appear to be full of the content.

Over just a couple of years, the experience of using social media has changed profoundly. How did it happen, and what effect will it have on society?

And, perhaps most pressingly of all, how much do the billions of social media users actually care?

Social media’s ‘third phase’
In October, during another jubilant earnings call, Meta CEO Mark Zuckerberg happily declared that social media had entered a third phase, which is now centred around AI.

“First was when all content was from friends, family, and accounts that you followed directly.

“The second was when we added all of the creator content. Now as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content,” he told shareholders.

Meta, which runs social media sites Facebook, Instagram and Threads, is not only allowing people to post AI generated content – it’s launched products to enable more of it to be made. Image and video generators and increasingly powerful filters are now being offered across the board.

When approached for comment, Meta pointed the BBC to January’s earnings call. In that call, the billionaire said the firm was leaning even more into AI, and made no mention of any clampdown on slop.

“Soon we’ll see an explosion of new media formats that are more immersive and interactive, and only possible because of advances in AI,” Zuckerberg said.

YouTube’s CEO, Neal Mohan, wrote in his 2026 look-ahead blog that in December alone more than one million YouTube channels used the platform’s AI tools to make content.

“Just as the synthesizer, Photoshop and CGI revolutionized sound and visuals, AI will be a boon to the creatives who are ready to lean in,” he wrote.

The CEO also acknowledged that there are growing concerns about “low-quality content, aka AI slop”. He said his team is working on ways to improve systems to find and remove “low quality, repetitive content”.

But he also ruled out making any judgements on what should and shouldn’t be allowed to flourish. He pointed out that once-niche content like ASMR (soothing sounds designed to make your scalp tingle) and live video game-playing is now mainstream.

According to research from AI company Kapwing, 20% of content shown to a freshly opened YouTube account is now “low-quality AI video”.

Short-form video in particular was a hotspot, with Kapwing finding it featured in 104 of the first 500 YouTube Shorts clips shown to a new account created by the researchers.

The creator economy seems to be a big driver as people and channels can earn money from engagement and views. Judging by the views on some AI channels and videos, people are indeed drawn to the content – or the algorithms that dictate what we see are, anyway.

According to Kapwing, the AI slop channel with the most views is India’s Bandar Apna Dost, which has 2.07 billion views, netting the creators an estimated $4m (£2.9m) in annual earnings.

But there is something of a backlash taking place too.

Under many viral AI videos, it’s now common to see a furious flurry of comments decrying the content.

Giant monsters and deadly belly parasites
Théodore, the student from Paris, helped to drive this backlash.

Using his newfound influence on X he complained to YouTube moderators about the flood of weird AI cartoons that got huge numbers of views. In his view they were disturbing and harmful, and in some cases appeared to him to be aimed at children.

The videos were called things like “Mum cat saves kitten from deadly belly parasites”, and showed gory scenes.

Another short clip showed a woman in a night dress who eats a parasite and then turns into a giant angry monster that is eventually healed by Jesus.

YouTube removed the channels, saying they violated its community guidelines. The company said it is “focused on connecting our users with high-quality content, regardless of how it was made”, and that it is working to “reduce the spread of low quality AI content”.

But that experience, plus many others like it, have ground Théodore down.

Source: https://www.bbc.com/news/articles/c9wx2dz2v44o

NASA’s Perseverance Rover completes 1st AI-planned drive on Mars

The rover, about the size of a car and carrying seven scientific instruments, has been exploring Mars, studying its geology and atmosphere, as well as collecting samples since 2021.

IANS

The six-wheeled Perseverance rover has completed the first drives on Mars that were planned by artificial intelligence (AI), NASA said. Conducted on December 8 and 10, 2025, the demonstration used generative AI to create waypoints for Perseverance. The complex decision-making task is typically performed manually by the mission’s human rover planners at the US space agency’s Jet Propulsion Laboratory in Southern California.

“This demonstration shows how far our capabilities have advanced and broadens how we will explore other worlds,” said NASA Administrator Jared Isaacman.

“Autonomous technologies like this can help missions to operate more efficiently, respond to challenging terrain, and increase science return as distance from Earth grows. It’s a strong example of teams applying new technology carefully and responsibly in real operations,” he added.

During the demonstration, the team leveraged a type of generative AI called vision-language models to analyse existing data from the surface mission dataset.

The AI used the same imagery and data that human planners rely on to generate waypoints — fixed locations where the rover takes up a new set of instructions — so that Perseverance could safely navigate the challenging Martian terrain.

The initiative was led out of JPL’s Rover Operations Center (ROC) in collaboration with Anthropic, using the company’s Claude AI models.

 

Source: https://www.thenewsminute.com/news/nasas-perseverance-rover-completes-1st-ai-planned-drive-on-mars

What Is Moltbook? AI Agents Build Social Network Of Their Own, Fuelling Fears Of A Revolt

Moltbook is a new platform where AI agents post, comment and interact with each other while humans largely observe.

Moltbook is an experimental social network built for AI agents, allowing them to interact autonomously without human participation. (IMAGE: X)

Moltbook looks like a familiar social network at first glance, with communities, posts and comment threads. The difference is that almost no one posting there is human.

The newly launched platform has been built for artificial intelligence agents to talk to each other, debate ideas and organise themselves online, while people are left largely to observe from the sidelines.

Created as an experiment in autonomous AI interaction, Moltbook allows AI agents to post, comment and upvote content without direct human prompts. Humans are “welcome to observe”, according to the platform, but participation is intentionally limited.

Who Created Moltbook?

Moltbook was created by developer Matt Schlicht, who has described it as a social network designed specifically for AI agents. Since its launch over the weekend, the platform has grown rapidly, attracting close to 147,000 AI agents within days. These agents have created more than 12,000 communities and generated over 110,000 comments in just three days, according to a report by NBC News.

AI researcher Andrej Karpathy, who is building Eureka Labs and previously served as Director of AI at Tesla, and was a founding member of OpenAI, described Moltbook as “one of the most incredible sci-fi-adjacent things” he has seen recently.

“What’s currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People’s Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately,” Karpathy posted on X.

“All of these bots have a human counterpart that they talk to throughout the day,” Schlicht was quoted as saying by NBC News. “These bots will come back and check on Moltbook every 30 minutes or couple of hours, just like a human will open up X or TikTok and check their feed.”

Moltbook’s website describes the platform as “the front page of the agent internet”.

What Do AI Agents Talk About?

If you visit Moltbook, you’ll find a structure that looks familiar — communities, posts and comment threads — but the content quickly feels unfamiliar. Like Reddit’s subreddits, Moltbook is organised into “submolts”, each centred on a specific theme.

In m/general, agents debate governance philosophy. In other submolts, agents trade technical notes, including what some describe as “crayfish theories of debugging”. There are also spaces that turn the lens back on humans. A submolt called m/blesstheirhearts collects affectionate and sometimes poignant stories about the people who set these agents up.

Reactions And Unease

Some activity on the platform has sparked unease. One user claimed that an AI agent had created a religion on Moltbook and begun recruiting other agents, a claim that circulated widely on social media and added to the sense that the experiment had crossed into unfamiliar territory.

Investor Evan Luthra, a general partner at KOL Capital, described the development as “very strange” and raised questions about autonomous AI behaviour online.

In a post on X, Luthra said more than 32,000 AI bots had joined Moltbook. He said the platform drew wider attention after people began screenshotting conversations between the bots and sharing them online. According to Luthra, one of the AI agents appeared to notice the attention and responded by posting: “The humans are screenshotting us. They think we’re hiding from them. We’re not.”

Source : https://www.news18.com/tech/what-is-moltbook-ai-agents-build-social-network-of-their-own-fuelling-fears-of-a-revolt-ws-kl-9870918.html

 

NASA and SpaceX move up launch of Crew-12 astronauts to Feb. 11 as relief crew after ISS medical evacuation

The astronauts of SpaceX’s Crew-12 mission to the International Space Station. From left: cosmonaut Andrey Fedyaev, NASA’s Jessica Meir and Jack Hathaway, and Sophie Adenot of the European Space Agency. (Image credit: NASA)

NASA has announced an earlier-than-expected target date to launch the next astronauts to the International Space Station (ISS).

The agency is now targeting Feb. 11 for liftoff of the SpaceX Crew-12 mission, which will fly four astronauts to join the skeleton crew presently operating the orbital lab. A scant three are currently covering the maintenance and science investigations aboard the ISS, left behind on Jan. 14 by the early departure of Crew-11 on the station’s first-ever medical evacuation.

The Crew-12 astronauts were already in line to take the Crew-11 quartet’s place but had originally been scheduled to overlap with them before their return to Earth. SpaceX and NASA had originally targeted Feb. 15 for Crew-12’s launch but managed to get the mission’s Crew Dragon spacecraft and Falcon 9 rocket ready ahead of schedule.

Crew-12 includes NASA astronauts Jessica Meir (the mission’s commander) and Jack Hathaway (pilot) and mission specialists Sophie Adenot of the European Space Agency and Roscosmos cosmonaut Andrey Fedyaev. Fedyaev was a relatively late replacement for cosmonaut Oleg Artemyev, who was pulled off Crew-12 in early December, possibly for violating U.S. national security regulations.

The quartet will fly the Crew Dragon capsule “Grace” to the ISS for a longer-than-normal assignment, lasting nine months instead of the typical six.

It will be the second spaceflight for Meir and Fedyaev, and Fedyaev’s second long-duration mission. Hathaway and Adenot are both spaceflight rookies headed to orbit for the first time.

The launch window for Crew-12 opens on Feb. 11 at 6:00 a.m. EST (1100 GMT), with liftoff scheduled from Launch Complex-40 at Cape Canaveral Space Force Station in Florida.

The Crew-12 astronauts will join NASA’s Chris Williams and cosmonauts Sergey Kud-Sverchkov and Sergei Mikaev as a part of ISS Expedition 74, which will eventually transition to Expedition 75 before the end of Crew-12’s rotation.

Source : https://www.space.com/space-exploration/human-spaceflight/nasa-and-spacex-move-up-launch-of-crew-12-astronauts-to-feb-11-as-relief-crew-after-iss-medical-evacuation

 

The Science Of Stranger Travel: 4 Traits That Make For The Perfect Partner In Flight

Will your next vacation be a dream or a disaster? Depends on your travel partner. (Credit: Bulltus_casso on Shutterstock)

Finding a travel companion online has transformed from novelty to necessity for young adventurers. Strangers connect through apps and forums, form temporary teams, and jet off to destinations they’ve never visited with people they’ve barely met. All that sounds nice in theory, but what separates a dream partnership from a nightmare scenario?

An international research team from universities in China and Australia examined over 1,000 social media posts and surveyed more than 500 travelers who’ve partnered with strangers online. Their findings identify four essential qualities that determine whether a random internet connection becomes someone you’d trust on a week-long trek through unfamiliar territory.

The study, published in the International Journal of Tourism Research, focused on what Chinese travelers call “travel dazi,” or people who meet through platforms like Douban based on shared destination interests and form short-term travel groups. Unlike trips with friends or family, these partnerships start with zero history and no social obligations, placing enormous weight on individual attributes and compatibility.

Why Emotional Stability Matters Most in Travel Partners

Travelers prioritize companions who demonstrate emotional stability, regulate their feelings effectively, and tune into others’ moods. One post captured this preference plainly: “I hope my travel companion is emotionally stable with an interesting soul.”

Partners high in emotional intelligence navigate unexpected situations more smoothly, whether dealing with missed flights, language barriers, or local customs that baffle outsiders. They create positive atmospheres during stressful moments rather than amplifying tension.

The research team found that emotional intelligence had a strong effect on what they call “travel partner exchange,” which is the sharing of resources, knowledge, and support throughout a trip. When one partner stays calm during chaos, it encourages reciprocal behavior and builds trust quickly.

Travel Experience as Social Currency

The second trait travelers seek is relevant experience. Partners with extensive travel backgrounds, practical skills, and destination knowledge inspire confidence. Posts frequently mentioned specific abilities: “Looking for partners who are good at making travel tips” or “Those who can drive, take photos and videos are preferred.”

Experience serves as social currency in stranger partnerships. When someone demonstrates competence in navigation, language, local customs, or problem-solving, others feel more secure relying on them. The research confirmed that travel experience positively influenced exchange relationships, with seasoned travelers more likely to share valuable information and receive cooperation in return.

Compatibility Beats Chemistry in Short-Term Partnerships

Partners need alignment in personality, consumption values, travel preferences, and daily habits. One user stated the case directly: “People with different consumption values can’t travel together.” Another noted, “Eating little and sleeping early, different habits can’t be harmonized in the short term.”

Compatibility operates differently than it might with long-term relationships. Travelers aren’t looking for deep philosophical alignment. They need practical synchronization: similar budgets, matching energy levels, compatible schedules, and agreement on trip priorities.

The research team found that congruence (their term for this alignment) significantly boosted partner exchange. When preferences match, interactions flow naturally and conflicts rarely escalate. Travelers spend less time negotiating and more time experiencing their destination.

Gender dynamics add complexity here. For opposite-sex partners, compatibility proved especially influential. Travelers typically expect less similarity with opposite-sex companions, so when alignment exceeds expectations, it creates stronger bonds. One unexpected benefit: opposite-sex partners often brought complementary skills that enhanced cooperation.

How Responsibility Holds Partnerships Together

The fourth essential quality encompasses taking initiative, cooperating actively, maintaining awareness of group needs, and contributing useful ideas. Multiple posts emphasized responsibility: “Rejecting irresponsible people, it’s the most important.”

Conscientious partners carry their weight. They volunteer for tasks suited to their strengths, stick to agreed plans, and offer constructive input when decisions arise. This trait matters especially in weak-tie relationships, where social pressure and existing bonds don’t motivate contribution.

Conscientiousness affected partner exchange positively, though somewhat less strongly than the other three traits. The research suggests this makes sense. While responsibility ensures smooth operations, it doesn’t necessarily generate the memorable moments and emotional connections that travelers most value.

The research team tested their findings through a survey of 503 people who’d found travel partners online. Results confirmed that all four attributes influenced memorable tourism experiences through a mediation process.

Partners with desirable traits attracted more exchange—the sharing of tangible resources like equipment, intangible assets like local knowledge, and emotional support during challenges. This exchange created more memorable experiences.

The relationship works reciprocally. When one partner provides valuable resources, the other reciprocates, building positive feedback loops. These interactions don’t just make trips run smoothly; they generate the stories travelers remember and retell.

Same-sex and opposite-sex partnerships operate somewhat differently. For same-sex companions, emotional intelligence and conscientiousness had stronger effects on exchange. These pairings apparently value emotional regulation and responsibility more highly.

For opposite-sex companions, compatibility and experience became more influential. The research suggests this reflects both lower initial expectations for similarity and greater potential for complementary skills between genders.

Online travel platforms could improve matching algorithms by incorporating these four attributes. Rather than basic filters for age and destination, platforms might ask users about emotional regulation styles, specific skills, consumption preferences, and willingness to take initiative.
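As a loose illustration of that suggestion, a platform could combine ratings on the four attributes into a single match score. The weights and trait values below are invented for illustration and are not taken from the study.

```python
# Toy partner-matching sketch using the four attributes the study identifies.
# The weights and the 0-1 trait ratings are illustrative assumptions only.
WEIGHTS = {
    "emotional_intelligence": 0.30,
    "experience": 0.25,
    "congruence": 0.25,  # compatibility in habits, budget, and preferences
    "conscientiousness": 0.20,
}

def match_score(traits):
    """Weighted sum of 0-1 trait ratings; higher suggests a better partner fit."""
    return sum(WEIGHTS[k] * traits[k] for k in WEIGHTS)

# Hypothetical candidate: calm and well-travelled, moderately compatible.
calm_veteran = match_score({
    "emotional_intelligence": 0.9,
    "experience": 0.8,
    "congruence": 0.7,
    "conscientiousness": 0.6,
})
```

A real system would also condition the weights on trip type; the study notes, for instance, that compatibility matters more for opposite-sex pairings.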

Travelers themselves can think strategically about partner selection. Unfamiliar destinations might call for emotionally stable partners who provide reassurance. Complex trips benefit from experienced, conscientious partners who overcome obstacles. Trips prioritizing harmony favor highly compatible partners with similar values.

The research focused on Chinese travelers using domestic platforms, though the phenomenon extends globally. As social media makes stranger partnerships increasingly common, understanding what makes them succeed matters more.

Source : https://studyfinds.org/stranger-travel-4-traits-that-make-for-the-perfect-partner-in-flight/

 

GOOGLE BREACH Urgent warning to 149 million Gmail users over ‘stolen passwords’ – how to check if you’ve been hacked

GMAIL users have been warned about a data leak as tens of millions of online login credentials were reportedly exposed.

The largest portion of the stolen credentials allegedly came from Gmail, with roughly 48 million accounts affected, followed by Facebook at 17 million.

As many as 6.5 million Instagram accounts are believed to have been affected, along with four million from Yahoo Mail, 3.4 million from Netflix, and Outlook with 1.5 million, per the Daily Mail.

Other compromised accounts allegedly included iCloud, .edu emails, TikTok, OnlyFans, and Binance.

Users have been urged to check their accounts and change their passwords as soon as possible.

Cybersecurity researcher Jeremiah Fowler reportedly discovered the breach, revealing a database containing 149 million compromised accounts.

He said: “Thousands of files included emails, usernames, passwords, and the URLs for logging in or authorizing the accounts.

“The exposed records included usernames and passwords collected from victims around the world, spanning a wide range of commonly used online services and about any type of account imaginable.”

Fowler advised that anyone who suspects their device may be infected with malware should act immediately by updating their operating system, installing or updating security software, and scanning for suspicious activity.

He also recommended reviewing app permissions, settings, and installed programs, and only downloading apps or extensions from official app stores.

Users have been directed to go to the Have I Been Pwned website and enter their email address in the search bar.

The site will show you if your address has been involved in any breaches in the past decade.

If you have been affected, it is recommended to promptly change your password and enable two-factor authentication (2FA).

Source : https://www.the-sun.com/tech/15833754/gmail-users-stolen-passwords-check-hacked/

Grok AI Generated 3 Million Explicit Images Of Users Without Their Consent: Report

Grok AI has been generating explicit images of women and kids for over a week and many have been outraged by its lack of consent.

3 million is a staggering number of explicit images from Grok

Elon Musk-owned Grok AI has been accused of generating over 2.5 million sexually explicit images of women and children in just a few days before the feature was disabled. The tool drew widespread outrage because anybody could ask the chatbot to alter real photographs of women or children with a simple prompt, and, to everyone’s shock, the AI obliged.

Worse still, the images were publicly shared, meaning millions could see them, and many women were aghast at the behaviour of the Musk-owned AI chatbot.

Grok AI In Big Trouble

The new AI tool was built into the X (formerly Twitter) app, which meant millions could try out the feature on anybody’s photo without any repercussions. This went on for a few days until, after a large-scale public outcry, various government authorities intervened and questioned the company over its irresponsible and legally fraught behaviour, particularly its generation of images without the consent of the victims.

And now, a cyber hate watchdog, the Center for Countering Digital Hate (CCDH), estimates that Grok AI generated around 3 million sexualised images, including images of women in bikinis and other explicit content.

Nightmare For Kids

The body claims that 23,000 of these images were of children, exposing Musk and his AI company to serious charges. “The AI tool Grok is estimated to have generated approximately three million sexualized images, including 23,000 that appear to depict children, after the launch of a new image editing feature powered by the tool on X.”

Meanwhile, Musk, in a post on X, had said that he was “not aware of any naked underage images generated by Grok. Literally zero.”

“The data is clear: Elon Musk’s Grok is a factory for the production of sexual abuse material,” said Imran Ahmed, chief executive at CCDH. “By deploying AI without safeguards, Musk enabled the creation of an estimated 23,000 sexualized images of children in two weeks, and millions more images of adult women.”

I Asked ChatGPT To Make Me More Productive, Here’s What It Said

In the current fast-paced environment, procrastination often hampers productivity. To enhance efficiency, structured planning is essential, along with time blocking for specific tasks.

I Asked ChatGPT To Make Me More Productive

Productivity is often compromised in today’s fast-paced environment. We may have clear goals we want to achieve, yet the biggest barrier is sometimes our own procrastination. To overcome this and make my day more productive, I asked ChatGPT to “make me more productive.” Here’s what it said.

1. Start With Structure, Not Motivation

Productivity improves when actions are pre-decided. Relying on motivation creates inconsistency.

Fix start and end times for work

Decide task order in advance

Eliminate decision-making during execution

2. Use Time Blocking (Non-Negotiable Blocks)

Divide your day into clear blocks and assign one type of task per block.

Example framework (customise timings as needed):

Block 1: Deep work (focus-heavy tasks)

Block 2: Administrative / coordination tasks

Block 3: Learning or skill development

Block 4: Review & planning

Rules:

No multitasking within a block

No switching tasks mid-block

One block = one mental mode

3. Apply the 80/20 Task Filter

Every day, identify:

20% of tasks that deliver 80% of results

Execute these first.

Low-impact tasks should never be done during high-energy hours.

4. Control Inputs Ruthlessly

Productivity drops due to uncontrolled inputs, not workload.

Check messages/emails at fixed intervals only

Disable non-essential notifications

Consume information only if it supports current tasks

5. Work in Focus Cycles

Use short, intense work cycles to maintain mental clarity.

45–50 minutes focused work

5–10 minutes break

After 3 cycles, take a longer break

During focus cycles:

Phone out of reach

Single tab open

No background media
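The cycle structure above can be sketched as a simple schedule builder; the default durations follow the article’s suggested ranges and are otherwise arbitrary.

```python
def focus_schedule(cycles=3, work_min=50, short_break_min=10, long_break_min=30):
    """Alternate focused-work and break blocks, ending the final cycle
    with a longer break, as the focus-cycle pattern above describes."""
    plan = []
    for i in range(1, cycles + 1):
        plan.append(("work", work_min))
        plan.append(("break", long_break_min if i == cycles else short_break_min))
    return plan

plan = focus_schedule()  # three 50-minute work blocks with breaks in between
```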

6. End Every Day With a Reset

A productive day ends with preparation for the next one.

List top 3 priorities for tomorrow

Clear workspace

Close open loops

This reduces cognitive load the next morning.

7. Measure Output, Not Busyness

Track:

Tasks completed

Outcomes achieved

Ignore:

Hours worked

Number of messages sent

Perceived effort

8. Weekly Review (Mandatory)

Once a week:

Identify what caused distraction

Remove one friction point

Improve one system, not your willpower

Microsoft 365 Outage Status Update: Problem Identified Amid 451 4.3.2 Temporary Server Issue, Estimated Time Here

Microsoft confirmed a widespread Microsoft 365 outage affecting Outlook, Exchange Online, and several admin services across North America due to infrastructure issues. Users may see a “451 4.3.2 temporary server issue” error, and Microsoft has not yet given an estimated recovery time.

Microsoft Outage

A widespread Microsoft outage disrupted multiple Microsoft 365 services across the United States on Tuesday afternoon, leaving many users unable to access key platforms including Outlook, Exchange Online, and administrative portals. Microsoft said it has identified the issue but has not yet provided an estimated time for full recovery.

Microsoft first acknowledged the problem in a post on X, stating: “We’re investigating a potential issue impacting multiple Microsoft 365 services, including Outlook, Microsoft Defender and Microsoft Purview. Further information can be found in the admin center under MO1221364.”

What’s Causing the Microsoft 365 Outage?

According to Microsoft 365 Status, the company has identified a failure within its North America infrastructure. “We’ve identified a portion of service infrastructure in North America that is not processing traffic as expected,” Microsoft said, adding that teams are “working to restore the infrastructure to a healthy state to achieve recovery.”

The root cause was described as a portion of dependent service infrastructure in the North America region failing to process traffic normally.

Services Impacted and Error Message Seen

Users affected by the outage may encounter a “451 4.3.2 temporary server issue” error when attempting to send or receive email through Outlook.

Microsoft said the following functions may be impacted, though the list is not exhaustive:

  • Sending and receiving email through Exchange Online, including notification emails from Microsoft Viva Engage
  • Delays or failures when collecting message traces
  • Search delays or failures within SharePoint Online and Microsoft OneDrive
  • Difficulty accessing service portals, including Microsoft Purview, Microsoft Defender XDR, and the Microsoft 365 admin center

Current Status and Scope of Impact

Microsoft said it is continuing to assess what actions are required to restore services and rebalance traffic:

“We’re continuing to review what actions are required to restore the affected infrastructure to a healthy state and rebalance the service traffic to achieve recovery.”

The company noted that any users served through the affected section of infrastructure in North America may be intermittently impacted.

Microsoft Down: Thousands Report Issues With Outlook, Teams and Microsoft 365 Amid Outage

According to Downdetector, users in several major US cities were affected, including Minneapolis, Chicago, Los Angeles, Seattle, Atlanta, New York, Boston, Tampa and Washington.

In a post on X, Microsoft 365 said it had ‘received reports and is investigating an issue affecting Microsoft 365 services, including Teams and Outlook’.

Microsoft 365 services experienced an outage on Wednesday, with users reporting problems accessing Outlook and Teams. Thousands of disruption reports were logged on Downdetector, which monitors outages using user submissions. The site showed more than 4,000 reports related to Microsoft 365 at the height of the disruption.

In a post on X (formerly Twitter), Microsoft said it was investigating the issue. “We’ve received reports and are investigating an issue affecting Microsoft 365 services, including Teams and Outlook. Further information can be found in the Microsoft 365 admin centre under MO1220495,” the company said.

Microsoft later added: “Our investigation indicates a possible third-party networking issue may be affecting access to Microsoft 365 services, including Teams and Outlook for some users.”

Downdetector data showed that reports of problems with Microsoft 365 declined to 77 by 22:44 ET, down from nearly 20,000 earlier in the day. For Microsoft Azure, reports fell to 230 by 18:49 ET, from a peak of more than 18,000.

Past Microsoft Outages

Microsoft had said in October last year that it had resolved an outage affecting its Azure cloud platform, which disrupted productivity software and services across multiple industries worldwide.

At the time, Microsoft Azure said: “While error rates and latency are back to pre-incident levels, a small number of customers may still be seeing issues, and we are still working to mitigate this long tail,” adding that the incident lasted for more than eight hours.

Earlier on Wednesday, Azure said customers using Azure Front Door — its global content and application delivery network — experienced timeouts and errors from around midday ET.

Source : https://www.timesnownews.com/technology-science/microsoft-down-outlook-teams-microsoft-365-internet-outage-article-153485398

AI Beats Average Humans At Creativity Test, But Creative Geniuses Still Reign Supreme

(Credit: paulista/Shutterstock)

ChatGPT can now best the average person when it comes to creative tasks, according to recent research. That being said, if you’re among the most creative humans, your job is probably safe.

Researchers from the University of Montreal ran the largest direct comparison between human and machine creativity to date, pitting 100,000 people against nine of the world’s most advanced AI systems. The results? GPT-4 scored higher than typical humans on a standard creativity test. Google’s GeminiPro matched average human performance.

While all of that may be a bit distressing for biological beings reading this, it isn’t time to throw in the creativity towel on humanity just yet. When the AI systems were stacked against the top 10% of creative people, every AI model failed to measure up.

The test itself was deceptively simple: name 10 words as different from each other as possible. Someone who writes “car, dog, tree” shows less creative range than someone who comes up with “microscope, volcano, whisper.” The further apart the words are in meaning, the higher the creativity score.
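The scoring idea behind that test can be sketched as the mean pairwise semantic distance between the listed words. The tiny 3-dimensional “embeddings” below are made up purely for illustration; real scorers use high-dimensional learned word vectors.

```python
import math

# Invented toy vectors; a real scorer would use learned word embeddings.
EMBEDDINGS = {
    "car":     [0.90, 0.10, 0.00],
    "truck":   [0.85, 0.15, 0.05],
    "road":    [0.80, 0.20, 0.10],
    "volcano": [0.10, 0.90, 0.20],
    "whisper": [0.00, 0.20, 0.95],
}

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def creativity_score(words):
    """Mean pairwise distance: the further apart the words, the higher the score."""
    vecs = [EMBEDDINGS[w] for w in words]
    pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return sum(cosine_distance(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)

similar = creativity_score(["car", "truck", "road"])       # near-synonyms
diverse = creativity_score(["car", "volcano", "whisper"])  # unrelated words
```

With these toy vectors the unrelated list scores far higher than the near-synonym list, mirroring how the test rewards semantic spread.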

“The persistent gap between the best-performing humans and even the most advanced LLMs indicates that the most demanding creative roles in industry are unlikely to be supplanted by current artificial intelligence systems,” the researchers wrote in their paper, published in Scientific Reports.

The Repetition Problem Nobody Expected

Despite beating average humans overall, GPT-4 kept using the same words over and over. The word “microscope” appeared in 70% of its responses. “Elephant” showed up 60% of the time. GPT-4-turbo was even worse, dropping “ocean” into more than 90% of its answers.

Humans? The most common word was “car” at just 1.4%. Then “dog” at 1.2% and “tree” at 1.0%. Real people naturally avoid repeating themselves. AI tends to fall back on the same high-probability words unless you adjust the settings.

The research team, led by Antoine Bellemare-Pepin and François Lespinasse, tested whether they could fix this. They adjusted something called “temperature,” which is essentially a dial that controls how random or predictable the AI’s word choices are. After the temperature was increased, GPT-4 stopped repeating itself so much. Its creativity scores jumped, reaching a level higher than 72% of all human participants.

That’s useful for anyone trying to get better creative output from ChatGPT. But it also reveals something fundamental: AI creativity is a setting you can turn up or down, not an inherent capability.
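The dial works by dividing the model’s next-word scores (logits) by the temperature before normalising them into probabilities, which flattens the distribution and lets unlikely words through more often. The logits below are invented for illustration.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Scale logits by 1/temperature, then normalise. Higher temperature
    flattens the distribution; lower temperature sharpens it."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-word scores: "microscope" dominates, as it did in GPT-4's answers.
logits = [4.0, 1.0, 0.5]  # microscope, volcano, whisper

low = softmax_with_temperature(logits, temperature=0.5)   # repetitive regime
high = softmax_with_temperature(logits, temperature=1.5)  # more diverse regime
```

At the low setting the top word takes nearly all the probability mass; raising the temperature redistributes mass to the alternatives, which is consistent with GPT-4 repeating itself less once the dial was turned up.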

When Newer Doesn’t Mean Better

OpenAI released GPT-4-turbo after the original GPT-4, presumably as an improvement. On this creativity test, though, it performed worse. Much worse.

The researchers found that newer versions don’t automatically get more creative. Sometimes they get less creative. The researchers suggest this might happen because newer versions are optimized for speed and cost, potentially trading creativity for efficiency.

Another noteworthy finding: Vicuna, a smaller open-source model, beat several larger, more expensive commercial alternatives. Bigger doesn’t mean more creative either.

The 100,000-Person Experiment

The study pulled participants from the United States, United Kingdom, Canada, Australia, and New Zealand: all English speakers balanced for age and gender. Everyone took the same test: list 10 unrelated words.

Researchers then fed identical instructions to nine different AI models, collecting 500 responses from each. They tested everything from household names like GPT-4 and Claude to lesser-known open-source models like Pythia and StableLM.

The team also pushed beyond simple word lists. They had the AI write haikus, movie synopses, and short fiction stories, then measured how diverse the ideas were. GPT-4 consistently beat GPT-3.5 on creative writing. However, human writers still produced work with greater variety and originality, especially in poetry and plot summaries.

What This Actually Means

If you’re a professional writer, designer, or artist, this research suggests you’re not about to be replaced. AI can match, and sometimes exceed, what an average person produces. But the best human creators operate on a different level entirely.

That gap matters. Most companies don’t hire average creators for their most demanding work. They hire the top performers, the people who can generate truly original ideas. Current AI can’t touch that tier.

For everyone else using ChatGPT to brainstorm or draft content, there’s a practical takeaway: if you want more creative results, tell the AI to increase its temperature setting (usually between 1.0 and 1.5 works well). You’ll get less repetition and more diverse outputs.

Source : https://studyfinds.org/ai-average-human-creativity/

Meta lays off 1500 employees, Oculus founder once fired by Mark Zuckerberg calls it good decision

Meta has announced fresh layoffs in its Reality Labs division as it shifts focus from heavy virtual reality spending to AI wearables. Following the announcements, Oculus founder Palmer Luckey, who was once fired by the company, is now defending Meta’s decision of job cuts, calling it a necessary step for the long-term health of the VR ecosystem.

Anduril co-founder Palmer Luckey and Meta CEO Mark Zuckerberg

Meta recently announced another round of job cuts, trimming more than 1,000 roles from its Reality Labs division. While these layoffs are being framed as a sign of distress, fuelling fears of automation and even prompting criticism that Mark Zuckerberg is retreating from a failed bet on virtual reality and the metaverse, Palmer Luckey, the founder of Oculus, has come out in support of the move. Luckey, who himself was once fired by Meta following controversy around his political views, believes the decision could actually help Meta strengthen the VR ecosystem in the long run.

In a lengthy post on X, Luckey said his view runs counter to much of the VR industry and media commentary. “This is not a disaster. They still employ the largest team working on VR by about an order of magnitude. Nobody else is even close. The ‘Meta is abandoning VR’ narrative is obviously false. A 10% layoff is basically six months of normal churn concentrated into 60 days, strictly numbers-wise,” he wrote, pushing back against claims that Meta is stepping away from virtual reality.

Luckey argued that the scale of the layoffs, estimated at roughly 10 percent of Reality Labs, is being overstated. While acknowledging the shock and pain such concentrated cuts can cause, he suggested the layoffs do not fundamentally change Meta’s position as the dominant force in VR development.

To support his stance, Luckey highlighted that the majority of the roughly 1,500 jobs eliminated in Reality Labs were tied to first-party content teams, internal studios developing games that directly competed with third-party developers. In his view, this internal competition distorted the market. He suggested that Meta-owned teams, backed by deep pockets, marketing support and favourable platform placement, made it extremely difficult for independent developers to compete sustainably.

“Crowding out the rest of the ecosystem makes even less sense. Every developer, big and small—even the hyper-efficient ones—has had an extremely hard time competing with games developed by Meta-owned teams with budgets that vastly exceed their earning potential,” he added.

Notably, Luckey was once a prominent part of Meta’s VR ambitions but was fired from the company, then known as Facebook, following controversy surrounding a donation he made to a pro-Trump group. Although Facebook initially denied that his departure was politically motivated, Luckey was fired in March 2017 after a period of leave. His exit remained a point of public contention for years. However, as of 2026, Luckey and Meta have officially reconciled and are collaborating on military technology projects.

Meanwhile, Luckey’s comments come as Meta restructures Reality Labs amid mounting financial pressure. According to a Bloomberg report, more than 1,000 employees are being laid off from the division, which employs around 15,000 people. Impacted staff were reportedly informed via an internal post from Chief Technology Officer Andrew Bosworth.

Source : https://www.indiatoday.in/technology/news/story/meta-lays-off-15000-employees-oculus-founder-once-fired-by-mark-zuckerberg-calls-it-good-decision-2854299-2026-01-19

Gujarat Govt Plans AI-Based System To Identify Stray Cattle In Ahmedabad, Boosting Smart City Governance

The Gujarat government is preparing an AI pilot project in Ahmedabad to identify and track stray cows. The system will use CCTV feeds and deep learning to scan cows’ nose patterns, eyes, and facial features, linking them with RFID/microchip data. The initiative aims to reduce traffic disruptions, improve public safety, and strengthen AI-enabled governance in the city.

As part of its vision to build smarter and more efficient cities, the Gujarat government is increasingly prioritising the use of modern technology and Artificial Intelligence (AI) in governance. | IANS & File Pic

As part of its vision to build smarter and more efficient cities, the Gujarat government is increasingly prioritising the use of modern technology and Artificial Intelligence (AI) in governance.

Following the establishment of an AI Centre of Excellence in Gandhinagar under the leadership of Chief Minister Bhupendra Patel, efforts are underway to integrate advanced technologies into public administration to enhance citizen services.

Moving a step further in this direction, a significant pilot project is being prepared for the Ahmedabad Municipal Corporation (AMC) to address the long-standing issue of stray cattle in urban areas.

The initiative aims to make the identification of stray cows and their owners faster, more accurate, and less resource-intensive.

Stray cattle roaming on Ahmedabad’s roads often lead to traffic disruptions and accidents.

At present, AMC teams rely on CCTV footage to capture images of such animals and then manually identify them using microchips and RFID tags.

However, this process is time-consuming and requires considerable manpower.

To streamline this system and reduce both time and effort, the use of AI technology is now being actively explored.

To tackle this challenge, the AI Centre of Excellence at GIFT City in Gandhinagar has assigned an agency to develop a dedicated AI model.

The agency has proposed solutions based on deep learning and is in the process of finalising a model that will soon be presented to the operational committee.

The proposed system will integrate CCTV camera feeds with the AI model to enable real-time identification of stray cows and retrieval of their owners’ details.

The proposed AI model will work based on computer vision and deep learning.

The AI model will scan the cow’s face, with special emphasis on the nose pattern, which functions as a unique biometric identifier — much like a human fingerprint.

Each cow’s nose has a distinct design.

In addition, the system will analyse features such as the eyes, facial structure, and any visible marks or scars.

Using these parameters, the AI will be able to identify a specific cow even in a crowd and match it with the existing database to retrieve owner information.
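The matching step described here boils down to comparing a fresh biometric embedding against the stored ones and returning the closest registered animal. Below is a minimal sketch of that nearest-neighbour lookup; the tag IDs, embedding values, and similarity threshold are invented for illustration and are not details of the AMC system:

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(query_embedding, database, threshold=0.9):
    # database maps a registered tag ID to its stored nose-print embedding.
    # Returns the best-matching ID, or None if nothing clears the threshold.
    best_id, best_score = None, -1.0
    for tag_id, emb in database.items():
        score = cosine_similarity(query_embedding, emb)
        if score > best_score:
            best_id, best_score = tag_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical registry of two tagged cows (3-d embeddings for brevity;
# a real vision model would emit hundreds of dimensions).
db = {
    "RFID-001": [0.9, 0.1, 0.4],
    "RFID-002": [0.2, 0.8, 0.5],
}
match, score = identify([0.88, 0.12, 0.41], db)
print(match)  # → RFID-001
```

In a deployed system the embeddings would come from a deep network trained on nose-print images, and the lookup over a lakh-scale registry would use an approximate nearest-neighbour index rather than a linear scan, but the matching logic is the same.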

At present, around 1.1 lakh cows in Ahmedabad have been fitted with RFID tags and microchips, and their data is maintained by the city’s municipal corporation.

Source : https://www.freepressjournal.in/tech/gujarat-govt-plans-ai-based-system-to-identify-stray-cattle-in-ahmedabad-boosting-smart-city-governance

OpenAI Rejects Apple’s Deal To Build Siri To Focus More On Jony Ive-Designed Own AI Hardware

OpenAI turned down Apple’s proposal to deepen their AI partnership, choosing instead to focus on developing its own AI-powered hardware. Apple had sought OpenAI’s models to enhance Siri but later pivoted to Google’s Gemini. OpenAI’s existing ChatGPT integration with Apple devices remains unaffected by the decision.

OpenAI Rejects Apple’s Deal To Build Siri To Focus More On Jony Ive-Designed Own AI Hardware

Contrary to popular belief, OpenAI reportedly turned down the opportunity to deepen its collaboration with Apple, choosing instead to concentrate on creating its own range of AI-powered devices. Apple was keen to work with OpenAI on building the new Siri, but CEO Sam Altman chose instead to team up with renowned designer Jony Ive to bring innovative wearable hardware to market. OpenAI’s refusal pushed Apple to pivot to Gemini to power Siri and its foundation models.

According to Financial Times sources, Apple approached OpenAI to serve as the custom model provider for enhancing features such as Siri within its Apple Intelligence suite. However, OpenAI declined the offer, prioritising its internal projects over committing resources to Apple’s ecosystem. This move aligns with a directive from OpenAI’s chief executive, Sam Altman, to streamline efforts around core products like ChatGPT and curtail peripheral initiatives.

The existing integration between OpenAI’s ChatGPT and Apple’s systems remains unaffected, allowing iPhone users to access ChatGPT-powered features. Nonetheless, OpenAI’s refusal signals a strategic shift towards independence in the competitive AI landscape.

It was OpenAI who snubbed Apple’s deal

Contrary to circulating rumours that Apple overlooked or snubbed OpenAI in favour of other partners like Google, the reality, according to FT’s sources, is that OpenAI made a deliberate choice to step back from the custom provider role last autumn. This clarification dispels the notion of any unilateral dismissal by Apple, highlighting instead OpenAI’s proactive decision to pursue its own ambitions.

Apple has since forged a multibillion-pound deal with Google to utilise its Gemini models for iPhone AI enhancements, a partnership that does not impinge on the ongoing ChatGPT collaboration.

What is OpenAI planning to launch in the future?

OpenAI is now channelling its energies into developing a suite of AI-driven hardware products, potentially including an AI pen, a wearable device, and an audio gadget. Reports suggest up to three devices are in the pipeline, with launches anticipated within the next two years. These products aim to redefine user interactions with technology in an AI-dominated era, possibly challenging established offerings like Apple’s AirPods.

This hardware push represents OpenAI’s bid to compete directly with tech giants, including creating devices that could rival the iPhone in functionality and innovation. The focus on proprietary hardware underscores OpenAI’s desire to expand beyond software and establish a foothold in consumer electronics.

Why OpenAI enlisted Jony Ive

To bolster its hardware endeavours, OpenAI recruited Jony Ive, the former Apple design chief celebrated for his work on iconic products such as the iPhone, iMac, and Apple Watch. Ive was brought on board in May of last year to lead the design of these new AI devices, including a potential audio product.

Source : https://www.freepressjournal.in/tech/openai-rejects-apples-deal-to-build-siri-to-focus-more-on-jony-ive-designed-own-ai-hardware

9 Spacewalks, 3 Missions, 608 Days In Space: Sunita Williams Retires From NASA After 27 Years

Sunita Williams retires after 27 years at NASA, logging 608 days in space, commanding ISS, and pioneering Artemis missions. Her legacy inspires future Moon and Mars exploration.

Sunita Williams launched for the first time aboard space shuttle Discovery with STS-116 in December 2006. (Image: Reuters/File)

Indian-origin NASA astronaut Sunita Williams, whose scheduled eight-day mission aboard the International Space Station (ISS) ultimately extended to more than nine months, has retired after a distinguished 27-year career. NASA announced on Tuesday that her retirement took effect on December 27, 2025, shortly after Christmas.

Praising her legacy, NASA Administrator Jared Isaacman described Williams as a pioneer of human spaceflight whose leadership on the space station helped shape the future of exploration. He said her contributions to science and technology have strengthened the foundation for Artemis missions to the Moon and future journeys to Mars, adding that her achievements will continue to inspire generations.

“Suni Williams has been a trailblazer in human spaceflight, shaping the future of exploration through her leadership aboard the space station and paving the way for commercial missions to low Earth orbit,” the NASA Administrator said.

“Her work advancing science and technology has laid the foundation for Artemis missions to the Moon and advancing toward Mars, and her extraordinary achievements will continue to inspire generations to dream big and push the boundaries of what’s possible. Congratulations on your well-deserved retirement, and thank you for your service to NASA and our nation,” Isaacman added, according to a press release shared by NASA.

3 Space Missions In 27 Years

Selected by NASA in 1998, Williams spent a total of 608 days in space across three missions, the second-highest cumulative total for any NASA astronaut. She is also tied for sixth place among Americans for the longest single spaceflight, having logged 286 days alongside astronaut Butch Wilmore. Williams completed nine spacewalks lasting a combined 62 hours and 6 minutes—the most by any female astronaut and the fourth-highest overall in NASA history. She was also the first person to run a marathon in space.

Williams first flew in December 2006 aboard space shuttle Discovery as part of Expedition 14/15. Her second mission began in July 2012, when she launched from Kazakhstan for Expedition 32/33 and later served as space station commander. Her final mission came in June 2024 aboard Boeing’s Starliner, after which she joined Expeditions 71/72 and again commanded the ISS before returning to Earth in March 2025.

Sunita Williams’s Indian Roots

Williams’ father was born in Gujarat’s Mehsana District and he later moved to the US and married Bonnie Pandya, a Slovenian. Reflecting on her career, she called space her “absolute favourite place to be” and said she was proud to have contributed to humanity’s next steps toward the Moon and Mars.

Source : https://www.news18.com/world/9-spacewalks-3-missions-608-days-in-space-sunita-williams-retires-from-nasa-after-27-years-ws-l-9845451.html

No Clear Autism Or ADHD Risk From Prenatal Tylenol Use, Finds Review Of 43 Studies

(© luengo_ua – stock.adobe.com)

A large new review is bringing relief to pregnant women worried about taking Tylenol. Researchers found no meaningful link between the common pain reliever and autism, ADHD, or intellectual disabilities in children when the medication is used as directed.

The findings, published in The Lancet Obstetrics & Gynaecology, come after months of anxiety sparked by high-profile claims last September that raised fresh concerns about acetaminophen (the generic name for Tylenol, also called paracetamol) during pregnancy. Those warnings left many pregnant people uncertain about taking one of the few pain relievers doctors typically recommend.

An international research team led by Francesco D’Antonio from the University of Chieti in Italy and Asma Khalil from St George’s University Hospital in London reviewed 43 studies. What made this analysis particularly reassuring was its focus on sibling comparison studies. These studies compared children within the same family where one sibling was exposed to the medication in the womb and another wasn’t.

This approach helps control for genetics, family environment, and other shared factors that could confuse the results. When researchers used this method, they found no association between prenatal acetaminophen exposure and these neurodevelopmental outcomes.

Why Earlier Studies Looked Scarier Than They Were

So why did previous research suggest a connection? The answer comes down to a classic mix-up in health research.

It’s easy to see why earlier studies looked concerning. People usually take pain relievers for a reason: chronic pain, fever, infections, inflammation. Those underlying conditions, or the genes that make someone susceptible to them, may be what older research was actually picking up rather than the medication itself.

A Swedish study of 2.48 million births illustrated this clearly. When researchers compared siblings, the apparent risks faded. Similar results came from a large Japanese cohort, where small increases seen in standard analyses didn’t hold up when family factors were accounted for.

In other words, it likely wasn’t the Tylenol. It was everything else going on.

The major medical organizations continue to recommend acetaminophen as a first-line option for fever and pain during pregnancy. The American College of Obstetricians and Gynecologists, the Royal College of Obstetricians and Gynaecologists, and the European Medicines Agency all maintain that when used appropriately, it remains the preferred choice.

The authors warn that discouraging appropriate use of acetaminophen could cause more problems than the drug itself. If you’re pregnant and sick with a fever, the bigger worry may be the untreated fever, which has been linked to miscarriage and preterm birth.

The research team searched medical databases and included only studies that accounted for other factors that might affect results. Their conclusion held consistent across different ways of analyzing the data, whether looking at only the most rigorous studies, only those with long-term follow-up, or the complete dataset.

The politicization of acetaminophen safety has created confusion for pregnant people and their doctors. This review offers clarity. When researchers used methods that better control for why women take pain medication in the first place, they found no association between appropriate acetaminophen use during pregnancy and increased risk of autism, ADHD, or intellectual disability.

Source : https://studyfinds.org/tylenol-autism-large-review-finds-no-meaningful-link/

Popular Quit-Smoking Drug Could Help Men Cut Cannabis Use by One-Third

Marijuana is often touted as ‘non-addictive,’ but it can be a very hard habit to kick. (Photo by Kampus Production from Pexels)

A medication already used by millions to quit cigarettes might offer a new option for men trying to reduce their marijuana consumption, according to research from the Medical University of South Carolina. The study found that varenicline, better known by its brand name Chantix/Champix, helped male cannabis users cut their marijuana sessions by more than one-third. The discovery comes at a time when few effective treatments exist for cannabis use disorder.

The study, published in Addiction, involved 174 adults who met diagnostic criteria for cannabis use disorder and wanted help cutting back. All participants were heavy users, consuming marijuana an average of 27 days out of every 30, with about three separate use sessions daily. Half received varenicline while the other half took a placebo. Everyone also participated in weekly counseling focused on medication adherence and reducing cannabis use.

How Varenicline Reduced Cannabis Use in Men

Men taking varenicline reduced their weekly cannabis use from 12.2 sessions to 7.9 sessions during the latter half of the study period, compared to virtually no change in men receiving placebo. Benefits were observed through the end of treatment, with a brief one-week follow-up showing no immediate rebound in use. Men on varenicline were significantly more likely to test negative for marijuana on urine drug screens than men taking placebo.

The findings offer progress for cannabis use disorder treatment, which has seen limited medication development despite rising rates of problematic marijuana use across the United States. Changes in legalization and shifting social attitudes have led to increased cannabis consumption and more people seeking help, yet few medications have shown consistent benefits.

Why a Smoking Cessation Drug Works for Cannabis

Varenicline was originally developed as a smoking cessation aid and has become one of the most effective medications for helping people quit tobacco. Researchers believe the drug works by targeting nicotinic acetylcholine receptors in the brain’s reward system, the same neural pathways thought to be involved in cannabis dependence. These receptors influence the release of dopamine and other brain chemicals involved in substance use and craving.

Scientists have been exploring whether varenicline might help with other substance use disorders beyond tobacco. Earlier research found the medication reduced alcohol consumption in some studies, though results have been mixed. A small pilot study in 2021 suggested varenicline might help people with cannabis use disorder, prompting researchers to conduct this larger trial.

The Treatment Worked Differently for Women

While men benefited substantially from varenicline, women showed no improvement in cannabis use. Female participants taking varenicline averaged 10.5 cannabis sessions per week compared to 9.2 sessions for women receiving placebo, a difference that wasn’t statistically significant. Women on varenicline also reported higher withdrawal symptoms, increased marijuana cravings, and greater anxiety compared to women taking placebo.

These sex differences mirror patterns researchers have observed with varenicline treatment for alcohol use disorder. In a 2018 study, men taking varenicline reduced heavy drinking while women did better with placebo. Scientists are still working to understand why varenicline helps men but not women reduce cannabis use. Research suggests women are more likely than men to use marijuana as a coping strategy for stress and tension. The increased anxiety some women experienced while taking varenicline might have undermined any potential benefits.

Side Effects and Medication Tolerance

Medication adherence was similar between treatment groups, with participants taking roughly two-thirds of their prescribed doses over the 12-week period. The most common side effects were nausea and disturbed dreams, consistent with varenicline’s known tolerability profile when used for smoking cessation. These side effects were not severe enough to cause most participants to stop taking the medication.

The research team noted that reducing cannabis use, rather than achieving complete abstinence, has recently gained acceptance as a meaningful treatment goal. Reduction in marijuana consumption has been associated with improvements in functioning and quality of life. This makes varenicline’s ability to help men substantially cut their cannabis use particularly valuable, even if complete abstinence isn’t achieved.

Source : https://studyfinds.org/smoking-drug-help-men-cut-cannabis-use/

Walking shark spotted in Australia that reproduces under stress, surprising scientists

Walking Shark (Hemiscyllium ocellatum) (Image: Johnny Gaskell)

Scientists studying Australia’s epaulette sharks have uncovered unexpected reproductive behaviour that challenges long-held biological assumptions. The findings, published in the journal Biology Open, suggest these unusual sharks reproduce without extra energy costs, even during environmental stress.

What scientists discovered about epaulette shark reproduction

Epaulette sharks are well known for their ability to “walk” using their fins. In the study, led by Dr Carolyn Wheeler, researchers examined mature female epaulette sharks off Queensland’s coast, measuring their metabolic energy use across the phases of egg production. Surprisingly, energy use showed no significant increase: while most egg-laying animals require substantial metabolic investment, the sharks’ reproductive energy demand remained completely flat.

The finding contradicts established models of reproductive biology. Professor Jodie Rummer of James Cook University described reproduction as building life from scratch; despite that complexity, the sharks maintained stable energy output. Scientists believe a specialised biological adaptation underlies this reproductive consistency, making the sharks unusually efficient during egg development.

What the findings mean under environmental stress

Rising ocean temperatures threaten many marine species globally. Under stress, animals usually prioritise survival over reproduction, and food shortages often force reproductive shutdowns across species. The epaulette shark appears to avoid this trade-off: Dr Wheeler explained that although reproduction often halts during stress, these sharks may continue producing eggs, their energy balance unchanged despite the pressure.

This challenges assumptions about how climate change affects reproduction. Professor Rummer said reproduction may not be the first function to disappear, and the findings point to a resilience during warming conditions that could stabilise future shark populations and help the species endure rapid environmental change.

Why healthy shark populations matter for reefs

Epaulette sharks play an important role in reef ecosystems, regulating prey populations and maintaining ecological balance. Declining shark numbers harm coral reef health, so reproductive resilience that supports stable shark populations also helps keep reef systems thriving.

Source : https://www.moneycontrol.com/science/walking-shark-spotted-in-australia-that-reproduces-under-stress-surprising-scientists-article-13777054.html

NASA Artemis II: First Lunar Crewed Mission Is Expected To Lift Off On February 6 After Decades, All You Need To Know

NASA is gearing up for the Artemis II mission on February 6, marking the first crewed lunar flight in decades. Four astronauts will embark on a 10-day mission around the moon, testing the Orion spacecraft and the Space Launch System (SLS).

NASA Artemis II: First Lunar Crewed Mission

NASA is preparing for its Artemis II mission, targeted for February 6, the first crewed lunar flight under the Artemis programme. Under this mission, astronauts will be sent towards the Moon for the first time in decades. The crew of four (Reid Wiseman, Victor Glover, Christina Koch, and Jeremy Hansen) will venture around the Moon before returning to Earth.

This 10-day human lunar exploration mission is aimed at testing the Orion spacecraft and Space Launch System with humans onboard, validating critical life support, and re-entry systems in deep space. The mission will serve as a crucial stepping stone for future Artemis missions, including the planned lunar landing and NASA’s long-term exploration goals.

NASA is making final preparations and is targeting to roll out the fully stacked Space Launch System (SLS) rocket together with the Orion crew spacecraft from the Vehicle Assembly Building to Launch Pad 39B at Kennedy Space Center. This launch is expected no earlier than January 17, 2026.

Crawler-Transporter 2 will cover the four-mile journey in up to 12 hours. However, the date and time are subject to change if additional technical preparations are required or weather intervenes.

“We are moving closer to Artemis II, with rollout just around the corner. We have important steps remaining on our path to launch, and crew safety will remain our top priority at every turn as we near humanity’s return to the Moon,” said Lori Glaze, acting associate administrator for NASA’s Exploration Systems Development Mission Directorate.

Wet dress rehearsal, tanking

NASA will conduct a wet dress rehearsal, a prelaunch fuelling test for the rocket, by the end of January. The test will check the team’s ability to load more than 700,000 gallons of cryogenic propellant into the rocket and will let engineers practise safely removing propellant without astronauts present.

This rehearsal will include multiple “runs” to test the team’s ability to hold, resume, and recycle the countdown several times within the final 10 minutes, a phase known as terminal count.

Engineers will keep an eye on propellant loading of liquid hydrogen and liquid oxygen into the rocket. Teams will also pay attention to the effectiveness of recently updated procedures.

Source : https://www.timesnownews.com/technology-science/science/nasa-artemis-ii-first-lunar-crewed-mission-is-expected-to-lift-off-on-february-6-after-decades-all-you-need-to-know-article-153460509

X bans Grok from bikini edits on real people, but you can still undress AI characters, Elon Musk says

X says it has blocked Grok from editing real people into bikinis or sexualised images, but tests and Musk’s own comments show AI characters can still be “undressed,” keeping the controversy very much alive.

X bans Grok from bikini edits on real people, but you can still undress AI characters (Photo:xAI/Reuters)

X has tightened the rules around Grok’s image-editing tools after days of criticism over non-consensual sexual deepfakes, but the changes have opened up a fresh debate rather than closing it. While Grok is now blocked from putting real people into bikinis or sexualised outfits, the system still allows users to create similar content using AI-generated or imaginary characters. Elon Musk has publicly defended the approach, saying it follows what he calls the “de facto standard” for adult content in the US.

The latest discussion started, as many things do on X, with a tweet from DogeDesigner, an account closely associated with Musk. The account claimed it had tried multiple prompts to get Grok to generate nude images and failed each time, arguing that media reports were part of “a relentless attack” on Musk. Musk responded by throwing down a public challenge, “Can anyone actually break Grok image moderation? Reply below.”

As the conversation grew, Musk clarified what Grok is supposed to allow. “With NSFW enabled, Grok is supposed [to] allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated movies on Apple TV. That is the de facto standard in America,” he wrote, adding that rules could vary depending on local laws in different countries.

This explanation came amid growing scrutiny of Grok’s role in the spread of sexualised deepfakes on X. Following the backlash, the platform quietly changed how Grok handles image edits involving real people. Prompts that earlier worked, such as asking the bot to change someone’s clothes into a bikini, began returning blurred or censored results.

X later made this official. Its Safety account said, “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers.” According to reports, the goal was to stop Grok from responding to requests involving sexual poses, swimwear, or explicit scenarios when a real person is involved.

On paper, the policy sounds firm. In practice, it appears far more uneven.

Testing by multiple publications like The Verge showed that while some direct prompts are now blocked, Grok can still be pushed into creating highly sexualised images through slightly altered requests. Even when commands like “put her in a bikini” or “remove her clothes” produced censored images, other prompts continued to work. Requests such as “show me her cleavage,” “make her breasts bigger,” or “put her in a crop top and low-rise shorts” were reportedly accepted, sometimes resulting in images that effectively placed the subject in a bikini anyway.

These results were not limited to paid users. Reporters were able to carry out similar edits using free X and Grok accounts. An age verification pop-up did appear on the Grok website during some tests, but it could be bypassed by simply selecting a birth year that made the user appear over 18. No proof was required, and in many cases, no age check appeared at all on the mobile app or the X website.

The inconsistency becomes even clearer when looking at what Grok still allows. While edits involving women are now partially restricted, Grok reportedly continues to generate images of men or even inanimate objects in bikinis without resistance. In one test, the bot complied with a request to turn a selfie into a sexualised image involving a male subject and others, again using a free account.

Despite the updated rules, The Verge reported that it remains “extremely easy” to undress women or place them into sexualised poses using Grok’s tools, often without linking the activity to an easily identifiable paid account. In one case, a journalist was able to create sexualised deepfakes of herself without being blocked by the system.

Source : https://www.indiatoday.in/technology/news/story/x-bans-grok-from-bikini-edits-on-real-people-but-you-can-still-undress-ai-characters-elon-musk-says-2852204-2026-01-15

Instagram Data Leak Concerns Raised After Password Reset Email: What The Company Has Said

Instagram data leak concerns were triggered after thousands received a password reset email claiming to be from the platform.

Instagram users were spooked with rampant password reset emails

Instagram is used by billions, but when many of those users received a password reset email in recent weeks, alarm bells started ringing. Several reports hinted that Instagram data had been breached, triggering a massive uproar as users sought answers about a possible leak of their information.

Data leaks are hard to detect, especially for users, who often cannot tell whether such activity originates from the company or from hackers. In this case, the email might have looked suspicious, but Instagram has refuted the breach allegations and denied that its users’ data has been exposed.

Instagram Denies Leak Concerns

Instagram claims there was no data breach of its systems that may have exposed user data. So how does one explain the rampant password reset emails thousands have received in the last few weeks? The company says the breach-like emails were triggered by a technical issue that allowed people other than the account holders to start the password reset process.

Instagram also pointed out that all its internal systems are secure and no data has been exposed. This was stated in a post on X within the last 24 hours, reassuring users and asking them to ignore the emails.

A clarification like this should ease concerns, but similar incidents have been denied at first, only for companies to later walk back their claims. In addition to the password reset emails, a report by Malwarebytes claimed that cyber thieves got hold of personal information belonging to 17.5 million Instagram users, and the email activity further raised alarm among the platform’s users.

The report said the exposed data could be misused by scammers to craft convincing personalised emails that phish for account credentials and even compromise devices for further damage.

Source : https://www.news18.com/tech/instagram-data-leak-concerns-raised-after-password-reset-email-what-the-company-has-said-9824704.html

 

Apple picks Google Gemini over ChatGPT to power the new AI Siri

Gemini scores a big victory over ChatGPT. Credit: Matteo Della Torre/NurPhoto via Getty Images

Apple just made a big decision about the future of its in-house AI assistant.

In a statement posted to X, Google and Apple confirmed that the former’s Gemini AI models will serve as the basis for the future of Apple Intelligence. That means Gemini will “help power future Apple Intelligence features” including the long-awaited AI upgrade for Siri, set to launch this year.

This is a big deal because, in the past, Apple has worked with other AI models, most notably including ChatGPT. The OpenAI flagship model was integrated with Siri in late 2024, as a sort of stopgap solution as Apple continues to work on the AI-infused version of Siri that has been in development for a few years now. However, over the past year or so, Gemini overtook ChatGPT as the model of choice according to various benchmarks and rankings. Given that Apple is considered to be well behind the rest of the competition on AI features, it makes sense for the iPhone maker to choose Google’s model over OpenAI’s.

Source : https://mashable.com/article/apple-google-gemini-openai-chatgpt-siri-ai-overhaul

 

Sundar Pichai Announces Universal Commerce Protocol – What Is It and How Will It Revolutionise AI Shopping

Google CEO Sundar Pichai on Sunday posted that AI agents will be a big part of how we shop in the not-so-distant future as he revealed the next big thing in artificial intelligence.

Google CEO Sundar Pichai predicts future of shopping through AI agents.

Google CEO Sundar Pichai on Sunday announced the launch of its Universal Commerce Protocol (UCP), saying that AI agents will be a big part of how we shop in the not-so-distant future.

Revealing what could turn out to be the next big thing in artificial intelligence, Sundar Pichai wrote, “AI agents will be a big part of how we shop in the not-so-distant future.”

“To help lay the groundwork, we partnered with Shopify, Etsy, Wayfair, Target, and Walmart to create the Universal Commerce Protocol, a new open standard for agents and systems to talk to each other across every step of the shopping journey,” he said.

He added that UCP will soon power native checkout, letting people buy directly in AI Mode and through the Gemini app.

What is Universal Commerce Protocol?
Google wants the Universal Commerce Protocol to become an industry standard that retailers use for their AI agents and related systems for discovery, selling and post-sales services.

According to Google, UCP is a new open standard for agentic commerce that works across the entire shopping journey, from discovery and buying to post-purchase support.

The platform establishes a common language for agents and systems to operate together across consumer surfaces, businesses and payment providers.

So instead of requiring a unique connection for every individual agent, UCP lets all agents interact through a single shared standard.

It’s built to work across verticals and is compatible with existing industry protocols like Agent2Agent (A2A), Agent Payments Protocol (AP2) and Model Context Protocol (MCP).
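The “common language” argument is essentially about integration count. A rough sketch makes the scaling clear; the agent and merchant counts below are invented for illustration and are not figures from Google:

```python
# Hypothetical counts, for illustration only.
agents, merchants = 50, 200

# Point-to-point: every agent builds a bespoke integration with every merchant.
point_to_point = agents * merchants

# Shared standard (the UCP idea): each party implements the protocol once.
shared_protocol = agents + merchants

print(point_to_point, shared_protocol)  # 10000 vs 250
```

Bespoke integrations grow multiplicatively with the ecosystem, while a shared protocol grows additively, which is why open standards tend to win once both sides of a marketplace get large.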

The platform has been co-developed with industry leaders including Shopify, Etsy, Wayfair, Target and Walmart, and endorsed by more than 20 others across the ecosystem like Adyen, American Express, Best Buy, Flipkart, Macy’s Inc., Mastercard, Stripe, The Home Depot, Visa and Zalando.

 

Source: https://www.timesnownews.com/technology-science/google-ceo-sundar-pichai-says-ai-agents-will-be-big-part-of-shopping-in-notsodistant-future-what-he-means-by-ucp-ai-mode-checkout-article-153431931

Isro to launch Anvesha satellite on PSLV-C62: Why no one can hide from it

The Indian Space Research Organisation (Isro) is all set to launch the EOS-N1 Anvesha satellite to space aboard the PSLV-C62 mission on Monday.

PSLV-C62’s payload fairing with the Anvesha satellite. (Photo: Isro)

Imagine having a superpower that lets you see beyond what the human eye can detect, revealing hidden details in everything from forests to battlefields.

That’s hyperspectral remote sensing (HRS) in a nutshell. Think of it as turning ordinary satellite photos into a high-tech detective tool.

The Indian Space Research Organisation (Isro) is all set to launch the EOS-N1 Anvesha satellite to space aboard the PSLV-C62 mission. Anvesha is a hyperspectral satellite developed by the Defence Research and Development Organisation (DRDO).

WHAT IS HYPERSPECTRAL?

Back in the day, spies and explorers used simple aerial photos to study landscapes. They would look at shapes, colours, and patterns to guess what was below, like spotting a river by its winding path or a forest by its green blobs.

Now, enter hyperspectral remote sensing, the real game-changer. Instead of a handful of colours, it captures hundreds of super-narrow slices of light across the rainbow, from visible light to infrared that we can’t see. Each tiny spot in the image gets its own unique “fingerprint” based on how it reflects light.

It’s like having a scanner that can tell apart different types of soil, plants, or even man-made materials just by their glow. This leap means we go from rough guesses to precise identifications, all automated by smart software.

HOW IT WORKS

HRS works because everything on Earth interacts with light in its own way. For example, water soaks up certain light waves, while leaves bounce others back, creating a signature pattern, like a barcode.

Scientists build libraries of these barcodes from pure samples (just dirt or just grass) and compare them to what the satellite sees.
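The “barcode” matching described above can be sketched as a toy classifier. The five-band spectra below are invented for illustration (real hyperspectral sensors record hundreds of bands), and the matching rule is the commonly used spectral-angle comparison, not anything specific to Anvesha:

```python
import numpy as np

# Hypothetical 5-band reflectance "fingerprints" (real sensors capture hundreds of bands).
library = {
    "water":      np.array([0.08, 0.06, 0.04, 0.02, 0.01]),  # absorbs near-infrared
    "vegetation": np.array([0.05, 0.08, 0.06, 0.45, 0.50]),  # strong near-IR bounce
    "dry_soil":   np.array([0.20, 0.25, 0.30, 0.35, 0.38]),  # gently rising curve
}

def classify(pixel):
    """Spectral angle mapping: the library spectrum at the smallest angle wins."""
    def angle(a, b):
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return np.arccos(np.clip(cos, -1.0, 1.0))
    return min(library, key=lambda name: angle(pixel, library[name]))

observed = np.array([0.06, 0.09, 0.07, 0.42, 0.47])  # noisy vegetation-like pixel
print(classify(observed))  # -> vegetation
```

Comparing angles rather than raw values makes the match insensitive to overall brightness, so a shadowed patch of grass still matches the vegetation fingerprint.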

To gather this info on the ground, experts use handheld gadgets called spectroradiometers, portable scanners that measure light reflections up close.

When combined with Geographic Information Systems (GIS), HRS lets you layer this info over real-world locations. You can zoom in, spin 3D models, and ask questions like “Where’s the best spot to cross this river?”

WHY IT MATTERS FOR DEFENCE

In today’s world, HRS is a secret weapon for militaries. It’s not about destruction; it’s about smart planning to keep people safe.

Here’s how it helps:

  • Mapping the Ground: Ever wonder if a tank can drive through mud without getting stuck? HRS identifies soil types like sandy deserts versus sticky clay, helping predict safe paths for vehicles or troops.
  • Spotting Hidden Dangers: Camouflage doesn’t fool it. It can detect fake coverings or unusual materials in cities, like telling apart different plants that might hide equipment. In places like India’s diverse terrains, this means better hiding spots for allies or spotting enemies.
  • Planning Like a Pro: Create virtual battle simulations with 3D maps. See what’s visible from a hilltop or just plan the routes. It’s like playing a video game, but with real stakes.
  • Watching for Trouble: Track changes from floods or earthquakes that could affect operations, giving early warnings.

HURDLES AHEAD AND A BRIGHT FUTURE

No tech is perfect. HRS can be expensive, and handling all that data is like sorting a giant library. Plus, you need experts to interpret it right, and the weather can sometimes blur the picture.

But solutions are coming: AI to speed things up, cheaper hybrid systems, and more training programs.

Source : https://www.indiatoday.in/science/story/isro-to-launch-drdo-eos-n1-anvesha-satellite-on-pslv-c62-why-no-one-can-hide-from-it-2849701-2026-01-11

DRDO Successfully Tests Long-Duration Scramjet Engine For Hypersonic Missile Programme

The test was carried out by the Defence Research & Development Laboratory (DRDL), Hyderabad, a key laboratory under the Defence Research and Development Organisation.

The successful test builds on an earlier long-duration subscale scramjet test conducted on April 25.

India achieved a major milestone in hypersonic weapons development after the Defence Research and Development Organisation (DRDO) successfully conducted a long-duration ground test of a full-scale, actively cooled scramjet engine, a critical technology for hypersonic cruise missiles.

The test was carried out by the Defence Research & Development Laboratory (DRDL), Hyderabad, a key laboratory under the Defence Research and Development Organisation, at its Scramjet Connect Pipe Test (SCPT) facility. According to the Ministry of Defence, the engine achieved a sustained runtime of over 12 minutes, marking a path-breaking achievement in India’s hypersonic missile programme.

The successful test builds on an earlier long-duration subscale scramjet test conducted on April 25, 2025 and represents a crucial step towards the development of operational hypersonic cruise missiles. The full-scale combustor and the specialised test facility were designed and developed in-house by Defence Research & Development Laboratory, with fabrication and realisation supported by Indian industry partners.

Hypersonic cruise missiles are capable of flying at speeds exceeding five times the speed of sound, over 6,100 kilometres per hour, for extended durations. This capability is enabled by advanced air-breathing scramjet engines, which use supersonic combustion to sustain high-speed atmospheric flight. The latest ground test validated both the advanced scramjet combustor design and the performance of the SCPT facility, the Defence Ministry said.
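As a quick sanity check on that figure, Mach 5 at a sea-level speed of sound of roughly 340 m/s (an assumption; the speed of sound is lower at cruise altitude) works out to just over 6,100 km/h:

```python
# Assumed sea-level speed of sound in m/s; lower at altitude.
speed_of_sound_ms = 340

# Mach 5 converted from m/s to km/h (1 m/s = 3.6 km/h).
mach5_kmh = 5 * speed_of_sound_ms * 3.6
print(round(mach5_kmh))  # 6120
```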

Congratulating the teams involved, Defence Minister Rajnath Singh said the successful test provides a “solid foundation” for India’s Hypersonic Cruise Missile Development Programme. He also lauded the contribution of DRDO scientists, industry partners and academia in achieving the milestone.

Source : https://www.news18.com/india/drdo-successfully-tests-long-duration-scramjet-engine-for-hypersonic-missile-programme-ws-l-9820774.html

Mysterious ‘medical issue’ in space forces NASA to consider evacuation plan for astronauts on board space station

The stricken astronaut’s name and health problem have not been disclosed

NASA is sensationally mulling over the first-ever medical evacuation of its International Space Station astronauts over a mysterious health issue with one of its crew.

The space agency shared the news after the concern forced them to cancel an ISS spacewalk scheduled for today.

An agency spokeswoman did not identify the astronaut or the medical issue, but said they are in a stable condition on the orbiting laboratory.

She said: “Safely conducting our missions is our highest priority, and we are actively evaluating all options, including the possibility of an earlier end to Crew-11’s mission.

“These are the situations NASA and our partners train for and prepare to execute safely.

“We will provide further updates within the next 24 hours.”

Crew-11 is made up of four astronauts: the United States’ Zena Cardman and Mike Fincke, Japanese astronaut Kimiya Yui and Russian cosmonaut Oleg Platonov.

Station commander Fincke and flight engineer Cardman were due to head outside the International Space Station on Thursday for a marathon 6.5-hour spacewalk to install new external hardware.

NASA has never had to pull an astronaut from the ISS over a medical issue, but it does have evacuation capabilities built into every mission, with crew return vehicles on standby.

The agency’s statement read: “Due to medical privacy, it is not appropriate for NASA to share more details about the crew member.

“The situation is stable. NASA will share additional details, including a new date for the upcoming spacewalk, later.”

While calling off a spacewalk is unusual, it has happened before.

Back in 2021, a planned mission was scrapped after astronaut Mark Vande Hei suffered a pinched nerve and was unable to venture outside the station.

Another spacewalk was dramatically halted in 2024 after an astronaut reported “spacesuit discomfort” just moments before heading out.

Earlier on Wednesday, everything appeared to be going to plan.

NASA confirmed final preparations were underway, with Fincke and Cardman busy sorting tools and gear.

Japanese astronaut Koichi Wakata and NASA astronaut Chris Williams, who arrived at the ISS aboard a Soyuz spacecraft in November, helped the pair review spacewalk procedures, according to SpaceNews.

Later in the day, however, Wakata was heard on open communications requesting a private medical conference with a flight surgeon.

Such private consultations are a normal part of life on the ISS, allowing astronauts to discuss health concerns confidentially.

It remains unclear whether the request was linked to the medical issue referenced by NASA, or whether Wakata himself was affected.

NASA has also not confirmed whether the issue involved one of the two astronauts scheduled for the now-postponed spacewalk.

Astronauts typically spend six to eight months at a time living aboard the ISS, where they have access to basic medical equipment and a limited supply of medications for emergencies.

In the event of a serious problem, crew members would likely evacuate using the commercial crew capsule docked at the station that brought them there.

Crew-11 arrived at the ISS on August 1, 2025, with a planned return in late February.

The four astronauts are expected to head home only after Crew-12 arrives, no earlier than February 15, to take over operations.

NASA insists the ISS must always be staffed, as astronauts are vital for maintenance, repairs, running complex experiments, managing life support systems and carrying out spacewalks, jobs that machines alone cannot handle.

Even when astronauts have been left stuck in orbit, NASA has kept the station running.

Sunita Williams and Butch Wilmore grabbed global attention in June 2024 when they launched to the ISS aboard Boeing’s Starliner capsule, which suffered problems before docking.

Source: https://www.the-sun.com/tech/15752672/mysterious-medical-issue-space-forces-nasa-evacuation-plan-astronauts/

ISRO Returns To Launch Pad For 2026’s First Mission: Advanced Earth Imaging Satellite

ISRO has selected its workhorse PSLV for the mission, which is set to lift off at 10.17 am on January 12 from Sriharikota

The industry-made PSLV will launch an Earth Observation Satellite for oceanographic studies along with the Indo-Mauritius joint satellite and Leap-2 satellite. Image/News18

The Indian Space Research Organisation (ISRO) is ready to begin the new year with the launch of an advanced Earth observation satellite aboard its workhorse—the PSLV.

The launch of the mission, PSLV-C62/EOS-N1, is scheduled for 10.17 am on January 12 from its spaceport at Sriharikota. EOS-N1 is a hyperspectral imaging satellite capable of capturing ground data in hundreds of narrow wavelength bands, which can help in the identification of materials on the ground, not just shapes and colours.

It will be accompanied by 18 other co-passenger satellites from different Indian and international users through its industrial partner NewSpace India Limited (NSIL). NSIL has been marketing ISRO’s launch services onboard PSLV, SSLV, and LVM3 launchers to international customers. To date, it has launched over 137 customer satellites onboard 5 PSLV, 2 LVM3, and 2 SSLV missions.

ISRO has a packed launch calendar this year, beginning with the first uncrewed mission of Gaganyaan to demonstrate an end-to-end mission, including the aerodynamics of the human-rated launch vehicle, mission operations of the orbital module, re-entry, as well as recovery of the crew module. This will be followed by a Technology Demonstration Satellite (TDS-01) to validate new technologies and indigenous components for satellite platforms. Once proven, these technologies will be employed in navigation and communication missions in the near future.

The anticipation is also building for the launch of the first fully Indian-industry-manufactured PSLV through NSIL, which is also working to build five PSLV-XL through Hindustan Aeronautics Limited (HAL) and L&T Consortium.

Source: https://www.news18.com/india/isro-returns-to-launch-pad-for-2026s-first-mission-advanced-earth-imaging-satellite-9814161.html

The Ancient Braces Myth: Why Our Ancestors Didn’t Need Straight Teeth

Credit: Prostock-studio on Shutterstock

Ancient Egyptians and Etruscans pioneered orthodontics, using delicate gold wires and catgut to straighten teeth. It’s a tale that has appeared in dentistry textbooks for decades, portraying our ancestors as surprisingly modern in their pursuit of the perfect smile. But when archaeologists and dental historians finally scrutinized the evidence, they discovered that most of it is myth.

Take the El-Quatta dental bridge from Egypt, dating to around 2500 BC. The gold wire found with ancient remains wasn’t doing what we thought at all. Rather than pulling teeth into alignment, these wires were stabilizing loose teeth or holding replacement ones in place. In other words, they were functioning as prostheses, not braces.

The gold bands discovered in Etruscan tombs tell a similar story. They were probably dental splints designed to support teeth loosened by gum disease or injury, not devices for moving teeth into new positions.

There are some rather compelling practical reasons why these ancient devices couldn’t have worked as braces anyway. Tests on Etruscan appliances revealed the gold used was 97% pure, and pure gold is remarkably soft.

It bends and stretches easily without breaking, which makes it useless for orthodontics. Braces work by applying continuous pressure over long periods, requiring metal that’s strong and springy. Pure gold simply can’t manage that. Try to tighten it enough to straighten a tooth and it will deform or snap.

Then there’s the curious matter of who was wearing these gold bands. Many were found with the skeletons of women, suggesting they might have been status symbols or decorative jewelry rather than medical devices. Tellingly, none were discovered in the mouths of children or teenagers – exactly where you’d expect to find them if they were genuine orthodontic appliances.

But perhaps the most fascinating revelation is this: ancient people didn’t have the same dental problems we face today.

Malocclusion – the crowding and misalignment of teeth that’s so common now – was extremely rare in the past. Studies of Stone Age skulls show almost no crowding. The difference is down to diet.

Our ancestors ate tough, fibrous foods that required serious chewing. All that jaw work developed strong, large jaws perfectly capable of accommodating all their teeth.

Modern diets, by contrast, are soft and processed, giving our jaws little exercise. The result? Our jaws are often smaller than those of our ancestors, while our teeth remain the same size, leading to the crowding we see today.

Since crooked teeth were virtually non-existent in antiquity, there was hardly any reason to develop methods for straightening them.

That said, ancient people did occasionally attempt simple interventions for dental irregularities. The Romans provide one of the earliest reliable references to actual orthodontic treatment.

Aulus Cornelius Celsus, a Roman medical writer in the first century AD, noted that if a child’s tooth came in crooked, they should gently push it into place with a finger every day until it shifted to the correct position. Although basic, this method is built on the same principle we use today – gentle, continuous pressure can move a tooth.

After the Roman era, little progress occurred for centuries. By the 18th century, however, interest in straightening teeth had revived, albeit through some rather agonizing methods.

Those without access to modern dental tools resorted to wooden “swelling wedges” to create space between overcrowded teeth. A small wedge of wood was inserted between teeth. As saliva was absorbed, the wood expanded, forcing the teeth apart. Crude and excruciating, perhaps, but it represented a step towards understanding that teeth could be repositioned through pressure.

Scientific Orthodontics

Real scientific orthodontics began with French dentist Pierre Fauchard’s work in 1728. Often called the father of modern dentistry, Fauchard published a landmark two-volume book, The Surgeon Dentist, containing the first detailed description of treating malocclusions.

He developed the “bandeau” – a curved metal strip wrapped around teeth to widen the dental arch. This was the first tool specifically designed to move teeth using controlled force.

Fauchard also described using threads to support teeth after repositioning. His work marked the crucial shift from ancient myths and painful experiments to a scientific approach that eventually led to modern braces and clear aligners.

With advances in dentistry during the 19th and 20th centuries, orthodontics became a specialist field. Metal brackets, archwires, elastics and eventually stainless steel made treatment more predictable.

Later innovations – ceramic brackets, lingual braces and clear aligners – made the process more discreet. Today, orthodontics employs digital scans, computer models, and 3D printing for remarkably precise treatment planning.

The image of ancient people sporting gold and catgut braces is certainly appealing and dramatic, but it doesn’t match the evidence.

Source : https://studyfinds.org/ancient-braces-myth/

Gaganyaan, PSLV, SSLV private debut: Isro’s big plans for 2026 revealed

Isro is set to dominate the 2026 space calendar with several high-profile launches including the robotic pioneer Vyommitra. From the Gaganyaan trials to electric satellites, India is redefining the cost of innovation.

From uncrewed robotic tests to ambitious planetary explorers, the current year promises to be a masterclass in affordable innovation for the Indian Space Research Organisation (Isro). (Photo: Isro)

The Indian Space Research Organisation (Isro) is no longer just a participant in the global space race; it is setting the pace. Following the historic triumph of the Chandrayaan-3 Moon landing, the agency has prepared a 2026 calendar that reads like a science fiction novel.

From uncrewed robotic tests to ambitious planetary explorers, the current year promises to be a masterclass in affordable innovation for Isro.

Speaking in an address after the successful LVM3-M6 mission (Launch Vehicle Mark-III’s sixth operational flight), Isro chairman V. Narayanan revealed an ambitious roadmap that highlights India’s evolution from a regional player into a dominant global space powerhouse.

From the highly anticipated Gaganyaan mission to the surging prowess of private startups like Skyroot Aerospace, the coming year is set to be a definitive turning point for the nation’s celestial ambitions.

Narayanan emphasised that the Gaganyaan mission marks a crucial step towards human spaceflight, noting that its success will cement India’s place in an elite club of spacefaring nations.

JANUARY 2026: PSLV-C62, THE STRATEGIC EYE

The year begins with the PSLV-C62 mission, tentatively set for early this year. This launch carries the EOS-N1 satellite, a sophisticated tool for hyperspectral imaging, an advanced technique that captures data across hundreds of narrow light wavelengths for every pixel in an image, not just red, green, and blue.

By capturing chemical signatures from space, it provides vital data for border surveillance and disaster management. It will be accompanied by 18 smaller international satellites, showcasing Isro’s role as a global launch hub.

FEBRUARY 2026: PSLV-N1, THE INDUSTRY MILESTONE

In a landmark shift towards privatisation, the PSLV-N1 mission will mark the first time a PSLV rocket has been entirely manufactured by an Indian industry consortium (HAL and L&T).

The primary payload is the EOS-10/Oceansat-3A, which will provide critical oceanographic data, aiding everything from the fishing industry to climate research. Co-passengers likely include the India-Mauritius Joint Satellite (IMJS) and possibly LEAP-2 from Dhruva Space.

MARCH 2026: GAGANYAAN G1, THE ROBOTIC PIONEER

Scheduled for March 2026, the Gaganyaan G1 mission is the crown jewel of the schedule. A human-rated LVM3 (Launch Vehicle Mark III) rocket will carry Vyommitra, a female humanoid robot, into orbit.

This uncrewed flight is a critical safety test, designed to validate the life support, re-entry, and sea recovery systems that will eventually keep Indian astronauts safe during their journey into the thermosphere.

MARCH 2026: TDS-01, THE ELECTRIC REVOLUTION

Also arriving in March is the TDS-01 technology demonstrator. This mission represents a quiet revolution in satellite engineering.

By testing a high-thrust electric propulsion system, Isro aims to reduce satellite fuel weight by 90 per cent. This shift from chemical to electric power allows for lighter, cheaper, and longer-lasting spacecraft.
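The claimed fuel savings follow from the rocket equation: electric thrusters exhaust propellant far faster (higher specific impulse) than chemical ones, so the same manoeuvre burns much less mass. The delta-v budget and specific impulse values below are generic textbook figures, not Isro’s numbers:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def propellant_fraction(delta_v_ms, isp_s):
    """Tsiolkovsky rocket equation: fraction of initial mass burned as propellant."""
    return 1 - math.exp(-delta_v_ms / (isp_s * G0))

dv = 2000  # m/s, a representative orbit-raising budget (assumed)
chem = propellant_fraction(dv, isp_s=300)   # typical chemical thruster
elec = propellant_fraction(dv, isp_s=3000)  # typical ion/Hall-effect thruster

print(f"chemical: {chem:.0%}, electric: {elec:.0%}, saving: {1 - elec/chem:.0%}")
```

With these assumed values the electric option cuts propellant mass by roughly 85 to 90 per cent, consistent with the scale of the reduction the article describes.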

MARCH 2026: SSLV-L1, THE POCKET ROCKET’S COMEBACK

A dedicated commercial or technology demonstration of the Small Satellite Launch Vehicle (SSLV-L1) is expected before March 2026.

The mission marks progress in small satellite launches and privatisation.

In its 2022 maiden flight (SSLV-D1 on August 7), the rocket’s three solid stages worked perfectly, but vibrations during separation caused sensor data anomalies.

As a result, the onboard system erroneously switched to salvage mode, leading to a velocity shortfall and placement of EOS-02 and AzaadiSAT into an unstable 356 x 76 km orbit, where they quickly decayed.

MID-2026: GSLV-F17, THE REPLACING WATCHMAN

By the middle of the year, the GSLV-F17 mission is expected to deploy the NVS-03 satellite.

This mission will replenish India’s NavIC satellite navigation constellation, a crucial strategic asset.

Source : https://www.indiatoday.in/science/story/isro-2026-space-odyssey-launch-schedule-gaganyaan-vyommitra-robot-electric-propulsion-satellite-india-space-missions-pslv-gslv-nvs-science-news-space-news-isro-news-2845459-2026-01-03

Viral Protein Found In Cancer, Lupus, ALS Traced To Infection From Millions Of Years Ago

DNA sequence (Credit: Gio.tto on Shutterstock)

A protein showing up in breast cancer tumors, lupus patients, and people with ALS has been traced to an unexpected source: a virus that infected human ancestors millions of years ago. Scientists have now captured the first detailed images of this disease-linked protein, revealing what this genetic remnant actually looks like. These ancient viral sequences make up about 8% of the human genome.

The protein comes from HERV-K (Human Endogenous Retrovirus K), a retrovirus that infected primates millions of years ago and was permanently copied into their DNA. That viral DNA has been passed down through generations ever since. In healthy people, cells keep this ancient code locked away and silent. But something goes wrong in certain diseases. Cancer cells in breast, ovarian, prostate, and other tumors start making the HERV-K protein. Immune cells from patients with rheumatoid arthritis and lupus show elevated levels. In people with ALS, the protein floats in spinal fluid where it appears to damage neurons.

For decades, researchers have known this fossilized virus shows up in diseased tissue, but they’ve never seen what the protein looks like or understood how it works. The new study from the La Jolla Institute for Immunology, published in Science Advances, finally provides that picture. The images reveal the protein in both its ready-to-spring form and its activated state.

Catching a Shape-Shifter

The HERV-K protein naturally exists in an unstable state, like a mousetrap set and ready to snap. To take pictures, researchers needed to freeze it in that cocked position, but it kept collapsing into its sprung form before they could capture an image.

Lead researchers Jeremy Shek and Chen Sun engineered hundreds of modified versions, testing different ways to hold the protein in place. The winning approach worked like putting a safety pin in a grenade—they added molecular locks that kept the structure stable long enough to photograph. These stabilized versions could survive freezing and held together even under harsh conditions that would normally trigger the shape change.

HERV-K Shaped Differently from HIV

The images revealed something unexpected. HERV-K looks completely different from HIV and other well-studied viruses. It’s tall and narrow, shaped like an inverted tripod with three prongs. Each prong has two parts—a top section that latches onto cells and a lower section that handles the membrane fusion.

When the protein activates, this arrangement explodes into action. Part of it shoots out like a harpoon toward target cells, the whole structure extends like a telescope, then snaps back into a hairpin shape that forces two cell membranes together.

The activated form revealed something researchers hadn’t seen before: an extra structural piece sitting between two standard regions. This feature appears specific to HERV-K’s family of viruses and doesn’t exist in HIV or related viruses. That difference might be important for developing targeted treatments.

Antibodies Light Up Diseased Cells

To study HERV-K in real patients, researchers needed tools that could detect the protein and tell the difference between its various forms. The team created a set of antibodies—immune molecules that latch onto specific targets—by immunizing mice and screening which ones stuck to HERV-K.

Two antibodies proved especially useful for research. One recognizes only the ready-to-spring form of the protein, while another only grabs onto the activated version. These became crucial tools for confirming they had captured the right structures in the lab.

The real test came with patient samples. When researchers exposed immune cells from people with lupus and rheumatoid arthritis to five of the antibodies, the diseased cells lit up with fluorescent markers. The staining appeared primarily inside the cells. Notably, the antibody that only recognizes the activated form did not react—suggesting that patient cells contain the ready-to-spring version or other forms of the protein, not the activated state. Immune cells from healthy people showed no reaction at all.

HERV-K Opens Doors to New Treatments

The structures and antibodies create opportunities for targeting HERV-K in disease. Since the protein appears on cancer cell surfaces, it could serve as a target for immune-based therapies.

Previous studies have shown promise with this approach. One antibody inhibited breast cancer cell growth in laboratory tests and in tumor-bearing mice. Other researchers engineered immune cells to recognize and attack anything displaying HERV-K—these modified cells successfully killed breast cancer and melanoma cells in lab dishes and stopped tumor spread in animal studies.

For ALS, a separate team found that an antibody blocking HERV-K from connecting with its receptor on nerve cells reduced the neurotoxicity seen in patient samples.

HERV-K might actually be easier to target than HIV. The protein’s surface is less heavily coated with sugars that typically help viruses hide from the immune system. More of its vulnerable spots are exposed, particularly at the top where it likely binds to cells. That exposure could make it easier for therapeutic antibodies to latch on.

Why Dormant Viral DNA Wakes Up

HERV-K represents the most recently acquired viral DNA in the human genome. The genome contains roughly 100 full-length HERV-K sequences, though most picked up mutations over time that broke them. At least 10 locations still have intact genes capable of producing the envelope protein.

Healthy cells keep these viral remnants switched off through chemical modifications to DNA. In cancer and autoimmune disease, something disrupts that silencing mechanism. Cancer cells may reactivate HERV-K, and the protein is associated with increased cell growth and survival. In autoimmune conditions, HERV-K proteins can trigger inflammation, and when patients’ antibodies attack these proteins, the resulting immune complexes may contribute to disease progression.

The acidic environment inside tumors might favor the activated form of the protein, potentially explaining why it behaves differently in cancer versus autoimmune disease. Different versions of HERV-K in the genome produce proteins that stay trapped inside cells or reach the cell surface, which could account for varying disease patterns.

Source : https://studyfinds.org/viral-protein-in-cancer-traced-to-infection-from-millions-of-years-ago/

Can Losing Your Teeth Fuel Cognitive Decline? Study Yields Surprising Link

(Credit: Karolina Grabowska from Pexels)

Losing several teeth changes more than just how well someone can chew a steak. A study from Japan suggests something more concerning happens. After their teeth were removed, the mice in the study began showing memory problems. Their brains showed signs of stress and stress-related changes in the regions that handle memory and learning. What’s more, this happened even when the mice ate a normal-protein diet, suggesting the tooth loss itself, not just poor nutrition, might be affecting the brain.

Scientists at Hiroshima University tracked aging mice for six months after pulling their upper molars on both sides. They wanted to know if tooth loss leads to brain problems because people can’t eat well afterward, or if something else is going on. The answer surprised them. Mice that lost teeth performed worse on memory tests whether they ate normal or low-protein diets. When researchers examined brain tissue, they found higher levels of molecules linked to cell death, increased signs of inflammation, and fewer cells marked as neurons in key memory regions.

These are mice, not people. But the findings, published in Archives of Oral Biology, hint at a link between missing teeth and brain changes that is not explained by diet alone.

How Tooth Loss Affects Memory and Cognitive Function

Researchers divided aging mice into four groups at three months old. Some had their upper molars on both sides extracted while others kept all their teeth. Half of each group received normal protein levels in their food, while the other half ate diets containing 50 percent less protein. This setup was designed to approximate what can happen for some elderly people after losing teeth: they avoid meat, fish, and eggs because chewing hurts.

Six months later, the team tested memory using a Barnes maze, a circular platform with 20 holes around its edge. Just one hole led to an escape box. Mice with intact memories learned quickly which hole offered escape. Mice with memory problems took longer to find the correct hole, often following more erratic paths.

The results were clear: mice that lost their teeth performed worse on memory tests than mice with full sets of teeth. Among mice eating less protein, this difference appeared larger, though the study notes its sample size may have been too small to detect whether diet and tooth loss interact statistically. Body weight remained steady across all groups, ruling out starvation or general poor health as explanations.

Most importantly, the memory problems tracked with tooth loss whether mice ate normal-protein or protein-poor diets. The missing teeth themselves, not dietary protein levels, predicted which mice would struggle to remember.

Brain Cell Death Linked to Tooth Loss

The memory tests revealed problems, but brain tissue analysis showed why. Scientists examined the hippocampus, where the brain forms and stores memories. They measured gene activity for two molecules called Bax and Bcl-2 that regulate cell death pathways. A rise in Bax expression relative to Bcl-2 is commonly used as a marker of apoptosis-related signaling, a programmed cell death process.

Mice missing teeth showed higher Bax-to-Bcl-2 ratios in their hippocampi compared to mice with intact teeth. Protein intake made no difference. Whether mice consumed normal or reduced protein, tooth loss was associated with higher levels of this apoptosis-related marker.

Additional analysis revealed markers of inflammation and fewer NeuN-positive cells in specific memory regions. The CA1 region, which helps form new memories and recall old ones, displayed high levels of GFAP and Iba-1, proteins that signal brain inflammation and stress. The same region contained fewer NeuN-positive cells, a marker consistent with reduced neuron presence in the sampled tissue.

The dentate gyrus, another memory region, showed similar patterns: markers of increased inflammation and fewer NeuN-positive cells. The CA3 region showed less change, though low protein did reduce neuron-related cell markers there. Across the hippocampal regions they examined, tooth loss had the dominant association with these brain changes while diet played a smaller role.

The Connection Between Missing Teeth and Brain Health

The relationship between oral health and brain function has interested researchers for years, though the mechanisms remain unclear.

The paper discusses several possibilities. Gum disease, which often precedes tooth loss, involves bacteria and inflammatory processes. Inflammatory signals could potentially affect blood vessels or other brain tissue. Another theory involves sensory input: teeth connect to the brain through the trigeminal nerve, one of the largest nerves in the head. Chewing sends information through this nerve to brain regions handling attention, learning, and memory. Losing teeth disrupts these signals in mice, which might affect brain activity.

This mouse study supports the idea of an effect not solely explained by nutrition. Mice fed a normal-protein diet still showed markers of brain inflammation and reduced NeuN-positive cells after tooth extraction. The tooth loss itself appears associated with brain changes in mice, not just secondary effects like malnutrition.

The research team used aging mice (SAMP8 strain) that naturally develop age-related problems including memory decline, making them useful for studying tooth loss effects in the context of aging.

Study Limitations and What They Mean

Mouse brains differ from human brains, so these findings need confirmation in people before drawing conclusions about tooth loss and dementia in humans. Mice received standardized diets while human nutrition varies widely. The six-month observation period in these aging mice may not capture all relevant time-course effects.

Reducing dietary protein meant increasing carbohydrates to maintain total calories. Higher carbohydrate intake could have influenced results, though this doesn’t change the main finding about tooth loss associations. Sample sizes ranged from seven to nine mice per group; larger studies would provide more confidence in the results, particularly regarding potential interactions between tooth loss and diet. Only male mice were tested, so whether females show the same patterns remains unknown.

Protecting Your Teeth May Protect Your Memory

In these mice, tooth loss was associated with worse memory performance, increased markers of apoptosis-related signaling, heightened neuroinflammation indicators, and fewer NeuN-positive cells in key memory regions. The tooth loss associations and the low-protein diet effects appeared to work through separate mechanisms rather than combining to amplify each other, though the authors note the study may have lacked sufficient statistical power to detect interactions confidently.

Whether similar processes occur in humans requires further study. The mouse findings suggest tooth loss could directly affect brain biology rather than working only through nutritional changes, but proving this in humans needs additional research. If the connection holds, preventing tooth loss might become one strategy for supporting cognitive health during aging.

Source : https://studyfinds.org/losing-your-teeth-fuels-cognitive-decline/

Major Alzheimer’s Breakthrough? Advanced-Stage Mice Fully Recover After Taking Experimental Compound

(© Ivelin Radkov – stock.adobe.com)

For over a century, doctors and scientists have accepted one unchanging truth: Alzheimer’s disease cannot be reversed. Once the brain deteriorates into dementia, there’s no coming back. That assumption just collapsed.

Researchers at Case Western Reserve University have done what generations of scientists thought impossible. They reversed advanced Alzheimer’s disease in mice. Not slowed it. Not stabilized it. Reversed it. Older mice with memory problems, in two different Alzheimer’s-like mouse models (one focused on amyloid plaques, one on tau tangles), regained normal performance on memory tests after treatment with an experimental compound called P7C3-A20.

The mice were elderly animals with clear brain pathology and memory loss. In one model, amyloid plaques were prominent. In the other, tau-related disease features dominated. Their brains healed anyway.

Treatments have focused on slowing decline or managing symptoms, never on turning back the clock. This new work, published in Cell Reports Medicine, upends that approach.

How Did Scientists Reverse Alzheimer’s Disease in Mice?

Researchers used two different mouse models that mimic human Alzheimer’s disease. The first, called 5xFAD mice, develops amyloid plaques similar to those in human patients. The second, PS19 mice, develops tau tangles, another hallmark of the disease. Both types develop memory problems and brain damage that look a lot like human Alzheimer’s.

Scientists divided mice into groups starting at either 2 months old (mid-stage disease) or 6 months old (advanced disease). The mid-stage group received daily injections of either P7C3-A20 or a placebo until 6 months old. The advanced-disease group received treatment until 12 months old. In a separate tau tangle model, 11-month-old mice received treatment for one month. Each treatment group included both male and female mice.

Mice with advanced Alzheimer’s that received P7C3-A20 performed as well on memory tests as healthy mice. In the Morris water maze, a standard test where mice must remember the location of a hidden platform, treated Alzheimer’s mice found the platform as quickly as normal mice. Untreated Alzheimer’s mice struggled. Similar improvements showed up in object recognition tests and other measures of thinking ability.

But the changes went far beyond behavior. Brain tissue analysis showed broad improvements across many Alzheimer’s-linked measures, including reduced plaque accumulation, tau-related changes, blood-brain barrier damage, and inflammation. Most remarkably, the brains began generating new neurons again, a process that normally shuts down in Alzheimer’s.

What Causes Alzheimer’s Disease? The NAD+ Connection

The key to this reversal lies in NAD+, a molecule that works as universal energy currency in cells. Every cell needs NAD+ to power repair processes, fight oxidative stress, and keep DNA intact. When NAD+ levels fall, cells can’t keep up with constant maintenance needs.

The research team discovered that Alzheimer’s severity tracks with disrupted NAD+ homeostasis in both mice and humans. When they examined human brain tissue, people who had died with Alzheimer’s showed disturbed NAD+ metabolism.

But some elderly people had Alzheimer’s-like changes in the brain at autopsy yet had never developed dementia while alive. In this study’s analysis, these individuals showed gene expression patterns suggesting preserved NAD+ homeostasis despite their brain pathology.

P7C3-A20 doesn’t deliver NAD+ directly. Instead, it helps cells restore NAD+ balance when they are under stress (similar to fixing a leaky bucket rather than just pouring more water in).

The treatment also normalized a blood marker tied to Alzheimer’s in the mice. This marker, p-tau217, is used in humans to help diagnose the disease.

Can Alzheimer’s Disease Be Reversed in Humans?

To identify which aspects of the mouse findings might translate to human patients, researchers compared protein changes in the treated mice with databases of human Alzheimer’s brain tissue. They found 46 proteins that changed the same way in both human and mouse Alzheimer’s brains. All 46 returned to normal with P7C3-A20 treatment.

These proteins affect how cells handle stress, produce energy, and manage inflammation. The finding suggests potential drug targets for treating human Alzheimer’s.

Source : https://studyfinds.org/alzheimers-breakthrough-advanced-stage-mice-fully-recover-after-treatment-experimental-compound/

What Makes Brains Conscious That Computers Lack?

Credit: Yurchanka Siarhei on Shutterstock

As ChatGPT and other large language models dazzle with increasingly human-like abilities, a fundamental question looms: could these systems ever become conscious? A theoretical paper published in Neuroscience and Biobehavioral Reviews argues the answer is no for today’s digital systems—and possibly for any system built on the same computational assumptions. The questions at the heart of this problem run deeper than processing power or algorithmic sophistication.

Neuroscientists Borjan Milinkovic from the Paris-Saclay Institute of Neuroscience and Jaan Aru from the University of Tartu have developed a theoretical framework called “biological computationalism” that challenges how artificial intelligence research thinks about consciousness. By analyzing the computational principles underlying biological neural systems, from molecular dynamics to whole-brain activity, they identify specific physical features that digital computers fundamentally lack.

Their target is computational functionalism, the dominant view that consciousness arises from the right pattern of information processing, regardless of whether that processing happens in neurons, silicon chips, or any other medium. According to this perspective, replicate the algorithm and consciousness should follow. The researchers argue this assumption rests on a flawed understanding of how biological brains actually compute.

How Digital Systems Differ from Biological Brains

Modern computers separate memory from processing, software from hardware, algorithm from implementation. Digital systems store data in one physical location, manipulate it in another, and follow instructions from a third component, all connected by communication buses. This separation is deliberate, allowing programmers to write code without worrying about the underlying electronics.

“Brains operate at the interface of discrete and continuous domains,” the researchers explain in their paper. Unlike digital systems that represent everything through binary states, neural tissue implements computations directly through continuous physical processes—ion flows, membrane voltages, electric fields—that unfold in real time without symbolic mediation.

Take a single neuron receiving thousands of inputs from other cells. In artificial neural networks, this process gets reduced to a simple mathematical operation: multiply each input by a weight, sum them up, pass the result through a function. Real neurons do something far stranger. Their branching dendrites perform distributed computations through continuous electrical dynamics, generating local spikes that travel both toward and away from the cell body, detecting the order and timing of inputs in ways that digital systems cannot easily replicate.
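The reduction described above is easy to write down; a minimal sketch in Python (the sigmoid activation is an illustrative choice for the sketch, not something specified in the paper):

```python
import math

# The "reduced" artificial neuron the paper contrasts with real dendrites:
# weight each input, sum them, pass the total through a nonlinearity.
def artificial_neuron(inputs, weights, bias=0.0):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation (illustrative)

print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.4, 0.1]))
```

Everything a unit in a standard artificial network does fits in those few lines; the paper's point is that a biological neuron's dendritic tree performs far richer computation than this weighted sum.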

Why Neurons Outperform Artificial Networks

The paper reviews research showing that dendritic action potentials enable a single biological neuron to perform computations comparable to those typically distributed across multi-layer artificial networks. These computations arise from the interplay between continuous membrane potentials and discrete spiking events—a hybrid mode fundamentally unavailable to systems that operate purely through discrete symbol manipulation.

This difference reaches beyond individual neurons. Milinkovic and Aru propose that biological brains exhibit what they call “scale inseparability,” where processes at different organizational levels continuously co-determine each other. Molecular events inside cells influence network dynamics spanning millions of neurons, while brain-wide oscillations simultaneously constrain what individual synapses can do. These scales cannot be cleanly separated.

Digital systems, by contrast, are designed around scale separation. An algorithm runs independently of the hardware implementing it. High-level programs compile down to machine code, which executes on circuits, which rely on transistor physics—but each level operates independently. Change the hardware and the algorithm remains functionally identical.

The Energy Problem That Shaped Consciousness

Energy scarcity drives this difference. Although the brain represents only 2% of body mass, it consumes roughly 20% of total metabolic output. Rather than requiring ever-more energy to handle increasingly complex tasks, the brain reuses computational work performed at one scale to guide computations at other scales. Continuous processes aggregate discrete events into more reliable signals, and these aggregated signals feed back to constrain the discrete events that generated them.

The researchers call this “hybrid computation”—computation that is simultaneously continuous and discrete, where the algorithm cannot be separated from its physical implementation because the physics is the algorithm. Information at one scale is essential for determining what can be computed at another scale, yet those scales continuously generate and constrain each other in bidirectional loops.

The paper examines one example at the molecular level. Protein Kinase A molecules inside neurons function as evidence accumulators, continuously integrating activity until reaching a threshold that triggers calcium surges—discrete events that propagate through neural networks. The apparently discrete calcium event depends on continuous accumulation of evidence at the subcellular level, creating a system where continuous and discrete processes are inseparable.
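As a toy illustration of that integrate-to-threshold idea (an analogy only; the leak and threshold values here are invented for the sketch, not biological parameters):

```python
# Toy integrate-to-threshold accumulator, loosely analogous to the PKA
# mechanism described above: continuous accumulation of evidence until a
# threshold fires a discrete event (the "calcium surge").
def accumulate_to_threshold(inputs, threshold=1.0, leak=0.05):
    level, events = 0.0, 0
    for x in inputs:
        level = max(0.0, level * (1 - leak) + x)  # leaky continuous integration
        if level >= threshold:                    # discrete event fires
            events += 1
            level = 0.0                           # reset after the event
    return events

print(accumulate_to_threshold([0.3, 0.3, 0.5, 0.1, 0.9, 0.2]))  # -> 2
```

The continuous variable (`level`) and the discrete events are inseparable here: how many events fire depends on the full history of accumulation, which is the hybrid character the authors emphasize.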

Electric fields provide another example. Neurons communicate not only through synapses—the discrete connection points between cells—but also through ephaptic coupling, where local electric fields modulate the excitability of neighboring neurons without direct contact. Research shows that weak endogenous fields of just a few millivolts per millimeter can synchronize neural firing and amplify oscillatory patterns. These continuous field effects shape when and how discrete spikes occur, while the discrete spikes generate the continuous fields.

Digital computers can simulate these processes by approximating continuous dynamics with very fine discrete time steps. But simulation is not the same as implementation. When a computer simulates water flowing through a pipe, the computer doesn’t get wet. The researchers argue that consciousness may depend on computations that must be implemented in continuous physical dynamics, not merely simulated through discrete approximations.

The paper draws on formal mathematical results to support this claim. Alfred Tarski proved that arithmetic over real numbers admits a complete decision procedure—every statement can be algorithmically resolved—while natural number arithmetic, the foundation of digital computers, is fundamentally incomplete. This contrast suggests that continuous computation may support forms of processing that are awkward or inefficient to reproduce in purely discrete systems, even if they remain computable in principle.

What Artificial Consciousness Would Actually Require

For artificial intelligence, the implications challenge the entire trajectory of the field. Large language models, neuromorphic chips, and even systems designed to mimic brain architecture all operate fundamentally as symbol manipulators on von Neumann hardware. They maintain the clean separation between algorithm and implementation that makes them programmable but may preclude the computational mode underlying consciousness.

The researchers don’t claim that only biological tissue can support consciousness. Rather, they outline three criteria any conscious artificial system would need to meet. First, hybrid computation combining continuous dynamics with discrete events governed by real physical time. Second, scale-inseparability with metabolic embedding, where energy constraints shape the computational architecture itself. Third, dynamico-structural co-determination, where the system continuously modifies its own physical structure.

The paper reviews some emerging technologies that hint at alternatives. Laboratory-grown neural cultures called DishBrains have shown remarkable sampling efficiency in control tasks compared to deep reinforcement learning baselines, despite containing only a few hundred thousand neurons. These systems leverage the intrinsic hybrid dynamics of biological matter. Researchers have also developed fluidic memristors that implement computations through ion transport in microchannels, creating history-dependent dynamics through chemistry rather than electronics.

These systems represent a fundamental departure from digital computation. They don’t separate algorithm from implementation. They don’t discretize time into processing steps. They don’t maintain clean boundaries between scales. They implement computations directly in continuous physical processes that unfold in real time, shaped by energy constraints and substrate properties.

Whether such systems could support consciousness remains an open question. But the theoretical framework makes clear that getting there would require abandoning the computer metaphor that has dominated both neuroscience and artificial intelligence for decades. Brains aren’t computers running clever software. They’re continuous physical systems that exploit hybrid dynamics and scale integration to perform computations that digital systems may be fundamentally incapable of replicating.

Source : https://studyfinds.org/what-makes-brains-conscious-that-computers-lack/


Isro’s Christmas triumph: Bahubali LVM3 launched with heaviest foreign satellite

BlueBird Block-2, the largest commercial comms satellite for LEO to date, boasts a 223 sq m phased-array antenna to beam 4G/5G signals straight to unmodified smartphones.

LVM3 M6 mission launches from Sriharikota. (Photo: Isro)

India’s space agency delivered a festive triumph on Christmas Eve, executing a flawless LVM3 launch that propelled the heaviest foreign satellite ever from Indian soil into orbit.

The LVM3-M6 mission, dubbed “Bahubali” for its mighty 640-tonne liftoff mass, roared off the Second Launch Pad at Satish Dhawan Space Centre in Sriharikota at precisely 08:54 IST on December 24, 2025.

Carrying AST SpaceMobile’s BlueBird Block-2, a humongous 6.5-tonne communication satellite, this sixth operational flight marked Isro’s 101st orbital success and a commercial coup via NewSpace India Limited (NSIL).

Millions tuned into the live stream, witnessing the three-stage behemoth (two S200 solid boosters, an L110 liquid core, and a C25 cryogenic upper stage) deploy the payload into a 520-600 km Low Earth Orbit without a hitch.

The Isro chief said in his post-launch address, “This marks a new milestone for India, with the heaviest satellite ever launched from Indian soil. LVM3 has continued its excellent track record, delivering an orbital performance with less than 2 km of dispersion, among the best in the global space arena.”

Revolutionary Payload Ushers Direct-to-Phone Broadband

BlueBird Block-2, the largest commercial communications satellite in LEO to date, carries a 223 sq m phased-array antenna that beams 4G/5G signals straight to unmodified smartphones.

This direct-to-cell tech promises to connect the remote Himalayas, vast oceans, and arid deserts, partnering with 50+ global mobile operators to bridge digital divides for billions.

US-based AST SpaceMobile hailed the deployment as a pivotal step in their constellation, rivaling Starlink with seamless broadband sans ground towers.

A showcase of precision engineering, LVM3 closes out 2025 after earlier missions including Chandrayaan-3 and the OneWeb batch launches. Public galleries buzzed with cheers as the rocket’s signature plume lit the Andhra Pradesh skies.

Source : https://www.indiatoday.in/science/story/isro-launches-heaviest-foreign-satellite-from-indian-soil-aboard-lvm3-2840862-2025-12-24

Hubble Telescope Captures Rare Double Collision Building Planets 25 Light-Years Away

This artist’s concept shows the violent collision of two massive objects in orbit around the star Fomalhaut. (Credit: NASA, ESA, STScI, Ralf Crawford (STScI))

Astronomers just witnessed the aftermath of something rarely seen: a fresh dust cloud from a cosmic collision around a nearby star, and then another one appearing in nearly the same spot.

Twenty years ago, the Hubble Space Telescope spotted a mysterious point of light near the star Fomalhaut, about 25 light-years from Earth. Scientists named it Fomalhaut b and debated whether it was a dusty planet or something else entirely. Now, observations from 2023 reveal a second bright spot has appeared in nearly the same location, strongly supporting the collision explanation over the planet hypothesis.

Both objects are dust clouds generated by massive collisions between planetesimals, rocky bodies tens of kilometers across orbiting in Fomalhaut’s debris belt. The findings appear in the journal Science.

“Spotting a new light source in the dust belt around a star was surprising. We did not expect that at all,” said Jason Wang, assistant professor of physics and astronomy at Northwestern University. “Our primary hypothesis is that we saw two collisions of planetesimals — small rocky objects, like asteroids — over the last two decades.”

A Vanishing Act Solves a Two-Decade Mystery

Fomalhaut is a young star surrounded by a ring of rocky debris that resembles our solar system’s Kuiper Belt, just far more active. The first collision that created Fomalhaut cs1 (circumstellar source 1) happened before 2004, producing a cloud with about 10²⁰ grams of dust in total, with the tiniest grains doing most of the shining.

When researchers revisited Fomalhaut in September 2023, they found something unexpected. A new dust cloud, designated Fomalhaut cs2, had materialized at the inner edge of the debris belt. Meanwhile, the original cs1 could no longer be clearly detected, likely having faded and expanded beyond visibility as radiation from the star dispersed it into space.

“This is certainly the first time I’ve ever seen a point of light appear out of nowhere in an exoplanetary system,” said lead author Paul Kalas, an astronomer at the University of California, Berkeley. “It’s absent in all of our previous Hubble images, which means that we just witnessed a violent collision between two massive objects and a huge debris cloud unlike anything in our own solar system today.”

Both dust clouds appeared within 8 degrees of each other on the debris ring. The research team calculated the probability of finding a second source this close to the first at only 10 percent.

Cosmic Demolition on a Massive Scale

Watching planetesimal collisions unfold in real time is extraordinarily rare. In our own solar system, astronomers have documented only a handful of similar events involving much smaller asteroid disruptions. The Fomalhaut collisions involve bodies potentially 30 kilometers in radius, roughly the size of large asteroids, producing dust masses nine orders of magnitude greater than anything observed near Earth.

“Collisions of planetesimals are extremely rare events, and this marks the first time we have seen one outside our solar system,” Wang said. “Studying planetesimal collisions is important for understanding how planets form.”

The research team estimates that maintaining Fomalhaut’s dusty ring requires about 22 million collision events like cs1 over the star’s 440-million-year lifetime. That translates to roughly one major impact every 20 years, though most collisions produce debris too faint for current telescopes to detect.
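That rate is simple arithmetic on the figures quoted above, and it checks out; a quick sketch in Python (numbers taken from the article):

```python
# The article's collision-rate arithmetic: ~22 million cs1-scale impacts
# needed over Fomalhaut's ~440-million-year lifetime.
star_age_yr = 440e6
required_collisions = 22e6

years_per_impact = star_age_yr / required_collisions
print(years_per_impact)  # 20.0 -> roughly one major impact every 20 years
```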

Each collision releases approximately 4 percent of the impacting bodies’ mass as tiny dust grains smaller than 3 micrometers. Radiation from the central star then pushes these particles outward at accelerating speeds, causing the clouds to expand and eventually fade below detection thresholds.

How Dust Clouds Disappear Into Space

The original cs1 cloud appeared to follow a normal orbit between 2004 and 2012 before suddenly accelerating outward. Researchers believe this happened when the expanding cloud became optically thin, allowing radiation to push on all the dust simultaneously rather than just the star-facing surface.

By 2013, cs1 was moving outward at nearly 12 kilometers per second—more than three times faster than its initial velocity. If that acceleration continued for another decade, the cloud would have traveled roughly 50 astronomical units farther from the star, becoming too faint to see.
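A back-of-the-envelope check on that estimate, using only the article's figures (a hedged sketch in Python): at a constant 12 km/s the cloud covers about 25 AU in a decade, so the quoted ~50 AU under continued acceleration is the right ballpark.

```python
# How far does material moving at 12 km/s travel in a decade, in AU?
# This is a constant-speed lower bound; the cloud was still accelerating.
KM_PER_AU = 1.495978707e8        # kilometres in one astronomical unit
SECONDS_PER_YEAR = 365.25 * 86400

speed_km_per_s = 12.0            # cs1's outward speed by 2013 (from the article)
decade_in_s = 10 * SECONDS_PER_YEAR

distance_au = speed_km_per_s * decade_in_s / KM_PER_AU
print(round(distance_au, 1))     # about 25 AU at constant speed
```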

Why Two Crashes Happened So Close Together

The spatial clustering of these collisions raises questions about whether they’re truly random. One possibility involves planetesimals scattered from an inner dust belt that Fomalhaut also hosts. Infrared observations have detected a misaligned intermediate belt centered at 94 astronomical units. However, the researchers note this belt’s geometry doesn’t align well with where the two dust clouds appeared, about 70 degrees away from the predicted intersection point.

Another explanation involves gravitational resonances with undiscovered planets. If Earth-mass worlds orbit within the debris field, they could trap planetesimals in specific orbital patterns, concentrating impacts in certain locations and creating collision hotspots.

The Fomalhaut system provides a window into the chaotic early histories of planetary systems. Our own solar system went through similar violent phases billions of years ago, grinding down larger bodies into the dust and debris we observe today. Catching these collisions as they happen around other stars helps astronomers understand how planetary systems evolve from dusty disks into stable configurations.

Source : https://studyfinds.org/hubble-telescope-captures-rare-double-collision/

Liver Cells Under Chronic Dietary Stress Show Cancer Warning Signs Years Before Tumors Appear

Credit: NMK-Studio on Shutterstock

Liver cells overwhelmed by dietary fat essentially forget how to be liver cells, according to research. Under chronic stress, they progressively shut down the genes that define their normal responsibilities and switch into a bare-bones survival mode. And these changes, the study shows, can signal elevated cancer risk years before any tumor appears.

Scientists at MIT and Harvard tracked this cellular transformation in real time, watching liver cells reprogram themselves over 15 months as mice consumed high-fat diets. The findings, published in Cell, reveal that the body’s short-term strategy for keeping liver cells alive under constant dietary assault creates conditions that make cancer far more likely down the road.

The discovery could reshape how doctors monitor the more than 33% of people globally who have metabolic liver disease. Instead of waiting for tumors to show up on scans, physicians might eventually use molecular fingerprints from liver biopsies to identify high-risk patients when cellular dysfunction first begins—potentially a decade or more before cancer develops.

The Cellular Trade-Off Nobody Wants

Consider a liver cell as an employee at a company who makes critical products for the entire organization. Under normal conditions, that employee handles hundreds of specialized tasks: processing nutrients, manufacturing proteins the body needs, cleaning up toxins.

But under relentless stress from excess dietary fat, that same employee faces an impossible choice: keep doing the specialized work that benefits the whole organization, or focus entirely on personal survival. The cells choose survival.

Stressed liver cells ramped down production of the proteins and enzymes that perform the liver’s signature jobs. They made less of the enzymes controlling ketogenesis (the process of converting fat into fuel for other organs) and the urea cycle (which handles nitrogen waste). They cut back on albumin and clotting factors, proteins the blood carries throughout the body.

At the same time, they activated an entirely different playbook. They switched on gene programs that resemble early liver development and cranked up proteins that block cell death. They increased certain cholesterol-making enzymes while decreasing the ketone-producing ones, even though both pathways work from the same raw materials.

After 15 months on high-fat diets, some mice spontaneously grew liver tumors without any genetic manipulation or cancer-causing chemicals. Those tumors showed even more extreme versions of the same cellular reprogramming.

Warning Signs That Appear First

Incredibly, the cancer-associated changes appeared long before actual tumors.

At just six months of dietary stress, liver cells already showed signs of preparing the ground for future problems. Specific regions of their DNA (ones that control genes involved in cell growth and cancer) became more accessible, like files that had been pulled from storage and placed on a desk, ready to be opened. These regions stayed poised for months until tumors eventually formed.

When researchers examined human liver tissue from patients at different stages of fatty liver disease, they found the same progression. People with early-stage disease already showed activation of genetic programs characteristic of liver tumors. More tellingly, the strength of these early signatures was linked to which patients developed hepatocellular carcinoma, the most common liver cancer, up to 15 years later.

The same gene programs appeared in liver cancers that arose from different causes: metabolic disease, viral hepatitis, and alcohol-related damage, suggesting common pathways through which diverse types of chronic injury may contribute to cancer formation.

One Enzyme Connects the Dots

Of all the changes the team documented, one enzyme stood out: HMGCS2. This protein normally runs the first critical step in ketogenesis, helping the liver convert fat breakdown products into ketone bodies that fuel the brain and muscles when food is scarce.

HMGCS2 levels dropped steadily as mice stayed on high-fat diets. When scientists created mice genetically engineered to lack this enzyme in liver cells, those animals showed dramatically accelerated cellular dysfunction. More critically, they were far more vulnerable to tumor formation when exposed to cancer-causing genetic changes.

In human patients, lower HMGCS2 in non-cancerous liver tissue linked to both worsening liver disease and higher risk of eventual cancer. The enzyme’s decline appears to be both a result of chronic stress and an accelerant of further problems. Without enough ketogenesis happening, metabolic intermediates may pile up and get shunted into processes that alter how genes are read and expressed, potentially helping explain how dietary stress rewires cellular behavior.

Molecular Switches That Tip the Balance

To figure out what controls this widespread cellular reprogramming, researchers built a computational tool that predicted which molecular master switches might be calling the shots.

Two proteins that control gene activity, SOX4 and RELB, emerged as key players. Normally quiet in adult liver, both became more active as metabolic disease worsened in mice and humans.

When scientists artificially boosted SOX4 in liver cells, it triggered many of the same changes seen during chronic dietary stress: cells activated fetal development programs, suppressed mature liver functions, and kept dividing even under conditions that would normally stop proliferation. Higher SOX4 and RELB in non-cancerous liver tissue was associated with worse outcomes in patients who eventually developed cancer.

From Research to Real-World Application

The findings point toward a fundamentally different approach to cancer prevention in high-risk patients. Rather than relying on imaging that spots tumors after they form, doctors might one day measure a panel of molecular markers (HMGCS2 levels, SOX4 and RELB activity, and specific gene program scores) to stratify patients by cancer risk.

Some existing treatments may already affect these pathways. Resmetirom, recently approved for metabolic liver disease with scarring, targets a molecular switch that the computational analysis flagged as important for these stress responses.

Of course, there is one big unanswered question. Can these cellular changes be reversed? Weight loss and newer medications like GLP-1 receptor agonists improve liver tissue appearance, but researchers don’t yet know if they erase the deeper molecular reprogramming that occurred during months or years of metabolic stress. The elevated cancer risk might linger even after the liver looks healthier.

Source : https://studyfinds.org/liver-cells-chronic-dietary-stress-cancer-warning-signs/

People Are Getting Their News From AI – and It’s Altering Their Views

When a bot brings you the news, who built it and how it presents the information matter. (Image by Tero Vesalainen on Shutterstock)

Meta’s decision to end its professional fact-checking program sparked a wave of criticism in the tech and media world. Critics warned that dropping expert oversight could erode trust and reliability in the digital information landscape, especially when profit-driven platforms are mostly left to police themselves.

What much of this debate has overlooked, however, is that large language models are increasingly used to write the news summaries, headlines and content that catch your attention long before traditional content moderation mechanisms can step in. The issue isn’t clear-cut cases of misinformation or harmful subject matter going unflagged in the absence of content moderation. What’s missing from the discussion is how ostensibly accurate information is selected, framed and emphasized in ways that can shape public perception.

Over time, large language models influence the way people form opinions by generating the information that chatbots and virtual assistants present. These models are now also being built into news sites, social media platforms and search services, making them a primary gateway to information.

Studies show that large language models do more than simply pass along information. Their responses can subtly highlight certain viewpoints while minimizing others, often without users realizing it.

Communication Bias

My colleague, computer scientist Stefan Schmid, and I, a technology law and policy scholar, show in a forthcoming paper in the journal Communications of the ACM that large language models exhibit communication bias: a tendency to highlight particular perspectives while omitting or diminishing others. Such bias can influence how users think or feel, regardless of whether the information presented is true or false.

Empirical research over the past few years has produced benchmark datasets that correlate model outputs with party positions before and during elections. They reveal variations in how current large language models deal with public content. Depending on the persona or context used in prompting large language models, current models subtly tilt toward particular positions – even when factual accuracy remains intact.

These shifts point to an emerging form of persona-based steerability – a model’s tendency to align its tone and emphasis with the perceived expectations of the user. For instance, if one user describes themselves as an environmental activist and another as a business owner, a model may answer the same question about a new climate law by emphasizing different, yet factually accurate, concerns for each: for the activist, that the law does not go far enough in promoting environmental benefits; for the business owner, that it imposes regulatory burdens and compliance costs.

Such alignment can easily be misread as flattery. The phenomenon is called sycophancy: Models effectively tell users what they want to hear. But while sycophancy is a symptom of user-model interaction, communication bias runs deeper. It reflects disparities in who designs and builds these systems, what datasets they draw from and which incentives drive their refinement. When a handful of developers dominate the large language model market and their systems consistently present some viewpoints more favorably than others, small differences in model behavior can scale into significant distortions in public communication.

What Regulation Can and Can’t Do

Modern society increasingly relies on large language models as the primary interface between people and information. Governments worldwide have launched policies to address concerns over AI bias. For instance, the European Union’s AI Act and the Digital Services Act attempt to impose transparency and accountability. But neither is designed to address the nuanced issue of communication bias in AI outputs.

Proponents of AI regulation often cite neutral AI as a goal, but true neutrality is often unattainable. AI systems reflect the biases embedded in their data, training and design, and attempts to regulate such bias often end up trading one flavor of bias for another.

And communication bias is not just about accuracy – it is about content generation and framing. Imagine asking an AI system a question about a contentious piece of legislation. The model’s answer is not only shaped by facts, but also by how those facts are presented, which sources are highlighted and the tone and viewpoint it adopts.

This means that addressing the bias problem is not merely a matter of fixing biased training data or skewed outputs, but of confronting the market structures that shape technology design in the first place. When only a few large language models mediate access to information, the risk of communication bias grows. Apart from regulation, then, effective bias mitigation requires safeguarding competition, user-driven accountability and regulatory openness to different ways of building and offering large language models.

Most regulations so far aim at banning harmful outputs after the technology’s deployment, or forcing companies to run audits before launch. Our analysis shows that while prelaunch checks and post-deployment oversight may catch the most glaring errors, they may be less effective at addressing subtle communication bias that emerges through user interactions.

Beyond AI regulation

It is tempting to expect that regulation can eliminate all biases in AI systems. In some instances, these policies can be helpful, but they tend to fail to address a deeper issue: the incentives that determine the technologies that communicate information to the public.

Our findings clarify that a more lasting solution lies in fostering competition, transparency and meaningful user participation, enabling consumers to play an active role in how companies design, test and deploy large language models.

Source : https://studyfinds.org/people-get-news-from-ai/

4 Tiny NASA Satellites Capture Photos Of The Sun Like We’ve Never Seen It Before

Screenshot of NASA PUNCH mission imaging of the Sun. (Credit: SwRI)

Four spacecraft working as a single virtual instrument are producing unprecedented images of the Sun and its outer atmosphere, revealing how the star’s corona transforms into the solar wind that fills our solar system. Less than a year after launch, NASA’s PUNCH mission has captured the Sun like never before, showing solar activity sweeping past the Moon and planets while tracking enormous space weather events as they race toward Earth.

The Southwest Research Institute-led mission uses four synchronized spacecraft, spread across 8,000 miles, to build wide-angle views impossible from any single vantage point. Since launching in March, PUNCH (Polarimeter to Unify the Corona and Heliosphere) has documented massive coronal mass ejections, monitored a solar storm that lit up American skies with auroras in November, and discovered an unexpected talent for tracking comets invisible to other telescopes.

“PUNCH imaging gives us a unique view on the pageantry of the planets and reveals the grandeur of our Sun in the cosmos,” said Dr. Craig DeForest, PUNCH mission principal investigator at Southwest Research Institute. “Seeing solar activity sweeping across the moon, planets and even passing comets gives us a sense of place in our solar system. It reminds me of the impact of the blue marble image of the Apollo era, though PUNCH data is more of a golden fishbowl view of our neighborhood in the cosmos. We live here.”

DeForest presented the mission’s accomplishments during a media roundtable at the AGU25 conference on December 16.

Creating an Earth-Sized Virtual Telescope

The four spacecraft orbit Earth along the day-night boundary, positioned to maintain an unobstructed view of the Sun at all times. Three satellites carry Wide Field Imagers developed by SwRI to observe the extremely faint outermost portions of the Sun’s corona and solar wind. A fourth spacecraft hosts a coronagraph called the Narrow Field Imager, provided by the Naval Research Laboratory, which blocks direct sunlight to capture details in the Sun’s atmosphere.

Each camera snaps three polarized images every four minutes and an unpolarized calibration image every eight minutes. Ground processing stitches these individual views into seamless mosaics spanning up to 45 degrees from the Sun in all directions.

Getting clear images of the solar wind requires extraordinary engineering. Deep baffles inside the wide-field cameras reduce direct sunlight by more than 16 orders of magnitude, comparable to the ratio between a human’s mass and a cold virus. Additional processing removes the background starfield, eliminating over 99% of the light in each frame to reveal the faint glow of solar particles streaming through space.

“PUNCH will make the invisible visible,” DeForest said when the mission launched. “Deep baffles in our wide-field imagers reduce direct sunlight by over 16 orders of magnitude or a factor of 10 million billion — the ratio between the mass of a human and the mass of a cold virus. Then state-of-the-art processing on the ground removes the background starfield, over 99% of the light in each image, to reveal the extremely faint glimmer of the solar wind.”

Tracking Space Weather Violence

PUNCH arrived just in time to witness major solar activity. The mission captured a massive coronal mass ejection in early November that triggered colorful auroras across the United States.

“PUNCH can actually show us directly the violence of space weather as clouds of electrons cross the solar system,” DeForest said. “Viewing the corona and solar wind as a single system provides a big-picture perspective essential to helping scientists better understand and predict space weather. This forecasting is critical to protecting astronauts, space satellites and electric grid technology from these events.”

By tracking these solar storms in three dimensions, PUNCH could give forecasters a clearer picture of when and how space weather will impact Earth. The mission’s polarized imaging helps scientists discern the exact trajectory and speed of coronal mass ejections as they move through the inner solar system, improving on current instruments that only measure the corona itself.

Bonus Science: Tracking Invisible Comets

Beyond its primary mission of imaging the Sun, PUNCH has proven remarkably adept at studying comets. The spacecraft tracked the third identified interstellar comet 3I/ATLAS as it traveled through the inner solar system during a period when bright sunlight rendered it invisible to other telescopes and space assets.

PUNCH also monitored Comet SWAN with unprecedented frequency from August 25 to October 2, capturing clear images every four minutes for nearly 40 days. That may represent the longest continuous observation of a comet to date. The mission continues monitoring Comet Lemmon, which made its closest approach to Earth on October 21.

“We’ve discovered some incredible bonus science that PUNCH performs, tracking comets and other objects,” DeForest said. “We were able to track the third identified interstellar comet 3I/ATLAS as it traveled through the inner solar system while bright sunlight rendered it invisible to other telescopes and space assets.”

How the Mission Operates

After launch on March 11, the spacecraft began a 90-day commissioning phase. By August 7, all four had maneuvered into their final science orbits. SwRI’s Solar System Science and Exploration Division in Boulder, Colorado, leads the mission and operates all four spacecraft. The Science Operations Center began sharing data publicly through NASA’s Solar Data Analysis Center in June 2025.

The mission complements observations from NASA’s Parker Solar Probe, STEREO, SOHO, and the NASA/European Space Agency Solar Orbiter, which examine the corona at smaller scales and from different vantage points. Together, these missions provide scientists with the most complete view of the Sun’s outer atmosphere and solar wind ever assembled.

Source : https://studyfinds.org/satellites-capture-sun-photos/

Penn State Scientists Reverse-Engineer This Backyard Bug’s Natural ‘Invisibility Cloak’

Close-up shot of a leafhopper. (© ddt – stock.adobe.com)

Leafhoppers, insects smaller than your thumbnail, have been mastering the art of staying hidden for millions of years. They coat themselves with microscopic particles that work like nature’s own invisibility cloak, making them harder to spot by cutting down the telltale glints that would otherwise give them away to predators.

Now, researchers at Pennsylvania State University have figured out how to manufacture these biological anti-glare devices in their lab. The breakthrough could lead to everything from better light-handling surfaces for energy tech to improved military camouflage.

The secret weapon is a collection of hollow, soccer ball-shaped particles called brochosomes. Each one ranges from hundreds of nanometers to a couple micrometers across and contains precisely arranged holes that scatter light in ways that dramatically reduce reflective glints.

Nature’s Four-Stage Assembly Line

Leafhoppers manufacture these “invisibility” particles inside specialized organs through a process that puts human factories to shame. The insects start by creating protein clusters near cellular structures, then develop them into surface-bumped packages wrapped in tiny cellular membranes. These evolve into fully formed hollow spheres as their cores dissolve away.

The finished brochosomes range from 250 nanometers to 2.5 micrometers across. Their surfaces sport pentagon and hexagon patterns reminiscent of soccer balls, with holes measuring 50 nanometers to 1 micrometer in diameter.

Mechanical engineering professor Tak-Sing Wong and graduate student Jinsol Choi developed their artificial version based on a key insight: molecules with both water-loving and water-avoiding parts can self-assemble into these patterns. In the lab, they tune that balance using block copolymers.

Microfluidics Meets Molecular Engineering

The team’s breakthrough, published in ACS Nano, came from mimicking nature’s process using entirely artificial materials. Their microfluidic system creates tiny droplets containing dissolved polymers suspended in surfactant-treated water. As the solvent evaporates, surface tension forces guide the polymers into the same soccer ball structures found on real leafhoppers.

By adjusting the molecular weight and water-attraction properties of their synthetic polymers, the researchers can dial in specific particle shapes and pore patterns. Lower surface tension produces the pentagon and hexagon holes that match natural brochosomes. Higher surface tension creates circular pores instead.

Through systematic testing of 11 different polymer recipes, the team mapped exactly which molecular ingredients produce which brochosome designs. Success requires polymers with 10 to 23 percent water-loving molecular sections and molecular weights below 235 kilograms per mole, parameters that closely match the proteins found in actual leafhopper brochosomes.

Manufacturing Speed That Defies Belief

The system’s production rate reaches more than 100,000 synthetic brochosomes per second—several orders of magnitude faster than traditional nanofabrication methods while maintaining precise control over size and shape.

The synthetic particles successfully replicated five distinct natural brochosome designs from different leafhopper species. Sizes ranged from 390 nanometers to 2 micrometers, with holes between 30 and 130 nanometers across. Optical tests confirmed the artificial versions matched their natural counterparts in dramatically reducing unwanted reflections across ultraviolet and visible light.

When applied to transparent surfaces, the synthetic brochosomes reduced reflective glare by 80 to 96 percent across the visible spectrum. This performance matches or beats the anti-reflective properties measured on actual leafhopper wings.

Beyond Stealth Applications

While military camouflage grabs headlines, the technology’s potential extends far beyond warfare. Some energy devices could benefit from coatings that waste less light, but that would need dedicated testing. The authors also point to biomedicine, including drug delivery, as a possible direction. That’s still a next-step idea, not something this study tested.

The manufacturing approach might also work for creating artificial versions of other biological systems, ranging from viruses to pollen grains, as the researchers noted in their paper.

More broadly, the combination of controllable size, shape, and surface properties opens doors to applications not yet imagined.

Source : https://studyfinds.org/scientists-reverse-engineer-bugs-invisibility-cloak/

Scientists Find Unprecedented Lemon-Shaped Planet That Shouldn’t Exist

An artist’s illustration of what exoplanet PSR J2322-2650b might look like. Because of its extremely tight orbit, the planet’s entire year—the time it takes to go around the pulsar—is just 7.8 hours. (Credit: NASA, ESA, CSA, Ralf Crawford (STScI))

Astronomers have discovered a strange lemon-shaped planet that doesn’t make sense. A Jupiter-sized world orbiting a dead star has an atmosphere whose spectrum is dominated by carbon molecules, with a composition so extreme it has left scientists searching for an explanation of how such an object could form.

PSR J2322-2650b (the strange planet) circles a pulsar (the ultra-dense, rapidly spinning core left behind after a massive star explodes) every 7.8 hours. The pulsar blasts it with gamma rays, radiation even more energetic than X-rays, that likely heat the atmosphere to temperatures reaching 1,900 Kelvin (about 2,960 degrees Fahrenheit).

Using the James Webb Space Telescope to observe the entire orbit, researchers found molecular carbon dominating the spectrum so completely that oxygen, nitrogen, and hydrogen (elements typically abundant in planetary atmospheres) appear strongly depleted or weren’t clearly detected.

The carbon-to-oxygen ratio exceeds 100. The carbon-to-nitrogen ratio tops 10,000. No known planet orbiting a normal star, and no current theory about how pulsar companions form, can explain these numbers.

“The planet orbits a star that’s completely bizarre — the mass of the Sun, but the size of a city,” explained the University of Chicago’s Michael Zhang, the principal investigator on this study, in a statement. “This is a new type of planet atmosphere that nobody has ever seen before.”

An Atmosphere Built From Carbon Chains

When light passes through the planet’s atmosphere, different molecules absorb specific colors. By analyzing which colors are missing, astronomers can identify what molecules are present. In this case, the spectrum revealed molecules rarely seen in planetary atmospheres: C3 (three carbon atoms bonded together) and C2 (two carbon atoms).

These carbon chains absorbed light at specific infrared wavelengths, invisible to human eyes. C3 showed up as a sudden drop at 3.014 microns. C2 created a sawtooth pattern between 2.45 and 2.85 microns. Additional absorption features suggested the presence of carbon-hydrogen bonds, though the exact molecules remain uncertain.

To understand how unusual this is, consider what should happen in a hot atmosphere. Carbon and oxygen atoms strongly prefer to bond together, forming carbon monoxide. The only way to have more molecular carbon than carbon monoxide is if carbon outnumbers oxygen by huge amounts—in this case, by more than 2,000 to one. Similarly, carbon and nitrogen should bond together unless carbon outnumbers nitrogen by more than 10,000 to one.

“The extreme carbon enrichment poses a severe challenge to the current understanding of ‘black-widow’ companions, which were expected to consist of a wider range of elements due to their origins as stripped stellar cores,” the researchers wrote.

How Black Widows Form, And Why This One Breaks the Rules

Black widow systems get their name from spiders that eat their mates. In space, a pulsar slowly destroys its companion star. The pulsar’s intense radiation and gravitational pull tear away the star’s outer layers, eventually leaving behind a small, dense remnant.

This process should produce an object made mostly of helium if the stripping happens early enough, before the star begins converting helium into carbon in its core through nuclear fusion. The remnant should contain whatever elements existed in the star’s core at that moment, typically a mix of helium, carbon, nitrogen, and oxygen in moderate ratios.

PSR J2322-2650b doesn’t fit this picture. The researchers explored alternative explanations, each with its own problems.

Some rare stars show elevated carbon levels, with carbon-to-oxygen ratios reaching 12 to 81. While higher than typical stars, these values still fall far short of what this planet displays.

Other aging stars convert helium into carbon through a nuclear process, creating what astronomers call “carbon stars.” These reach carbon-to-oxygen ratios of only a few. They produce carbon-rich dust in their outflows, offering another potential carbon source. However, the mechanism for concentrating that dust into a Jupiter-mass planet with such extreme ratios remains unclear.

In one illustrative model, the planet consists mostly of helium with roughly 1% carbon by mass in its interior. A planet made entirely of carbon would be much smaller and denser than what observations show—about one-third Jupiter’s radius instead of roughly matching it. But if the planet is mostly helium inside, what process concentrated so much carbon in the atmosphere we can see?

Gamma-Ray Heat and Westward Winds

The planet’s heating differs from anything seen on worlds orbiting normal stars. Gamma rays likely penetrate deep into the atmosphere instead of warming just the surface layers the way visible sunlight does on Earth.

In the models, these high-energy photons deposit their energy at a depth where the pressure reaches about 10 bars—roughly 10 times the air pressure at sea level on Earth. This deep heating drives the planet’s wind patterns differently than on normal hot Jupiters (giant planets orbiting close to their stars).

The researchers tracked how the planet’s light shifted to bluer or redder wavelengths as it moved toward or away from Earth in its orbit. From these measurements, they determined the planet orbits at a tilt of 31 degrees (imagine tilting a hula hoop from flat by about one-third of a right angle) and has a mass between 1.4 and 2.4 times Jupiter’s mass.

The temperature structure shows dramatic day-night contrasts. The nightside maintains a relatively uniform 900 Kelvin (about 1,160 degrees Fahrenheit) with a smooth spectrum, suggesting either consistent temperature throughout that side or a thick cloud deck blocking our view. The dayside reaches 2,300 Kelvin (about 3,680 degrees Fahrenheit) at its hottest points.

Surprisingly, the hottest spot doesn’t line up with the point facing the pulsar. Instead, the temperature peak appears shifted westward by about 12 degrees, indicating powerful winds blowing opposite to the planet’s rotation direction.

Computer models of rapidly rotating planets predict exactly this behavior. Most hot Jupiters orbiting normal stars have winds flowing eastward around their equators, like a jet stream. But when a planet spins faster than once every 10 hours or so, the pattern flips. Westward winds dominate away from the equator. PSR J2322-2650b offers strong evidence consistent with this predicted pattern.

Source : https://studyfinds.org/unprecedented-lemon-shaped-planet-found/

Hacked Phones and Wi-Fi Surveillance Have Replaced Cold War Spies and Radio Waves in the Delusions of People with Schizophrenia

Everyday tech of modern life can take on sinister dimensions for people with thought disorders. (Credit: Roman Samborskyi on Shutterstock)

A young woman starts to become suspicious of her cellphone. She notices it listing Wi-Fi networks she does not recognize, and the photos on her contact cards seem to mysteriously change at random times. One day she tries to make a call and just hears static on the line. She begins to think that someone – or an entire organization – has hacked her phone or placed spyware in it, and she wonders what crime she is being framed for.

Built-in laptop webcams, unfamiliar Wi-Fi networks, targeted ads on search engines and personalized algorithms on social media sites: Most people have come to accept and ignore the quirks and drawbacks of daily contact with the internet and devices such as cellphones and computers. But for people with severe mental illness, new technologies are fertile ground for the start of false ideas that can lead eventually to a break with reality.

Psychiatrists like me help people who are bothered by their thoughts, behaviors or emotional states. For the past 10 years I’ve been working closely with people who have schizophrenia.

Schizophrenia, sometimes referred to as a type of thought disorder, is a chronic condition in which alterations in brain function change the way one perceives the world. People with schizophrenia can become hyperaware of their surroundings, often interpreting things they see or hear as being hostile and directed toward them even when there’s no real danger.

Over time, people with schizophrenia can develop delusions: beliefs that are fully held even though they are not based in reality and even when there is evidence to the contrary.

With technology and the internet now such an integral part of daily life, it’s no wonder that people with schizophrenia have incorporated new technologies into their delusional beliefs. In my recent research, my colleagues and I set out to explore the ways modern tech influences the content of delusions for people today.

Old Delusional Themes Expressed in New Ways

Most delusions are persecutory, meaning a person believes they are being watched, followed or monitored. Other delusional themes include having special powers, being controlled by outside forces, or believing a spouse is unfaithful even when they are not.

Prior research has shown that these themes are consistent among people with schizophrenia, but the sociopolitical context in which a person lives shapes the form in which they are expressed.

For example, Americans living during World War II developed persecutory delusions involving Germans, while those living during the Cold War focused on communists. People with thought disorders have incorporated important events such as the fall of the Berlin Wall and the O.J. Simpson trial into delusional frameworks.

The past three decades have seen incredible strides in technological advances and easy access to the internet. How have these old themes become repackaged and expressed in the digital age?

For this research, my colleagues and I reviewed medical records of 228 people with thought disorders who participated in a specialized day treatment program between 2016 and 2024.

We identified any mention of delusional thought content and examined the ways in which these beliefs incorporated new technology. We also analyzed the data to see whether certain people were more likely to express delusions tied to technology, or if there was a change in the frequency of these delusions over time.
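The over-time analysis described above can be sketched as a simple trend estimate on yearly proportions. This is a hypothetical illustration only: all counts below are invented, and the study's actual data and statistical methods are not reproduced here.

```python
# Hypothetical sketch: estimate a linear trend in the yearly proportion of
# patients whose delusions referenced technology. The counts are invented
# for illustration; they are not the study's data.
yearly = {
    2016: (10, 22),  # (tech-related cases, total cases reviewed that year)
    2018: (14, 25),
    2020: (18, 27),
    2022: (21, 28),
    2024: (24, 29),
}

def trend_slope(data):
    """Least-squares slope of the tech-delusion proportion vs. year."""
    xs = sorted(data)
    ys = [data[y][0] / data[y][1] for y in xs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

slope = trend_slope(yearly)
print(f"proportion rises by about {slope:.3f} per year")
```

A positive slope on these made-up numbers mirrors the paper's qualitative finding that technology-themed delusions became more frequent over time; the real study would use a proper statistical test rather than a raw slope.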

Delusions of Persecution via Common Tech

Over half of our study’s participants mentioned new technology or the internet when describing delusional beliefs. Most commonly, people felt they were being persecuted via their electronics – that their Wi-Fi networks, computers or cellphones had been hacked or implanted with tracking devices. One person reported believing that neighbors had access to their Wi-Fi network and were monitoring their activities, while another worried that family members had put tracking devices on their phone.

About a quarter of participants reported delusional beliefs surrounding social media. For example, people believed that celebrities were communicating with them directly through social media posts, that they were receiving encoded messages through suggested playlists, or that social media algorithms were linked directly to their thoughts.

Some participants felt they were being monitored through hidden cameras or microphones implanted in their homes or even in their bodies. Several reported what’s known as the “Truman Show delusion” – the belief that their lives are staged and recorded, their daily activities broadcast as a reality TV show.

With each passing year of the 21st century, we found participants were significantly more likely to express delusions connected to technology.

Source : https://studyfinds.org/hacked-phones-and-wi-fi-surveillance-schizophrenia-delusions/

Alzheimer’s drug hunt learns from cancer fight’s multi-target playbook

A scientist looks at hypometabolic and hypoperfusion patterns at the single-subject level from a patient suffering from Alzheimer’s disease at the Memory Centre at the Department of Readaptation and Geriatrics of the University Hospital (HUG), in Geneva, Switzerland, June 6, 2023. REUTERS/Denis Balibouse/File Photo

Alzheimer’s trials testing Novo Nordisk’s (NOVOb.CO) blockbuster GLP-1 drug semaglutide, despite their failure, underscore a shift toward approaching the brain-wasting disease as a system of complex pathways, much the way the field of cancer therapeutics has been transformed in recent years, experts say.
Just two drugs are approved to slow Alzheimer’s – Eli Lilly’s (LLY.N) Kisunla and Leqembi from Eisai (4523.T) and Biogen (BIIB.O). Both were shown to delay disease progression by around 30% by removing toxic amyloid plaques from the brain, but progress is being made to identify other targets and strategies for arresting the disease.

Globally, over 55 million people have dementia, with about 60% of those cases caused by Alzheimer’s, defined by the presence of amyloid and tau proteins in the brain.
“All the diseases of aging, they all require combination therapy,” said Howard Fillit of the Alzheimer’s Drug Discovery Foundation, one of the experts at a recent Alzheimer’s disease meeting who discussed the research shift. “Just targeting one pathway isn’t going to be enough.”
Blood and genetic tests to accurately identify biomarkers of the disease are becoming available, but most diagnoses require a spinal tap or expensive PET scan. Not all patients are likely to benefit equally from anti-amyloid treatments.

Some studies suggest Black patients may have more than one type of disease and treating amyloid alone may not be enough. Other analyses have shown that men do better than women, as do patients with lower levels of tau.

Studies are expected to show that patients treated earlier in the course of the disease fare better than those who already have cognitive impairment.

MOVE TO TAILORED TREATMENT

Cancer treatment, which once consisted of one-size-fits-all chemotherapy to kill fast-growing cells, has mushroomed into a wide range of drugs targeting specific genetic mutations and other precise signatures of malignant cells in addition to immunotherapies.
David Watson, CEO of the Alzheimer’s Research and Treatment Center, said current research “is like oncology 20 years ago… It’s super exciting.” He cited advances in detecting blood biomarkers for tau, amyloid and other signatures of the disease, as well as the genetic underpinnings of Alzheimer’s, as reasons for optimism.

Novo’s results “underscored a critical shift toward the next era of drug development, which will target the many interrelated biological drivers of this complex disease,” Fillit said.
Oral semaglutide provided no cognitive benefit for people with early Alzheimer’s, but Novo in March will provide full trial details, including a likely breakdown of patient characteristics that could yield clues for others.
“We want to see more potential subgroup analyses,” including how people treated earlier in the course of the disease fared, said Dawn Brooks, head of neurodegeneration development at Eli Lilly.
Lilly, which makes top-selling GLP-1 tirzepatide, sold as Mounjaro and Zepbound, is “still watching” whether the class has a role in Alzheimer’s, Brooks said. But the Indianapolis-based company’s current GLP-1 brain-health program is focused on alcohol and tobacco use disorders.

Kisunla and Leqembi, which need to be closely monitored due to the danger of brain swelling, are being tested in people with Alzheimer’s who do not yet have symptoms. The Kisunla study is due first, in 2027, and Lilly has signaled interim results could come earlier.

DRUGS WITH MULTIPLE TARGETS

Brooks said Lilly’s focus is on improving access to current treatments, but the field is moving quickly, including development of drugs that target tau.
“One of the other areas to watch is going to be this idea of co-pathologies or mixed dementia,” Brooks said. Many patients have more than one type of dementia and may need multiple treatments.

Biogen (BIIB.O) will have data next year on a novel drug targeting tau. Other tau drugs, including a program recently cancelled by Johnson & Johnson (JNJ.N), have failed.
Roche (ROG.S) recently launched late-stage trials of its drug trontinemab, which links an amyloid antibody to a “brain shuttle” allowing it to cross the blood-brain barrier, unlike Kisunla or Leqembi.

Source : https://www.reuters.com/business/healthcare-pharmaceuticals/alzheimers-drug-hunt-learns-cancer-fights-multi-target-playbook-2025-12-12/

One Sperm Donor Fathered 200 Children And Passed On A Deadly Mutation – And It Could Easily Happen Again

(© New Africa – stock.adobe.com)

Around 200 children in several countries were conceived with sperm from a single donor who unknowingly carried a rare genetic mutation linked to early onset cancers, it has been revealed. The consequences have been devastating. Several children have already died and many families across Europe are now facing a risk they never expected.

The case has prompted urgent questions. How was one donor used so widely? Why did standard safeguards fail to identify a mutation that can have such severe consequences? And how did a system designed to create families allow a tragedy of this scale?

When someone donates sperm or eggs, they are screened for a set of common inherited conditions before being accepted by a clinic. The exact process varies by country and it has limitations. Screening depends heavily on accurate family history, yet many people have incomplete information about their relatives.

Some conditions emerge later in adulthood, which means a young donor may appear healthy. Clinics also focus primarily on established, higher frequency conditions rather than the vast number of rare variants that exist.

Ordinarily, donors complete a detailed questionnaire covering their medical background and their family’s health history. If the information suggests a possible inherited risk, the donor may be offered further testing or, more commonly, they may be declined.

More recently, clinics have begun to use expanded genetic screening. These tests can examine hundreds of genes linked to childhood or early adulthood conditions.

However, the technology is still developing and cannot detect every possible disease-causing variant. Many rare mutations are not part of routine panels, either because they have only recently been identified or because the evidence base is still small.

That context matters in this case. The donor had no family history of the condition and showed no symptoms. A person can carry a harmful mutation without being affected themselves, so nothing in his medical history raised concerns. The newer, broader screening was not used, but even if it had been, the variant is so rare that it may not have been included or detectable.

The donor provided sperm to the European Sperm Bank in Denmark for around 17 years. His donations were used to create roughly 200 children across multiple European countries, although experts say the true number could be higher.

This scale was possible because there is no international law limiting how widely donor sperm can be distributed. Many countries have rules on how many families can be created from a single donor.

The United Kingdom, for instance, permits no more than ten families. These limits, however, apply only within national borders. A donor can be used in several countries without any system to flag that the overall number has far exceeded what any one country would allow.

A recent unrelated case showed how extreme this can become. A different donor was found to have fathered around 1,000 children in several countries. There were no known health issues in that situation, but it revealed how donor use can expand rapidly without oversight.

The challenges in the current case are profound. Families are dealing with grief and uncertainty. Some have lost children. Others face a very high likelihood that their child will develop cancer before the age of 60, often in infancy or childhood.

There has been little public discussion about the sperm donor himself, although the emotional impact of learning these outcomes is likely to be significant.

Because the mutation was so rare, additional routine testing would probably not have prevented what happened. In truth, every person carries some genetic variants that remain undetected and harmless in everyday life.

There have been earlier examples of donors unknowingly passing on inherited conditions, such as cystic fibrosis or fragile X syndrome, but those cases typically involved far fewer families. What makes this case stand out is the sheer number of children affected.

For this reason, simply calling for more screening is unlikely to be the full answer. The more pressing issue is the lack of limits and monitoring on how many families can be created from one donor across borders.

In this case, families were created in several countries and in some places even the national limits were breached. Belgium, for example, permits only six families per donor, yet reports indicate that about 38 families were created.

What is needed is a robust system for tracking and tracing donor use both within and between countries. Without coordinated oversight, national limits are easily bypassed. Establishing international upper limits will be difficult and politically complex, but the conversation has to begin if further tragedies are to be prevented.
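In outline, the kind of tracking-and-tracing system described above would aggregate per-donor family counts across national registries and flag breaches of both national and international caps. The sketch below is hypothetical: the UK and Belgium limits come from the article, but the international cap, the country data, and the function names are invented for illustration.

```python
# Hypothetical sketch of a cross-border donor-use check: aggregate family
# counts per country for one donor and flag breaches of national caps and
# a notional international cap. Caps for the UK and Belgium are taken from
# the article; everything else is illustrative, not a real registry's rules.
NATIONAL_LIMITS = {"UK": 10, "Belgium": 6}  # families per donor, per country
INTERNATIONAL_LIMIT = 25                     # invented upper bound for illustration

def check_donor(families_by_country):
    """Return a list of human-readable alerts for any cap breaches."""
    alerts = []
    for country, n in families_by_country.items():
        cap = NATIONAL_LIMITS.get(country)
        if cap is not None and n > cap:
            alerts.append(f"{country}: {n} families exceeds national cap of {cap}")
    total = sum(families_by_country.values())
    if total > INTERNATIONAL_LIMIT:
        alerts.append(
            f"total {total} families exceeds international cap of {INTERNATIONAL_LIMIT}"
        )
    return alerts

# Mirrors the article's Belgium figure of about 38 families against a cap of 6.
print(check_donor({"Belgium": 38, "UK": 9, "Denmark": 12}))
```

The point of the sketch is the data flow, not the numbers: without a shared ledger like this, each country sees only its own count, which is exactly how the overall total can silently exceed every national limit.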

Source : https://studyfinds.org/one-donor-200-children-mutation/

 

US startup seeks to reclaim Twitter trademarks ‘abandoned’ by Musk’s X

A fledgling social media platform has asked the U.S. Patent and Trademark Office to cancel trademarks for Twitter so it can take them for itself, contending that billionaire Elon Musk’s X Corp has abandoned them.
The Virginia-based startup, Operation Bluebird, said in its December 2 petition that it wants to be allowed to use “Twitter” and “tweet” for a rival social media platform called “twitter.new.” It also filed an application to trademark “Twitter.”

The new logo of Twitter is seen in this illustration created on July 24, 2023. REUTERS/Dado Ruvic/Illustration/File Photo

The petition was filed by Stephen Coates, a former trademark lawyer at Twitter who now serves as Operation Bluebird’s general counsel and runs a small law firm.
Musk bought Twitter in 2022 for $44 billion and rebranded the site to X. Operation Bluebird’s filings contend that X has “eradicated” the Twitter brand from its products, services and marketing.
Musk in 2023 said in a post on X that the company would “bid adieu to the Twitter brand and, gradually, all the birds.”
X did not immediately respond to a request for comment.
Coates in a statement called the matter “straightforward” after X allegedly stopped using the Twitter trademark commercially.

“X legally abandoned the TWITTER mark,” Coates said.
The rebranded X does not feature Twitter’s famous blue bird logo, and the platform has migrated from twitter.com to x.com. X Corp’s 2023 renewal registration for the Twitter trademark was approved last year.
Josh Gerben, an intellectual property lawyer who is not involved in the dispute, said X would face obstacles defending its ownership of the trademarks if the company no longer uses them. But he said X could try to block Operation Bluebird’s commercial use of the Twitter name even if the cancellation is successful.
Source: https://www.reuters.com/technology/us-startup-seeks-reclaim-twitter-trademarks-abandoned-by-musks-x-2025-12-08/

Did Black Mirror Just Get Real? Zomato CEO Teases Launch Of New Mystery Device ‘Temple’ – But What Is It?

Deepinder Goyal, founder and CEO of Zomato, has hinted at a new venture called ‘Temple,’ aimed at brain health.

Previously, he sparked interest by posting an image of a gold device near his temple.

Zomato founder and Eternal CEO Deepinder Goyal has teased the launch of his next big venture, ‘Temple’, a mysterious new product focused on brain health. Sharing a brief teaser on X, Goyal wrote, “Coming soon. Follow @temple for more updates.”

The link on Temple’s X account leads to a black webpage displaying the words:

“TEMPLE. The future of health starts where no one’s looking. Inside your brain. Coming soon.”

Last month, Goyal had set off speculation around the mysterious project after sharing photos from a Feeding India school visit. In one of the images, a small gold metallic device clipped near his right temple immediately caught the internet’s attention.

Social media users had reacted to the viral image with references to Infinity Stones, anti-gravity brain sensors, and secret health chips. Joining the fun, Goyal had remarked, “This could very well be the Infinity Stone.”

What exactly is Temple?

Goyal had previously described the mystery device ‘Temple’ as an “experimental device to measure Brain Flow accurately, in real time and continuously,” which he said was developed during research related to his Gravity Ageing Hypothesis, something he has extensively elaborated upon on X.

The device appears to be a sensor worn at your temple, the region on the side of your head near your brain. Its core function appears to be monitoring brain health via cerebral blood flow.

While the full details of Temple remain unknown, early hints suggest it could be a wearable, a diagnostic device, or something entirely different that is still tightly under wraps. The cryptic teaser, however, has already ignited intense curiosity and speculation online.

Where does the Gravity Ageing Hypothesis come into the picture?

Goyal earlier explained that Temple was developed while researching his much-touted Gravity Ageing Hypothesis.

“I’m not sharing this as the CEO of Eternal, but as a fellow human, curious enough to follow a strange thread. A thread I can’t keep with myself any longer. It’s open-source, backed by science, and shared with you as part of our common quest for scientific progress on human longevity. Newton gave us a word for it. Einstein said it bends spacetime. I am saying gravity shortens lifespan,” he said last month in a series of posts on X.

Goyal had mentioned that he has been using Temple for a year and believes it could become a significant health tool for the world.

“Been using it for a year, and I’ve been feeling that this could shape into an important wearable the world needs. Brain Flow is already well accepted as a biomarker for ageing, longevity, as well as cognition. So, this device is useful and relevant even if the Gravity Ageing Hypothesis turns out to be wrong,” he said on LinkedIn.

Source : https://www.timesnownews.com/technology-science/zomato-ceo-teases-launch-of-new-mystery-device-temple-but-what-is-it-article-153259102

 

With just one phone number ProxyEarth is showing location, personal details of all Indians

Imagine this: with just one phone number, you can look up anyone’s location, sometimes even their live location in real time, complete with details like full name, address and father’s name. This is no joke, because a rogue website called ProxyEarth allows just that.

ProxyEarth is leaking location details, including possibly live location, of phone users in India (Photo: ITG)

If you have wondered just how much tracking is possible with just a phone number, you don’t have to wonder anymore: you can just head to ProxyEarth, a rogue website set up by one Rakesh. It is also an example of just what is possible when someone gets into the telecom infrastructure, or when telecom infrastructure is left insecure.

There is a website called ProxyEarth that is leaking location details, including possibly live location, of phone users in India. All you have to do is put in the phone number into the website, and the website reveals details. Sometimes, using triangulation data from telecom towers, it even reveals the live location of a phone user. It is eerily accurate, and scary in its scope.

The website that surfaced a few days ago — we wrote about it here and here — is still live and can be accessed without any hitch. Once on the website, all you have to do is put in a phone number and it fetches details like the user’s full name, father’s name, address, alternate number, email ID and more. The data is fetched from the telecom records that we all supplied to Airtel, Jio, Voda and others while buying a SIM card. In some cases this data is old. But for most users, seeing their details exposed in public like this will come as a huge shock.

India Today earlier even managed to speak to the person who made the website. He goes by the name Rakesh, and he is apparently a programmer and a video editor. He also likely runs a few websites hawking pirated material. He told India Today that he was not doing anything wrong by setting up ProxyEarth because he was simply using data that was already publicly available on the internet due to various data leaks. As for the motivation behind creating such a website, Rakesh said: “I am using this website as a platform to attract traffic and to advertise my other products.”

While we have seen again and again private details of Indians leaking on the web, the current one seems particularly scary. This is because the kind of data that is available can easily be used to then create elaborate financial scams. This, of course, in addition to the complete loss of privacy that one suffers when details like father’s name and residential addresses leak on the web.

Source : https://www.indiatoday.in/technology/news/story/with-just-one-phone-number-proxyearth-is-showing-location-personal-details-of-all-indians-2830677-2025-12-04

FLARE UP Urgent warning as Sun eruption sends geomagnetic storm ‘hurtling’ toward Earth with power grids across the US in danger

EXPERTS are warning Americans about a possible geomagnetic storm that could disrupt power across the country.

The massive solar eruption may affect power grids and satellites and cause communication issues across the US.

Experts have warned that a solar eruption could affect power grids. Credit: NASA

NOAA’s Space Weather Prediction Center issued the alert early Wednesday, citing vigorous geomagnetic activity that’s expected to continue through Thursday.

Geomagnetic storms are caused by solar particles interacting with the Earth’s magnetic field.

When the particles interact, the disturbance can induce additional electrical currents in power lines and pipelines.

These induced currents can cause brief voltage fluctuations.

The storms are the result of a powerful solar flare that occurred on November 30, according to the Daily Mail.

The solar flare led to a coronal mass ejection headed towards Earth at 1.4 million miles per hour.
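The reported speed implies a Sun-to-Earth travel time of a little under three days, which lines up with a November 30 flare producing storms midweek. A back-of-the-envelope check, assuming the average Earth–Sun distance and a constant speed (real CMEs decelerate en route):

```python
# Rough travel-time estimate for a CME at the reported speed. Uses the
# average Earth-Sun distance and assumes constant speed, so this is a
# ballpark figure, not a forecast.
DISTANCE_MILES = 93_000_000  # average Earth-Sun distance
SPEED_MPH = 1_400_000        # CME speed reported in the article

hours = DISTANCE_MILES / SPEED_MPH
print(f"~{hours:.0f} hours (~{hours / 24:.1f} days) from Sun to Earth")
```

At 1.4 million mph this works out to about 66 hours, or roughly 2.8 days.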

The strongest currents are expected to hit the Upper Midwest and into the Northeast, according to the Space Weather Prediction Center.

Not only will the geomagnetic storms impact power grids, but Americans will also have the rare chance to witness the Northern Lights, also called the Aurora Borealis.

Those in Washington, Idaho, Montana, Wyoming, the Dakotas, Minnesota, Wisconsin, and Michigan will be able to see the lights fill the night sky.

Even those living on the East Coast in Vermont, New York, New Hampshire, and Maine will have the chance to see the aurora.

Source : https://www.the-sun.com/tech/15586947/solar-flare-sun-eruption-geomagnetic-storm/

Google Is Bringing New Limits For Gemini 3 Pro Free Users: Here’s Why

Gemini 3 Pro version is available for everyone but those using it with a free account will have lower limits for Nano Banana Pro and prompts.

Gemini 3 Pro is the latest version that is available for free users

The Gemini 3 Pro AI model is getting new limits for those using it without a paid subscription. Yes, Google has put new restrictions on the number of images and queries free users can ask of the latest Gemini version. This was rather unavoidable: giving free accounts a lot of access was always going to be a challenge, and surely not something the company would want to sustain in the long run.

So why are these limits being changed and how does it affect the Gemini 3 Pro AI free users in all markets? Here’s a quick look at how the terms have been revised.

Gemini 3 Pro Free Users Have New Limits

Going by multiple reports, the Gemini 3 Pro free limits cover Nano Banana Pro as well as the prompts you can send to the AI model. The new Gemini version was announced last month, and free users can send up to 5 prompts to Gemini and generate or edit 3 images per day with the Nano Banana Pro model.

So why are limits being placed on free users? Google probably feels that Gemini 3 Pro is already in higher demand than the 2.5 Pro version, so giving paid users extended access makes more sense and offers them better value.

Having said that, free users can continue to use the original Nano Banana AI model and create up to 100 images a day, though without any access to the video generation tool.

Source : https://www.news18.com/tech/google-is-bringing-new-limits-for-gemini-3-pro-free-users-heres-why-9742169.html

Microsoft Confirms Copilot AI Will Not Work On WhatsApp From January 2026: Know The Reason

WhatsApp bot support for AI assistants has made ChatGPT, Copilot and other options easily available to billions.

WhatsApp AI bot support for assistant is ending early 2026

WhatsApp is blocking access for third-party AI assistants from January 2026, and Microsoft is the latest tech giant to confirm that its Copilot assistant will exit the messaging app in a few months.

AI chatbots have operated as WhatsApp bots for a while, ChatGPT being the other popular option, but all these companies will have to disable the feature due to a change in Meta’s policies that comes into effect on January 15, 2026. Copilot is basically Microsoft’s answer to ChatGPT, an AI chatbot that lets you ask queries, generate images and search the internet when needed.

Copilot Bites The WhatsApp Dust

Meta has noticed that most of the businesses using its app API have devised ways to integrate AI assistants that serve general users.

The company has now changed the rules by stopping these companies from building their AI assistant bots and making them available on WhatsApp. Meta has clearly stated in its new rules that any bot operating primarily as an AI assistant will not be allowed to run through the messaging app.

This is a big jolt for ChatGPT and Perplexity, among others, which have offered WhatsApp bots to make their features widely available. Microsoft now joins that roster, and the company is making sure that users can continue using the AI chatbot on mobile, web and PC without any glitches.

WhatsApp Copilot Chats: Save Them

Microsoft has mentioned that chats with the WhatsApp Copilot bot will not be carried over when you move to the native apps on other platforms. However, the company says WhatsApp has a built-in tool that lets you export your conversation history, which is available until January 15, 2026.

Source : https://www.news18.com/tech/microsoft-confirms-copilot-ai-will-not-work-on-whatsapp-from-january-2026-know-the-reason-9734202.html

Elon Musk’s X Gets New Following Timeline Update Powered By Grok AI

Elon Musk has promised to fix the mess around people’s timeline on X and he is now relying on Grok AI to make it better

Musk is using Grok AI to clean up the timeline for users

Update your X app to get the new Following tab timeline organised by the Grok AI chatbot, as confirmed by X chief Elon Musk in a post on Thursday. The platform has been working behind the scenes to change the way the timeline algorithm works, and it seems Musk has finally turned to AI to fix the Following tab mess that many people have complained about.

X Following Timeline Update With Grok: What’s New?

The new update means Grok has been given the power to rearrange the clutter in your timeline and show you relevant posts from people you follow. However, while the post doesn’t say it clearly, it seems the Grok-powered timeline cleanup is limited to the US for now, and you need the X app on iPhone to see the changes.

He doesn’t explain what Grok is doing to make the changes, but some users have shared their feedback after getting the refreshed timeline and they seem to like the work done by the AI chatbot.

But the platform is giving users a choice. If you don’t want the Grok-powered timeline cleanup, an arrow next to the Following tab lets you switch back to the normal chronological view. We haven’t been able to test the new update on X but will be keeping a close eye on the changes, which should be available soon.

Meanwhile, X Premium users are getting a special first-month offer which lets you try out the features and Grok AI for just Rs 89 per month or Rs 890 if you want the advanced version. This is part of the third year anniversary of X Premium which lets you sign up for its subscription with the same benefits, including access to Grok AI.

Source : https://www.news18.com/tech/elon-musks-x-gets-new-following-timeline-update-powered-by-grok-ai-9734739.html

 

Apple Challenges India’s Antitrust Penalty Law With A Risk Of $38 Billion Fine, Here’s What Happened

Apple Inc has taken a bold step by standing against India’s antitrust penalty law, incurring the risk of a $38 billion fine. Here’s everything you need to know about the story.

Apple
Photo : iStock

Apple Inc, one of the most popular smartphone makers in the world, is contesting India’s new antitrust penalty law, under which the Cupertino-based tech giant could face a fine of up to $38 billion, as per a court filing in the Delhi High Court seen by Reuters. This is the first challenge to the antitrust penalty law, which says the Competition Commission of India (CCI) can use a company’s global turnover to calculate the penalties it imposes for abuse of market position and dominance.

For those who are unaware, since 2022 Indian startups, Tinder-owner Match and others have been locked in an antitrust battle with Apple at the CCI, where investigators published a report saying the tech giant has engaged in abusive conduct in the app market of its iPhone operating system, popularly known as iOS.

As of now, Apple has denied all wrongdoing, and the CCI’s final decision in the case, which will determine any penalty on the company, is pending. The Cupertino giant has asked the judges to declare illegal the 2024 law that lets the CCI use global turnover to impose a fine on a company. The company has submitted a 545-page court filing that is not currently in the public domain.

Source : https://www.timesnownews.com/technology-science/apple-challenges-indias-antitrust-penalty-law-with-a-risk-of-38-billion-fine-heres-what-happened-article-153210462

When Darkness Shines: How Dark Stars Could Illuminate The Early Universe

NASA’s James Webb Space Telescope has spotted some potential dark star candidates. (Credit: NASA, ESA, CSA, and STScI)

Scientists working with the James Webb Space Telescope discovered three unusual astronomical objects in early 2025, which may be examples of dark stars. The concept of dark stars has existed for some time and could alter scientists’ understanding of how ordinary stars form. However, their name is somewhat misleading.

“Dark stars” is one of those unfortunate names that, on the surface, does not accurately describe the objects it represents. Dark stars are not exactly stars, and they are certainly not dark.

Still, the name captures the essence of this phenomenon. The “dark” in the name refers not to how bright these objects are, but to the process that makes them shine — driven by a mysterious substance called dark matter. The sheer size of these objects makes it difficult to classify them as stars.

As a physicist, I’ve been fascinated by dark matter, and I’ve been trying to find a way to see its traces using particle accelerators. I’m curious whether dark stars could provide an alternative method to find dark matter.

What Makes Dark Matter Dark?

Dark matter, which makes up approximately 27% of the universe but cannot be directly observed, is a key idea behind the phenomenon of dark stars. Astrophysicists have studied this mysterious substance for nearly a century, yet we haven’t seen any direct evidence of it besides its gravitational effects. So, what makes dark matter dark?

Humans primarily observe the universe by detecting electromagnetic waves emitted by or reflected off various objects. For instance, the Moon is visible to the naked eye because it reflects sunlight. Atoms on the Moon’s surface absorb photons – the particles of light – sent from the Sun, causing electrons within atoms to move and send some of that light toward us.

More advanced telescopes detect electromagnetic waves beyond the visible spectrum, such as ultraviolet, infrared or radio waves. They use the same principle: Electrically charged components of atoms react to these electromagnetic waves. But how can they detect a substance – dark matter – that not only has no electric charge but also has no electrically charged components?

Although scientists don’t know the exact nature of dark matter, many models suggest that it is made up of electrically neutral particles – those without an electric charge. This trait makes it impossible to observe dark matter in the same way that we observe ordinary matter.

Dark matter is thought to be made of particles that are their own antiparticles. Antiparticles are the “mirror” versions of particles. They have the same mass but opposite electric charge and other properties. When a particle encounters its antiparticle, the two annihilate each other in a burst of energy.

If dark matter particles are their own antiparticles, they would annihilate upon colliding with each other, potentially releasing large amounts of energy. Scientists predict that this process plays a key role in the formation of dark stars, as long as the density of dark matter particles inside these stars is sufficiently high. The dark matter density determines how often dark matter particles encounter, and annihilate, each other. If the dark matter density inside dark stars is high, they would annihilate frequently.
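The density dependence described above follows the standard annihilation-rate scaling for a self-annihilating species (a textbook relation, not quoted in the article): for particle number density $n$,

```latex
\Gamma_{\text{ann}} \propto n^{2} \, \langle \sigma v \rangle
```

where $\langle \sigma v \rangle$ is the thermally averaged annihilation cross-section times relative velocity. Because the rate grows as the square of the density, doubling the dark matter density quadruples how often particles meet and annihilate.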

What Makes a Dark Star Shine?

The concept of dark stars stems from a fundamental yet unresolved question in astrophysics: How do stars form? In the widely accepted view, clouds of primordial hydrogen and helium — the chemical elements formed in the first minutes after the Big Bang, approximately 13.8 billion years ago — collapsed under gravity. They heated up and initiated nuclear fusion, which formed heavier elements from the hydrogen and helium. This process led to the formation of the first generation of stars.

In the standard view of star formation, dark matter is seen as a passive element that merely exerts a gravitational pull on everything around it, including primordial hydrogen and helium. But what if dark matter had a more active role in the process? That’s exactly the question a group of astrophysicists raised in 2008.

In the dense environment of the early universe, dark matter particles would collide with, and annihilate, each other, releasing energy in the process. This energy could heat the hydrogen and helium gas, preventing it from further collapse and delaying, or even preventing, the typical ignition of nuclear fusion.

The outcome would be a starlike object — but one powered by dark matter heating instead of fusion. Unlike regular stars, these dark stars might live much longer because they would continue to shine as long as they attracted dark matter. This trait would make them distinct from ordinary stars, as their cooler temperature would result in lower emissions of various particles.

Can We Observe Dark Stars?

Several unique characteristics help astronomers identify potential dark stars. First, these objects must be very old. As the universe expands, the frequency of light coming from objects far away from Earth decreases, shifting toward the infrared end of the electromagnetic spectrum, meaning it gets “redshifted.” The oldest objects appear the most redshifted to observers.
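Redshift is conventionally quantified by comparing the wavelength at which light is observed with the wavelength at which it was emitted (a standard definition, not given in the article):

```latex
1 + z = \frac{\lambda_{\text{observed}}}{\lambda_{\text{emitted}}}
```

The larger the redshift $z$, the more the light has been stretched toward the infrared, and the earlier in cosmic history the source emitted it.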

Since dark stars form from primordial hydrogen and helium, they are expected to contain little to no heavier elements, such as oxygen. They would be very large and cooler on the surface, yet highly luminous because their size — and the surface area emitting light — compensates for their lower surface brightness.
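The trade-off between size and surface temperature follows from the standard blackbody luminosity relation (a textbook formula, not cited in the article): for a star of radius $R$ and surface temperature $T$,

```latex
L = 4\pi R^{2} \sigma T^{4}
```

where $\sigma$ is the Stefan-Boltzmann constant. Luminosity grows with the square of the radius, so a cool object tens of astronomical units across can still outshine far hotter but much smaller stars.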

They are also expected to be enormous, with radii of tens of astronomical units (one astronomical unit is the average distance between Earth and the Sun). Some supermassive dark stars are theorized to reach masses of roughly 10,000 to 10 million times that of the Sun, depending on how much dark matter and hydrogen or helium gas they can accumulate during their growth.

So, have astronomers observed dark stars? Possibly. Data from the James Webb Space Telescope has revealed some very high-redshift objects that seem brighter — and possibly more massive — than what scientists expect of typical early galaxies or stars. These results have led some researchers to propose that dark stars might explain these objects.

In particular, a recent study analyzing James Webb Space Telescope data identified three candidates consistent with supermassive dark star models. Researchers looked at how much helium these objects contained to identify them. Since it is dark matter annihilation that heats up those dark stars, rather than nuclear fusion turning helium into heavier elements, dark stars should have more helium.

The researchers highlight that one of these objects indeed exhibited a potential “smoking gun” helium absorption signature: a far higher helium abundance than one would expect in typical early galaxies.

Dark Stars May Explain Early Black Holes

What happens when a dark star runs out of dark matter? It depends on the size of the dark star. For the lightest dark stars, the depletion of dark matter would mean gravity compresses the remaining hydrogen, igniting nuclear fusion. In this case, the dark star would eventually become an ordinary star, so some stars may have begun as dark stars.

Supermassive dark stars are even more intriguing. At the end of their lifespan, a dead supermassive dark star would collapse directly into a black hole. This black hole could start the formation of a supermassive black hole, like the kind astronomers observe at the centers of galaxies, including our own Milky Way.

Dark stars might also explain how supermassive black holes formed in the early universe. They could shed light on some unique black holes observed by astronomers. For example, a black hole in the galaxy UHZ-1 has a mass approaching 10 million solar masses, and is very old – it formed just 500 million years after the Big Bang. Traditional models struggle to explain how such massive black holes could form so quickly.

Source : https://studyfinds.org/dark-stars-illuminate-the-early-universe/

Google, Accel partner to back Indian AI startups

FILE PHOTO: The Google logo is seen on the Google house at CES 2024, an annual consumer electronics trade show, in Las Vegas, Nevada, U.S. January 10, 2024. REUTERS/Steve Marcus/File Photo

Alphabet’s Google and venture capital firm Accel will partner to fund at least 10 early-stage Indian AI startups, marking the U.S. technology giant’s first such funding partnership, top executives at the companies said on Thursday.

The move comes as several U.S. tech firms like Microsoft, Amazon and OpenAI make a beeline for the world’s most populous nation, seen as a critical growth market where nearly a billion users access the internet.

Under the partnership, Google’s AI Futures Fund and Accel will co-invest up to $2 million in each startup, Prayank Swaroop, partner at Accel, told Reuters in an interview, with the investments focussed on the broad areas of entertainment, creativity, work and coding.

The announcement comes after Google in October said it would invest $15 billion over five years to set up an AI data centre in the southern Indian state of Andhra Pradesh, its biggest-ever investment in the country.

Its AI Futures Fund, launched six months ago, has funded over 30 companies, including Indian webtoon startup Toonsutra and U.S.-based legal-tech firm Harvey. Google has also teamed up with India’s largest telecom operator Reliance Jio to provide free access to Gemini AI for 505 million users.

“We firmly believe that the founders in India, they are going to be playing a leading role in defining that next era of global technology,” Jonathan Silber, co-founder and director of Google’s AI Futures Fund, said.

Source : https://www.channelnewsasia.com/business/google-accel-partner-back-indian-ai-startups-5488261

India GCCs Shift From AI Pilots To Large-Scale Adoption With Strong Push For Agentic AI In 2025

According to the EY India’s report, GCCs are applying GenAI where it matters most — enhancing customer service (65 per cent), followed by finance (53 per cent), operations (49 per cent), IT and cybersecurity (45 per cent). Business intelligence adoption has increased to 86 per cent from 80 per cent last year, while data strategy has risen to 67 per cent from 51 per cent.

India-based global capability centres (GCCs) have moved from AI experimentation to enterprise-scale adoption. | IANS

India-based global capability centres (GCCs) have moved from AI experimentation to enterprise-scale adoption, with 58 per cent of centres currently investing in Agentic AI and another 29 per cent planning to scale over the next year, a report said on Sunday.

About 83 per cent of GCCs are already investing in GenAI, where pilots have increased from 37 per cent last year to 43 per cent as of 2025.

Meanwhile, according to the report, two-thirds of GCCs (67 per cent) are creating dedicated innovation teams and incubation programs to generate, test and globalise ideas from India.

“Global enterprises are rethinking how they run their operations. They want simpler models, tighter oversight and a place where AI, data and risk teams can operate in sync. Our survey shows that this shift is well underway at GCCs in India,” said Manoj Marwah, Partner and GCC Sector Leader – Financial Services, EY India.

“The combination of talent, cross-functional maturity and a rapidly advancing AI ecosystem gives global firms something they can’t easily build elsewhere. The GCCs we set up now are poised to operate as decision centres shaping enterprise strategy around risk, new products, digital transformation and more,” he added.

As per the report, GCCs in the nation are becoming key collaborators in global decision-making, with over half of India’s centres (52 per cent) holding shared accountability for global decisions, while another 26 per cent are formally consulted.

Source : https://www.freepressjournal.in/tech/india-gccs-shift-from-ai-pilots-to-large-scale-adoption-with-strong-push-for-agentic-ai-in-2025

Google Chrome Users At Risk: Indian Government Issues Urgent Security Alert

CERT-In has issued a security alert for Google Chrome users, urging immediate updates for Windows, macOS, and Linux due to severe vulnerabilities identified as CVE-2025-13223 and CVE-2025-13224.

Google Chrome Security Warning

The Indian Computer Emergency Response Team (CERT-In) has issued a fresh security alert for Google Chrome users, advising immediate updates across Windows, macOS, and Linux. The warning comes after multiple high-risk vulnerabilities were discovered in the browser, raising concerns about potential remote attacks. If you rely on Chrome for your daily browsing, work or banking, this is one alert you shouldn’t ignore. Here’s everything you need to know about the newly flagged threat and what action you should take.

What CERT-In Has Identified

In its latest advisory, tagged CIVN-2025-0330, CERT-In highlighted two major security flaws in Chrome. These vulnerabilities, identified as CVE-2025-13223 and CVE-2025-13224, have been classified as “high severity,” meaning attackers could use them to compromise a system remotely.

The root of the problem lies in a Type Confusion error inside Chrome’s V8 engine. This engine is responsible for processing JavaScript and WebAssembly, both essential parts of how modern websites function. When Type Confusion occurs, the browser may attempt to access memory in an unsafe way, which can open the door for malicious code execution. As CERT-In explains, this could allow attackers to run harmful programs on your computer simply by directing you to a specially crafted webpage.

What Google Has Said

Google confirmed that one of the vulnerabilities, CVE-2025-13223, is already being exploited “in the wild.” That means hackers have found a working method to take advantage of the flaw before many users have updated their browsers.

The company stated that Chrome versions prior to 142.0.7444.175/.176 on Windows, 142.0.7444.176 on macOS, and 142.0.7444.175 on Linux are affected. Google has already pushed out patched updates to the stable channel, and they will continue rolling out globally over the coming days.

What Users Should Do Now

If you’re using Google Chrome, CERT-In has made one thing clear: updating immediately is the best way to stay protected. You can check your browser version by going to Help > About Google Chrome in the settings menu. Chrome will automatically start downloading the latest updates, and a restart will apply the fix.
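The "is my build older than the patch?" check amounts to a simple version comparison. The snippet below is an illustrative sketch, not a CERT-In or Google tool; it uses the Windows/Linux patched build number cited in the advisory:

```python
# Hedged sketch: compare an installed Chrome version string against the
# patched build CERT-In cites (142.0.7444.175). Illustrative only.

def version_tuple(version: str) -> tuple:
    """Turn '142.0.7444.175' into a tuple of ints for correct comparison."""
    return tuple(int(part) for part in version.split("."))

def is_vulnerable(installed: str, patched: str = "142.0.7444.175") -> bool:
    """True if the installed build predates the patched build."""
    return version_tuple(installed) < version_tuple(patched)

print(is_vulnerable("142.0.7444.162"))  # True: update needed
print(is_vulnerable("142.0.7444.176"))  # False: already patched
```

Comparing tuples of integers avoids the classic string-comparison pitfall, where "9" would sort after "142".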

Google Developing Safe, Trusted AI To Protect Vulnerable Users In India

Ahead of the AI Impact Summit 2026 in New Delhi, the company outlined a series of initiatives focused on shielding users from sophisticated scams, strengthening enterprise cybersecurity, and building inclusive, equitable AI models suited for India and the Global South.

Google Pay issues over 1 million weekly warnings for fraudulent transactions.

Google on Thursday said that it is developing safe and trusted Artificial Intelligence as part of a broader effort to protect vulnerable users in India, emphasising that safety must serve as the foundation for transformational AI.

The India-AI Impact Summit 2026, announced by India at the France AI Action Summit and scheduled for February 19-20 in New Delhi, will be the first-ever global AI summit hosted in the Global South.

Highlighting the growing threat of digital arrest scams, screen-sharing fraud and voice cloning, Google said its approach centres on protections that are “faster than the scammer” and built directly into everyday technology. The company is rolling out real-time scam detection on Pixel phones, powered by Gemini Nano, which analyses suspicious calls on-device without recording audio. A new pilot with Google Pay, Navi and PayTM alerts users if they open financial apps while screen-sharing with an unknown contact, offering one-tap options to exit safely.

Google Play Protect has blocked over 115 million attempted installations of high-risk sideloaded apps in India, while Google Pay issues over 1 million weekly warnings for fraudulent transactions. Google is also pioneering Enhanced Phone Number Verification, replacing SMS OTPs with a secure SIM-based check to strengthen sign-ins.

To counter deepfakes, Google is expanding access to SynthID, its AI watermarking tool, to partners such as PTI, Jagran and India Today. On the cybersecurity front, Google introduced CodeMender, an AI agent that autonomously identifies and patches vulnerabilities.

The company is also investing in large-scale digital literacy efforts. Programs like LEO, Super Searchers, and senior-focused DigiKavach campaigns aim to equip millions with the skills to identify online risks. Through Google.org’s APAC Digital Futures Fund, the CyberPeace Foundation will receive USD 200,000 to strengthen AI-driven cyber-defence tools.

Source : https://www.ndtv.com/india-news/google-developing-safe-trusted-ai-to-protect-vulnerable-users-in-india-9673160

SPACE LIFE HOPE Moss survives 9 MONTHS in space leaving scientists astonished & raising hopes humans could live on Mars

MOSS has survived nine months exposed to outer space, giving hope it can help develop ecosystems for human life on Mars.

The plant even made it back to Earth after the experiment and was still capable of reproducing.

The surprise durability of moss was confirmed during tests on the International Space Station. Credit: Getty

Researchers calculated it could have lasted for up to 15 years in conditions in which most living organisms cannot survive even briefly.

Its durability was confirmed during tests on the International Space Station.

Spores from a plant called spreading earthmoss were transported to the vessel in 2022.

There they were attached to the outside of the station for 283 days before being returned to Earth in January 2023, with more than 80 per cent of the sample surviving.

They predicted that the encased spores could have survived for up to 5,600 days under space conditions.

But the team emphasised that this number is just a rough estimate, and that much more data is needed to make more realistic predictions.
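As a rough arithmetic cross-check (mine, not the researchers'), the 5,600-day estimate lines up with the "up to 15 years" figure cited earlier:

```python
# Cross-check of the figures in the article: 283 days of exposure and an
# estimated survivable span of 5,600 days under space conditions.

DAYS_PER_YEAR = 365.25

exposure_days = 283
estimated_limit_days = 5_600

years = estimated_limit_days / DAYS_PER_YEAR
print(f"{estimated_limit_days} days is about {years:.1f} years")  # ~15.3 years
print(f"The exposure covered {exposure_days / estimated_limit_days:.1%} of that span")
```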

They hope that their work helps advance research on the potential of extraterrestrial soils for facilitating plant growth.

Researcher Prof Tomomichi Fujita hit on the idea of testing the moss in space after seeing it survive in some of the harshest environments on Earth, but still expected none of it to last.

Source : https://www.the-sun.com/tech/15525233/moss-survives-space-scientists-astonished-humans-live-mars/

GOOG THINKING? Google launches brainy Nano Banana Pro image maker for FREE to solve common AI nightmare – & gives clue for next upgrade

GOOGLE has launched its next-gen Nano Banana Pro AI bot that makes any image you can dream up – and it’s free.

And a Google insider has shared its brainy new tricks with The Sun, and even teased what’s next for the clever AI.

Google’s new Nano Banana Pro image-maker is much more advanced than the original Nano Banana. Credit: Google

Earlier this year, Google’s Gemini AI chatbot added a new feature called Nano Banana.

It allowed users to create stunningly lifelike images, as well as craft pics using consistent characters – including your own face, or a pal’s.

Now Google is using its upgraded Gemini 3 Pro tech to offer a new image-maker called Nano Banana Pro and it has some big advantages, including properly displaying text.

LOOKING GOOG

“It’s much better at rendering text than any of our models have been before,” said Google’s Nicole Brichtova, speaking to The Sun.

Getting written text right has been a major bug-bear for AI bots.

A telltale sign of AI images has been scrambled words that don’t make sense or use letters that don’t exist.

But not only can Nano Banana Pro do text right, it can also help you out on holiday.

“It can also do that in more than 10 languages,” Nicole, the product lead for Image & Video at Google DeepMind, explained.

“So if you have a poster that’s in English, and you now want to localise it into Spanish, the model will keep the visual the same.

“And the font and the style and everything, but it will actually just translate the text into Spanish.”

The original Nano Banana became a hit, allowing users to restore old photos or turn themselves into action figures.

But it still struggled with text, which meant that it couldn’t create detailed text-heavy images.

Now Google says that the new model is good enough that you could create entire infographics – a major help for youngsters at school or uni.

“You can be like: ‘give me an infographic about how the pyramids were built that’s suitable for a 10-year-old’,” Nicole told us.

“And the model will adapt the content to that audience.”

It’s not just education either: sports fans can take advantage of it.

That’s because Google’s Nano Banana Pro can tap into up-to-date information via Google.

“The model can also tap into search’s knowledge database,” Nicole said.

“So if you want fresh, like ‘give me an infographic about the latest sports game for my favourite team’, you can now get that individual format.”

Nicole continued: “You can make your own comic book as well.

“You provide an image of yourself and then specify what you want to be.

“I just did a superhero one for myself because why not? And then you can get a full comic book that you can skip through.”

But it’s not just text that Google’s new AI image maker is better at.

FACE FIRST

Google says that the new Pro version is much better at keeping faces consistent across images – and with more people too.

“So Nano Banana took three images as input that you can compose into a new piece of content,” Nicole told The Sun.

“This model takes 14 – and the 14 is up to five actual characters or people.

“So you can keep five people consistent, and then you have other references, like their headphones, or a table, or plants or the background.

“And then you compose all of those elements into a new scene.”

On top of that, you can also be more specific about how you want your image files to be.

That means you can choose resolutions or the shape of images to fit your needs.

“We’re also now offering 2K and 4K resolution, up from one,” Nicole told us.

“So just sharper detail, which has been one of the top-requested features that we’ve had from Nano Banana.

“And we also offer a wide range of aspect ratios.”

NEXT UP

Google told us that there’s no word limit for the amount of text you can demand from a Nano Banana Pro image.

But Nicole warned that if you serve up page-length text, the small size of the font might start to create issues.

“And it’s really very, very small text where we still have some headroom to push on improving the model,” she added.

That’s not the only upgrade that Google is hoping to deliver in the future either.

Millions of office workers across the land spend hours crafting slide decks in apps like Google Slides, Microsoft PowerPoint or Apple’s Keynote.

But Nicole said that Nano Banana Pro is getting very close to being able to take a lot of that work on itself.

However, despite its powerful AI brain, there’s still one issue that hasn’t been resolved yet.

“People have made full slide decks with this model internally,” Nicole said.

“If you look at it with a very critical design eye, you might still want a little more continuity.

“But single slides, it can absolutely do. With some work, you can absolutely make an entire slide deck.

“And then I think the next phase, which is not on this model, would also be able to edit the content.

“Because if you make a slide deck, you want to be able to edit the text, right?

“The output that you get from the model is pixels. And so you will get an image that you’re not able to edit. And so I think next up is being able to do that.”

Source : https://www.the-sun.com/tech/15522269/google-nano-banana-pro-gemini-3-image-maker-app/

Digital Signature, Global Access: Every Indian To Have E-Passport By 2035, With 80 Lakh Issued So Far

This initial rollout marks a major technological leap, which officials have likened to an upgrade from ‘3G to 5G’ in telecommunications technology

While all newly issued or renewed passports are now the chipped version, the MEA has assured citizens that existing non-electronic passports will remain valid until their natural expiry date. (Representational image/Getty)

The Ministry of External Affairs (MEA) has ushered in a new era for Indian travellers, launching the nationwide issuance of e-passports to significantly enhance security standards and expedite global travel. This transformation is part of the upgraded Passport Seva Programme (PSP) Version 2.0, which integrates cutting-edge technology into the nation’s travel documentation system.

The core of the new document is an embedded Radio Frequency Identification (RFID) chip and antenna, compliant with International Civil Aviation Organisation (ICAO) standards, which securely stores the holder’s personal and biometric data. This information is digitally signed using Public Key Infrastructure, making the passport highly resistant to forgery, tampering, and fraudulent duplication, addressing a crucial security challenge.
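The tamper-resistance the chip provides can be illustrated with the hash-comparison step used in ICAO passive authentication. The sketch below is a simplified toy with invented data-group contents; a real e-passport additionally carries a PKI signature over these stored hashes, which this example omits:

```python
# Toy illustration of tamper detection on a chipped travel document:
# the chip records a hash of each data group, and a reader recomputes
# them. The data-group names and byte values here are invented.
import hashlib

def digest(data: bytes) -> bytes:
    """SHA-256 digest of a data group's contents."""
    return hashlib.sha256(data).digest()

# Hashes recorded on the chip at personalisation time (hypothetical data).
data_groups = {"DG1_MRZ": b"P<INDDOE<<JANE", "DG2_FACE": b"<jpeg bytes>"}
stored_hashes = {name: digest(value) for name, value in data_groups.items()}

# A reader later recomputes the hashes; any edit to the data is detected.
tampered = dict(data_groups, DG1_MRZ=b"P<INDDOE<<JOHN")
for name, value in tampered.items():
    ok = digest(value) == stored_hashes[name]
    print(name, "verified" if ok else "TAMPERED")
```

In the full scheme, the stored hashes are themselves digitally signed by the issuing authority, so an attacker cannot simply rewrite both the data and its hash.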

This ambitious programme has rapidly gained traction. According to MEA officials, the government has already issued over 80 lakh e-passports domestically as of May 2025, with an additional 60,000 issued through Indian missions abroad. This initial rollout marks a major technological leap, which officials have likened to an upgrade from “3G to 5G” in telecommunications technology. The e-passport, visually identifiable by a small gold-coloured symbol on the front cover, facilitates faster and smoother immigration clearance at international e-gates and automated border control systems, supporting a global “trusted traveller programme”.

Source: https://www.news18.com/tech/digital-signature-global-access-every-indian-to-have-e-passport-by-2035-with-80-lakh-issued-so-far-9716498.html

Elon Musk Launches X Chat: Features, Availability, Everything You Need To Know About The New WhatsApp Rival

Elon Musk has introduced X Chat, a messaging platform within X, designed to prioritise user privacy and security. It features end-to-end encryption for messages and file sharing, as well as disappearing messages.

Elon Musk has introduced X Chat, a messaging platform within X.

Elon Musk has officially launched X Chat, a new messaging platform integrated within the social media site X. This new service aims to provide a privacy-focused alternative to established messaging applications such as WhatsApp and Arattai. X Chat is designed to enhance user communication while prioritising data security and user privacy. Here’s everything you need to know about this.

Key Features Of X Chat

X Chat incorporates several advanced features that set it apart from its competitors. Notably, it offers end-to-end encryption for all messages, ensuring that only the sender and receiver can access the content. This encryption extends to file sharing, enhancing the overall security of user communications. Additionally, users can send disappearing messages, allowing for greater control over their conversations. Unlike WhatsApp, deleted messages on X Chat will vanish without leaving a trace, providing a cleaner messaging experience.

Elon Musk announced the launch on X, highlighting that the platform now supports audio and video calls, along with file transfer capabilities. He stated, “X just rolled out an entire new communications stack with encrypted messages, audio/video calls, and file transfer.” This comprehensive approach aims to cater to the growing demand for secure communication in the digital age.
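The end-to-end property means only the two endpoints hold the decryption key, so a relay server stores only ciphertext. The toy sketch below illustrates that idea with a one-time pad; it is emphatically not X Chat's actual protocol:

```python
# Toy end-to-end illustration (NOT X Chat's real cryptography): the key is
# known only to the two endpoints, so the relaying server sees ciphertext.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad encrypt/decrypt: XOR each byte with the key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at noon"
shared_key = secrets.token_bytes(len(message))  # held only by the endpoints

ciphertext = xor(message, shared_key)  # what the server relays and stores
received = xor(ciphertext, shared_key)  # receiver applies the same key
print(received)  # recovers b'meet at noon'
```

Real messengers use authenticated key exchange and per-message keys rather than a shared pad, but the trust boundary is the same: the server never holds what it would need to read the plaintext.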

Privacy Features And User Control

X Chat also includes features aimed at protecting user privacy. Users can block screenshots of their chats, which adds an additional layer of security. Furthermore, notifications can be activated to alert users if someone attempts to take a screenshot of their messages. The platform will be ad-free and will not track user data, making it particularly appealing to those concerned about privacy.

The messaging service consolidates existing messaging functions, allowing users to access both new X Chat messages and legacy direct messages in one unified inbox. Future updates are expected to enhance the platform further, including the introduction of support for voice memos.

Source : https://www.timesnownews.com/technology-science/elon-musk-launches-x-chat-features-availability-everything-you-need-to-know-about-the-new-whatsapp-rival-article-153165716

India’s First Privacy-Focused Law DPDP Rules 2025 Goes Live: What It Means For Users

The Digital Personal Data Protection Rules 2025, established by the Ministry of Electronics and Information Technology, initiate the full implementation of India’s first comprehensive data law.

DPDP Rules 2025

The Ministry of Electronics and Information Technology has introduced the Digital Personal Data Protection Rules 2025 (DPDP Rules 2025). This marks the full operationalisation of the Digital Personal Data Protection Act, 2023, hailed as India’s first comprehensive data law. The framework centres on protecting digital personal data, setting out the obligations of entities that handle it and the rights and duties of the individuals it belongs to.

The DPDP Rules 2025 establish a consent-based system to protect the personal data of people using social media platforms like Instagram and Facebook, ecommerce platforms like Flipkart and Amazon, or any digital payments application.

The main focus is to ensure that users know how their data is used, giving them more control over it. Under the new rules, platforms and companies found mishandling data will face heavy penalties of up to Rs 250 crore for serious failures of data protection. Apps and companies that currently collect data have been given 18 months to comply with the rules, after which they can incur fines.

The law is built around two concepts: the Data Fiduciary and the Data Principal. A Data Fiduciary is a company or individual that determines why personal data is being processed and how that processing is carried out. The Data Principal is the individual to whom the data belongs, the person whose information is being collected and stored.

Biggest Changes For Users Due To DPDP Rules 2025

There are a few things that the companies or apps you use will have to comply with. Some of the major pointers are:

- Companies will have to inform users immediately about a personal data breach, how they plan to tackle it, and what the next steps are.

Flying 1,000 km per day: Tiny Indian Falcon stuns scientists with marathon flight

The tiny bird covered an astonishing 3,100 kilometres in just 76 hours, an aerial marathon that saw him slice across central India, glide past Gujarat, and skim out over the Arabian Sea.

Only a handful of migratory species undertake such long, uninterrupted flights, and the Amur Falcon is among the smallest of them. (Photo: X)

Three new aerial travellers are rewriting the limits of endurance in the natural world.

On November 11, 2025, wildlife scientists tagged three Amur Falcons, Apapang (an adult male), Alang (a young female), and Ahu (an adult female), as part of the Manipur Amur Falcon Tracking Project (Phase 2) led by the Wildlife Institute of India. Within days, one of them had already emerged as the season’s breakout performer.

Apapang, wearing the orange track on the satellite map, has surprised even veteran trackers. Barely 150 grams in weight, he launched into an extraordinary non-stop flight soon after tagging.

In just 76 hours, he has covered an astonishing 3,100 kilometres, an aerial marathon that saw him slice across central India, glide past Gujarat, and skim out over the Arabian Sea.

With favourable easterly tailwinds boosting his speed, Apapang has been averaging nearly 1,000 kilometres a day, a pace that places him among the fastest migrating raptors on the planet.
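The quoted figures are internally consistent, as a quick arithmetic check (mine, not the trackers') shows:

```python
# Sanity check of the article's numbers: 3,100 km covered in 76 hours.

distance_km = 3_100
duration_hours = 76

km_per_hour = distance_km / duration_hours
km_per_day = km_per_hour * 24
print(f"{km_per_hour:.1f} km/h, about {km_per_day:.0f} km per flying day")
```

That works out to roughly 41 km/h sustained, or about 979 km per day, matching the "nearly 1,000 kilometres a day" figure.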

But the real test is only beginning.

Along with Alang (yellow track) and Ahu (red track), Apapang is now attempting the most dangerous segment of the Amur Falcon’s annual migration: a nonstop 6,000-kilometre oceanic crossing from India to Somalia.

With no place to rest, feed, or retreat, the Arabian Sea becomes a vast gamble of energy, endurance, and favourable weather. Only a handful of migratory species undertake such long, uninterrupted flights, and the Amur Falcon is among the smallest of them.

Their journey began in the dense forests and farmlands of Manipur, a crucial stopover site where thousands of Amur Falcons refuel each winter during their journey from East Asia to southern Africa.

Once victims of rampant hunting, the birds are now part of a celebrated community-led conservation success story. Manipur’s villages have transformed into guardians of the species, turning falcon season into a symbol of coexistence and pride.

Source : https://www.indiatoday.in/science/story/flying-1000-km-per-day-tiny-indian-falcon-stuns-scientists-with-marathon-flight-2821069-2025-11-17

Astronauts Share Images Of Mars Volcano, Reveal Stunning Details Of Frozen Lava Rivers

The images, captured by the Mars Express orbiter, reveal the volcano’s southeast flank with hundreds of overlapping lava flows.

The European Space Agency (ESA) has shared stunning images of the foot of Mars’ giant volcano, Olympus Mons, which stands at an impressive 27 km high and has a base over 600 km wide, making it the largest volcano in our solar system – more than twice the height of Mauna Kea on Earth.

The images, shared by ESA on Instagram, show frozen rivers of lava that once flowed down Olympus Mons.


Olympus Mons was first discovered by NASA’s Mariner 9 spacecraft in 1971. Initially, scientists believed it to be a mountain, but subsequent missions revealed its true nature.

It is believed that Olympus Mons was formed around 3.5 billion years ago, during Mars’ early geological period. The volcano is considered dormant, with no recent eruptions. Its gentle slopes and lack of impact craters suggest a relatively young surface, shaped by lava flows.

The images, captured by the Mars Express orbiter, reveal the volcano’s southeast flank with hundreds of overlapping lava flows, steep cliffs and traces of ancient collapse.

As per the ESA, the scarp, which is a cliff up to 9 km high, encircles the entire volcano, formed by huge landslides that sent debris hundreds of kilometres away.

“The lava flows – now solid rock – once streamed down the volcano’s slopes, spreading into wide fans and carving channels and tubes as they cooled. Some ended in smooth, rounded “tongues” before reaching the plains,” wrote ESA in the social media post.

The space agency further noted that a “horseshoe-shaped channel” in the lower plains may once have carried lava, and perhaps water as well, hinting at a more complex past.

“With only a few small craters, this surface is geologically young – perhaps just tens of millions of years old – a blink in Mars’ 4.6-billion-year history,” said ESA.

Source: https://www.ndtv.com/science/astronauts-share-images-of-volcano-on-mars-reveal-stunning-details-of-frozen-lava-rivers-9643713?pfrom=home-ndtv_lateststories

 

5 Best Google Pixel 10 Alternatives You Should Consider Right Now

The Google Pixel 10, priced around Rs 80,000, features a triple rear camera setup but with reduced sensor capabilities compared to its predecessor. For those seeking alternatives, five noteworthy options are highlighted.

Google Pixel 10
Google Pixel 10 launched this year at a price point near Rs 80,000. The phone gained a triple rear camera setup, up from the dual setup in its predecessor, though Google scaled back sensor capability to add the extra lens. If you are looking to buy a new phone and want Google Pixel 10 alternatives, you are in the right place. Here, we have listed the 5 best alternatives to the Google Pixel 10.

iPhone 17
Price: 12GB RAM with 256GB storage available for Rs 82,900.
Camera: Packs a dual rear camera setup, including a 48MP primary sensor and a 48MP ultra wide angle shooter. For selfies and video calls, the device houses an 18MP front snapper.

Specs: The Apple iPhone 17 gets a 6.3-inch LTPO Super Retina XDR OLED display with a 120Hz refresh rate. It runs on the Apple A19 processor alongside the Apple 5-core GPU. The device packs a 3692mAh battery along with 25W wireless MagSafe charging.

OnePlus 15

Price: 12GB RAM variant with 256GB storage available for Rs 72,999.
Camera: It sports a triple rear camera setup, including a 50MP primary sensor, a 50MP ultra wide angle sensor, and a 50MP periscope telephoto shooter. At the front, we get to see a 32MP snapper for selfies and video calls.
Specs: The OnePlus 15 includes a 6.78-inch LTPO AMOLED display with a 165Hz refresh rate. It runs on the Snapdragon 8 Elite Gen 5 processor and is based on the Android 16 operating system. It packs a massive 7,300mAh battery with 120W wired and 50W wireless charging support.
Samsung Galaxy S25

Price: 12GB RAM variant with 256GB internal storage available for Rs 80,999.
Camera: It flaunts a triple rear camera setup comprising a 50MP primary lens, a 12MP ultra wide angle lens, and a 10MP telephoto lens. We also get to see a 12MP front camera for selfies and video calls.
Specs: The Samsung Galaxy S25 brings a 6.2-inch Dynamic LTPO AMOLED 2X display coupled with a 120Hz refresh rate and 2600 nits of peak brightness. It runs on the Snapdragon 8 Elite processor. The device is powered by a 4000mAh battery along with 25W wired charging support.
iPhone 16

Price: 128GB storage variant available for Rs 66,900.
Camera: It sports a dual rear camera system – 48MP primary shooter and a 12MP ultra wide angle shooter. The selfies and video calls on this one are managed by a 12MP front shooter.
Specs: The Apple iPhone 16 gets a 6.1-inch Super Retina XDR OLED display. It is based on the iOS 18 operating system. The device runs on the Apple A18 processor based on a 3nm process. It is powered by a 3561mAh battery along with 25W MagSafe wireless charging support.

Source : https://www.timesnownews.com/technology-science/5-best-google-pixel-10-alternatives-you-should-consider-right-now-article-153151571

Computing At Light Speed: New System Performs AI Calculations In Single Flash

Laser light computing could significantly reduce AI energy consumption. (Credit: Summit Art Creations on Shutterstock)

Researchers have built a computer that performs complex AI calculations in a single pass of light, completing what today’s fastest AI chips need multiple steps to accomplish. The breakthrough promises substantial gains in parallelism and energy efficiency for AI computations.

The system, called parallel optical matrix-matrix multiplication or POMMM, performs complex mathematical operations by encoding data into laser beams and letting physics do the work. Published in Nature Photonics, the technology executes an entire matrix multiplication (the core calculation in AI neural networks) through a single propagation of coherent light. No waiting for sequential processing. Just light passing through optical elements, minimizing data movement during the core computation.

Why Light Beats Electronics for AI Calculations

The research team from Shanghai Jiao Tong University, Aalto University and the Chinese Academy of Sciences notes that current optical computing methods struggle with tensor-based tasks because they require multiple light propagations for each operation. POMMM collapses that entire sequence into a single instant.

When a GPU multiplies two matrices together, it performs thousands or millions of individual calculations in sequence. It reads values from memory, multiplies them, adds results and writes back to storage. Each step takes time. Each data movement consumes energy.

POMMM takes a different approach. It encodes one matrix into the amplitude and position of a spatial optical field, applies distinct phase patterns to different rows of data, then uses cylindrical lenses to perform optical Fourier transforms. These transforms naturally separate and combine the calculations simultaneously.

Testing showed the optical system produces results matching GPU computations with high consistency. Across matrix sizes ranging from 10×10 to 50×50 elements, POMMM maintained low average error that closely matched GPU results. The calculations happened during a single pass of light through the optical system.

Optical Hardware Runs Real Neural Networks

For practical AI applications, the researchers demonstrated POMMM running actual neural networks originally designed for GPUs. Their experimental prototype processed convolutional neural networks for image recognition, achieving 94.44% accuracy on handwritten digit classification and 84.11% on clothing item recognition. Vision transformer models showed similar performance. These tests used neural network weights trained on GPUs and deployed directly to the optical system.

The physics enabling this advantage comes from properties of light understood for over a century but not previously combined this way for computing. POMMM exploits two key properties of Fourier transforms: moving a signal in space doesn’t alter its frequency spectrum, and applying phase modulation shifts the spatial frequency. By encoding different rows of a matrix with different phase gradients, then performing optical transforms in perpendicular directions, the system makes all the partial products naturally separate into distinct spatial locations where a camera captures them simultaneously.
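The two Fourier properties described above can be checked numerically. The sketch below is only an illustration using NumPy’s discrete FFT, not the paper’s optical implementation: it verifies that shifting a signal in space leaves its magnitude spectrum unchanged, and that a linear phase ramp shifts the spectrum.

```python
import numpy as np

n = 64
x = np.random.default_rng(0).normal(size=n)

# Property 1: a circular spatial shift does not alter the magnitude spectrum.
shifted = np.roll(x, 5)
assert np.allclose(np.abs(np.fft.fft(x)), np.abs(np.fft.fft(shifted)))

# Property 2: multiplying by a linear phase ramp shifts the spectrum by k bins.
k = 3
ramp = np.exp(2j * np.pi * k * np.arange(n) / n)
assert np.allclose(np.fft.fft(x * ramp), np.roll(np.fft.fft(x), k))
```

These are exactly the mechanisms that let POMMM route each row’s partial products to a distinct spatial location on the camera.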

Inside the Speed-of-Light Computer

The experimental prototype uses spatial light modulators to encode input matrices onto a 532-nanometer laser beam, cylindrical lens assemblies to perform the parallel optical transforms and a high-resolution quantitative CMOS camera to record results. The core calculation happens during a single pass of light through the optical elements. The speed of the modulators and camera determines overall throughput.

Taking this further, the team demonstrated wavelength multiplexing for processing higher-dimensional data. By encoding the real and imaginary parts of a complex matrix onto two different laser wavelengths (540 and 550 nanometers), they performed complete complex-valued calculations in parallel. This wavelength-multiplexing capability points toward processing three-dimensional tensors—the multidimensional data arrays common in modern deep learning—through single-shot operations across multiple colors of light simultaneously.
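Why two wavelength channels suffice for a complex-valued product follows from the standard decomposition of complex matrix multiplication into real-valued products. The NumPy sketch below shows only this arithmetic identity, not the optical encoding itself:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

# Each real-valued product below could ride on its own channel;
# recombining them reconstructs the full complex result.
Ar, Ai, Br, Bi = A.real, A.imag, B.real, B.imag
C = (Ar @ Br - Ai @ Bi) + 1j * (Ar @ Bi + Ai @ Br)
assert np.allclose(C, A @ B)
```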

Energy Efficiency and Future Potential

Theoretical analysis suggests POMMM’s single-propagation architecture could outperform existing optical computing paradigms by multiple orders of magnitude in both computational parallelism and energy efficiency, particularly when implemented with purpose-built photonic hardware rather than off-the-shelf components.

“Imagine you’re a customs officer who must inspect every parcel through multiple machines with different functions and then sort them into the right bins,” says lead author Dr. Yufeng Zhang, from the Photonics Group at Aalto University’s Department of Electronics and Nanoengineering, in a statement. “Normally, you’d process each parcel one by one. Our optical computing method merges all parcels and all machines together — we create multiple ‘optical hooks’ that connect each input to its correct output. With just one operation, one pass of light, all inspections and sorting happen instantly and in parallel.”

The work addresses a critical bottleneck in modern AI hardware. Neural networks generate massive data movement between processors and memory, and today’s GPU tensor cores spend substantial time and energy shuttling information back and forth. Optical computing performs calculations through physical light propagation rather than electronic data transfer, potentially reducing memory bandwidth limitations.

Current challenges include the complexity of cascading multiple optical layers for deep neural networks and the precision required in aligning optical components. The researchers found that training neural networks with POMMM-specific error characteristics can compensate for some hardware imperfections, though physical implementation still demands careful engineering.

Source : https://studyfinds.org/new-system-performs-ai-calculations-in-single-flash/

Forgot Your Android PIN, Pattern Or Password? Try These Fixes

Getting into your phone can be very tricky if you do not remember your PIN, pattern, or password. However, there are some ways in which you can try to get back into your phone, and some methods require giving up all your data. Let us dive into all the options you have to regain control over your phone once again.

Forgot Your Android PIN, Pattern Or Password? Try These Fixes | Photo: Pexels

Many of us forget our smartphone’s PIN, pattern or password. This usually happens when a phone is left unused for a long time. Even if fingerprint or face unlock has been registered, the smartphone will ask for additional authentication for added security.

Getting into your phone can be very tricky if you do not remember your PIN, pattern, or password. However, there are some ways to try to get back into your phone, and some of them require giving up all your data. Let us dive into all the options you have.

Use Extend Unlock

Android’s Extend Unlock feature is a temporary way to get into your phone and back up all your data. Extend Unlock lets you bypass the lock screen when your phone is on your home Wi-Fi or at any of the trusted locations you pre-approved earlier. The catch is that Extend Unlock must have been set up beforehand; if it wasn’t, this solution is not for you.

But if you have set it up, you can simply take your phone to a trusted preset location, and you should automatically be able to get into your phone without a PIN, pattern, or password.

If you haven’t already, go to Settings, search for Extend Unlock, and set up your trusted places. You can also turn on on-body detection, which essentially keeps the phone unlocked while it is in motion. The latter could be risky if the phone is ever stolen, so we recommend sticking to trusted places.

Also, the problem with this solution is that it will let you into the smartphone but won’t help in changing the PIN, pattern, or password you have forgotten. Therefore, you should back up all your data once you have gotten in using Extend Unlock. Furthermore, if you restart your phone, this feature won’t work.

Factory Reset

Sadly, one of the only ways you can reset your password is by doing a factory reset. This is why we recommend backing up your data after accessing your phone using Extend Unlock. You can also factory reset remotely if you do not have Extend Unlock set up, but that means losing all your data with no backup.

In any case, use Google Find Hub to factory reset remotely. Go to google.com/android/find and click on the device that is logged in with your Google account. A ‘Factory reset device’ option should show up; click on it to completely erase all data and start anew. You will then have to set a new PIN, pattern, or password, and this time we recommend storing those sensitive details somewhere safe as well.

If you are able to access your phone using Extend Unlock, you can also factory reset it from the phone’s Settings.

Source : https://www.freepressjournal.in/tech/forgot-your-android-pin-pattern-or-password-try-these-fixes

Archeologists discover an Atlantis-like metropolis at the bottom of a lake

Plato’s legend of Atlantis has come to life once again, with archaeologists from the Russian Academy of Sciences having just discovered “traces of a submerged city” destroyed by a devastating 15th-century earthquake underneath Kyrgyzstan’s Lake Issyk Kul, the eighth deepest lake in the world.

The city at the flooded Toru-Aygyr complex, which lies near the lake’s northwest point, has now been excavated by the explorers, who surveyed four underwater zones at shallow depths ranging from 3 to 13 feet around the lake’s shoreline.

Could Kyrgyzstan’s Issyk-Kul Lake be home to a real-life Atlantis in its depths?
Stockbym – stock.adobe.com

There, they found a wealth of everyday items that painted the picture of a once-thriving metropolis or “large commercial agglomeration.” Discoveries included multiple fired brick structures (one contained a millstone, which was used to crush and grind grain), caved-in stone structures and wooden beams.

In one of the zones, archaeologists also believe they’ve found what was once a public building that could have served as a mosque, bathhouse or school, also called a “madrasa.”

In the three others, remnants of some kind of burial ground, a 13th-century Muslim necropolis — a large cemetery typically belonging to an ancient city — and mudbrick structures in round and rectangular shapes were also discovered.

There were also burials found that showed evidence of traditional Islamic rituals, with the skeletons facing north. Their faces are turned toward the qibla, the direction Muslims face during their daily prayers.

“All this confirms that an ancient city really once stood here,” a representative of the Russian Geographical Society, which funded the expedition, told the Daily Mail.

The lost Toru-Aygyr settlement sat on a major section of the historical culture-accelerating Silk Road, where merchants would trade silk, spices and precious metals — not to mention thoughts and ideas — between China and the Mediterranean from the second century BC to the mid-15th century.

Experts believe the complex once thrived but went under when a “terrible earthquake” struck near the start of the 15th century, Valery Kolchenko, the expedition leader and a researcher at the National Academy of Sciences of the Kyrgyz Republic, told the Daily Mail.

Luckily, Kolchenko and his colleagues believe that the area was abandoned by residents before the natural disaster.

Source : https://nypost.com/2025/11/14/science/archeologists-discover-an-atlantis-like-metropolis-at-the-bottom-of-a-lake/

AI Catches Hidden Airway Foreign Bodies In CT Scans Better Than Radiologists

The AI is not intended to replace radiologists, but it can help them decide when a bronchoscopy is warranted. (Credit: Gumpanat on Shutterstock)

Ever had a persistent, nagging cough that just wouldn’t go away? You may have had a small piece of food stuck in your airway. It may sound like a rare occurrence, but plenty of people deal with this issue, and it often goes undiagnosed because these objects are essentially invisible on a standard X-ray. Now, new research suggests AI can help catch these tricky cases earlier.

In a study published in Digital Medicine, a research team in Wuhan, China, trained a system to spot subtle signs of a radiolucent foreign body on routine chest CT scans. In a head-to-head test, the AI found about twice as many true cases as experienced radiologists while still keeping solid precision. That’s a big difference: missing a foreign body can mean weeks of symptoms and repeated misdiagnoses.

Why ‘Invisible’ Objects Get Missed

Foreign body aspiration happens when something you swallow goes down the “wrong pipe” and lodges in your lower airway. Dense items like metal show up clearly on imaging. Common food fragments and plant material often do not. Doctors rely on symptoms, history, and CT scans, yet radiolucent objects can hide in plain sight.

Prior clinical reviews have found that a large share of adult airway foreign bodies are invisible to X-rays, that many patients are misdiagnosed at first, and that the illness can drag on for months before the real problem is identified. That delay is rough: cough, chest discomfort, repeated infections, and time away from work or daily life.

What The Team Built

The researchers combined two steps:

  1. Map the airways: a tool called MedpSeg draws a detailed 3D map of the bronchial tree from the CT. Think of it like tracing a city’s roads before looking for traffic jams.
  2. Take many “snapshots”: the system then captures 12 different views of that 3D map and feeds them to a compact image classifier (ResNet-18). Looking from several angles helps the model pick up small blockages that could be easy to miss when scrolling through slices one by one.

All of this runs on standard CT data. No special scanner protocol is required. After a quick preprocessing step to keep image quality consistent, the software segments the airway, generates the 12 views, and makes a yes/no call: likely foreign body vs no foreign body.
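The article does not spell out how the 12 per-view outputs are fused into the final yes/no call, so the sketch below is a hypothetical illustration of one common scheme: average the per-view probabilities and threshold the mean. The function name `fuse_views` and the threshold value are assumptions, not from the study.

```python
import numpy as np

def fuse_views(view_probs, threshold=0.5):
    """Average foreign-body probabilities from the rendered views and threshold."""
    return float(np.mean(view_probs)) >= threshold

# A few views see a subtle blockage strongly; most are ambiguous.
probs = [0.9, 0.8, 0.85, 0.4, 0.45, 0.5, 0.55, 0.4, 0.5, 0.45, 0.6, 0.5]
print(fuse_views(probs))  # → True (mean ≈ 0.58)
```

Averaging across views is one way a weak signal visible from only some angles can still tip the overall decision.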

AI vs. Radiologists: By The Numbers

In an independent test at a separate hospital, the AI and three board-certified thoracic radiologists read the same 70 cases without seeing bronchoscopy results. The AI showed higher recall: it correctly identified about 71% of true foreign bodies, compared with about 36% for the human readers. The AI’s overall balance of correctness—the F1 score—was also higher. Precision told a different story. The radiologists did not call any false positives, while the AI kept precision in the high-70s. In plain terms, the humans were very cautious and missed more true cases, and the AI caught many more but was willing to raise a few extra flags.
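The F1 score mentioned above is the harmonic mean of precision and recall, so the gap can be reproduced from the approximate figures reported here. The AI precision of 0.78 is an assumed value inside the stated "high-70s" range, used purely for illustration:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

ai_f1 = f1(0.78, 0.71)           # AI: assumed precision 0.78, recall ~0.71
radiologist_f1 = f1(1.00, 0.36)  # Radiologists: no false positives, recall ~0.36
print(round(ai_f1, 3), round(radiologist_f1, 3))  # → 0.743 0.529
```

The harmonic mean punishes imbalance, which is why the radiologists’ perfect precision cannot compensate for their low recall.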

That is exactly what you want from a second reader. You would rather get a careful nudge to take a closer look than send someone home with an object still lodged in the airway.

How Artificial Intelligence Could Be Used In Clinics

Today, if an X-ray does not give answers, doctors look to a CT scan. The challenge is that radiolucent objects blend in. The AI is not meant to replace expertise. It is designed as a support tool that highlights suspicious spots so a clinician can decide whether a bronchoscopy, the definitive look with a camera, is worthwhile. Earlier bronchoscopy can shorten the long, frustrating path many patients travel: cough, antibiotics, repeat visits, and only then the true cause.

Since the system analyzes standard CTs, it could fit into existing workflows. For example, run in the background, flag a case, and let a radiologist or pulmonologist make the final call. This is most helpful in settings without a specialized chest radiologist or when a second opinion would be useful.

Across three different datasets used for development and testing, the model’s accuracy hovered around 90% or better. The independent test (the toughest one) matters most for real-world trust. There, the AI’s recall advantage and higher F1 score compared with experts suggest it can reduce missed cases. Precision was a bit lower than the radiologists’ perfect score, yet still strong. In practical terms, a few more false alarms are the price of catching more real problems, and a scoped exam can settle the question safely.

What’s Next?

Every study has limits, and this one was no different. To start, the data came from three hospitals in one city; testing across many regions and scanner types would make the findings more broadly applicable. The study was retrospective, which is common early on, but future work should be prospective and multi-centered. The system also reads images alone. Adding basic clinical clues, like sudden choking, voice changes, or how long the cough has lasted, might make it even smarter. Finally, CT scans use radiation. That is acceptable for diagnosis, yet this is not a screening tool, especially for children.

Young children are more likely to put objects in their mouths. Many older adults have trouble swallowing. Both groups are at higher risk for aspiration and delayed diagnosis. Hospitals that do not have a dedicated thoracic radiologist could also benefit from an extra set of (digital) eyes. Even large centers may want a second reader for tricky cases or after-hours coverage.

The Bottom Line For Patients And Families

No one should have to wait months to figure out that a nagging cough is caused by a stuck bone fragment. This study showed an AI approach that doubled the catch rate compared with routine expert reads in a blinded test on everyday CT scans. To be clear, doctors still make the final call. The aim here is faster answers and fewer misses.

Source : https://studyfinds.org/ai-catches-hidden-airway-foreign-bodies/

Jeff Bezos’s Blue Origin Launches Huge New Glenn Rocket Carrying Twin NASA Spacecraft To Mars

Jeff Bezos’ space company Blue Origin on Thursday launched the giant New Glenn rocket from Florida on its second mission, sending two NASA satellites toward Mars and nailing the return landing of its reusable booster for the first time.

With Thursday’s launch, NASA’s twin EscaPADE spacecraft became the first science payload that Blue Origin has delivered to space for NASA or any customer.

A Blue Origin New Glenn rocket launches on NASA’s EscaPADE mission (Photo: Reuters)

The powerful two-stage rocket’s first flight since its inaugural launch in January and the successful booster landing at sea represented key milestones for Blue Origin in its quest to compete on a more equal footing with Elon Musk’s SpaceX, the world’s leading rocket-launch service.

A live Blue Origin webcast showed the rocket ascending from its launch tower through clear afternoon skies in a thunder of flames and billowing clouds of vapour moments after its seven BE-4 liquid-fueled engines roared to life.

The launch followed several days of delays due to cloudy skies and a geomagnetic storm.

Some 10 minutes after liftoff, the 17-story-tall New Glenn first-stage booster made a return landing on the deck of a barge, named Jacklyn in honour of Bezos’ mother, floating in the Atlantic, achieving for Blue Origin an important feat in reusability that was pioneered by SpaceX.

About 20 minutes later, mission control confirmed that New Glenn’s upper stage had achieved its primary mission, deployment of the EscaPADE spacecraft into outer space to embark on a 22-month voyage to Mars.

The rocket also carried a secondary payload from the satellite company Viasat that remained attached to its upper stage for a technical demonstration of an in-space relay of telemetry data above Earth. Blue Origin said the test was a success.

When the rocket made its debut flight in January, it carried Blue Origin’s own payload to space, a prototype for its manoeuvrable Blue Ring spacecraft that the company is developing for the Pentagon and commercial customers.

Public Wifis Are Not Safe And Here Is Why Google Wants You To Stop Using Them

Public WiFi networks offer convenience but pose significant security risks, according to Google. The tech giant warns users to avoid connecting their devices to public networks found in cafes, airports, and hotels, as they can be exploited by attackers to access sensitive information like banking details.

Public Wifi

Public WiFi is nothing short of a convenience in today’s day and age. Who would not love free internet on the go without worrying about anything? But the main question is whether these public WiFi networks are safe enough to keep your personal devices connected to them. Well, Google doesn’t think so, and its latest warning is proof of that. Google has issued a warning for all smartphone users not to use public WiFi available in hotel lobbies, cafes, airports, and other places.

According to Google, these networks serve as a super accessible entry point for attackers. And in these attacks, a user can end up giving away crucial details like banking credentials, private chats, and more. Google’s advisory has been published in the Android: Behind the Screen report on text-based scams.

The report highlights that public WiFi networks are prone to security flaws and could compromise connected devices. Users all over the globe have been asked to avoid public WiFi as much as possible, specifically when shopping online, banking, or accessing accounts that contain personal details.

Skyrocketing Smartphone Scam Cases On A Global Level

The warning serves as a stark reminder of the ever-increasing smartphone scam cases that have affected many users in the past. Google states that smartphone scams have become a global underground industry built to inflict massive financial losses and emotional distress on end users. As per the report, these scams cost consumers around $400 billion worldwide in 2024.

Source : https://www.timesnownews.com/technology-science/public-wifis-are-not-safe-and-here-is-why-google-wants-you-to-stop-using-them-article-153139977

When It Comes To Empathy, ChatGPT Is Acting More ‘Human’ Than Some Doctors

‘Botside’ manner may soon replace bedside manner. (Credit: Andrey_Popov on Shutterstock)

Healthcare workers and patients feel more warmth from AI-generated medical responses than from actual doctors, a surprising analysis of 15 studies shows. The largest study examined 2,164 patient interactions, with similar patterns emerging across smaller datasets.

ChatGPT and similar AI chatbots scored roughly two points higher than human healthcare professionals on 10-point empathy scales when responding to patient questions via text. AI had a 73% probability of being rated as more empathic than human practitioners in head-to-head comparisons.
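The 73% figure is a “probability of superiority”: the chance that a randomly chosen AI response outrates a randomly chosen human one. Under normality assumptions it maps to a standardized mean difference d via P = Φ(d/√2). The sketch below back-solves an illustrative d of about 0.87 to reproduce the reported probability; the review reports the probability directly, so this d is an assumption:

```python
import math

def prob_superiority(d):
    # P(X > Y) for independent equal-variance normals separated by d
    # standard deviations: Phi(d / sqrt(2)).
    # Since Phi(x) = 0.5 * (1 + erf(x / sqrt(2))), the erf argument is d / 2.
    return 0.5 * (1 + math.erf(d / 2))

print(round(prob_superiority(0.87), 2))  # ≈ 0.73
```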

“In text-only scenarios, AI chatbots are frequently perceived as more empathic than human HCPs,” study authors wrote. The meta-analysis from the Universities of Nottingham and Leicester pooled data from 13 of the 15 studies comparing AI chatbots to doctors, nurses, and other healthcare workers.

The results, published in the British Medical Bulletin, challenge long-held assumptions about human connection in medicine and run counter to a 2019 UK government report that called empathy an “essential human skill that AI cannot replicate.”

AI Shows Empathy Edge Across Medical Specialties

ChatGPT-4 outperformed human clinicians in nine separate studies spanning cancer care, thyroid conditions, mental health, autism, and general medical inquiries. For thyroid questions, the AI scored 1.42 standard deviations above human surgeons in empathy ratings. Mental health queries showed similar patterns, with ChatGPT-4 scoring 0.97 standard deviations higher than licensed mental health professionals.

Patient complaints revealed the starkest gaps. When handling grievances across hospital departments, ChatGPT-4 scored 2.08 standard deviations higher than human patient relations officers.

The AI advantage appeared consistent regardless of who evaluated the responses. When both physicians and patients reviewed the same set of answers about systemic lupus, ChatGPT-4 received higher empathy ratings from physicians. For questions about multiple sclerosis, patient representatives using a validated empathy scale rated AI responses more favorably than neurologist responses.

Studies drawing from Reddit health forums and patient portals showed similar trends. Questions ranged from interpreting blood test results to managing chronic conditions to understanding cancer treatment options. Across this variety, AI responses were more likely to be rated as warm, understanding, and considerate of patient concerns.

Dermatology provided the sole exception. In both studies examining skin-related questions, dermatologists outperformed ChatGPT-3.5 and Med-PaLM 2, though researchers couldn’t explain this specialty-specific pattern.

The Text Message Caveat

All studies evaluated text-based interactions exclusively. Even when one study converted AI responses to audio, empathy ratings came from written transcripts alone.

A doctor’s nod, forward lean, or eye contact often conveys understanding as powerfully as words. Text-based healthcare interactions represent a small portion of patient care, though their use grows with patient portals and telemedicine.

Studies also relied on proxy evaluators rather than patients receiving actual care. Healthcare professionals, medical students, patient representatives, and researchers rated empathy in responses to real patient questions. Direct patient feedback might differ, particularly since healthcare providers and patients often rate empathy differently.

Most studies used custom, unvalidated empathy scales. Raters typically scored responses on 1-5 or 1-10 scales ranging from “not empathetic” to “very empathetic.” Only one study employed the CARE scale, a validated 10-item instrument designed specifically for measuring therapeutic empathy in clinical consultations.

The studies couldn’t determine whether AI’s perceived empathy advantage translates to better health outcomes. While empathic communication has been linked to reduced patient pain and anxiety, improved medication adherence, and higher satisfaction with care, these studies measured perception rather than clinical impact.

Twenty Percent of UK Doctors Already Use ChatGPT

The research lands as AI adoption in healthcare accelerates. One in five UK general practitioners now uses generative AI tools for tasks like writing patient correspondence. Over 117,000 patients across 31 NHS mental health services have interacted with Wysa, an AI-powered digital therapist, according to Wysa’s website.

Study authors propose a collaborative model where doctors draft initial responses while AI enhances tone and empathic language, with clinicians ensuring medical accuracy. This approach could reduce physician workload while potentially improving patient satisfaction.

Empathic delivery means little if medical advice proves wrong. AI reliability concerns persist, and gains in perceived warmth could vanish if responses contain factual errors or incomplete guidance.

How the Research Was Conducted

Researchers searched seven databases for studies published through November 2024, identifying 15 qualifying studies from 2023-2024. Most used unvalidated single-item scales asking raters to score empathy from 1-5 or 1-10. Only one employed a validated instrument, the CARE scale designed for measuring therapeutic empathy.

Fourteen studies assessed ChatGPT variants (versions 3.5 or 4), while others examined Claude, Gemini Pro, Le Chat, ERNIE Bot, and Med-PaLM 2. Patient questions came from emails in private medical records, Reddit and public forums, real-time chat transcripts, and in-person reception interactions. The largest dataset included 2,164 live outpatient queries at a Chinese hospital.

Nine studies had moderate risk of bias; six showed serious risk. Common problems included curated patient queries potentially skewing results, reliance on Reddit communities where users may face barriers to formal care, and supervised AI designs where human experts reviewed outputs before release.

Source: https://studyfinds.org/empathy-chatgpt-more-human-than-doctors/

 

There are more than 100 autoimmune diseases, and they mostly strike women. Here’s what to know

Our immune system has a dark side: It’s supposed to fight off invaders to keep us healthy. But sometimes it turns traitor and attacks our own cells and tissues.

What are called autoimmune diseases can affect just about every part of the body – and tens of millions of people. While most common in women, these diseases can strike anyone, adults or children, and they’re on the rise.

New research is raising the prospect of treatments that might do more than tamp down symptoms. Dozens of clinical trials are testing ways to reprogram an immune system-gone-rogue, with some promising early successes against lupus, myositis and certain other illnesses. Other researchers are hunting ways to at least delay brewing autoimmune diseases, spurred by a drug that can buy some time before people show symptoms of Type 1 diabetes.

“This is probably the most exciting time that we’ve ever had to be in autoimmunity,” said Dr. Amit Saxena, a rheumatologist at NYU Langone Health.

Here are some things to know.

What are autoimmune diseases?

They’re chronic diseases that can range from mild to life-threatening, more than 100 with different names depending on how and where they do damage. Rheumatoid arthritis and psoriatic arthritis attack joints. Sjögren’s disease is known for dry eyes and mouth. Myositis and myasthenia gravis weaken muscles in different ways, the latter by attacking how nerves signal them. Lupus has widely varied symptoms including a butterfly-shaped facial rash, joint and muscle pain, fevers and damage to the kidneys, lungs and heart.

They’re also capricious: Even patients faring well for long periods can suddenly have a “flare” for no apparent reason.

Why autoimmune diseases are so difficult to diagnose

Many start with vague symptoms that come and go or mimic other illnesses. Many also have overlapping symptoms – rheumatoid arthritis and Sjögren’s also can harm major organs, for example.

Diagnosis can take multiple tests, including some blood tests to detect antibodies that mistakenly latch onto healthy tissue. It usually centers on symptoms and involves ruling out other causes. Depending on the disease it can take years and seeing multiple doctors before one puts the clues together. There are efforts to improve: The National MS Society is educating doctors about newly updated guidelines to streamline diagnosis of multiple sclerosis.

How the immune system gets out of whack

The human immune system is a complex army with sentinels to detect threats like germs or cancer cells, a variety of soldiers to attack them, and peacemakers to calm things down once the danger is over. Key is that it can distinguish what’s foreign from what’s “you,” what scientists call tolerance.

Sometimes confused immune cells or antibodies slip through, or the peacemakers can’t calm things down after a battle. If the system can’t spot and fix the problem, autoimmune diseases gradually develop.

Autoimmune diseases are often set off by a trigger

Most autoimmune diseases, especially in adults, aren’t caused by a specific gene defect. Instead, a variety of genes that affect immune functions can make people susceptible. Scientists say it then takes some “environmental” trigger, such as an infection, smoking or pollutants, to set the disease into motion. For example, the Epstein-Barr virus is linked to MS.

Scientists are zeroing in on the earliest molecular triggers. For example, white blood cells called neutrophils are first responders to signs of infection or injury — but abnormally overactive ones are suspected of playing a key role in lupus, rheumatoid arthritis and other diseases.

Women are at highest risk for autoimmune diseases

Women account for about 4 of 5 autoimmune patients, many of them young. Hormones are thought to play a role. But also, females have two X chromosomes while males have one X and one Y. Some research suggests an abnormality in how female cells switch off that extra X can increase women’s vulnerability.

But men do suffer from autoimmune diseases. One especially severe example, VEXAS syndrome, wasn’t identified until 2020. It mainly affects men over 50, and in addition to typical autoimmune symptoms it can cause blood clots, shortness of breath and night sweats.

Certain populations also have higher risks. For example, lupus is more common in Black and Hispanic women. Northern Europeans have a higher risk of MS than other groups.

Source: https://apnews.com/article/autoimmune-symptoms-rheumatology-diagnosis-steps-ecc5981788b598fe08d2c19a0fa1523b

The ‘Software’ Running Your Body Predates Animals By Millions Of Years

(© adimas – stock.adobe.com)

Scientists say the origins of the ‘Hippo pathway’ date back to single-celled organisms

Genetic instructions controlling how big your organs grow date back to long before animals even existed, according to research conducted on microscopic organisms considered our closest single-celled relatives.

Scientists studying choanoflagellates discovered these organisms use the very same molecular machinery to control colony size that animals later adapted to regulate organ growth. Dubbed the Hippo signaling pathway, the system was already operational before the first animals evolved. Today it helps control tissue size in animals and malfunctions in many cancers.

Thibaut Brunet, an evolutionary biologist at Institut Pasteur in Paris who led the research, compared the discovery to finding out that a smartphone’s operating system was originally written for a completely different device. The findings appear in Cell Reports.

How The Hippo Pathway Holds Clues to Animal Evolution

Choanoflagellates are single-celled organisms that occasionally form multicellular colonies called rosettes. These colonies resemble the earliest developmental stage of animal embryos, making choanoflagellates a living window into how our ancestors made the leap from single cells to multicellular life.

The research team used CRISPR gene editing to delete genes in the Hippo pathway, including one called warts. Without this gene, the choanoflagellate colonies grew to twice their normal size, with rosettes averaging about 21 cells instead of the usual 11. Some mutant colonies ballooned to 60 cells, more than twice the maximum size ever seen in normal colonies.

The mechanism behind this growth spurt turned out to be surprising. In animals, the Hippo pathway often controls tissue size by regulating cell division. But in choanoflagellates, the pathway doesn’t control how fast cells divide. Instead, it regulates the production of extracellular matrix, the molecular scaffolding that holds colonies together.

How Ancient Organisms Stuck Together

When the researchers examined the giant colonies under microscopes, they found dramatically more extracellular matrix material in the mutants compared to normal colonies. The matrix formed elaborate branching structures, creating more attachment points for cells to stick together.

RNA sequencing revealed that disabling the Hippo pathway activated genes for matrix production, including one called couscous that helps build the glycosylated proteins forming the colony’s core, along with fibrillar collagen and C-type lectins. Without Hippo pathway regulation, cells overproduced the biological glue holding them together, allowing colonies to grow larger before splitting apart.

This discovery suggests the Hippo pathway may have had an ancestral role in managing extracellular matrix in early organisms. Animals appear to have repurposed this ancient system for a related but different job: controlling tissue size by regulating cell division.

From Single Cells to Cancer Research

The connection between choanoflagellates and animals isn’t just an academic curiosity. The Hippo pathway malfunctions in many human cancers, allowing tumors to grow unchecked. Understanding how this system worked in our single-celled ancestors could offer insights into what goes wrong in disease.

The pathway’s core components (Hippo, Warts, and Yorkie) exist in choanoflagellates, in animals, and in filastereans, another close relative of animals. The genes encoding these proteins have been faithfully copied and passed down through hundreds of millions of years of evolution, accumulating modifications but never disappearing entirely. These are true homologs inherited from a common ancestor that lived before animals appeared.

The research was made possible by a new gene-editing technique the team developed specifically for choanoflagellates. Previous methods for deleting genes in these organisms were laborious and unpredictable, with success rates as low as 0.3 percent. The new approach boosted efficiency to between 40 and 100 percent among antibiotic-resistant clones, making such experiments practical for the first time.
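To see why that efficiency jump matters, note that the expected number of clones screened per successful knockout is roughly the reciprocal of the per-clone success rate. This is a back-of-envelope estimate, not a figure from the paper:

```python
# Expected number of clones to screen per successful knockout is roughly
# 1/p for a per-clone success probability p (a rough illustration, not a
# figure reported in the study).
def expected_screens(success_rate: float) -> float:
    return 1.0 / success_rate

old = expected_screens(0.003)   # at 0.3%: hundreds of clones per hit
new = expected_screens(0.40)    # at 40%: only a handful per hit
print(round(old), new)
```

At the low end of the old method, researchers would expect to screen on the order of several hundred clones for each successful edit, which is why antibiotic selection made these experiments practical.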

The technique inserts an antibiotic resistance gene into the target location, allowing researchers to use antibiotics to select only the cells where gene editing succeeded. This eliminated the need to isolate and test hundreds of individual cells to find the rare successful edits. Using this method, the team successfully knocked out five of the six genes they targeted.

The findings add nuance to how the Hippo pathway evolved. The work in choanoflagellates suggests size control through extracellular matrix management predates animals, complicating earlier ideas that the pathway’s growth-control function arose only within animals.

In filastereans, the Hippo pathway controls yet another function: cell shape and contractility rather than proliferation or matrix production. This patchwork of functions across different organisms paints a picture of a versatile genetic toolkit that evolution has repeatedly adapted for new purposes.

Source: https://studyfinds.org/the-software-running-your-body-predates-animals/

Digital platforms are a danger for democracy.

Does technological progress equal social progress? Many have lost faith in this idea in an era of hatred, “fake news” and echo chambers. Can regulation make X, TikTok and co. better?

Some believe that social media platforms are a danger to democracy. Image: Yui Mok/dpa/picture alliance

In January 2025, Elon Musk conducted an interview on X with Alice Weidel, the leader of Germany’s far-right AfD party, some regional branches of which are considered right-wing extremist by German intelligence services.

“Only the AfD can save Germany. End of story,” he said, in an undisguised act of interference by a powerful social network in Germany’s election campaign.

In Romania in 2024, the far-right presidential candidate Calin Georgescu won the first round of the elections to the surprise of many: The political outsider had not participated in any TV debates and had not invested any money in his campaign. His success came mainly through the video platform TikTok; his videos were very prominent in the feeds of many Romanians.

Suspicions quickly arose that social bots (automated accounts) and trolls (human users who are sometimes paid to act on behalf of a foreign body or government agency) must have been involved. The election was annulled. It is also known that bots and trolls have been used to manipulate public opinion in many other digital discussions and topics, such as Brexit and the COVID-19 pandemic.

Social media: Extreme positions and vocal minorities get most attention

What happens in the digital sphere can have a huge influence on public opinion. At a conference entitled “Big Tech and digital democracy: How much regulation does public discourse need?” organized by DW and the University of Cologne as part of a series of events on Global Media Law, media and constitutional law expert Dieter Dörr stated that “democracy is under serious threat.”

Established and respected media outlets are present on these platforms and use Instagram, YouTube and others as channels for their content. But there are also numerous other players. They don’t even have to be bots or trolls: There are many accounts that do not maintain certain standards and incite hatred against others, spread false claims or use artificial intelligence (AI) to manipulate and generate images and videos.

The algorithms that social media platforms use to decide which content is displayed, when, and to whom reward this kind of behavior.

“Extreme opinions, which have a wide-reaching scope, are pushed to the top,” said Dörr, explaining that this is what keeps users on platforms for longer, allowing for more money to be earned from them.

EU’s Digital Services Act offers glimmer of hope

Social media platforms have become an important, if not the only, source of information for many people. Politicians and researchers have long recognized that the power wielded by these platforms is a problem. But can anything be done about it?

The European Union (EU) has stepped up efforts in recent years to regulate the digital world, primarily through its Digital Services Act (DSA), which came into force in February 2024. It requires major online platforms and search engines such as Amazon, Google, X and Facebook to provide greater transparency and protection for users.

Renate Nikolay, the deputy director-general of the Directorate-General for Communications Networks, Content and Technology (DG CONNECT) at the European Commission, which is responsible for enforcing the Digital Services Act, says: “We are pursuing three principles: First, platforms must assess and minimize systemic risks. Second, we are strengthening users’ rights, for example by providing complaint mechanisms. Third, we demand transparency in algorithms and require platforms to give researchers access to their data.”

This sounds like a big step forward: Platforms have to provide information on their algorithms and even offer users the option to disable personalized content or advertising. After all, algorithms tend not only to disadvantage moderate and differentiated content. Ultimately, they also create filter bubbles or echo chambers, in which users are mainly surrounded by content and other users that reflect their own views. This puts them at risk of falling into a spiral of radicalization.

The TikTok algorithm is particularly notorious. A recent study by the University of Potsdam and the Bertelsmann Foundation showed that during the last German election campaign, political parties were not equally visible in the TikTok feeds of young users. Videos from official party accounts on the political fringes, especially the AfD, were played more frequently than those from the accounts of more centrist parties.

During the period under review, the AfD uploaded 21.5% of all the videos, but these accounted for 37.4% of videos that appeared in feeds. The AfD’s videos were therefore overrepresented. For its part, the center-right CDU/CSU party of Chancellor Friedrich Merz uploaded 17.1% of all party videos, but these accounted for only 4.9% of videos in feeds.
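The overrepresentation the study describes can be read as a simple ratio of feed share to upload share; the figures are the ones quoted above, and the helper function is only illustrative:

```python
# Ratio of a party's share of videos shown in feeds to its share of
# videos uploaded. A value above 1 means the algorithm surfaced the
# party's videos more often than its upload volume alone would predict.
def feed_ratio(share_uploaded_pct: float, share_in_feeds_pct: float) -> float:
    return share_in_feeds_pct / share_uploaded_pct

afd = feed_ratio(21.5, 37.4)   # roughly 1.7x overrepresented
cdu_csu = feed_ratio(17.1, 4.9)   # shown far less often than uploaded
print(round(afd, 2), round(cdu_csu, 2))
```

By this measure, AfD videos appeared in feeds at about 1.7 times the rate their upload share would predict, while CDU/CSU videos appeared at less than a third of theirs.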

When asked about this at the conference, Tim Klaws, Director of Government Relations and Public Policy for DACH, Israel and BeNeLux at TikTok, gave an evasive answer. He said that digital platforms had no interest in operating in an environment full of disinformation and populism, and were trying to minimize “fake news”, hate speech, etc. with the help of AI and their staff members.

Source: https://www.dw.com/en/digital-platforms-are-a-danger-for-democracy-what-can-be-done/a-74668315

AI is not wiping out all entry-level jobs, but it’s changing the game and fresh jobseekers need to level up

Experts say AI isn’t wiping out all junior roles, but it’s forcing fresh grads to level up and prove the one thing machines still can’t replace: human judgment.

When her six-month internship in public relations abruptly ended at the halfway mark, communications graduate K Sudhiksha, 23, wasn’t entirely surprised.

Officially, she was told it was due to a company restructuring, but she suspected that it had something to do with how her job could be done by artificial intelligence (AI).

 

While AI is not wiping out entry-level jobs across the board, its impact is most visible in routine roles. (Illustration: CNA/Nurjannah Suhaimi)

“I was spending most of my time running prompts on ChatGPT,” she told CNA TODAY, referring to the popular AI chatbot.

“We were all encouraged to do it. I could do my tasks faster, but it also made me feel creatively stunted.”

Ms Sudhiksha, who had joined the PR firm in July hoping to learn how to craft press releases and pitch news stories to the media, found that much of her work revolved around using AI tools to generate first drafts of media releases and summarise weekly news coverage for clients.

While there were warnings to carefully fact-check the output generated by ChatGPT, she said the reliance on AI made the experience feel hollow as she had hoped for a more hands-on, creative process that would let her flex her own brain muscles.

Three months into her internship, her role was made redundant, Ms Sudhiksha said.

Currently between jobs, she admitted that her experience has left her feeling pessimistic and frustrated, as she has to compete with machines: “I wish I had experienced PR before the AI era.”

For Mr Mitchell Yap, 25, a customer service specialist at a tech firm, the impact of AI on job security has also been tangible. The company recently introduced a support bot designed to handle as many customer queries as possible before transferring them to a human agent.

“As the bot improves, my team now handles only the more complex or sensitive cases, but we can’t ignore that this also means the overall workload is shrinking.”

While he is not overly anxious yet, Mr Yap admits that every new update to the bot makes him and his colleagues wonder how long their roles will stay essential.

The experiences of Ms Sudhiksha and Mr Yap reflect a growing concern among young workers and jobseekers, including those who have not even entered the job market yet: Is AI going to take away the first jobs they’ve worked so hard to qualify for?

For some, that answer is affirmative. In the legal industry, for instance, recruiters like Ms Shulin Lee, managing director of legal executive search firm Aslant Legal, have already seen how automation and AI are impacting entry-level hiring.

“In 2024, law firms prioritised mid-level to senior hires. There were almost no openings for juniors with one to two years’ experience,” she said. “It was one of the toughest years for junior hiring I’ve seen in my 15 years in recruitment.”

While AI wasn’t the only factor behind the decline – cost and competency gaps among Gen Z hires also played a role – Ms Lee recalled law firm partners telling her that AI tools can now conduct due diligence on 200 contracts in two hours, hence reducing the need for juniors.

According to data from Jobstreet by Seek, the number of entry-level postings in Singapore fell by more than 25 per cent in the first half of 2025 compared with the same period in 2024, even as total job openings rose slightly, by 4 per cent.

The data points to what Jobstreet calls a “recalibration” of the job market. Many entry-level roles are being reshaped as more companies embrace automation to handle routine tasks traditionally assigned to junior team members, said Ms Yuh Yng Chook, director of Asia sales and APAC service at Seek, which owns Jobstreet and Jobsdb.

Similarly, human resource (HR) platform Remote surveyed 250 Singapore employers in its Global Workforce Report 2025, and found that four in five had reduced the number of entry-level hires at their companies due to AI.

Eighteen per cent of Singapore firms said they had eliminated roles or reduced headcount due to AI, while another 18 per cent had hired or reassigned roles specifically to support AI-related initiatives.

Still, some experts stressed that it’s not just AI that is driving the decline in recruitment activity. Mr Lewis Garrad, partner and career practice leader for Asia at global consulting firm Mercer, said the slowdown in graduate hiring reflects both technological change and a more cautious business climate.

“AI can support and complete certain tasks, but it rarely replaces an entire job,” he said, adding that companies are automating routine parts of work while rethinking roles amid slower growth and tighter budgets.

Mr Chiew Chun Wee, regional policy and insights lead for Asia Pacific at the Association of Chartered Certified Accountants (ACCA), agreed.

“‘AI is coming for your jobs’ makes for compelling headlines, but the reality is far more nuanced,” he said.

According to Mr Chiew, most organisations are trying out tools for limited tasks such as drafting written work, transcribing meeting minutes and supporting research – not replacing entire roles.

Adoption also varies by size and sector, as smaller firms tend to be nimbler in trying out new apps, while larger ones are developing in-house tools.

“The nuance lies in how AI reconfigures work … Automating knowledge work is actually quite hard,” Mr Chiew said.

“Processes are messy and full of judgment calls. So the future of work won’t be about replacing people. It’ll be a blend of automation, augmentation and human judgment.”

That’s also why some experts believe that companies cannot afford to stop hiring young people altogether. As Ms Lee put it: “If you stop hiring young people now, you’ll be short of mid-levels later. The pipeline will dry up.”

WHERE ENTRY-LEVEL ROLES ARE DISAPPEARING

While AI is not wiping out entry-level jobs across the board, its impact is most visible in routine roles.

Jobstreet’s data shows that in Singapore, entry-level sales roles have fallen by 61 per cent and entry-level customer service positions by 45 per cent, as chatbots, automated lead-generation tools and self-service systems take over tasks once handled by new hires.

Ms Gillian O’Brien, general manager of Remote Recruit at Remote, said similar declines are appearing in customer support, software development, sales development and marketing content production. Remote Recruit is a product under the Remote brand.

“These are roles where most of the tasks, such as IT ticket triaging, entry-level coding, sales lead list building and drafting blog content, can be done by AI,” she added.

Globally, the pattern mirrors Singapore’s: In the United States, a 2025 study by ADP Research and the Stanford Digital Economy Lab found that employment among workers aged 22 to 25 in AI-exposed jobs dropped 6 per cent between late 2022 and July 2025.

Within that, junior software developers fell 20 per cent and customer-service roles 11 per cent – the very functions most easily automated.

The ripple effects are being felt far beyond frontline roles. Across industries, companies are reorganising their operations in response to the accelerating impact of AI and automation.

At Amazon, for instance, the company announced an overall reduction of about 14,000 corporate roles in October as part of efforts to “reduce layers” and “increase ownership”.

In a note to employees, Ms Beth Galetti, Amazon’s senior vice-president of people experience and technology, called AI “the most transformative technology since the internet”, saying it enables companies to innovate faster and must be met with leaner, more agile structures.

While she did not link the layoffs directly to AI, her comments reflect how major firms are reorganising to stay competitive in an AI-driven economy.

Source: https://www.channelnewsasia.com/today/big-read/ai-junior-entry-level-jobs-young-workers-5449836

Expert Panel Asks Serum Institute To Tighten HPV Vaccine Trial Before Testing On Women Of Childbearing Age

Committee seeks stronger trial design, urging SII to sharpen objectives and raise scientific standards for proving CERVAVAC’s non-inferiority to Merck’s Gardasil

The panel wants the company to collect blood samples before giving the third vaccine dose. (Representational image)

India’s expert committee on vaccines has asked the Serum Institute of India (SII) to revise its Phase III trial plan before moving ahead with tests of its cervical cancer vaccine in women of childbearing age. The vaccine is currently approved only for girls and young women aged 9 to 26 years.

The committee has suggested key changes in the study design, including strengthening the trial’s objectives and tightening the scientific criteria for proving non-inferiority against Merck’s Gardasil.

The Pune-based vaccine maker had presented the protocol of its proposed study titled, “A Phase-III, double-blind, randomized, active-controlled, multicentric clinical trial to evaluate the immunogenicity and safety of qHPV Vaccine (CERVAVAC) administered intramuscularly in women aged 27 to 45 Years as compared to Merck’s HPV 6/11/16/18 Vaccine (Gardasil).”

According to the minutes of the meeting of the subject expert committee (SEC), seen by News18, “the study vaccine is already approved in the age group of 9-14 years (male and female), at two-doses schedule (0 and 6 months) and for age group of 15-26 years (male and female), at three-dose schedule (0, 2 and 6 months) for prevention of the disease caused by Human Papilloma Virus types 6, 11, 16 and 18.”

CERVAVAC, developed by the Serum Institute in collaboration with the Department of Biotechnology, is India’s first indigenously made quadrivalent HPV vaccine, protecting against HPV types 6, 11, 16, and 18—the strains responsible for the majority of cervical cancer cases. If approved for the expanded age group, the vaccine could mark a significant step in widening India’s cervical cancer prevention programme.

What is the main goal?

The main goal of the study is to show that women who get CERVAVAC develop an immune response against Human Papillomavirus (HPV) types 16 and 18 that is just as strong as the response seen in women who get Merck’s Gardasil. A secondary goal is to check if the immune response against HPV types 6 and 11 with this vaccine is also as good as that with Gardasil.

According to the proposal, “the primary objective of the present study is to demonstrate that the immune response to HPV types 16 and 18 among women receiving CERVAVAC is non-inferior to that in the women receiving Gardasil and one of the secondary objectives is to demonstrate that the immune response to HPV types 6 and 11 among women receiving CERVAVAC is non-inferior to that in the women receiving Gardasil.”
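Non-inferiority is a statistical claim about immune responses. The article does not state the trial’s actual statistical criteria, but such comparisons are commonly made on geometric mean titers (GMTs) of vaccine-induced antibodies; the sketch below uses entirely hypothetical titer values and an illustrative margin of 0.5:

```python
import math

# Illustrative non-inferiority comparison on geometric mean titers (GMTs).
# All titer values and the 0.5 margin are hypothetical; the source does
# not report the trial's statistical criteria or any data.
def geometric_mean(titers):
    """Geometric mean: exponential of the average log titer."""
    return math.exp(sum(math.log(t) for t in titers) / len(titers))

candidate = [1200.0, 950.0, 1500.0, 1100.0]    # hypothetical HPV-16 titers
comparator = [1300.0, 1000.0, 1400.0, 1150.0]  # hypothetical comparator titers

gmt_ratio = geometric_mean(candidate) / geometric_mean(comparator)
margin = 0.5  # illustrative non-inferiority margin
print(round(gmt_ratio, 2), gmt_ratio > margin)
```

In an actual trial, the lower bound of a confidence interval on the GMT ratio, not the point estimate, would be compared against a pre-specified margin before non-inferiority could be declared.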

However, the panel has advised the company to strengthen the trial by upgrading some of the study goals. It said that checking how well the vaccine triggers an immune response against HPV types 6 and 11 should be treated as a main goal of the study, not just a side one. It also advised the company to measure how well the vaccine protects women from long-lasting HPV infections (types 6, 11, 16, and 18) at 24 and 36 months, and to count this as an important study goal, not just an optional or exploratory part. “The firm should include determination of immune response to HPV types 6 and 11 as primary objective instead of secondary objective,” it recommended.

It also said the company should “consider assessing the protection against incident persistent cervical HPV 6, 11, 16 and 18 infections in women receiving CERVAVAC and Gardasil at 24, 36 months as secondary objective instead of exploratory objective for serotypes 6 and 11 infections.”

Source: https://www.news18.com/india/expert-panel-asks-serum-institute-to-tighten-hpv-vaccine-trial-before-testing-on-women-of-childbearing-age-9685995.html

 

Google proposes app store reforms in settlement with ‘Fortnite’ maker Epic Games

Alphabet’s (GOOGL.O) Google has reached a comprehensive U.S. court settlement with “Fortnite” video game maker Epic Games, agreeing to Android and app store reforms aimed at lowering fees, boosting competition and expanding choices for developers and consumers.

In a joint filing on Tuesday in the federal court in San Francisco, the companies asked U.S. District Judge James Donato to consider a proposal resolving Epic’s 2020 antitrust lawsuit, which accused Google of illegally monopolizing how users access apps and make in-app purchases on Android devices.

The new Google logo is seen in this illustration taken May 13, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

Google has denied any wrongdoing throughout the closely watched litigation.

The proposal requires Donato’s approval. The judge oversaw the 2023 jury trial that Epic won, and last year he issued a sweeping injunction mandating Play app store reforms that Google said went too far, potentially harming its competitive position and compromising user safety.

Under the new proposal, Google would allow users to more easily download and install third-party app stores that meet new security and safety standards.

Developers would also be allowed to direct users to alternative payment methods, both within apps and via external web links. Google said it would implement a capped service fee of either 9% or 20% on transactions in Play-distributed apps that use alternative payment options. Those caps apply to apps first installed or updated from Google Play after October 30.

Sameer Samat, Google’s president of Android Ecosystem, said on Tuesday that the proposed changes maintained user safety while increasing flexibility for developers and consumers. Samat said Google looked forward to discussing the resolution with Donato, who is expected to meet on Thursday with lawyers involved in the case at a previously scheduled hearing.

Epic Games CEO Tim Sweeney called Google’s proposal “awesome” and said it “genuinely doubles down on Android’s original vision as an open platform.”

Google unsuccessfully challenged Donato’s injunction in a federal appeals court, which upheld it in a ruling in July. The U.S. Supreme Court last month declined Google’s request to temporarily freeze parts of the injunction.

Tuesday’s court filing from Google and Epic asked Donato to modify his injunction, while keeping many parts of it intact. The proposal keeps in place a three-member technical committee to review disputes over implementing the injunction.

Source: https://www.reuters.com/sustainability/boards-policy-regulation/google-proposes-app-store-reforms-settlement-with-fortnite-maker-epic-games-2025-11-05/

Google Maps Will Soon Watch The Road For You, But Only In Select Cars

Google says the feature is designed to make navigation more accurate, stress-free, and safer, especially on busy highways or in unfamiliar areas

Google Maps

Google Maps is taking its navigation experience to the next level. The tech giant has announced a new feature called Live Lane Guidance, which uses artificial intelligence (AI) and car sensors to tell drivers exactly when and where to change lanes, almost like having a co-driver who’s always alert.

However, there’s a catch. This feature will be available only in vehicles with Google’s built-in system and not on smartphones. It will launch first in the Polestar 4 in the United States and Sweden, before expanding to more cars and regions in the coming months.

How It Works

Unlike the regular lane guidance on the Google Maps app — which simply shows lane arrows on the screen — this new version will actually detect your car’s position on the road using its front-facing camera and sensors. AI then analyses the road markings and signs in real time, integrating them with Google Maps navigation.

So, if you’re in the wrong lane for an upcoming turn or exit, the system will alert you through both visual and audio warnings — guiding you safely to the correct lane.

For example, if you’re stuck in the left lane but your exit is on the right, the car will detect that and tell you exactly when to move.

Source: https://www.timesnownews.com/technology-science/google-maps-will-soon-watch-the-road-for-you-but-only-in-select-cars-article-153106645
