

26.02.2025 11:30
📱 Top 3 New TED Talks on AI This Winter
We've selected the standout TED talks on AI that came out this winter.
1️⃣ The Mind-Reading Potential of AI
⏰ 16 minutes
🎙 Speaker: Chin-Teng Lin
A leading researcher in AI and brain-computer interfaces demonstrates how AI is learning to read thoughts and convert them into text. Lin thinks this technology will change how people talk to each other by letting them share words without speaking or moving.
2️⃣ AI Agents: The Scientist's New Superpower
⏰ 17 minutes
🎙 Speaker: Stefan Harrer
The Director of AI for Science at Australia's leading research agency, CSIRO, explains how generative AI is becoming a true partner for scientists. AI helps analyze biological data, design new proteins, and accelerate discoveries.
3️⃣ How AI Will Answer Questions We Haven't Thought to Ask
⏰ 12 minutes
🎙 Speaker: Aravind Srinivas
The founder of Perplexity explains how AI is reshaping the way we search for knowledge, turning it into a conversation where every answer leads to new questions. Srinivas believes AI makes knowledge accessible to everyone and helps us ask deeper, more meaningful questions about the world.
More on the topic:
➡️ Top TED Talks on AI in Art
➡️ Fascinating Lectures on LLMs and AI Agents
#AITED @hiaimediaen
24.02.2025 07:29
📺 Winners of the 2nd Season of the AI Film Festival Project 0dyssey by ElevenLabs Announced
The online festival's organizers increased the number of categories. The new list includes Social Media, Community Choice, and Sponsor's Picks. Civitai, Kling, and Viggle support the competition.
Participants were required to use at least one AI tool, which didn't have to be one developed by ElevenLabs.
Here are some of the most exciting films in the new categories.
👍 Social Media
Silver Winner: DNA, a slow-paced video about generational links and the value of traditions, produced by Sia Rekh Hu, an online film school that specializes in cinematic AI videos.
🖥 Rendering & VFX
Mistake, a post-apocalyptic film, won first place in the technical category with its trademark GTA-style visuals.
💰 Marketing & Advertisement
Demonopoly is a satirical spin on Monopoly, a game (as you may know) that frequently ends in player conflicts. Unfortunately, this version of the game does not exist (yet).
🏆 Kling's Choice
A moving video letter from director Danny Zeng to their grandmother, who is suffering from dementia.
🙌 Community Choice
"The Audience Award" went to the short film Massive Invasion, about a girl trying to save a puppy from a virus that turns animals into aggressive mutants.
➡️ You can watch the rest of the films on the festival's website.
More on the topic:
👉 Translating videos into 29 languages with the original speakers' voices — a neural network by ElevenLabs
👉 Experiment: Can AI Voice the Simpsons?
#news #cinema @hiaimediaen


21.02.2025 10:59
🪓 AI True Crime: Why Do Millions Believe AI-Generated Crime Videos?
In recent years, the true crime genre—content that tells real-life crime stories—has skyrocketed in popularity. Researchers attribute this rise to various factors, from subconscious preparation for dangerous situations to a deep interest in criminal psychology and motives.
A YouTuber known as Paul decided to take an unconventional approach by co-writing his content with AI. Initially, he doubted that people would watch true crime videos if they knew the stories were entirely fake. However, his videos quickly went viral. Thanks to ad revenue, Paul could make content creation his full-time job.
🎞 How Did Paul Create His "True Crime" Stories?
Paul devised the core plot on his own before using ChatGPT and AI visual tools to bring his story to life. He typically created one or two videos per week, spending around 2.5 hours on each.
In an interview with 404 Media, Paul described himself as a director and his work as an "absurdist form of art." To hint at the stories' artificial nature, he gave characters unusual names and inserted weird details. However, his attempt to make viewers question the authenticity of his videos failed since commenters took the stories seriously. Some even reached out to The Denver Post to ask why a particular "murder" hadn't been covered in the news.
⛔️ YouTube's Response
When a journalist from 404 Media asked YouTube for a comment, the platform responded by deleting Paul's channel for "multiple violations of community guidelines," including its child safety policy.
However, many AI-generated true crime channels still exist on YouTube. One example is Hidden Family Crime Stories, which has 16,000 subscribers, and its most popular video has 1.2 million views. Based on the comments, viewers appear to be more concerned with the moral and ethical implications of the stories themselves than with the use of AI to create them.
Would you watch AI-generated "true crime"?
👍 — yes, if the story is interesting
🎃 — no, what's the point if it's fake?
🙈 — I don't watch true crime at all!
#news @hiaimediaen
20.02.2025 07:02
🎨 Christie's Hosts Its First AI Art Auction—But Not Everyone Is Happy
The auction house Christie's is launching an auction-exhibition dedicated exclusively to AI-generated artworks. The event, titled "Augmented Intelligence," will take place from February 20 to March 5 in New York and online. The collection includes 20 lots with starting prices ranging from $15,000 to $250,000.
🎨 What's on the Auction Block?
The sale will feature works by Refik Anadol, known for his interactive data-driven installations; Harold Cohen, a pioneer of AI art who began experimenting with algorithms in the 1960s; and Pindar Van Arman, who develops robotic systems that mimic the painting process. A quarter of the lots are digital works, including NFTs, while the rest are physical pieces, such as sculptures, paintings, drawings, and light installations.
One of the most striking exhibits is Alexander Reben's 3.6-meter-tall robot, which will paint a new section of a canvas each time a bid is placed on it.
🔥 Artists Protest
Not everyone is happy about Christie's initiative. Nearly 4,000 artists have signed an open letter demanding that the auction be canceled. They claim that the AI models used to create the artworks were trained on copyrighted works without permission. The protesters argue that AI developers exploit human artists by using their work without consent or compensation. Among the signatories are artists Kelly McKernan and Karla Ortiz, who have been suing Stability AI, Midjourney, Runway, and other companies since 2023 over these issues.
A spokesperson for Christie's stated that "in most cases" the AI used to create the auction's artworks was trained on the artists' own datasets. The auction house emphasizes that the featured artists are not just using AI but actively integrating it into their creative process.
More on AI Art:
➡️ Botto—the AI artist whose works sell for millions
➡️ Interesting AI artists to follow
#news #art @hiaimediaen






18.02.2025 06:59
✖️ Elon Musk's xAI Releases Its Grok 3
Today, Elon Musk and the xAI team unveiled the new Grok 3 model. It outperforms OpenAI's o3-mini-high in benchmarks and has taken the top spot in the LMArena ranking with 1402 points.
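For a sense of scale: LMArena scores are Elo-style ratings, so the gap between two models maps to an expected head-to-head win rate. A minimal sketch of that mapping (the 1380 rating for a runner-up here is hypothetical, for illustration only, not a published score):

```python
# Expected head-to-head win rate under an Elo-style rating system,
# the scheme LMArena-like leaderboards use.
def expected_win_rate(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Hypothetical comparison: a 1402-rated model vs. a 1380-rated runner-up.
print(f"{expected_win_rate(1402, 1380):.1%}")  # ~53.2%
```

In other words, a 22-point lead translates into winning only slightly more than half of head-to-head votes—gaps at the top of the leaderboard are narrow.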
🔜 Grok 3 is available in regular and mini versions, each with a reasoning mode. The basic Grok 3 mini shows performance on par with GPT-4o, DeepSeek-V3, Gemini 2-Pro, and Claude 3.5 Sonnet in math, coding, and hard sciences.
🔜 Grok 3 Reasoning outperforms the advanced o3-mini-high and o1 from OpenAI, as well as DeepSeek-R1 and Gemini Flash Thinking. During the demonstration, the model was asked to write code modeling the flight path of a spacecraft from Earth to Mars—and Grok pulled it off.
🔜 The Big Brain mode makes the model reason more thoroughly. In this mode, Grok 3 created a simple game combining Tetris and Bejeweled.
🔜 DeepSearch: The first agent from xAI, a competitor to OpenAI's Deep Research, aims to usher in a "new generation of search engines."
🔜 It seems that censorship in Grok 3's responses will be minimal, although Musk did not state this explicitly.
"It's maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct,"says the billionaire.
Grok 3 was trained using the Colossus data center. It took 122 days to launch the first phase with 100,000 Nvidia H100 GPUs, and another 92 days to double the cluster's capacity. In total, 10 times more compute was spent on training Grok 3 than on the previous generation.
📅 Grok 3 is set to launch for X's Pro subscribers today. xAI is also introducing SuperGrok, a $30 per month subscription that offers DeepSearch, advanced reasoning, and expanded image generation limits.
In about a week, a voice mode will appear in the Grok application, and in a few months, when the new version comes out of beta, xAI plans to open-source the code for Grok 2.
🟠 Meanwhile, Musk mentioned that "if everything goes well," SpaceX will send a mission to Mars in the next "window" in November 2026—with Optimus robots and Grok.
More on the topic:
😁 Grok 2 generates images with minimal censorship
😁 An 8-hour (!) interview with Elon Musk
#news #Musk #Grok @hiaimediaen


15.02.2025 07:29
📺 A Selection of Fascinating Lectures on LLMs and AI Agents
We've compiled the most useful lectures from leading AI experts, from basic tips on working with LLMs to comprehensive machine learning courses.
Save this post so you don't lose it!
1️⃣ Lecture by Andrej Karpathy on LLMs
Difficulty: 🌟🌟🌟🌟🌟
One of the co-founders of OpenAI explains the structure and all stages of developing large language models (LLMs) that power ChatGPT and similar bots. Karpathy pays special attention to the most advanced products, such as DeepSeek-R1. This lecture is perfect for those already familiar with the terminology and looking for an in-depth overview from a top-tier specialist.
2️⃣ Lecture on AI Agents by Maya Murad from IBM
Difficulty: 🌟🌟🌟🌟🌟
Maya Murad, head of the strategic department at IBM Research, explores the evolution of AI agents and their key role in advancing the AI industry. You'll learn how agents integrate with databases and other tools to solve practical tasks.
3️⃣ Guide to Building an AI Agent by David Ondrej
Difficulty: 🌟🌟🌟🌟🌟
Blogger David Ondrej provides a step-by-step guide to creating your own AI agent—from defining the assistant's task and selecting tools to fine-tuning the model.
4️⃣ Lecture by the DeepLearningAI Project on Building AI Agents
Difficulty: 🌟🌟🌟🌟🌟
In this lecture, conducted by the founders of LlamaIndex and TruEra, you will learn how to design, evaluate, and iterate on LLM agents quickly and effectively.
More on the topic:
👍 Andrej Karpathy's lecture "How to Build GPT-2"
👍 Best TED Talks on AI of 2024
#top #lectures #education @hiaimediaen
26.02.2025 07:02
🟢 "Duolingo for Sign Language" by Nvidia
Nvidia, in collaboration with the American Society for Deaf Children and the creative agency Hello Monday, has launched Signs—an interactive platform for learning American Sign Language (ASL) and developing accessible AI applications for deaf people.
"Most deaf children are born to hearing parents. Giving family members accessible tools like Signs to start learning ASL early enables them to open an effective communication channel with children as young as six to eight months old,"said Cheri Dowling, executive director of the American Society for Deaf Children.
There have long been dictionaries and apps for learning various sign languages—such as SpreadTheSign—but they mainly provide video lessons.
Signs takes a more advanced approach: a 3D "teacher" demonstrates signs for the learner to repeat, while AI monitors the accuracy of movements and provides real-time feedback.
Currently, the platform includes 100 basic signs, but Nvidia plans to expand the vocabulary to 1,000 signs. Users can contribute by recording their own gesture demonstrations. Since ASL also relies on facial expressions and head movements, the project team is working on integrating these elements into future versions.
Sign languages are fully developed, autonomous languages that are frequently very different from their spoken "counterparts," having their own syntax, idioms, and even dialects. The Signs team is exploring ways to account for regional variations and slang to create a richer and more inclusive database.
Around 500,000 people use ASL, and millions more use other sign languages around the world.
➡️ Try Signs here—no registration needed.
More on the topic:
➡️ "The World Needs Robots": Nvidia CEO Jensen Huang
➡️ AI for Interview and Negotiation Training
#news #nvidia @hiaimediaen


23.02.2025 08:33
📣 Hello everyone! Our Sunday digest features the most exciting AI news from Week 8, 2025.
▎GROK 3 RELEASED
❎ Elon Musk's xAI has launched Grok 3, the latest version of its chatbot. The base model rivals GPT-4o, and the reasoning model matches OpenAI's o3-mini-high. It's free for a short time.
▎SAVE THIS — IT'S HELPFUL
🖱 Perplexity Deep Research: A cutting-edge AI agent for deep web exploration is now live on @GPT4AgentsBot
▎TO READ
🤩 Evo 2: A breakthrough AI model for DNA analysis and modeling.
🔄 Microsoft unveils Majorana 1: A revolutionary quantum chip built on a newly discovered state of matter.
🌎 Scientists simulated the impact of the 500-meter asteroid Bennu on Earth.
🦾 Japanese engineers created a biohybrid robotic hand using tendon-like human muscle tissue.
🤖 NEO Gamma: The most human-like robot yet, created by 1X.
🪓 AI True Crime: Why millions are captivated by AI-generated crime videos.
🧠 Echo Neurotechnologies, a brain-computer interface startup, secures $50 million in funding.
🎬 The screenwriter of "Taxi Driver," Paul Schrader, thinks AI is smarter than him—and more useful than film executives.
🎨 Artists fight back against Christie's AI art auction-exhibition.
👶 Why scientists believe AI needs a childhood to grow smarter.
🖥 Hackers cracked Claude's new security in just five days, and Anthropic paid $55k.
👎 Most people can't recognize deepfakes, researchers found. Try it yourself!
▎TO WATCH
📱 Noam Brown, a leading research scientist at OpenAI, on inventing "reasoning" AI models and their impact on the industry's future.
Have a great Sunday!
#AIWeek @hiaimediaen


21.02.2025 06:59
😖 Evo 2: The AI Model That Understands Life
Nvidia has unveiled Evo 2, a revolutionary open-source AI model for decoding the "instruction manual of life."
The new model can analyze the structure of DNA, RNA, and proteins. It could help scientists predict the impact of mutations on organisms, identify genes important for survival, and design genomes.
Evo 2 can predict genes associated with cancer risks or trace evolutionary relationships between species, among other things.
The dataset included information on 128,000 genomes of various organisms—from bacteria and archaea to animals and fungi. The total volume of the dataset is equivalent to stacking 3 million copies of War and Peace.
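As a rough back-of-the-envelope check of that comparison (assuming roughly 3 million characters per copy of War and Peace, a commonly cited figure):

```python
# Back-of-the-envelope check of the "3 million copies of War and Peace" analogy.
chars_per_copy = 3_000_000   # assumed length of one copy, in characters
copies = 3_000_000
print(f"{chars_per_copy * copies:.1e} letters")  # ~9.0e+12
```

That lands in the trillions of DNA "letters" (nucleotides)—the scale at which genomic foundation models are typically trained.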
Evo 2 was trained using Nvidia's cloud platform on a cluster of 2,048 Nvidia H100 GPUs. The model was developed by genetic scientists at Stanford and the Arc Institute. OpenAI co-founder and president Greg Brockman also participated in the project.
"Deploying a model like Evo 2 is like sending a powerful new telescope out to the farthest reaches of the universe. We know there's immense opportunity for exploration, but we don't yet know what we're going to discover,"—said Dave Burke, Arc's chief technology officer.
🔬 Where else might Evo 2 be useful?
⚪️ One human gene contains thousands of nucleotides (building blocks of DNA). Evo 2 can process long sequences of genetic information in a short time. This will help scientists understand the connections between genes and cells, including predicting dangerous mutations and developing targeted treatments for complex diseases.
⚪️ The new AI tool could be useful in agriculture to create more climate-resilient crops and protect vulnerable ecosystems.
⚪️ The model's ability to identify new molecules and create new materials will be useful in various industries, such as the development of biodegradable materials.
More on the topic:
🧬 The Largest AI Model for Protein Design
🧬 AI to Search for New Antibiotics
#news #nvidia #science @hiaimediaen


19.02.2025 10:59
👎 Only 0.1% of People Can Recognize Deepfakes
People are practically incapable of distinguishing real images or videos from those generated by artificial intelligence, experts from iProov, a British company developing biometric authentication technologies, have recently found.
The company tested 2,000 UK and U.S. consumers, offering them a quiz with a series of real and synthetic photos and videos. Each participant had to distinguish the real content from the fake.
Only 0.1% of respondents got every answer right. At the same time, almost a third of people aged 55-64 and 39% of those aged 65 and older had never even heard of deepfakes.
Deepfake videos proved more challenging to identify than fake images, with participants 36% less likely to correctly identify a synthetic video than a synthetic image. Still, more than 60% of the study participants—especially young people between 18 and 34—remained overly confident in their deepfake detection skills, regardless of the experiment's results.
"This study shows that organizations can no longer rely on human judgment to spot deepfakes and must look to alternative means of authenticating the users of their systems and services,"says professor Edgar Whitley, a digital identity expert at the London School of Economics.
👹👹 The boom of AI-generated content, in turn, has led to reduced trust in online information—almost half of those surveyed trust social media less after learning about deepfakes. Even so, this does not seem to help much—only 11% of participants say they critically analyze the source and context of information to determine if it's a deepfake. As a result, most people not only risk believing the fakes but also spreading them further.
➡️ Try taking the iProov quiz yourself here—there are only 10 questions.
Share your results in the comments!
More on the topic:
👁️ Quiz: Can You Tell AI-Generated Videos from Real Ones?
👁️ How to Protect Yourself from AI Scams
#news #deepfake #quiz @hiaimediaen


17.02.2025 10:59
🧠 New Neuralink Competitor Secures $50M Investment
Andreessen Horowitz (a16z), a venture capital firm, has invested $50 million in Echo Neurotechnologies Corp., which is developing a brain-computer interface. This is a16z's first entry into the brain implant sector, even though Echo Neurotechnologies has yet to demonstrate a working prototype.
Neurosurgeon Edward Chang, chairman of the neurosurgery department at the University of California, San Francisco, leads the startup. In 2023, Chang led the development of an interface that allowed a disabled woman to "speak" by analyzing her brain waves with AI.
Using this digital assistant, the patient reached a speaking rate of 80 words per minute, comparable to that of an average speaker. In contrast, her previous speech computer let her type only 14 words per minute.
Neither a16z nor Echo Neurotechnologies has disclosed further details, but the startup is actively hiring. Its website hints that the project launch is coming "soon."
The first moderately successful commercial neuroimplant project was Neuralink, which Elon Musk introduced in 2017. The company currently has three patients with implanted chips.
Before Neuralink, brain-computer interfaces were mostly the subject of academic research. However, Musk's venture has spawned competition, with companies like Paradromics and Precision Neuroscience entering the market. Some are proposing ultra-revolutionary approaches—recently, we covered the startup Science, which plans to use living neurons instead of traditional electrodes for its interface.
More on the topic:
➡️ Life of the First Neuralink Patient
➡️ A Neuralink Patient Learned to Control a Robotic Arm
#news #science #neuralink @hiaimediaen






14.02.2025 13:13
💗 Valentine's Day Video Effects from Pika AI
Hi everyone! Just today, we rolled out new video generation services from Pika AI in our bot, and we already have an update!
To celebrate Valentine's Day, Pika AI has introduced 6 fresh effects that bring your photos to life ⤴️
How to Use:
➡️ Go to @GPT4Telegrambot
1️⃣ Purchase the "Video" package in the /premium section.
2️⃣ Send the /video command to the bot, select "Effects," and upload a photo.
Within a minute, Pika will transform your image into a cute or funny video that can become a heartfelt greeting for your loved one.
Have a wonderful holiday 🌹
25.02.2025 10:59
🏙 Toyota Is Building a "City of the Future" on Its Former Factory Site
The new town, Woven City, is located near Mount Fuji on the grounds of the Toyota automobile plant that closed in 2020.
The name alludes to Toyota's origins as the manufacturer of automatic looms before building cars. It also symbolizes the weaving together of new ideas: Woven City will become an incubator for startups and innovations.
AI will oversee buildings and infrastructure, robot companions will help residents with various tasks, and only eco-friendly self-driving vehicles will operate on the streets.
"Our residents will include: Toyota employees and their families, retired people, retailers, visiting scientists, industry partners, entrepreneurs, academics… And, of course, their pets! For example, my favorite horse named Minnie!"says Toyota Motor chairman Akio Toyoda.
His son, Daisuke Toyoda, manages this project. He notes that Woven City is an experimental test course for new mobility technologies. It includes three types of streets: pedestrian-only streets, streets where pedestrians and personal mobility devices coexist, and streets dedicated to automated mobility. Electric Toyota minibuses will serve as the main city transportation, along with e-VTOLs, the flying taxis from the American company Joby Aviation. City logistics will run underground: all the buildings are connected by underground passageways, where autonomous vehicles will travel around collecting garbage and making deliveries.
The construction of Woven City cost around $10 billion. The Danish design bureau Bjarke Ingels Group was responsible for the architecture.
The city's first residents—100 Toyota employees—will move into it this year. Eventually, its population will grow to 2,000 people. The apartments will not go on sale; the city will open to tourists in 2026.
More on the topic:
🚕 In Which Countries Does Robotaxi Operate, and How Safe Is It?
🚕 Elon Musk Unveils the Cybercab Robotaxi
#news #future #cars @hiaimediaen


22.02.2025 07:29
👐 Why Teach AI Models to Think Before Responding?
Noam Brown, a leading research scientist at OpenAI, shared in his TED talk how the concept of "thinking AI models" came to life, forming the foundation for models like o1 and o3, and why this approach represents the future of AI development.
Highlights:
📈 The primary driver of progress in AI has been scaling data and computational resources during the training of new models.
📈 During his PhD, Brown developed an AI model for playing poker. However, his model (the best at the time) lost to four of the world's top players during a competition with a $120k prize pool.
📈 During training, the model played a trillion hands of poker. It made decisions in just 10 milliseconds in real games, regardless of the situation. Meanwhile, humans, who had played 100k times fewer hands in their lifetimes, took time to think before making decisions.
📈 Noticing this, Brown gave the model 20 seconds per decision instead of 10 milliseconds. This led to a performance improvement equivalent to increasing the data and training time by 100k times—see the toy sketch after the highlights below. "When I saw the result, I thought it was a mistake," Brown admitted.
📈 The researchers decided on a rematch with an increased prize pool of $200k. The updated model triumphed over the same players, surprising the poker community, the AI industry, and even the developers themselves. "The betting odds were about 4 to 1 against us. After the first three days of competition, the betting odds were still about 50/50. But by the eighth day, you could no longer gamble on which side would win—only on which human would lose the least," Brown recounted.
📈 Similarly, IBM's chess computer Deep Blue and Google's Go-playing model AlphaGo didn't act instantly either—they thought before each move.
📈 There are two ways to create new models: training on increasingly large datasets or scaling the model's reasoning system. OpenAI researchers chose the latter path when creating o1—the first in a series of neural networks designed to think before responding.
📈 o1 takes more time and costs more to produce answers than other models. However, according to Brown, this is justified for solving fundamental problems, such as finding a cure for cancer, proving the Riemann hypothesis, or designing more efficient solar panels.
📈 Brown emphasized that while scaling pretraining data is approaching its limits, scaling reasoning capabilities is just beginning.
"There are some people who will still say that AI is going to 'plateau' or 'hit a wall.' To them, I say: 'Want to bet?'" Brown remarked.
📱 You can watch Noam Brown's full TED talk here.
More on the topic:
🔘 Ilya Sutskever: "Data is fossil fuel for AI"
🔘 The biography of Demis Hassabis, CEO of Google DeepMind
#OpenAI #AITED @hiaimediaen


20.02.2025 14:29
🖱 Introducing Perplexity Deep Research: Your AI-Powered Web Exploration Companion
We've supercharged @GPT4AgentsBot with Perplexity Deep Research — a cutting-edge AI agent that transforms how you gather, analyze, and present information. Say goodbye to hours of manual research and hello to precision-driven insights sourced and cited in minutes!
Why You'll Love It:
✅ Instant Expertise: Perfect for essays, financial analysis, market reports, or journalism. The AI dissects complex queries, cross-references credible sources, and delivers structured reports.
✅ Time-Saving Power: Automate tedious research — ideal for content creators, professionals, and students.
How It Works:
➡️ Go to @GPT4AgentsBot
1️⃣ Navigate to the /premium section to choose your plan.
2️⃣ Activate Deep Research with the /research command and ask your question. In 2 minutes, you'll get your answer!
The Lite plan includes 20 queries per day; the Max plan, 50.
#Perplexity @hiaimediaen


19.02.2025 06:59
🦾 Japanese Scientists Created a Biohybrid Robotic Hand
Engineers from the University of Tokyo have built a robotic hand with tendon-like human muscle tissue.
On a 3D-printed plastic base, they developed an 18-cm-long hand, with thin strings of lab-grown muscle tissue bundled into sushi-like rolls to give the fingers enough strength to contract.
The muscles contract through electrical stimulation, mimicking nerve impulses. The biohybrid hand can gesticulate, grab, and move objects like pipettes.
However, the key advantage is the hand's ability to bend each finger individually and at multiple points. For example, the hand can do a "scissor" gesture ⤴️
While this gesture is very simple for a normal human hand, oddly enough, it's quite difficult for even the most advanced bionic prostheses because they rarely control fingers independently.
After about 10 minutes of use, the hand shows signs of fatigue, yet it recovers within just one hour of rest. "Observing such a recovery response, similar to that of living tissues, in engineered muscle tissues was a remarkable and fascinating outcome," noted professor Shoji Takeuchi from the University of Tokyo.
Japanese scientists have been conducting such experiments for about 10 years, but until now, biohybrid devices have been much smaller (about 1 cm) and far less mobile.
Such technology has the potential to advance biohybrid prosthetics. It could also aid drug testing, help develop new surgical techniques, and better understand how muscle tissues work in complex systems.
More on the topic:
🤖 "The World Needs Robots": Nvidia CEO Jensen Huang
👉 A Neuralink Patient Learned to Control a Robotic Arm
#news #robots #science @hiaimediaen


17.02.2025 07:00
🌎 Humanity Will Survive an Asteroid Collision
In 2182, the asteroid Bennu might collide with Earth. The European Space Agency estimates the catastrophe probability at 0.037%, comparable to the chance of flipping a coin 11 times in a row with the same outcome. Yet the asteroid 2024YR4, which has received significant media attention in recent days and currently holds the highest threat rating, has a slightly over 2% chance of impacting Earth in 2032.
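A quick sanity check of that coin-flip analogy (rough, order-of-magnitude arithmetic only):

```python
# Rough check: 0.037% vs. one specific outcome on 11 straight coin flips.
p_bennu = 0.00037         # ESA's collision probability estimate
p_streak = 0.5 ** 11      # e.g., heads 11 times in a row
print(f"11-flip streak: {p_streak:.3%}")  # ~0.049%
print(f"Bennu impact:   {p_bennu:.3%}")   # 0.037%
```

Same order of magnitude, so the analogy holds as a rough comparison.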
For comparison, the asteroid Bennu's diameter is 484 meters (5-10 times larger than 2024YR4), making it the fifth largest of the known space threats today. Using the supercomputer Aleph, South Korean scientists have simulated the effects of such a collision on Earth's climate and ecosystems.
A Bennu-type asteroid's impact would inject up to 400,000 tons of dust into the upper atmosphere. This would lead to dramatic climate change: up to a 4℃ drop in the Earth's surface temperature, a 15% reduction in global mean rainfall, and the depletion of one-third of the ozone layer. The dust would block sunlight from reaching the Earth's surface, severely hampering photosynthesis and possibly triggering a global food crisis. The "impact winter" would last at least four years.
At the same time, marine ecosystems would recover faster than terrestrial ones. Iron-rich asteroid dust would serve as fertilizer for algae, especially where the mineral is scarce: in the Southern Ocean and parts of the Pacific Ocean. The algae's growth would attract microorganisms, which might eventually help alleviate food shortages, notes study co-author Dr. Lan Dai.
Humanity will likely survive, adds her colleague, professor Axel Timmermann. On average, medium-sized asteroids collide with Earth about every 100,000 to 200,000 years, he says. "This means that our early human ancestors may have experienced some of these planet-shifting events before with potential impacts on human evolution and even our own genetic makeup," Timmermann explains.
More on the topic:
🚀 A Supercomputer Built the Biggest Simulation of The Universe
🚀 AI is Looking for a "Second Earth"
#news #science #space @hiaimediaen


14.02.202510:54
💬 AI for Interview and Negotiation Training
Do you have an interview at an international corporation or a big business pitch coming up? Simply turn on your microphone and describe your scenario to the new Tough Tongue AI to start preparing.
⚙️ How It Works
The platform offers a library of 40+ ready-made call scenarios, including job interviews at Google and Amazon, admissions interviews for top universities, workplace training, and difficult conversations with management.
💡 No need to stick to pre-set scripts—just hit Start, describe your topic, and let the AI adapt in real time.
💡 You can create your own scenarios for regular practice and share them with friends or colleagues.
💡 The service can also read a PDF of your resume and identify its strengths and weaknesses.
What Makes Tough Tongue AI Special?
The AI analyzes your responses in real time and asks follow-up questions. If you fail to mention your experience at the start of an interview, it will prompt you to elaborate.
Tough Tongue AI concentrates on key themes to provide a realistic call simulation. It adapts to different tones—acting assertive, tough, or even sensitive—depending on the situation you want to practice.
The service understands multiple languages but responds in English—perfect for language training.
More on the topic:
📍 AI for Creating and Sending Resumes
📍 Free AI Tools to Boost Your Studies
#news @hiaimediaen


25.02.202506:59
🖥 Anthropic Releases Claude 3.7 Sonnet
Startup Anthropic has released a new version of its AI model, Claude 3.7 Sonnet. The company skipped version 3.6 in its numbering—its place is "unofficially" occupied by the intermediate Claude 3.5 Sonnet (new).
Main Features:
➡️ Claude 3.7 Sonnet is the first "hybrid" model that can respond quickly in standard mode and reflect before answering, achieving better results in mathematics, programming, physics, and other tasks.
"Just as humans use a single brain for both quick responses and deep reflection, we believe reasoning should be an integrated capability of frontier models rather than a separate model entirely,"developers say.
➡️ The new model outperforms OpenAI's o3-mini-high and Grok 3 by roughly 1.5x on SWE-bench Verified, a benchmark for autonomous coding, while lagging behind other reasoning models on mathematical benchmarks like MATH 500 and AIME 2024. Anthropic explicitly states that it optimized the model for real-world tasks rather than competition problems.
➡️ The developers emphasize the model's safety: its ability to resist prompt-injection attempts to take control of it in agent mode, and its low response bias.
➡️ Claude 3.7 Sonnet is available for free—you can try it here. However, the reasoning mode is only available with a subscription.
🖥 Additionally, Anthropic is launching a preview version of Claude Code, an AI coding agent. It can read, edit, test, and fix code and push it to GitHub from the command line. GitHub integration is available on the free tier.
It seems that Anthropic is no longer trying to compete with OpenAI or Elon Musk's xAI in building a universal chatbot. Instead, the company is focusing on what it does best: making the strongest model for programmers. Since the summer of 2024, Claude has remained the most popular coding assistant.
🔴 Claude 3.7 is already available on @GPT4Telegrambot. The reasoning mode will be integrated soon.
More about Anthropic:
🛑 Company founder Dario Amodei on Lex Fridman's podcast
🛑 Amanda Askell: The philosopher at Anthropic teaching AI humanity
#news #Anthropic #Claude


21.02.202515:00
👶 Why Do Scientists Want to Give Artificial Intelligence a Childhood?
The GPT-4 model has been trained on approximately 13 trillion tokens (around 10 trillion words)—thousands of times more than any human could read or hear in a lifetime. Meanwhile, by the age of two, a child knows around 300 words. And for this learning process, children don't need datasets with hundreds of thousands of examples, months of continuous training, or millions of dollars—it happens naturally in everyday life.
Unlocking the mystery of how children perceive and learn about the world could help solve one of AI's biggest challenges: teaching models to truly understand physical reality.
🍼 How Do You See the World Through a Child's Eyes?
Last year, scientists from New York University conducted an experiment in which they trained an AI algorithm to learn language the way a typical child does. An infant named Sam helped them. Over 1.5 years (from six months to two years old), Sam wore a helmet with a camera several times a week. The camera recorded everything Sam saw, heard, or said. From the hundreds of hours of footage collected, researchers selected 61 hours of video, whose audio captured around 250,000 spoken words.
Next, they tasked a "naive" neural network, one with no prior knowledge of the real world, with analyzing the audio and video recordings to find connections between words and objects.
In 62% of cases, the AI model correctly identified an object in the video based on a word prompt. For example, when given the word "cat," it identified Sam's pet cat in the footage. This performance matched an algorithm trained on 400 million pairs of images and text. In 80% of cases, the model identified learned objects in images it had never seen before.
"We've shown for the first time that a neural network trained on realistic developmental input data from one child can learn to associate words with their visual counterparts,"explains Dr. Wai Keen Wong, the study's author.
⛓️ How Can This Help Us?
The experiment demonstrates that AI models don't necessarily need massive datasets for initial training, as is currently the case with advanced algorithms.
Improving this associative approach—teaching AI not only to recognize objects but also actions (verbs) and intonations—could lead to a new type of AI algorithm capable of understanding the real world using just a camera and microphone. In the future, this could pave the way for empathetic robots based on such technology, like those seen in the animated film The Wild Robot.
More on the topic:
▶️ Why Are Scientists Teaching AI to Understand Emotions?
▶️ Amanda Askell: The Philosopher Teaching AI Humanity
#news #science @hiaimediaen


20.02.202510:31
🔨 It Took Hackers Five Days to Jailbreak Claude
Anthropic, the startup behind the Claude chatbot, announced a challenge in early February: anyone who could breach all eight levels of its new security system and get the bot to reply to restricted prompts would get $10,000. Whoever created a universal jailbreak (a single prompt template capable of bypassing all security measures) would get $20,000.
Just days before the contest, Anthropic published an article outlining its Constitutional Classifiers method, designed to protect Claude. As part of preliminary testing, 183 experts attempted to breach the system over two months, spending 3,000 hours—without success.
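In outline, the Constitutional Classifiers method wraps the model with trained input and output classifiers that screen for restricted content. The sketch below is a hypothetical illustration of that gating logic only; the real classifiers are models trained against a written "constitution" of content rules, not keyword lists.

```python
RESTRICTED_TERMS = ["synthesize nerve agent"]  # placeholder for a whole policy

def classify_input(prompt: str) -> bool:
    """Stand-in for a trained input classifier: flags prompts that
    appear to solicit restricted content."""
    return any(term in prompt.lower() for term in RESTRICTED_TERMS)

def classify_output(text: str) -> bool:
    """Stand-in for a trained output classifier; in production this
    screens the response as it streams, token by token."""
    return any(term in text.lower() for term in RESTRICTED_TERMS)

def guarded_reply(model, prompt: str) -> str:
    """Gate a model behind input and output filters."""
    if classify_input(prompt):
        return "I can't help with that."
    response = model(prompt)  # model: any callable str -> str
    if classify_output(response):
        return "I can't help with that."
    return response

# Usage with a dummy model standing in for Claude
print(guarded_reply(lambda p: "Here is a recipe for soup.", "How do I make soup?"))
```

A jailbreak, in these terms, is a prompt that slips restricted intent past both filters; the challenge required doing so across all eight levels.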
🏆 Results
Anthropic allocated a week for the challenge. After five days, 300,000 messages, and approximately 3,700 hours of collective effort, hackers found a working exploit. Out of 339 participants, four managed to bypass all eight security layers. Among them, only one team developed a universal jailbreak. To achieve this, they sent nearly 7,900 messages to the bot; according to Anthropic's estimates, the crack took around 40 hours.
In total, Anthropic will award $55,000 to the winners—two additional participants who completed all stages but weren't the first will also receive prizes.
⁉️ Why Does This Matter?
Improving AI security is an essential part of its deployment, particularly in sectors such as information security, biotechnology, and nuclear safety.
At the same time, criminals are already leveraging large language models in their activities—Europol refers to these as "DarkLLM." Meanwhile, Las Vegas police suspect that ChatGPT may have been used in planning the Cybertruck explosion in early 2025.
More on the topic:
➡️ Who is Dario Amodei: AI Optimist, Co-Author of ChatGPT, and CEO of Anthropic
➡️ Anthropic Research: How to Control the "Thoughts" of LLMs
#news #Claude @hiaimediaen


18.02.202511:25
🎬 The Screenwriter of "Taxi Driver" Thinks AI Is Smarter Than Him
Oscar-nominated screenwriter Paul Schrader, best known for his work with filmmaker Martin Scorsese (Taxi Driver, Raging Bull), shared that after interacting with ChatGPT, he realized AI is not only smarter but also more creative than he is.
At first, the 78-year-old Schrader uploaded a script he had written five years ago. He said AI's suggestions were better than anything he could come up with and more useful than any advice he had ever received from film producers.
Then, Schrader asked ChatGPT to generate a movie idea in his style and later requested scripts written in the style of Paul Thomas Anderson, Quentin Tarantino, Ingmar Bergman, and other renowned directors.
"Every idea ChatGPT came up with (in a few seconds) was good. And original. And fleshed out. Why should writers sit around for months searching for a good idea when AI can provide one in seconds?"said Schrader.
He described the experience as existential—similar to what Garry Kasparov must have felt in 1997 when he realized the Deep Blue supercomputer would defeat him in chess.
In the comments, some users suggested Schrader try Claude and praised Grok, while others criticized AI for hallucinations and poor handling of personal data.
Some argue that Schrader's experience highlights not a threat to creatives but rather the uselessness of producers (though not everyone agrees).
"ChatGPT being an AI based on a wide sum of general information, its goal is to make things as average, bland and digestible as possible(just as a studio exec)," said one user.
More on the topic:
📌 Hollywood Screenwriters' Strikes Against AI
📌 The Brutalist Might Not Win an Oscar Because of AI
#news #cinema @hiaimediaen


16.02.202508:30
📣 Hello everyone! Our Sunday digest features the most exciting AI news from Week 7, 2025.
▎OPENAI'S ROADMAP
👐 GPT-4.5 will be released in the coming weeks, said OpenAI CEO Sam Altman. And GPT-5 will launch within a few months—it will be free for everyone.
▎SAVE THIS — IT'S HELPFUL
💬 Tough Tongue AI: free chatbot for interview and negotiation training.
🔴 Video Effects by Pika AI are now available on @GPT4Telegrambot! Add objects, people, or fantastical elements to your clips.
▎TO READ
😋 Experiment: Can AI voice the Simpsons better than a professional actor?
🕶 Top 5 unique features of Ray-Ban Meta "smart" glasses.
Ⓜ️ Meta is making a significant investment in AI-powered humanoid robots.
🐦 X Money: Why is Musk building his own payment system inside X?
💲 A Musk-led consortium of investors has made a $97.4B bid to take control of OpenAI.
📸 AImagine: The AI photography exhibition opens in Brussels.
👁 How do AI hallucinations help scientists develop new drugs and train robots?
🖥 Everything we know about Safe Superintelligence Inc., the new startup launched by Ilya Sutskever, co-founder of OpenAI.
▎TO WATCH
📺 A selection of helpful lectures on LLM and AI Agents.
#AIWeek @hiaimediaen


14.02.202507:00
🎬 Video Effects by Pika AI on @GPT4Telegrambot
Hi everyone! We've launched two new video generation services from Pika AI in our bot:
🧩 Pikaddition adds anything and anyone to ANY video.
You can upload your non-AI videos and add objects, people, or fantastical elements. Mind-blowing!
💫 Pika Effects brings your images to life with various visual effects.
Upload a photo and choose any of the 16 effects. You can inflate, squish, explode, melt, and more. Pika will turn images into realistic videos.
Check out the examples above ⤴️
How to Use:
➡️ Go to @GPT4Telegrambot
1️⃣ Purchase the "Video" package in the /premium section.
2️⃣ Send the /video command to the bot and select either Pika Effects or Pikaddition. Have fun!
🔴 @GPT4Telegrambot — the #1 bot for using AI on Telegram. It can write texts and code, translate languages, solve math and physics problems, work with documents, and create images, videos, and music. 20 million users.
#PikaAI