

Hi, AI | Tech News
Technology
Media from the creators of @GPT4Telegrambot — 18 million users worldwide. We write about AI and the people behind it.
For all questions: @anveklich
News of the bot: @GPT4Telegram
Media in Russian: @hiaimedia
TGlist rating: 0
Type: Public
Verification: Not verified
Trust: Not trusted
Location: (not specified)
Language: Other
Channel creation date: Feb 06, 2025
Added to TGlist: Jul 31, 2024
Linked chat: Hi, AI | Comments (3.5K)

Subscribers: 328,184. 24 hours: 358 (-0.1%); week: 34 (0%); month: 316 (0.1%)
Citation index: 0. Mentions: 1; shares on channels: 0; mentions on channels: 1
Average views per post: 7,426. 12 hours: 7,077 (4.9%); 24 hours: 7,426 (15.3%); 48 hours: 12,794 (4%)
Engagement rate (ER): 0.75%. Reposts: 8; comments: 1; reactions: 42
Engagement rate by reach (ERR): 2.32%. 24 hours: 0.35%; week: 0.04%; month: 0%
Average views per ad post: 7,426. 1 hour: 5 (0.07%); 1–4 hours: 4,365 (58.78%); 4–24 hours: 3,607 (48.57%)
Total posts in 24 hours: 1 (dynamic: 1)

Records:
Subscribers: 329.3K (24.02.2025 08:21)
Citation index: 100 (22.08.2024 23:59)
Average views per post: 10.2K (19.01.2025 23:59)
Average views per ad post: 10.2K (19.01.2025 19:38)
ER: 1025.00% (08.02.2025 23:59)
ERR: 3.99% (16.09.2024 23:59)

07.02.202507:03
🇨🇳 Best Free Chinese AI Models
Chinese AI has gone from "ChatGPT clones" to serious Silicon Valley competition in just a few years. Here are the top models from China you should try.
1️⃣ DeepSeek: As Good as ChatGPT
The DeepSeek-V3 language model and the R1 reasoning model, based on V3, perform on par with GPT-4o and o1 from OpenAI, even surpassing them in some benchmarks. Apps available for iOS and Android.
2️⃣ Qwen: Ultimate Versatility
Qwen Chat, developed by Chinese tech giant Alibaba, is a powerful multimodal chatbot. It can recognize and generate images and videos, features built-in web search, and even includes an interactive coding sandbox. The latest model, Qwen 2.5-Max, delivers results comparable to DeepSeek-V3 and GPT-4o.
Accessible via browser.
3️⃣ Kling AI: Best for Video
Kling AI is one of the top models for video generation, rivaling Google Veo 2 and Sora by OpenAI. It offers a "virtual fitting room" feature and allows users to add objects from photos directly into videos.
🔴 How to use the latest Kling 1.6 Pro in our bot — read here.
4️⃣ Hailuo AI: Best for Voice Cloning & Dubbing
Developed by MiniMax, this AI voice model can clone a voice from just a 10-second sample and generate up to 10,000 characters of speech in 17 languages. The company also offers advanced video generation models and a text-based AI assistant.
Try it on the official website.
Other Useful Picks:
🔖 Top 5 Free AI Services for Summarizing YouTube Videos
🔖 Top 7 AI Books of 2024
#top #deepseek @hiaimediaen
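All of these models can also be tried programmatically. DeepSeek, for instance, serves an OpenAI-compatible HTTP API, so a few lines of Python are enough to query both V3 and R1. A minimal sketch, assuming DeepSeek's documented base URL and model names ("deepseek-chat" for V3, "deepseek-reasoner" for R1); the API key is a placeholder:

```python
# Minimal sketch: querying DeepSeek-V3 and R1 through DeepSeek's
# OpenAI-compatible API (pip install openai).
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",      # placeholder
    base_url="https://api.deepseek.com",  # OpenAI-compatible endpoint
)

# "deepseek-chat" serves DeepSeek-V3; "deepseek-reasoner" serves R1.
for model in ("deepseek-chat", "deepseek-reasoner"):
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": "In one sentence: what is a reasoning model?"}],
    )
    print(model, "->", reply.choices[0].message.content)
```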


05.02.202513:13
🧠 AI Usage Weakens Critical Thinking
Swiss researchers discovered that relying on AI for information and decision-making can impair critical thinking. Their study included 666 people aged 17 and older.
Young people under 25 were particularly vulnerable. They rely more on AI daily and perform poorly on critical thinking tests. Older participants, on average, used AI less frequently and performed better in analytical tasks.
However, better-educated individuals demonstrated stronger cognitive abilities regardless of their AI usage.
🧑💻 How Does It Work?
This phenomenon is known as "cognitive offloading": the delegation of thinking and problem-solving to external technologies. The more a person offloads these tasks to AI, the weaker their independent analytical skills become.
"This relationship underscores the dual-edged nature of AI technology. While it enhances efficiency and convenience, it inadvertently fosters dependence, which can compromise critical thinking skills over time," the study authors wrote.
⚡️ At the same time, previous studies show that AI can aid learning. For example, AI-powered news aggregators and personalized recommendations help users focus on relevant information.
A similar trend was observed with the "Google Effect," which emerged shortly after web search became an essential aspect of life. Instead of remembering information itself, people started remembering where to find it—a concept known as "transactive memory."
⏳ What's Next?
Researchers believe that with the right approach, AI can enhance rather than replace analytical skills. They call for educational programs to help people use AI wisely without harming critical thinking.
Have you noticed a decline in critical thinking due to AI?
🎃 — yes, I rely on bots more and more
❤️ — no, I always verify everything
🙊 — I don't trust AI with crucial tasks
#news #science @hiaimediaen


15.02.202507:29
📺 A Selection of Fascinating Lectures on LLM and AI Agents
We've compiled the most useful lectures from leading AI experts, from basic tips on working with LLMs to comprehensive machine learning courses.
Save this post so you don't lose it!
1️⃣ Lecture by Andrej Karpathy on LLMs
Difficulty: 🌟🌟🌟🌟🌟
One of the co-founders of OpenAI explains the structure and all stages of developing large language models (LLMs) that power ChatGPT and similar bots. Karpathy pays special attention to the most advanced products, such as DeepSeek-R1. This lecture is perfect for those already familiar with the terminology and looking for an in-depth overview from a top-tier specialist.
2️⃣ Lecture on AI Agents by Maya Murad from IBM
Difficulty: 🌟🌟🌟🌟🌟
Maya Murad, head of the strategic department at IBM Research, explores the evolution of AI agents and their key role in advancing the AI industry. You'll learn how agents integrate with databases and other tools to solve practical tasks.
3️⃣ Guide to Building an AI Agent by David Ondrej
Difficulty: 🌟🌟🌟🌟🌟
Blogger David Ondrej provides a step-by-step guide to creating your own AI agent—from defining the assistant's task and selecting tools to fine-tuning the model.
4️⃣ Lecture by the DeepLearningAI Project on Building AI Agents
Difficulty: 🌟🌟🌟🌟🌟
In this lecture, conducted by the founders of LlamaIndex and TruEra, you will learn how to design, evaluate, and iterate on LLM agents so you can build powerful, efficient agents quickly.
More on the topic:
👍 Andrej Karpathy's lecture "How to Build GPT-2"
👍 Best TED Talks on AI of 2024
#top #lectures #education @hiaimediaen
12.02.202511:00
🕶 What Unique Features Do Ray-Ban Meta Glasses Have?
The first Ray-Ban Meta glasses launched in 2021, and they mainly seemed useful for taking quirky photos. However, the second generation, released in 2023, brought a variety of AI-powered features that made them a genuine gadget of the future.
Here are some of the most unusual use cases for these "smart" glasses:
1️⃣ Wearable Assistant. The Live AI feature allows you to talk to your glasses, create notes or reminders, and get answers to questions about objects the gadget's camera sees. For example, you can open your fridge and ask the glasses what you can cook with the available ingredients or request a story about an interesting landmark.
2️⃣ Spy in Stylish Glasses. Harvard students turned Ray-Ban Meta into a real-time identification system. The glasses can livestream video, and an algorithm processes the stream, identifies faces in the frame, and retrieves information about them from the internet. Within seconds, the students could uncover personal details of random passersby, such as home addresses, phone numbers, and even relatives' names.
3️⃣ Assistance for the Visually Impaired. The glasses can describe surroundings, recognize signs or bus numbers, read books aloud, and even assist with cooking.
4️⃣ Real-Time Conversation Translation. Conversations are translated with a delay of just a few seconds and synced to a companion app. Currently, only four languages are supported: Spanish, French, Italian, and English.
5️⃣ Finding Your Car in a Parking Lot. The glasses can remember where you parked your car and save you from wandering around looking for it.
According to Mark Zuckerberg, Ray-Ban Meta became a massive hit for the company in 2024. The third generation of the device is expected to launch this year, with rumors suggesting it may include a display embedded in the lens.
More Stories About Smart Glasses:
🪩 Will we live in AR glasses like characters from The Simpsons?
🪩 AI glasses that recognize emotions and count calories
#news #Meta @hiaimediaen
14.02.202513:13
💗 Valentine's Day Video Effects from Pika AI
Hi everyone! Just today, we rolled out new video generation services from Pika AI in our bot, and we already have an update!
To celebrate Valentine's Day, Pika AI has introduced 6 fresh effects that bring your photos to life ⤴️
How to Use:
➡️ Go to @GPT4Telegrambot
1️⃣ Purchase the "Video" package in the /premium section.
2️⃣ Send the /video command to the bot, select "Effects," and upload a photo.
Within a minute, Pika will transform your image into a cute or funny video that can become a heartfelt greeting for your loved one.
Have a wonderful holiday 🌹
18.02.202506:59
✖️ Elon Musk's xAI Releases Its Grok 3
Today, Elon Musk and the xAI team unveiled the new Grok 3 model. It outperforms OpenAI's o3-mini-high in benchmarks and has taken the top spot in the LMArena ranking with 1402 points.
🔜 Grok 3 is available in regular and mini versions, each with a reasoning mode. The basic Grok 3 mini shows performance on par with GPT-4o, DeepSeek-V3, Gemini 2.0 Pro, and Claude 3.5 Sonnet in math, coding, and hard sciences.
🔜 Grok 3 Reasoning outperforms the advanced o3-mini-high and o1 from OpenAI, as well as DeepSeek-R1 and Gemini Flash Thinking. During the demonstration, the model was asked to write code modeling the flight path of a spacecraft from Earth to Mars—and Grok pulled it off.
🔜 The Big Brain mode makes the model reason more thoroughly. In this mode, Grok 3 created a simple game combining Tetris and Bejeweled.
🔜 DeepSearch: The first agent from xAI, the competitor for OpenAI's Deep Research, aims to initiate a "new generation of search engines."
🔜 It seems that censorship in Grok 3's responses will be minimal, although Musk did not state this explicitly.
"It's maximally truth-seeking AI, even if that truth is sometimes at odds with what is politically correct,"says the billionaire.
Grok 3 was trained using the Colossus data center. It took 122 days to launch the first phase with 100,000 Nvidia H100 GPUs, and another 92 days to double the cluster's capacity. In total, 10 times more compute was spent on training Grok 3 than on the previous generation.
📅 Grok 3 is set to launch for X's Pro subscribers today. xAI is also introducing SuperGrok, a $30 per month subscription that offers DeepSearch, advanced reasoning, and expanded image generation limits.
In about a week, a voice mode will appear in the Grok application, and in a few months, when the new version comes out of beta, xAI plans to open-source the code for Grok 2.
🟠 Meanwhile, Musk mentioned that "if everything goes well," SpaceX will send a mission to Mars in the next "window" in November 2026—with Optimus robots and Grok.
More on the topic:
😁 Grok 2 generates images with minimal censorship
😁 An 8-hour (!) interview with Elon Musk
#news #Musk #Grok @hiaimediaen
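For context on the Mars demo: plotting an Earth-to-Mars trajectory typically starts from a Hohmann transfer. Below is a small, self-contained sketch of that calculation under textbook assumptions (circular, coplanar orbits around the Sun); this is our illustration of the task, not the code Grok produced.

```python
# Back-of-the-envelope Hohmann transfer from Earth's orbit to Mars's orbit.
# Assumes circular, coplanar heliocentric orbits (a textbook simplification).
import math

MU_SUN = 1.327e20    # Sun's gravitational parameter, m^3/s^2
R_EARTH = 1.496e11   # Earth orbital radius, m (1 AU)
R_MARS = 2.279e11    # Mars orbital radius, m (~1.524 AU)

a_transfer = (R_EARTH + R_MARS) / 2                # semi-major axis of the transfer ellipse
v_earth = math.sqrt(MU_SUN / R_EARTH)              # Earth's circular orbital speed
v_perihelion = math.sqrt(MU_SUN * (2 / R_EARTH - 1 / a_transfer))
dv_departure = v_perihelion - v_earth              # burn to enter the transfer orbit

tof_seconds = math.pi * math.sqrt(a_transfer**3 / MU_SUN)  # half the ellipse's period
print(f"Departure delta-v: {dv_departure / 1000:.2f} km/s")   # ~2.94 km/s
print(f"Time of flight:    {tof_seconds / 86400:.0f} days")   # ~259 days
```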
24.02.202507:29
📺 Winners of the 2nd Season of the AI Film Festival Project 0dyssey by ElevenLabs Announced
The online festival's organizers increased the number of categories. The new list includes Social Media, Community Choice, and Sponsor's Picks. Civitai, Kling, and Viggle support the competition.
Participants were required to use at least one AI tool, which didn't have to be one developed by ElevenLabs.
Here are some of the most exciting films in the new categories.
👍 Social Media
Silver Winner: DNA, a slow-paced video about generational links and the value of traditions, produced by Sia Rekh Hu, an online film school that specializes in cinematic AI videos.
🖥 Rendering & VFX
Mistake, a post-apocalyptic film, won first place in the technical category with its trademark GTA art visual style.
💰 Marketing & Advertisement
Demonopoly is a satirical spin on Monopoly, a game (as you may know) that frequently ends in player conflicts. Unfortunately, this version of the game does not exist (yet).
🏆 Kling's Choice
A moving video letter from director Danny Zeng to their grandmother, who is suffering from dementia.
🙌 Community Choice
"The Audience Award" went to the short film Massive Invasion, about a girl trying to save a puppy from a virus that turns animals into aggressive mutants.
➡️ You can watch the rest of the films on the festival's website.
More on the topic:
👉 Translating videos into 29 languages with the original speakers' voices — a neural network by ElevenLabs
👉 Experiment: Can AI Voice the Simpsons?
#news #cinema @hiaimediaen
06.02.202510:59
🤩 A Neuralink Patient Learned to Control a Robotic Arm
Elon Musk's company Neuralink has, for the first time, demonstrated how a patient with a brain-implanted neurochip can control a robotic arm using only the power of thought ⤴️
The patient wrote the word "Convoy" with a marker—the name of the company's new research project on applying neurochips in robotics. U.S. regulators recently approved the study.
🖥 The first person with a Neuralink implant, Noland Arbaugh, already makes full use of a computer. He writes long texts and plays games on par with people using regular keyboards and mice.
👉 The Neuralink neurochip is smaller than a coin, yet it processes signals from 1,500 electrodes—and this number can be increased to 4,000. The electrodes transmit impulses from the brain's motor neurons to external devices via Bluetooth. When the patient thinks about moving their hand, the signal from their brain activates motors in the robotic arm, causing it to move.
👉 No additional devices (joysticks, cables, or control panels) are needed to operate the robotic arm.
👉 Neuralink could help people with neurological conditions (such as amyotrophic lateral sclerosis, which Stephen Hawking lived with) or severe spinal injuries control prosthetics or even a full robotic exoskeleton (like in the movie "Atlas"). The company has started accepting applications from potential patients to be included in a special registry of candidates for neural implantation.
More about brain implants:
🧠 Blindsight: How Neuralink Plans to Restore Vision
🧠 Cyberpunk in Action: How Living Neurons Work from Neuralink's Competitor
#news #neuralink @hiaimediaen
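The pipeline described here (electrode signals decoded into movement commands) is conceptually similar to the linear decoders long used in academic brain-computer interface research. A toy sketch on synthetic data, purely our illustration; Neuralink's actual decoding pipeline is not public:

```python
# Toy BCI decoder: learn a linear map from neural firing rates to intended
# 2D hand velocity, then turn new activity into an arm command.
# Synthetic data only — Neuralink's real decoder is not public.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 64, 5000

true_map = rng.normal(size=(n_channels, 2))   # hidden tuning of each channel
velocity = rng.normal(size=(n_samples, 2))    # intended velocities (training target)
rates = velocity @ true_map.T + 0.5 * rng.normal(size=(n_samples, n_channels))

# Fit the decoder with ridge-regularized least squares.
lam = 1.0
A = rates.T @ rates + lam * np.eye(n_channels)
decoder = np.linalg.solve(A, rates.T @ velocity)   # shape (n_channels, 2)

# At "runtime": map a window of firing rates to an arm velocity command.
new_rates = np.array([1.0, -0.5]) @ true_map.T     # activity for intent (1, -0.5)
command = new_rates @ decoder
print("decoded velocity command:", command.round(2))  # ~ [1.0, -0.5]
```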
25.02.202510:59
🏙 Toyota is Building a "City of the Future" on Its Former Factory Site
The new town, Woven City, is located near Mount Fuji on the grounds of the Toyota automobile plant that closed in 2020.
The name alludes to Toyota's origins as a manufacturer of automatic looms before it built cars. It also symbolizes the weaving together of new ideas: Woven City will become an incubator for startups and innovations.
AI will oversee buildings and infrastructure, robot companions will help residents with various tasks, and only eco-friendly self-driving vehicles will operate on the streets.
"Our residents will include: Toyota employees and their families, retired people, retailers, visiting scientists, industry partners, entrepreneurs, academics… And, of course, their pets! For example, my favorite horse named Minnie!"says Toyota Motor chairman Akio Toyoda.
His son, Daisuke Toyoda, manages this project. He notes that Woven City is an experimental test course for new mobility technologies. It includes three types of streets: pedestrian, streets where pedestrians and personal mobility coexist, and streets dedicated to automated mobility. Electric Toyota minibuses will become the main city transportation in the city as well e-VTOLs, the flying taxis from the American company Joby Aviation. The city logistics will run underground: all the buildings are connected by underground passageways, where autonomous vehicles will travel around collecting garbage and making deliveries.
The construction of Woven City cost around $10 billion. The Danish design bureau Bjarke Ingels Group was responsible for the architecture.
The city's first residents—100 Toyota employees—will move into it this year. Eventually, its population will grow to 2,000 people. The apartments will not go on sale; the city will open to tourists in 2026.
More on the topic:
🚕 In Which Countries Does Robotaxi Operate, and How Safe Is It?
🚕 Elon Musk Unveils the Cybercab Robotaxi
#news #future #cars @hiaimediaen


14.02.202510:54
💬 AI for Interview and Negotiation Training
Do you have an upcoming interview with an international corporation or a big business pitch? Simply turn on your microphone and describe your scenario to the new Tough Tongue AI to start preparing.
⚙️ How Does It Work?
The platform offers a library of 40+ ready-made call scenarios, including job interviews at Google and Amazon, admissions interviews for top universities, workplace training, and difficult conversations with management.
💡 No need to stick to pre-set scripts—just hit Start, describe your topic, and let the AI adapt in real time.
💡 You can create your own scenarios for regular practice and share them with friends or colleagues.
💡 The service can also read PDFs to assess your resume's strengths and weaknesses.
What Makes Tough Tongue AI Special?
The AI analyzes your responses in real time and asks follow-up questions. If you fail to mention your experience at the start of an interview, it will prompt you to elaborate.
Tough Tongue AI concentrates on key themes to provide a realistic call simulation. It adapts to different tones—acting assertive, tough, or even sensitive—depending on the situation you want to practice.
The service understands multiple languages but responds in English—perfect for language training.
More on the topic:
📍 AI for Creating and Sending Resumes
📍 Free AI Tools to Boost Your Studies
#news @hiaimediaen
14.02.202507:00
🎬 Video Effects by Pika AI on @GPT4Telegrambot
Hi everyone! We've launched two new video generation services from Pika AI in our bot:
🧩 Pikaddition adds anything and anyone to ANY video.
You can upload your non-AI videos and add objects, people, or fantastical elements. Mind-blowing!
💫 Pika Effects brings your images to life with various visual effects.
Upload a photo and choose any of the 16 effects. You can inflate, squish, explode, melt, and more. Pika will turn images into realistic videos.
Check out the examples above ⤴️
How to Use:
➡️ Go to @GPT4Telegrambot
1️⃣ Purchase the "Video" package in the /premium section.
2️⃣ Send the /video command to the bot and select either Pika Effects or Pikaddition. Have fun!
🔴 @GPT4Telegrambot — the #1 bot for using AI on Telegram. It can write texts and code, translate languages, solve math and physics problems, work with documents, and create images, videos, and music. 20 million users.
#PikaAI






25.02.202506:59
🖥 Anthropic Releases Claude 3.7 Sonnet
Startup Anthropic has released a new version of its AI model, Claude 3.7 Sonnet. The company skipped version 3.6 in its numbering—its place is "unofficially" occupied by the intermediate Claude 3.5 Sonnet (new).
Main Features:
➡️ Claude 3.7 Sonnet is the first "hybrid" model that can respond quickly in standard mode and reflect before answering, achieving better results in mathematics, programming, physics, and other tasks.
"Just as humans use a single brain for both quick responses and deep reflection, we believe reasoning should be an integrated capability of frontier models rather than a separate model entirely,"developers say.
➡️ The new model outperforms OpenAI's o3-mini-high and Grok 3 by roughly 1.5× on SWE-bench Verified, a benchmark for autonomous coding, while lagging behind other reasoning models on mathematical benchmarks like MATH 500 and AIME 2024. Anthropic explicitly states that they optimized the model for real-world tasks rather than competition problems.
➡️ The developers focus on the model's security, ability to resist control takeover attempts in agent mode by prompt injection, and low response bias.
➡️ Claude 3.7 Sonnet is available for free—you can try it here. However, the reasoning mode is only available with a subscription.
🖥 Additionally, Anthropic is launching a preview of Claude Code, an AI agent for programming. It can read, edit, test, and fix code and upload it to GitHub through the command line. GitHub integration is available on the free tier.
It seems that Anthropic is no longer trying to compete with OpenAI or Elon Musk's xAI in building a universal chatbot. Instead, they focus on what they do best: the strongest model for programmers. Since the summer of 2024, Claude has remained the most popular coding assistant.
🔴 Claude 3.7 is already available on @GPT4Telegrambot. The reasoning mode will be integrated soon.
More about Anthropic:
🛑 Company founder Dario Amodei on Lex Fridman's podcast
🛑 Amanda Askell: The philosopher at Anthropic teaching AI humanity
#news #Anthropic #Claude
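The "hybrid" design is exposed through Anthropic's API as an optional thinking budget. A minimal sketch with the official Python SDK; the model ID and parameter shapes follow Anthropic's published docs at the time of writing, so treat them as assumptions and verify against the current docs:

```python
# Minimal sketch: calling Claude 3.7 Sonnet with extended thinking enabled
# (pip install anthropic; reads ANTHROPIC_API_KEY from the environment).
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",   # assumed model ID; check current docs
    max_tokens=8000,                      # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4000},
    messages=[{"role": "user",
               "content": "Prove that the square root of 2 is irrational."}],
)

# The response interleaves "thinking" blocks with the final "text" blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```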


22.02.202507:29
👐 Why Teach AI Models to Think Before Responding?
Noam Brown, a leading research scientist at OpenAI, shared in his TED talk how the concept of "thinking AI models" came to life, forming the foundation for models like o1 and o3, and why this approach represents the future of AI development.
Highlights:
📈 The primary driver of progress in AI has been scaling data and computational resources during the training of new models.
📈 During his PhD, Brown developed an AI model for playing poker. However, his model (the best at the time) lost to four of the world's top players during a competition with a $120k prize pool.
📈 During training, the model played a trillion hands of poker. It made decisions in just 10 milliseconds in real games, regardless of the situation. Meanwhile, humans, who had played 100k times fewer hands in their lifetimes, took time to think before making decisions.
📈 Noticing this, Brown gave the model 20 seconds per decision instead of 10 milliseconds. This led to a performance improvement equivalent to increasing the data and training time by 100k times. "When I saw the result, I thought it was a mistake," Brown admitted.
📈 The researchers arranged a rematch with an increased prize pool of $200k. The updated model triumphed over the same players, surprising the poker community, the AI industry, and even the developers themselves. "The betting odds were about 4 to 1 against us. After the first three days of competition, the betting odds were still about 50/50. But by the eighth day, you could no longer gamble on which side would win—only on which human would lose the least," Brown recounted.
📈 Similarly, IBM's chess champion Deep Blue and Google's Go-playing model AlphaGo also didn't act instantly—they thought before each move.
📈 There are two ways to create new models: training on increasingly large datasets or scaling the model's reasoning system. OpenAI researchers chose the latter path when creating o1—the first in a series of neural networks designed to think before responding.
📈 o1 takes more time and costs more to produce answers than other models. However, according to Brown, this is justified for solving fundamental problems, such as finding a cure for cancer, proving the Riemann hypothesis, or designing more efficient solar panels.
📈 Brown emphasized that while scaling pretraining data is approaching its limits, scaling reasoning capabilities is just beginning.
"There are some people who will still say that AI is going to 'plateau' or 'hit a wall.' To them, I say: 'Want to bet?'" Brown remarked.
📱 You can watch Noam Brown's full TED talk here.
More on the topic:
🔘 Ilya Sutskever: "Data is fossil fuel for AI"
🔘 The biography of Demis Hassabis, CEO of Google DeepMind
#OpenAI #AITed @hiaimediaen
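Brown's core idea, spending more compute at inference rather than only at training, can be illustrated with the simplest test-time scaling recipe: best-of-N sampling, where you keep generating candidates until a time budget runs out and return the one a verifier scores highest. A toy sketch of that trade-off (ours, not OpenAI's method; o1's learned chain-of-thought reasoning is far more sophisticated):

```python
# Toy illustration of test-time compute scaling via best-of-N sampling:
# a larger "thinking" budget means more candidates scored, and the
# chosen answer tends to improve with the budget.
import random
import time

def propose() -> list[int]:
    """Stand-in for sampling one candidate solution (here: a random tour)."""
    tour = list(range(10))
    random.shuffle(tour)
    return tour

def score(tour: list[int]) -> float:
    """Stand-in verifier: cheap to check, hard to optimize directly."""
    return -sum(abs(a - b) for a, b in zip(tour, tour[1:]))

def solve(budget_seconds: float) -> tuple[float, int]:
    deadline = time.monotonic() + budget_seconds
    best, n = -float("inf"), 0
    while time.monotonic() < deadline:
        best = max(best, score(propose()))
        n += 1
    return best, n

for budget in (0.01, 0.1, 1.0):
    best, n = solve(budget)
    print(f"budget {budget:>5}s -> {n:>6} candidates, best score {best:.0f}")
```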


21.02.202515:00
👶 Why Do Scientists Want to Give Artificial Intelligence a Childhood?
The GPT-4 model has been trained on approximately 13 trillion tokens (around 10 trillion words)—thousands of times more than any human could read or hear in a lifetime. Meanwhile, by the age of two, a child knows around 300 words. And for this learning process, children don't need datasets with hundreds of thousands of examples, months of continuous training, or millions of dollars—it happens naturally in everyday life.
Unlocking the mystery of how children perceive and learn about the world could help solve one of AI's biggest challenges: teaching models to truly understand physical reality.
🍼 How to See the World Through the Eyes of a Child?
Last year, scientists from New York University conducted an experiment where they trained an AI algorithm to learn language in the same way a typical child does. An infant named Sam helped them. Over 1.5 years (from six months to two years old), Sam wore a helmet with a camera several times a week. The camera recorded everything Sam saw, heard, or said. From the hundreds of hours of footage collected, researchers selected 61 hours of video. The camera captured around 250,000 different "words."
Next, they tasked a "naive" neural network with no prior knowledge of the real world to analyze the audio and video recordings to find connections between words and objects.
In 62% of cases, the AI model correctly identified an object in the video based on a word prompt. For example, when given the word "cat," it identified Sam's pet cat in the footage. This performance matched an algorithm trained on 400 million pairs of images and text. In 80% of cases, the model identified learned objects in images it had never seen before.
"We've shown for the first time that a neural network trained on realistic developmental input data from one child can learn to associate words with their visual counterparts,"explains Dr. Wai Keen Wong, the study's author.
⛓️ How Can This Help Us?
The experiment demonstrates that AI models don't necessarily need massive datasets for initial training, as is currently the case with advanced algorithms.
Improving this associative approach—teaching AI not only to recognize objects but also actions (verbs) and intonations—could lead to a new type of AI algorithm capable of understanding the real world using just a camera and microphone. In the future, this could pave the way for empathetic robots based on such technology, like those seen in the animated film The Wild Robot.
More on the topic:
▶️ Why Are Scientists Teaching AI to Understand Emotions?
▶️ Amanda Askell: The Philosopher Teaching AI Humanity
#news #science @hiaimediaen
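The NYU model learned by contrastive association: pull the embedding of a heard word toward the embedding of what the camera saw at that moment, and push it away from everything else. A toy numpy sketch of that objective, with random features standing in for real encoder outputs (the actual study trained vision and text encoders on Sam's footage):

```python
# Toy sketch of the contrastive objective behind word-image association:
# co-occurring (frame, word) pairs should score higher than mismatched pairs.
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 8, 32

# Stand-ins for encoder outputs on 8 co-occurring (video frame, word) pairs.
frame_emb = rng.normal(size=(batch, dim))
word_emb = frame_emb + 0.1 * rng.normal(size=(batch, dim))  # paired = similar

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

f, w = l2_normalize(frame_emb), l2_normalize(word_emb)
logits = f @ w.T / 0.07                  # cosine similarities / temperature

# InfoNCE: the i-th frame's "correct class" is the i-th word.
log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_softmax))
accuracy = np.mean(logits.argmax(axis=1) == np.arange(batch))
print(f"contrastive loss {loss:.3f}, retrieval accuracy {accuracy:.0%}")
```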


21.02.202510:59
🪓 AI True Crime: Why Do Millions Believe AI-Generated Crime Videos?
In recent years, the true crime genre—content that tells real-life crime stories—has skyrocketed in popularity. Researchers attribute this rise to various factors, from subconscious preparation for dangerous situations to a deep interest in criminal psychology and motives.
A YouTuber known as Paul decided to take an unconventional approach by co-writing his content with AI. Initially, he doubted that people would watch true crime videos if they knew the stories were entirely fake. However, his videos quickly went viral. Thanks to ad revenue, Paul could make content creation his full-time job.
🎞 How Did Paul Create His "True Crime" Stories?
Paul devised the core plot on his own before using ChatGPT and AI visual tools to bring his story to life. He typically created one or two weekly videos, spending around 2.5 hours on each.
In an interview with 404 Media, Paul described himself as a director and his work as an "absurdist form of art." To hint at the stories' artificial nature, he gave characters unusual names and inserted weird details. However, his attempt to make viewers question the authenticity of his videos failed since commenters took the stories seriously. Some even reached out to The Denver Post to ask why a particular "murder" hadn't been covered in the news.
⛔️ YouTube's Response
When a journalist from 404 Media asked YouTube for a comment, the platform responded by deleting Paul's channel for "multiple violations of community guidelines," including its child safety policy.
However, many AI-generated true crime channels still exist on YouTube. One example is Hidden Family Crime Stories, which has 16,000 subscribers, and its most popular video has 1.2 million views. Based on the comments, viewers appear to be more concerned with the moral and ethical implications of the stories themselves than with the use of AI to create them.
Would you watch AI-generated "true crime"?
👍 — yes, if the story is interesting
🎃 — no, what's the point if it's fake?
🙈 — I don't watch true crime at all!
#news @hiaimediaen