Researchers say popular mental health chatbots can reinforce harmful stereotypes and respond inappropriately to users in distress.
Generative artificial intelligence tools like ChatGPT, Gemini, and Grok have exploded in popularity as AI becomes mainstream. These tools don’t have the ability to make new scientific discoveries on their own.
A recent Stanford University study warns that therapy chatbots could pose a substantial safety risk to users suffering from mental health issues.
Happy Tuesday! Imagine trying to find an entire jury full of people without strong feelings about Elon Musk. Send news tips and excuses for getting out of jury duty to: [email protected]
Therapy chatbots powered by large language models may stigmatize users with mental health conditions and otherwise respond inappropriately or even dangerously, according to researchers at Stanford University.
Built using huge amounts of computing power at a Tennessee data center, Grok is Musk's attempt to outdo rivals such as OpenAI's ChatGPT and Google's Gemini in building an AI assistant that shows its reasoning before answering a question.
Chatbots may give students quick answers when they have questions, but they won’t help students form relationships that matter for college and life success.
People are leaning on AI tools to figure out what is real on topics such as funding cuts and misinformation about cloud seeding. At times, chatbots will give contradictory responses.
When Kayla's* partner of eight months sent her a "happy birthday" text, it didn't take long for her to figure out that he had used AI to craft the message.