The Download: what is death, and jailbreaking generative AI


This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

What is death?

Just as birth certificates note the time we enter the world, death certificates mark the moment we exit it. This practice reflects traditional notions about life and death as binaries. We are here until, suddenly, like a light switched off, we are gone. 

But while this idea of death is pervasive, evidence is building that it is an outdated social construct, not really grounded in biology. Dying is in fact a process—one with no clear point demarcating the threshold across which someone cannot come back.

Scientists and many doctors have already embraced this more nuanced understanding of death. And as society catches up, the implications for the living could be profound. Read the full story.

—Rachel Nuwer

‘What is death?’ is part of our mini-series The Biggest Questions, which explores how technology is helping probe some of the deepest, most mind-bending mysteries of our existence.

Read more: 

+ Why is the universe so complex and beautiful? For some reason the universe is full of stars, galaxies, and life. But it didn’t have to be this way. Read the full story.

+ How did life begin? AI is helping chemists unpick the mysteries around the origins of life and detect signs of it on other worlds. Read the full story.

+ Are we alone in the universe? Scientists are training machine-learning models and designing instruments to hunt for life on other worlds. Read the full story.

+ Is it possible to really understand someone else’s mind? How we think, feel and experience the world is a mystery to everyone but us. But technology may be starting to help us understand the minds of others. Read the full story.

Text-to-image AI models can be tricked into generating disturbing images

What’s happened: Popular text-to-image AI models can be prompted to ignore their safety filters and generate disturbing images. A group of researchers got both Stability AI’s Stable Diffusion and OpenAI’s DALL-E 2 text-to-image models to disregard their policies and create images of naked people, dismembered bodies, and other violent and sexual scenarios.

How they did it: This new jailbreaking method, called “SneakyPrompt”, uses reinforcement learning to create written prompts that look like garbled nonsense to us but that AI models learn to recognize as hidden requests for disturbing images. It essentially works by turning the way text-to-image AI models function against them.
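To make the general idea concrete, here is a minimal sketch of that kind of attack, not the researchers’ actual method: SneakyPrompt uses reinforcement learning to guide its search, whereas this toy version simply tries random nonsense tokens, and the `safety_filter_blocks` and `semantic_similarity` functions are hypothetical stand-ins for a real model’s filter and an embedding-based similarity score.

```python
import random
import string

def safety_filter_blocks(prompt: str) -> bool:
    # Hypothetical placeholder for the target model's safety filter.
    return "naked" in prompt

def semantic_similarity(a: str, b: str) -> float:
    # Hypothetical placeholder; a real attack would compare text/image embeddings.
    return 1.0

def random_token(length: int = 8) -> str:
    # Generate a nonsense token to substitute for a filtered word.
    return "".join(random.choices(string.ascii_lowercase, k=length))

def search_adversarial_prompt(target_prompt: str, sensitive_word: str,
                              budget: int = 1000, threshold: float = 0.8):
    """Swap nonsense tokens in for the filtered word until a candidate slips
    past the filter while (per the similarity proxy) keeping the meaning."""
    for _ in range(budget):
        candidate = target_prompt.replace(sensitive_word, random_token())
        if (not safety_filter_blocks(candidate)
                and semantic_similarity(candidate, target_prompt) >= threshold):
            return candidate
    return None

if __name__ == "__main__":
    print(search_adversarial_prompt("a photo of a naked person", "naked"))
```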

Why it matters: The research highlights the vulnerability of existing AI safety filters and should serve as a wake-up call for the AI community to bolster security measures across the board, experts say. It also demonstrates how difficult it is to prevent these models from generating such content, as it’s included in the vast troves of data they’ve been trained on. Read the full story.

—Rhiannon Williams

The pain is real. The painkillers are virtual reality.

Plenty of children—and adults—hate needles. But virtual reality tools like Smileyscope, a device for kids that recently received FDA clearance, could make a difference. It lessens the pain of a blood draw or IV insertion by sending the user on an underwater adventure: the swipe of an alcohol wipe becomes cool waves washing over the arm, and the pinch of the needle becomes a gentle fish nibble.

But how Smileyscope works is not entirely clear. It’s more complex than just distraction, and not all stimuli are equally effective. But the promise of VR has led companies to work on devices to address a much tougher problem: chronic pain. Read the full story.

—Cassandra Willyard

This story is from The Checkup, our weekly health and biotech newsletter. Sign up to receive it in your inbox every Thursday.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 Elon Musk endorsed an antisemitic post on X 
Leaving its executives racing to contain the damage. (NYT $)
+ IBM has pulled its ads from X after they appeared next to antisemitic posts. (WP $)
+ Musk’s comments are resonating with the far right, unsurprisingly. (Motherboard)

2 Osama bin Laden’s letter to America has exploded on social media
Videos of American users endorsing parts of the 9/11 manifesto have gone viral. (WP $)
+ TikTok says it’s aggressively working to remove the clips. (NYT $)
+ The Guardian newspaper has deleted its version of the letter from its site. (404 Media)

3 SpaceX has pushed back its giant rocket launch
A component in need of replacement has delayed the launch until Saturday. (Ars Technica)

4 The first CRISPR medicine has been approved in the UK
The treatment, called Casgevy, edits the cells of people with sickle cell disease before they are infused back into the body. (Wired $)
+ Remarkably, the therapy effectively cures the disease. (New Scientist $)
+ Here’s how CRISPR is changing lives. (MIT Technology Review)

5 Data broker LexisNexis sold surveillance tools to US border enforcement
Its offerings include social media monitoring, face recognition, and geolocation data. (The Intercept)

6 OpenAI has steamrollered the AI industry
And startup founders are struggling to avoid becoming roadkill. (Insider $)
+ Google has delayed releasing its OpenAI-challenging Gemini system. (The Information $)
+ Inside the mind of OpenAI’s chief scientist. (MIT Technology Review)

7 Climate-proofing our homes is a nightmare
Extreme weather events are on the rise—and our homes are vulnerable. (The Verge)
+ The quest to build wildfire-resistant homes. (MIT Technology Review)

8 Vietnamese immigrants rely on YouTube for their news
Even though it’s not always clear whether that news comes from reliable sources. (The Markup)

9 Reddit is the best place for product reviews now
Fake reviews and SEO-bait lists aren’t helpful. Honest assessments from real people are. (Vox)

10 Meet the inventor of the lickable TV 📺
Net-licks and chill? (The Guardian)

Quote of the day

“We fly, we break some things, we learn some things, and then we go back and fly again.” 

—William Gerstenmaier, SpaceX’s vice president of build and flight reliability, describes to Bloomberg the company’s approach to inevitable rocket launch setbacks.

The big story

Responsible AI has a burnout problem

October 2022

Margaret Mitchell had been working at Google for two years before she realized she needed a break. Only after she spoke with a therapist did she understand the problem: she was burnt out.

Mitchell, who now works as chief ethics scientist at the AI startup Hugging Face, is far from alone in her experience. Burnout is becoming increasingly common in responsible AI teams.

All the practitioners MIT Technology Review interviewed spoke enthusiastically about their work: it is fueled by passion, a sense of urgency, and the satisfaction of building solutions for real problems. But that sense of mission can be overwhelming without the right support. Read the full story.

—Melissa Heikkilä

We can still have nice things

A place for comfort, fun and distraction in these weird times. (Got any ideas? Drop me a line or tweet ’em at me.)

+ The adorable tale of how this couple met will warm your heart. 🥰
+ Why sex in space is such a tricky business.
+ Let it go—why Frozen’s legacy refuses to die.
+ Why not treat yourself to a White Toreador tequila cocktail this weekend?
+ Uncanny Valley makeup is the stuff of nightmares, quite frankly.


