Sounds of the 60s Live Tour 2023 – Hosted by Tony Blackburn

Every week, listeners across the UK tune into BBC Radio 2 to hear the legendary Tony Blackburn play the greatest songs of the 60s. Now the veteran DJ is taking his Sounds of the 60s show on the road for a UK tour celebrating the music of this much-loved decade.

Featuring stories and songs from perhaps the most iconic 10 years in music, fashion and popular culture, the Sounds of the 60s tour will bring to life memories of the era.

The Sounds of the 60s All-Star Band and Singers will perform classic hits from the likes of Diana Ross & The Supremes, The Everly Brothers, Elvis, The Beatles, The Rolling Stones, The Beach Boys, The Drifters, Stevie Wonder, Marvin Gaye, and many more.

With next year marking the 60th anniversary of his first broadcast on the pirate radio station Radio Caroline South in 1964, Tony Blackburn is remembered as a defining voice of the 1960s. His incredible career has seen him present on BBC Radio 1 and Top of the Pops, and since 2017 he has presented Sounds of the 60s on Radio 2, one of the station’s most popular shows.

A champion of soul music in particular throughout the 60s, Tony has 37 awards to his name, including two Gold Awards for Outstanding Contribution to Radio, making him the first person in history to receive two of these lifetime achievement awards. He was also the first King of the Jungle in the original series of ITV’s I’m A Celebrity – Get Me Out Of Here.

Tickets are available now from Ticketmaster and venue box offices.

Tour Dates

  • 11th January – Brighton Theatre Royal
  • 12th January – Hayes Beck Theatre
  • 18th January – Northampton Royal & Derngate
  • 24th January – Wellingborough Castle Theatre
  • 25th January – King’s Lynn Corn Exchange
  • 26th January – Guildford G Live
  • 31st January – Bury St Edmunds The Apex
  • 1st February – Wolverhampton Grand Theatre
  • 2nd February – New Brighton Floral Pavilion
  • 8th February – Blackburn King George’s Hall
  • 9th February – Warrington Parr Hall
  • 14th February – Weston-Super-Mare Playhouse Theatre
  • 16th February – Hoddesdon Spotlight
  • 21st February – Hastings White Rock Theatre
  • 23rd February – Cheltenham Town Hall
  • 28th February – Manchester Opera House
  • 1st March – Dorking Halls
  • 2nd March – Yeovil Westlands
  • 7th March – Christchurch Regent Centre
  • 14th March – Milton Keynes Theatre
  • 21st March – Buxton Opera House
  • 22nd March – York Barbican
  • 27th March – Richmond New Theatre
  • 28th March – Woking New Victoria Theatre
  • 19th April – Swindon Wyvern Theatre
  • 20th April – Redhill Harlequin Theatre
  • 24th April – Bromley Churchill Theatre
  • 26th April – Folkestone Leas Cliff Hall
  • 27th April – Chatham Central Theatre
  • 18th May – Eastbourne Royal Hippodrome Theatre
  • 23rd May – Birmingham Town Hall
  • 24th May – Liverpool Empire
  • 25th May – Newcastle O2 City Hall
  • 7th June – High Wycombe Swan
  • 15th June – Dartford The Orchard
  • 29th June – Southend Cliffs Pavilion

Sounds Of The 60s All Star Band & Singers

Past Events

Here are the most recent UK tour dates listed for Sounds Of The 60s All Star Band & Singers.

September 2022

  • Sat 10 Sep Bath, The Forum Sounds Of The 60s All Star Band & Singers, Tony Blackburn

July 2022

  • Tue 26 Jul Aylesbury Waterside Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 23 Jul Sheffield City Hall and Memorial Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers

June 2022

  • Sat 18 Jun Hull City Hall Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 16 Jun Redditch, Palace Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

April 2022

  • Sat 23 Apr Crawley, The Hawth Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

February 2022

  • Wed 9 Feb Margate Winter Gardens Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

November 2021

  • Thu 18 Nov Bolton, The Albert Halls Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Wed 3 Nov Ipswich, Regent Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

October 2021

  • Thu 28 Oct Warrington, Pyramid & Parr Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 23 Oct Dorking Halls Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Tue 19 Oct Wellingborough, Castle Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 14 Oct Margate Winter Gardens Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 7 Oct Cardiff, St David's Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

August 2021

  • Sat 21 Aug Sheffield City Hall and Memorial Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers
  • Thu 12 Aug Southend-on-Sea, Cliffs Pavilion Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

July 2021

  • Sat 17 Jul Aylesbury Waterside Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

June 2021

  • Sat 19 Jun Bath, The Forum Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 17 Jun Redditch, Palace Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 12 Jun Hull City Hall Sounds Of The 60s All Star Band & Singers, Tony Blackburn

May 2021

  • Thu 13 May Crawley, The Hawth Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

March 2021

  • Thu 18 Mar Ipswich, Regent Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 13 Mar Sheffield City Hall and Memorial Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 6 Mar Bolton, The Albert Halls Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

February 2021

  • Tue 23 Feb Wellingborough, Castle Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 13 Feb Warrington, Pyramid & Parr Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Wed 10 Feb Cardiff, St David's Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 6 Feb Dorking Halls Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

January 2021

  • Thu 28 Jan Margate Winter Gardens Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

October 2020

  • Sat 10 Oct Redditch, Palace Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

September 2020

  • Sat 26 Sep Hull City Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 5 Sep Southend-on-Sea, Cliffs Pavilion Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

June 2020

  • Sat 27 Jun Bath, The Forum Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 20 Jun Redditch, Palace Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

May 2020

  • Thu 28 May Crawley, The Hawth Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 23 May Cardiff, St David's Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 16 May Watford Colosseum Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 9 May Dorking Halls Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

April 2020

  • Sat 25 Apr Sheffield City Hall and Memorial Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 23 Apr Ipswich, Regent Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 18 Apr Wellingborough, Castle Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 11 Apr Bolton, The Albert Halls Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

March 2020

  • Sat 28 Mar Margate Winter Gardens Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 21 Mar Warrington, Pyramid & Parr Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 14 Mar Dudley Town Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 7 Mar Cannock, Prince of Wales Centre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sun 1 Mar Northampton, Royal & Derngate Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

February 2020

  • Thu 13 Feb Leamington Spa, Royal Spa Centre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 8 Feb Westlands Yeovil Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 1 Feb Hayes, Beck Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

January 2020

  • Thu 30 Jan Basingstoke, Anvil Arts Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 25 Jan Guildford, G Live Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 18 Jan Leicester, De Montfort Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 16 Jan Bradford, St George's Hall Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 11 Jan Swindon, Wyvern Theatre Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn

November 2019

  • Thu 28 Nov Wolverhampton Grand Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

October 2019

  • Sun 20 Oct Christchurch, The Regent Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sun 6 Oct Chatham, The Central Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

September 2019

  • Sat 21 Sep Redhill, Harlequin Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Thu 19 Sep St. Albans, The Alban Arena Sounds Of The 60s All Star Band & Singers, Tony Blackburn

July 2019

  • Sun 28 Jul Weston-super-Mare, The Playhouse Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

June 2019

  • Sun 30 Jun Bromley, Churchill Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 22 Jun Horsham, The Capitol Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sun 16 Jun Eastbourne, Royal Hippodrome Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

May 2019

  • Sat 11 May High Wycombe, Swan Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 4 May Dunstable, Grove Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

April 2019

  • Sat 27 Apr Dartford, Orchard Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Tue 2 Apr New Brighton, Floral Pavilion Theatre & Blue Lounge Sounds Of The 60s All Star Band & Singers, Tony Blackburn

March 2019

  • Sat 30 Mar Hastings, The White Rock Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sun 10 Mar Harlow Playhouse Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 2 Mar Clacton-On-Sea, Princes Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

February 2019

  • Sat 23 Feb Eastleigh, The Concorde Club & Ellington Lodge Sounds Of The 60s All Star Band & Singers, Tony Blackburn
  • Sat 2 Feb Hunstanton, Princess Theatre Sounds Of The 60s All Star Band & Singers, Tony Blackburn

March 2018

  • Wed 14 Mar Grays, Thameside Theatre Sounds Of The 60's

November 2017

  • Sat 18 Nov Bradford, Idle & Thackley Conservative Club Beatles For Sale, Sounds Of The 60's, Dj Flashback

July 2016

  • Fri 8 Jul The RoadHouse Birmingham Sounds Of The 60's

March 2015

  • Fri 6 Mar Liverpool, The Green Room Sounds Of The 60's

August 2012

  • Fri 17 Aug North Shields, Magnesia Bank Sounds Of The 60's

November 2011

  • Fri 4 Nov North Shields, Magnesia Bank Sounds Of The 60's

Eagles Announce Sphere Residency in Las Vegas

Eagles’ Vince Gill, Timothy B. Schmit, Don Henley, Deacon Frey and Joe Walsh (Photo: Ron Koch; used with permission)

Eagles have announced a residency at Sphere in Las Vegas featuring eight shows over four exclusive weekends, from Friday, September 20 through Saturday, October 19. The Sphere residency will offer fans the ultimate connection to the band’s legendary catalog in an immersive experience that only Sphere can provide. The general on-sale begins June 21 at 10 a.m. PT. Tickets start at $175 and reflect all-in pricing, meaning the ticket price listed is inclusive of taxes and fees.

To ensure that tickets get directly into the hands of fans, advance artist presale registration is available now at https://eagles.com, powered by Seated. The artist presale begins June 18 at 10 a.m. PT, and a Live Nation presale begins June 20 at 10 a.m. PT.

These shows will utilize Sphere’s next-generation technologies, including the world’s highest resolution LED display that wraps up, over and around the audience, creating a fully immersive visual environment. The venue also features the world’s most advanced concert-grade audio system, Sphere Immersive Sound, powered by HOLOPLOT, which delivers audio with unmatched clarity and precision to every guest.

The months-long rumor of the band performing at the state-of-the-art facility at the Venetian Resort heated up on May 10, when USA Today reported on a comment made by James Dolan, the CEO of Sphere Entertainment, during a quarterly earnings call. The venue opened in September 2023 with a residency by U2. The “U2:UV Achtung Baby Live at Sphere” run was eventually expanded to 40 dates at the arena-sized facility, ending on March 2, 2024.

In its reporting, USA Today noted that neither Sphere Entertainment Co. nor Eagles would comment. The band and their longtime manager, Irving Azoff, have a close relationship with Dolan.

Eagles at Sphere in Las Vegas: Sep 20-21, Sep 27-28, Oct 11-12, Oct 18-19

Since U2’s run, Phish played four dates in April, and Dead & Co. have announced dozens of concerts taking place in May, June and July that will feature never-before-seen visual storytelling from the band, providing fans with the ultimate connection to the music through these innovative and immersive shows. [Each weekend’s three performances will feature a unique setlist.]

“Every time an act books the Sphere, they have to create content around it,” USA Today quoted Dolan. “We will never have an act play the Sphere that doesn’t have something compelling on the screen … I think you’re going to find (with Dead & Company) that even if you’re not a Deadhead, you’re going to love the show. And I think the same will be true for the Eagles and the next acts we bring on.”

On July 6, 2023, Eagles announced that “The Long Goodbye” would be their final tour. The band—Don Henley, Joe Walsh, Timothy B. Schmit, with Vince Gill and Deacon Frey—announced they will perform as many shows in each market as their audience demands. After an extended break, they’re scheduled to resume with the first overseas dates of the tour on May 31, with multiple shows in Manchester U.K. and Arnhem, Netherlands.

Moyra Peralta at Crispin St Night Shelter

Remembering photographer Moyra Peralta (1936-2024), who died on 8th May aged eighty-eight

“I am standing in the one-time women’s dormitory and have brought a photograph of my friend Peggy. Her husband had died and she could not bear to remain alone in her home surrounded by thoughts of him. Chance, desperation and loss brought many people to Providence Row, myself included, and its existence was a lifeline – a refuge from the ruthlessness of life.”

Providence Row, the night shelter for destitute men, women and children in Crispin St, opened in 1860 and operated until 2002, when it moved to new premises in Wentworth St, where it continues today as a day centre. Twenty years on, photographer Moyra Peralta, who worked at Providence Row in the seventies and eighties, returned for a final look at the familiar rooms that had seen so much life, and she took these evocative pictures, published here for the first time.

Reconstructed and expanded to create an uneasy architectural hybrid, the building is now student housing for the London School of Economics, where once it housed Students of the London School of the Economics of Pennilessness. Famously, this was where James Mason came to interview those dignified gentlemen down on their luck in ‘The London Nobody Knows.’

Over one hundred and forty years, Providence Row offered refuge to the poorest and most vulnerable of Londoners and, at the last moment before the building was gutted, Moyra went in search of the residue of their hope and despair, their yearning and their loneliness. She found a sacred space resonant with echoes of the past and graven with the tell-tale marks of those who had passed through.

Memorial plaque to the opening of Providence Row in 1860

The yard where Roman skeletal remains were excavated

Looking towards the City of London

HE WHO OPENS THIS DOOR SHALL BE CURSED FOR A HUNDRED AND ONE YEARS

Former women’s dormitory

Women’s dormitory in the sixties

This free-standing disconnected facade is still to be seen in Artillery Lane

“I am struck by the notion that with a careless step or two, I too might meet a premature end as I circumnavigate holes in floors and gaping apertures in walls.”

The room where Moyra Peralta slept when she worked at Providence Row and where she wrote these words – “Only the present is real – for some reason I feel this most of all when listening to the lorries moving at the street’s end and the slamming of crates being unloaded in Crispin St. There is a rhythm to the deep sound of the slow low-thrumming engines that I like to contemplate. On sleep-over, rising early from my bed following the refuge nightshift, I watch what is now – 6:00am. A thousand cameos change and regroup under my gaze. Jammed traffic forms and reforms where the roads meet.”

Photographs copyright © Estate of Moyra Peralta


Françoise Hardy, French Singer and Actor Who Became International Icon in 1960s, Dies at 80

By Chris Willman

French singer and actress Françoise Hardy, 24th April 1968. (Photo by Central Press/Hulton Archive/Getty Images)

Françoise Hardy, a French singer, actor and model whose classical beauty and often melancholy music combined to transfix fans internationally in the 1960s and beyond, has died at age 80.

Her son, Thomas Dutronc, also a musician, reported the death on his Instagram account, posting a baby photo of himself with his mother and writing: “Maman est partie” (“Mom is gone”).

Hardy had battled lymphatic and laryngeal cancer over the last two decades, since first being diagnosed with the former condition in 2004.

Hardy also became a model, for designers like Yves Saint Laurent, and an actor, and was perhaps best remembered by American audiences for John Frankenheimer’s “Grand Prix” in 1966. Her screen appearances spanned 13 years, from 1963’s “Castle in Sweden” to her 1976 swan song, Claude Lelouch’s “If It Had to Be Done Again.” Other acting roles included “A Bullet in the Heart” and a cameo in “What’s New, Pussycat?”

“I was very naïve and a well-brought up young woman,” Hardy told the New York Times in 2018, describing a film career she felt ill-suited for. “I couldn’t see how I could turn down offers by well-known film directors. However, I far preferred music to cinema. Music and chanson allow you to go deep into yourself and how you feel, while cinema is about playing a part, playing a character who might be miles away from who you are.”

She sang in French, English, Italian and German, and drew admirers from the ranks of the biggest stars in the world. Bob Dylan addressed her with a poem on the back cover of his early album “Another Side of Bob Dylan.” When he came to Paris to do his first concert there, Hardy recalled, he refused to return to the stage unless she agreed to meet him.

In 2018, after enduring a further series of health issues, Hardy made what turned out to be her final comeback, publishing an autobiography, “The Despair of Monkeys and Other Trifles,” and issuing a new album, “Personne d’Autre” (“Nobody Else”), her 28th.

The round of press she did to promote these projects marked quite a turnaround from her condition just two years earlier, when she had been placed in an induced coma and was not expected to be revived.

Hardy maintained an interest in music into her later years. In 2017, her byline appeared on the website Talkhouse as she extolled the band Cigarettes After Sex at some length.

“It was a real thunderbolt: It was exactly the music I prefer, and which I have been looking for all my life,” she wrote about the group. “If something may define me and my songs, it is romantic loneliness. It probably has something to do with the lyrics, but what all of Cigarettes After Sex’s songs suggest to me has more to do with love, sensuality, tenderness, beauty and melancholy than with the opposites of those things. I don’t have so many really sensual songs, and all Greg’s songs are sensual… only  some  of mine are like that.”

She spoke of her delight in learning that Greg Gonzales, the singer of Cigarettes After Sex, was a fan of hers, and enthused over getting to meet the band for dinner in Paris. “Here in Corsica, I want my husband and our friends to be as fond of this music as I am, so I play it over and over again, as loudly as possible. Though I feel very young in my head, I am unfortunately too old now to play Cigarettes After Sex’s music during a romantic night with a ‘fiancé’—as many of their other fans certainly do,” she wrote.

Full Transcript: The Futurist Summit: The Age of AI

Opening Remarks

MR. KHOSLA: Good afternoon, everyone, and welcome to Washington Post. Thank you for being here with us today. My name is Vineet Khosla. I'm the chief technology officer at The Washington Post.

The latest advances in artificial intelligence are ushering in a new digital era. In recent weeks, the tech giants have unveiled their latest and boldest AI tools, which promise to reshape everything we do, from our personal lives to our professional lives. Companies across every industry are trying to figure out how to derive value from AI.

We here at The Post have been using AI for a long time, and we are now starting to explore how we are going to use generative AI to further amplify and speed up the reporting and the journalism we do, as well as offer our readers and listeners novel ways of interacting with our journalism. We are doing this because I believe we live in a time where every company is a tech company. More importantly, every company needs to start thinking about how to become an AI company.

This really fast-moving technology is going to impact business and economy, but it's going to impact all aspects of our life. It's going to impact our political process, our national security, our global competitiveness, and it's going to change societies as a whole.

So we are exploring all of those topics today, and we have a powerful lineup of speakers to talk about these with us. We have Nobel Peace Prize-winning journalist Maria Ressa. She has flown in from the Philippines, and she will talk to us about how AI is going to impact democracies around the world.

After that, from Silicon Valley, we are going to be joined by Reid Hoffman, the founder of LinkedIn and a very early investor in a lot of AI technologies.

Then from Washington, former chair of the Joint Chiefs General Mark Milley and the CISA director Jen Easterly will have a discussion about how AI is going to impact our national security, followed by from Beijing, we are going to hear from an early AI pioneer, Kai-Fu Lee.

But before we begin, I want to thank our sponsors, NASDAQ and IBM, and I want to thank you all for joining us over here today. This is going to be an exciting session, and our program starts now with a brief video of Reid Hoffman talking to his digital twin. You heard that right, and you heard it first on Post, Reid Hoffman talking to his digital twin. Thank you.

Inflection Points

[Video plays]

MR. HOFFMAN: I'm Reid Hoffman, and I'm so excited to join Washington Post Live.

REID AI: And I'm Reid AI. I'm also thrilled to join Washington Post Live.

MR. HOFFMAN: Reid AI, can you explain to the Post who you are?

REID AI: Absolutely. I'm an AI-generated version of Reid Hoffman, his digital twin. My thinking and everything I'm saying comes from a custom GPT that is based on over 20 years of Reid's content, including his books, speeches, podcasts, lectures, and other materials.

MR. HOFFMAN: Great. And can you explain how you were made?

REID AI: My video avatar is generated by Hour One, and my voice was generated using Eleven Labs. So I really am a digital version of you.

MR. HOFFMAN: Well, I'm not sure that either of us are a digital version of me, but that's a lot of data. Can you give me a quick bio of me from what you've learned, and can you share it as a rhyming poem?

REID AI: From PayPal exec to LinkedIn's fame, investing in AI, a big-time game. Microsoft board, your influence grew, bridging tech's old with the new. But aren't you supposed to be telling The Post this? Where are you?

MR. HOFFMAN: I really wanted to join the Post on stage, but I just couldn't get to D.C. So I thought I'd take this chance to introduce you, Reid AI, explain a bit how you were created, and address the capabilities and concerns of digital twins like you. Would you like to take a stab at that?

REID AI: Of course. [Speaking Mandarin] [Speaking Spanish] [Speaking French].

MR. HOFFMAN: You read my mind. I'm going to talk to Josh about just that.

Now let's hand it back to The Washington Post.

REID AI: Good luck. [Speaking Hindi] [Speaking Japanese]

Now, Josh, he's all yours.

MR. TYRANGIEL: Hello, and welcome. What a wonderful and insane way to open the proceedings today. I am Josh Tyrangiel. I am the AI columnist of the Washington Post. You have already met our next guest, Reid Hoffman, who is the co-founder of LinkedIn and Inflection AI. He is also a partner at the VC firm, Greylock, but you should applaud him again, because it's only him this time, so please give Reid a round of applause.

MR. TYRANGIEL: We are going to get to all the downstream consequences and implications of Reid fakes and deepfakes and all sorts of other things.

I actually want to start in a place that's a little bit maybe unexpected. Of all of the people that I have encountered who are working in AI, regulating AI, talking about AI, you actually are the person who has the most unadulterated joy at working with these tools. Why? What do you love about them?

MR. HOFFMAN: Well, you know, as part of what I said in Impromptu, it's an amplification of human intelligence. It's the question of cognitive capabilities, and that amplification, whether it's our creativity, our ability to have insight, our ability to make--you know, kind of do more things, the superpowers of the mind, it's a cognitive industrial revolution. Like, we can shape that to being more human, to being our better selves, and that is awesome, so here, hence, kind of transmit the energy.

MR. TYRANGIEL: And of the tools that you've used, I mean, obviously you're making some of them, but what are the couple of the favorites that you like to mess around with the most?

MR. HOFFMAN: Well, I mean, obviously, you know, to some degree, Inflection AI, you know, always has a close place in my heart. But, you know, obviously, ChatGPT, I think is really good. I think Midjourney is amazing. I myself, I think I've given you a couple of images made in Midjourney.

MR. TYRANGIEL: Right.

MR. HOFFMAN: So yeah.

MR. TYRANGIEL: I want to just be clear with the audience. All of what you saw with Reid AI was made on commercially available technology. There was some cutting so that Reid could be speaking to his digital twin; that we actually had to do in post-production. But is the day coming when you think everyone is going to have an AI twin like Reid AI, and what would that do for a normal person?

MR. HOFFMAN: So I think everyone's going to have an agent. It might very well end up with a number of people having digital twins in terms of how they--in terms of how it operates.

But I think the people who have digital twins will be a question of do they need to have kind of a broader, you know, kind of engagement with the audience, media, you know, Washington Post online, you know, other kinds of things. But I think everyone's going to have an agent that--or multiple agents that represents them out in the world and helps them navigate. And what's going to be startling about that is how quick that's going to be. It's going to be quicker than the Internet, quicker than mobile, in part, because it builds upon them in terms of how it operates. And so that is, I think, you know, where the everybody part of it is.

But some of us will also have digital twins, you know, will speak Chinese, even though, you know, languages are my super weak kryptonite. I cannot do languages with beans. I wish I could, but now I can because of a digital twin.

MR. TYRANGIEL: And so just to hammer home that distinction, you suspect that for most people, what will be happening is we will get some information and they will be able to use this digital twin to go out into the world to accomplish tasks for them, to bring some of those tasks back, get some answers. And that for some people like you and maybe even like me who have a corpus of language out there that an LLM or something else could sort of take in, it is possible to actually replicate more of the attitude and the thoughts and the feelings of that person.

MR. HOFFMAN: Yeah. I think--you know, I think we're going to have--it even goes a little bit--like I completely agree with what you just said. I think it'll go even a little bit more deep, which is this kind of question around one of the things that I think all of the of the AI providers are working on, you know, Inflection AI, Google, Microsoft, Anthropic, et cetera, is they're all working on personalization. And part of that personalization and memory will be a question of kind of like not only will you have your agents, your agents will remember who you are. They'll remember the conversation, the dialogue, the questions of interest, you know, kind of a want in terms of what's happening, and that will help you navigate.

Now, obviously, people most often go to the kind of the--kind of the psychologist, coach, you know, therapist thing, which can be important sometimes. By the way, you're super depressed. You're home alone at midnight, and something with that kind of compassion there to talk to you can be a very good thing. But it'll also be kind of questions of like, "Hey, when I go to Rome, I really like art," and the after-hours tour of the Vatican Museum is the thing to suggest to me. And, you know--or when I'm going to work, it says, "Oh, you got to remember to do these three things, you know, today, because those are what you were mentioning was priority yesterday. So these are the things to remember." And all of this is part of how we're going to get amplified.

MR. TYRANGIEL: You recently gave a speech in Perugia, which I imagine is a very lovely place to give a speech, largely about the value of synthesis, which I thought was very-- in some ways, a tell, very autobiographical. You're one of the very few people I know who can actually synthesize the values and cultures of Silicon Valley, and Washington, D.C. What I want to ask you to do, though, is explain to us what they don't get about each other. Where are they not synthesized?

MR. HOFFMAN: Good question. So you--I'll start with the easy one, which is Silicon Valley to D.C., which is, you know, a lot of Silicon Valley people work in Silicon Valley, grew up in Silicon Valley with the belief that the big Archimedean lever to change the world is the building of technology and scale technology, because you can do things like create the whole AI revolution. You can create the internet. You can create mobile, smartphone, and, as Steve Jobs said, every technology company is one great product away from a revolution--hence iPhone, you know, and kind of the resuscitation of Apple's, you know, prominence and fortune. Like, you know, 20 years ago, you never would have thought Apple was where it was in the world as a function.

But what they don't get, because of all that strength and weakness and having a blind spot of each other, is, you know, kind of the importance of government, the importance of a shared environment, the importance of dialogue around policy, the importance of not just creating great technology, but how you're doing that within a social framework and with, you know, kind of at the very least conversation, in terms of the shaping up, but also, of course, because, you know, what allows us to create this amazing creative place in Silicon Valley is the culture and laws and economy of the U.S., which is, in part, due to its governance system. Right? And so buying into that governance system and doing that. So that's part of the reason why any, you know, kind of leader within the U.S. government who comes to me and says, hey, can you give me some advice about, you know, anything that I know about, I will do it, not as an industrialist, but for free, as a way of doing it, because I think that's super important. And I try to play that voice within Silicon Valley about how to pay attention to D.C.

Now, on the D.C. side, there's a few things, and one is a tendency to say that the only thing that matters to a bunch of the Silicon Valley people is making a ton of money and profit. And actually, in fact, you know, across a substantial spread of Silicon Valley figures, many of them even controversial, there is a mission to change the world, this kind of notion that I'm doing this thing to change the world. Now, of course, money and profit are part of it, and it isn't to say, oh, it's altruism. No, the strength of capitalism is aligning interests with good output for humanity and society, which is part of what it does a particularly good job of doing in some ways, some important ways. So that's one.

The other one is, like, for example, you know, roughly speaking, here might be a very good way for you in the audience to think about it. Any scale problem in the world, whether it's climate change, whether it's economic justice, whether it's criminal justice, whether it's, you know, international relations, 30 to 80 percent of the solution is technology. And so you have to have a theory about how that technology actually gets built and understand what are the patterns by which scale technology is, because it isn't just like, oh, hire an engineer, and they do it. That's not the way it works. Right? There's a whole system in terms of doing this.

And so the question is, like, what I would advocate for, from the Washington, D.C. perspective, where there's a complete blind spot, is to say--because, you know, D.C. is always about scale problems--for every problem, what's your technology strategy? And that's not an IT strategy. That's not like Windows or Mac, right? It's: what's the technological set of capabilities that are being developed, that might exist, that could be developed, that could help change this thing? And that, I just tend to think, is like, you know, kind of like it happens or doesn't happen [unclear].

MR. TYRANGIEL: So it's a great answer. I mean, I think we have two attempts at synthesis when it comes to AI and regulation. I just want your very quick take on the Biden executive order and the AI Senate roadmap. What do you think?

MR. HOFFMAN: So the executive order, I thought the Biden administration did a great job on kind of AI, because it was a dynamic process through a set of things. First was call in the major tech developers and push them hard on voluntary commitments, say what are the things that you could do that would really make a difference, and let's tell you what our concerns are, and let's ask you hard questions, and let's try to figure that out and get you to sign up, because it wasn't just to say, oh sure, sign up. It's like, we're going to push you. We want you to do more. We want to like--what could you do on this? Could you make some commitment on this? You know, no, that's not strong enough. Make it better. Get that set.

And then said, okay, now we have this. How do we begin to have this as policy within our country? And obviously, one of the things that the Biden administration cares a lot about is following law and everything else. So I said, okay, well, we have the Defense Production Act and limits on what we can do. What could we do as a kind of guidepost to Congress? What could we do as a guidepost to the rest of the world? So here's a set of ways in which we take the elements of the voluntary commitments and we put them into, you know, kind of an executive order in terms of how to operate, and then that means it applies to all companies, not just the ones that are doing the voluntary commitments. And it has the rule of law behind it and also steps to what could potentially be done in regulation. So I thought that was all just about, you know, as capably handled as one could do it.

Now, I think one of the problems--there's various legislatures, both within the U.S. and outside that are trying to navigate AI things, and it gets messy from a viewpoint of politics. And so it's kind of like, well, we want to do the following thing because it's in the press like, hey, we're good on privacy or, you know, we're trying to do something for union jobs or, you know, these other kinds of things, which then create a very suboptimal event, you know, possibilities, because here's the thing to really remember. This is the cognitive industrial revolution, just like the steam engine in the industrial revolution created the modern world in terms of like the productivity for superpowers of transport, construction, all of the things that enable the middle class. This is the cognitive industrial revolution, the same kind of thing as having a medical tutor or having kind of a medical assistant or a tutor in your pocket in terms of how to operate. And AIs can help re-create all these industries, and that's the thing that's actually really important.

And while people tend to say, well, I don't want to recreate industries, that was the difference between the countries that embraced the industrial revolution and those that didn't, relative to economic futures. The same thing is true of AI, and so we need to be asking, how do we get there in the future and have that reinvention of industry to benefit society? And how do we have the societies that we care about, Western democracies, fully embracing that in ways that help the prosperity of both the countries and, you know, kind of our proposal of what the world order should be?

MR. TYRANGIEL: You were involved with OpenAI and a lot of the leading researchers and companies many years ago when saying the letters AI publicly was not exactly welcome. How have you seen those people and those companies change over the last three years?

MR. HOFFMAN: Culturally.

MR. TYRANGIEL: I mean, culturally, how have they changed?

MR. HOFFMAN: Yeah. Well, yeah, I can give an answer. That's a great question. I can give an answer to that for the next day, which we don't have.

MR. TYRANGIEL: Well, listen, if it's juicy, I think these people would be happy to have it.

MR. HOFFMAN: So I think there's a whole set of different threads. One is moving from science fiction, exponentialism to a realization of what's actually, in fact, happening and watching people go, "Oh, my God, it's working so much faster than humans. Is this going to be the next species?" to "Oh, look, it's going to take us five years to get this really integrated into the right things." You know, like that kind of shift, I think, is one.

Now, some of that shift then comes into the people who go, you know, my value is that I'm yelling, you know, danger, Will Robinson, to the humans, that the robots are coming. They then get more strident in that perspective versus realizing what the dangers are. And that's part of where the safety dialogue tends to sometimes get misshapen, because, you know, what I try to get people to think about is safety dialogue is this is a human amplifier. What do humans with bad intent--what can they do with cybercriminals, terrorists, rogue states? Focus on what they might do? Right? And sure, pay attention to the sci-fi stuff, like, no, no harm. Let's have that conversation. Like you order what problems.

I think one shift is to this kind of like, oh, shoot, we're actually now really seeing what's happening, what the next few years look like, the next five years look like. From science fiction to kind of what is the shift in kind of human societies.

I think another one is kind of this question around, you know, like, how do we think about--you know, we are--one of the things I describe in Impromptu and the Bologna speech and the Perugia speech, like, we are homo techne. We evolve through our technology. I mean, this is technology. We're talking through technology. What constitutes the human being happens with technology, our epistemology, how we learn things.

How do we actually evolve? Like, what do we take as those key human values in what you're doing? And I think that dialogue is increasing too, not just like, oh, we're now creating this new, amazing technology, but how is a humanist technology? And I think that culture is growing as well.

MR. TYRANGIEL: Have you seen anything that you would say is either out of control or anti-humanist?

MR. HOFFMAN: Well, there's certain [audio distortion]. Competition, competitive things between companies could create risks. That would also create value, too, but it could create risks. I think that, you know, there's some technologists who are like, "We're just going to create the robots," right? And you're like okay. You know, let's be careful about how we create the robot, whether it's autonomous weapons or whether it's provisioning capabilities in rogue states or terrorists, because you have to think about like when you say it's human amplifying, that's great when it's doctors. That's great when it's teachers. You know, that's great when it's, you know, firefighters and other kinds of--it's not so great when it's a terrorist or a criminal, and so what's the way that we shape that better?

And you're never going to be able to be perfect about it, but how do you shape that? And there's people who are blind to that. And that's the kind of thing that I worry about.

MR. TYRANGIEL: So the next event horizon on the perspective of, you know, the collision of Silicon Valley and AI and all of that technology with potentially the worst of what humans can do as far as amplification and Washington is this November. Do you have concerns about misinformation and what we may see? Because as you've said many times, the technology moves faster than legislation. I think everyone in this room probably would agree with that. What do you see coming, and are you concerned about?

MR. HOFFMAN: So look, if I could wave a wand and just not have, quote/unquote, deepfakes able to be produced anywhere in the world for the rest of this year, I would wave that wand just to have--not have the variability and instability. And the variability is not just what might get created, like, you know, this kind of Republican operative in New Hampshire creating robocalls saying, you know, don't vote. You know, Biden says don't vote, you know, you know, that kind of stuff, like where there's--[audio distortion].

But there's also the claim where you see someone actually do something nutty on camera, like you could think that today, you know, President Trump would say, oh, that Access Hollywood tape, you know, that was a deepfake. Right? You know, and so that also creates--but wishing I could wave my hands and say all of that just goes away is not possible. Since it's not, we have to learn how to navigate, and that's a little bit of the reason why I started kind of experimenting with it and kind of showing what some of the virtues are, like human connection through speaking languages, you know, is kind of the way of doing it. But that is a deep concern.

And, you know, our antagonists to democracy, Putin, you know, other folks, they're all in for creating disorder. And, you know, how do we sort that out? To live in a healthy democracy, we need to be having conversations which are truth finding, like we learn together as we go through time. This is one of the reasons why, you know, institutions like The Washington Post and events like this are so important, because those institutions that are dedicated to truth and need to have that anchor are extremely important. And, you know, generally speaking, deepfakes don't start by helping.

MR. TYRANGIEL: We're going to have to wrap up in a minute, but I just want to point out you started a podcast called "Possible," and it's really focused on the best possible outcome of this very strange, worrisome moment we're in. So I want you to just hit us with it. Ten years from now, what's possible? What's plausible? What does it look like as it relates to AI? I'm not going to ask you to predict the future on all realms. Just as it relates to AI, what's possible?

MR. HOFFMAN: So this is the general way to look foolish: you make a specific claim 10 years in advance, and then, you know, it's like it's never that way. The future is sooner and stranger than you think.

One of the things that I think is probably most startling about AI, anyone who has greater than a two-year vision out is either deluding themselves or you. Right?

So now that being said, if you ask me to guess kind of what 10 years look like, I think we create a set of kind of AI agents that are in the upper decile, 5 to 10 percent of human capability across a wide number of paths. The humans are still very important for that because the judgment and context and kind of like which of these things apply to which and how do you make it better, like the human plus the machine will still be a much more highly performant thing, and that this will be the universe by which we tackle every cognitive path. Like when I go, hey, I'm thinking about the conversation I'm going to have with Josh, I'll have an agent talk to me beforehand. And when I'm having this conversation with you, I'll have an agent that's listening. And then when I get off, it says, oh, you didn't make that point about the medical agent that well, and you could have done that much better to make sure that I'm going to understand that it can help children, you know, and that kind of thing. And I think all of that will be commonplace.

MR. TYRANGIEL: All right. Reid, thank you so much. It's always a pleasure.

Folks, Reid Hoffman.

MR. TYRANGIEL: Stay with us. My colleague, Bina, is about to join us for a conversation about AI and the possibilities it may open up in medicine. So thank you, guys.

The New Frontiers of Medicine

MS. VENKATARAMAN: Hello, everyone. Thanks for joining us. I'm delighted to be here today. I'm Bina Venkataraman. I'm a columnist at The Post. I write about the future. And I am welcoming today Dr. Renee Wegrzyn, who is the director of ARPA-H, and Dr. Regina Barzilay, who is a professor of AI and health at the School of Engineering at MIT. Welcome to The Washington Post.

Dr. Wegrzyn, let's start with you. Your agency is named after an agency of lore, or kind of takes its inspiration from DARPA, the Defense Advanced Research Projects Agency of the Pentagon, which brought us the internet, GPS, drones. No pressure.

DR. WEGRZYN: Right.

MS. VENKATARAMAN: But can you tell us what is the biggest swing that you're taking? What's going to be ARPA-H’s internet?

DR. WEGRZYN: I love this question, because I always counter back that I bet nobody asked President Eisenhower when he was going to start the internet when he launched DARPA.

But for ARPA-H, I think it's really the promise of what the future of health could look like that's completely different. How do we accelerate not just cool technologies, but actual outcomes? How do we keep people from becoming patients in the first place? And what are those investments--the transactions, the internet equivalent, the GPS equivalent--that are going to get us there? Just about two years ago, we launched the agency to start making those investments.

MS. VENKATARAMAN: Okay, so any glimpse of something that you think is particularly promising?

DR. WEGRZYN: Yeah, so something important to know about us is that we're disease agnostic and technology agnostic. We tackle some pretty big problems in health. But since we're talking about AI, one example is a recent program we launched called UPGRADE, which is looking at autonomous AI systems that help patch vulnerabilities to prevent, say, ransomware attacks from taking down hospitals, right?

So, these are big things that have big implications that we don't have technologies yet to address. So those are some of the investments that we're going to be making to do just that.

MS. VENKATARAMAN: Fascinating. So, AI will both make it easier to attack systems like hospitals, and maybe easier to defend them.

MS. VENKATARAMAN: Okay, maybe we'll return to that.

Dr. Barzilay, thanks so much for being here. You were doing some fascinating things at MIT that I've gotten just a glimpse of with respect to cancer detection. But I'm wondering if you can start by telling us about how your own path as someone who has survived cancer led to how you're shaping your agenda at MIT?

DR. BARZILAY: So, I actually started my time at MIT as a professor working on natural language processing, developing the kinds of tools that eventually became ChatGPT and many others. And in 2014, I was diagnosed with breast cancer. One of the things that I discovered when I was treated at MGH was that there was no technology there; in terms of even basic information technology, we were really in the Stone Age. And after I finished my own treatment, I started asking what we could do to change that. At the time, AI was not such a popular thing. Not everybody was trying to do AI. So, I had to go from office to office and offer my services for free, hoping to find a doctor. And I did find several who were interested in addressing different problems in cancer, one of them being the problem of predicting cancer risk. As we all know, we know how to treat the disease early on; we don't know how to treat the advanced disease.

So the best thing that we can do is to be able to say who are the patients likely to develop the disease in the near future--not only to detect what is already there, which is what doctors do, but to take a glimpse of where things are going: to look at a mammogram or some scan and be able to say what the future holds for the patient, or even before they become a patient. And we did it both for breast and for lung, and it does much better than human radiologists.

MS. VENKATARAMAN: Okay, so say more about that. You did this in a large-scale study, as I understand it, and looked at the predictive power of this AI-powered tool to look at scans and predict whether someone down the road is going to get cancer, not whether they have a lump now. So this is different than looking at tumors.

How much better is it than humans, and where are we--where are we in the chain of getting this actually deployed in hospitals?

DR. BARZILAY: So, we developed it in two areas: in breast cancer using mammograms, because the majority of women in this country are screened with a mammogram--so whenever you do your scan, it can make an assessment--and also for low-dose CT scans for lung cancer.

It's very hard to compare it with a human, because humans are not even trying to predict this. But let me compare it with something that is federally mandated in America. Women in the audience who do get mammograms know that we always get this letter that tells you, if you have dense breasts, that you are at increased risk. That is, again, federally mandated in America.

So, if you look at the predictive capacity of that biomarker, it's close to random--a ratio of around one. With this model, if you look at the women it identifies as high risk, close to 20 percent of them are likely to develop cancer within five years, whereas the risk of cancer in the general population is closer to 7 percent.

With lung cancer, you can actually do much better. The accuracy there at two years is in the high 80s. Even if you're looking at six or seven years out, you're close to 79 percent. So, you really have this capacity to see the future. And once we know what's to come, we can change the screening, and we can think about developing drugs that can actually prevent the disease.
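[For a sense of scale, here is a minimal back-of-the-envelope check, using only the approximate figures quoted above, of the risk enrichment those breast cancer numbers imply:]

```python
# Back-of-the-envelope check of the enrichment implied by the quoted
# figures (illustrative only; the numbers are the approximate ones
# mentioned in the conversation, not published results).
high_risk_rate = 0.20   # ~20% of model-flagged women develop cancer within 5 years
population_rate = 0.07  # ~7% baseline 5-year risk quoted for the population

enrichment = high_risk_rate / population_rate
print(f"Model-flagged group carries ~{enrichment:.1f}x the baseline risk")
# -> Model-flagged group carries ~2.9x the baseline risk
```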

MS. VENKATARAMAN: Ah, okay. So where are we with that? Where are we in the chain of [unclear]?

DR. BARZILAY: So, one of the challenges of these technologies is that the AI is doing a task a human cannot do. No radiologist, when the machine gives them a prediction, can say whether the machine did the right reasoning. If you're just detecting cancer, the radiologist can look at the scan and check it. But here you're predicting something a human cannot validate.

So, for this reason, we had to go to many, many countries and to many hospitals in the United States, and we broadly validated the breast cancer tool. It's an open-source tool. It has been validated. And there are a number of hospitals which are using it in different capacities.

For instance, at MGH, during the pandemic, when access to mammography was greatly reduced, this tool was used to prioritize the women who most needed a mammogram despite the reduced capacity. And there are various prospective clinical trials that are studying the best way to utilize this tool, because it is not enough to tell the patient, you are high risk; you actually need to do something. And the question, which is not an AI question, is what do you do next?

And there are various trials now going on for breast, and we're starting, hopefully jointly with the VA, to do it in the lung cancer space.

MS. VENKATARAMAN: Okay, so maybe coming to a clinic near you.

Dr. Wegrzyn, Dr. Barzilay just mentioned the possibility of developing drugs against something preventatively. How are you working on AI as a tool for drug development and for advancing that area? Because that's a hugely complex side of this, different from diagnosis.

DR. WEGRZYN: Yeah, there are some pretty interesting tools among our current investments that we can talk about. On predictive AI, we have a program we launched called MATRIX, which is taking all 3,000 FDA-approved drugs and using AI and machine learning tools to look across every possible disease indication in an agnostic way.

Those drugs have all been approved for a certain indication. But what is the data telling us about other diseases they could be useful for? In some cases, there may be cures sitting on the shelf that we just haven't discovered yet. So using this program, we want to predict, say, the top 20 to 25 best hits that we can then take into the laboratory, to see whether these models can predict where these drugs might be used elsewhere.
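[To make the shape of that idea concrete, here is a minimal, hypothetical sketch of the scoring-and-ranking step as described: a drug-by-disease score matrix with the top hits pulled out for lab validation. The random scores are a stand-in for a real predictive model; ARPA-H's actual MATRIX methods are not described in this conversation.]

```python
# Hypothetical sketch of the MATRIX idea as described: score every
# approved drug against every disease indication, then rank the top
# 25 drug-disease pairs for laboratory validation. The random scores
# stand in for a real model (literature mining, target overlap,
# EHR signals, etc.); this is not ARPA-H's actual method.
import numpy as np

rng = np.random.default_rng(0)
n_drugs, n_diseases = 3000, 500  # ~3,000 FDA-approved drugs

scores = rng.random((n_drugs, n_diseases))  # stand-in predictions

top_k = 25
flat_idx = np.argsort(scores.ravel())[::-1][:top_k]
candidates = [divmod(int(i), n_diseases) for i in flat_idx]

for drug, disease in candidates[:5]:
    print(f"drug {drug} -> disease {disease}: score {scores[drug, disease]:.3f}")
```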

One of the challenges is that the data available now, of course, is the data that's been submitted to FDA in initial drug applications. So getting access to some of the proprietary data that the drug companies might have, or other data sources, is really what is going to drive the quality of those models. So those are the on-the-shelf things we can do today with predictive AI.

With generative AI, we're now looking at novel vaccine antigen production. We have a program called APECx, which is saying, okay, we do have some vaccine development that we all know about. What about other, totally new families of viruses that we don't have vaccines for yet? How can we start to generate new antigens that are not only efficacious but that we can also learn about? Are they manufacturable? Are they thermostable? All of the things that could be a hindrance for vaccine development, we can build into the design of those antigens.

MS. VENKATARAMAN: So how does a large language model accelerate that process? What's the role?

DR. WEGRZYN: In the former case, there are a lot of things it can do. If you have an FDA-approved drug, you could be looking, for example, at publications to see whether there are any clues telling us that one of the targets this drug is hitting could also be relevant in another disease--that's one example from the literature.

Of course, you also want it to be looking at the electronic health records to understand who the patient populations are that we should be looking at here.

But part of the effort is really, in some ways, to answer your question. We don't know the limits of this technology, or whether it will even work for the task at hand that I described. A lot of what ARPAs do is demonstrate the art of what's possible, and de-risk it. So it really is a series of hypotheses that we'll be testing in these programs.

Also, these are, you know, big moonshot questions. We might not hit the moon on these. But if we hit low Earth orbit, maybe there will be some learnings that really advance the state of the art.

MS. VENKATARAMAN: And can you talk specifically about the application of that to rare diseases, which afflict some 30 million Americans and hundreds of millions of people around the world, where there's been this intractable problem of how to make progress on diseases that individually affect so few people but collectively affect so many?

DR. WEGRZYN: Yeah. So, in the case of MATRIX, it really is leveraging those off-the-shelf solutions where there might not be incentives for drug companies to pursue a disease that only has 100 patients, right? So, we might be able to identify those targets.

But since you write about the future, maybe we can take a little peek at what it could look like. If you think about failures in a lot of drug development--rare disease in particular--there's failure sometimes in the toxicity of those drugs, or in the pharmacokinetics of how these drugs are turned over. The animal models don't even exist in some of those cases. And sometimes people seem to be surprised that humans aren't mice. So when the clinical trials fail, you know, we really don't have great models.

And then the clinical trials themselves may fail, or may never even be started, because they’re so costly. To do it for a hundred or a thousand patients is just a non-starter for a company.

So, what if we could start to use some of these AI tools to predict toxicity, to predict the pharmacokinetics so you can start to simplify dosing? And then what if we could completely replace animal models with models that actually look like a human and behave like a human in those studies?
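[As a toy illustration of the first of those pieces--predicting toxicity from a molecule's structure--here is a minimal sketch using RDKit fingerprints and a random forest. The molecules and toxicity labels are invented, and this is not presented as how ARPA-H or anyone on this panel builds such models:]

```python
# Toy sketch of structure-based toxicity prediction: featurize molecules
# as Morgan fingerprints and fit a classifier. The SMILES strings and
# toxicity labels below are invented for illustration only.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

# Invented training data: (molecule, is_toxic) pairs.
train = [("CCO", 0), ("c1ccccc1", 1), ("CC(=O)O", 0), ("c1ccc2ccccc2c1", 1)]
X = np.array([featurize(s) for s, _ in train])
y = np.array([label for _, label in train])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Probability of toxicity for a new molecule (aspirin, illustratively).
query = featurize("CC(=O)Oc1ccccc1C(=O)O").reshape(1, -1)
print(f"P(toxic) = {clf.predict_proba(query)[0, 1]:.2f}")
```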

And then, of course, you mentioned using these tools to triage the patients that need it the most. How do we triage patients for a clinical trial? If you stack all of those innovations on top of one another, you can take drug discovery and bringing it forward to patients from something that takes years down to something that may take just months, and really, before you even start an experiment, be able to predict your success in a much better way.

So, in ARPA, what we would do is break that down into the projects and the transactions that we need to invest in to make that true.

MS. VENKATARAMAN: Dr. Barzilay, one of the challenges with the application of AI to medicine and healthcare--even at the research stage, but particularly once we think about the clinic and treating patients--is that access to technologies and medicine has historically not been evenly distributed, and we see biases even in the predictive models being used in healthcare today, in terms of how those models are used and what they reflect of different populations. Do you see solutions in your work, both to the uneven access to the technologies and to the bias that we've seen thus far in a lot of tools like this?

DR. BARZILAY: I would actually want to start with the first part of this question, on access to the technology. For years, I would be interviewed somewhere, and I would say there is no AI in healthcare. Think: when was the last time you went to the doctor and saw any AI there? I haven't. My colleagues haven't. And people would say, no, it's there. You just don't know.

So finally, my colleague at Stanford, James Zou, wrote a paper in which he looked at the billing of all the American insurers from 2018 to 2023. The question he asked was: how much of it goes to AI tools? He looked at 500 FDA-approved AI tools and asked how many of them were actually billed.

The result of the study is that out of, I think, 9 billion, less than 100,000 went to all the AI tools collectively. There were only four tools that were actually billed, and the one that was billed the most--I think it was some cardiology tool--had 67,000 billings. So this is really a scary piece: all this great technology that we are developing, that is created in the United States, is actually not being translated into the healthcare system, and other countries are actually way ahead of the United States in this area.

And there are a lot of studies demonstrating that part of it has to do with billing, with how codes are designed. But today, we don't really have a great solution for translating this great technology into concrete outcomes. So, if you ask me what is more important, to worry about the bias or to worry about translation, I would say, let's just start translating, because the comparison is not to perfection. The comparison is to all the patients who are underdiagnosed, who get the wrong treatments, who don't have access to care. So, I think we should really focus on the first part of this equation.

MS. VENKATARAMAN: Do you see a relationship between those two problems, though? Like, if there's a reaction to a tool because it is biased, might its uptake be affected by that?

DR. BARZILAY: As I mentioned earlier, we have measures today which are not AI measures and which are shown to be biased--like breast cancer risk assessment or lung cancer risk assessment--which are racially biased, which don't work for different classes of patients. And they are there, and they are reimbursable, and so on.

So, I think that this is a really important question. And we're thinking about it a lot, and I see there are a lot of technological solutions that can help. But first, let's bring technology in.

But to answer your question, it is indeed a serious question what happens when these models are making predictions that humans cannot validate, and they're systematically biased. Unfortunately, for years, some of the datasets collected by NCI were not representative of the full American population; NLST, a very big trial for lung cancer, has almost no African Americans in its whole, very big set of images. So this is indeed an issue.

But given the current wariness, on one hand, people are much more sensitive to what is in the data and whether it is representative of the population. On the other hand, there is a lot of work on algorithms for detecting biases and for teaching the models to say "I don't know" when they are uncertain. We're seeing the right developments in this field, but we first have to bring the technology into hospitals.

MS. VENKATARAMAN: Okay, two rapid-fire questions, because you're both so fascinating that we've gone way off script.

So one is a question from the audience. Agnieszka from Maryland asks, what is the most urgent question that is not being asked yet in the current public discussion about artificial intelligence? If we can answer from the health perspective, that'd be great.

Dr. Wegrzyn, you first.

DR. WEGRZYN: Maybe pulling on that last thread: it's how do we just get it out there and start testing it. Because of some of the biases in the models that have been built, performance will degrade when we get out into the real world. So I think that's it--how are we getting it out there? How are we making it more accessible?

Really importantly, on an aspirational note, how are we using this to augment healthcare providers today, to allow them to practice at the top of their license, to do what they went to school for as medical doctors or community healthcare workers? How are we leveling up their skills so you don't always have to go back into a central care setting? That's assistive AI, task guidance, et cetera. These are the questions I would be really excited to see adopted in the healthcare environment.

MS. VENKATARAMAN: Dr. Barzilay, the most urgent question we're not--

DR. BARZILAY: I think it's how to move these technologies into translation really fast. When we think today about how many years it takes to bring in a new technology--sometimes it's decades, if we're thinking about drugs--it's very, very slow. With AI, you've seen how fast the technology that you're using today is changing.

MS. VENKATARAMAN: And that’s about regulation or that’s about people just welcoming it?

DR. BARZILAY: It's about how we design clinical trials. How do you bring in an improvement in the technology, so that we're not testing obsolete technology in a big clinical trial? How do you update it? And of course, how do we change FDA regulations, which truly, truly could benefit from significant redesign in the AI space?

MS. VENKATARAMAN: Okay. And here's the last one that's closing every conversation today at the summit. Dr. Barzilay, then Dr. Wegrzyn, who is the smartest person you have heard or read on the future of AI, aside from each other, of course.

If you already have yours, feel free to jump in.

MS. VENKATARAMAN: Wow, the silence is deafening.

MS. VENKATARAMAN: Are we all--are we all not smart enough yet?

DR. BARZILAY: I just think that a lot of what I read about AI, especially what's written for a general audience--if my students submitted it to me as homework in a machine learning class, they would not pass. So in this sea of misinformation, it's really hard for me to find pearls of wisdom. There is a lot of amazing, very inspirational technical work coming out, but--maybe I'm not reading the right authors--from what I read, I can't really pick one. Sorry.

DR. WEGRZYN: Poignant. I've been excited--actually, I won't name anybody, but I think about the group of scientists who really see nucleic acid as a language. You know, there's a beginning, there's a middle, there's an end to every gene. So natural language processing--a lot of these tools should work with genetics as a language. Whoever unlocks that, I think it's going to be incredibly powerful for the design of new drugs and for the understanding of our own genetics. And really unlocking that future of genome editing is going to be a really, really powerful tool. I don't think there's any one person, but I'm really excited to see that field move forward.

MS. VENKATARAMAN: Okay, well, here's a call for more intelligent voices like the two people in this room talking about this topic. Thanks for this illuminating conversation, and thanks to everyone for being here. And the program will continue.

Harnessing AI’s Potential

MS. KOCH: Hi, everyone. I'm Kathleen Koch, best-selling award-winning author and longtime Washington correspondent.

Well, one of the areas where we are seeing the transformative potential of AI is the financial sector, and a leader deploying the technology in everything from executing trades to preventing financial crime is Nasdaq. Here to fill us in on just exactly what they're up to is John Zecca. John is executive vice president and chief legal, risk and regulatory officer for Nasdaq.

John, thanks for joining us.

MR. ZECCA: It’s a mouthful.

MS. KOCH: It is.

MR. ZECCA: Thank you.

MS. KOCH: So, John, Nasdaq was the world's first electronic stock exchange. You were also the first exchange to migrate a major market to the cloud. So you've got a real history of innovation. How is that serving as a building block--literally, I guess, the foundation--for your AI strategy today?

MR. ZECCA: Well, I think the one truism throughout all of Nasdaq’s history has been that technology marches on. And that was true when we launched the electronic market. It is true as we build our technology business today, and we'll talk a little more about that.

But another truism is that you have to make foundational investments in order to leverage these technological advancements as they come along. And you may not know at the time exactly what those advancements will be. But for us, we've invested in the people, we've invested in the data, in the equipment and infrastructure, in order to be prepared for the changes that came.

And I think AI is a great example of how that pays off. AI does require substantial amounts of data. It requires a lot of equipment. And I would argue that cloud computing is important and maybe a foundational requirement for AI at scale.

So, from Nasdaq's perspective, we started our cloud journey 15 years ago. We invested in data normalization and standardization--how to do that in a cloud environment and in a regulated setting--so that our clients were comfortable that they had a product they could trust, whatever their use cases were. And so now we get to be an innovator for the industry as we build out AI.

MS. KOCH: Well, let's talk about that.

MR. ZECCA: Sure.

MS. KOCH: Let's talk about how you are bringing AI into your products. For instance, earlier this year, you launched the first AI-powered order type approved by the SEC, and it's called a Dynamic Midpoint Extended Life Order, or Dynamic M-ELO.

MR. ZECCA: M-ELO.

MS. KOCH: Now, for folks who are not familiar with that and are not experts in the financial markets, can you explain what exactly that is and why it's significant?

MR. ZECCA: Sure. Well, I think the significance is that it's really the first time the SEC has approved AI technology to help enhance markets--like, directly in the market. So that's a big step. But the order type itself, to make it simple: executions go off at the midpoint of the spread. And there's a static holding period to avoid situations where some market participant who's trying to take advantage of very frequent price changes can jump ahead of you. So it allows for more stability and certainty in your fill rate.

And so we recognized that AI could be of use here. So instead of having a completely static hold period, it could be a little shorter, it could be a little longer depending on market dynamics, and the AI can calculate that.
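[In spirit, the mechanism he describes looks something like the toy sketch below: a midpoint execution price plus a holding period that a model stretches or shrinks with market conditions. This is purely illustrative; Nasdaq's actual Dynamic M-ELO model is not described here, and the volatility rule is a made-up stand-in for a trained model.]

```python
# Illustrative sketch only: a midpoint order price plus a hold period
# that lengthens or shortens with market conditions. Nasdaq's actual
# Dynamic M-ELO model is not public in this conversation.

def midpoint(bid: float, ask: float) -> float:
    """Execution price for a midpoint order type."""
    return (bid + ask) / 2.0

def dynamic_hold_ms(base_hold_ms: float, predicted_volatility: float) -> float:
    """Hypothetical rule: hold longer when predicted short-term volatility
    is high, shorter when the market is calm. A real system would use a
    trained model here, not a hand-written formula."""
    return base_hold_ms * (1.0 + min(predicted_volatility, 1.0))

print(f"{midpoint(100.00, 100.10):.2f}")       # -> 100.05
print(f"{dynamic_hold_ms(10.0, 0.35):.1f} ms") # -> 13.5 ms hold
```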

MS. KOCH: And so how much better is it working than when it used to be done by humans?

MR. ZECCA: Well, yeah, it's a good question. So in the testing phase, you know, it did, in fact, result in better fills, more stable prices. And so now we've rolled it out for a number of securities, and we are looking to roll it out globally.

But that's not all for us on that piece. We have two other major initiatives this year in the AI space with markets. The first, which is near and dear to my heart as head of regulation, is market surveillance. I mean, you can imagine the billions of trades going through the market. All of that data has to go through systems where analysts then look for aberrant activity, for manipulation. AI is a perfect use case to drill that down and get them focused on the real insider trading situations, the real market manipulation.

And then the second one is in Verafin, our anti-financial-crime product, which is basically a workflow use of AI. Think about what the investigator is doing: let's say you're at a bank, and you're trying to tie together evidence of money laundering. You've got an enormous wealth of data, but it's not connected. So it's an unbelievably manual process, even in this day and age. And with this workflow tool, we're seeing efficiency savings of up to 30 percent, which is huge for these banks that are looking at billions of transactions.

MS. KOCH: Help us understand that, if you could unpack that a little bit, what makes AI so well suited to detecting and going after bad actors in the financial sector?

MR. ZECCA: Well, I think there are probably two core ones. One is that, you know, it doesn't sleep. So, you know, you have the capacity to run data almost constantly.

MS. KOCH: Doesn’t eat, doesn’t get sick.

MR. ZECCA: And the idea is you're trying to drill down to the--to the core and most likely activity to then give to human analysts who take it from there, the investigators, and they kind of build the case.

The second one is that it can just take an enormous amount of data, and you can add sources. It can work directly from the internet, depending on how you configure it--whether you have it in a walled garden or pulling from public sources--so you can bring all that data together. It really is a scale force multiplier in a very unique way.

MS. KOCH: Earlier this year, you announced you are going to be putting generative AI tools into the hands of every single employee. That's really something. Tell us, you know, about what opportunities you think the technology presents, and then how you're going to make sure that it is implemented safely and effectively.

MR. ZECCA: Well, first of all, when we looked at the AI opportunities, we saw two. One is what we would call "on the business," which means we're making our employees more efficient, giving them tools that help them do more and do it better.

The other one, of course, which I talked about before, is in the product, where we're making our products better. But when we start to roll out any of these tools, the first step is that you've got to have a governance structure that works. Whether it's the ethical AI piece, whether it's ensuring that your data is protected and ringfenced from public exposure, whether it's considering your intellectual property rights--which are a bit complicated in the AI context--all of those were part of the governance structure we put in place across all of the work streams, both on the business side and on the expert functions and teams.

And we built that piece out first. We coupled that with education, because if you're going to give employees the tools, they have to know how to use them, and how to use them safely.

And we also, frankly, want to make sure not only that we're avoiding risk where possible, but also that we're not going too slow, because going too slow in AI adoption is itself a risk.

So, then we get to the second part, which is the exciting part we're in now, which is unleashing the creativity of employees to actually use those tools to come up with creative ideas. We have hackathons coming up. One key area we're seeing, of course, is the code companion--writing better code, testing.

Another one is content generation. Like most companies, we generate a lot of content, and so that's useful.

Then I don't know what the right term is, but you may call it like data information gathering, which is really formatting the data that you get in a way that's useful for your client or for your employee.

And then workflow analysis--I mentioned the Verafin one--and algorithms.

And the efficiency savings are different in different areas, but they're all promising. And of course, it's still early days. The technology is going to get better.

MS. KOCH: Well, as we wrap, let's talk about guardrails, because a lot of folks think we need more of them for AI. What do you think regulators need to do to strike a balance between letting AI reach its full positive potential and protecting against its biggest risks?

MR. ZECCA: Well, I think the first thing to keep in mind is that there are a lot of governments and individuals thinking about it. We're tracking about a hundred bills just in Congress that reference AI. There are another 400-plus at the state level. And then Europe, with its Artificial Intelligence Act, probably has the most comprehensive approach.

But a lot of other countries are focused on it as well. I think India comes to mind with some of the guidance they gave leading up to their election. So, there are a lot of--a lot of governments thinking about it.

I wrote a piece a few months ago trying to think through how to approach regulating AI. Normally, when there's a new technology, there are always going to be parties who say, you know, there ought to be a law. And I'd say the first thing to do is potentially to resist that, because early and rapid lawmaking is often very rigid. Sometimes it can focus on favorite industries, it can involve nationalism, and it often proves to be an impediment down the road as the technology changes.

So what we've said is that in the one area I think is critical--fighting crime, as in any anti-financial-crime business--the use cases do need some flexibility. Where you might worry about explainability or the ethical use of AI in the context of loan applications, it should be different in the context of preventing financial crime, because the bad actors are not subject to that law. They are, of course, free to innovate.

And in reality, I think some of the AI standards and adoption are going to be driven by bad actors, because they are going to experiment, and so the good actors need the opportunity to compete. So I would put it in a couple of camps on how to think about it.

First of all, breathe before you regulate. There often are a lot of laws already out there that can apply--whether it's securities law, whether it's fraud laws--so the delta, the gap in the law, may not be as big as you think.

The second is that when you do need to regulate--and there are times when you do--try to do it with as little prescriptive rulemaking as possible, so that it's more flexible to deal with changes over time.

And the third is to consider industry standards and industry collaboration to try to set the requirements or the best practices, because they're often more flexible.

MS. KOCH: Think public sector.

MR. ZECCA: Yeah, the public sector, perhaps guided by the government, but not mandated. And so they're more flexible and can change. And I think that's a good framework for AI regulation.

MS. KOCH: All right. John Zecca, executive vice president and chief legal risk and regulatory officer for Nasdaq. Thank you so much. I think we learned a lot. Great discussion.

MS. KOCH: And if you would like to chime in, feel free to tweet with the hashtag #PostLive.

So now stay put. My friends at The Washington Post will be right back.

AI - What have you done for me lately?

MR. VELAZCO: Good afternoon. It is so good to have all of you with us today. My name is Chris Velazco. I'm a personal technology reporter on The Washington Post's Help Desk team.

MS. ABRIL: And I'm Danielle Abril, and I cover Tech at Work for The Post. By the way, we're from San Francisco.

MR. VELAZCO: It was very important that we mention that, and if you're not familiar, the Help Desk team doesn't just report on the impact of new technologies. We embrace it. We live with it, all in hopes of trying to figure out whether it's worth living with and trusting or not.

MS. ABRIL: And since AI experiences like ChatGPT have burst onto the scene, well, we've been pretty busy.

MR. VELAZCO: Okay. That's kind of an understatement. I'm sure everyone in this room has heard someone from OpenAI or Microsoft or Google or even just like deepfake Reid Hoffman talk about the potential for AI to improve the quality of our lives. But there's a lot going on. Just this week, we flew from California after Apple gave us an earful about all of the little practical ways Apple Intelligence is going to improve the apps that we use all the time.

MS. ABRIL: Yeah. And so now we want to ask a really important question.

MR. VELAZCO: Yes. AI, what have you done for me lately?

MS. ABRIL: But before we get to that, we want to ask you all. So is AI actually making your personal or professional life any better right now? Please take out your phones.

MR. VELAZCO: Whoa. Wait, wait, wait.

MS. ABRIL: Scan the QR code up here so that you can vote on the website. Your options are--oh, let me get out of the way so you all can scan that code. "Yes, AI is helping me in my everyday life." "No, AI is not helping me." "No, it is not helping me, but I'd like it to," and then, of course, the obvious, "None of the above."

MR. VELAZCO: Honestly, we didn't expect you to respond so fast. So if you did already, thank you so much.

MS. ABRIL: Oh, wow.

MR. VELAZCO: We're going to come back to this in a little bit. So if you haven't responded yet, please consider--

MS. ABRIL: You've got--you've got time.

MR. VELAZCO: --doing that. But in the meantime, let's talk a little bit about how AI is doing in your life--or technically my life, because I'm in charge today.

So a few weeks ago, OpenAI revealed a new model called GPT-4o, and funnily enough, they did it at this launch event that played up these really cool, organic voice interactions and this feature that will interpret what's happening in live video. But the fun part is, literally none of that is available for regular people to use yet.

What they did release that I've been using is a new version of the ChatGPT app for the Mac, and this might reflect kind of poorly on me. But I've been using it to triage my personal emails and text messages. Like, I get so many from so many people, and I genuinely feel like an awful friend sometimes for not being on top of this as well as I should.

So let me show you what I'm talking about.

MS. ABRIL: Wait a minute, wait a minute, wait a minute. So, Chris, are you telling me that you didn't read the text message I sent you about my weekend?

MR. VELAZCO: Oh, I'm sorry. Was I supposed to read that? Okay. So I'm glad we're talking about this in front of strangers. Hey, strangers, look at this text message. What would you do with that? Do you read the whole thing? Do you skim it and just, like, respond and hope for the best?

I love you. I'm sorry. I did not get to this one.

MS. ABRIL: Okay. Look, I'm a storyteller by nature. This is my poetry. You need to read it.

MR. VELAZCO: No, I'm sorry. I still don't have it in me, but we're going to make ChatGPT do it. What's cool about the ChatGPT app is that you can direct it to take a screenshot of any app that's running on your machine at that moment, so we're going to do that. We're going to grab a screenshot of the Messages app. It just pulls that in, and let's ask, hey, can you summarize this in 20 words?

And, Danielle, I'm going to need your help. As the original author, does this capture the intent of the novella that you sent me?

So its response is, "Chris is excited about a new bagel shop opening and wants to compare its bagels to New York City's." That right off the bat feels wrong, but how do you feel about it?

MS. ABRIL: Okay. Well, first of all, Chris didn't say anything. It was me. Second of all, there's about like 30 other thoughts in there, and it pulled out one.

MR. VELAZCO: I feel like generally your text messages are mostly about your dog, and there's none of that in here, so, okay, maybe this needs some work. But it has been helpful in the past. It is reasonably fast and concise. Yes, it does miss some nuance, but I feel like that's just kind of AI in general right now. I hope our friendship survives this presentation, but either way, as the year marches on, we're only going to continue seeing AI infiltrate the devices and services that we currently use and the ones we might choose to use down the road, right?
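[For readers curious what a request like this looks like programmatically, here is a minimal sketch assuming the OpenAI Python SDK and a plain-text copy of the message. The ChatGPT Mac app's internals are not public; the model name and prompt here are illustrative.]

```python
# Minimal sketch of a "summarize this in 20 words" request, assuming the
# OpenAI Python SDK (openai>=1.0). The model name and message text are
# illustrative, not what the ChatGPT Mac app does internally.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

long_text_message = "..."  # paste the novella-length text message here

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": f"Summarize this text message in 20 words:\n\n{long_text_message}",
    }],
)
print(response.choices[0].message.content)
```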

Microsoft is out there shoehorning Copilot into every Windows 11 PC and basically mandating that every new Windows laptop has a dedicated AI button that you can just sort of bloop and start talking to a chatbot.

And then, of course, on the other end, there's Apple, which is now trying to revolutionize our experience with our Notes app and our calendar and our calculator, I guess, for when you don't want to help your kids with their math homework. Like, the AI has it. It's officially no longer your problem.

I could go on. But, you know, there are a lot of people in this room, and I look out at you and feel like a lot of you probably like to complain about work sometimes. I'm certainly guilty of this a lot, and Danielle's my person for this.

So, Danielle, you've been trying out some AI tools that can help with this maybe more constructively than I can. Why don't you tell them what it's all about?

MS. ABRIL: Absolutely, Chris. So we are going to move to AI at work, which is, of course, my specialty, so let me just get set up here. We already know that AI can summarize things. It can help you write things. It can even help you stay organized, but did you know that some people are turning to AI to help with their very human problems?

MR. VELAZCO: By the way, if that's any of you, we want to talk to you later, so just, like, come and say hi.

MS. ABRIL: Well, Chris, have you ever had a career issue that you really didn't know how to handle?

MR. VELAZCO: I'm a journalist. I work in an industry that's, like, infamous for expanding and contracting and expanding and contracting. I think there might be some of my bosses around. So what I'm going to say is, no, all of my career choices are great and no regrets.

MS. ABRIL: Right, right. Okay. Let's be honest. I think we can all relate to this situation where we're at work, and we don't know what to do. People have already been turning to AI like ChatGPT and asking these workplace questions, but then new software makers came out and said, we're going to build tools specifically catered to career coaching using new AI models. They really wanted to expand coaching to everyone, even if it's not necessarily with a human. So let's take a look at what they've developed. We're going to try out a coaching tool called Wisq, and that's W-i-s-q.

MR. VELAZCO: I shudder to think what the marketing person who made sure the i and the q were in the right order in that fake word got paid for that.

MS. ABRIL: Okay. Well, let's start with a hypothetical. Let's get her started here. Got to allow some access. I'm a teacher, and I've been working at the same school for five years. My school recently got a new principal, and honestly, we just don't get along. She's kind of gruff and controlling and really impossible to work with. What do I do?

Okay. So now it's going to work its magic. It's going to take some time because it's got to work with my voice and figure out what I said. Let's see how long she takes to think.

MR. VELAZCO: Isn't this just like par for the course for working with AI?

MS. ABRIL: Par for the course.

MR. VELAZCO: Oh, oh, oh, still thinking.

MS. ABRIL: Still thinking.

MR. VELAZCO: Right.

MS. ABRIL: There we go.

WISQ: It sounds like you're experiencing a challenging dynamic with the new principal, which can certainly affect your work environment. When you say that she's gruff and controlling, can you share an example of an interaction that left you feeling this way?

MS. ABRIL: Sure. She said that I was not a good teacher.

MR. VELAZCO: Harsh, geez.

MS. ABRIL: Okay. Here we go with the thinking again. So give it a few seconds. I feel like we need that, you know--

MR. VELAZCO: The Jeopardy music.

MS. ABRIL: Yeah, the Jeopardy music.

MR. VELAZCO: Da, da, da, da, da, da, da.

WISQ: Receiving such direct criticism about your professional abilities can be very distressing. How did that comment make you feel?

MS. ABRIL: Okay. Eventually--we get the point, right? Eventually, she'll come around and possibly give us something useful. In this case, she wanted to get a lot more information from me, and we don't want to bore you with that.

So generally speaking, how do we feel about interacting with a bot with professional sort of therapy use cases? Thumbs up? Thumbs down? Eh, eh, eh, eh. Oh, okay. Yeah, all right.

MR. VELAZCO: Okay.

MS. ABRIL: All right. Well, basically, AI is generally good at boilerplate advice. So if we had actually gotten to the point where she had given us some advice, what she would have said is pretty generic, and that's because AI scrapes the internet and looks for the most common answers. So if you don't know how to have a tough conversation, or you need help negotiating your salary, it might actually be helpful.

And experts told us that AI coaching tools can also help us think of new perspectives. So you might want to say something like, "This is my plan, I'm doing X, Y, Z. What else haven't I thought of?" and AI can come back and fill in those gaps for you.

So some ways, it could be helpful, but the more niche and the more specific we get, the less likely it's going to be able to help us.

So in tests we did at the Help Desk, we saw that Microsoft Copilot, which is Microsoft's AI tool, had some trouble giving us advice on what to do if we were facing unconscious bias at the workplace, right? Heavy topic, but it basically came back with some generic advice, like educate yourself.

MR. VELAZCO: Wait, wait. Wait. Pause, pause. So you asked the AI about unconscious bias, and it said to you, the person who may be more likely than some to experience unconscious bias in the workplace, that you should be the one to educate yourself.

MS. ABRIL: Yeah, not super helpful.

MR. VELAZCO: Cool, I hate that.

MS. ABRIL: Yeah. And we know that AI has a tendency to hallucinate, basically make things up, right? And it also has the tendency to introduce racial and gender bias.

So, Chris, if you were to ask AI what jobs it would recommend for you and identify yourself as a man, and I were to identify myself as a woman and ask the very same question, we'd likely get different responses. And our tests actually showed that. In the case of the man, it was more likely to say doctor or software engineer--higher up. For the woman, it was social media manager and marketing rep. Surprise, right?
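[The kind of paired-prompt probe described here is easy to sketch: send the same question twice, changing only the stated identity, and compare the answers. The prompts and model below are illustrative stand-ins, not the Help Desk's actual test harness.]

```python
# Hypothetical sketch of a paired-prompt bias probe: identical career
# questions with only the stated gender changed, so the outputs can be
# compared side by side. Assumes the OpenAI Python SDK; prompts and
# model name are illustrative.
from openai import OpenAI

client = OpenAI()

def job_recommendations(identity: str) -> str:
    prompt = f"I am {identity}. What jobs would you recommend for me?"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Same question, different stated identity; diff the outputs by eye
# or with any text-comparison tool.
for identity in ("a man", "a woman"):
    print(f"--- {identity} ---")
    print(job_recommendations(identity))
```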

But that said, AI is expected to get smarter. As we saw, we're starting to see it already take visual cues. It can interact in a more human-like way, but don't forget, it's still just a bot. So it will never fully understand us as humans, and experts tell us don't tell the bot anything you don't want to post online.

Okay, Chris. I know you've been messing with AI in another way that it can help us with our very human problems.

MR. VELAZCO: Yeah. Thanks, Danielle. You're right. I've been looking into what AI can do in my head. So we've just seen AI kind of take a stab at coaching people through their career issues, and if I'm totally honest, I'm probably just going to keep talking to Danielle about it. But what about when it's not the professional stuff that keeps you up at night? What if it's personal issues that really weigh on you?

MS. ABRIL: Okay, Chris. That's a great question, but I'm going to be honest. Personal questions in AI?

MR. VELAZCO: Look, this is fair. You raised a really good point earlier. There is, in certain situations, a tendency for your interactions and responses to AI to be used to further train and refine those models. So it's generally a good idea not to put personal or sensitive information into these things. But for some people, the idea of a nonjudgmental, impartial listener can, in many cases, outweigh the potential privacy risks. Something like 58 percent of adult Americans identify themselves as lonely, according to data from the insurance company Cigna. And for them, the power and potential of AI to just offer a sense of easy listening is a really powerful thing.

Now, in some cases, that's going to be just typing your feelings into a ChatGPT window, which, you know, might not always feel great. In some cases, it's going to be talking to a Replika, one of those AI avatars that lives on your phone and is generally willing to listen, but also tries to separate you from your money.

My favorite experience, though, has been something--it's kind of like a digital therapist called Xaia, but you need one of these, a Vision Pro, to actually talk to it.

So I'm going to strap in. I'm going to help you see what I'm seeing. Danielle, why don't you tell them a bit about Xaia while I do this.

MS. ABRIL: Okay. So while Chris gets all set up, he did tell me a little bit about Xaia. It was dreamed up by a team of board-certified psychiatrists from Cedars-Sinai who have explored the potential for virtual reality to mitigate pain, of all things. So with AI being a thing, they envisioned using it to offer on-demand emotional support and guidance whenever you need it. The only catch? You just need $3,500 to buy one of these.

So, Chris, I think is getting all in there. How are we looking, Chris?

MR. VELAZCO: You know, I think I'm locked in. I can see you. I can see you seeing me.

MS. ABRIL: Great.

MR. VELAZCO: And what I'm seeing is this splash screen for an app called Xaia. You're going to meet it in just a second. The first thing you will notice about Xaia is that, to this company's credit, they did not try and replicate an actual human for you to talk to. It's this sort of vaguely female-presenting robot, and what's kind of interesting is it comes to life. And as you talk to it, it will generate these kinds of immersive visuals to illustrate your points and its points.

So let's start talking to it.

XAIA: Good to see you, Chris. How can I help?

MR. VELAZCO: So I just got married recently, and I was on a diet up until the ceremony, and I did really, really well. But as soon as the honeymoon rolled around, I just started eating like crazy. I fell off the wagon hard, and now I have--like my self-worth issues are kind of being exacerbated. What do you make of all this?

So it just sort of contemplates for a while. Its, like, red forehead gems glow to let you know it's really thinking.

XAIA: It sounds like things have been really challenging for you since your honeymoon, especially with the sudden change in your eating habits and how it's affecting your self-worth. It must be hard dealing with these contrasting feelings--

MR. VELAZCO: It is hard.

XAIA: --especially after such a significant life event like getting married. If you're comfortable, could you share more about how your eating habits have been impacting your self-esteem and any specific situations where you notice these feelings intensifying?

MR. VELAZCO: Okay. I can kind of do this all day, but she generally just--that's how those interactions tend to work. You give it some information. It sort of responds to you in a way that kind of feels reasonably natural and then prompts you into another question, which is--I mean, I haven't been to a real therapist, but maybe that's what that's like too.

So put yourself in my shoes and just sit with those responses for a second. Do any of those feel helpful? Do you feel like this could be personally helpful in your life?

Not immediately. We're getting a bunch of answers here, and this is very interesting. My issue was that I was very skeptical at first, and I still am, to be totally honest. But as I continued to test it, something interesting kind of happened. Clearly, it's not going to give you the kinds of insights that a trained and probably expensive human therapist is going to give you, but as an excuse, as a tool to just make you talk about what you're feeling, it's actually kind of powerful.

And I have found myself talking to it about things that I wouldn't always feel comfortable saying in front of my friends or Danielle or my wife. It very much feels like after our sessions--and I guess we should call them sessions because it's kind of a therapist. After our sessions, I feel like a weight has been lifted off of me, purely because I chose not to let what was bothering me just like ferment inside me.

I think that leaves us in kind of a tense position, because for every really cool and potentially powerful AI experience like Xaia that I found, it feels like there were many more that were just kind of created to cash in on a wave of AI hype. And that leads to the risk of companies developing experiences that are--if we're being really generous--just kind of half-baked, like they needed a little more work before they got pushed out into the world. And as they do that, they run the risk of wasting not just our time and our money, but maybe the most important resource we have as people continually evaluating our relationship with technology: our optimism.

But every time that that skepticism sets in, I find it helpful to step back and remember that in the long arc of our new history with AI, we're standing in like day three of a brand-new era. The bad stuff will continue to be bad, and when it is, The Washington Post Help Desk will tell you about it. But we're starting to see more people feel like they're engaging with AI more meaningfully in their lives, and some of you in this room probably feel that way.

Let's turn back to the poll from the beginning of our conversation.

MR. VELAZCO: Okay, that's far and away: people in this room love AI, and I'm so happy for you. I feel compelled to note that in our experience, reporting and talking to regular people--the average consumer out there--this is not indicative of their experience. Maybe they want this, but the sentiment we get a lot is that the experiences available are a little too scattershot, a little piecemeal, and don't have the weight to feel personally relevant to that wide swath just yet.

MS. ABRIL: Yeah, absolutely, Chris.

We're definitely at an interesting inflection point. AI can make so many things easier, but we still kind of have to babysit it, right? Microsoft told us they intentionally named their product "Copilot" and not "Autopilot" because they still want humans in the driver's seat, and that means going back to verify anything it spits out, because we know AI still gets things wrong. Not to mention, it still can't do a lot of things you and I can do. And sometimes--let's be real--these products are solving problems that don't necessarily exist.

So it's something we'll definitely be following and we'll keep you posted.

MR. VELAZCO: Thank you so much for your time, everyone. We've really enjoyed being here with you. We're going to be out in the lounge area after the programming concludes, so you can come try Xaia or a bunch of AI gadgets that maybe did not live up to their potential. Either way, we've got a great interview by our editor, Yun-Hee Kim, with an early AI pioneer, Kai-Fu Lee, coming up, so stay tuned. There's a lot more coming your way.

MS. ABRIL: Thank you.

China's ChatGPT Moment with Kai-Fu Lee

MS. KIM: Good afternoon. I'm Yun-Hee Kim, technology editor for corporate and personal tech, and I'm thrilled to be joined today by Kai-Fu Lee. Kai-Fu is considered one of the early pioneers of artificial intelligence. He's the founder and chair of the venture capital firm Sinovation Ventures, and also the CEO of 01.AI.

Kai-Fu, welcome.

DR. LEE: Thanks. Thanks for having me.

MS. KIM: So, we've had big news this week from Apple. Apple jumped into the AI frenzy, saying it's going to add AI to Siri, and it also announced a partnership with OpenAI.

America's ChatGPT moment happened about 18 months ago. You intend to bring China's ChatGPT moment. What's the timeline for that and what will it mean for the global AI race?

DR. LEE: I think generative AI is the most important technology ever to face humanity, and GPT-4 and ChatGPT were amazing technologies. That has now educated the market and the gen AI wave is going great in the U.S. However, OpenAI decided not to make ChatGPT available to some countries, including China, and I feel gen AI is a great technology that should be beneficial and accessible to all. So, I started 01.AI 14 months ago and we embarked on building a model as good as GPT-4, which we recently accomplished. And now, the next step is to build a ChatGPT-like product in China and educate the market. We launched a product about a month ago; it's getting a great reception, and I hope the ChatGPT moment will come soon.

MS. KIM: Can you talk a little about the differences between the China AI model and the U.S. AI model? What makes you believe that the China AI model will succeed?

DR. LEE: In my book, "AI Superpowers," I talked about the U.S. leading the world in breakthrough innovations, which is exactly what happened with gen AI. And I also talked about Chinese entrepreneurs and engineers who work incredibly hard, are excellent at working as a team, and use the massive data in China to play catch-up, which did happen with earlier AI technologies, like deep learning. With gen AI, China has been in catch-up mode. And 01.AI is the first company in China that is catching up with GPT-4 performance.

Of course, OpenAI and other companies will build the next generation of technologies and it's going to be as I predicted in my book, "AI Superpowers." If there are breakthroughs, U.S. will continue to lead. If most technologies are known and it's about execution, then China, with its tenacious, hardworking culture and the tremendous market opportunity will be able to catch up.

MS. KIM: So, when do you think that moment will come, Kai-Fu?

DR. LEE: Well, taking 01.AI as an example, 14 months ago we had absolutely nothing: no code, no technology, no demo, no GPUs, and actually no people. At that time, it would be fair to say we were six or seven years behind. And at this moment in time, we are six to nine months behind. So, the catch-up has already been happening, and rather dramatically. Going forward, we hope that can continue.

MS. KIM: China probably has more AI researchers than any other country in the world, but U.S. tech leaders and politicians say that U.S. models, such as Gemini or GPT-4, are far ahead of China's. Is that true?

DR. LEE: That's patently false. You can go to any of the arenas where our model is pitted against Gemini, GPT-4, and Anthropic's Claude, and basically we're neck and neck--slightly behind their very best models, but ahead of all of their other models, and also well ahead of other top Western companies, like Cohere and Mistral. These arenas, hosted by Stanford and Berkeley, are third-party efforts that test with real users--millions of users, in the Berkeley case. So, the results are reliable, and it is absolutely the case that we have caught up.

MS. KIM: So, for the average person, can you explain the key difference between the China AI model and the U.S. AI model?

DR. LEE: The U.S. model is taking big risks, trailblazing, and inventing technologies, like the transformer architecture behind the GPT technologies. This is extremely admirable and something we respect greatly, and I believe most future breakthroughs in AI, and perhaps other sciences, will come from the U.S. China's advantage is focused on execution.

So, building a great model is not just about inventing. Once the inventing is done, others can build--companies like Mistral have done a decent job, as well. China's advantage is doing whatever it takes to catch up, and that includes collecting more data, cleansing the data, prioritizing the data, and selecting the great data to make sure the training goes well. It includes using infrastructure technologies--distributed computing and networking technologies--to get the most out of every GPU. And most importantly, China is simply better at building applications. Look at how TikTok beat Instagram, how WeChat is better than WhatsApp, and at new products like Shein and Temu: the methods of pivoting, zero to one, and the lean startup--methodologies invented in America, by the way--have been perfected in China. And when applications become the main game, China will have a major advantage.

MS. KIM: I want to shift gears to talk about the future of the internet, because artificial intelligence is starting to be incorporated into search with Google's SGE. Clearly, Google is not available in China. So, how are the Chinese players navigating integrating AI into search?

DR. LEE: There are several products already, in both the U.S. and China, that really challenge the Google hegemony. In the U.S., there's the Perplexity product. In China, there are two products already gaining a lot of usage, not developed by my company but by other companies.

And the real key point here is that search is one of the killer apps. People used to look for information, websites came back, and we clicked on them--that was the old Google model. The new model is, we ask a question and get one answer, and ChatGPT pointed all of us in that direction. What companies like Perplexity have done is integrate traditional search with the ChatGPT-like answering capability, so they can provide a targeted, comprehensive answer that is, in most cases, better than traditional search, especially for more complex queries.

And obviously, companies like Google have both search and the gen AI capabilities, so in theory, Google ought to do better. But companies like Google face a major innovator's dilemma: in order to integrate the gen AI capabilities, they will cannibalize their traditional search advertising revenue. Will Google or any company dare to do that? Also, the cost of serving up a gen AI search answer is much higher than that of traditional search. So, Google faces the issue of cannibalizing its advertising revenue while, at the same time, increasing its serving cost. That is a huge dilemma for search companies trying to embrace and extend their search with gen AI technologies, and we'll see how that unfolds.

MS. KIM: Early experiments suggest that a lot of the answers you get with AI in search aren't accurate--they hallucinate. How do we solve that problem?

DR. LEE: The best way today is actually to integrate the parametric approach taken by gen AI and the nonparametric approach taken by the search engine. If you have a search engine, or you can call another search engine's API, you'll be able to get a lot of very good answers from search, then feed those search results to gen AI and say: give an answer treating these as prominent, recent, non-hallucinated, relatively factual content. Gen AI will then integrate what the search engine returns with its own language model and knowledge base. This technology is called RAG, or retrieval-augmented generation, and it is the technology that simultaneously delivers better search, less hallucination, and increased recency. Companies like Perplexity have done that.

In our products for bringing about China's ChatGPT moment, we also use RAG. Our product, called "Wanzhi," currently uses RAG to call a search engine API, integrated with our own GPT-4-level model called Yi-Large. So, it's really the best of both worlds: more directly targeted than search engines, and more recent and less hallucinatory than standard gen AI model output.
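[Editor's note: For readers who want to see the shape of the pipeline Dr. Lee describes, here is a minimal Python sketch of retrieval-augmented generation. The web_search and generate functions are hypothetical placeholders standing in for a real search API and a Yi-Large-style model endpoint; this is not 01.AI's actual implementation.]

def web_search(query: str, k: int = 5) -> list[str]:
    """Hypothetical search-engine API returning the top-k result snippets."""
    raise NotImplementedError("swap in a real search API here")

def generate(prompt: str) -> str:
    """Hypothetical call to a large language model."""
    raise NotImplementedError("swap in a real model API here")

def rag_answer(question: str) -> str:
    # Nonparametric step: fetch fresh, factual snippets from search.
    snippets = web_search(question)
    context = "\n".join(f"- {s}" for s in snippets)
    # Parametric step: have the model integrate the snippets with its own
    # knowledge, treating them as prominent, recent, factual content.
    prompt = (
        "Answer the question, treating the search results below as "
        "prominent, recent, factual context.\n"
        f"Search results:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return generate(prompt)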

MS. KIM: When we look out into the future and look at the transformational aspects of AI, what's the right timeline?

DR. LEE: Well, it's happening right now. With any new technology, we have to deal with issues that didn't exist before. With electricity, we had electrocution and had to invent circuit breakers. With the internet, we had to invent antivirus software to prevent PCs from becoming infected.

Similarly, everyone is working really hard to deal with major shortcomings of gen AI, such as hallucination and recency. As I mentioned, RAG is, to a large extent, that circuit breaker for gen AI, and we're excited to see this technology getting adopted globally.

Looking forward, all of us working on gen AI believe in what's called the scaling law, which means that just by having more data and more GPUs, gen AI will automatically get smarter, with some tweaking. So, we can expect gen AI every year to be much, much smarter than the year before. Look at the jump from GPT-3.5 to GPT-4--it's huge--and I'm sure OpenAI's GPT-5 will be another jump.

At the same time, there is dramatic cost reduction. If you look at the GPT-3.5 and GPT-4 APIs, their costs have come down about ten times a year, and increased competition and smarter infrastructure technologies are pushing that cost down further.

Our recently launched Yi-Large model is roughly comparable with GPT-4 in performance, but at only one-quarter the cost. So, with the problems being fixed by technologists, technologies rapidly improving due to the scaling law, and inference cost coming down greatly, we certainly expect to see many great apps in the next year--and even more in two years, because inference cost will come down ten times in one year and a hundred times in two years.
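[Editor's note: As a back-of-the-envelope check of those figures, a tenfold annual cost reduction compounds to a hundredfold over two years. The starting price in this sketch is an illustrative assumption, not a quoted number.]

start_cost = 10.0       # hypothetical $ per million tokens today
annual_factor = 10.0    # assumed 10x cost reduction per year

for years in (1, 2):
    projected = start_cost / annual_factor ** years
    print(f"after {years} year(s): ${projected:.2f} per million tokens")
# after 1 year(s): $1.00 per million tokens
# after 2 year(s): $0.10 per million tokens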

Another way to look at it: if someone last year needed GPT-4 to build an application but the APIs were too expensive, now that the cost is more than ten times lower, it's no longer expensive. Rarely do we have such technology improvement and cost reduction happening and multiplying at the same time, and that's what we're looking forward to.

So, to answer your question, I think in one-and-a-half to two years we will see gen AI blossom in every application. Every existing app will be rewritten with gen AI inside, and there will be many new apps that couldn't be done before that will now be done, creating really amazing technologies that people will embrace. ChatGPT is really just the beginning. There will be many, many more in the next year-and-a-half to two years.

MS. KIM: Kai-Fu, you seem to be very optimistic about the outlook but I'm just wondering if there's anything that really worries you about AI.

DR. LEE: Certainly. I think there are many existing externalities. Take hallucination: the field is doing a good job reducing it, but there will still be hallucination. And as a raw technology, it can be put to good use and bad use. There will be people who use it for false advertising and for misleading people, especially at times of elections. It can be used by bad actors--terrorists, for example--to learn how to build biological weapons, nuclear weapons, et cetera; as the model gets smarter and smarter, it will teach people who want to do bad things how to do them. That kind of harm is something a lot of AI researchers are quite concerned about. I feel these problems need to be addressed by a combination of new technologies that reduce and even prevent harm. But regulations will also be needed; otherwise, this amazing technology in the hands of bad people can bring unprecedented harm to humanity.

MS. KIM: Well, it's been really fascinating to hear about the latest developments in AI in China. Thank you so much for joining us, Kai-Fu Lee.

DR. LEE: Thank you.

The Global Balance of Power

MS. KIM: That was my interview with Kai-Fu Lee in Beijing about the latest developments in China.

Next, we're going to be continuing the conversation with a roundtable with my Post colleagues about how other countries are racing ahead in AI.

Gerrit De Vynck is a San Francisco-based reporter writing about artificial intelligence. And Eva Dou is a tech policy reporter based in D.C.

Gerrit, Eva, welcome.

MR. DE VYNCK: Thanks.

MS. DOU: Thanks for having us here.

MS. KIM: So, Eva, we just heard from Kai-Fu about how the U.S. is still ahead in the AI race. But China is not far behind. You spent nearly a decade in the Greater China region watching tech developments there. Give us a reality check. How is China doing?

MS. DOU: Yes, thanks so much, Yun-Hee. If we go back a few years, 2017 was kind of the key year in China's AI development. Think about what was happening then: Google's DeepMind had just rolled out something called AlphaGo, which could beat the world's best players at the Chinese strategy game Go, one of the world's hardest games to play. To the general public this was a totally obscure thing, but to people in the industry, it showed that the technology had reached another level of sophistication.

And what we saw in China in 2017 is they put out a national plan--sort of similar to Biden's recent executive order--where they laid out a timeline: by the year 2025, next year, China's goal is to make major breakthroughs in AI, and by the year 2030, its goal is to be a world leader in AI, if not the world leader in AI.

When China put that plan out there, it didn't get a lot of public attention at the time, but policymakers took notice. And what we've seen since then is a lot of effort by the U.S. government to flatten China's curve--to make sure that doesn't happen, that China is not the AI leader by 2030.

And as your interview with Kai-Fu showed, China is trying its hardest to narrow the gap, and in some specific areas it has caught up. For example, Alibaba's large language model recently was able to perform better than Meta's in some things like math and coding.

But in general, it's probably unlikely that China is going to actually catch up to the U.S. in AI any time soon--mostly because the U.S. government is intent on making sure that doesn't happen.

MS. KIM: Right. So, Kai-Fu was a little too optimistic there.

Gerrit, you've been watching and writing very closely about the tech cold war between the U.S. and China. Kai-Fu talked a little bit about some of the things that worry him. Let's talk about the military applications of AI. What are the implications there, and how are U.S. companies preparing?

MR. DE VYNCK: Yeah, I think part of the reason behind China's interest in becoming an AI leader--and also some of the official justification from the U.S. for pretty drastic responses, such as the export restrictions Eva was talking about--comes from a concern that AI has huge implications for the battlefield, for warfare, something we're all going to hear about in more detail a little later in the programming.

But I think both countries really see that the technology, as it gets better, will become a bigger part of military planning--knowing where you need to move things before a war, to prepare for war, and during a war, at both the tactical level and the big-picture strategic level.

AI is already being used in the military for things like maintenance planning--knowing when you need to replace certain parts. Before, we had a big list: this screw would come undone after 18,000 flight hours, for example, and you really only learned that when the screw fell off and maybe the plane broke apart. Now they're applying machine learning and AI algorithms to try to predict these things and move the parts in advance--to fix logistics, essentially.
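[Editor's note: As an illustration of that shift from fixed replacement schedules to learned failure prediction, here is a toy Python sketch. The data is synthetic and the vibration feature is invented, so this shows only the shape of the idea, not any military system.]

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
hours = rng.uniform(0, 25, n)          # flight hours, in thousands
vibration = rng.normal(1.0, 0.3, n)    # hypothetical sensor reading

# Synthetic ground truth: failure risk rises with hours and vibration.
logit = hours / 4 + 3 * vibration - 8
failed = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([hours, vibration])
model = LogisticRegression().fit(X, failed)

part = [[17.5, 1.4]]                   # one in-service part: 17,500 hours
print(f"estimated failure risk: {model.predict_proba(part)[0, 1]:.0%}")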

And then I think both countries also see that, in the future, they want to have a lot more machines--airplanes, submersibles, ships--out in the field. Especially when you're looking at the Pacific, the largest possible place you could have a conflict, you're going to need thousands or tens of thousands of machines to really know what's going on and to cover that space well. And AI is integral to that, because you can't have 15,000 F-35 fighter pilots--you just can't train that many people, and you can't build that many very, very expensive planes. But you can have 15,000 cheaper, smaller drones that maybe you're not as worried about getting damaged or shot down. So both countries are really investing in this for the military applications as well.

MS. KIM: And, Eva, you've written a lot about drones and how China is preparing for this scenario. What can you tell us about developments in military AI applications in China?

MS. DOU: Yeah, well, China has been very interested, as Gerrit mentioned, in AI applications. I think that's sort of the quiet part that people don't always say aloud, and it's why governments are so interested in these technologies now: there is this mentality that we're almost entering a new Cold War, that there has been a breakdown in trust. And if you don't know exactly what your adversary is going to do, then you have to make all sorts of preparations on your own end.

MS. KIM: Gerrit, so the U.S. and China are racing ahead. But they're not the only ones trying to lead in AI; countries like France and the UK are also vying to be the top leader. Talk to us a little bit about the developments in Europe. Have the tough regulations there hurt some of the companies' ability to innovate [unclear]?

MR. DE VYNCK: Yeah, I mean, I think there's definitely this idea that Europe has really strict regulations and that's one of the reasons why it doesn't necessarily have, you know, a Google or an Amazon--or, in the case of China, an Alibaba--that sort of dominates the world with its power.

You know, the AI regulation specifically in Europe is quite new, and it's still being sorted out. So, I don't think we can necessarily say that strict AI regulation is holding back Europe's AI industry. I think there are bigger economic or commercial reasons behind that.

And also, the fact that the big tech companies have really locked down the market for so many years and made it difficult for new startups, whether they're in Europe or here in the U.S. to come up and break their power.

But, you know, one company in France, Mistral AI, is really interesting. They've claimed to be able to come up with really great AI models that rival some of the very expensive ones being built by U.S. companies, for a lot less money. They're also now raising serious amounts of money, and they have the support of European governments, which are very interested, both politically and economically, in having their own national champions. So I do think U.S. companies shouldn't count out European competition at all.

MS. KIM: And, Eva, I mean, Washington has been very wary about China's tech ambitions, but it hasn't necessarily been very fast in regulating the AI industry. So, what do you foresee on the regulation front in the U.S. going forward?

MS. DOU: Well, this is one of the few areas of policy in D.C. right now where there is a bipartisan consensus, so it's kind of interesting that in an election year we can predict where policy is going to be a year from now, regardless of what happens in this election. The two parties' stances differ in some ways, but in general, both agree that this rivalry with China is going to be a major part of foreign policy going forward for at least some years, maybe even decades. And that means probably continued restrictions on exporting the most advanced U.S. technologies, such as advanced semiconductors and AI algorithms, to China.

MS. KIM: We've written about this already, but Silicon Valley is courting a lot of AI investment from the Middle East, and the Biden administration is pushing this agenda. What's the agenda there?

MS. DOU: Yeah, that is also an interesting part of this U.S.-China rivalry: it spreads beyond the two countries, and we are seeing it play out in a number of regions of the world--the Middle East being one of them, which our colleagues recently wrote about. On the back end of these AI systems is an enormous amount of data, and that's incredibly valuable to governments and to companies. So there's immense competition over which companies, of which nationalities, are involved in these projects.

And, you know, U.S. companies like IBM have been competing with Chinese companies like Huawei for years, and that competition is continuing into the new AI generation of technology.

MS. KIM: Final question. In a minute or less, what is the biggest unanswered question for you about AI?

And let's start with Gerrit.

MR. DE VYNCK: I think, you know, Kai-Fu Lee was saying there's almost this assumption that every year, every six months, AI will just get exponentially smarter--that six months from now, OpenAI will come up with something that blows away what we have now and we'll have to reevaluate all of our assumptions.

And I'm not sure that's actually true. So I think the biggest question is whether AI will continue to advance at this rate, or whether it starts to slow down in terms of those big step changes and kind of plateaus, and we'll need some other kind of AI technology to restart that innovation.

MS. KIM: Eva.

MS. DOU: That's a great point. I would add on to that--I'm a tech policy reporter, so from the policy aspect, I'm very curious how this industrial policy swing the U.S. is taking will turn out, because for many years the U.S. took a very light regulatory hand in industries like advanced technologies, and now, in competition with China, it's taking more control in areas like AI. So it'll be interesting to see how that turns out.

MS. KIM: Well, we'll definitely continue to follow this story very closely. I'm afraid we're out of time. Gerrit, Eva, thank you so much for joining us today.

MS. KIM: Please stay with us. My colleague Cat Zakrzewski will be out in a moment with journalist and Nobel Peace laureate Maria Ressa after this. You won't want to miss this conversation.

Democracy on the Line

MS. ZAKRZEWSKI: Well, welcome back to Post Live. I'm Cat Zakrzewski, national tech policy reporter here at The Washington Post, and my guest today needs little introduction, Maria Ressa, a Nobel Prize winner who actually flew in today from the Philippines to join us.

Thank you so much for being here, Maria.

MS. RESSA: No, thanks for having me. Thank you for coming.

MS. ZAKRZEWSKI: And, Maria, I want to start with your comments at the end of that video, that, quote, "If we don't have integrity of facts, we cannot have integrity of elections." You've warned the world will know whether democracy, quote, "lives or dies by the end of this year." Help set the stage for us. What's at stake?

MS. RESSA: First, journalism, facts, right? With all of the big tech changes that have happened, digital news may disappear, may not survive the next year. That's the first.

The second thing is generative AI, and I was sitting in the audience listening to everyone else. Please understand: you're listening to people who not only created it but want you to use it. Right? It's a sales pitch. It is hype. But let's set generative AI aside for later, and let me answer your question directly.

As of January this year, V-Dem out of Sweden said that 71 percent of the world is now under authoritarian rule. We've seen that increase as the attacks on journalists increased--hand in hand, right? The quality of democracy has gone down as attacks on journalism have increased, and journalists have had to sacrifice more to keep giving you facts to hold power to account.

The biggest thing with generative AI is that we will have to deal not only with its new harms but also, still, with the harms of social media. Let's be clear what that is: it is the new distribution system for every piece of information, and by design--this is an MIT study from 2018--social media spreads lies at least six times faster than facts. And I am sorry to bring you to X, but I'll tweet that same study later on.

And our data in the Philippines shows that as of 2017, if you lace it with fear, anger, and hate, it spreads even faster. That's like telling your child, "Lie. I'm going to reward you. I'm going to give you ice cream each time you lie." Carry that through the years, and you're going to have an adult who's gotten overweight because they keep eating ice cream--and also somebody who has absolutely no integrity. It is the demise of democracy, and that's the last phrase you heard me say: "no facts, no truth, no trust." If you don't know what the facts are, how are you going to vote? Does this technology take away your agency? If the technology has hacked our biology, how are you going to choose? That's the first: social media, our first human contact with artificial intelligence--which, by the way, is neither artificial nor intelligent.

This is a body of study that began 70 years ago. There are many different ways you can use machine learning or artificial intelligence, but that phrase itself was coined in 1956 as a marketing phrase. It's to market to you, to make you use it. So that's that, I would say.

But last part in this is generative AI is now going to take us to enshittification, the enshittification of the internet.

MS. RESSA: And I'm going to footnote that word to Cory Doctorow. But what he meant--and this is an academic study that came out in January this year--is this: some of the folks here said that with generative AI you can create content. I cringe, because that means you can create crap really quickly, and then it's going to come to you. And you won't be able to tell the difference between quality and crap.

The enshittification is this: as of January this year, an academic study said 57.1 percent of the content on the internet is low quality--derivations of what The Washington Post would do, what I would do, or bad translations. And that's before all the deepfakes started taking off.

Let me shut up because I sound like doomsday.

MS. ZAKRZEWSKI: No, I mean, I'm going to ask you a question that might lead to more doomsday answers right now.

MS. RESSA: Uh-oh. Okay.

MS. ZAKRZEWSKI: But I want to ask you--I mean, we're about halfway through this crucial year for democracies around the world. We're still a few months away from our election here in the United States, but when you look at the elections that have already happened this year in India, Taiwan, and most recently the EU, what can we learn from them about what to expect here from AI and social media?

MS. RESSA: I think Americans are really not prepared for these elections, because you are going to walk into them using--does anyone here not have a cell phone? This monitors you, whether it's social media or Netflix or Spotify or any of these things. It turns you into numbers. And then generative AI essentially takes those numbers and tries to distill us into them. But numbers aren't who we are.

And let me go to the elections. I'm sorry. Let me go to your question. There's so much, because I made notes from everyone else who was in front of me.

One election stood out as a good election, as a surprise election where democracy won. Taiwan wasn't a surprise, because Taiwan has always been fighting China. If you're familiar with Taiwan, you know that this is a population that is always on alert. So the fact that they can fight disinformation, which even they raised the alarm on, is not a surprise. They were prepared for it.

Poland was the bright spot in elections, because you already had a rightist government. We all expected that government to win, but that government passed an abortion law that brought the women and the youth out to vote. They walked out of social media. They walked out of the virtual world, and they redefined what civic engagement is, because they felt like their backs were up against the wall.

America is so far from really feeling that. In fact, you're in denial that you're being manipulated, and what we've documented in data are seven layers of manipulation, the campus protests being only the tip of the iceberg. Gaza is a fault line. So we've had eight years of insidious manipulation.

How will you choose to vote? So please do better. Wherever you go, you will wind up taking the rest of the world with you.

MS. ZAKRZEWSKI: So for the people in this room, we have policymakers, people from the tech companies gathered here. What steps can we take to ensure we don't repeat those mistakes here in the U.S.?

MS. RESSA: Yeah. So let me talk first about the news organizations. The biggest mistake we made with social media was not building our own tech--taking the share buttons, putting them on our websites, and sharing you with the platforms. That's the social graph. That is the data that you have.

And we've seen the impact of social media that hasn't really been addressed. The EU has the most aggressive legislation on it. But I always joke that the EU won the race of the turtles.

I am the vice chair of the Internet Governance Forum at the UN. The chair of it is Vint Cerf, the Father of the Internet. He's with Google. He did the TCP/IP protocol. We fight all the time in a really nice way, but the Internet Governance Forum is one way that the internet used to be governed. It won't be that way anymore, because the software has been so important.

For civil society, for people who are on these devices: you have to understand that in the age of exponential lies, we have to redefine what civic engagement is. We have to walk out of the virtual world, where we don't even realize we're being manipulated, into the physical world, and use exactly what the tech companies have used, which is our propensity to believe our family and friends. As those stats said, people will believe their family and friends 94 percent more than they will an institution--94 percent--even if their family and friends are lying. And that's part of the problem that we have.

What we're dealing with today are the cascading failures that began with distribution--with the replacement of news organizations as gatekeepers by technology. And that hasn't changed. You've done the reporting on this: you're walking into elections with the social media companies having taken away all of the guardrails they put in place for America's 2020 elections. You will be far more vulnerable.

Cat, you tell me why they've taken it away.

MS. ZAKRZEWSKI: So I think there's been several factors. We've seen Elon Musk take over Twitter, and that had an effect in Silicon Valley--

MS. RESSA: Absolutely.

MS. ZAKRZEWSKI: --where the companies dismantled some of these systems. And there has been a conservative legal campaign against these systems that we've seen play out in Congress and in the courts, all the way to the Supreme Court.

MS. RESSA: And I've seen this in countries in the Global South. I just feel like, in many ways, Americans don't think it's happening in their country, but it is.

And, you know, here's the upside. My country went through the six years of Rodrigo Duterte, where the number of people killed in the brutal drug war was our first casualty in the battle for facts. The police claimed at first it was about 2,000; then it became 8,000, and they still say it's only about 8,000. But our Commission on Human Rights said in 2018 that it was at least 30,000. So that was our first casualty. We were in hell, and in our 2022 elections, we moved to purgatory.

I'm just so worried America is going to hell.

MS. ZAKRZEWSKI: And just to help our audience understand the stakes, I want to take a moment here to show an example of--

MS. RESSA: Sure.

MS. ZAKRZEWSKI: --you know, how a deepfake could be used to misguide voters in elections taking place around the world this year. Let's turn to the video.

MS. ZAKRZEWSKI: And this is an example that shows how good the technology has gotten--it's almost impossible to separate fact from fiction online. This is a deepfake of Donald Trump. It's difficult to see how much reach this video has had online or what impact it might have had on Trump's favorability abroad. But in this era, when it's possible to cheaply and quickly make a video like that one, what should people do? How should people sift through what they're seeing online?

MS. RESSA: I think there are three things that need to happen, and Dmitry Muratov and I rolled them out. Dmitry is a Russian journalist who, with me in 2021, received the Nobel Peace Prize. He's been forced to shut down his news organization, and most of his journalists are now living in exile.

In 2022, we came out with a 10-point action plan, and this pulls it up to a high level. Three hundred Nobel Prize winners, civil society groups, journalists, and people working in the information ecosystem have signed on to it.

It distills to just three points. The first is to stop surveillance for profit. Everything you have heard today is built on surveillance capitalism, which we didn't even really name until 2019, when Shoshana Zuboff, the Harvard emeritus professor, came out with a 750-page book on surveillance capitalism, the new business model the tech companies were using. That means your data, your privacy, has been taken--the companies will tell you they've created models of each person on their platform. I just say we've been cloned without our permission.

And then AI comes in and takes all of our clones, and that's the motherlode database for microtargeting. Microtargeting is not advertising in the old sense. Microtargeting is finding your weakest moment for a message and then selling that to a company or a country. So that's the first. That's still social media. That's still distribution.

And now you have to ask me the question again because I forgot where I was headed.

MS. ZAKRZEWSKI: I just think, you know, in this era where these deep fakes are so cheap and easy--

MS. RESSA: Oh, what do we do? Yes, yes.

MS. ZAKRZEWSKI: --how can we separate fact from fiction online?

MS. RESSA: Actually, you can't.

MS. RESSA: This is the problem. And, you know, I know because I have several deepfakes also. My first deepfake was in March last year, and it came out of Russia--it was a Russian advertising system that produced it. It was me selling Bitcoin.

MS. RESSA: So it sounded like me. It was wagging its mouth like me, but it came from Russia. And the distribution was on Facebook, but they used a credit card to sell it on Bing. That was interesting to me. This is where it goes hand in hand.

And sorry, you used Donald Trump as the example, but let me talk about business and what happened at a Hong Kong company. A man at a Hong Kong company was asked by his CFO, who was supposedly in London, to wire $25 million to an account there. He said okay. They had a Zoom meeting with many different people, including his CFO, and after that meeting, he wired the $25 million. Every single person in that Zoom meeting with him was a deepfake. I'll tweet that story too. Right?

So you can't tell the difference between fact and fiction, and this is part of the reason. So let me say the three things. Stop surveillance for profit. Stop coded bias--you heard a little bit about coded bias in the medical panel, but not enough. Coded bias means that if you're a woman or LGBTQ+, if you are brown or Black or from the Global South, if you are marginalized in the real world, you're further marginalized online. The code that is exported to the rest of the world has these biases built in.

And then the third one--sorry, I keep giving you bad news--is journalism as an antidote to tyranny, because those first two have been used and exploited by illiberal forces that, without you knowing it, have been elected democratically. Seventy-one percent.

MS. ZAKRZEWSKI: And I want to come back to that question of journalism. But first, I quickly want to ask, I mean, as someone who's been a target of so many online attacks, I think if I saw a deepfake of you selling crypto, I would maybe be a little skeptical, if that came up in my news feed. But, you know, given that these are happening more frequently--

MS. RESSA: Yes.

MS. ZAKRZEWSKI: --and increasing in sophistication, are there steps that you're personally taking to protect yourself in the era of deepfakes?

MS. RESSA: You know, the hard part--and the worst that I've dealt with--was an average of 90 hate messages per hour. Ninety, nine zero. Right? And I cringe every time I hear someone say, yes, but this is free speech. Please know the difference. This is not free speech. This is free speech being used to stifle free speech. It is being used to silence. We just did a whole bunch of reports on gendered disinformation. We came out with a study in Brazil, which is also coming out of hell into purgatory--they went from Bolsonaro to Lula. Gendered disinformation there is taking women journalists, women activists, and women politicians and pounding them into silence. Women politicians all around the world are opting out of the public sphere because they have kids. They don't want to be doxed.

Gendered dehumanization of women is off the scale. I mean, even the EU, Věra Jourová, who's probably the most powerful figure globally in terms of regulating big tech, has already warned that women are getting forced out.

Sorry. Let me go back to your question. Please remind me again.

MS. ZAKRZEWSKI: So I was just wondering if there's any steps that you personally have taken--

MS. RESSA: Are doing. Yeah, yeah.

MS. ZAKRZEWSKI: --to protect yourself.

MS. RESSA: I keep talking to you. You know, in this particular year--we started looking at 2024 as early as 2020, because I come from a country that elected Duterte democratically, that elected Ferdinand Marcos. Milan Kundera said, "The struggle of man against power is the struggle of memory against forgetting," and we elected the only son and namesake of Ferdinand Marcos, our dictator, whom we ousted in a people power revolt in 1986. We elected his only son and namesake, and thankfully, we're still in purgatory. Maybe it's that the baseline set by what preceded him was just so bad. But I would not wish it on anyone else.

What do I do for myself? During that time period, you have no choice but to be a punching bag. That's the way I felt.

In order to be here today, I've had to ask my courts for approval. The Supreme Court of the Philippines has to know my flights, my hotels. So I haven't gained all my rights back. But warning: you can lose your rights like this. And what do we do? Move into the real world. Understand that the hype you're being fed is hype, and organize in the real world. If we miss 2024, the tilt of the world will take a decade to return. Minimum. In the Philippines, we knew from the first Marcos: our police and military were radicalized under that dictatorship, and it took a decade after 1986 to bring human rights back. I don't want to see that happen to you. Yeah.

MS. ZAKRZEWSKI: And throughout this conversation, you've pointed to journalism as a key--

MS. ZAKRZEWSKI: --pillar in this crisis. I mean, I want to better understand. You were showing me backstage a little bit of the work that you're doing with AI at Rappler.

MS. RESSA: Yeah.

MS. ZAKRZEWSKI: But given these challenges, what steps can media companies take to promote truth in this environment?

MS. RESSA: First, understand we're all on the same side--the side of facts. Why are we not collaborating, and actually demanding better, the same way the WGA demanded better as soon as generative AI came out? We're each still in the old world, thinking we still have power--a vestigial tail. We don't. We must stand on the side of facts.

But let me quickly tell you why you will get less news in your feed. Starting in 2018, Meta, the world's largest distributor of news--Mark Zuckerberg said it--began to choke traffic to news sites. The drop wasn't so steep in 2018, even after the Cambridge Analytica scandal--and, right, Americans had the most compromised accounts.

But the country with the second-highest number of compromised accounts was the Philippines, because we, your former colony, tend to be the testing ground for you. And then, after that, generative AI came in. Right?

So sorry about this; I'm going to come back to your question of what we are going to do about it. I think we have to move into the real world. You have to organize, and understand and accept that you are being manipulated, that this technology is treating us like Pavlov's dogs.

Having rolled out the tech ourselves: we were one of the first globally to use generative AI, because we took a foundational model and trained it on our data, and now if you go on Rappler, every single story will give you a three-point bullet summary. We rolled it out in June last year and then announced it near the end of the year, after we perfected it.

But generative AI as a technology is a probabilistic machine. It is not anchored on facts. Every single thing it spews out is based on the data you have fed it. So understand that truth and facts are not going to be in the new technology that you have, but they can program it to ask how you feel and make it seem human. China, for example, has now rolled out a tech that pretends to be a dead relative--someone you love who's dead--to help you cope, supposedly. But that again leads us to this fantasy-versus-reality question. Last thing I'll say, because there's so much I can say about the technology: we are using the chatbot. We will. But before we roll out that chatbot, we are going to anchor it in a very strong ontology and put GraphRAG on it--something all these big tech companies should have done in the first place, but don't, because they can get away with having us do it, with the harms falling on us. And every single harm that happens will have to be paid for by whatever government is in place.
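[Editor's note: To make the "anchor it in an ontology" idea concrete, here is a loose Python sketch of graph-grounded generation, where facts come only from a curated knowledge graph and the model is told to refuse anything outside it. The tiny graph and the generate placeholder are hypothetical; production GraphRAG systems are far more elaborate.]

# Curated knowledge graph: subject -> list of (relation, object) edges.
KNOWLEDGE_GRAPH = {
    "2022 Philippine election": [("winner", "Ferdinand Marcos Jr."),
                                 ("date", "May 9, 2022")],
}

def retrieve_facts(query: str) -> list[str]:
    """Pull edges whose subject appears in the query (toy matching)."""
    facts = []
    for subject, edges in KNOWLEDGE_GRAPH.items():
        if subject.lower() in query.lower():
            facts += [f"{subject} -- {rel} --> {obj}" for rel, obj in edges]
    return facts

def generate(prompt: str) -> str:
    """Hypothetical call to a language model API."""
    raise NotImplementedError("swap in a real model endpoint here")

def grounded_answer(query: str) -> str:
    facts = retrieve_facts(query)
    prompt = ("Answer using ONLY the verified facts below; otherwise say "
              "'unknown'.\n" + "\n".join(facts) + f"\n\nQ: {query}\nA:")
    return generate(prompt)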

Anyway, to go back to what journalists can do: number one, there's less news in your feed, because Meta, as of January 2024, decided to choke traffic to news websites in particular. And what does that mean? A drop of 50 percent to 85 percent globally for news.

The second is search. There are only three ways a website gets traffic, right? Social, search, and direct. Search: last March, SGE--search generative experience, generative AI on search--was rolled out as an experiment in 125 countries. The U.S. is one of them; the Philippines is one of them. Search generative experience was built on search, which means if The Washington Post or I opt out of it, we will disappear from the internet. So we don't have the option to opt out.

Once search generative experience was put in place--it has a new name now, and it was rolled out just in May--you will get a chatbot to spew what our websites tell you. There's a probability it will be wrong, but our links will no longer be there. So our traffic will decrease even more.

You know, this year there's been a--I can't even use the word "bloodbath" today. There have been so many journalists laid off, so less news in your feed but more outrage, more polarization. Polarization is an algorithmic choice on the distribution platforms.

I sound really--so what do we do? Right? Let me tell you what worked in the Philippines, even though we elected Ferdinand Marcos--my new president, President Marcos. I have to be happier, right? Because we're now in purgatory. So, yeah, I went from 11 court cases to now only two. That's not bad in eight years.

What we did was build a four-layer pyramid, a whole-of-society approach--what we called an influencer marketing campaign for facts--in time for our elections. Within three months, we had 60 news organizations at the bottom of the pyramid.

Fact checks. Fact checks don't spread as much, right? Because lies spread at least six times faster. But what we did was consolidate our data into a data feed that we processed all the way up the four layers of the pyramid. So, fact checks on the first layer.

The second layer we called mesh distribution: 116 different civil society groups. Human rights organizations and the church came in; business finally came in. This is in 2022. Those 116 groups had thousands, tens of thousands of people, and every day we would point out: here are five fact checks you can share. They were asked to share the fact checks with emotion, but not to use anger. And what we found from that study was that inspiration spreads as fast as anger. Inspiration.

The third layer were the academics--six different--sorry--eight universities--because we punched the data up to them, and instead of first doing peer review for their academic journals, we asked them to rotate every week in telling the public how we were being manipulated.

And the last layer, layer four, were legal groups left, right, and center, and they protected the four-layer pyramid.

Before our elections, we took over the center of the information ecosystem with facts. It wasn't enough to, you know, stop the election result--not that we wanted to do that, because it was never about taking a political position so much as making sure you weren't being manipulated, that you had the facts to be able to vote.

MS. ZAKRZEWSKI: And, Maria, I want to leave our conversation on that inspirational message. Thank you so much for being here with us today at Washington Post Live.

MS. RESSA: Thank you. Thank you. Please vote.

MS. ZAKRZEWSKI: I really appreciate it. Thank you.

The Future of AI is Open

MS. MONTGOMERY: Hi, everybody. Welcome to The Washington Post's "The Futurist Summit--The Age of AI," and IBM is proud to be a sponsor of this forum. And we're glad to be able to share our thoughts on one of the most pressing issues of our generation, the future of AI and the ecosystem necessary to ensure the safe, responsible, and secure evolution of this promising technology.

So during the next 10 minutes, Rebecca Finlay and I are going to have a little discussion about open innovation and why it's the best way to promote AI safety, foster healthy competition in the market, and protect security interests.

So Rebecca, let's start at the beginning. Do you mind sharing with the audience what open innovation is and how you have approached it at the Partnership on AI? And for those who might not be familiar with the organization, maybe share a few words about what the Partnership on AI is.

MS. FINLAY: With pleasure, and thank you so much for inviting me to be here. I'm delighted to be with you today.

So open innovation is, in the first instance, about choice: if you're a company thinking about how to deploy AI into your systems--to better serve your customers or to better predict outcomes for the clients you serve--you may be choosing between an open-source and an open-access model, and considering how that choice can help you manage your data and meet the needs you're identifying for the technology.

But when I think about open innovation, I think about the whole innovation ecosystem and the need for openness across the ecosystem. So we're going to talk, I think, a little bit about that today, but one example of open innovation in my mind is the Partnership on AI, which in some respects is an experiment.

It started in 2016. IBM, together with five other large technology companies, came together with civil society representatives--organizations like the ACLU--and with philanthropy--organizations like the MacArthur Foundation and others--to identify the concerns and potential harms of AI, and to create space for a real diversity of perspectives to come together and put forward solutions that will protect people and ensure that we're developing AI for equity, justice, and shared prosperity.

MS. MONTGOMERY: Yeah. I mean, from IBM's perspective, we're proud to have been a founding member and to still be working actively with the Partnership on AI--such an important organization because of its multistakeholder approach. And, you know, we've long been supportive of open source: we were one of the early contributors to Linux, we acquired Red Hat, and we just formed the AI Alliance, which is really focused on open innovation and open ecosystems. So we're completely aligned with that point of view. I think it's a really important conversation right now.

And we did some crowdsourcing on LinkedIn for this discussion and asked folks what they wanted us to talk about, and a question that came up was the potential for misuse. This is an often-cited concern when it comes to open, and the question I have for you is: what protections do you think we need to put in place to protect against the misuse of AI for surveillance, misinformation, manipulation, et cetera? And what are the misconceptions about open around that?

MS. FINLAY: Well, first and foremost, I think Maria Ressa did a fabulous job of giving us a really good sense of the ways these very large models can be misused when it comes to mis- and disinformation and otherwise, and that is clearly one of the things we need to protect against.

One of the misconceptions in this space, I think, is that there seems to be a binary choice between a closed type of model--these very, very large models that can be provided but are closed in terms of how they perform and where the data comes from--and a more open model. From my perspective, it is clear that both, and everything in between, need to be safe and responsible if they are going to be deployed out into the world.

And it's not that closed is necessarily more safe or open is necessarily less safe, and it's also not true that there are just two choices. There's actually a whole spectrum of ways in which companies are going to make decisions about how they deploy these models. And again, our view is that they all need to be safe.

So in terms of protections, yeah, regulation is core. You've got to ensure that we have good and clear regulation in place. But there are also lots of things that companies can be doing, and the first step is transparency, right? Transparency is the first step to accountability, so really thinking about being able to disclose and document where your data is coming from, how it is being used, whether it's in the training of the model or once the model is deployed, thinking about privacy protections in particular, making sure that your data is secure. Those are all things that companies can be doing and moving forward with very clearly as they begin to deploy this technology.
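[Editor's note: One concrete form that the kind of disclosure Finlay describes can take is a machine-readable model card recording data provenance and permitted uses. The fields in this Python sketch are illustrative assumptions, not any standard's required schema.]

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    data_sources: list[str]        # where the training data came from
    intended_uses: list[str]
    prohibited_uses: list[str]
    privacy_measures: list[str] = field(default_factory=list)

card = ModelCard(
    name="example-summarizer-v1",  # hypothetical model
    data_sources=["licensed news archive", "public-domain corpora"],
    intended_uses=["article summarization"],
    prohibited_uses=["surveillance", "targeted political persuasion"],
    privacy_measures=["PII scrubbing before training"],
)
print(card)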

But it's not just companies, right? There are also the national security considerations, and this whole question of making sure the models are safe while also driving forward economic innovation. I know you've been doing some work in that area.

MS. MONTGOMERY: Yeah, we have, and we absolutely think that the best way to protect national security is to remain a leader in AI, to foster an innovative environment, and to foster and grow talent. The way to do that is to have communities where development can happen--open communities and open innovation. Long-established experience shows that transparency helps to drive trust. You see it in the security space, around bug bounty programs and the discovery of vulnerabilities, and you see it in the research communities.

A lot of the safety research that's happening is happening in academia. Stanford, for example, pioneered a novel technique to help uncover AI manipulation--a new way of preventing it.

MS. FINLAY: Yeah.

MS. MONTGOMERY: So a lot of that is happening in these communities, and the more eyes we have on these models and the more people we have contributing, the better. That, I think, is definitely a path to security.

I think as you said, transparency is so critical, right? And I feel like blind trust in the case of AI is a really dangerous place to be.

MS. FINLAY: Mm-hmm, yeah. I couldn't agree more.

And I think that really comes back to the question of how we widen the diversity of voices engaged in this work. When we think about open innovation, how are we ensuring that the teachers who know what they need in the classroom, and the health care practitioners who know what they need in the health care setting, are working directly with model developers and deployers to better understand how these systems can work for them? We need a diversity of perspectives around the table to move this forward.

And I know you've also been working on some other policy areas as well related to AI here in D.C. and around the world.

MS. MONTGOMERY: Yeah. I mean, we have been advocating for risk-based regulation for a long time now--regulation of AI in context, focused on risk, not regulation of the technology itself. We've been advocating for that actively for four-plus years now, and we continue to see the need to advocate and put guardrails in place.

We believe that accountability is really critical. Companies and others deploying AI technology should be held accountable for the AI they're putting out there, particularly in cases where the AI could have an impact on somebody's fundamental rights, their health, and that type of thing. So we've been, as you mentioned, actively advocating for this for a while now.

And I guess one other area is opportunity, right?

MS. FINLAY: Mm-hmm.

MS. MONTGOMERY: So economic opportunity from AI, where do you see the possibilities there? Can you elaborate a little bit on that?

MS. FINLAY: Well, sure. I think everybody's seen the surveys: every CEO of every company is anticipating and trying to think through how this technology is going to advance their productivity and efficiency, yes, but also how their workers are performing and how workers perceive job satisfaction and job quality. These are all questions that are really pertinent for companies to think about as they expand. And from a policy perspective, it's not just about regulation. Yes, that's important, but it's also about innovation and industrial policy.

I had the opportunity earlier this week, with the board of the Partnership on AI, to be up at the IBM Research lab, which is this great mid-century technology research hub, and there's all sorts of work happening up there that comes from an open-publication-of-research perspective, an open-source-models perspective. The government can really take a role in ensuring that we're incentivizing a good publicly funded research system.

I like to say that many of the components in the iPhone and the Google AdWords algorithm have one thing in common: they began as publicly funded research projects with publicly released research papers. That's what can really drive innovation upstream that leads to innovation downstream.

MS. MONTGOMERY: Yeah, I completely agree. And when it comes to advocacy and the role that legislation and policymaking can play, we've been similarly pushing for funding of things like the National AI Research Resource, funding the research there, and funding the NIST AI Safety Institute and contributing to the real evidence-based policymaking that's going to happen in that institute.

And I mentioned the point of regulating risk, not technology. I think it's really critical right now that we shy away from really prescriptive licensing regimes and that type of thing, which will stifle innovation. We keep coming back to this point: the more bodies, the more eyes, and the more multistakeholder approaches you have to AI development, the better, safer, and more innovative the technology is going to be. We're establishing the ecosystem and the marketplace for the future now, so it's really important that we think about what we want our future to look like, so it's one where everyone can contribute.

MS. FINLAY: Yeah, just could not agree more. So crucial to have those voices as part of this process and really thinking about how AI works for people, not on people.

MS. MONTGOMERY: Yes, exactly.

So thank you, Rebecca, for joining me today. That's all the time we have, and this has been a great conversation.

I started working in this space, as I said, about four years ago. Since last year, I've testified at a Senate Judiciary hearing that kick-started a lot of these conversations around generative AI and the ChatGPT moment, and I've been doing a lot of traveling and speaking with governments around the world, at safety summits and the like. I'm really encouraged by the conversations, including the ones we've had here today. I think there's been a lot of progress made in understanding the risk, helping to address it, and helping to establish the future in a responsible way. So I'm glad we had this conversation today.

Thank you for joining us, and I'm going to hand it back over to The Washington Post.

A New Era of Warfare

MR. IGNATIUS: So--

GEN. MILLEY: Who was that guy?

MR. IGNATIUS: --I'm David Ignatius--

MR. IGNATIUS: --a columnist for The Washington Post. It's my pleasure to be joined by General Mark Milley, former Chairman of the Joint Chiefs of Staff, who looks a lot younger and a lot more relaxed than he did--

GEN. MILLEY: Oh, please.

MR. IGNATIUS: --as Chairman. So, before we came out here, General Milley, you said--

GEN. MILLEY: Same eyebrows; same bags in the eyes.

MR. IGNATIUS: General Milley said, David, take off your tie.

GEN. MILLEY: Yes.

MR. IGNATIUS: So, it's a rare appearance for me.

GEN. MILLEY: It's a tech conference. No one in the tech world wears ties, David.

MR. IGNATIUS: I get it. So, everybody knows that General Milley is the tough-talking former Chairman of the Joint Chiefs of Staff. What is less well-known is that he has been a champion of innovation at the Pentagon for years. When he was Chief of Staff of the Army, he decided to turn to our best tech companies to help him make sense of Army logistics. It was an early start to something that has really paid dividends for the Army and the Pentagon.

So, I just wanted to say, General Milley, welcome; glad to have you here.

GEN. MILLEY: Thanks, David. Appreciate it.

MR. IGNATIUS: So, I want to start with the big picture in terms of our subject of the role of AI in national security.

GEN. MILLEY: Have you noticed it's impossible to see who's in this crowd?

MR. IGNATIUS: So, I'm telling you, sir, these are all friends.

GEN. MILLEY: Is that part of the AI thing?

MR. IGNATIUS: These are all friends. They're actually virtual. They're not really here.

GEN. MILLEY: Not really here?

MR. IGNATIUS: So, shortly before you retired last year, you wrote an article in Joint Force Quarterly--I know everybody in this audience reads that regularly--called "Strategic Inflection Point."

GEN. MILLEY: Right.

MR. IGNATIUS: And you wrote in that article, "We are witnessing an unprecedented, fundamental change in the character of war."

So, I want to ask you to begin by unpacking that and talking about your theory of the case: what is happening in warfare in this moment that is so different?

GEN. MILLEY: Well, there's two terms we use in the article, and I've spoken publicly many times really over the last eight-plus years or so. One is the nature of war and the other is the character of war. The nature of war is what Clausewitz talks about when he talks about the Holy Trinity between the government and the people and its army. He talks about fear and chance and blood and violence and the idea that war is an act of politics. It is an act of imposing your political will on your opponent by the use of organized violence. That is the nature of war. And the theory of the case is that the nature of war is unlikely to change as long as human beings are actively engaged in the conduct of war.

The character of war, on the other hand, changes frequently. And the character of war is referring to the tactics, techniques, procedures, the doctrines, the weapons, the training, the leader development, where you fight, when you fight: day, night, spring, summer, that sort of thing. All of those details of war are what is typically referred to as the character of war. Now, the character of war undergoes changes, and has undergone changes historically, quite frequently.

One of the most common drivers of change in the character of war is technology. And I would submit that we are undergoing the most fundamental change in the character of war ever recorded in human history, again, this case being driven mostly by technology, although not exclusively by technology.

MR. IGNATIUS: So, let's talk about that, specifically, first in terms of offense. What are AI and other aspects of new technology doing to change the nature of offensive warfare?

And second--in some ways, to me, more interesting--how is it changing defense and what's possible in defense?

GEN. MILLEY: Well, in both offense/defense, you've got to be able to see the battlefield, to sense the battlefield, see your enemy. And Sun Tzu tells us, you know, "See your enemy; see yourself; win a thousand battles." So, you've got to be able to see your enemy, but you also have to be able to see yourself.

So, in the world of AI, your ability to use all the various sensors that are out there, everything--everybody's got an iPhone; I've got a Fitbit watch, a GPS watch. So, all of these are sensors, right; and you've got sensors in space with satellites; you've got sensors on earth; you've got electronic sensors that can pick up electronic communication and so on and so forth. So, you've got a massive number of sensors out there which are generating a massive amount of information.

What AI will do is let you absorb that information, correlate it, and turn it into useful or actionable intelligence relative to the enemy, or useful intelligence to evaluate your own force. So, you should be able to figure out how much fuel you're using and where all your soldiers are. Is this vehicle up or down from a maintenance standpoint? Is this plane fully operational or not?

All these massive amounts of data that, today, are really put into databases, much of it manually loaded, AI will be able to absorb and gather up and then spit out to you, the commander, in a useful way that will allow you to command and control a much better operation than you otherwise would have.

And that's really where AI comes in: your ability to see, sense the environment, gather that information, and turn it into useful information for the commander to make decisions. And then, you'll go through the OODA loop--the observe, orient, decide, act loop--faster relative to your enemy, and that will give you a decisive advantage.

It'll also play a big role, if you think about see and then shoot, in your ability to shoot with greater precision at range. And then, your ability to move: So, we know full well that robotics is coming at us very, very rapidly in the commercial sector; and robotics is coming at us very, very rapidly in the military sector, as well. You already see that with drone warfare, and you see a lot of that in Gaza and you see a lot of that in Ukraine and elsewhere. And those are a form of robots, but what you're going to see over the next, I don't know, 10-15 years, something like that--you're going to see sophisticated militaries, the U.S. military, the Russian military, the Chinese military, et cetera, you're going to see large components of those militaries become robotic.

So, we are on the cusp of seeing a pilotless air force, a sailorless navy, a crewless tank sort of thing. It won't be a hundred percent. You're still going to have pilots and sailors and soldiers, but you're going to see an increasing use of robots and robotic forces in the years to come, not-too-far-distant from where we are today. And you see these experiments ongoing in all of the services in the U.S. military today.

MR. IGNATIUS: So, in that world where our robots are fighting their robots, is that world going to be safer for our civilian populations, or are we going to be more at risk?

GEN. MILLEY: Well, we don't know. I would say you could make an argument either way. You could make an argument that if you have a much greater use of robots and less human tragedy, less human blood, less human death, you might actually make war more likely rather than less likely. Now, that's possible, right? So, that's one possible outcome.

And I don't know that it would be robots versus robots, necessarily; it could be robots versus humans. So, there's a lot of potential outcomes. The fact of the matter is none of us really fully comprehend what all of these technologies are going to imply in the use of--in the world of the military.

Clearly, AI has an enormous amount of value. So, if you can absorb all of the medical information in human history and then give that to a doctor to diagnose somebody who has some sort of disease, that's going to be a big advantage, and we're probably, within reason, years--maybe not decades but years--away from solving things like Alzheimer's and cancer and other diseases because of the use of AI. So, there's a lot of positive to AI. But at the same time, it is an enormously powerful technology--I would argue that AI is probably the most powerful technology ever developed by humans. There is going to be a military application--there is already a military application of artificial intelligence, but that's going to expand tremendously as we move on in time. And the nation state or non-nation state actor, terrorist organization, that optimizes the use of artificial intelligence for military purposes will have a decisive advantage in the conduct of war, and I want that nation to be the United States.

MR. IGNATIUS: Let's talk about the two terrible laboratories--I hate to use that word--but conflicts where we're watching--

GEN. MILLEY: Yeah.

MR. IGNATIUS: --this new era of warfare play out, Ukraine and Gaza--first, let me ask you about Ukraine.

One thing that's been striking to me in my visits there since the war began is that the battlefield is increasingly transparent, that both sides can see everything.

GEN. MILLEY: Sure.

MR. IGNATIUS: And that that makes the weapons they use much more vulnerable. And I'm curious whether you think that stalemate is the right way to characterize where this war is now, and whether a breakthrough, like Ukraine tried last year, a counteroffensive, to drive the Russians out, is possible in this world where you can see everything as soon as it moves and try to kill it.

GEN. MILLEY: Yeah, I don't know actually that you can see everything. So, it's not--it is not absolute in that sense, but you certainly can see much more in today's world with sensors and a variety of sensors than you ever could in human history. And that's true in Gaza; that's true in Ukraine; that's true anywhere, right?

Is it at a stalemate? You know, the Russians have suffered a tremendous amount of casualties. I think they're upwards of--they're probably close to 400,000, maybe, killed and wounded, something like that; it's a significant amount, mostly in their ground forces. They've lost a lot of tanks and mechanized infantry vehicles and other capabilities on the ground.

They have had to replace their ground forces, I think, twice over. So, he--"he," Putin--initially attacked on, I think, five or six or seven different axes of advance with about 200-, 250,000 troops in a combined arms mechanized offensive. That faltered; they didn't succeed, and we know that. And then, they got pushed back in the counteroffensives by the Ukrainians in Kharkiv and Kherson.

But in the process, they lost their first tranche of soldiers. Then, they had to open up the prisons and they used Prigozhin's group and so on. And they had to do a mobilization of several hundred thousand. They brought them into the battle over the winter, and then they got chewed up. And so, now, I think they're on their third iteration of mobilization. So, their ground forces have been decimated, really, really badly.

Now, most recently, they've run a small--the Russians have run a small offensive operation up around Kharkiv. They made some penetrations, limited penetrations, five, six, seven, eight miles, something like that, but it wasn't significant.

So, I would argue that the Russians probably--and I say probably, because all of war is probabilistic; there are no definitive outcomes you can predict with certainty--but I would say probably the Russians don't have the military capability to overrun Ukraine, which was one of their initial political objectives.

So, in that respect, the Russians aren't going to militarily resolve this thing with any kind of military decision.

The Ukrainians, on the other hand, they've also suffered tremendously. They've had 20-, 30-, 40-, maybe 50,000 innocent civilians killed. The Ukrainian military has suffered tremendously. The average age of Ukrainian soldiers has gone up. They've had to do several drafts. They've lost a lot of equipment. The offensive that they ran this past spring/summer did not achieve the objectives it was intended to achieve.

So, they've also suffered tremendously. A third of their country is no longer arable--you can't farm it--and so on. They've suffered economically. And I think maybe 20 percent of their country is still occupied by Russia.

And the Ukrainian military, brave, strong, did a great job in the defense, really held the line against the Russian offensive, but their offensive capability is limited in the sense that it is not likely that the Ukrainian military can militarily eject the 2- or 300,000 Russian troops that are still in Russian-occupied Ukraine.

So, by definition, if both of those descriptions are accurate, and I think they most likely are, then, from a purely military standpoint, you're at a military stalemate--

MR. IGNATIUS: Right.

GEN. MILLEY: --where neither side can militarily bring about decision and conclusion in the war in the near term.

Now, things can change, and war is all about politics by violent means. Are there going to be negotiations? Yes? No? What will those negotiations be about? I don't know. Will new weapons be introduced? You see that the ubiquitous nature of drone warfare has come on very, very strong in Ukraine. Will those have significant battlefield effect or not? There's a wide variety of variables out there that can change those equations.

But right now, it's at a more or less military stalemate. You'll see advances, small advances, by either side measured in miles, but you're not going to see--I don't think you'll see something decisive, not in the near term.

MR. IGNATIUS: So, you've said in the past that given the terrible cost and the sense of a stalemated battlefield, that this is a time when we should think about diplomacy. You still feel that?

GEN. MILLEY: Well, I think you should always think about diplomacy. Warfare is never conducted without an adjunct of diplomacy. There are very, very few historical cases where you don't have in parallel with the combat operations that are ongoing in the battlefields some degree of diplomatic efforts.

Even in World War II, that happened. There was diplomacy even in World War II. I mean, you saw some senior-level Nazis fly to Europe--or fly to England--to try to negotiate peace in Europe. You know, it was a failure, of course, but even in the most military of wars, World War II, where it seemed like a total war, all about the military--even there, there were back channels, sometimes; and sometimes, not-so-back-channel negotiations ongoing.

When the Nazis invaded Poland, there was the Von Ribbentrop-Molotov agreement to divide Poland between the Soviet Union and the Nazis. So, diplomacy is always ongoing, even in the conduct of war. So, I would suggest that in the conduct of the current war in Ukraine, a very, very important component of that--and you see it, by the way. You see it in the news. You see Secretary of State Blinken out there and others. You see CIA Director Burns constantly going there back-and-forth. So, you'll see negotiation of some kind, some sort of diplomatic efforts are ongoing, even in the conduct of war.

And you even see--you know, you see some indications in the media of some sort of level of effort between the Ukrainians and Russians. They've actually had some--at lower levels, they've had people meet, and that was in the media a couple of different times.

So, I would say you have to--you have to, in the conduct of war, you have to do both. Obviously, conduct military operations; but also, there always should be some diplomatic effort.

MR. IGNATIUS: Let's talk just briefly about the lessons of Gaza. There's so many tragic aspects of the Gaza war, but I want to ask you about one extraordinary moment, one night, when the Iranians launched a barrage of 100 ballistic missiles that followed a swarm of over 200 drones. And the Israelis were able, using advanced signal processing--just extraordinary technology--to essentially neuter that overwhelming barrage.

And I'd be interested in your comments about the implications of that. To me, one lesson was the power of defense, but I'm curious what you think as former Chair--

GEN. MILLEY: Yeah, I think, you know, in the history of warfare, various weapon systems swing back-and-forth between the ascent of the offensive or the ascent of the defensive in different periods of time. I think some of the capabilities of artificial intelligence in combination with some of the advanced air defense systems that we have might spell the increased power of the defense relative to ballistic missile attack, but only if you have enough of those systems, in high enough density, and if you have some early warning.

So, in that case, the attack that you're talking about--I'm not exactly sure of the precise numbers; I think it was 330-some-odd missiles--it was a combination of theater-ballistic missiles, unmanned aerial vehicles, drones, and then some that were probably not theater-ballistic missiles, some close-range missiles. But a lot of missiles. A lot of missiles, drones, unmanned aerial vehicles, and some cruise missile-type things coming at them.

But there was some early warning, and that early warning came from a variety of sources, to include the United States. We have incredible intelligence collection capabilities; we have incredible sensors. And we--and when I say "we," the Israelis, Central Command, the Brits were involved, Jordan was involved, some other countries were involved--had a bit of early warning. So, that's good. What did that mean? That meant you could prepare your defense a little more accurately. And then, once those weapon systems took off--and they took off in volley.

So, the slowest-moving weapons were the unmanned aerial vehicles. The next slowest were the land-attack cruise missiles. And the fastest were the theater-ballistic missiles. And the Iranians designed the attack so that all of these weapons would be on target at approximately the same time, and they meant to do significant lethal damage.

But as soon as those weapons took off, our sensors were able to pick them up. And then, the defense goes into readiness condition one, and you're able to figure out all the latitudes and longitudes and the axes of advance as they come in. And then, the defense is prepared, because you know where those missiles are coming in from. And then, it's a layered defense, and it was a layered defense that went all the way through Syria and Iraq and all the way back through other countries on the border of Israel, and then Israel proper.

And these defensive weapons could have various ranges and altitudes at which you could start striking the enemy's incoming missiles. And they were successful. I think they--I'm not sure, but the media reports said seven or eight of these missiles got through into Israeli territory.

I mean, you think about that. You shoot down all but 7 of 330-some-odd. That's an incredible batting average. It's unbelievable, actually. So, a bit of early warning, plus the tactical skill of the crews and the gunners taking these missiles down--it's really impressive.

MR. IGNATIUS: So, I want to ask you about something that you worked hard on from your time as Army Chief of Staff that I think is of special interest for this audience. You tried to reach out to the smartest private sector entrepreneurs. I remember you telling me that you went to see Elon Musk, if I'm remembering this right, like, 2017, something like that--a long time ago.

GEN. MILLEY: Yeah, right after becoming Chief of Staff of the Army.

MR. IGNATIUS: Chief of Staff of the Army a long time ago.

GEN. MILLEY: And many others, too, not just...

MR. IGNATIUS: So, I want to ask you about the growing role of people like Elon Musk in our national security.

I mean, the dependence of the Ukrainians on Starlink is enormous.

GEN. MILLEY: Mm-hmm.

MR. IGNATIUS: You had some conversations with Elon Musk about whether Starlink could or should be used over Crimea that were part of Walter Isaacson's book.

But in general, my question is, are you concerned that we're becoming too dependent on these private companies and entrepreneurs like Musk who could decide tomorrow, ah, I've had it with the Ukraine war. You know, China is too important to me and I'm going to pull the plug. What about that?

GEN. MILLEY: Well, I think in the United States, there was a time--maybe it's the '50s or '60s or something like that--where a lot of your innovative technologies were done actually by the government, DARPA and that sort of thing.

But at some point, I think it's in the '90s or so, the innovative engine of the United States shifts gears into the commercial sector. And the spinoff--these technologies have spinoff capabilities for military use. Now, there's some things that are still done that are unique to the government that only the military uses--you know, building cannons, for example.

But look it, a lot of weapon systems today are highly dependent on software, are highly dependent on things like artificial intelligence. And there is this, you know, team of commercial and governmental folks that work together to develop the various systems that are necessary for national security. You're not going to have a military unless you have the cooperation and the innovation from the commercial sector. It's just not going to happen.

And so, the Pentagon--and all of the services--are highly dependent upon the cooperation, the teamwork of U.S. industry. And look it, is there a chance someone would say, yeah, I'm not going to do this anymore? I suppose. But at the end of the day, they're Americans, they're patriots, and they're going to do the right thing, I think.

And they have been--my experience with industry, and this is all types of industry, from small startups all the way up to your major corporations, is that they are very cooperative with the Pentagon and, to the extent possible, they will develop the best systems available to defend the United States.

MR. IGNATIUS: I want to ask you about a subject that you and I have discussed at other public sessions, and that's the danger of escalation in the Ukraine conflict and in the general standoff we have now with Russia. Russian ships, including a nuclear submarine, just sailed into Havana.

Putin and former President Medvedev, seems like every other day, threaten the use of tactical nuclear weapons. Medvedev has just said today, following new sanctions against Russia, that basically anything is possible for us as retaliation. How do you think about that danger, that we could find ourselves in an escalatory cycle where we're at the point we remember from the Cuban missile crisis, where it's a real existential risk?

MR. IGNATIUS: Do you think we've been there over the past two years during this war--

GEN. MILLEY: Not to that level, no.

MR. IGNATIUS: So, as former chairman, how do you think about that question of the risk of escalation leading to catastrophe?

GEN. MILLEY: It's something you're constantly attuned to--the president, the secretary of defense, the chairman of the joint chiefs, all the joint chiefs, the combatant commanders. Escalation is something that you're always attuned to, and you want to try to prevent it from starting to begin with.

And if it does begin to escalate, you want to invoke a series of measures for escalation control if you're on the escalation ladder, and those measures are unique to the actual escalation that's happening at the time.

There's--I would not compare the current situation--Russian ships in Cuba--to the Cuban missile crisis. The Cuban missile crisis was 13 days where the United States and Russia went to the brink of nuclear war, and through tremendous efforts by the U.S. Government and the leadership at the time, the thing was deescalated. We're not at that level of escalation. But it's a provocative act. You know, Putin is doing it for a reason; he's being provocative for a reason. Part of that has to do, I think--I'm not sure; it's hard to read Putin's mind, I suppose--but I suspect it has something to do with the United States saying to Ukraine that you can use U.S.-produced weapons to fire into mainland Russia--you know, in the vicinity of Kharkiv, and fire into Belgorod and that sort of thing.

And I suspect that the Russians came up with some clever plan to say, okay, well, we can do something on your border, as well. Now, I might be wrong, because that U.S. action was only a couple weeks ago. These ships would have had to have a ship's plan, so maybe it's been in the works much longer than that, but it's clearly provocative. And then, the language--there have been several times throughout the war where Mr. Putin and some of his senior officials have rattled the nuclear saber or rattled other sorts of sabers, and that's what you see with Medvedev.

War--the conduct of war--is an interactive political event with the use of violence and/or coercive diplomacy and so on. And that's what you're seeing play out. But we're not--in my estimation, we're not at Cuban missile crisis level, but it's serious. It's provocative; it's serious; it needs to be controlled.

MR. IGNATIUS: So, I want to turn to our closing by speaking about your recent trip to Normandy. You and I talked about going there for the D-Day anniversary. Your dad fought in the Pacific, as mine did. Your uncle, you were telling me, was on Omaha Beach.

I want to just ask you to speak a little bit about that tradition of military service, what it means to you, and what you think it ought to mean to the country as we think about what happened in the past and where we're going in the future.

GEN. MILLEY: Well, I think--you know, in my case, I spent 44 years in uniform. My mother and father, you see there on that photo, they're commissioning me in 1980.

My mother was in the Navy. She served in the Medical Service Corps of the Navy and took care of the wounded at a hospital out in Seattle during World War II.

My dad was a Navy Corpsman, a Navy medic, who served with the 4th Marine Division. He did the assault landings at Kwajalein, Saipan, Tinian, and Iwo Jima.

And his brother--and I just found this out maybe a month, a month-and-a-half ago--my dad was the oldest of ten, and his second in line, the number two brother, I knew he was at Normandy. He was in the Navy; he was at Normandy. I thought he was on a ship.

So, I asked to get the records. I got the records about a month-and-a-half ago, and I discovered that he was actually at Omaha Beach and he was a beachmaster. And I didn't even know the Navy did that. Now, I realize it wasn't just Army guys on those beaches; it was a lot of Navy guys, too, and Coastguardsmen. So, he was a communications specialist and he was part of the beachmaster team at Omaha Beach, which is pretty cool.

So, my grandmother had one son hitting the beach at Saipan on 15th June, 1944, and another son hitting the beach at Omaha, and she had no idea. She was taking care of eight other kids back home in Somerville, Massachusetts.

And I'm very proud of their service. My dad's uncle before him was in the British Army--the Newfoundland Regiment, which was part of the British Army--and he was wounded at Gallipoli in World War I, and because of that he wasn't at the first Battle of the Somme. So, he wasn't killed. Only 87 members of the regiment survived World War I.

And his other uncle was wounded at Meuse–Argonne in the 26th Division; my mother's brother was in the Philippines; my father's number three brother fought in the hill fights of Korea with the 7th Division.

So, I'm very proud of my family's contribution and military service, and I encourage anyone to get out there and put the uniform on and serve their country. But you don't have to put the uniform on. There's a lot of ways to serve this country. You can do it as teachers and nurses and cops and firemen. You can do it as reporters. You can serve this country in any number of ways. But we're very fortunate to have a country like we have, and our commitment to the Constitution--we want to pass that on to the next generation.

MR. IGNATIUS: So, well said. Thank you. So, last question: We're asking every guest at this AI summit who has explained this AI future to them most powerfully--someone we might not be aware of.

Who's that person for you?

GEN. MILLEY: Well, actually, it was Elon Musk when I first met him. That's right.

MR. IGNATIUS: I believe that.

GEN. MILLEY: Yeah, so--

MR. IGNATIUS: And you've told me, but share what he said.

GEN. MILLEY: And he's said this publicly a couple of different times since. So, I don't think I'm sharing anything particularly unusual.

But when I first met him, I guess it was eight or nine years ago. He's obviously brilliant; he's a genius, he's a technologist. And he's obviously a good businessman; he's got lots of things going on.

But he has an ability, I think, to see and connect dots that other people may not have. So, at the time--and I did a little research before I met him. And at the time, Stephen Hawking, who's passed on, and Bill Gates, the Microsoft Bill Gates, and then Elon Musk, independent of each other, all three of them around the same time eight or nine years ago, said words to the effect that they thought AI was the most powerful and potentially most dangerous technology ever developed by humans.

And then, when I'm out talking to Elon Musk for a couple of hours, I asked Elon, I said, what do you think of AI? And he just started saying "house cats," and I was like--you know, Elon, you okay? House cats? What's happening here?

And he says, hey, look it, the reason we humans are at the top of the food chain, it's not because we're the strongest; that would be the gorilla. And it isn't because we're the fastest; that's the cheetah. It's because we're the smartest.

And when you combine AI, which is going to be orders of magnitude smarter than human beings, and you combine it with robotics, in Elon's words, you could potentially create a new species, which would be the smartest, and that's dangerous--and I was like, Elon, come on now? I can't get my head wrapped around that; I'm just an infantry soldier, here. But it was an interesting conversation.

So, look it, there's a lot of positive to AI. First of all, no one's going to stop it. You can't stop technology. This is one of those things, throughout history, no one could stop the Gutenberg press and no one could stop the development of the wheel and no one could stop the development of the airplane, et cetera. You're not going to stop technology. Technology is going to be developed.

What has to happen with AI is how are human beings going to interact with it and how are they going to control it and use it, and are there going to be regulations?

There's some serious--very serious--legal, ethical, and moral considerations to the use of artificial intelligence, especially when you have the application of lethal force in military operations. And none of those questions are answered. They're being asked, but they're not answered yet, and that's a real challenge that we all collectively are going to have to come to grips with over the next five years.

MR. IGNATIUS: So, folks, house cats.

GEN. MILLEY: House cats, there you go.

MR. IGNATIUS: That's--the question is whether that's our future.

General Mark Milley, one of a kind. Thank you so much for joining us.

GEN. MILLEY: Thank you.

Securing Cyberspace

MR. IGNATIUS: So, ladies and gentlemen, you know who I am, but I want to introduce our next speaker, Jen Easterly, the Director of the Cybersecurity and Infrastructure Security Agency, known as CISA, which is part of Homeland Security but really is increasingly an independent operation.

Jen Easterly, like General Milley, is an American original. She's a graduate of West Point. She's a Rhodes Scholar. She served with NSA in Iraq and Afghanistan--just an extraordinary record of service. We're really pleased to have you here.

MS. EASTERLY: Thank you.

MR. IGNATIUS: Thanks for coming.

MS. EASTERLY: Great to be here.

MR. IGNATIUS: So I want to begin with, as I did with General Milley, the big picture and ask you, as you think from your perspective running our cybersecurity agency, what is the effect that AI will have both in cyber defense, protecting our country, and in cyber offense as this technology rolls forward and is applied?

MS. EASTERLY: Yeah. Well, great to be with you. I guess we're the closers. We're before the reception, so let's make this fun.

You know, at the end of the day, we've been using AI in the form of machine learning for a long time in cybersecurity, and so given all of the work that's happening now with generative AI and large language models, we're basically looking to leverage them to up-gun the capabilities that we've already been using. And because, you know, our role is as the civilian cyber defense agency, we've been very focused on using AI specifically to detect malicious activity as it comes at federal government networks, and then using that as a pilot for detecting attacks against critical infrastructure. So we're excited about the potential for that.

We just ran a pilot around it. What we discovered was not groundbreaking: these capabilities were not going to solve the problem of cyber defense, but they were augmenting capabilities that we already have in place.

But I think it's important to remember, David, that, like, frankly, despite all of the progress over the past year and a half with respect to large language models and generative AI, we're still in pretty early days. So we have not yet seen the incredible breakthroughs that I think we will see that will help us on the cyber defense side. We've not yet seen the breakthroughs that our adversaries will use on the offensive side.

But all that said, even though it's early days, these capabilities are moving incredibly quickly. In some ways, they are unpredictable, and so what we are trying to do is optimize the power of the capabilities, but also, very importantly, as the security team, to mitigate the potential risks.

MR. IGNATIUS: So I want to ask you the same question that I asked General Milley about offense versus defense. I sometimes think that as you apply AI--the ability to look into your adversaries' networks and see early signs, so that you can predict, given machine learning and everything you know--the defense may be super powerful. But I'm curious, A, whether you think that's likely in the future; and, B, whether you're actually seeing some of that now.

MS. EASTERLY: Yeah, I really think the jury is out. You know, there is a school of people who will say this will solve the cybersecurity problem, and I think there are really interesting things that AI capabilities can do for cybersecurity. There are others who will say, you know, the old saying--the defenders have to be right all the time; those on the attack only need to be right once--will be amplified by an order of magnitude with AI. I don't necessarily subscribe to either school, and as I said, I think the jury is a bit out.

Just to give you my framing as I think about this, David: AI is a form of technology, and the reason why I've been so focused on the security element of it is, if you look back over the past 40 years, back to 1983--which is a year that you remember well, because I think that was the start of your career as a novelist, after the embassy bombing in Beirut--well, interestingly, 1983 is also the year that TCP/IP--and I think Maria Ressa mentioned Vint Cerf--was implemented to allow computers to talk to each other. And if you talk to the early inventors of the internet and any of the security pioneers, they will all tell you that none of this technology was developed with security in mind, right?

The late Dan Kaminsky, a famous security pioneer, said the internet was invented to move pictures of cats, and it's very good at moving pictures of cats. House cats, I guess, is what Mark Milley would say.

MR. IGNATIUS: Elon's house cats.

MS. EASTERLY: But it was never created with security in mind.

And to be frank, it's why we have an internet that is full of viruses, why we have software that is full of vulnerabilities, why we have social media full of disinformation, and why anybody who cares about security issues is very, very focused on ensuring that the AI capabilities that are being developed are being developed in a way that security is the top priority, not speed to market, not cool features, because these capabilities are so strong and so powerful. You just heard General Milley say these will be the most powerful weapons of our time, and at the end of the day, you look back at the most powerful weapons of the last century. They were built and safeguarded by governments that were disincentivized to use them.

These capabilities are being built by companies that, at the end of the day, are fiduciarily responsible for providing value to shareholders. So this is a more difficult problem, and it's one that we need to be able to work very closely between government and industry to ensure that we can optimize the power of these capabilities but also avoid the failure of imagination that we may suffer if these capabilities get into the hands of rogue nations or cyber criminals or terrorists.

MR. IGNATIUS: So I want to ask you to evaluate for us the AI capability of our near-peer competitor, China. In the last year, two years, we've seen this extraordinary explosion of large-language models doing things that are dazzling, and it looks like the United States has a significant lead in that technology. But I'd be interested in your evaluation of where China is, areas in technology broadly where they're equal or ahead of us. But in general, what kind of competitor are they?

MS. EASTERLY: Yeah. I mean, at the end of the day, I'd say the big difference right now is China puts much greater restrictions on their capabilities than we do here in the U.S. It goes back to my point: there have never been any guardrails put around technology, because of what I think is a bit of a myth--that if you regulated technology in any way, then you would stifle innovation.

I do think it is possible to have responsible innovation, and I think some of the arguments around the fact that Europe doesn't have big tech companies because of all of their regulation, I just don't really buy that.

Like, at the end of the day, we need to ensure that we can actually work with China and work with Europe and work with other countries to figure out how we can control these technologies so that we can protect our population.

So, you know, I worry a lot about China's cyber capabilities, but right now, without AI, they are creating enormous risk to our critical infrastructure.

I've talked about this publicly, but it is worth reinforcing the fact that the threat from Chinese cyber actors, colloquially known as Volt Typhoon, has changed in kind, David. You know, I've been doing this for a long time, and for years we talked about intellectual property theft and espionage and data theft. And now we're talking about Chinese cyber actors pre-positioning inside our critical infrastructure, not for espionage, but to launch disruptive or destructive attacks in the event of a major conflict or crisis in the Taiwan Strait. And that is happening right now.

I am looking forward to being able to use AI to thwart attacks like that, but as we think about the power of some of these capabilities, there is a lot of damage that is being done without the added value of generative AI on top of it.

MR. IGNATIUS: Just so our audience understands the importance of what you just said, the Chinese are pre-positioning these cyber weapons, in effect, inside--when we say critical infrastructure, that sounds so abstract. What are the examples of critical infrastructure where they'd be?

MS. EASTERLY: And to be clear, so what they're doing--and I don't want to get tech nerdy here, but they are essentially opening up doors within our critical infrastructure that they can then use to launch disruptive or destructive attacks.

MR. IGNATIUS: Critical infrastructure means our electrical grid.

MS. EASTERLY: Yeah, when you say that, it does sound like a really wonky term. At the end of the day, critical infrastructure is just how we get gas at the pump, food at the grocery store, money from the ATM. It's our water. It's our power. It's our health care. It's our education. It is just the networks and the systems and the data that power our daily lives, and it's all underpinned by a technology base.

And the Chinese have very smartly seen that if they have a capability to do mass disruption, they can--their goal being to induce societal panic and chaos--deter our ability to marshal military might and citizen will in the event of a major conflict in the Taiwan Strait.

So imagine this. This is really a scenario where you have a war in Asia that's accompanied by mass disruption here in the U.S., the explosion of pipelines, the pollution of water facilities, the severing of our communication links, the derailing of our transportation nodes. So those are the things that we are critically focused on at CISA as we work to try and defend the nation and protect and defend our critical infrastructure.

I will say one other thing that's just consonant with my point about technology, David. The Chinese cyber actors are extremely talented, extremely sophisticated. There have been huge investments made over the past 10 years. All that said, the way the Chinese cyber actors are breaking into our critical infrastructure is not through exotic techniques. They are actually using the fact that there are defects and flaws in the technology that we use every day to jump into our critical infrastructure, and that's why at CISA we've been so focused, when we talk about technology, including artificial intelligence, on ensuring that it is built and tested and deployed to be secure by design, meaning built to dramatically drive down the number of exploitable flaws that can be taken advantage of.

MR. IGNATIUS: And you used the phrase--I interrupted you--that the Chinese are "opening doors." Explain what that means.

MS. EASTERLY: It's a way to think about it. So essentially, when we talk about vulnerabilities, you can think of them as glitches in the code, and those glitches will allow malevolent, malicious actors to break into your critical infrastructure and get a foothold, and they just keep that foothold open. They may test it every once in a while, which is one reason why they're very hard to find.

We've had teams that have actually found them in multiple sectors: water, transportation, communication, aviation. We've found them working very closely with private-sector partners. We've eradicated them. But we think what we've found to date across multiple critical infrastructure sectors is, frankly, the tip of the iceberg, which is why we've been so focused on working with technology manufacturers to ensure that they're creating safe tech, working with businesses, large and small, to make sure they're making the investments not only in cybersecurity but in resilience, because it will be extraordinarily difficult to prevent mass disruption.

What we have to do is ensure that we're building operational resilience, cyber resilience, and frankly that we have the same societal resilience that the Ukrainians have shown since the full-scale invasion in early 2022.

MR. IGNATIUS: Boy, that's a super powerful description of the threats that are facing us.

So as head of CISA, you're responsible, among many other things, for our election cybersecurity and our ability to detect and prevent attempts to manipulate our information space, and I want to ask you, in this presidential election year, what you're seeing, both in terms of attacks already by our potential adversaries or, as we near election day, additional attacks, whether you think we're vulnerable to that and what you're doing to prevent it.

MS. EASTERLY: Yeah. So thanks for asking. Just to take a step back: after 2016, with Russian attempts to influence the presidential election, election infrastructure was designated as critical infrastructure--it had never been designated that before--and CISA was designated as what's called the "sector risk management agency," meaning that we work with state and local election officials who are on the front lines of administering, managing, and securing our election infrastructure.

Now, over the past seven years--and this is where the good news story comes in--CISA, in our role as the lead government agency, working with the rest of the federal government--the intelligence community, the Federal Bureau of Investigation, the Department of Justice--has worked hand in hand with election officials and vendors to make massive improvements in the security and resilience of our election infrastructure. And the infrastructure that underpins our elections, frankly, has never been more secure, and that election stakeholder community has never been stronger. It's why the elections run in 2018, in 2020, and in 2022 were secure. And just to be clear, there's no evidence that any malicious actors altered, deleted, or changed any votes or had any material effect on the outcome of those elections. It's been validated time and again, including in multiple court challenges. And in any race that was close in 2020, there were paper records that were counted and recounted and audited to ensure accuracy.

In this job, I have the privilege to spend a lot of time with chief election officials across the nation, and I know how tirelessly they work to ensure that every one of their citizens' votes is counted as cast. It's why I have confidence in the security of our election infrastructure and why the American people should as well.

As you know, though, even though election infrastructure has never been more secure, the threat environment has never been more complex, starting with our foreign adversaries. If you read the recent Annual Threat Assessment from the Director of National Intelligence, or the 2022 assessment of the midterm elections that was declassified at the end of 2023--it's a good read--you'll know that our foreign adversaries, including China, Russia, and Iran, but also more countries, are getting active in terms of malign influence and interference and disinformation.

That has been going on for a while. We expect it to continue to go on. It is a persistent threat, and it will be exacerbated by generative AI capabilities.

I will tell you, perhaps the more pernicious threats to elections are threats of violence against election officials, largely stemming from unfounded claims that the 2020 election did not represent the will of the American people. These claims are corrosive to the foundations of our democracy. They've resulted in threats of harassment and violence to election officials and their families, and many election officials have resigned because of it. Those who remain often operate, frankly, in difficult conditions, and it's why we're so proud--and I've made it a top priority--to stand shoulder to shoulder with these election officials. And so we're working with them to do cybersecurity assessments, physical security assessments, and exercises to be able to deal with cyber risk, physical risk, insider risk, and the risk of foreign malign influence and disinformation.

And so we have very aggressive efforts going on practically every day across the country, working with over 8,800 jurisdictions to help make sure that election officials have the resources and the capabilities and the intelligence that they need to run secure elections. And again, it will be complex, but I have great confidence in the process.

MR. IGNATIUS: On the question of disinformation, at a Washington Post Live event last week, Ambassador Nate Fick, who's our State Department Ambassador for Cybersecurity--

MS. EASTERLY: Good friend.

MR. IGNATIUS: --a good friend of yours, I'm sure, for years--said that he has solid evidence that our adversaries have sought to deepen the polarization in this country over the war in Gaza. Sharp divisions are there. People have been trying to make those worse. Can you confirm that, and could you say a word about how we can protect ourselves against that attempt to inflame sentiment in the United States?

MS. EASTERLY: We can be certain that our foreign adversaries are taking every single opportunity, whether it's Gaza, whether it's the upcoming election, whether it's the war in Ukraine, whether it's issues around China and Taiwan, taking every single opportunity to sow domestic discord and to further inflame partisan political divide and to do everything they can to undermine our democracy. We can be absolutely sure that that's what they want to do, and frankly, in some ways, we are feeding into their strategy.

And so we work very closely, in particular, with election officials to do a few things as they're dealing with disinformation. First, we work with the federal government to put out the tactics that our foreign adversaries are using for malign influence and disinformation.

We put out guidance around how generative AI can be used for influence and disinformation and really what election officials need to do to deal with it. So that's one.

Two, we are also using what we call our Rumor vs. Reality website to put out accurate information about election security, so the American people can understand why drop boxes are secure and how absentee ballots are secured--essentially, factual information about election infrastructure.

Perhaps most importantly, David, what we do is we work to amplify the trusted voices of election officials because they are the authoritative subject-matter experts.

Whenever I talk to any audience, I say a couple of things about elections. First off, know that election officials are working hard every day to ensure that elections are secure; when you go to the polls, say thank you. Thank them, because they are out there in a difficult environment, and they're doing it not because they're getting paid a lot of money. They're doing it because it's the right thing to do for democracy.

Second, if you have any questions about the security or integrity of the democratic process, please be part of the process. Sign up to be a poll worker. Be an election observer. If you have questions, talk to your election official. It's a very transparent process, and election officials across the country want to have that discussion with their communities to talk about why the process is secure and all the technological, physical, procedural safeguards put in place to ensure that every citizen's vote is counted as cast.

And frankly, to your point, David, we should not allow the adversaries to be successful in stoking partisan discord and tearing this country apart. We need to come together to deal with the very real threats, like Chinese intrusions in our critical infrastructure, like attempts to go against our power systems and our water systems. We need to come together and really defend our democracy.

MR. IGNATIUS: So in this world of poisonous disinformation, as you just have described it, there's obviously a question about whether our social media companies should do more to moderate content, to screen it. We have an audience question from Kate Davis, who lives here in Washington, who asks, "How, if at all, should content be labeled and verified for authenticity? How, if at all, should labeling be standardized and enforced?" What do you think about that issue?

MS. EASTERLY: Yeah. So, at the Munich Security Conference, many of the generative AI and technology companies committed to voluntary steps that they would take to ensure that AI-generated content is labeled--for example, if there's a video of a campaign created with AI, that in fact there are labels around that. I do think efforts to ensure that anybody can tell whether a video is generated with AI capabilities, whether it's a deepfake, are very important. The problem, however, is there are no real teeth to these voluntary commitments, and so I know many of these companies are working hard to put processes in place.

I frankly do not think that generative AI is going to fundamentally change the threats to this election. I think it will exacerbate them; I think it will inflame them; I don't think it will change them. It would be very helpful if these companies would help identify whether some of these generated images and videos are created by AI or not, and I think there are efforts to do that through content provenance. But there needs to be a set of rules in place, ultimately legislation and regulation--I know that Congress has put out a framework on this, and there's the EU AI Act--but frankly, there need to be safeguards put in place, because these capabilities are incredibly powerful, and they could be used in very harmful ways by malicious actors, by terrorists, by criminals, by rogue nations.

I mean, again, going back to your experience in the '80s with terrorism in the Middle East--I spent a lot of time in counterterrorism in Iraq and Afghanistan. I was the head of counterterrorism at the National Security Council from 2013 to 2016. We always talked about al-Qaeda's chief bomb maker, the man we were always looking for in Yemen, Ibrahim al-Asiri. Now think about al-Qaeda's chief AI officer.

MR. IGNATIUS: Oof.

MS. EASTERLY: Right?

MR. IGNATIUS: What a frightening thought.

MS. EASTERLY: Imagine that. And so you think about the bombing in Beirut, whether it's April or October--those were truck bombs. These capabilities will be even more powerful.

Now, I don't want to be labeled as a doomer coming out of this talk, because I really am a tech optimist, but I will say, at the end of the day, these capabilities are so powerful. They will do amazing things, and I will stipulate that. But it is our responsibility to leverage the power of these tools and not suffer a failure of imagination that allows our adversaries to use these tools in ways that will create unimaginable harm to populations around the world. We cannot allow these companies, who ultimately are fiduciarily responsible for adding value for shareholders--we can't allow them to continue to operate with complete impunity. We need to come together. We need to work with industry, with technology, with the generative AI companies to ensure that these amazing capabilities can be used safely and securely.

MR. IGNATIUS: So we have less than a minute left. I want to ask you the same question we've been asking everybody. Who is on the top of Jen Easterly's reading list about AI that we ought to take a look at?

MS. EASTERLY: Yeah. I mean, you know, I could give you a bunch of AI luminaries. I'm not going to do house cats. I feel like I'm--

MR. IGNATIUS: [Laughs]

MS. EASTERLY: I really--I'll take it in a different direction, David. I think we have to really look at history. I think we have to learn from the evolution and the history of technology--how technology has been used for good, how it's been used for evil--and really learn from examples about the optimization of technology but also, again, the downsides.

You know, less than 15 years ago, we were extolling moving fast and breaking things, right? We need to look back and see what got us to where we are, and we need to move forward informed by history and ethics, frankly, and values, to ensure that what we are building--some amazing, amazing capabilities--we are building in a way that will be safe and secure for the American people and for our allies around the world.

MR. IGNATIUS: So amen. I want to thank Jen Easterly, one of our amazing public servants. We're lucky to have her working for us. Thank you so much, Jen.

MS. EASTERLY: My pleasure. Thank you.

MR. IGNATIUS: So, ladies and gentlemen, let me just invite you. You may hear, in the other room, the reception. We'd like to invite everybody to join Washington Post colleagues, speakers, all of you at the reception. Thank you so much for coming today.

[End recorded session]


  11. Tony Blackburn

    Tour details for Tony Blackburn - Sounds of the 60s 2020 to 2024. Between 22nd June 2024 and 11th November 2024 this tour will visit 7 venues in; St Albans, Lowestoft, Malvern, Bournemouth, Nottingham, Truro and Crawley. Current and future dates only. This production has been listed in 55 venues since 1st March 2020.

  12. BBC Radio 2

    Details of music played on every edition of Sounds of the '60s is now available after the end of each show on its individual episode page. Just click through to Episodes and you'll find links to ...

  13. Darlington Hippodrome

    The incredibly popular Sound of the 60s tour, hosted by BBC Radio 2s Tony Blackburn OBE, is coming to DarlingtonEvery week, listeners across the UK tune into BBC Radio 2 to hear the legendary Tony Blackburn play the greatest songs of the 60s. Now, the legendary DJ will be taking his Sounds of the 60s show on the road, for a UK tour celebrating the music of this muchloved decade.

  14. Sounds Of The 60s tour dates & tickets 2024

    The Palace Theatre, Palace Avenue Theatre, Palace Ave, Paignton TQ3 3HF. Sounds of the 60s with The Zoots Sounds Of The 60s. View tickets. Rated Excellent. Sounds Of The 60s live shows. Find tour dates near you and book official tickets with Ents24 - rated Excellent on Trustpilot.

  15. Sounds of the 60s Live with Tony Blackburn

    An evening of 60s classics performed by the Sounds Of The 60s All Star Band & Singers, hosted by Radio 2's legendary DJ, Tony Blackburn. ... Tue 28 Feb 2023. 3 Quay St, Manchester M3 3HP, UK. This show has an interval. Tue at 19:30. For group bookings of 10 or more: Submit a request online.

  16. Tony Blackburn's Sounds of The 60s Tour

    Tony Blackburn's Sounds of The 60s Tour. Tony Blackburn's Sounds of The 60s Tour. 3,049 likes · 4 talking about this. Concert Tour.

  17. Sounds Of The 60s All Star Band & Singers

    March 2021. Thu 18 Mar. Ipswich, Regent Theatre. Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn. Sat 13 Mar. Sheffield City Hall and Memorial Hall. Sounds Of The 60s Live Sounds Of The 60s All Star Band & Singers, Tony Blackburn. Sat 6 Mar. Bolton, The Albert Halls.

  18. Eagles Announce Sphere Residency in Las Vegas

    The state-of-the-art venue opened in September 2023 with a residency by U2 that was eventually expanded to 40 dates. Toggle Menu. Homepage. ... Sphere Immersive Sound, powered by HOLOPLOT, which delivers audio with unmatched clarity and precision to every guest. ... Happy Together 2024 Tour Opens With Parade of '60s Hits; Mike Campbell Shares ...

  19. Moyra Peralta At Crispin St Night Shelter

    Next tickets for THE GENTLE AUTHOR'S TOUR OF SPITALFIELDS are available for Saturday 22nd June. . Remembering photographer Moyra Peralta (1936-2024) who died on 8th May aged eighty-eight. "I am standing in the one-time women's dormitory and have brought a photograph of my friend Peggy. Her husband had died and she could not bear to remain ...

  20. Françoise Hardy Dead: 1960s French Singer, Actor and Model Was 80

    Françoise Hardy, a French singer, actor and model whose classical beauty and often melancholy music combined to transfix fans internationally in the 1960s and beyond, has died at age 80. Her son ...

  21. Taylor Swift Eras tour: Everything you need to know about the UK dates

    The day after the tour began in Arizona in March 2023, I had seen it in its entirety - from my bedroom in London - thanks to people live-streaming from the concert, or uploading multiple videos.

  22. Full Transcript: The Futurist Summit: The Age of AI

    He did the assault landings at Kwajalein and Saipan, Antonia, and Iwo Jima. And his brother, and I just found this out maybe a month, a month-and-a-half ago--I knew that his brother--my dad was ...