
Discussion on AI and the 2024 Election | C-SPAN | May 6, 2024, 12:14pm-1:01pm EDT

12:14 pm
plus every purchase you make goes toward supporting our nonprofit operations. Start shopping now by scanning the code on the right or visiting us online at c-spanshop.org. >> C-SPAN is your unfiltered view of government. We are funded by these television companies and more, including Comcast. >> Are you thinking this is just a community center? It's way more than that. >> Comcast is partnering with 1,000 community centers to create wi-fi enabled lift zones so students can get the tools they need to be ready for anything. >> Comcast supports C-SPAN as a public service, along with these other television providers, giving you a front row seat to democracy. Now to a discussion on artificial intelligence and its potential effects on the upcoming presidential election with the secretaries of state for Kansas
12:15 pm
and Nevada. This is about 45 minutes. >> Ladies and gentlemen, please welcome the vice president and executive director of Aspen Digital. [applause] >> Thank you. So you heard earlier this afternoon a talk about the state of local news, and then you heard a little bit later from Brian about the state of national news. Right now I'm going to talk for a few minutes about the state of digital media. The story of digital media follows the trajectory of a tale as old as time. It's a story of hope followed by disillusionment, and then, in theory, the last stage is redemption, although I'm not sure what the redemption is yet. Let me go back to the hope for a couple of minutes. You may recall, or some of you may recall, maybe some of you are too young, but back in the
12:16 pm
mid-to-late aughts and early part of the 2010s, the world was a pretty exciting place when it came to digital media. Google really did bring the world's information to our fingertips. We connected with old friends from high school or from other parts of our lives on Facebook. We had access to some incredible videos on YouTube. And when it came to news, it was revolutionary. You may remember the Arab Spring of 2010 and 2011, or the Miracle on the Hudson, ways in which access to information changed, because people on the ground with these devices, the powerful computers in our hands, were able to report, just regular people, on what they were seeing, what they were experiencing. That was the promise of citizen journalism. We learned, as it turns out, about the capture of Osama bin Laden from a guy a few miles away saying, I'm hearing
12:17 pm
choppers overhead, this is very strange, I forget the name of the town, what's going on? It was the democratization of information, or so we thought. Fast forward just a few years and we get to the disillusionment phase, when everything began to change, slowly, slowly, and then all at once. So what is it that went wrong exactly? I would put it in three categories. First of all, people. People is what went wrong. You had foreign actors who gamed the system, the Internet Research Agency in St. Petersburg, who were able to use the powerful social media platforms to try to manipulate public opinion in the United States in the run-up to the 2016 election. And then others who willingly spread those and other kinds of mis- and disinformation, whether it was about elections, whether it was about covid,
12:18 pm
whether it was the reasons that the crash happened at the bridge in Baltimore last week. Then you have the profiteers, no ideology other than making a few quick bucks. They were able to game these platforms to bring money to themselves through various means. That was one category. The second category was the tech companies and platforms themselves. Whether they did not anticipate what could have happened with these platforms, or whether they knew that their platforms could be gamed and didn't care, we do not know. We may never know. But we started to see that social media platforms became basically a big game of whack-a-mole, starting in 2014, 2015 and continuing to this day. Then after 2020 a lot of the social media platforms threw up their hands and said, enough. We are not going to try to
12:19 pm
moderate, necessarily moderate, content on our platforms. It's too expensive. It's too politically fraught. Too many people hate us no matter what decision we make. Too many subpoenas coming from this Congress. They are like, we are out. We are going to leave it alone. You have Elon Musk, who bought Twitter, never perfect but always an incredibly valuable tool, turning it into the dumpster fire it is today. The third category of what went wrong is the decline of local news, which we have been talking about so much today. I don't need to repeat it. What has played into that void, what has filled it, is so much of this kind of noise that we hear from platforms. News organizations sometimes did it to themselves. Those of us in news, we all laugh at the line about the pivot to video, which was supposed to save journalism. It didn't, because as soon as Facebook changed their strategy,
12:20 pm
the whole thing went down in flames. The money that supported journalism went to the platforms, a lot of it for good reasons. For advertisers, it was a more efficient way to reach people. And also the world has become such a polarized place that many just do not trust any news. Again, we have been talking about that all afternoon. And now here we are, and this is going to be a bit of a segue into the panel that I'm going to be part of that's coming up next, in the A.I. world. What is that going to do to this already very fraught digital information ecosystem? My message to you, and I may repeat this on the panel given the right opportunity: don't be afraid. There's been a lot of coverage about the big spectacular deepfake of Trump doing something or Biden doing something. That may happen. I don't think that's a big worry. Those will be debunked so quickly that they are not going to get a chance to get much
12:21 pm
traction. What I'm much more worried about, when it comes to content that is manipulated via artificial intelligence, is the things that you can't see. That the media can't see. That the public can't see and debunk. It's coming in on WhatsApp. It's coming in on Facebook Messenger, on Telegram, all of the peer-to-peer channels, and not always peer-to-peer, you can distribute those widely. The content that you can't see can be very damaging. And very targeted, because what A.I. enables is a speed and scale and degree of targeting never before possible. If before you needed the Internet Research Agency, funded by the Kremlin in St. Petersburg, to pull this off, now you don't. We are back to the proverbial guy in his pajamas at his
12:22 pm
parents' house who can make the same effort at the same scale to cause a lot of trouble. But if there is one thing that worries me most, it is a phrase that was coined by two people, one a U.T. researcher named Bobby, and another academic, Danielle, and it is the liar's dividend. The liar's dividend describes the phenomenon of what happens when we are hearing from so many different places how we can't trust information. That A.I. can manipulate video, which it can. Audio, which it can. That it can manipulate images, which it can. Instead of trying to find out, is this real, is this not real, going to trusted sources, what do we do? We stop believing in anything at all. And this is the liar's dividend. It is out of the playbook of
12:23 pm
autocrats and would-be autocrats going back for millennia, now enabled by A.I. I was thinking, for those of you that were here last night, you heard Woodward and Bernstein. Carl Bernstein made a comment that public opinion changed when people were finally able to hear Richard Nixon's tapes. Think about what would happen today if those tapes were released. Fake news. This is A.I.-manipulated audio. That wasn't me. You know what? A lot of people would be like, yeah, I don't know that I can believe that. That's the world we live in. Yeah, I was supposed to talk about redemption. I don't know that I have the redemption yet. If there is redemption to be had, it is in the promise and the growth, which so many people here in this session, you heard it in the first panel, the promise of local news. Local news is -- there is no
12:24 pm
silver bullet to all these problems. If there is any salvation to be had, it is in local news and the growth of local news: people of the community, in the community, communing with the people who are their neighbors, providing them the information that they need, listening to them, and building that trust. We know that leads to civic engagement. We know how important it is. We must all support efforts, whether it is Press Forward or any of the other efforts you heard about from Sarah, the American Journalism Project, or the work that Elizabeth was doing. We all really need to support that. It is the only real way out. And with that we are going to move into our A.I. and elections panel. I am happy to introduce my fellow panelists: Secretary of State Cisco Aguilar, Secretary of State Scott Schwab, OpenAI's Becky Waite, and our moderator, Dr. Talia
12:25 pm
Schwab. [applause] >> Thank you so much for that. Such a pleasure to chat about digital media and the 2024 election. We have a hot topic here, just a small topic. And just to offer some introductory remarks: we are in a remarkable setting right now in 2024. We have elections in over 60 countries, plus the European Union, representing just under half of the world's population, which is mind-blowing. We have already seen A.I. used in elections. We have the A.I.-generated robocall impersonating President Biden that sought to discourage voters in New Hampshire's primary. We have audio clips of a liberal
12:26 pm
party leader discussing vote rigging. We have a video of an opposition leader in the conservative Muslim-majority nation of Bangladesh wearing a bikini. There are so many things to talk about and possible uses of A.I. as we look to the election. I'm looking forward to this conversation, and I want to just get started. We are delighted to be joined by Becky, who is the head of global response at OpenAI. And Becky, I want to just dive right in. OpenAI released details about its approach to the 2024 elections earlier this year, and noted that the rule they have is that people cannot use OpenAI tools for campaigning or for influencing voters. Can you talk to me about enforcing rules like that on a global scale? It seems almost unfathomable. Tell us about it. Becky: Yes. Thank you so much for having me,
12:27 pm
and to the LBJ School for having this forum and discussion in this milestone election year. It is more important than ever to have these sorts of conversations that bring together folks across governments, civil society, and industry. I spent a lot of the last six months speaking with policymakers and civil society around the globe to understand what is top of mind for them going into this election year. While we are very excited about the significant benefits of this technology, we are also clear-eyed about its potential risks. And through those discussions, through that dialogue, we have developed a preparedness framework that focuses on three efforts. First, the policies: making sure we have the right policies in place. Second, preventing abuse. And third, elevating appropriate
12:28 pm
information and transparency in our tools. First, as you mentioned, on our policy lines, we noted that we don't allow political campaigning or discouraging participation with our tools. We wanted to have a set of policies that were a little bit more conservative this go-round, given that we haven't seen generative A.I. in the elections space before. We wanted to make sure we were taking a conservative approach out of an abundance of caution. Second is preventing abuse. This gets to the enforcement piece. We think about safety through the entire lifecycle of our tools; it's not a single point. To use, if you'll excuse the bad metaphor: if you are going fishing and have a bunch of nets, you don't use just one, you use several nets so you catch as
12:29 pm
many fish as possible. We think about safety in the same way. We have interventions across the entire lifecycle of our tools to make sure we are enforcing against different harms. One example of an intervention is something called reinforcement learning with human feedback. I don't know how many people here have used any of our tools or generative A.I. in general, but one thing that we have in our chatbot tool, our large language model, is this reinforcement learning at the very front end of how we think about safety. It's a fancy way of saying we ask a question of the model, we generate a bunch of responses, and we tell the model which one is best. By doing that we can steer the model, over and over, toward something that is safer, more reliable, and more helpful. That's how we reduce the likelihood that it's going to produce harmful responses. Finally, the last piece is
12:30 pm
around transparency. In our models we look to elevate appropriate sources and cite where information is coming from, where appropriate. I guess a final thought before handing it over to someone else on the panel: it's ever more important that we have these sorts of conversations and that we are collaborating not just across industry but also within industry. We are really excited about the work that we are doing with social media companies and others where generated information might actually be distributed, making sure that we have those close connections going into the election season. Talia: Speaking about close connections, OpenAI has a partnership with the National Association of Secretaries of State. And what an honor for us to have the president of this organization with us, Secretary Scott Schwab. Thank you so much for being here. Let's get started by thinking about Kansas. When you think about Kansas, what A.I. or foreign influence
12:31 pm
threats worry you the most? Secretary Schwab: This is where we come from looking at it: there is a difference between the campaign side and the election side, and often people commingle them. As a secretary, if you get that fake Biden phone call, I'm not concerned about that, because that's campaign side. We'll let our ethics commission deal with that, our bureau of investigation and whatnot. But on the election side, this is where it can get to be really a concern. I hate to use these examples, because when you use the examples you give people ideas. Johnson County is a wealthy county. I get the honor to live there. I love it. But it's a purple county. So imagine if somebody generated a video that was shared on social media that said, due to bomb
12:32 pm
threats, all Johnson County polling places will be closed on election day. Imagine the chaos if someone used my image and likeness. We already know that sometimes news is so quick, we've got to get this out there. Now I'm out there saying, that's not me. Now the news is trying to say, what's real? And then when it's all sorted out, there was no bomb threat, it was fake. How many voters did you affect who said, I'm not going to take a chance, I'm not going to vote today? You can't undo that. Those are the concerns. I really like what Minnesota is doing as it relates to A.I. If you generate an A.I. image or video or voice and it doesn't have a disclaimer saying that it's A.I., and you are using it to influence a campaign or election, it's a crime. They carved out satire, the "Saturday Night Live" exemption.
12:33 pm
Those are the things on the election side that become terrifying. You are not misrepresenting a candidate; a candidate can undo that. There are two great motivators for humans to make decisions: hope and fear. Fear is stronger. It's a lot easier. So if you cause voters to have fear and not vote, how does that truly influence an election? That's outside the campaign side. As secretaries, that is the conversation we are having. Talia: A disturbing scenario. Secretary Schwab: Now somebody on the internet is saying, I have an idea. Those are the concerns. Talia: We have a student in the audience saying, I have an idea how to deter it. Secretary Schwab: sos.ks.gov -- and help us. Talia: You authored an article in Foreign Affairs earlier this
12:34 pm
year where you said generative A.I. companies in particular can help by developing and making available tools for identifying A.I.-generated content, and by ensuring that their capabilities are designed, developed, and deployed with security as the top priority, to prevent them from being misused by nefarious actors. How well do you think generative A.I. companies are doing? Secretary Schwab: The example she just gave, creating safety nets -- you don't know until it happens. When we hear the phrase in the article, safety has to be your top priority: when Boeing has a door come off an airplane, what's the first thing they say? Safety is our top priority. I get it. This still happened. A lot of times you don't know what the holes are until after it happens. We don't know how they are doing. We can come through November of this year and -- I'm curious about your opinion, you are more
12:35 pm
of a swing state than Kansas. There is a good sense of which way Kansas will go. But there are concerns that we won't know until after the election what will happen. My bigger fear is not this presidential election. It's going to be the one in four years, because the technology will be more developed. Nobody knows how to truly weaponize A.I. right now, but in four years I'm pretty sure they will. In 2020 our biggest concern was misinformation, and we got hit with the pandemic. The great philosopher Yogi Berra once said, the problem with trying to predict the future is it keeps changing. That's what happened in 2020. In 2020 we didn't deal with A.I. Now it's like a freight train. We'll know about it in 2024 -- we don't know yet how creative people have become in deploying it. Talia: Really interesting. And some conversations we had
12:36 pm
before also mentioned this: the things that will develop between now and the election, we need to look at that. Secretary Aguilar, I want to bring you into this conversation as well. In January, you said that addressing A.I. threats to electoral integrity will be a partnership between the federal government, the private sector, and local governments. I'm hoping you can give us a progress report on how much progress has been made in helping state and local governments understand the threats and how to approach them. Secretary Aguilar: No progress. And it's really, really frustrating, especially when a high-level federal official arrives in your state and asks you, what are you doing about A.I.? You look at them and go, you want my state to step up and put resources behind something that is receiving billions of dollars of investment? You are the federal government. You have access to researchers. You have access to information. I can only dream about that. I'm trying to figure out how to get 17 counties across our
12:37 pm
battleground state to be able to move from the legacy systems that exist onto a statewide system. And you are asking me to be the leader on A.I.? It's unfortunate and unfair. Talia: Not a good progress report there. Hopefully -- Secretary Aguilar: This is a thing that is impacting the rest of our country. It's not just an issue in Nevada. For me to have somebody ask me how I'm approaching A.I., I think it's unfortunate, especially when the federal government has not had a hearing on funding. Everything in the election space from the federal side is reactive. There is no strategic plan. There is no sustainable strategic funding in elections. Even though elections are deemed critical infrastructure, nobody's saying, what is it we are going to do? Everything we are doing is reactive.
12:38 pm
Sorry to be the downer. I can tell you a lot of great things we are doing in Nevada when it comes to elections and voter engagement. When it comes to this issue, this is one that's catching all of us. Talia: Everything is evolving so quickly. We are all thinking about what could be -- we don't know yet. Secretary Aguilar: We'll catch the issues as they come. I had an opportunity to participate in the A.I. Democracy Project, and they brought a few of us election leaders into a room. We tested several of the chatbots. The information that was coming out about Nevada specifically was wrong. And when you talk about these issues -- we don't know when somebody's getting this bad information. We know it's a younger voter that's going to go to A.I. and use it to ask questions to become educated. If somebody's turning 18 and asks, how do
12:39 pm
I register to vote, and the chatbot tells them you have to register three weeks before the election, which is not true in Nevada -- in Nevada we have same-day voter registration -- this young person is going to walk away and continue with their day. We just lost a voter. And that's what scares me. I will have no idea that happened, because the information being given is wrong and false. Talia: And the fact that we can say that in the United States, where there is lots of available information that models can be trained upon. Then we think about the global scale, which, Vivian, I know at Aspen you have been doing work thinking about, hosting a lot of public conversations and discussions about A.I. and elections. How do the threat, and the strategies to counter possible threats, change when we think about this globally? Vivian: As you said in the opening, this is a record year for national elections. We have been able to see, and you mentioned some in your open, too, as we count down to
12:40 pm
people voting in the fall, we have been able to track what we have seen happening in other national elections. And can we point to anything that says this changed the vote? We don't know. I think Secretary Schwab had it right: we are not really going to know the impact of anything in 2024 until we are able to study it afterwards, if we can get access to the data, which is another issue and another panel we should have. But we have seen A.I. used in every single national election, along the lines of some of the examples you gave about vote rigging, or the leading candidate in a very Muslim country, a picture of her in a bikini. It's terrifying.
12:41 pm
The messaging, again, it's the stuff you can't see. It's the WhatsApp messages, the Telegram messages. That's where this information, misinformation, false images, false audio can travel. And the ability to generate this kind of content, highly targeted, customized, personalized to your district, at scale, is unprecedented. Talia: We are going to continue on this frightening theme for a second. Becky, I want to ask you something. You live in the world of OpenAI, the world of A.I., thinking about the challenges, and you do it at a global level. It would be really fascinating for all of us to hear, from your vantage point, what's the worst-case scenario in the upcoming election, and what can be done to circumvent it?
12:42 pm
Becky: What you mentioned in your opening remarks, the harm that we have seen to date is really around deepfakes, and we've actually seen them play out at a global scale in these elections. We've heard over and over and over again in our conversations that this is the thing that people are most concerned about. I think the technology today is not yet at a place where some of the larger-scale risks and concerns could take place, but there are audio-visual models available -- audio, images, and video -- that could be leveraged, and out-of-context information or misinformation could result in some really scary outcomes. As for what we're doing: we only have an image model currently available. We don't have a commercially available audio or video
12:43 pm
product. But for images, we have a mitigation in place at the front end where you can't create an image of a real person. So if you ask the model to create an image of an elected official, a secretary of state, an election official, it won't create that; it will refuse. That said, we know that images that are seemingly innocuous could be taken out of context, and that can be equally harmful. And so one of the things that you mentioned in your article earlier is really around provenance: understanding the origin of these audio-visual models and their output is a huge part of the work that we're doing internally. To give a little bit more context on what that looks like, there's something called C2PA. It's a fancy term for a piece of data that's attached to an image. You can think of it like a passport: as you travel around to different countries, you bring your passport. Similar with this image.
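The "passport" idea described here can be sketched in a few lines of Python. This is a conceptual illustration only, not the actual C2PA format, which is a cryptographically signed manifest embedded in the file itself; the helper names and manifest fields below are hypothetical stand-ins.

```python
import hashlib

def attach_manifest(image_bytes: bytes, generator: str) -> dict:
    # Record who produced the image and a digest of its bytes,
    # roughly what a real C2PA "claim" carries (minus the signature).
    return {
        "claim_generator": generator,
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    # On the receiving end: does the image still match its "passport"?
    return manifest["content_hash"] == hashlib.sha256(image_bytes).hexdigest()

original = b"\x89PNG...pretend image bytes..."
manifest = attach_manifest(original, "example-image-generator")

print(verify_manifest(original, manifest))            # True: untouched image
print(verify_manifest(original + b"edit", manifest))  # False: any edit breaks the link
```

The failing second check is also the limitation raised in the discussion: once an image is modified or the metadata is stripped, the link to its manifest is lost, which is why provenance data is paired with detection classifiers.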
12:44 pm
If it travels around the internet, it has this piece of data attached to it, and with that data someone on the other end can identify where it originated: that it was produced by DALL-E 3, our image generation tool, or by a different model, or that it's an authentic image, perhaps an image that came from the BBC, which is another entity that has signed on to C2PA. This is by no means a perfect solution; if the image is modified, it can lose that data. But it's a really good step in the direction of creating an industry-wide standard where all of these different platforms can talk to each other. Our tools are not a distribution platform; people don't distribute content from our services, but they can take it to a distribution service and upload it to any one of the social media platforms. So making sure that we have a common language that we can use is really important. The last thing I'll say on that
12:45 pm
is that I think it's really important that we continue as an industry to push forward the research in this area around provenance. As I said, the current technology is by no means a silver bullet. One of the things I'm very excited about that we're working on internally is something called a classifier, which effectively allows us to look at a whole bunch of images and, with very high accuracy, identify which ones came from our tools. The thing I'm excited about is that it does that even when the image has been modified in ways that you see out in the wild on the internet. On social media things get cropped, text gets laid over them, and it continues to be able to identify those images with high degrees of accuracy. That's the kind of thing we need in order to be much more robust against these kinds of harms. So, that's sort of top of mind for 2024. Talia: I think that's great,
12:46 pm
making those sorts of things public, in the way you called for in your article, Secretary Schwab, I think is a really exciting way to think about how this could help, how things could be more productive moving forward. I want to return -- Vivian, you started to mention that encrypted messaging apps like WhatsApp are a particular issue. And Secretary Aguilar, you've mentioned that multiple languages are a particular issue and a concern, especially in a place like Nevada and here in Texas. Can you tell us a little bit more about what you're thinking in terms of A.I. and foreign influence, why things like multiple languages matter, and maybe elaborate a bit more on Vivian's comments about WhatsApp and other encrypted messaging apps? Sec. Aguilar: We have a large Latino population in Nevada. They are going to determine the outcome of some of our most critical elections. But it goes back even to A.I. and the translation of information. Again, at the A.I. Democracy Project, we went through some of
12:47 pm
the translations, and the way information was translated into Spanish, the tone was very festive and very party-like, and when you're talking about the seriousness of elections, that's not going to translate very well to that voter. So that's a big concern. We also did it in Hindi, and the way it was translated was so strict and so critical that I think a voter would hear it and be afraid to actually vote, because it was so strict and so direct. So the tone of translation is very, very critical. But the majority of people don't speak second languages, so they're not able to understand the impact translation is able to have. Talia: I think that's one thing to think about: not only the scale in terms of how quickly people can create
12:48 pm
messages using A.I., but also how far they can spread, given distribution channels, and then adding language to it. Sec. Aguilar: The data that's being relied upon in A.I. machines is not generated by these communities either, and so there's a sense of already-existing bias. How do you ensure that you're actually being content-appropriate in those translations and in the information being given? Talia: Yeah. What a good point. In thinking about how we deal with all of this, Vivian, I want to come back to you, because given your background in news, from The New York Times and The Guardian, I'm hoping you can tell us what you think: how prepared are the news media to deal with this? What role should they have? Vivian: They could be more prepared.
12:49 pm
At the Aspen Institute, our objective is to share information across groups, because I think that is the biggest gap. We presented at the National Association of Secretaries of State about what the risks are and what the mitigations are. We have a meeting coming up in two weeks for technologists in Silicon Valley. In fact, the secretaries are going to be there, which I'm very grateful for, to help the tech companies, those who are not as informed as Becky about what the risks are, understand the challenges that those on the ground are facing, so they can also come up with mitigations. And the third part of it is to make sure that the media is ready for how to cover these issues when they inevitably come up. Because I come from news media, I'm in my heart a journalist. I will always consider my primary identity to be journalist. That said, my fellow journalists
12:50 pm
don't always necessarily do the right things. I think there is a little bit of overblown coverage of the big spectacular deepfake, which can lead people to mistrust everything, as I mentioned at the podium earlier. And at the same time, it's being prepared for how to cover not just the big, shiny, spectacular deepfake, but the kinds of things that Secretary Aguilar was talking about: what's happening with language translation, whether it's fair-minded, good-hearted get-out-the-vote efforts or nefarious actors who are trying to dissuade voters using easily accessible language tools, or what stories may be traveling across messaging apps, and really making sure that they are prepared. Really the only thing we can do, I think, for
12:51 pm
2024 is to make sure that the public understands that whatever they hear, whatever they see, whatever strange robocall they get about a bomb scare at all of the voting places, they know where to go to check it out. So, what's your website again? sos.ks.gov. Sec. Schwab: Also important, too: if it doesn't say .gov, chances are it didn't come from us. We have significant cyber protections by using the .gov. There are a lot of imitators that are .co. Vivian: Those in charge of election integrity will be the first place to go -- or local election leaders in their communities, or trusted news media. Talia: Making sure that people
12:52 pm
have those relationships so they will go to trusted media outlets. sec. aguilar: on the trusted media point, i had an opportunity to speak with teachers today. what bothers me now about trusted media is that the majority of americans and students don't have access to that media because it's behind a paywall. and that paywall is a huge barrier to people having the opportunity to get good journalism. and i think when it comes to elections, i wish the paywall would be removed from elections information, because it's in the public interest to ensure people have strong information available to them. vivian: i will say, a lot of the nonprofit local media, some of whom were represented by organizations today, have opened up the paywall. sec. aguilar: that's great, but we need to ensure they have the resources to exist, to do the journalism the people need. talia: that brings us to the next question, which i'd like to
12:53 pm
ask of all of you, which is: if you had the ability to enact one policy with respect to a.i. and elections -- a realistic policy, so this is not the magic wand scenario -- i'm curious to hear what it would be from each of your respective perspectives. who wants to start? sec. schwab: i spent 19 years in the legislature and i love making policy. being a chairman was one of the greatest honors i ever had. when i was chair of financial institutions, the biggest issue was uber, because the question was, who is responsible for the insurance on the vehicle, right? and if you remember, back in 2016, maybe it was 2015, uber canceled their network in kansas, and if you opened uber it said, please respond to chairman schwab about getting uber again
12:54 pm
in kansas. and so you clicked on it and, imagine their network, it shut down our server in our capitol because it got all these emails. so i took down uber. we struck a deal and whatnot, but i believe in the free market, so there's a freedom there. but if it works through critical infrastructure and is engaged in commerce in the united states, you're subject to regulation, which is fair. i really am spending more time on that minnesota law that says no, you can create it, but you have to be honest about what it is. if you're not, then they can throw the book at you, and it's financially going to hurt you. if you're just a college kid or a foreign adversary, it's still going to be a cost, because maybe
12:55 pm
the federal government will not go after you, but minnesota has a national guard. they can still sue and have jurisprudence across oceans, right? it's more of a challenge, but at least you said, hey, this is a standard of what we're going to do. outside of that, how do we make laws better? you're setting a great standard -- do we put that in statute? i don't know. i'm going to hand it off to you. >> that's a great pivot. from our lens it really is about standardization. thinking about provenance, that's one example where there has been, across not just the tech industry but also the news media, some amount of standardization that has occurred organically. but i think to apply that to other areas of this technology, there does need to be some momentum that is across not just industry but also government. one thing you were saying
12:56 pm
earlier is, how do the models respond consistently? one way we can do that, and we are exploring it, is pulling in or polling democratic input, identifying a whole host of representative views to understand what model behavior should look like. i don't think 1,000 people in silicon valley should be responsible for determining what that looks like. we are trying to make good strides toward the absolute standard, but that's something we need to pull together a whole bunch of minds across multiple industries to figure out, and roll out a clear, consistent way to do some of these really tough things in the space of technology. >> i love that collaboration. i think that leads to how you would enact these indicators. one researcher has been doing some work to try to figure out whether, if you display to people just the raw information of where an image came from, that affects whether people find it to be true or false.
12:57 pm
she found it does indeed have some beneficial effects. there's some optimism behind that work, which is exciting. do you want to jump in with what you think? >> back to my days in grammar school, every time you wrote a report you had to use a primary source. if these chatbots could be made able to only use data from primary sources, those sources being dot-gov websites, making sure the information being used is current -- we passed voter access policies in 2021, but they're not showing up in these chatbots, which are citing old nevada law. if we could go back to dot-gov, go back to statutes, and use primary sources as information. >> what do you think? >> i'll hearken back to the minnesota law secretary schwab was referring to. we do need congress, the united states congress, to act and to create legislation along these lines.
12:58 pm
again, we don't want to ban synthetic media. that would be ridiculous. but we do want disclosure, mandatory disclosure, and real penalties for those who don't comply. and also to give the federal election commission more teeth. the federal election commission, they really are only about campaign finance. they don't have any other authority, and probably shouldn't have a lot of other authority, by the way. i'm not recommending that. elections are handled by the states, but when it comes to this kind of disclosure around the use of synthetic media, ai-generated false content, there is a role. i know many members of congress of both parties who agree this is a role that the fec can play. >> great policies. if i had that magic wand right now i would do it. so for our last question here, i want to end with giving people a sense of where we are all coming from, so that after the 2024 election
12:59 pm
we could all reflect back on this panel and what we think. i want to read you some headlines about ai and elections, and hear from each of you, in our short time left, whether you think the fears of ai are overhyped, underhyped, or just about right. >> the guardian says: disinformation reimagined -- how ai could erode democracy in the 2024 u.s. election. poynter: how generative ai could help foreign adversaries influence u.s. elections. from foreign affairs: the coming age of ai-powered propaganda. overhyped, underhyped, or just about right? >> that other great philosopher, taylor swift, says two things can be true at the same time. so it can be overhyped and underhyped at the same time. that's my answer. >> do you want to offer 30 seconds more of explanation? >> sure. we don't know what will happen in 2024. there is reason to be very concerned about the impact of
1:00 pm
synthetic media and ai-generated false information on the election. so it is not overhyped; we will need to be aware of that. but for the reasons i mentioned before, people don't always check to make sure something is true. so that's the underhyped part. the overhyped part is that i fear we're going to just make people stop believing -- like i said over and over again, that they will not believe anything. and that's got nothing to do with technology or tech company policy. that is a massive societal issue that could be incredibly damaging. >> we have to be quick because our time is almost up. over, under, about right? >> as long as we are prepared to respond collectively. >> okay. >> ... clear-eyed about what's coming. i don't necessarily think that all of those things are likely to happen right now with the current technology. but i do think it
