Artificial Intelligence Ethics: CVU Student Shares Thoughts, Advice To Vermont Lawmakers

Mar 30, 2018

Vermont lawmakers are considering legislation to create an artificial intelligence task force. If the bill wins final approval, Vermont will be the first state in the country to take this step.

Milo Cress, a junior at Champlain Valley Union High School, went to the Vermont Statehouse this winter to testify in favor of this bill.

Vermont Edition recently spoke to Cress about his interest in this subject and the advice he gave state lawmakers looking at the bill.

The interview below has been edited for clarity and brevity.

On what he told lawmakers:

"My advice to lawmakers was to include as many people in the discussion as possible, because when we're developing and regulating artificial intelligence technologies, we want the developers and the regulators to be on the same page. We want everyone to know what our capabilities are and what our obligations are.

"And as we grow along with this AI technology, it'll come to fruition as we are able to take the reins of its development. And I want ... when I do help take the reins of the development of AI, for it to be in as responsible a state as possible so that I can pass on to my children a technology which is powerful enough to change the way that we learn, but responsibly developed enough to make sure that we're still in an environment where we can learn in a safe way."

Listen to the full Vermont Edition episode from March 30 about the proposed artificial intelligence commission.

On how he got interested in this topic:

"I think it all started when I was fascinated by how machines could learn. How processes which seem deterministic — if you give it one input, it'll give you a set output — how can you get it to change and adapt and act intelligently? And it's a fascinating question that I'm still exploring now.

"And it's a question that I think will shape a lot of technology in the future because the ability for machines to think creatively and adapt to challenges is going to present a lot of ethical and technological and legal challenges. And I think those questions, the questions that arise, are very important to answer. As we're learning more about these technologies, we'll learn more about what the bounds of them are."

On what's at stake with AI development:

"When we exhibit intelligence, we think of it as a human trait and it's what's differentiated us from other animals as we've evolved. But some artificial intelligence technologies are doing just that, using genetic algorithms to evolve and adapt to changes in their environment. And that's a really fascinating thing.

"And I think what's at stake here is, in the short term, our ability to trust these systems to act without bias, without error and in a fair-minded way — something that we often can't trust people to do. What we'll learn is what it means to be human and what the ethics of consciousness and of intelligence are. And I think that's something that's being addressed even today, and I think it's something that we need to address today."

More from Vermont Edition — Smart, But Not Dangerous: Striking The Right Balance On Machines That Think [March 9]

On developing AI responsibly:

"Often the question is asked, 'how can we pursue these new technologies?' without the question being asked, 'how can we do it in the most responsible way?' And time and again that hasn't led us down a path toward success in the long term, and I think that artificial intelligence is no exception. Elon Musk, I believe, said that artificial intelligence is the single greatest existential threat to humanity. And I believe that, for better or worse, it's one of the last things that humans will ever need to invent.

"In the best case scenario, the pace of artificial intelligence's innovation will outstrip our own as it becomes able to redesign itself in a more innovative way. And then it's able to innovate more, and that feedback loop of constant redesign and innovation, many people believe will lead to the singularity or the melding of, you know, ourselves with AI technologies or the ascendance of AI.

"In the worst case scenario, as a tool it'll work in its own interests and disregard ours. And if it does that, even inventing it could be an unethical thing to do. But in the short term, I think the problems that AI poses, or the questions that we'll have to answer before we implement AI in the mainstream, are how those biases and errors will play out in our society."

Cress then cited a current example: an algorithm used in the legal system to assess the risk of recidivism, which a ProPublica investigation found to exhibit racial bias in its risk determinations.

" ... That's a scary thought because whether the data it's being fed is biased or whether the conclusions it's drawing are biased, there is a certain amount of bias in the system. And unless we can design systems that are transparent, that are easily debuggable, and that let us figure out where the bias is coming from, then we can't trust them to make decisions that can shape people's lives."

On technology in schools:

"Often schools, when they sign up for online services, are being forced to accept terms of service that they can't negotiate. You either accept it and use the service or you decline those terms of service and you aren't able to use it. And when valuable tools are being leveraged for a certain amount of data, then it's a hard choice that schools have to make.

"And I hope at some point that technology providers and companies will be able to negotiate what data exactly is shared with advertisers, what data is stored by the company and where that data is stored — because often I've read the terms of service for technologies that are being used inside of my school and other school districts, and often data is stored in warehouses and data facilities that are not even in the United States.

"And the laws that apply to managing that data often are way different, and especially that's worrisome when the laws of that country mean that the country itself, the government, can access that data without notifying users.

"And all of this sounds pretty Orwellian until you see that data is a commodity that's traded at the same rate as many currencies. And that's a scary thought, I think, and it's not something that I, as a student, want to be caught up in."

Broadcast on Friday, March 30, 2018 during the noon hour; rebroadcast at 7 p.m.