cce62 • 6 years ago

One of Stuart Russell's points, I think, was that you don't have to reach general AI to see adverse impact. For example, if a judge believes that sentencing can be delegated to an AI (which may be a "run of the mill" expert system or linear classifier), then we have a situation that demands society confront the question of AI safety. That exact scenario is happening today.

I think the need for a cogent presentation of evolving AI/machine learning to be infused into legal, medical, and policy curricula is quite urgent.

Chris DeGruyter • 6 years ago

It's an intriguing problem to be sure, but my chief worry is that we're bypassing crucial steps in understanding real intelligence by creating artificial neural nets with feedback and giving them direct connections to all of human knowledge. No form of intelligence in existence has ever been born this way, and escaping the norm without understanding its structure can't be as beneficial as we think. I think the community needs to take a step back, start by replicating what we can observe, and then expand on that as appropriate data suggests it is safe.

Dan Syrstad • 6 years ago

It's hard to know how long it will take to fix a problem when you don't have any idea what the problem is. What will an AGI look like? No one knows yet.

Benjamin Todd • 6 years ago

It's true there are many things that are hard to research until we know what form AGI will take, but there are other parts of the control problem and AI policy that seem useful regardless of the architecture.
https://arxiv.org/abs/1606....

Michael Cohen • 6 years ago