Channel: Scott MacLeod's Anthropology of Information Technology & Counterculture

Iaraka River leaf chameleon: "'Does Neuralink Solve The Control Problem' may be a straw man," philosophically, "Why This Robot Ethicist Trusts Technology More Than Humans: MIT’s Kate Darling ...," modeling of a fly brain or a mouse brain, the extended computer science / brain and cognitive science departments/coders (over decades) of the Stanford/MITs, the Oxbridges, the Univ Tokyo+, and conversation with the best universities in the Chinese, Arabic, and Persian languages, for example, seem to me to be important here


Richard Price:

I wrote this piece last week about Elon Musk's new company, Neuralink. Musk's idea is that we need to merge with AI in order to ward off the threat of superintelligence. In the piece, I discuss this strategy and argue that merging with AI doesn't make AI safer.

Any thoughts, comments, or questions would be much appreciated!

https://www.academia.edu/s/0c5db7bb12/does-neuralink-solve-the-control-problem?source=link

*
Hi Richard, and All,

Have folks read "Why This Robot Ethicist Trusts Technology More Than Humans: MIT’s Kate Darling, who writes the rules of human-robot interaction, says an AI-enabled apocalypse should be the least of our concerns" (https://magenta.as/why-this-robot-ethicist-trusts-technology-more-than-humans-8969d0b5f0a0#.7zzkfn2m1)? (See, too - http://scott-macleod.blogspot.com/2017/03/lake-michigan-stanford-law-codex-court.html).

As a follow-up to my observation here two days ago - that "'Does Neuralink Solve The Control Problem' may be a straw man," philosophically - my hope is that the extended computer science / brain and cognitive science departments and coders (over decades) of the Stanford/MITs, the Oxbridges, the Univ Tokyo+, and South Korean universities (Seoul National Univ+) will be able to successfully inform the control problem concerning non-benevolent AI - and do so in each country's official languages (how best to come into conversation with the best universities in the Chinese, Arabic, and Persian languages, for example, seems to me an important question here), and especially with their cultures, since figures like Don Knuth, the great Stanford Professor Emeritus of Computer Science, emerge from specific ethical traditions (e.g. Christianity, Buddhism). "Identity" questions in anthropology (I'm an anthropologist), and language/coding questions, seem central here too.

Perhaps CC World University and School - http://worlduniversityandschool.org - which is like CC MIT OCW (currently in its 7 languages - https://ocw.mit.edu/courses/translated-courses/) together with CC Wikipedia (in its 358 languages) - will successfully help create ethical online CC universities in all countries' official languages. (World University and School is also Quaker-Friendly-informed in part.)

Best,
Scott
scottmacleod.com


*
Thanks, Richard and all - interesting. The most promising modeling of a fly brain or a mouse brain that I've seen comes from Stanford's/Google's research head Tom Dean. See this Stanford Neuroscience talk from October 2016 - https://www.youtube.com/watch?v=HazJ7LHihG8 - (and the "brain" label in my blog - http://scott-macleod.blogspot.com/search/label/Brain - as well). With this modeling as a kind of AI - and with Google's and Stanford's digital and knowledge-of-brain resources - I think it will be a long time before this AI, as a pragmatic example and for STEM research too, develops control-problem issues. Until then, "Does Neuralink Solve The Control Problem" may be a straw man. I find it fascinating, though, that we may be able to model a fly's brain, as well as a mouse's brain, by 2020 - which Tom Dean may have been calling for in this talk - and possibly address the threat of AI in new ways.

Best,
Scott
scottmacleod.com









