#201 Re: How do you imagine the world in 2,000 years?
Posted: 03/12/2015 22:48
KTK VisokoSarajmen wrote:
Eco-leather, right?
Na!UnscarD wrote:
You'll be able to do that in 40 years, not in 2,000... at least according to Ray Kurzweil... in 80 years computers will be so powerful that it will be possible to transfer the complete brains of all the people in the world, who would in that way go on living forever...
shiljak wrote:
Just as today you have a picture of someone, in 2,000 years you'll be able to make a simulation of that person on a computer.
Say you copy someone when they were 30, and the copy talks and behaves virtually the same as they did then, and knows everything they knew.
Last year, a curious nonfiction book became a Times best-seller: a dense meditation on artificial intelligence by the philosopher Nick Bostrom, who holds an appointment at Oxford. Titled “Superintelligence: Paths, Dangers, Strategies,” it argues that true artificial intelligence, if it is realized, might pose a danger that exceeds every previous threat from technology—even nuclear weapons—and that if its development is not managed carefully, humanity risks engineering its own extinction. Central to this concern is the prospect of an “intelligence explosion,” a speculative event in which an A.I. gains the ability to improve itself, and in short order exceeds the intellectual potential of the human brain by many orders of magnitude.
Although Bostrom did not know it, a growing number of people around the world shared his intuition that technology could cause transformative change, and they were finding one another in an online discussion group administered by an organization in California called the Extropy Institute. The term “extropy,” coined in 1967, is generally used to describe life’s capacity to reverse the spread of entropy across space and time. Extropianism is a libertarian strain of transhumanism that seeks “to direct human evolution,” hoping to eliminate disease, suffering, even death; the means might be genetic modification, or as yet uninvented nanotechnology, or perhaps dispensing with the body entirely and uploading minds into supercomputers. (As one member noted, “Immortality is mathematical, not mystical.”)
He believes that the future can be studied with the same meticulousness as the past, even if the conclusions are far less firm. “It may be highly unpredictable where a traveller will be one hour after the start of her journey, yet predictable that after five hours she will be at her destination,” he once argued. “The very long-term future of humanity may be relatively easy to predict.” He offers an example: if history were reset, the industrial revolution might occur at a different time, or in a different place, or perhaps not at all, with innovation instead occurring in increments over hundreds of years. In the short term, predicting technological achievements in the counter-history might not be possible; but after, say, a hundred thousand years it is easier to imagine that all the same inventions would have emerged.
Bostrom calls this the Technological Completion Conjecture: “If scientific- and technological-development efforts do not effectively cease, then all important basic capabilities that could be obtained through some possible technology will be obtained.” In light of this, he suspects that the farther into the future one looks the less likely it seems that life will continue as it is. He favors the far ends of possibility: humanity becomes transcendent or it perishes.