While chatbots like ChatGPT have wowed the world with their eloquence and apparent knowledge (even if they often make things up), Voyager shows the huge potential for language models to perform helpful actions on computers. Using language models in this way could automate many routine office tasks, potentially one of the technology's biggest economic impacts.
Video games have long been a test bed for AI algorithms, of course. DeepMind, the company behind AlphaGo, the machine learning program that mastered the extremely subtle board game Go back in 2016, cut its teeth by building agents that played simple Atari video games. AlphaGo used a technique called reinforcement learning, which trains an algorithm to play a game by giving it positive and negative feedback, for example based on a game's score.
It is more difficult for this method to guide an agent in an open-ended game such as Minecraft, where there is no score or fixed set of objectives and where a player's actions may not pay off until much later. Whether or not you believe we should be preparing to contain the existential threat from AI right now, Minecraft seems like an excellent playground for the technology.
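The reward-feedback loop described above can be sketched with tabular Q-learning on a toy environment. Everything here (the five-state corridor, the parameter values, the function names) is invented for illustration, not taken from Voyager or AlphaGo:

```python
import random

# Toy reinforcement learning: states 0..4 on a line, actions -1 (left)
# and +1 (right); the only positive feedback is a reward of 1.0 for
# reaching state 4. The agent learns which action each state favors.
N_STATES = 5
ACTIONS = [-1, 1]

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # Epsilon-greedy: mostly exploit the best-known action,
            # sometimes explore; ties are broken at random.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: (q[(s, x)], rng.random()))
            s2 = min(max(s + a, 0), N_STATES - 1)
            reward = 1.0 if s2 == N_STATES - 1 else 0.0
            # Q-learning update: nudge the estimate toward the reward
            # plus the discounted value of the best next action.
            best_next = max(q[(s2, a2)] for a2 in ACTIONS)
            q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# The learned policy should prefer moving right in every state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

The score-style feedback works here because every episode ends in a reward; in an open-ended world like Minecraft there is no such terminal signal, which is exactly the difficulty the paragraph above points out.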
Mentil Smack-Fu Master, in training
4y
90
Today it's hunting a pig in a world of blocks.
Tomorrow it's hunting long pig in a world of blocs.
stooart Seniorius Lurkius
7y
12
Subscriptor++
First there was the VCR to watch TV for us, then there was the digital monk to believe in things for us. Now we'll soon have the digital gamer to play games for us!
malor Ars Legatus Legionis
19y
11,382
Subscriptor
I find it absolutely eerie that 'pick randomly from the most likely next words in a sentence', used as an algorithm, can do these astonishing things. It makes me wonder very intensely if our own intelligence is not what we think it is.
Have you ever questioned the nature of your reality?
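The "pick randomly from the most likely next words" loop malor describes can be sketched in a few lines, with a bigram count table standing in for a real language model. The corpus and all names here are invented for illustration:

```python
import random
from collections import Counter, defaultdict

# A stand-in "model": counts of which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat saw the mat".split()
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

def generate(start, length, k=2, seed=1):
    """Top-k sampling: at each step keep only the k most likely next
    words and pick among them at random, weighted by their counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        candidates = bigrams[out[-1]].most_common(k)
        if not candidates:
            break  # dead end: no observed successor for this word
        words, counts = zip(*candidates)
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the", 5))
```

Real language models replace the bigram table with a neural network scoring every word in the vocabulary, but the sampling loop itself is this simple.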
WhatDoesTheFoxSay Smack-Fu Master, in training
1m
16
malor said:
I find it absolutely eerie that 'pick randomly from the most likely next words in a sentence', used as an algorithm, can do these astonishing things. It makes me wonder very intensely if our own intelligence is not what we think it is.
Have you ever questioned the nature of your reality?
On one level, I think it's very much worthwhile to ponder the nature of reality and of our selves. It can help us to mature and to grow. And to an extent, finding things that inspire this kind of introspection can be really helpful and meaningful. And it is true that we don't understand very much about intelligence or about how the brain works.
But you've got to remember that there is no such thing as an "AI agent". Language models do not understand the meaning of their training data. They do not understand the meaning of the output they generate. However it is that our brains really work, it's certainly nothing like how so-called AI systems work. See Moravec's paradox for more on why that is.