a16z founder: AI is the "ultimate media"

Author: Li Yuan, Geek Park

The investor who best understands media, and the media figure who best understands investing.

That line fits Marc Andreessen, founder of the well-known venture firm a16z, almost perfectly.

From building the Netscape browser to becoming one of Silicon Valley's best-known venture capitalists, Andreessen has lived through the dot-com boom, social media, the mobile internet, and more, and he remains active today.

Now, with AI running hot, names such as OpenAI and Mobius AI have been added to Andreessen's investment list.

Beyond investing, Andreessen, a techno-optimist who likes to share his views on social media, recently published the argument that "AI will save the world."

Marc Andreessen talks AI with Databricks CEO Ali Ghodsi | Databricks

On June 29, local time, at the Data+AI conference held by Databricks, Marc Andreessen sat down with Databricks CEO Ali Ghodsi and once again shared his views on the current state of artificial intelligence and why he does not believe AI poses an existential threat to humanity.

Talking Points

  1. AI consumes massive amounts of data and is becoming the "ultimate media";
  2. The next generation of AI will use larger models, and the "hallucination" problem will be brought under control;
  3. Programmers will not be replaced by AI; good engineers will be able to do more;
  4. An AGI apocalypse will not happen; intelligence consistently brings better things;
  5. AI keeps moving through cycles, and we are living in the best of times.

The following is a transcript of Marc Andreessen's remarks at the event, edited by Geek Park:

01 AI is becoming the "Ultimate Media"

The idea of artificial intelligence was actually proposed in the 1930s and 1940s; people have been thinking about it for roughly 80 years. It has been alongside the computer industry and the Internet the whole time, and people kept trying to make it work, but it never became a major force in the industry.

There is a great book, Rise of the Machines, that tells the backstory of artificial intelligence. In the '30s, '40s, and '50s it was called cybernetics. Even before electronic computers existed, there was a debate among people like John von Neumann and Alan Turing. They already knew that electronic computers would eventually be built; ever since Babbage conceived the difference engine, people had been working out how to build computers.

The question at the heart of their debate was really about the nature of computers: should a general-purpose computer be what is now known as a von Neumann machine, that is, one that executes sequences of instructions deterministically as directed by its programmers? Or should it be based on a model of the human brain? The first neural network paper was published in 1943, so even then they knew a computer could, in principle, be built out of neurons.

A lot of people argued at the time that no, we shouldn't build von Neumann machines, we should build brain models instead. But there were no chips, no data, none of the underlying technology, so they couldn't make it happen.

The major breakthrough came in the past five years, when this approach suddenly started to work, and one of the most interesting questions is: why now? **This has a lot to do with the theme of this conference, and a big part of the answer is data.** It turns out that for AI to work, it takes an enormous amount of data.

We had to scale up the Internet, get the full corpus of the global web and the full crawl data that feeds the search engines, and get all the image data, Google Images, video, and so on, in order to train these models. It turns out they do work, and of course that means making AI work even better now requires even more data. So it feels as though the worlds of internet data and artificial intelligence are colliding, and magic is happening.

There is a Marshall McLuhan idea that I agree with. McLuhan was a famous media theorist who said, some 40 or 50 years ago, that every medium becomes the content of the next medium. What were people doing when radio arrived? Basically reading newspaper articles on the air. What did they do when TV came out? They basically televised lectures and stage plays. When the Internet came along, it suddenly became a platform that could contain every previous form of media, including TV, movies, and everything else.

**Artificial intelligence is the ultimate example of this idea: every earlier form of media essentially becomes part of its training.** A major breakthrough in AI right now is the concept of multimodal AI. If you use ChatGPT today, it is trained on text; if you use Midjourney, it is trained on images. But the new AIs about to be released will be trained on multiple media types at once: text, images, video, structured data, documents, and mathematical equations, all at the same time.

AI will be able to cut across all of these domains. Every form of media that has ever existed, all of that data, matters.

02 AI trains AI, and it can both create and calculate

Each generation of artificial intelligence will also become a data source for the next, and bigger and better AI will keep emerging.

AI research today is basically about how to use human-created data to train AI. Humans then do what is called reinforcement learning, essentially tuning the AI's outputs with feedback. But a lot of research now focuses on getting AIs to teach and train one another, so there will be an upward cascade in which AIs actually train their successors.
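As a rough illustration of that cascade, here is a minimal Python toy, my own sketch rather than anything described in the talk: a hand-written "teacher" rule labels synthetic data, and a "student" is fit purely on those teacher-generated labels, so the older model's outputs become the newer model's training set.

```python
import random

# Toy illustration (not Andreessen's method): a "teacher" model labels
# synthetic data, and a "student" is trained only on those labels,
# i.e. one generation of AI producing the training signal for the next.

def teacher(x: float) -> int:
    """Stand-in for a pretrained model: classifies numbers above 0.6 as positive."""
    return 1 if x > 0.6 else 0

# 1. The teacher generates a synthetic, teacher-labeled dataset.
random.seed(0)
inputs = [random.random() for _ in range(1000)]
dataset = [(x, teacher(x)) for x in inputs]

# 2. The student "trains" by searching for the decision threshold that
#    best reproduces the teacher's labels.
def fit_student(data):
    best_threshold, best_accuracy = 0.0, 0.0
    for candidate in [i / 100 for i in range(101)]:
        correct = sum(1 for x, y in data if (1 if x > candidate else 0) == y)
        accuracy = correct / len(data)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = candidate, accuracy
    return best_threshold, best_accuracy

threshold, accuracy = fit_student(dataset)
print(f"student learned threshold={threshold:.2f}, agreement with teacher={accuracy:.1%}")
```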

**Today's neural networks are a new type of computer: a probabilistic computer.** What does that mean? Ask it the same question twice and it gives different answers. Ask the question a different way and you get a different answer. Change the training data slightly and you get a different answer. If you flatter it, or tell it to answer in the style of some famous person, or apply various prompt-engineering tricks, it gives different answers. And then it can do one astonishing thing: it can hallucinate.
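To make "probabilistic computer" concrete, here is a minimal sketch of the standard temperature-sampling mechanism language models use; the word scores below are made up for illustration and are not from the talk. Because the next word is sampled from a probability distribution rather than chosen deterministically, identical prompts can produce different outputs.

```python
import math
import random

def sample_next_word(logits: dict[str, float], temperature: float = 1.0) -> str:
    # Softmax with temperature: higher temperature flattens the distribution.
    scaled = {w: s / temperature for w, s in logits.items()}
    max_s = max(scaled.values())
    exp = {w: math.exp(s - max_s) for w, s in scaled.items()}
    total = sum(exp.values())
    probs = {w: v / total for w, v in exp.items()}
    # Draw one word at random according to those probabilities.
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Hypothetical scores for the word following "The capital of France is".
logits = {"Paris": 5.0, "Lyon": 2.0, "beautiful": 1.5}

for _ in range(5):
    print(sample_next_word(logits, temperature=1.0))
# At temperature near 0 the most likely word nearly always wins;
# at higher temperatures the less likely continuations appear more often.
```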

If it doesn't know the answer, it makes one up. When people see this, it scares the engineer-minded. But creative people think: wow, computers can actually create things; it's pretty amazing that there is finally a computer that can write fiction.

I talk to a lot of friends, and some of them say, "Well, I don't know if I can use AI because I'm not sure if the answer is correct." And I say, "Have you ever worked with a human?" If a person tells you something, you'll probably want to double-check at some point as well to make sure that what they said was factually accurate. But you communicate with other people because they have ideas that you don't have, and they create thoughts that you don't.

**Now we have both types of computer: the engineering type that outputs deterministic results, and the creative type.**

What happens next is that they get combined and you end up with hybrid systems. Take ChatGPT: ask it a math or science question and it will often answer incorrectly. But pair it with the Wolfram Alpha plugin and it suddenly starts answering correctly. So I think a new form of engineering is emerging in which you combine these two models of computing, and you get computers that can both create and calculate.
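Here is a hedged sketch of that hybrid pattern in Python: route anything that looks like arithmetic to a deterministic evaluator, and everything else to a (stubbed) generative model. The routing rule and the stub are illustrative assumptions, not the actual ChatGPT and Wolfram Alpha integration.

```python
import re

def deterministic_calculator(expression: str) -> str:
    """Engineering-style computer: same input, same exact answer, every time."""
    if not re.fullmatch(r"[\d\s\+\-\*/\(\)\.]+", expression):
        raise ValueError("not a pure arithmetic expression")
    return str(eval(expression))  # acceptable here because of the character whitelist

def creative_model(prompt: str) -> str:
    """Stand-in for a probabilistic language model (would call an LLM in practice)."""
    return f"[generated prose about: {prompt}]"

def hybrid_answer(question: str) -> str:
    # Crude router: if the question contains an arithmetic expression, hand
    # that part to the exact tool; otherwise let the creative model answer.
    match = re.search(r"[\d\(\)][\d\s\+\-\*/\(\)\.]*[\d\)]", question)
    if match:
        return deterministic_calculator(match.group())
    return creative_model(question)

print(hybrid_answer("What is 12 * (7 + 5)?"))    # -> 144, computed exactly
print(hybrid_answer("Write a haiku about data"))  # -> text from the model stub
```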

03 AI will not terminate programmers

I have an eight-year-old, and the most moving thing about AI for me is that from now on every child, mine and everyone else's, will grow up under the guidance of an AI teacher, coach, mentor, and advisor. It will be with them throughout their lives and do everything possible to make sure each of them reaches their full potential.

I introduced ChatGPT to my eight-year-old about a month ago and installed it on his laptop. I told him he could ask it any question and it would answer. He said, "Well, of course. Isn't that what computers are for? Of course it will answer all your questions." He didn't grasp the significance of it, but I did. I remember every step the computer industry took to reach the point of being able to answer any question; to him it was simply obvious. I think kids are going to grow up in a very different, better world.

I tend to think that a really good programmer will still need years of training and a solid grasp of the fundamentals of programming, just as a really good mathematician still needs mathematical training even though calculators exist. **So really good programmers will still understand everything from the bottom up, but they will be far more productive than before, and they will be able to do more over the course of a career.**

In the future, most programmers' work will move up a level. Increasingly, your job as a programmer will feel more like being a manager of programmers than writing all the code yourself. We will all be managers, managing AI.

Today we use tools like GitHub Copilot, where AI makes suggestions, fixes bugs, and so on. As these systems become more capable, you as a programmer will be able to hand them more complex tasks. You'll simply tell them: write this code, write that code, do this, do that, and they will go off, execute, and report back to you.

Today the pairing is one human and one AI copilot. My guess is that in the future one person will be paired with more than one of these AI copilots: maybe two at first, then five, then ten. A very skilled programmer might have a thousand of these AI systems. At that point you are effectively supervising an AI workforce, and the question becomes how much time, attention, and energy you can devote to overseeing the whole thing.
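As a speculative sketch of what "supervising an AI workforce" could look like in code, the snippet below fans a task list out to several worker "copilots" and collects their reports; the copilot function is a stub standing in for a real code-generation service, which the talk does not specify.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def copilot(task: str) -> str:
    """Stand-in for an AI copilot that goes off, does the work, and reports back."""
    return f"done: {task} (diff ready for review)"

tasks = [
    "write unit tests for the parser",
    "fix the null-pointer bug in auth",
    "refactor the logging module",
]

# The human's job shifts to dispatching work and reviewing the reports.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(copilot, t) for t in tasks]
    for future in as_completed(futures):
        print(f"[report from copilot] {future.result()}")
```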

A lot of people who can't code today will also be able to program effectively. This trend has been under way for a long time, with many low-code and no-code tools letting ordinary people build programs without a computer science degree, and I think it will accelerate. So a lot of non-professional programmers will be able to create code.

There is a classic fallacy in economics known as the lump of labor fallacy: a zero-sum worldview holding that there is a fixed amount of work to be done, and if machines do that work, humans will have nothing left to do. In fact, the exact opposite has happened. **When machines can do jobs for humans, you actually free people up to do more valuable things.**

There was a time when 99% of people were basically farmers. Then, after the Industrial Revolution, there was a time when a huge share of the population worked in factories. Today, by comparison, a much smaller share of people work on farms and in factories, yet there are more jobs overall, because enormous new demand has been created along with many new businesses and industries. So I think this will drive huge economic growth, which will in turn bring a lot of job growth and wage growth.

Also, code has a property: the world's appetite for it can never be satisfied. There are always more programs to write, always more things to do with code. Everyone in business knows this. No one has all the software they want; what they lack is the time and resources to actually build it. So my guess is that in the end there will be far more software, and far more people working on software.

04 The "evil AI" that destroys humanity does not exist

There has been a recurring idea throughout human history that something comes along that fundamentally alters the human experience and then either leads to utopia, the concept we now call the Singularity, or produces a dystopia, a hell on earth where everything goes horribly wrong. I'm an engineer by training, and this sounds very much like science fiction to me. I don't think that's actually what happens.

I like a point made by a guy at Berkeley, **who described what we're doing as a species, as a civilization, as "Slouching Towards Utopia."** I really like that phrase. It means that things have basically been getting better: in material well-being, health, and intelligence, people's capabilities have gradually improved, not in a way that delivers an actual, literal utopia, but we are slowly slouching toward one. Even in our imperfect, flawed, fallen world, we still manage to improve things to some degree. That attitude is a form of cautious optimism, not radicalism.

There are currently two views on how AI would bring about the end of the world. One is that AI will develop goals of its own: like the plot of "Terminator," it will wake up one day and decide it hates us. My answer to this is that **it is not like a person; it has no consciousness, no will, none of those things.**

Then there is the other view, held by the so-called "AI doomers," who believe that AI does not need self-awareness or any kind of self in order to create scenarios that destroy humanity.

Take, for example, the famous "paperclip maximizer" thought experiment. It posits a scenario where someone tells an AI to make paperclips, and the AI decides it must convert all the atoms on Earth into paperclips, even the sunlight, even the atoms in every human body. To maximize the number of paperclips in the world, it will develop its own energy source, master nuclear fusion, build its own space station, and field its own robot army. It will do whatever it takes to maximize the number of paperclips.

Whether or not it has free will, this example strikes me as very strange. And we also have to consider practical constraints: where will it get the chips to run those complex algorithms and make all those paperclips? Right now, we can't even get the chips to run the AI in our own startups.

Maybe right now there's an evil baby AI in a lab at Databricks that wants to rule the world, so it has put in a purchase order with Nvidia, but it isn't getting the chips at all (laughs). I think we should wait until we actually see those evil baby AIs before worrying too much about the big ones.

The reason to be optimistic about artificial intelligence comes down to the concept of intelligence itself. We know a lot about human intelligence; it has been one of the most important subjects studied by social scientists over the past century. **In humans, intelligence makes everything better. That is a strong statement, but there is plenty of research to back it up.**

Being smarter basically means doing better. People with higher intelligence are more successful academically and more successful in their careers. Their children are more successful. They are healthier and live longer. They are less violent, better at handling conflict, and better at solving hard problems. They are also less biased, more open-minded, and more receptive to new ideas. So in humans, applying intelligence is the one thing that makes everything better.

The world around us, including the fact that we can meet and talk in a space like this, and everything else we do, did not appear one morning with all these wonderful buildings and electricity already waiting for us. Humans built it, step by step, through the application of intelligence.

Everything we love was built with intelligence; it is what makes the world go round. But we have always been limited by our own capabilities and by the data available to us. Now we have the opportunity to apply machine intelligence to all of these endeavors and improve everyone's ability to get things done in the world.

05 The people who research AI are heroes

Starting in the 1940s, AI scientists worked for basically 80 years without seeing the field pay off. I remember studying computer science and artificial intelligence in college; at the time, AI was a fringe field, a theory that people doubted.

There was an AI boom in the '80s, but it didn't work out. The bubble burst, and it was a very bad time, so by the end of the '80s artificial intelligence was viewed with deep skepticism. This was the fourth such cycle: people pinned great hopes on AI, and in the end those hopes didn't come true.

The AI scientists of that era spent their lives in AI and computer science departments and labs: they earned PhDs, became professors, taught AI for 30 years, and retired, having worked hard for a lifetime with little to show for it. Many of them have since passed away.

They were working on a set of theories and ideas that we now know work, but it took 80 years. That kind of determination, foresight, courage, insight, and tenacity, throwing yourself into a field where you never see the reward while people around you wonder whether you're being realistic at all. Now that we know these ideas work, we look back and think: wow, they saw the future, they understood all along what needed to be done; it just took time for it all to pay off. I put them in the legend category.

**I think the people pushing AI forward now belong in the hero category: the entire AI research community, including everyone at this conference today.**

I use the word hero deliberately because, as I discussed in the essay, we are in a cultural moment right now where people are really angry about everything. I don't know if people notice, but right now a lot of people are in a bad mood and unhappy about a lot of things. The world is in some kind of emotional slump.

So as soon as anything new comes out, there's an immediate debate about how bad it is, how terrible it is, how it's going to destroy the world and destroy everything; read the newspaper coverage and what's happening looks like a catastrophe. So anyone who is making the world a better place through this technology, I think what they're doing makes them heroes.

People have been working toward this for hundreds of years, and we can now finally reap the benefits of AI. We're really lucky to be here right now. Each of you can be one of the heroes of the future.
