A Detailed Look at Modular: How Is the Father of LLVM Building the AI Engine Language of the Future?

Structure of this article

**1. What is a compiler?**

2. About Chris Lattner, the father of LLVM

  • What are XLA and MLIR?
  • What is LLVM?
  • What is Clang?
  • The relationship between Clang and LLVM

3. About Modular

  • Modular: the artificial intelligence engine
  • About Google’s TPU
  • About deep learning and programmability
  • What are the technical challenges in actually building the engine?

4. About entrepreneurship, engineering team building, and the future of AI

**1. What is a compiler?**

A compiler is a software tool that translates a high-level programming language into code a computer can execute. It converts the source code written by the programmer into binary instructions the machine can understand and execute, and these instructions are packaged into an executable file or library so that programs can run on the computer.

The main workflow of a compiler is: source code → preprocessor → compiler → object code → linker → executable program.
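The pipeline above can be made concrete with a toy sketch: a miniature "compiler" that turns arithmetic source text into an executable form. This is purely a teaching illustration (it leans on Python's own `ast` module as the front end), not how a production compiler is written.

```python
# Toy illustration of the compiler pipeline: source -> tokens/AST -> "object code".
import ast
import operator

# Mapping from AST operator nodes to machine-level operations.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def compile_expr(source: str):
    """'Front end': lex and parse the source into an AST,
    then 'back end': emit a Python callable (our stand-in for machine code)."""
    tree = ast.parse(source, mode="eval").body

    def emit(node):
        if isinstance(node, ast.Constant):
            return lambda: node.value
        if isinstance(node, ast.BinOp):
            left, right = emit(node.left), emit(node.right)
            op = OPS[type(node.op)]
            return lambda: op(left(), right())
        raise SyntaxError(f"unsupported construct: {node!r}")

    return emit(tree)

program = compile_expr("1 + 2 * 3")
print(program())  # -> 7
```

Running `compile_expr` once gives a reusable "program", just as a compiler produces an executable that can be run repeatedly without re-translating the source.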

Another vivid example:

*Teacher: Children, today we are learning addition.*

*bla bla bla...*

*Children: Teacher, we've learned it.*

*Teacher: Good. Now when you see "1+1=?", you can just produce the answer. The compiler plays the same role: it takes what you wrote and turns it into what the machine can execute.*

2. Chris Lattner, the father of LLVM

Before talking about Modular, let's first talk about Chris Lattner's background. He was born in California in 1978, grew up in the San Francisco Bay Area, and started programming when he was very young. He received a bachelor's degree in computer science from the University of Portland, then completed a master's and PhD at the University of Illinois at Urbana-Champaign, focusing on compiler optimization and automatic parallelization.

Chris Lattner led the development of LLVM during his graduate studies; for LLVM, he received the 2012 ACM Software System Award. Lattner was then hired by Apple, where he was responsible for many important projects, including the design and development of the Clang compiler front end, the Swift programming language (the language that replaced Objective-C), and improvements to the Xcode development environment. Swift is widely popular for its simplicity and performance and is used by developers for application development on iOS, macOS, and other Apple platforms.

After leaving Apple, Chris Lattner worked at companies including Tesla and Google, and continued to publish research and contribute to open-source projects in programming languages and compiler technology. At Google he led the TensorFlow infrastructure team and created XLA and MLIR.

Here we explain what XLA and MLIR are:

XLA (Accelerated Linear Algebra) is a domain-specific linear algebra compiler that can speed up TensorFlow models, potentially without any changes to the source code. It can improve both running speed and memory usage.

MLIR (Multi-Level Intermediate Representation) is a compiler framework whose design covers essentially all the common parts of compiler construction, which greatly helps compiler developers.

**More importantly, what is LLVM?** (The following is excerpted from Jianshu; see the reference link for the original text.)

LLVM can be understood as a collection of modular, reusable compiler and toolchain technologies. The name originally stood for "Low Level Virtual Machine," but the project was never really used as a virtual machine, so LLVM below is not an acronym; it is simply the project's full name.

Continuing on: the traditional compiler architecture looks like this:

Taken apart, it includes:

  • Frontend: front-end (lexical analysis, syntax analysis, semantic analysis, intermediate code generation)
  • Optimizer: optimizer (intermediate code optimization)
  • Backend: backend (generates machine code)

The LLVM architecture looks like this

**To describe it accurately, LLVM introduced many innovations in compilers, such as:**

  • Different front-end and back-end use unified intermediate code LLVM Intermediate Representation (LLVM IR)
  • If you need to support a new programming language, you only need to implement a new front end
  • If you need to support a new hardware device, you only need to implement a new backend
  • The optimization phase is a universal phase. It targets the unified LLVM IR. Whether it supports new programming languages or new hardware devices, there is no need to modify the optimization phase.
  • In contrast, GCC's front end and back end are not well separated; they are coupled together, which makes it particularly difficult for GCC to support a new language or a new target platform.
  • LLVM is now used as a common infrastructure for implementing various static and runtime compiled languages (GCC family, Java, .NET, Python, Ruby, Scheme, Haskell, D, etc.)
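The architectural point above can be sketched in Python: with a shared IR, N front ends and M back ends require only N + M components instead of N × M translators. All names and the tiny IR below are illustrative, not real LLVM APIs.

```python
# Sketch: a shared intermediate representation (IR) decouples front ends
# from back ends. Everything here is a toy; names are illustrative.

def c_frontend(source: str) -> list:
    """Toy 'front end': lower a 'return <n>;' statement to a tiny IR."""
    value = int(source.replace("return", "").replace(";", "").strip())
    return [("const", value), ("ret",)]

def x86_backend(ir: list) -> str:
    """Toy 'back end': emit pseudo-x86 from the shared IR."""
    out = []
    for instr in ir:
        if instr[0] == "const":
            out.append(f"mov eax, {instr[1]}")
        elif instr[0] == "ret":
            out.append("ret")
    return "\n".join(out)

def arm_backend(ir: list) -> str:
    """A second back end consumes the very same IR unchanged."""
    out = []
    for instr in ir:
        if instr[0] == "const":
            out.append(f"mov w0, #{instr[1]}")
        elif instr[0] == "ret":
            out.append("ret")
    return "\n".join(out)

ir = c_frontend("return 42;")
print(x86_backend(ir))
print(arm_backend(ir))
```

Adding a new language means writing one new front end that targets the IR; adding new hardware means writing one new back end. The shared optimizer (omitted here) works on the IR and never needs to change.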

**What is Clang?**

Clang is a sub-project of the LLVM project: a C/C++/Objective-C compiler front end built on the LLVM architecture. Compared with GCC, Clang has the following advantages:

  • Fast compilation: on some platforms, Clang compiles significantly faster than GCC (Objective-C in Debug mode compiles about 3x faster than with GCC)
  • Small memory footprint: The memory occupied by the AST generated by Clang is about one-fifth of that of GCC
  • Modular design: Clang adopts a library-based modular design, which is easy to integrate with IDE and reuse for other purposes.
  • Highly readable diagnostics: during compilation, Clang creates and retains a large amount of detailed metadata, which helps debugging and error reporting.
  • The design is clear and simple, easy to understand, easy to expand and enhance

Relationship between Clang and LLVM

In the overall LLVM architecture, Clang serves as the front end. Broadly, "LLVM" refers to the entire LLVM architecture; narrowly, it usually refers to the LLVM back end (code optimization and target code generation).

Source code (C/C++) → Clang front end → intermediate code (optimized by a series of passes) → machine code

Reference: Jianshu, Aug 12, 2018, "A simple explanation to help you understand what LLVM is"

3. About Modular

Modular: the Artificial Intelligence Engine

  • **Chris Lattner's thoughts on compilers, and on founding Modular?**

Chris Lattner: "I created Modular not as a hammer looking for nails, nor to innovate for innovation's sake. Today, at companies like OpenAI, a small number of employees have to spend a great deal of time hand-writing CUDA kernels. Optimization at the AI-compiler level, however, can improve software collaboration and expand the capabilities of people with different skills and knowledge."

"The final form of the compiler is to let users complete any task and solve problems with very simple code, without having to know much about the hardware. The real role of the compiler is to let you express things at a higher level of abstraction."

"NVIDIA's FasterTransformer can bring huge performance improvements, so many large-model companies and developers use it. But if you want to innovate on the Transformer itself, you will be limited by FasterTransformer."

  • **In other words, the value of a compiler lies in its generality. How should we understand generality here?**

Chris Lattner: If you want results as good as or better than FasterTransformer's, but using other, more general architectures (that is, non-Transformer architectures), then a compiler gives you the best of both worlds: you can keep doing research while getting the same excellent performance. In other words, optimization at the compiler level can act as an "AI engine" and help the LLM architecture evolve in the future.

  • **Mojo is currently very popular and is compatible with the Python ecosystem, but Modular's goal is to build a unified artificial intelligence engine. From your perspective, what problems in AI research and development still need to be solved?**

Chris Lattner: If you go back to 2015, 2016, and 2017, when AI was developing rapidly, the technology of that era was led mainly by TensorFlow and PyTorch. PyTorch appeared slightly later than TensorFlow, but the two are similar in many aspects of their design. **However, the people who built and designed TensorFlow and PyTorch came mainly from backgrounds in AI, differential equations, and automatic differentiation. They did not solve the boundary problem between software and hardware.**

So you need Keras (Note: Keras is an open-source artificial neural network library written in Python that serves as a high-level API for TensorFlow, Microsoft CNTK, and Theano, used to design, debug, evaluate, apply, and visualize deep learning models) or nn.Module (Note: nn.Module is a concept unique to PyTorch, and a class that is used constantly).

Underneath these things are operators. How do you implement convolution, matrix multiplication, reductions, and element-wise operations? You need CUDA and Intel MKL (Intel Math Kernel Library), and then you keep building on those foundations.

This model is fine at the beginning, and at first there are very few operators. But every time new hardware is introduced, even just a new CPU variant from Intel, the complexity keeps climbing. Today, TensorFlow and PyTorch have thousands of operators, each called a kernel, and all of those kernels need to be written by hand.
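The operator explosion described here can be illustrated with a toy sketch: traditionally, every (operator, hardware) pair needs its own hand-written kernel, so the table of kernels grows multiplicatively. The registry below is purely illustrative, not any framework's real dispatch mechanism.

```python
# Sketch of why hand-written kernels scale badly: each (operator, hardware)
# pair needs its own implementation. Purely illustrative.
KERNELS = {}

def register(op: str, hw: str):
    """Record a hand-written kernel for one (op, hardware) pair."""
    def wrap(fn):
        KERNELS[(op, hw)] = fn
        return fn
    return wrap

@register("add", "cpu")
def add_cpu(a, b):
    return [x + y for x, y in zip(a, b)]

@register("add", "gpu_sim")  # a second target means a second hand-written kernel
def add_gpu(a, b):
    return [x + y for x, y in zip(a, b)]  # same math, separate code path

@register("mul", "cpu")
def mul_cpu(a, b):
    return [x * y for x, y in zip(a, b)]

def dispatch(op, hw, a, b):
    try:
        return KERNELS[(op, hw)](a, b)
    except KeyError:
        raise NotImplementedError(f"no hand-written kernel for {op} on {hw}")

print(dispatch("add", "cpu", [1, 2], [3, 4]))  # [4, 6]
# dispatch("mul", "gpu_sim", ...) fails: nobody has written that kernel yet.
```

With thousands of operators and a growing list of hardware targets, the missing entries in this table are exactly the hand-written work that new hardware vendors, and the compiler approach described below, have to contend with.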

(Note: In the chip industry, "IP" generally means an IP core: the mature design of a circuit module with an independent function. That design can be reused in other chip projects that include the module, reducing design workload, shortening the design cycle, and improving the success rate of chip design. Because the mature design embodies the designer's wisdom and intellectual property, the industry calls it an IP core; an IP core can also be understood as an intermediate component of chip design.)

In other words, once new hardware launches, you have to rewrite thousands of kernels, so the barrier to entering the hardware market keeps rising. This situation also creates many difficulties for research; for example, very few researchers know how these kernels actually work.

(Note: Because kernel development is usually specific to a hardware platform (even NVIDIA's kernels are not universal, let alone FPGAs or DPUs), custom operators are the main channel for extending a hardware platform's software computing capabilities.)

You should also know that many people are now writing CUDA kernels (Note: a kernel is the function CUDA runs on the GPU), but the skill profile of these engineers is completely different from the skills needed to innovate on model architectures. **As a result, many AI teams face the same challenge: they cannot find technical experts who can write kernels.**

(Note: When computing on a GPU, each computation step can be encapsulated as a GPU kernel and executed on the GPU sequentially. For the sake of generality, traditional operator libraries are designed to be very basic, which is why there are so many of them.)

About Google’s TPU

I took part in Google's TPU project while at Google. The challenge the TPU team faced was: with thousands of different kernels on the market, how do you launch novel hardware? Many colleagues suggested using a compiler for this. **So instead of hand-writing thousands of kernels, rewriting all of those operators, and building a complete ecosystem of its own the way Intel or NVIDIA did, a compiler can be more flexible than hand-writing, because it lets us mix and match kernels in different ways and also unlocks a great deal of performance optimization.**

**Compilers can do this in a generalized way, whereas with traditional hand-written kernels you get a fixed arrangement of the kernels someone found interesting, not the new things researchers actually want.** So Google's XLA (explained above: Accelerated Linear Algebra) was born. XLA can support large, exaflop-scale machines. But the problem returned: XLA emerged to support Google's TPU.

**Building a compiler is genuinely hard, and there are scalability problems too. Today many machine learning engineering projects have to hire compiler engineers, and compiler engineers who also understand machine learning and its surrounding knowledge are even scarcer. In addition, XLA is not scalable and only works well with TPUs.**

About Deep Learning and Programmability

If we look back at the history of NVIDIA's CUDA and deep learning, such as the birth of AlexNet, many people believe AlexNet was the result of combining data (ImageNet) with compute (the power of the GPU).

(Note: In 2012, Alex Krizhevsky and Ilya Sutskever, students of Geoffrey Hinton, one of the three giants of deep learning and a Turing Award winner, proposed AlexNet and won that year's ILSVRC by a significant margin, far ahead of the runner-up. The result drew great attention from academia and industry, and computer vision gradually entered an era dominated by deep learning.)

**But many people forget the importance of "programmability."**

Because it was CUDA that allowed researchers to invent convolution kernels that had not existed before, and TensorFlow did not exist at that time. **In fact, it was the combination of data, compute, and programmability that enabled the novel research that launched the entire wave of deep learning systems.**

**So it is very important to learn from history. How do we take the next step? How do we move into the next era of this technology, one where everyone can benefit from humanity's amazing algorithmic innovations and ideas?** How do we benefit from compilers? How can we leverage the scale and generality of compilers to solve new problems? **Most importantly, how do we benefit from programmability? This is what we are building at Modular: the artificial intelligence engine.**

About the future of artificial intelligence

  • **How do you view the future of AI development? Do you think there will be more collaboration between teams with different profiles or directions? Is one of your goals to make compilers easier for non-compiler experts to use?**

**Chris Lattner:** Human beings are amazing, but no one can hold everything in their own head. **People of different types and specialties, working together, can create something greater than any individual.** For example, I have some abilities, but I basically cannot remember any differential equations, right? So the next inventor of an LLM architecture will definitely not be me (laughs).

**But thinking about it from a systems perspective: if I can get these people to contribute, collaborate, and understand how these things work, breakthroughs will happen.** How do we promote invention? How do we get more people who understand different parts of the problem to actually collaborate? **The exploration of Mojo and the engine is an effort to eliminate that complexity, because today many already-built systems are simply piled together.**

The solutions today are usually bottom-up patches to immediate problems rather than top-down designs. I think Modular provides a simpler stack that helps reduce the complexity of the whole stack. Without refactoring, we stay on the same fragmented, chaotic path (the fragmentation of the AI field): change anything a little and something crashes, performance becomes terrible, or it simply stops working. That all comes from fragmentation at the bottom of the stack.

  • **How should we understand compilers: are compilers and languages a medium for human collaboration and crossover?**

Chris Lattner: I created Modular not as a hammer looking for nails, nor to innovate for innovation's sake. Today, at companies like OpenAI, a small number of employees have to spend a great deal of time hand-writing CUDA kernels. **Optimization at the AI-compiler level, however, can improve software collaboration and expand the capabilities of people with different skills and knowledge.**

The final form of the compiler is to let users complete any task and solve problems with very simple code, without having to know much about the hardware; its real role is to let you express things at a higher level of abstraction. Compilers are also a way to properly automate common optimizations that would otherwise require manual programming.

The first goal is to make the process as streamlined as possible;

The second goal is that if you push a lot of complexity out of your head, you make room for new complexity. Moreover, through abstraction you can take advantage of the compiler, because a compiler has infinite attention to detail, while humans do not;

**Higher levels of abstraction can also give us many other capabilities.** Deep learning systems and Modular have lifted computation to the graph level. What does that mean? Once you are freed from imperative constructs such as for loops and become more declarative, the computational model itself changes. Many people have not realized this yet, but I think it is possible. Existing systems tend to cause headaches; much of what abstraction provides exists to implement pmap and vmap (Note: these transforms cover parallelization and vectorization, alongside automatic differentiation).
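The pmap/vmap idea can be sketched in plain Python: a vectorizing transform lets you write the per-example computation once and lift it over a batch declaratively, instead of hand-writing the loop everywhere. This toy `vmap` only mimics the spirit of JAX's `jax.vmap`; it is not the real API and does none of the real tracing or fusion.

```python
# Toy sketch of a vectorizing transform in the spirit of vmap:
# write the single-example function once, then lift it over a batch.
def vmap(fn):
    """Return a batched version of a single-example function."""
    def batched(batch):
        return [fn(x) for x in batch]
    return batched

def predict(x):
    # Some per-example computation (a stand-in for a model forward pass).
    return 2 * x + 1

batched_predict = vmap(predict)
print(batched_predict([1, 2, 3]))  # [3, 5, 7]
```

The point is the division of labor: the user states *what* to compute per example, and the transform (in a real system, the compiler) decides *how* to map it over data, cores, or devices.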

Progress in this technology has benefited from a large number of well-structured systems, a great deal of novel high-performance computing hardware, and the many breakthroughs already achieved. **So I very much hope Modular can be widely adopted and popular, breaking through this whole series of complexities. That would be wonderful.**

(Note: The biggest difference between declarative programming and ordinary imperative programming is the added concept of time: you can define past, present, and future, instead of maintaining a single one-way timeline through the whole execution path. With a notion of time, the definitions can be simplified, and the computation graph can then be obtained by "deduction," instead of writing an operator here and an operator there and stitching together a static graph.)

  • **Can you define Modular's artificial intelligence engine, artificial intelligence framework, and artificial intelligence compiler?**

Chris Lattner: Most people use tools such as PyTorch when training large models, and in such scenarios CUDA or Intel MKL is quickly pulled in. I collectively call these "engines," and by engine I mainly mean the interface to the hardware. Modular provides a new engine that TensorFlow, PyTorch, and others can plug into. Users can then drive computation and program hardware in a new way, and, given the right abstractions, produce cool implementations.

  • **By your definition, Modular sits between the framework layer and the hardware layer. We wanted to know Modular's petaflops (floating-point operations per second) on an A100, but the website shows only CPUs, with no GPU in sight. So my question is: everyone is working hard to make GPUs faster, so why make the CPU run first?**

Chris Lattner: Thinking from first principles, we have to start from the bottom. How do you define today's AI systems? Many people talk about GPUs every day and argue about GPUs, as if everything revolved around the GPU. But AI is really a large-scale, heterogeneous, parallel computing problem. A traditional AI workload starts with data loading, and the GPU does not load data, so you must also handle tasks such as data loading, preprocessing, and networking, along with a large amount of matrix computation.

To drive a GPU, a CPU is definitely needed. When we develop software for accelerators, we find all kinds of problems, and what each developer considers important is the part of the problem they want to solve. So everyone built a system around their own problem, designed entirely to the requirements of their chip.

**From a technical point of view, Modular wants to build a universal compiler, because it is easy to go from general to specialized, but my experience with XLA is that starting from something specialized and then trying to generalize it is not feasible.**

For the AI industry, training scale is proportional to the size of the research team, while inference scale is proportional to product scale, user base, and so on. **So a lot of inference is still done on CPUs. That is why we decided to start with the CPU and get the architecture right first: the CPU is easier to work with and is always available. Once the general architecture is done, we can keep expanding. We are currently working on GPUs as well, which will launch soon, and over time we will extend to these different kinds of accelerators.**

  • **What are the technical challenges in actually building the engine?**

Chris Lattner: Members of our team have worked on essentially every compiler and related system in the industry. For example, I worked on XLA and TensorFlow, and we also have members from PyTorch, TVM, Intel OpenVINO, and ONNX Runtime. The challenge for everyone is that many of these systems were designed five to eight years ago, and AI then was different from now: there were no large language models.

**The problem is that when you build a system, it starts as a pile of code and then gets bigger and bigger and bigger. And the faster a system evolves, the harder it is to make fundamental changes to it. So we chose to start over from scratch.**

If you still want to be able to hand-write kernels, we first prototype in C++ and then gradually introduce Mojo. That means you can build a very complex automatic-fusion compiler that applies all of the state-of-the-art techniques and also goes beyond them.

We know users hate static shape limitations and the lack of programmability. For example, they do not want to be bound only to tensors (Note: a tensor is essentially a multidimensional array, used to build higher-dimensional matrices and vectors; many large models involve irregular, "ragged" tensors).
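The static-shape complaint can be made concrete: variable-length inputs such as token sequences do not fit a single rectangular tensor, so shape-rigid systems typically pad everything to the longest length. The padding helper below is a hypothetical sketch, not any framework's real API.

```python
# Sketch: variable-length ("ragged") sequences vs. a static rectangular shape.
# Padding to the longest sequence is the classic workaround for systems that
# demand static shapes; this helper is illustrative only.
def pad_to_dense(sequences, pad_value=0):
    """Pad a list of variable-length lists into a rectangular 2D list."""
    width = max(len(s) for s in sequences)
    return [s + [pad_value] * (width - len(s)) for s in sequences]

ragged = [[1, 2, 3], [4], [5, 6]]   # irregular lengths, e.g. token sequences
dense = pad_to_dense(ragged)
print(dense)  # [[1, 2, 3], [4, 0, 0], [5, 6, 0]]
```

Padding wastes memory and compute on the filler values, which is one reason users push for first-class ragged-shape and dynamic-shape support instead.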

  • **Does Modular have design goals or principles?**

Chris Lattner: I don't know that I have a systematic set of principles; having one sounds a bit like holding a hammer and seeing everything as a nail. **But much of what we have to do is unlock the potential of the hardware, and do it in a way that is extremely easy to use. So many of our starting conditions are less about enabling new things and more about removing the complexity of getting things done, which makes it more like design and engineering.**

If you talk to an LLM company, you quickly find that they have spent more than 200 million US dollars on GPUs, specifically A100s with a particular memory size. **Everyone wants to get everything possible out of the GPU (compute). Many people want to get inside the chip and unlock its potential, but many others want more portability, generality, and abstraction.**

The challenge, therefore, becomes how to design the system so that it provides abstraction by default without giving up all of that capability. By contrast, many compilers, especially machine learning compilers, basically try to cover one specific point in the space; they are not general.

Another thing is that we care about users: many people are obsessed with the technology but forget that the people who apply a technology and the people who create it have completely different profiles. We need to understand how the developers who use our tools think.

  • **Modular has just released Mojo for download and use on Linux, with macOS and Windows versions coming soon. What other toolkits and components will be available over the next six to nine months?**

**Chris Lattner:** Mojo is still a young language, and we will work with the ecosystem to make it more and more mature. We want a big community around Mojo, building cool stuff together. To achieve that, we will gradually open-source Mojo, and everyone will have to work through many details together to build a well-functioning ecosystem rather than just a mess.

Just like the Python 2-to-3 disaster that everyone experienced and no one wants to relive (Note: more than 80% of the two syntaxes are incompatible, and because of historical issues many Linux distributions depended on py2 at the bottom; a user who accidentally used py3 and ran pip install xxx could crash the system).
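One concrete instance of the incompatibility the note describes: the meaning of the division operator changed between Python 2 and Python 3, silently altering the results of existing code.

```python
# Python 2 vs. Python 3: the `/` operator changed meaning.
# In Python 2, 3 / 2 == 1 (floor division on ints).
# In Python 3, `/` is always true division; `//` is the explicit floor form.
print(3 / 2)    # Python 3: 1.5 (Python 2 would print 1)
print(3 // 2)   # 1 in both versions
```

Breaks like this, invisible to the parser and only surfacing at runtime, are part of why the migration dragged on for over a decade and why the Mojo team is wary of repeating it.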

**Q: What is your relationship with Guido (Note: Dutch engineer Guido van Rossum, the inventor of the Python language) and the Python Foundation? How do you relate to each other?**

Chris Lattner: Guido knew Mojo was coming, and he has spent a lot of time with our team; we feel very lucky. He occasionally shows up in the Mojo Discord community and asks me hard questions, which is awesome.

We consider Mojo a member of the Python family. Of course, the family has many members, including PyPy, Cython, and others. We hope Python keeps developing and adding new things, and Mojo will keep growing and adding new things too.

Go back 30 or 40 years: there was the C language, and then in 1983 a new thing called C++ appeared. C++ was, in effect, C with classes (Note: in 1979, Bjarne Stroustrup joined Bell Labs and began transforming C into a language with classes; in 1983 the language was officially named C++).

What happened at the time was that C and C++ started out as two different communities, but there was a lot of convergence, sharing of ideas, and exchange of ideas between them.

Of course, all of C's features were eventually integrated into C++. I hope the same thing happens with Mojo and Python.

About Entrepreneurship and Engineering Team Building

  • **Many founders like you have had long, amazing careers as engineers, and then suddenly become CEO. What have you learned about building teams and coaching others? Especially since, having been an engineer, you now also have to lead product and handle fundraising. What do you think?**

**Chris Lattner:** At Modular, my co-founder Tim and I are very close and highly complementary. In the process of starting a company, having someone to talk to is really, really important. What I am experiencing now is unlike anything I experienced as an engineering leader at Google or Apple.

**When we started this company, our belief was: understand the pain. Representing a startup is different from representing a large company. Once the startup was actually founded, I became the engineering lead and started building the engineering team, while Tim took charge of product and business, talking with different companies (customers) without the halo of a big company behind us. For example: what are your current pain points?**

What are you working on right now? What are the challenges? How can we help? What do you think of what we're doing? The challenge for Modular is that what we want to build is a genuinely difficult, very abstract technical problem.

**Solving these hard problems requires hiring very expensive technical experts away from all the big technology companies. So I have to raise a lot of money, motivate employees well, pay them properly, and make them feel comfortable.**

We face customers directly, and we see their pain across the AI field. Building and deploying many of these things is genuinely a mess, and everyone is trapped by too many things that don't work. So our vision is to unify all of this with Modular.

But then the problem returns: while we develop the product, the product itself keeps changing, and needs keep changing over time. So the teams we worked with ended up with a very high level of complexity: lots of different, messy systems developed for different special cases.

  • **On being an engineering leader: you have extensive experience building teams and recruiting engineers. Do you have any lessons or suggestions on project management?**

Chris Lattner: My job is to help the team win. You have to define what winning is, give people a clear vision and a clear goal, and keep everyone aligned. **When you have a large group of very good people and everyone wants to be a hero, a clear goal is very important. When momentum aligns, great progress comes quickly; when it points in opposite directions, it cancels out.**

Internally, I'm often personally involved in helping build the initial infrastructure; it's important to show the team how things should work, and that's part of our culture. The most important thing to me in an engineering team is speed of iteration: if you wait 24 hours, or three weeks, for CI to run, everything slows down.

When hiring, and as you continue to develop your employees, you also want to figure out what each person is good at, right? I really believe that if you have a really good, really passionate team and you pair people with something they truly want to do, they have superpowers.

So much of the time we want to make sure people are solving the right problems, so they can grow, deliver, push forward, and make decisions independently. Many people are intensely focused on the product, or are especially good at business, customers, and customer problems. But you can't solve anything or build a product without a team.

**I like Tim very much because he is really good in areas where I am not, and we all learn from each other.**

About ChatGPT and Artificial Intelligence

The explosion of ChatGPT is super interesting. For people like us who have followed AI for a long time, ChatGPT represents an innovation in the user interface, and it made people realize the power of AI. Looking back, I think it advanced AI's place in the public consciousness by several years.

**What is the most interesting unsolved mystery in the field of artificial intelligence?**

Chris Lattner: I think AI is in its adolescence right now. There are a lot of smart people with different ideas about what AI is, right? Some think everything should be end-to-end neural networks and that software should disappear. I think the question to answer is: what is the balance between trained algorithms and intentionally designed algorithms? I personally don't think it's all one or all the other. If you want to build a cat detector, a CNN is indeed a good approach; if you want to write a boot loader or an operating system, a for loop works just fine. **But where do these approaches settle over time? How do we get application developers to think about these representations more consistently?**

**Our bet on the future is that AI, as a software development methodology, will eventually become part of the toolset people use when thinking about how to build applications: not just iPhone apps and the like, but entire cloud services, data pipelines, and ultimately the iterative construction of whole user products. Of course, we are still on the path of exploration.**

*Thanks to Evolution of Technology Life and all my friends for their continued support over the years, and thanks to ChaosAI.*

*Thanks to Dakai, AsyncGreed, Zhang Bo, Mao Li, Ethan, and Mew for their professional help.*

References:

1.
2. What is "IP" in the chip industry? Xin Analects (Andi863)
3. Chris Lattner's Homepage (nondot.org)
4. nn.functional and nn.Module, Liang Yun, Algorithm Food House, 2020-07-10
5. Jianshu, Aug 12, 2018: A simple explanation to help you understand what LLVM is
