
Commit 445128d

gmagogsfm authored and facebook-github-bot committed on Jul 8, 2020
Add PyTorch Glossary (pytorch#40639)
Summary: Pull Request resolved: pytorch#40639
Differential Revision: D22421207
Pulled By: gmagogsfm
fbshipit-source-id: 7df8bfc85e28bcf1fb08892a3671e7a9cb0dee9c
1 parent bce75a2 commit 445128d

File tree

2 files changed (+83, −0 lines)

‎GLOSSARY.md

+82 lines
# PyTorch Glossary

- [PyTorch Glossary](#pytorch-glossary)
- [Operation and Kernel](#operation-and-kernel)
  - [ATen](#aten)
  - [Operation](#operation)
  - [Native Operation](#native-operation)
  - [Custom Operation](#custom-operation)
  - [Kernel](#kernel)
  - [Compound Operation](#compound-operation)
  - [Composite Operation](#composite-operation)
  - [Non-Leaf Operation](#non-leaf-operation)
  - [Leaf Operation](#leaf-operation)
  - [Device Kernel](#device-kernel)
  - [Compound Kernel](#compound-kernel)
- [JIT Compilation](#jit-compilation)
  - [JIT](#jit)
  - [TorchScript](#torchscript)
  - [Tracing](#tracing)
  - [Scripting](#scripting)

# Operation and Kernel

## ATen
Short for "A Tensor Library". The foundational tensor and mathematical
operation library on which all else is built.

## Operation
A unit of work. For example, the work of matrix multiplication is an operation
called `aten::matmul`.

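As a minimal illustrative sketch (the tensor shapes are arbitrary and not part of the glossary), the same operation can be reached through the regular Python API or directly through the `torch.ops.aten` namespace:

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

# The high-level Python call dispatches to the aten::matmul operation.
c1 = torch.matmul(a, b)

# The same operation, invoked directly through the aten namespace.
c2 = torch.ops.aten.matmul(a, b)

assert torch.allclose(c1, c2)
```
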
## Native Operation
An operation that comes natively with PyTorch ATen, for example `aten::matmul`.

## Custom Operation
An Operation that is defined by users and is usually a Compound Operation.
For example, this
[tutorial](https://pytorch.org/docs/stable/notes/extending.html) details how
to create Custom Operations.

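A minimal sketch following the `torch.autograd.Function` pattern from that tutorial (the `LinearFunction` name and the tensor shapes here are illustrative): a user-defined operation built by composing existing operations and given its own backward rule:

```python
import torch

class LinearFunction(torch.autograd.Function):
    """A user-defined operation composed of existing ATen operations."""

    @staticmethod
    def forward(ctx, input, weight):
        ctx.save_for_backward(input, weight)
        return input.mm(weight.t())

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        grad_input = grad_output.mm(weight)
        grad_weight = grad_output.t().mm(input)
        return grad_input, grad_weight

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(5, 3, requires_grad=True)
LinearFunction.apply(x, w).sum().backward()
```
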
## Kernel
Implementation of a PyTorch operation, specifying what should be done when an
operation executes.

## Compound Operation
A Compound Operation is composed of other operations. Its kernel is usually
device-agnostic. Normally it doesn't have its own derivative functions defined.
Instead, autograd automatically computes its derivative based on the operations it
uses.

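A rough sketch of the idea (the `softplus_like` helper is a made-up name): a function built only from existing operations needs no hand-written derivative, because autograd chains the derivatives of the operations it calls:

```python
import torch

def softplus_like(x):
    # Composed purely of existing operations; no derivative is written by hand.
    return torch.log(1 + torch.exp(x))

x = torch.randn(3, requires_grad=True)
softplus_like(x).sum().backward()
print(x.grad)  # filled in from the derivatives of exp, add, and log
```
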
## Composite Operation
Same as Compound Operation.

## Non-Leaf Operation
Same as Compound Operation.

## Leaf Operation
An operation that's considered a basic operation, as opposed to a Compound
Operation. A Leaf Operation always has dispatch functions defined, and usually has a
derivative function defined as well.

## Device Kernel
Device-specific kernel of a Leaf Operation.

## Compound Kernel
As opposed to Device Kernels, Compound Kernels are usually device-agnostic and belong to Compound Operations.

# JIT Compilation

## JIT
Just-In-Time Compilation.

## TorchScript
An interface to the TorchScript JIT compiler and interpreter.

## Tracing
Using `torch.jit.trace` on a function to get an executable that can be optimized
using just-in-time compilation.

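A minimal sketch (the `scale_and_add` function and example inputs are illustrative): tracing runs the function once on example inputs, records the operations it executes, and returns a compiled callable:

```python
import torch

def scale_and_add(x, y):
    return x * 2 + y

# Tracing records the executed operations into a TorchScript graph
# that the JIT can optimize.
traced = torch.jit.trace(scale_and_add, (torch.randn(3), torch.randn(3)))

print(traced(torch.randn(3), torch.randn(3)))
print(traced.graph)  # the recorded graph
```
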
## Scripting
Using `torch.jit.script` on a function to inspect source code and compile it as
TorchScript code.

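A minimal sketch (the `clip_positive` function is illustrative): unlike tracing, scripting compiles the source itself, so data-dependent control flow is preserved:

```python
import torch

@torch.jit.script
def clip_positive(x):
    # This branch is captured because scripting compiles the source code.
    if bool(x.sum() > 0):
        return x
    return torch.zeros_like(x)

print(clip_positive(torch.randn(4)))
print(clip_positive.code)  # the compiled TorchScript source
```
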

‎README.md

+1 line

Three pointers to get you started:

- [Tutorials: get you started with understanding and using PyTorch](https://pytorch.org/tutorials/)
- [Examples: easy to understand pytorch code across all domains](https://github.com/pytorch/examples)
- [The API Reference](https://pytorch.org/docs/)
- [Glossary](https://github.com/pytorch/pytorch/blob/master/GLOSSARY.md)

## Resources
