From 3060252d47223d32e5ac93f773be29333473e6f6 Mon Sep 17 00:00:00 2001 From: Priyanshu Sharma Date: Thu, 3 Nov 2022 14:40:02 -0400 Subject: [PATCH] fix: corrected links in PTED file --- _data/ecosystem/pted/2021/posters.yaml | 1353 +++++++++++++----------- 1 file changed, 718 insertions(+), 635 deletions(-) diff --git a/_data/ecosystem/pted/2021/posters.yaml b/_data/ecosystem/pted/2021/posters.yaml index 306f163fa3b7..dd4f3bfc7412 100644 --- a/_data/ecosystem/pted/2021/posters.yaml +++ b/_data/ecosystem/pted/2021/posters.yaml @@ -1,24 +1,26 @@ - authors: - - Josh Izaac - - Thomas Bromley - categories: - - Platform, Ops & Tools - description: "PennyLane allows you to train quantum circuits just like neural networks!,\ - \ This poster showcases how PennyLane can be interfaced with PyTorch to enable\ - \ training of quantum and hybrid machine learning models. The outputs of a quantum\ - \ circuit are provided as a Torch tensor with a defined gradient. We highlight how\ - \ this functionality can be used to explore new paradigms in machine learning, including\ - \ the use of hybrid models for transfer learning." - link: http://www.pennylane.ai - poster_link: https://assets.pytorch.org/pted2021/posters/K1.png + - Josh Izaac + - Thomas Bromley + categories: + - Platform, Ops & Tools + description: + PennyLane allows you to train quantum circuits just like neural networks!, + This poster showcases how PennyLane can be interfaced with PyTorch to enable training + of quantum and hybrid machine learning models. The outputs of a quantum circuit + are provided as a Torch tensor with a defined gradient. We highlight how this + functionality can be used to explore new paradigms in machine learning, including + the use of hybrid models for transfer learning. + link: http://www.pennylane.ai + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K1.png section: K1 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-K1.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K1.png title: Bring quantum machine learning to PyTorch with PennyLane - authors: - - Jeffrey Mew + - Jeffrey Mew categories: - - Compiler & Transform & Production - description: "Visual Studio Code, a free cross-platform lightweight code editor,\ + - Compiler & Transform & Production + description: + "Visual Studio Code, a free cross-platform lightweight code editor,\ \ has become the most popular among Python developers for both web and machine\ \ learning projects. We will be walking you through an end to end PyTorch project\ \ to showcase what VS Code has a lot to offer to PyTorch developers to boost their\ @@ -33,17 +35,18 @@ \ the Azure services, VS Code can be the one-stop shop for any developers looking\ \ to build machine learning models with PyTorch." 
  link: https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A4.png
  section: A4
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A4.png
  title: PyTorch development in VS Code
- authors:
    - Yanan Cao
    - Harry Kim
    - Jason Ansel
  categories:
    - Compiler & Transform & Production
  description:
    TorchScript is the bridge between PyTorch's flexible eager mode and a
    more deterministic and performant graph mode suitable for production deployment.
    As part of the PyTorch 1.9 release, TorchScript will launch a few features that
    we'd like to share with you early, including a) a new formal language specification
@@ -53,15 +56,16 @@
    that can shed light on performance characteristics of a TorchScript model. We are
    constantly making improvements to make TorchScript easier to use and more performant.
  link: http://fb.me/torchscript
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A5.png
  section: A5
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A5.png
  title: Upcoming features in TorchScript
- authors:
    - Alessandro Pappalardo
  categories:
    - Compiler & Transform & Production
  description:
    Brevitas is an open-source PyTorch library for quantization-aware training.
    Thanks to its flexible design at multiple levels of abstraction, Brevitas generalizes
    the typical uniform affine quantization paradigm adopted in the deep learning
    community under a common set of unified APIs. Brevitas provides a platform to
@@ -79,12 +83,13 @@
  section: B4
  title: Quantization-Aware Training with Brevitas
- authors:
    - Jerry Zhang
    - Vasiliy Kuznetsov
    - Raghuraman Krishnamoorthi
  categories:
    - Compiler & Transform & Production
  description:
    Quantization is a common model optimization technique to speed up the runtime
    of a model by up to 4x, with a possible slight loss of accuracy. Currently, PyTorch
    supports Eager Mode Quantization. FX Graph Mode Quantization improves upon Eager
    Mode Quantization by adding support for functionals and automating the quantization
@@ -92,15 +97,16 @@
    to make the model compatible with FX Graph Mode Quantization (symbolically traceable
    with torch.fx).
link: https://pytorch.org/docs/master/quantization.html#prototype-fx-graph-mode-quantization - poster_link: https://assets.pytorch.org/pted2021/posters/B5.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/B5.png section: B5 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-B5.png - title: 'PyTorch Quantization: FX Graph Mode Quantization' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-B5.png + title: "PyTorch Quantization: FX Graph Mode Quantization" - authors: - - Fabio Nonato + - Fabio Nonato categories: - - Compiler & Transform & Production - description: " Deep learning models can have game-changing impact on machine learning\ + - Compiler & Transform & Production + description: + " Deep learning models can have game-changing impact on machine learning\ \ applications. However, deploying and managing deep learning models in production\ \ is complex and requires considerable engineering effort - from building custom\ \ inferencing APIs and scaling prediction services, to securing applications,\ @@ -119,47 +125,50 @@ \ processing endpoint and showcase the workflow for deploying the optimized model\ \ using TorchServe containers on Amazon ECS." link: https://bit.ly/3mQVowk - poster_link: https://assets.pytorch.org/pted2021/posters/C4.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C4.png section: C4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C4.png - title: Accelerate deployment of deep learning models in production with Amazon EC2 + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C4.png + title: + Accelerate deployment of deep learning models in production with Amazon EC2 Inf1 and TorchServe containers - authors: - - James Reed - - Zachary DeVito - - Ansley Ussery - - Horace He - - Michael Suo + - James Reed + - Zachary DeVito + - Ansley Ussery + - Horace He + - Michael Suo categories: - - Compiler & Transform & Production - description: "FX is a toolkit for writing Python-to-Python transforms over PyTorch\ + - Compiler & Transform & Production + description: + "FX is a toolkit for writing Python-to-Python transforms over PyTorch\ \ code.\nFX consists of three parts:\n> Symbolic Tracing \u2013 a method to extract\ \ a representation of the program by running it with \"proxy\" values.\n> Graph-based\ \ Transformations \u2013 FX provides an easy-to-use Python-based Graph API for\ \ manipulating the code.\n> Python code generation \u2013 FX generates valid Python\ \ code from graphs and turns that code into executable Python `nn.Module` instances." 
link: https://pytorch.org/docs/stable/fx.html - poster_link: https://assets.pytorch.org/pted2021/posters/C5.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C5.png section: C5 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C5.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C5.png title: Torch.fx - authors: - - Abhijit Khobare - - Murali Akula - - Tijmen Blankevoort - - Harshita Mangal - - Frank Mayer - - Sangeetha Marshathalli Siddegowda - - Chirag Patel - - Vinay Garg - - Markus Nagel - categories: - - Compiler & Transform & Production - description: 'AI is revolutionizing industries, products, and core capabilities + - Abhijit Khobare + - Murali Akula + - Tijmen Blankevoort + - Harshita Mangal + - Frank Mayer + - Sangeetha Marshathalli Siddegowda + - Chirag Patel + - Vinay Garg + - Markus Nagel + categories: + - Compiler & Transform & Production + description: + "AI is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, the deep neural networks of today use too much memory, compute, and energy. To make AI truly ubiquitous, it needs to run on the end device within a tight power and thermal budget. Quantization - and compression help address these issues. In this tutorial, we''ll discuss: + and compression help address these issues. In this tutorial, we'll discuss: The existing quantization and compression challenges @@ -167,20 +176,21 @@ challenges How developers and researchers can implement these techniques through the AI Model - Efficiency Toolkit' - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/D4.png + Efficiency Toolkit" + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D4.png section: D4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-D4.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D4.png title: AI Model Efficiency Toolkit (AIMET) - authors: - - Natasha Seelam - - Patricio Cerda-Mardini - - Cosmo Jenytin - - Jorge Torres + - Natasha Seelam + - Patricio Cerda-Mardini + - Cosmo Jenytin + - Jorge Torres categories: - - Database & AI Accelerators - description: 'Pytorch enables building models with complex inputs and outputs, including + - Database & AI Accelerators + description: + 'Pytorch enables building models with complex inputs and outputs, including time-series data, text and audiovisual data. However, such models require expertise and time to build, often spent on tedious tasks like cleaning the data or transforming it into a format that is expected by the models. @@ -214,19 +224,21 @@ We aim to present our benchmarks covering wide swaths of problem types and illustrate how Lightwood can be useful for researchers and engineers through a hands-on demo.' 
link: https://mindsdb.com - poster_link: https://assets.pytorch.org/pted2021/posters/H8.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H8.png section: H8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H8.png - title: 'Pytorch via SQL commands: A flexible, modular AutoML framework that democratizes - ML for database users' -- authors: - - 'Sam Partee ' - - Alessandro Rigazzi - - Mathew Ellis - - Benjamin Rob - categories: - - Database & AI Accelerators - description: SmartSim is an open source library dedicated to enabling online analysis + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H8.png + title: + "Pytorch via SQL commands: A flexible, modular AutoML framework that democratizes + ML for database users" +- authors: + - "Sam Partee " + - Alessandro Rigazzi + - Mathew Ellis + - Benjamin Rob + categories: + - Database & AI Accelerators + description: + SmartSim is an open source library dedicated to enabling online analysis and Machine Learning (ML) for traditional High Performance Computing (HPC) simulations. Clients are provided in common HPC simulation languages, C/C++/Fortran, that enable simulations to perform inference requests in parallel on large HPC systems. SmartSim @@ -235,16 +247,17 @@ modeling, is augmented with a PyTorch model to resolve quantities of eddy kinetic energy within the simulation. link: https://github.com/CrayLabs/SmartSim - poster_link: https://assets.pytorch.org/pted2021/posters/J8.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/J8.png section: J8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-J8.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-J8.png title: PyTorch on Supercomputers Simulations and AI at Scale with SmartSim - authors: - - Patricio Cerda-Mardini - - Natasha Seelam + - Patricio Cerda-Mardini + - Natasha Seelam categories: - - Database & AI Accelerators - description: 'Many domains leverage the extraordinary predictive performance of + - Database & AI Accelerators + description: + 'Many domains leverage the extraordinary predictive performance of machine learning algorithms. However, there is an increasing need for transparency of these models in order to justify deploying them in applied settings. Developing trustworthy models is a great challenge, as they are usually optimized for accuracy, @@ -292,15 +305,16 @@ Neural Networks. 19th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2007), 2, 388-395.' link: https://mindsdb.com - poster_link: https://assets.pytorch.org/pted2021/posters/I8.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I8.png section: I8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I8.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I8.png title: Model agnostic confidence estimation with conformal predictors for AutoML - authors: - - Derek Bouius + - Derek Bouius categories: - - Database & AI Accelerators - description: AMD Instinct GPUs are enabled with the upstream PyTorch repository + - Database & AI Accelerators + description: + AMD Instinct GPUs are enabled with the upstream PyTorch repository via the ROCm open software platform. Now users can also easily download the installable Python package, built from the upstream PyTorch repository and hosted on pytorch.org. 
    Notably, it includes support for distributed training across multiple GPUs and
@@ -308,45 +322,50 @@
    for the PyTorch community build to help develop and maintain new features. This
    poster will highlight some of the work that has gone into enabling PyTorch support.
  link: www.amd.com/rocm
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K8.png
  section: K8
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K8.png
  title:
    "Enabling PyTorch on AMD Instinct\u2122 GPUs with the AMD ROCm\u2122 Open\
    \ Software Platform"
- authors:
    - DeepSpeed Team Microsoft Corporation
  categories:
    - Distributed Training
  description:
    "In the poster (and a talk during the breakout session), we will present
    three aspects of DeepSpeed (https://github.com/microsoft/DeepSpeed), a deep learning
    optimization library based on the PyTorch framework: 1) How we overcome the GPU memory
    barrier by ZeRO-powered data parallelism. 2) How we overcome the network bandwidth
    barrier by 1-bit Adam and 1-bit Lamb compressed optimization algorithms. 3) How
    we overcome the usability barrier by integration with Azure ML, HuggingFace, and
    PyTorch Lightning."
  link: ""
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/E1.png
  section: E1
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-E1.png
  title: "DeepSpeed: Shattering barriers of deep learning speed & scale"
- authors:
    - Stephanie Kirmer
    - Hugo Shi
  categories:
    - Distributed Training
  description:
    We have developed a library that helps simplify the task of multi-machine
    parallel training for PyTorch models, bringing together the power of PyTorch DDP
    with Dask for parallelism on GPUs. Our poster describes the library and its core
    function, and demonstrates how the multi-machine training process works in practice.
  link: https://github.com/saturncloud/dask-pytorch-ddp
  section: E2
  title:
    "Dask PyTorch DDP: A new library bringing Dask parallelization to PyTorch
    training"
- authors:
    - Vignesh Gopakumar
  categories:
    - Distributed Training
  description:
    Solving PDEs using Neural Networks is often an arduous and laborious process, as
    it requires training towards a well-defined solution, i.e. global minima for a
    network architecture - objective function combination. For a family of complex
    PDEs, Physics Informed neural networks won't offer much in comparison to traditional
@@ -355,26 +374,27 @@
    that can create more general PINNs that can solve for a variety of PDE scenarios
    rather than solving for a well-defined case. We believe that this brings Neural
    Network based PDE solvers into closer comparison with numerical solvers.
  link: ""
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/E3.png
  section: E3
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-E3.png
  title: Optimising Physics Informed Neural Networks.
- authors:
    - Mandeep Baines
    - Shruti Bhosale
    - Vittorio Caggiano
    - Benjamin Lefaudeux
    - Vitaliy Liptchinsky
    - Naman Goyal
    - Siddhardth Goyal
    - Myle Ott
    - Sam Sheifer
    - Anjali Sridhar
    - Min Xu
  categories:
    - Distributed Training
  description:
    'FairScale is a library that extends basic PyTorch capabilities while
    adding new SOTA techniques for high performance and large scale training on one
    or multiple machines. FairScale makes available the latest distributed training
    techniques in the form of composable modules and easy to use APIs.
@@ -406,29 +426,31 @@
    FairScale has also been integrated into PyTorch Lightning, HuggingFace, FairSeq,
    VISSL, and MMF to enable users of those frameworks to take advantage of its
    features.'
  link: ""
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/F1.png
  section: F1
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-F1.png
  title:
    FairScale-A general purpose modular PyTorch library for high performance
    and large scale training
- authors:
    - Aurick Qiao
    - Sang Keun Choe
    - Suhas Jayaram Subramanya
    - Willie Neiswanger
    - Qirong Ho
    - Hao Zhang
    - Gregory R. Ganger
    - Eric P. Xing
  categories:
    - Distributed Training
  description:
    "AdaptDL is an open source framework and scheduling algorithm that
    directly optimizes cluster-wide training performance and resource utilization.
    By elastically re-scaling jobs, co-adapting batch sizes and learning rates, and
    avoiding network interference, AdaptDL improves shared-cluster training compared
    with alternative schedulers. AdaptDL can automatically determine the optimal number
    of resources given a job's need. It will efficiently add or remove resources
    dynamically to ensure the highest-level performance.
The AdaptDL scheduler will automatically figure out the most efficient number of GPUs to allocate to your job, based on its scalability. When the cluster load is low, your job can dynamically @@ -436,18 +458,20 @@ existing PyTorch training code elastic with adaptive batch sizes and learning rates. - Showcase: Distributed training and Data Loading' - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/F2.png + Showcase: Distributed training and Data Loading" + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/F2.png section: F2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-F2.png - title: 'AdaptDL: An Open-Source Resource-Adaptive Deep Learning Training/Scheduling - Framework' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-F2.png + title: + "AdaptDL: An Open-Source Resource-Adaptive Deep Learning Training/Scheduling + Framework" - authors: - - Natalie Kershaw + - Natalie Kershaw categories: - - Distributed Training - description: 'As deep learning models, especially transformer models get bigger + - Distributed Training + description: + "As deep learning models, especially transformer models get bigger and bigger, reducing training time becomes both a financial and environmental imperative. ONNX Runtime can accelerate large-scale distributed training of PyTorch transformer models with a one-line code change (in addition to import statements @@ -461,19 +485,21 @@ In this poster, we demonstrate how to fine-tune a popular HuggingFace model and show the performance improvement, on a multi-GPU cluster in the Azure Machine - Learning cloud service.' + Learning cloud service." link: https://aka.ms/pytorchort section: G1 - title: 'Accelerate PyTorch large model training with ONNX Runtime: just add one - line of code!' -- authors: - - Jack Cao - - Daniel Sohn - - Zak Stone - - Shauheen Zahirazami - categories: - - Distributed Training - description: PyTorch / XLA enables users to train PyTorch models on XLA devices + title: + "Accelerate PyTorch large model training with ONNX Runtime: just add one + line of code!" +- authors: + - Jack Cao + - Daniel Sohn + - Zak Stone + - Shauheen Zahirazami + categories: + - Distributed Training + description: + PyTorch / XLA enables users to train PyTorch models on XLA devices including Cloud TPUs. Cloud TPU VMs now provide direct access to TPU host machines and hence offer much greater flexibility in addition to making debugging easier and reducing data transfer overheads. PyTorch / XLA has now full support for this @@ -482,34 +508,36 @@ develop models but also reduce the cost of large-scale PyTorch / XLA training runs on Cloud TPUs. link: http://goo.gle/pt-xla-tpuvm-signup - poster_link: https://assets.pytorch.org/pted2021/posters/G2.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/G2.png section: G2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-G2.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-G2.png title: PyTorch/XLA with new Cloud TPU VMs and Profiler - authors: - - Ari Bornstein + - Ari Bornstein categories: - - Frontend & Experiment Manager - description: PyTorch Lightning reduces the engineering boilerplate and resources + - Frontend & Experiment Manager + description: + PyTorch Lightning reduces the engineering boilerplate and resources required to implement state-of-the-art AI. 
Organizing PyTorch code with Lightning enables seamless training on multiple-GPUs, TPUs, CPUs, and the use of difficult to implement best practices such as model sharding, 16-bit precision, and more, without any code changes. In this poster, we will use practical Lightning examples to demonstrate how to train Deep Learning models with less boilerplate. link: https://www.pytorchlightning.ai/ - poster_link: https://assets.pytorch.org/pted2021/posters/E4.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/E4.png section: E4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-E4.png - title: 'PyTorch Lightning: Deep Learning without the Boilerplate' -- authors: - - Jiong Gong - - Nikita Shustrov - - Eikan Wang - - Jianhui Li - - Vitaly Fedyunin - categories: - - Frontend & Experiment Manager - description: "Intel and Facebook collaborated to enable BF16, a first-class data\ + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-E4.png + title: "PyTorch Lightning: Deep Learning without the Boilerplate" +- authors: + - Jiong Gong + - Nikita Shustrov + - Eikan Wang + - Jianhui Li + - Vitaly Fedyunin + categories: + - Frontend & Experiment Manager + description: + "Intel and Facebook collaborated to enable BF16, a first-class data\ \ type in PyTorch, and a data type that are accelerated natively with the 3rd\ \ Gen Intel\xAE Xeon\xAE scalable processors. This poster introduces the latest\ \ SW advancements added in Intel Extension for PyTorch (IPEX) on top of PyTorch\ @@ -519,15 +547,16 @@ \ FP32 with the stock PyTorch and 1.40X-4.26X speed-up with IPEX BF16 inference\ \ over FP32 with the stock PyTorch." link: https://github.com/intel/intel-extension-for-pytorch - poster_link: https://assets.pytorch.org/pted2021/posters/E5.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/E5.png section: E5 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-E5.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-E5.png title: Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology - authors: - - Robin Lobel + - Robin Lobel categories: - - Frontend & Experiment Manager - description: TorchStudio is a standalone software based on PyTorch and LibTorch. + - Frontend & Experiment Manager + description: + TorchStudio is a standalone software based on PyTorch and LibTorch. It aims to simplify the creation, training and iterations of PyTorch models. It runs locally on Windows, Ubuntu and macOS. It can load, analyze and explore PyTorch datasets from the TorchVision or TorchAudio categories, or custom datasets with @@ -536,51 +565,54 @@ simultaneously and compared to identify the best performing models, and export them as a trained TorchScript or ONNX model. 
  link: https://torchstudio.ai/
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/F4.png
  section: F4
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-F4.png
  title: TorchStudio, a machine learning studio software based on PyTorch
- authors:
    - Jieru Hu
    - "Omry Yadan "
  categories:
    - Frontend & Experiment Manager
  description:
    "Hydra is an open source framework for configuring and launching research
    Python applications. Key features: - Compose and override your config dynamically
    to get the perfect config for each run - Run on remote clusters like SLURM and
    AWS without code changes - Perform basic grid search and hyperparameter optimization
    without code changes - Command line tab completion for your dynamic config And
    more."
  link: ""
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/F5.png
  section: F5
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-F5.png
  title: Hydra Framework
- authors:
    - Victor Fomin
    - Sylvain Desroziers
    - Taras Savchyn
  categories:
    - Frontend & Experiment Manager
  description:
    This poster intends to give a brief but illustrative overview of what
    PyTorch-Ignite can offer for Deep Learning enthusiasts, professionals and researchers.
    Following the same philosophy as PyTorch, PyTorch-Ignite aims to keep it simple,
    flexible and extensible but performant and scalable. Throughout this poster, we
    will introduce the basic concepts of PyTorch-Ignite, its API and features it offers.
    We also assume that the reader is familiar with PyTorch.
  link: ""
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/G4.png
  section: G4
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-G4.png
  title: "PyTorch-Ignite: training common things easy and the hard things possible"
- authors:
    - Sanzhar Askaruly
    - Nurbolat Aimakov
    - Alisher Iskakov
    - Hyewon Cho
  categories:
    - Medical & Healthcare
  description:
    Deep learning has transformed many aspects of industrial pipelines
    recently. Scientists involved in biomedical imaging research are also benefiting
    from the power of AI to tackle complex challenges. Although the academic community
    has widely accepted image processing tools, such as scikit-image, ImageJ, there
@@ -588,18 +620,19 @@
    analysis.
We propose a minimal, but convenient Python package based on PyTorch with common deep learning models, extended by flexible trainers and medical datasets. link: https://github.com/tuttelikz/farabio - poster_link: https://assets.pytorch.org/pted2021/posters/H4.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H4.png section: H4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H4.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H4.png title: Farabio - Deep Learning Toolkit for Biomedical Imaging - authors: - - Michael Zephyr - - Prerna Dogra Richard Brown - - Wenqi Li - - Eric Kerfoot + - Michael Zephyr + - Prerna Dogra Richard Brown + - Wenqi Li + - Eric Kerfoot categories: - - Medical & Healthcare - description: "Healthcare image analysis for both radiology and pathology is increasingly\ + - Medical & Healthcare + description: + "Healthcare image analysis for both radiology and pathology is increasingly\ \ being addressed with deep-learning-based solutions. These applications have\ \ specific requirements to support various imaging modalities like MR, CT, ultrasound,\ \ digital pathology, etc. It is a substantial effort for researchers in the field\ @@ -610,19 +643,20 @@ \ solutions by providing domain-specialized building blocks and a common foundation\ \ for the community to converge in a native PyTorch paradigm." link: https://monai.io/ - poster_link: https://assets.pytorch.org/pted2021/posters/H5.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H5.png section: H5 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H5.png - title: 'MONAI: A Domain Specialized Library for Healthcare Imaging' -- authors: - - Shai Brown - - Daniel Neimark - - Maya Zohar - - Omri Bar - - Dotan Asselmann - categories: - - Medical & Healthcare - description: "Theator is re-imagining surgery with a Surgical Intelligence platform\ + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H5.png + title: "MONAI: A Domain Specialized Library for Healthcare Imaging" +- authors: + - Shai Brown + - Daniel Neimark + - Maya Zohar + - Omri Bar + - Dotan Asselmann + categories: + - Medical & Healthcare + description: + "Theator is re-imagining surgery with a Surgical Intelligence platform\ \ that leverages highly advanced AI, specifically machine learning and computer\ \ vision technology, to analyze every step, event, milestone, and critical junction\ \ of surgical procedures.\n\nOur platform analyzes lengthy surgical procedure\ @@ -637,20 +671,22 @@ \ into training pipelines \u2013 speeding up workflow, minimizing human error,\ \ and freeing up our research team for more important tasks. Thus, enabling us\ \ to scale our ML operation and deliver better models for our end users." 
- link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/I4.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I4.png section: I4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I4.png - title: How theator Built a Continuous Training Framework to Scale Up Its Surgical + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I4.png + title: + How theator Built a Continuous Training Framework to Scale Up Its Surgical Intelligence Platform - authors: - - Cebere Bogdan - - Cebere Tudor - - Manolache Andrei - - Horia Paul-Ion + - Cebere Bogdan + - Cebere Tudor + - Manolache Andrei + - Horia Paul-Ion categories: - - Medical & Healthcare - description: We present Q&Aid, a conversation agent that relies on a series of machine + - Medical & Healthcare + description: + We present Q&Aid, a conversation agent that relies on a series of machine learning models to filter, label, and answer medical questions based on a provided image and text inputs. Q&Aid is simplifying the hospital logic backend by standardizing it to a Health Intel Provider (HIP). A HIP is a collection of models trained on @@ -664,23 +700,24 @@ chat ends, the transcript is forwarded to each hospital, a doctor being in charge of the final decision. link: https://qrgo.page.link/d1fQk - poster_link: https://assets.pytorch.org/pted2021/posters/I5.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I5.png section: I5 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I5.png - title: 'Q&Aid: A Conversation Agent Powered by PyTorch' -- authors: - - Jaden Hong - - Kevin Tran - - Tyler Lee - - Paul Lee - - Freddie Cha - - Louis Jung - - Dr. Jung Kyung Hong - - Dr. In-Young Yoon - - David Lee - categories: - - Medical & Healthcare - description: "Sleep disorders and insomnia are now regarded as a worldwide problem.\ + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I5.png + title: "Q&Aid: A Conversation Agent Powered by PyTorch" +- authors: + - Jaden Hong + - Kevin Tran + - Tyler Lee + - Paul Lee + - Freddie Cha + - Louis Jung + - Dr. Jung Kyung Hong + - Dr. In-Young Yoon + - David Lee + categories: + - Medical & Healthcare + description: + "Sleep disorders and insomnia are now regarded as a worldwide problem.\ \ Roughly 62% of adults worldwide feel that they don't sleep well. However, sleep\ \ is difficult to track so it's not easy to get suitable treatment to improve\ \ your sleep quality. Currently, the PSG (Polysomnography) is the only way to\ @@ -694,18 +731,19 @@ \ 85.5 % accuracy in 5-class (Wake, N1, N2, N3, Rem) using PSG signals measured\ \ from 3,700 subjects and 77 % accuracy in 3-class (Wake, Sleep, REM) classification\ \ using only sound data measured from 1,2000 subjects." 
- link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/J4.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/J4.png section: J4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-J4.png - title: 'Sleepbot: Multi-signal Sleep Stage Classifier AI for hospital and home' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-J4.png + title: "Sleepbot: Multi-signal Sleep Stage Classifier AI for hospital and home" - authors: - - Akshay Agrawal - - Alnur Ali - - Stephen Boyd + - Akshay Agrawal + - Alnur Ali + - Stephen Boyd categories: - - Medical & Healthcare - description: 'We present a unifying framework for the vector embedding problem: + - Medical & Healthcare + description: + "We present a unifying framework for the vector embedding problem: given a set of items and some known relationships between them, we seek a representation of the items by vectors, possibly subject to some constraints (e.g., requiring the vectors to have zero mean and identity covariance). We want the vectors associated @@ -723,17 +761,18 @@ custom embeddings alike. By making use of automatic differentiation and hardware acceleration via PyTorch, we are able to scale to very large embedding problems. We will showcase examples of embedding real datasets, including an academic co-authorship - network, single-cell mRNA transcriptomes, US census data, and population genetics.' - link: '' + network, single-cell mRNA transcriptomes, US census data, and population genetics." + link: "" section: J5 - title: 'PyMDE: Minimum-Distortion Embedding' + title: "PyMDE: Minimum-Distortion Embedding" - authors: - - "Fernando P\xE9rez-Garc\xEDa" - - Rachel Sparks - - "S\xE9bastien Ourselin" + - "Fernando P\xE9rez-Garc\xEDa" + - Rachel Sparks + - "S\xE9bastien Ourselin" categories: - - Medical & Healthcare - description: 'Processing of medical images such as MRI or CT presents unique challenges + - Medical & Healthcare + description: + "Processing of medical images such as MRI or CT presents unique challenges compared to RGB images typically used in computer vision. These include a lack of labels for large datasets, high computational costs, and metadata to describe the physical properties of voxels. Data augmentation is used to artificially increase @@ -755,20 +794,22 @@ pipelines and allow them to focus on the deep learning experiments. It encourages open science, as it supports reproducibility and is version controlled so that the software can be cited precisely. Due to its modularity, the library is compatible - with other frameworks for deep learning with medical images.' - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/K4.png + with other frameworks for deep learning with medical images." 
+ link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K4.png section: K4 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-K4.png - title: 'TorchIO: Pre-Processing & Augmentation of Medical Images for Deep Learning - Applications' -- authors: - - Laila Rasmy - - Ziqian Xie - - Degui Zhi - categories: - - Medical & Healthcare - description: With the extensive use of electronic records and the availability of + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K4.png + title: + "TorchIO: Pre-Processing & Augmentation of Medical Images for Deep Learning + Applications" +- authors: + - Laila Rasmy + - Ziqian Xie + - Degui Zhi + categories: + - Medical & Healthcare + description: + With the extensive use of electronic records and the availability of historical patient information, predictive models that can help identify patients at risk based on their history at an early stage can be a valuable adjunct to clinician judgment. Deep learning models can better predict patients' outcomes @@ -788,16 +829,17 @@ intubation, and hospitalization for more than 3 days, respectively versus LR which showed 82.8%, 83.2%, and 76.8% link: https://github.com/ZhiGroup/pytorch_ehr - poster_link: https://assets.pytorch.org/pted2021/posters/K5.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K5.png section: K5 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-K5.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K5.png title: Deep Learning Based Model to Predict Covid19 Patients' Outcomes on Admission - authors: - - Binghui Ouyang - - "Alexander O\u2019Connor " + - Binghui Ouyang + - "Alexander O\u2019Connor " categories: - - NLP & Multimodal, RL & Time Series - description: "While Transformers have brought unprecedented improvements in the\ + - NLP & Multimodal, RL & Time Series + description: + "While Transformers have brought unprecedented improvements in the\ \ accuracy and ease of developing NLP applications, their deployment remains challenging\ \ due to the large size of the models and their computational complexity. \n Indeed,\ \ until recently is has been a widespread misconception that hosting high-performance\ @@ -815,28 +857,30 @@ \ architecture to be supported by customer inference.\n\nWe will discuss our experience\ \ of piloting transformer-based intent models, and present a workflow for going\ \ from data to deployment for similar projects." - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/A1.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A1.png section: A1 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-A1.png - title: ' Rolling out Transformers with TorchScript and Inferentia' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A1.png + title: " Rolling out Transformers with TorchScript and Inferentia" - authors: - - Kashif Rasul + - Kashif Rasul categories: - - NLP & Multimodal, RL & Time Series - description: PyTorchTS is a PyTorch based Probabilistic Time Series forecasting + - NLP & Multimodal, RL & Time Series + description: + PyTorchTS is a PyTorch based Probabilistic Time Series forecasting framework that comes with state of the art univariate and multivariate models. 
link: https://github.com/zalandoresearch/pytorch-ts - poster_link: https://assets.pytorch.org/pted2021/posters/A2.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A2.png section: A2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-A2.png - title: 'PyTorchTS: PyTorch Probabilistic Time Series Forecasting Framework' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A2.png + title: "PyTorchTS: PyTorch Probabilistic Time Series Forecasting Framework" - authors: - - Sasha Sheng - - Amanpreet Singh + - Sasha Sheng + - Amanpreet Singh categories: - - NLP & Multimodal, RL & Time Series - description: MMF is designed from ground up to let you focus on what matters -- + - NLP & Multimodal, RL & Time Series + description: + MMF is designed from ground up to let you focus on what matters -- your model -- by providing boilerplate code for distributed training, common datasets and state-of-the-art pretrained baselines out-of-the-box. MMF is built on top of PyTorch that brings all of its power in your hands. MMF is not strongly opinionated. @@ -844,29 +888,31 @@ extensible and composable. Through our modular design, you can use specific components from MMF that you care about. Our configuration system allows MMF to easily adapt to your needs. - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/A3.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A3.png section: A3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-A3.png - title: 'MMF: A modular framework for multimodal research' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A3.png + title: "MMF: A modular framework for multimodal research" - authors: - - Dirk Groeneveld - - Akshita Bhagia - - Pete Walsh - - Michael Schmitz + - Dirk Groeneveld + - Akshita Bhagia + - Pete Walsh + - Michael Schmitz categories: - - NLP & Multimodal, RL & Time Series - description: An Apache 2.0 NLP research library, built on PyTorch, for developing + - NLP & Multimodal, RL & Time Series + description: + An Apache 2.0 NLP research library, built on PyTorch, for developing state-of-the-art deep learning models on a wide variety of linguistic tasks. link: https://github.com/allenai/allennlp section: B1 - title: 'AllenNLP: An NLP research library for developing state-of-the-art models' + title: "AllenNLP: An NLP research library for developing state-of-the-art models" - authors: - - John Trenkle - - Jaya Kawale & Tubi ML team + - John Trenkle + - Jaya Kawale & Tubi ML team categories: - - NLP & Multimodal, RL & Time Series - description: "Tubi is one of the leading platforms providing free high-quality streaming\ + - NLP & Multimodal, RL & Time Series + description: + "Tubi is one of the leading platforms providing free high-quality streaming\ \ movies and TV shows to a worldwide audience. We embrace a data-driven approach\ \ and leverage advanced machine learning techniques using PyTorch to enhance our\ \ platform and business in any way we can. The Three Pillars of AVOD are the\ @@ -884,19 +930,20 @@ \ project Title vectors from the universe to our tubiverse with as much fidelity\ \ as possible in order to ascertain potential value for each target use case.\ \ We will describe several techniques to understand content better using Pytorch." 
- link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/B2.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/B2.png section: B2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-B2.png - title: 'Project Spock at Tubi: Understanding Content using Deep Learning for NLP' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-B2.png + title: "Project Spock at Tubi: Understanding Content using Deep Learning for NLP" - authors: - - Benoit Steiner - - Chris Cummins - - Horace He - - Hugh Leather + - Benoit Steiner + - Chris Cummins + - Horace He + - Hugh Leather categories: - - NLP & Multimodal, RL & Time Series - description: "As the usage of machine learning techniques is becoming ubiquitous,\ + - NLP & Multimodal, RL & Time Series + description: + "As the usage of machine learning techniques is becoming ubiquitous,\ \ the efficient execution of neural networks is crucial to many applications.\ \ Frameworks, such as Halide and TVM, separate the algorithmic representation\ \ of\nthe deep learning model from the schedule that determines its implementation.\ @@ -917,15 +964,16 @@ \ completes in seconds instead of hours, making it possible to include it as\ \ a new backend for PyTorch itself." link: http://facebook.ai - poster_link: https://assets.pytorch.org/pted2021/posters/B3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/B3.png section: B3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-B3.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-B3.png title: RL Based Performance Optimization of Deep Neural Networks - authors: - - Zhenghong Liu + - Zhenghong Liu categories: - - NLP & Multimodal, RL & Time Series - description: Forte is an open-source toolkit for building Natural Language Processing + - NLP & Multimodal, RL & Time Series + description: + Forte is an open-source toolkit for building Natural Language Processing workflows via assembling state-of-the-art NLP and ML technologies. This toolkit features composable pipeline, cross-task interaction, adaptable data-model interfaces. The highly composable design allows users to build complex NLP pipelines of a @@ -938,19 +986,20 @@ pipeline can be easily adapted to different domains and tasks with small changes in the code. link: https://github.com/asyml/forte - poster_link: https://assets.pytorch.org/pted2021/posters/C1.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C1.png section: C1 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C1.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C1.png title: A Data-Centric Framework for Composable NLP - authors: - - Shagun Sodhani - - Amy Zhang - - Ludovic Denoyer - - Pierre-Alexandre Kamienny - - Olivier Delalleau + - Shagun Sodhani + - Amy Zhang + - Ludovic Denoyer + - Pierre-Alexandre Kamienny + - Olivier Delalleau categories: - - NLP & Multimodal, RL & Time Series - description: 'The two key components in a multi-task RL codebase are (i) Multi-task + - NLP & Multimodal, RL & Time Series + description: + "The two key components in a multi-task RL codebase are (i) Multi-task RL algorithms and (ii) Multi-task RL environments. We develop open-source libraries for both components. 
[MTRL](https://github.com/facebookresearch/mtrl) provides components to implement multi-task RL algorithms, and [MTEnv](https://github.com/facebookresearch/mtenv) @@ -974,19 +1023,20 @@ wrappers to add multi-task support with a small code change. - MTRL and MTEnv are used in several ongoing/published works at FAIR.' + MTRL and MTEnv are used in several ongoing/published works at FAIR." link: http://qr.w69b.com/g/tGZSFw33G - poster_link: https://assets.pytorch.org/pted2021/posters/C2.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C2.png section: C2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C2.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C2.png title: Environments and Baselines for Multitask Reinforcement Learning - authors: - - Lysandre Debut - - Sylvain Gugger - - "Quentin Lhoest\_" + - Lysandre Debut + - Sylvain Gugger + - "Quentin Lhoest\_" categories: - - NLP & Multimodal, RL & Time Series - description: 'Transfer learning has become the norm to get state-of-the-art results + - NLP & Multimodal, RL & Time Series + description: + "Transfer learning has become the norm to get state-of-the-art results in NLP. Hugging Face provides you with tools to help you on every step along the way: @@ -1010,23 +1060,24 @@ The pipeline is then simply a six-step process: select a pretrained model from the hub, handle the data with Datasets, tokenize the text with Tokenizers, load the model with Transformers, train it with the Trainer or your own loop powered - by Accelerate, before sharing your results with the community on the hub.' + by Accelerate, before sharing your results with the community on the hub." link: https://huggingface.co/models - poster_link: https://assets.pytorch.org/pted2021/posters/C3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C3.png section: C3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C3.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C3.png title: The Hugging Face Ecosystem - authors: - - Manuel Pariente - - Samuele Cornell - - Jonas Haag - - Joris Cosentino - - Michel Olvera - - "Fabian-Robert St\xF6ter" - - Efthymios Tzinis - categories: - - NLP & Multimodal, RL & Time Series - description: Asteroid is an audio source separation toolkit built with PyTorch and + - Manuel Pariente + - Samuele Cornell + - Jonas Haag + - Joris Cosentino + - Michel Olvera + - "Fabian-Robert St\xF6ter" + - Efthymios Tzinis + categories: + - NLP & Multimodal, RL & Time Series + description: + Asteroid is an audio source separation toolkit built with PyTorch and PyTorch-Lightning. Inspired by the most successful neural source separation systems, it provides all neural building blocks required to build such a system. 
    To improve reproducibility, recipes on common audio source separation datasets are provided,
    including all the steps from data download\preparation through training to evaluation
    as well as many current state-of-the-art DNN models. Asteroid exposes all levels
    of granularity to the user from simple layers to the utilities needed for full-blown
    CLI-based recipes. Pretrained models are available on the asteroid-models community
    in Zenodo and on the Huggingface model Hub. Loading and using pretrained models
    is trivial and
@@ -1038,36 +1089,38 @@
    sharing them is also made easy with asteroid's CLI.
  link: https://asteroid-team.github.io/
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D1.png
  section: D1
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D1.png
  title: "\_Asteroid: the Pytorch-based Audio Source Separation Toolkit for Researchers"
- authors:
    - Ludovic Denoyer
    - Danielle Rothermel
    - Xavier Martinet
  categories:
    - NLP & Multimodal, RL & Time Series
  description:
    RLStructures is a lightweight Python library that provides simple APIs
    as well as data structures that make as few assumptions as possible about the
    structure of your agent or your task, while allowing for transparently executing
    multiple policies on multiple environments in parallel (incl. multiple GPUs).
    It thus facilitates the implementation of RL algorithms while avoiding complex
    abstractions.
  link: ""
  poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D2.png
  section: D2
  thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D2.png
  title: "rlstructures: A Lightweight Python Library for Reinforcement Learning Research"
- authors:
    - Luis Pineda
    - Brandon Amos
    - Amy Zhang
    - Nathan O. Lambert
    - Roberto Calandra
  categories:
    - NLP & Multimodal, RL & Time Series
  description:
    Model-based reinforcement learning (MBRL) is an active area of research
    with enormous potential. In contrast to model-free RL, MBRL algorithms solve tasks
    by learning a predictive model of the task dynamics, and use this model to predict
    the future and facilitate decision making. Many researchers have argued that MBRL
@@ -1086,17 +1139,18 @@
    of diagnostics tools to identify potential issues while training dynamics models
    and control algorithms.
link: https://github.com/facebookresearch/mbrl-lib - poster_link: https://assets.pytorch.org/pted2021/posters/D3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D3.png section: D3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-D3.png - title: 'MBRL-Lib: a PyTorch toolbox for model-based reinforcement learning research' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D3.png + title: "MBRL-Lib: a PyTorch toolbox for model-based reinforcement learning research" - authors: - - Geeta Chauhan - - Gisle Dankel - - Elena Neroslavaskaya + - Geeta Chauhan + - Gisle Dankel + - Elena Neroslavaskaya categories: - - Performance & Profiler - description: Analyzing and improving large-scale deep learning model performance + - Performance & Profiler + description: + Analyzing and improving large-scale deep learning model performance is an ongoing challenge that continues to grow in importance as the model sizes increase. Microsoft and Facebook collaborated to create a native PyTorch performance debugging tool called PyTorch Profiler. The profiler builds on the PyTorch autograd @@ -1111,15 +1165,16 @@ operations. Come learn how to profile your PyTorch models using this new delightfully simple tool. link: https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool - poster_link: https://assets.pytorch.org/pted2021/posters/H6.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H6.png section: H6 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H6.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H6.png title: Introducing New PyTorch Profiler - authors: - - Naren Dasan + - Naren Dasan categories: - - Performance & Profiler - description: For experimentation and the development of machine learning models, + - Performance & Profiler + description: + For experimentation and the development of machine learning models, few tools are as approachable as PyTorch. However, when moving from research to production, some of the features that make PyTorch great for development make it hard to deploy. With the introduction of TorchScript, PyTorch has solid tooling @@ -1139,15 +1194,16 @@ in an application or used from the command line to easily increase the performance of inference applications. link: https://nvidia.github.io/TRTorch/ - poster_link: https://assets.pytorch.org/pted2021/posters/I6.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I6.png section: I6 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I6.png - title: 'TRTorch: A Compiler for TorchScript Targeting NVIDIA GPUs with TensorRT' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I6.png + title: "TRTorch: A Compiler for TorchScript Targeting NVIDIA GPUs with TensorRT" - authors: - - Charles H. Martin + - Charles H. Martin categories: - - Performance & Profiler - description: "WeightWatcher (WW) is an open-source, diagnostic tool for analyzing\ + - Performance & Profiler + description: + "WeightWatcher (WW) is an open-source, diagnostic tool for analyzing\ \ Deep Neural Networks (DNN), without needing access to training or even test\ \ data. 
It can be used to: analyze pre/trained pyTorch models; \ninspect models\ \ that are difficult to train; gauge improvements in model performance; predict\ @@ -1156,16 +1212,17 @@ \ research (done in\\-joint with UC Berkeley) into \"Why Deep Learning Works\"\ , using ideas from Random Matrix Theory (RMT), Statistical Mechanics, and Strongly\ \ Correlated Systems." - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/J6.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/J6.png section: J6 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-J6.png - title: 'WeightWatcher: A Diagnostic Tool for DNNs' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-J6.png + title: "WeightWatcher: A Diagnostic Tool for DNNs" - authors: - - Mario Lezcano-Casado + - Mario Lezcano-Casado categories: - - Performance & Profiler - description: '"This poster presents the ""parametrizations"" feature that will be + - Performance & Profiler + description: + '"This poster presents the ""parametrizations"" feature that will be added to PyTorch in 1.9.0. This feature allows for a simple implementation of methods like pruning, weight_normalization @@ -1190,17 +1247,18 @@ From this perspective, parametrisation maps an unconstrained tensor to a constrained space such as the space of orthogonal matrices, SPD matrices, low-rank matrices... This approach is implemented in the library GeoTorch (https://github.com/Lezcano/geotorch/)."' - link: '' + link: "" section: K6 title: Constrained Optimization in PyTorch 1.9 Through Parametrizations - authors: - - Richard Liaw - - Kai Fricke - - Amog Kamsetty - - Michael Galarnyk + - Richard Liaw + - Kai Fricke + - Amog Kamsetty + - Michael Galarnyk categories: - - Platforms & Ops & Tools - description: Ray is a popular framework for distributed Python that can be paired + - Platforms & Ops & Tools + description: + Ray is a popular framework for distributed Python that can be paired with PyTorch to rapidly scale machine learning applications. Ray contains a large ecosystem of applications and libraries that leverage and integrate with Pytorch. This includes Ray Tune, a Python library for experiment execution and hyperparameter @@ -1209,18 +1267,19 @@ are becoming the core foundation for the next generation of production machine learning platforms. link: https://ray.io/ - poster_link: https://assets.pytorch.org/pted2021/posters/H1.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H1.png section: H1 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H1.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H1.png title: Distributed Pytorch with Ray - authors: - - Vincenzo Lomonaco - - Lorenzo Pellegrini Andrea Cossu - - Antonio Carta - - Gabriele Graffieti + - Vincenzo Lomonaco + - Lorenzo Pellegrini Andrea Cossu + - Antonio Carta + - Gabriele Graffieti categories: - - Platforms & Ops & Tools - description: Learning continually from non-stationary data stream is a long sought + - Platforms & Ops & Tools + description: + Learning continually from non-stationary data stream is a long sought goal of machine learning research. Recently, we have witnessed a renewed and fast-growing interest in Continual Learning, especially within the deep learning community. 
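To make the parametrizations entry (K6) above concrete, a minimal example of `torch.nn.utils.parametrize.register_parametrization` as introduced in PyTorch 1.9: an unconstrained weight is re-expressed through a module that maps it onto a constrained set, here symmetric matrices.

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Symmetric(nn.Module):
    def forward(self, X):
        # Map an unconstrained square matrix to a symmetric one.
        return X.triu() + X.triu(1).transpose(-1, -2)

layer = nn.Linear(4, 4)
parametrize.register_parametrization(layer, "weight", Symmetric())

# The exposed weight is now always symmetric; the optimizer updates the
# underlying unconstrained tensor stored in layer.parametrizations.weight.
print((layer.weight == layer.weight.T).all())
```

The same mechanism underlies pruning, weight normalization, and the manifold constraints (orthogonal, SPD, low-rank) provided by GeoTorch.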
However, algorithmic solutions are often difficult to re-implement, evaluate and @@ -1230,30 +1289,32 @@ code-base for fast prototyping, training and reproducible evaluation of continual learning algorithms. link: https://avalanche.continualai.org - poster_link: https://assets.pytorch.org/pted2021/posters/H2.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H2.png section: H2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H2.png - title: 'Avalanche: an End-to-End Library for Continual Learning based on PyTorch' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H2.png + title: "Avalanche: an End-to-End Library for Continual Learning based on PyTorch" - authors: - - Hong Xu + - Hong Xu categories: - - Platforms & Ops & Tools - description: IBM Z is a hardware product line for mission-critical applications, + - Platforms & Ops & Tools + description: + IBM Z is a hardware product line for mission-critical applications, such as finance and health applications. It employs its own CPU architecture, which PyTorch does not officially support. In this poster, we discuss why it is important to support PyTorch on Z. Then, we show our prebuilt minimal PyTorch package for IBM Z. Finally, we demonstrate our continuing commitment to make more PyTorch features available on IBM Z. link: https://codait.github.io/pytorch-on-z - poster_link: https://assets.pytorch.org/pted2021/posters/H3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/H3.png section: H3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-H3.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-H3.png title: PyTorch on IBM Z and LinuxONE (s390x) - authors: - - Dr. Ariel Biller + - Dr. Ariel Biller categories: - - Platforms & Ops & Tools - description: "Both from sanity considerations and the productivity perspective,\ + - Platforms & Ops & Tools + description: + "Both from sanity considerations and the productivity perspective,\ \ Data Scientists, ML engineers, Graduate students, and other research-facing\ \ roles are all starting to adopt best-practices from production-grade MLOps.\n\ \nHowever, most toolchains come with a hefty price of extra code and maintenance,\ @@ -1267,19 +1328,20 @@ \ pipeline. We will measure the number of changes needed to the codebase and provide\ \ evidence of real low-cost integration. All code, logs, and metrics will be available\ \ as supporting information." - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/I1.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I1.png section: I1 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I1.png - title: 'The Fundamentals of MLOps for R&D: Orchestration, Automation, Reproducibility' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I1.png + title: "The Fundamentals of MLOps for R&D: Orchestration, Automation, Reproducibility" - authors: - - Masashi Sode - - Akihiko Fukuchi - - Yoki Yabe - - Yasufumi Nakata + - Masashi Sode + - Akihiko Fukuchi + - Yoki Yabe + - Yasufumi Nakata categories: - - Platforms & Ops & Tools - description: "Is your machine learning model fair enough to be used in your system?\ + - Platforms & Ops & Tools + description: + "Is your machine learning model fair enough to be used in your system?\ \ What if a recruiting AI discriminates on gender and race? 
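The Avalanche entry above is about reproducible evaluation of continual-learning algorithms. The sketch below is plain PyTorch, not Avalanche's API: it only illustrates the protocol such a library standardizes, i.e. train on a stream of experiences one at a time and re-evaluate on everything seen so far to expose forgetting.

```python
import torch
import torch.nn as nn

def accuracy(model, loader):
    correct = total = 0
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            correct += (model(x).argmax(dim=1) == y).sum().item()
            total += y.numel()
    return correct / total

def continual_training(model, experiences, epochs=1, lr=1e-3):
    """experiences: list of (train_loader, test_loader) pairs seen sequentially."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for i, (train_loader, _) in enumerate(experiences):
        model.train()
        for _ in range(epochs):
            for x, y in train_loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        # Evaluate on every experience seen so far to expose forgetting.
        scores = [accuracy(model, test) for _, test in experiences[: i + 1]]
        print(f"after experience {i}: " + ", ".join(f"{s:.3f}" for s in scores))
```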
What if the accuracy\ \ of medical AI depends on a person's annual income or on the GDP of the country\ \ where it is used? Today's AI has the potential to cause such problems. In recent\ @@ -1294,16 +1356,17 @@ \ it allows you to add a fairness constraint to your model by adding only a few\ \ lines of code, using the fairness criteria provided in the library." link: https://github.com/wbawakate/fairtorch - poster_link: https://assets.pytorch.org/pted2021/posters/I2.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I2.png section: I2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I2.png - title: 'FairTorch: Aspiring to Mitigate the Unfairness of Machine Learning Models' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I2.png + title: "FairTorch: Aspiring to Mitigate the Unfairness of Machine Learning Models" - authors: - - Thomas Viehmann - - Luca Antiga + - Thomas Viehmann + - Luca Antiga categories: - - Platforms & Ops & Tools - description: 'When machine learning models are deployed to solve a given task, a + - Platforms & Ops & Tools + description: + "When machine learning models are deployed to solve a given task, a crucial question is whether they are actually able to perform as expected. TorchDrift addresses one aspect of the answer, namely drift detection, or whether the information flowing through our models - either probed at the input, output or somewhere in-between @@ -1311,7 +1374,7 @@ TorchDrift is designed to be plugged into PyTorch models and check whether they are operating within spec. - TorchDrift''s principles apply PyTorch''s motto _from research to production_ + TorchDrift's principles apply PyTorch's motto _from research to production_ to drift detection: We provide a library of methods that canbe used as baselines or building blocks for drift detection research, as well as provide practitioners deploying PyTorch models in production with up-to-date methods and educational @@ -1319,25 +1382,26 @@ TorchDrift with an example illustrating the underlying two-sample tests. We show how TorchDrift can be integrated in high-performance runtimes such as TorchServe or RedisAI, to enable drift detection in real-world applications thanks to the - PyTorch JIT.' + PyTorch JIT." link: https://torchdrift.org/ - poster_link: https://assets.pytorch.org/pted2021/posters/I3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/I3.png section: I3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-I3.png - title: 'TorchDrift: Drift Detection for PyTorch' -- authors: - - Quincy Chen - - Arjun Bhargava - - Sudeep Pillai - - Marcus Pan - - Chao Fang - - Chris Ochoa - - Adrien Gaidon - - Kuan-Hui Lee - - Wolfram Burgard - categories: - - Platforms & Ops & Tools - description: "Modern machine learning for autonomous vehicles requires a fundamentally\ + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-I3.png + title: "TorchDrift: Drift Detection for PyTorch" +- authors: + - Quincy Chen + - Arjun Bhargava + - Sudeep Pillai + - Marcus Pan + - Chao Fang + - Chris Ochoa + - Adrien Gaidon + - Kuan-Hui Lee + - Wolfram Burgard + categories: + - Platforms & Ops & Tools + description: + "Modern machine learning for autonomous vehicles requires a fundamentally\ \ different infrastructure and production lifecycle from their standard software\ \ continuous-integration/continuous-deployment counterparts. 
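The TorchDrift entry above frames drift detection as a two-sample test on features flowing through a model. A minimal sketch of one such statistic (a biased kernel MMD estimate) in plain PyTorch, not TorchDrift's actual API; the feature tensors and the decision threshold are illustrative assumptions.

```python
import torch

def rbf_mmd(x, y, sigma=1.0):
    """Squared MMD between samples x and y under an RBF kernel (biased estimate)."""
    xy = torch.cat([x, y], dim=0)
    d2 = torch.cdist(xy, xy).pow(2)
    k = torch.exp(-d2 / (2 * sigma**2))
    n = len(x)
    kxx, kyy, kxy = k[:n, :n], k[n:, n:], k[:n, n:]
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# Features extracted from reference (training-time) data vs. production data;
# a statistic well above a bootstrapped threshold suggests the inputs have drifted.
ref = torch.randn(200, 32)
prod = torch.randn(200, 32) + 0.5
print(rbf_mmd(ref, prod).item())
```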
At Toyota Research\ \ Institute (TRI), we have developed \u200BOuroboros\u200B - a modern ML platform\ @@ -1362,15 +1426,16 @@ \ learning in our autonomous vehicle fleets and accelerate the transition from\ \ research to production." link: https://github.com/TRI-ML - poster_link: https://assets.pytorch.org/pted2021/posters/J1.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/J1.png section: J1 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-J1.png - title: 'Ouroboros: MLOps for Automated Driving' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-J1.png + title: "Ouroboros: MLOps for Automated Driving" - authors: - - Yujian He + - Yujian He categories: - - Platforms & Ops & Tools - description: carefree-learn makes PyTorch accessible to people who are familiar + - Platforms & Ops & Tools + description: + carefree-learn makes PyTorch accessible to people who are familiar with machine learning but not necessarily PyTorch. By having already implemented all the pre-processing and post-processing under the hood, users can focus on implementing the core machine learning algorithms / models with PyTorch and test @@ -1383,16 +1448,17 @@ so users can either run multiple tasks at the same time, or run a huge model with DDP in one line of code. carefree-learn also integrates with mlflow and supports exporting to ONNX, which means it is ready for production to some extend. - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/J2.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/J2.png section: J2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-J2.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-J2.png title: "carefree-learn: Tabular Datasets \u2764\uFE0F PyTorch" - authors: - - Wenwei Zhang + - Wenwei Zhang categories: - - Platforms & Ops & Tools - description: 'OpenMMLab project builds open-source toolboxes for Artificial Intelligence + - Platforms & Ops & Tools + description: + "OpenMMLab project builds open-source toolboxes for Artificial Intelligence (AI). It aims to 1) provide high-quality codebases to reduce the difficulties in algorithm reimplementation; 2) provide a complete research platform to accelerate the research production; and 3) shorten the gap between research production to @@ -1404,32 +1470,34 @@ Since the initial release in October 2018, OpenMMLab has released 15+ toolboxes that cover 10+ directions, implement 100+ algorithms, and contain 1000+ pre-trained models. With a tighter collaboration with the community, OpenMMLab will release - more toolboxes with more flexible and easy-to-use training frameworks in the future.' + more toolboxes with more flexible and easy-to-use training frameworks in the future." 
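The OpenMMLab description above emphasizes high-quality codebases built on a unified training framework. As one concrete example, MMDetection (one of the OpenMMLab toolboxes) exposes config-driven inference roughly as sketched below; the config and checkpoint paths are placeholders and the exact call names follow the MMDetection 2.x API as I recall it.

```python
from mmdet.apis import init_detector, inference_detector

# Placeholder paths: any config/checkpoint pair shipped with MMDetection works here.
config_file = "configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py"
checkpoint_file = "checkpoints/faster_rcnn_r50_fpn_1x_coco.pth"

# Build the detector from its config and load pretrained weights.
model = init_detector(config_file, checkpoint_file, device="cpu")

# Run inference on a single image; the result is a per-class list of boxes.
result = inference_detector(model, "demo.jpg")
```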
link: https://openmmlab.com/ - poster_link: https://assets.pytorch.org/pted2021/posters/J3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/J3.png section: J3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-J3.png - title: 'OpenMMLab: An Open-Source Algorithm Platform for Computer Vision' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-J3.png + title: "OpenMMLab: An Open-Source Algorithm Platform for Computer Vision" - authors: - - Sergey Kolesnikov + - Sergey Kolesnikov categories: - - Platforms & Ops & Tools - description: "For the last three years, Catalyst-Team and collaborators have been\ + - Platforms & Ops & Tools + description: + "For the last three years, Catalyst-Team and collaborators have been\ \ working on Catalyst\u200A - a high-level PyTorch framework Deep Learning Research\ \ and Development. It focuses on reproducibility, rapid experimentation, and codebase\ \ reuse so you can create something new rather than write yet another train loop.\ \ You get metrics, model checkpointing, advanced logging, and distributed training\ \ support without the boilerplate and low-level bugs." link: https://catalyst-team.com - poster_link: https://assets.pytorch.org/pted2021/posters/K2.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K2.png section: K2 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-K2.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K2.png title: "Catalyst \u2013 Accelerated deep learning R&D" - authors: - - Anton Obukhov + - Anton Obukhov categories: - - Platforms & Ops & Tools - description: "Evaluation of generative models such as GANs is an important part\ + - Platforms & Ops & Tools + description: + "Evaluation of generative models such as GANs is an important part\ \ of deep learning research. In 2D image generation, three approaches became widely\ \ spread: Inception Score, Fr\xE9chet Inception Distance, and Kernel Inception\ \ Distance. Despite having a clear mathematical and algorithmic description, these\ @@ -1448,19 +1516,20 @@ \ and sources of remaining non-determinism summarized in sections below.\nTLDR;\ \ fast and reliable GAN evaluation in PyTorch" link: https://github.com/toshas/torch-fidelity - poster_link: https://assets.pytorch.org/pted2021/posters/K3.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/K3.png section: K3 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-K3.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-K3.png title: High-fidelity performance metrics for generative models in PyTorch - authors: - - 'Jona Raphael (jona@skytruth.org)' - - Ben Eggleston - - Ryan Covington - - Tatianna Evanisko - - John Amos + - Jona Raphael (jona@skytruth.org) + - Ben Eggleston + - Ryan Covington + - Tatianna Evanisko + - John Amos categories: - - Vision - description: "Operational oil discharges from ships, also known as \"bilge dumping,\"\ + - Vision + description: + "Operational oil discharges from ships, also known as \"bilge dumping,\"\ \ have been identified as a major source of petroleum products entering our oceans,\ \ cumulatively exceeding the largest oil spills, such as the Exxon Valdez and\ \ Deepwater Horizon spills, even when considered over short time spans. However,\ @@ -1486,16 +1555,17 @@ \ a database with more than 1100 high-confidence slicks from vessels. 
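For the torch-fidelity entry above, a sketch of its programmatic entry point as documented in the project README (recalled from memory; the image directories are placeholders). Each flag toggles one of the three metrics the poster discusses: Inception Score, FID, and KID.

```python
import torch_fidelity

metrics = torch_fidelity.calculate_metrics(
    input1="path/to/generated_images",   # placeholder directory of generated samples
    input2="path/to/real_images",        # placeholder directory of reference images
    cuda=False,                          # set True when a GPU is available
    isc=True,
    fid=True,
    kid=True,
)
print(metrics)  # dict of metric names to values
```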
We will\ \ be discussing preliminary results from this dataset and remaining challenges\ \ to be overcome.\nLearn more at https://skytruth.org/bilge-dumping/" - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/A6.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A6.png section: A6 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-A6.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A6.png title: Using Satellite Imagery to Identify Oceanic Oil Pollution - authors: - - Tanishq Abraham + - Tanishq Abraham categories: - - Vision - description: Unpaired image-to-image translation algorithms have been used for various + - Vision + description: + Unpaired image-to-image translation algorithms have been used for various computer vision tasks like style transfer and domain adaption. Such algorithms are highly attractive because they alleviate the need for the collection of paired datasets. In this poster, we demonstrate UPIT, a novel fastai/PyTorch package @@ -1509,46 +1579,48 @@ dataset types, models, and metrics can be used as well. With UPIT, training and applying unpaired image-to-image translation only takes a few lines of code. link: https://github.com/tmabraham/UPIT - poster_link: https://assets.pytorch.org/pted2021/posters/A7.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A7.png section: A7 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-A7.png - title: 'UPIT: A fastai Package for Unpaired Image-to-Image Translation' -- authors: - - Aaron Adcock - - Bo Xiong - - Christoph Feichtenhofer - - Haoqi Fan - - Heng Wang - - Kalyan Vasudev Alwala - - Matt Feiszli - - Tullie Murrell - - Wan-Yen Lo - - Yanghao Li - - Yilei Li - - 'Zhicheng Yan ' - categories: - - Vision - description: PyTorchVideo is the new Facebook AI deep learning library for video + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A7.png + title: "UPIT: A fastai Package for Unpaired Image-to-Image Translation" +- authors: + - Aaron Adcock + - Bo Xiong + - Christoph Feichtenhofer + - Haoqi Fan + - Heng Wang + - Kalyan Vasudev Alwala + - Matt Feiszli + - Tullie Murrell + - Wan-Yen Lo + - Yanghao Li + - Yilei Li + - "Zhicheng Yan " + categories: + - Vision + description: + PyTorchVideo is the new Facebook AI deep learning library for video understanding research. It contains variety of state of the art pretrained video models, dataset, augmentation, tools for video understanding. PyTorchVideo provides efficient video components on accelerated inference on mobile device. link: https://pytorchvideo.org/ - poster_link: https://assets.pytorch.org/pted2021/posters/A8.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/A8.png section: A8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-A8.png - title: 'PyTorchVideo: A Deep Learning Library for Video Understanding' -- authors: - - A. Speiser - - "L-R. M\xFCller" - - P. Hoess - - U. Matti - - C. J. Obara - - J. H. Macke - - J. Ries - - S. C. Turaga - categories: - - Vision - description: Single-molecule localization microscopy (SMLM) has had remarkable success + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-A8.png + title: "PyTorchVideo: A Deep Learning Library for Video Understanding" +- authors: + - A. Speiser + - "L-R. M\xFCller" + - P. Hoess + - U. Matti + - C. J. Obara + - J. H. Macke + - J. Ries + - S. 
C. Turaga + categories: + - Vision + description: + Single-molecule localization microscopy (SMLM) has had remarkable success in imaging cellular structures with nanometer resolution, but the need for activating only single isolated emitters limits imaging speed and labeling density. Here, we overcome this major limitation using deep learning. We developed DECODE, a @@ -1563,33 +1635,36 @@ in SMLM. link: http://github.com/turagalab/decode section: B6 - title: Deep Learning Enables Fast and Dense Single-Molecule Localization with High + title: + Deep Learning Enables Fast and Dense Single-Molecule Localization with High Accuracy - authors: - - "Abraham S\xE1nchez" - - Guillermo Mendoza - - "E. Ulises Moya-S\xE1nchez" + - "Abraham S\xE1nchez" + - Guillermo Mendoza + - "E. Ulises Moya-S\xE1nchez" categories: - - Vision - description: 'We draw inspiration from the cortical area V1. We try to mimic their + - Vision + description: + "We draw inspiration from the cortical area V1. We try to mimic their main processing properties by means of: quaternion local phase/orientation to compute lines and edges detection in a specific direction. We analyze how this - layer is robust by its greometry to large illumination and brightness changes.' + layer is robust by its greometry to large illumination and brightness changes." link: https://gitlab.com/ab.sanchezperez/pytorch-monogenic - poster_link: https://assets.pytorch.org/pted2021/posters/B7.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/B7.png section: B7 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-B7.png + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-B7.png title: A Robust PyTorch Trainable Entry Convnet Layer in Fourier Domain - authors: - - "Fran\xE7ois-Guillaume Fernandez" - - Mateo Lostanlen - - Sebastien Elmaleh - - Bruno Lenzi - - Felix Veith - - and more than 15+ contributors + - "Fran\xE7ois-Guillaume Fernandez" + - Mateo Lostanlen + - Sebastien Elmaleh + - Bruno Lenzi + - Felix Veith + - and more than 15+ contributors categories: - - Vision - description: '"PyroNear is non-profit organization composed solely of volunteers + - Vision + description: + '"PyroNear is non-profit organization composed solely of volunteers which was created in late 2019. Our core belief is that recent technological developments can support the cohabitation between mankind & its natural habitat. We strive towards high-performing, accessible & affordable tech-solutions for protection @@ -1619,41 +1694,43 @@ Point clouds and Implicit functions, as well as several other tools for 3D Deep Learning.' 
link: https://github.com/pyronear - poster_link: https://assets.pytorch.org/pted2021/posters/B8.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/B8.png section: B8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-B8.png - title: 'PyroNear: Embedded Deep Learning for Early Wildfire Detection' -- authors: - - Nikhila Ravi - - Jeremy Reizenstein - - David Novotny - - Justin Johnson - - Georgia Gkioxari - - Roman Shapovalov - - Patrick Labatut - - Wan-Yen Lo - categories: - - Vision - description: 'PyTorch3D is a modular and optimized library for 3D Deep Learning + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-B8.png + title: "PyroNear: Embedded Deep Learning for Early Wildfire Detection" +- authors: + - Nikhila Ravi + - Jeremy Reizenstein + - David Novotny + - Justin Johnson + - Georgia Gkioxari + - Roman Shapovalov + - Patrick Labatut + - Wan-Yen Lo + categories: + - Vision + description: + "PyTorch3D is a modular and optimized library for 3D Deep Learning with PyTorch. It includes support for: data structures for heterogeneous batching of 3D data (Meshes, Point clouds and Volumes), optimized 3D operators and loss functions (with custom CUDA kernels), a modular differentiable rendering API for Meshes, Point clouds and Implicit functions, as well as several other tools for - 3D Deep Learning.' + 3D Deep Learning." link: https://arxiv.org/abs/2007.08501 - poster_link: https://assets.pytorch.org/pted2021/posters/C6.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C6.png section: C6 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C6.png - title: 'PyTorch3D: Fast, Flexible, 3D Deep Learning ' -- authors: - - E. Riba - - J. Shi - - D. Mishkin - - L. Ferraz - - A. Nicolao - categories: - - Vision - description: This work presents Kornia, an open source computer vision library built + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C6.png + title: "PyTorch3D: Fast, Flexible, 3D Deep Learning " +- authors: + - E. Riba + - J. Shi + - D. Mishkin + - L. Ferraz + - A. Nicolao + categories: + - Vision + description: + This work presents Kornia, an open source computer vision library built upon a set of differentiable routines and modules that aims to solve generic computer vision problems. The package uses PyTorch as its main backend, not only for efficiency but also to take advantage of the reverse auto-differentiation engine to define @@ -1666,15 +1743,16 @@ of classical vision problems implemented using our framework are provided including a benchmark comparing to existing vision libraries. link: http://www.kornia.org - poster_link: https://assets.pytorch.org/pted2021/posters/C7.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C7.png section: C7 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C7.png - title: 'Kornia: an Open Source Differentiable Computer Vision Library for PyTorch' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C7.png + title: "Kornia: an Open Source Differentiable Computer Vision Library for PyTorch" - authors: - - Thomas George + - Thomas George categories: - - Vision - description: Fisher Information Matrices (FIM) and Neural Tangent Kernels (NTK) + - Vision + description: + Fisher Information Matrices (FIM) and Neural Tangent Kernels (NTK) are useful tools in a number of diverse applications related to neural networks. 
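The Kornia entry above centers on differentiable versions of classical vision operators. A small sketch of that idea: gradients flow from an edge map back to the input image through Kornia's filtering ops (the random tensor stands in for a real image batch).

```python
import torch
import kornia

# A random batch stands in for real images; requires_grad demonstrates that the
# operators below are differentiable end to end.
img = torch.rand(1, 3, 64, 64, requires_grad=True)

blurred = kornia.filters.gaussian_blur2d(img, kernel_size=(5, 5), sigma=(1.5, 1.5))
edges = kornia.filters.sobel(blurred)

edges.mean().backward()
print(img.grad.shape)  # gradients reach the input image
```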
Yet these theoretical tools are often difficult to implement using current libraries for practical size networks, given that they require per-example gradients, and @@ -1685,35 +1763,38 @@ and so on, where the matrix is either the FIM or the NTK, leveraging recent advances in approximating these matrices. link: https://github.com/tfjgeorge/nngeometry/ - poster_link: https://assets.pytorch.org/pted2021/posters/C8.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/C8.png section: C8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-C8.png - title: 'NNGeometry: Easy and Fast Fisher Information Matrices and Neural Tangent - Kernels in PyTorch' -- authors: - - "B\xE9gaint J." - - "Racap\xE9 F." - - Feltman S. - - Pushparaja A. - categories: - - Vision - description: CompressAI is a PyTorch library that provides custom operations, layers, + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-C8.png + title: + "NNGeometry: Easy and Fast Fisher Information Matrices and Neural Tangent + Kernels in PyTorch" +- authors: + - "B\xE9gaint J." + - "Racap\xE9 F." + - Feltman S. + - Pushparaja A. + categories: + - Vision + description: + CompressAI is a PyTorch library that provides custom operations, layers, modules and tools to research, develop and evaluate end-to-end image and video compression codecs. In particular, CompressAI includes pre-trained models and evaluation tools to compare learned methods with traditional codecs. State-of-the-art end-to-end compression models have been reimplemented in PyTorch and trained from scratch, reproducing published results and allowing further research in the domain. - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/D6.png + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D6.png section: D6 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-D6.png - title: 'CompressAI: a research library and evaluation platform for end-to-end compression ' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D6.png + title: "CompressAI: a research library and evaluation platform for end-to-end compression " - authors: - - Philip Meier - - Volker Lohweg + - Philip Meier + - Volker Lohweg categories: - - Vision - description: "The seminal work of Gatys, Ecker, and Bethge gave birth to the field\ + - Vision + description: + "The seminal work of Gatys, Ecker, and Bethge gave birth to the field\ \ of _Neural Style Transfer_ (NST) in 2016. An NST describes the merger between\ \ the content and artistic style of two arbitrary images. This idea is nothing\ \ new in the field of Non-photorealistic rendering (NPR). What distinguishes NST\ @@ -1730,15 +1811,16 @@ \ ease. This poster will showcase the core concepts of `pystiche` that will enable\ \ other researchers as well as lay persons to got an NST running in minutes." 
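To illustrate the CompressAI entry above (pre-trained end-to-end codecs plus evaluation tools), a sketch of loading a zoo model and estimating its rate in bits per pixel. The model name, quality level, and output keys follow the CompressAI README as I recall them and should be treated as assumptions; the input tensor is a stand-in image batch.

```python
import math
import torch
from compressai.zoo import bmshj2018_factorized

# Pre-trained factorized-prior model (assumed name/quality level from the zoo).
net = bmshj2018_factorized(quality=2, pretrained=True).eval()
x = torch.rand(1, 3, 256, 256)  # stand-in image; spatial dims divisible by 64

with torch.no_grad():
    out = net(x)

# Rate estimate from the learned entropy model's likelihoods.
num_pixels = x.size(2) * x.size(3)
bpp = sum(torch.log(lk).sum() for lk in out["likelihoods"].values()) / (
    -math.log(2) * num_pixels
)
print(f"estimated rate: {bpp.item():.3f} bpp")
# out["x_hat"] holds the reconstruction for distortion metrics such as PSNR.
```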
link: https://github.com/pmeier/pystiche - poster_link: https://assets.pytorch.org/pted2021/posters/D7.png + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D7.png section: D7 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-D7.png - title: 'pystiche: A Framework for Neural Style Transfer' + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D7.png + title: "pystiche: A Framework for Neural Style Transfer" - authors: - - Siddhish Thakur + - Siddhish Thakur categories: - - Vision - description: ' Deep Learning (DL) has greatly highlighted the potential impact of + - Vision + description: + " Deep Learning (DL) has greatly highlighted the potential impact of optimized machine learning in both the scientific and clinical communities. The advent of open-source DL libraries from major industrial @@ -1761,10 +1843,11 @@ Keywords: Deep Learning, Framework, Segmentation, Regression, Classification, Cross-validation, Data - augmentation, Deployment, Clinical, Workflows' - link: '' - poster_link: https://assets.pytorch.org/pted2021/posters/D8.png + augmentation, Deployment, Clinical, Workflows" + link: "" + poster_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/D8.png section: D8 - thumbnail_link: https://assets.pytorch.org/pted2021/posters/thumb-D8.png - title: " GaNDLF \u2013 A Generally Nuanced Deep Learning Framework for Clinical\ + thumbnail_link: https://s3.amazonaws.com/assets.pytorch.org/pted2021/posters/thumb-D8.png + title: + " GaNDLF \u2013 A Generally Nuanced Deep Learning Framework for Clinical\ \ Imaging Workflows"