Distill — Latest articles about machine learning
Distill is dedicated to clear explanations of machine learning. ISSN 2476-0757


Sept. 2, 2021 · Peer-reviewed
Understanding Convolutions on Graphs
Ameya Daigavane, Balaraman Ravindran, and Gaurav Aggarwal
Understanding the building blocks and design choices of graph neural networks.

Sept. 2, 2021 · Peer-reviewed
A Gentle Introduction to Graph Neural Networks
Benjamin Sanchez-Lengeling, Emily Reif, Adam Pearce, and Alexander B. Wiltschko
What components are needed for building learning algorithms that leverage the structure and properties of graphs?

July 2, 2021 · Editorial
Distill Hiatus
Editorial Team
After five years, Distill will be taking a break.

March 4, 2021 · Peer-reviewed
Multimodal Neurons in Artificial Neural Networks
Gabriel Goh, Nick Cammarata†, Chelsea Voss†, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah
We report the existence of multimodal neurons in artificial neural networks, similar to those found in the human brain.

Nov. 17, 2020 · Peer-reviewed
Understanding RL Vision
Jacob Hilton, Nick Cammarata, Shan Carter, Gabriel Goh, and Chris Olah
With diverse environments, we can analyze, diagnose, and edit deep reinforcement learning models using attribution.

Sept. 11, 2020 · Commentary
Communicating with Interactive Articles
Fred Hohman, Matthew Conlen, Jeffrey Heer, and Duen Horng (Polo) Chau
Examining the design of interactive articles by synthesizing theory from disciplines such as education, journalism, and visualization.

Aug. 27, 2020 · Thread
Thread: Differentiable Self-organizing Systems
Alexander Mordvintsev, Ettore Randazzo, Eyvind Niklasson, Michael Levin, and Sam Greydanus
A collection of articles and comments with the goal of understanding how to design robust and general-purpose self-organizing systems.

May 5, 2020 · Peer-reviewed
Exploring Bayesian Optimization
Apoorv Agnihotri and Nipun Batra
How to tune hyperparameters for your machine learning model using Bayesian optimization. (A minimal tuning-loop sketch appears after this archive.)

March 16, 2020 · Peer-reviewed
Visualizing Neural Networks with the Grand Tour
Mingwei Li, Zhenge Zhao, and Carlos Scheidegger
By focusing on linear dimensionality reduction, we show how to visualize many dynamic phenomena in neural networks.

March 10, 2020 · Thread
Thread: Circuits
Nick Cammarata, Shan Carter, Gabriel Goh, Chris Olah, Michael Petrov, Ludwig Schubert, Chelsea Voss, Ben Egan, and Swee Kiat Lim
What can we learn if we invest heavily in reverse engineering a single neural network?

Jan. 10, 2020 · Peer-reviewed
Visualizing the Impact of Feature Attribution Baselines
Pascal Sturmfels, Scott Lundberg, and Su-In Lee
Exploring the baseline input hyperparameter, and how it impacts interpretations of neural network behavior.

Nov. 4, 2019 · Peer-reviewed
Computing Receptive Fields of Convolutional Neural Networks
André Araujo, Wade Norris, and Jack Sim
Detailed derivations and open-source code to analyze the receptive fields of convnets. (A receptive-field arithmetic sketch appears after this archive.)

Sept. 30, 2019 · Peer-reviewed
The Paths Perspective on Value Learning
Sam Greydanus and Chris Olah
A closer look at how Temporal Difference Learning merges paths of experience for greater statistical efficiency.

Aug. 6, 2019 · Commentary
A Discussion of ‘Adversarial Examples Are Not Bugs, They Are Features’
Logan Engstrom, Justin Gilmer, Gabriel Goh, Dan Hendrycks, Andrew Ilyas, Aleksander Madry, Reiichiro Nakano, Preetum Nakkiran, Shibani Santurkar, Brandon Tran, Dimitris Tsipras, and Eric Wallace
Six comments from the community and responses from the original authors.

April 9, 2019 · Commentary
Open Questions about Generative Adversarial Networks
Augustus Odena
What we’d like to find out about GANs that we don’t know yet.
April 2, 2019 · Peer-reviewed
A Visual Exploration of Gaussian Processes
Jochen Görtler, Rebecca Kehlbeck, and Oliver Deussen
How to turn a collection of small building blocks into a versatile tool for solving regression problems.

March 25, 2019 · Peer-reviewed
Visualizing memorization in RNNs
Andreas Madsen
Inspecting gradient magnitudes in context can be a powerful tool to see when recurrent units use short-term or long-term contextual understanding.

March 6, 2019 · Peer-reviewed
Activation Atlas
Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah
By using feature inversion to visualize millions of activations from an image classification network, we create an explorable activation atlas of features the network has learned, and what concepts it typically represents.

Feb. 19, 2019 · Commentary
AI Safety Needs Social Scientists
Geoffrey Irving and Amanda Askell
If we want to train AI to do what humans want, we need to study humans.

Aug. 14, 2018 · Editorial
Distill Update 2018
Distill Editors
An update from the editorial team.

July 25, 2018 · Peer-reviewed
Differentiable Image Parameterizations
Alexander Mordvintsev, Nicola Pezzotti, Ludwig Schubert, and Chris Olah
A powerful, under-explored tool for neural network visualizations and art.

July 9, 2018 · Peer-reviewed
Feature-wise transformations
Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio
A simple and surprisingly effective family of conditioning mechanisms.

March 6, 2018 · Peer-reviewed
The Building Blocks of Interpretability
Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev
Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them, and the rich structure of this combinatorial space.

Dec. 4, 2017 · Commentary
Using Artificial Intelligence to Augment Human Intelligence
Shan Carter and Michael Nielsen
By creating user interfaces which let us work with the representations inside machine learning models, we can give people new tools for reasoning.

Nov. 27, 2017 · Peer-reviewed
Sequence Modeling with CTC
Awni Hannun
A visual guide to Connectionist Temporal Classification, an algorithm used to train deep neural networks in speech recognition, handwriting recognition, and other sequence problems.

Nov. 7, 2017 · Peer-reviewed
Feature Visualization
Chris Olah, Alexander Mordvintsev, and Ludwig Schubert
How neural networks build up their understanding of images.

April 4, 2017 · Peer-reviewed
Why Momentum Really Works
Gabriel Goh
We often think of optimization with momentum as a ball rolling down a hill. This isn’t wrong, but there is much more to the story. (A minimal momentum sketch appears after this archive.)

March 22, 2017 · Commentary
Research Debt
Chris Olah and Shan Carter
Science is a human activity. When we fail to distill and explain research, we accumulate a kind of debt...

Dec. 6, 2016
Experiments in Handwriting with a Neural Network
Shan Carter, David Ha, Ian Johnson, and Chris Olah
Several interactive visualizations of a generative model of handwriting. Some are fun, some are serious.

Oct. 17, 2016
Deconvolution and Checkerboard Artifacts
Augustus Odena, Vincent Dumoulin, and Chris Olah
When we look very closely at images generated by neural networks, we often see a strange checkerboard pattern of artifacts. (An upsampling sketch appears after this archive.)

Oct. 13, 2016
How to Use t-SNE Effectively
Martin Wattenberg, Fernanda Viégas, and Ian Johnson
Although extremely useful for visualizing high-dimensional data, t-SNE plots can sometimes be mysterious or misleading. (A perplexity sketch appears after this archive.)
Sept. 8, 2016
Attention and Augmented Recurrent Neural Networks
Chris Olah and Shan Carter
A visual overview of neural attention, and the powerful extensions of neural networks being built on top of it.
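The code sketches referenced in the archive entries above follow. First, for "Exploring Bayesian Optimization": a minimal hyperparameter-tuning loop. It assumes the scikit-optimize package, which the listing does not mention, and the objective below is a hypothetical stand-in for a real train-and-validate run.

```python
# A minimal sketch of Bayesian hyperparameter tuning. The search space and
# objective are illustrative assumptions, not values from the article.
import numpy as np
from skopt import gp_minimize

def objective(params):
    # Hypothetical noisy validation loss over (log10 learning rate,
    # log10 regularization strength); replace with an actual training run.
    log_lr, log_reg = params
    return (log_lr + 3.0) ** 2 + 0.5 * (log_reg + 5.0) ** 2 + np.random.normal(0, 0.01)

result = gp_minimize(
    objective,
    dimensions=[(-6.0, 0.0),   # log10 learning rate
                (-8.0, 0.0)],  # log10 regularization strength
    n_calls=30,                # total objective evaluations
    acq_func="EI",             # expected-improvement acquisition
    random_state=0,
)
print("best params:", result.x, "best loss:", result.fun)
```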
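For "Computing Receptive Fields of Convolutional Neural Networks": a sketch of the standard receptive-field recurrence for a plain chain of convolutions. The layer specs are invented examples; the article itself treats more general architectures.

```python
# Receptive-field arithmetic for a sequential stack of conv layers.
def receptive_field(layers):
    """layers: list of (kernel_size, stride) pairs, from input to output."""
    rf, jump = 1, 1  # field size and cumulative stride ("jump") in input pixels
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) input steps
        jump *= s             # later layers step in multiples of earlier strides
    return rf

# Example: three 3x3 convs, the middle one with stride 2.
print(receptive_field([(3, 1), (3, 2), (3, 1)]))  # -> 9
```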
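For "Why Momentum Really Works": a minimal heavy-ball momentum loop on a one-dimensional quadratic. The step size, momentum coefficient, and objective are illustrative choices, not values from the article.

```python
# Gradient descent with momentum: the velocity term is the "ball rolling
# down a hill" intuition made concrete.
def minimize(grad, x0, lr=0.1, beta=0.9, steps=100):
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + grad(x)   # velocity accumulates a decaying sum of past gradients
        x = x - lr * v           # step along the accumulated velocity
    return x

grad = lambda x: 2.0 * (x - 3.0)   # gradient of f(x) = (x - 3)^2
print(minimize(grad, x0=0.0))      # converges near the minimum at x = 3
```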
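For "Deconvolution and Checkerboard Artifacts": a sketch contrasting the two upsampling routes the article discusses, assuming PyTorch. A stride-2 transposed convolution whose kernel size is not divisible by the stride overlaps unevenly across output positions, which is what produces the checkerboard pattern; nearest-neighbor resize followed by a plain convolution gives every position the same overlap. Shapes and channel counts are arbitrary.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 8, 8)  # (batch, channels, height, width)

# Transposed conv: kernel 3 with stride 2, the artifact-prone configuration.
deconv = nn.ConvTranspose2d(16, 8, kernel_size=3, stride=2,
                            padding=1, output_padding=1)

# Resize-then-conv: upsample first, then convolve, for uniform kernel overlap.
resize_conv = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(16, 8, kernel_size=3, padding=1),
)

print(deconv(x).shape)       # torch.Size([1, 8, 16, 16])
print(resize_conv(x).shape)  # torch.Size([1, 8, 16, 16])
```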
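For "How to Use t-SNE Effectively": a scikit-learn sweep over perplexity, the hyperparameter the article shows can change a plot completely. The toy clusters and perplexity values are illustrative; plotting the embeddings is left to the reader.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated 10-D Gaussian clusters of 50 points each.
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(8, 1, (50, 10))])

for perplexity in (2, 30, 90):
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=0).fit_transform(X)
    # Inspect each embedding (e.g. with matplotlib): very low perplexity tends
    # to fragment clusters, while very high perplexity can merge them.
    print(perplexity, emb.shape)  # (100, 2) for each setting
```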