
gahaalt OP t1_izo28vv wrote

Hello! Thanks for your feedback. Actually, Progress Table is flexible: you can display arbitrary data in table cells. It can be, for example, a string like f"{epoch}/{total_epochs}". You define what gets displayed :)
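
For example, roughly something along these lines (a simplified sketch; see the README for the exact API):

from progress_table import ProgressTable

table = ProgressTable()
table.add_column("epoch")
table.add_column("loss")

total_epochs = 5
for epoch in range(total_epochs):
    loss = 1.0 / (epoch + 1)  # dummy value standing in for a real metric
    table["epoch"] = f"{epoch + 1}/{total_epochs}"  # an arbitrary string in a cell
    table["loss"] = loss
    table.next_row()
table.close()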

To make it clearer, I created integrations.md, where you can see examples of integrating Progress Table with PyTorch and Keras.


gahaalt OP t1_izo0w5m wrote

Hi! Thanks for the feedback.

Actually, Progress Table is not tied to Keras or any other deep learning framework. You can use it to track any long-running process that produces data. The source code is not neural-network specific :)
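
For example, nothing stops you from tracking an ordinary data-processing loop, roughly like this (simplified sketch):

import time
from progress_table import ProgressTable

table = ProgressTable()
table.add_column("file")
table.add_column("lines")

for name in ["a.txt", "b.txt", "c.txt"]:  # any long-running process
    time.sleep(0.5)  # stand-in for the actual work
    table["file"] = name
    table["lines"] = 123  # whatever data your process produces
    table.next_row()
table.close()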

To help you get started, I've created a markdown file with a PyTorch integration example. Check it out: integrations.md. Let me know if it's clear!


gahaalt OP t1_ivns08k wrote

Thanks for the opinion. Please take a look at the article "What are Symbolic and Imperative APIs in TensorFlow 2.0?" by Josh Gordon, linked here. It seems natural to him to describe this kind of API as "Symbolic".

Basically, if you google "Symbolic API", you'll find the term is commonly used to describe this very thing.

Also, similar nomenclature is used in mxnet.


gahaalt OP t1_ivlr45z wrote

Let me copy the comparison here in case somebody doesn't feel like clicking the link. It might get long, though.

The toy ResNet defined with the help of Pytorch Symbolic:

from torch import nn
from pytorch_symbolic import Input, SymbolicModel

inputs = Input(shape=(3, 32, 32))
x = nn.Conv2d(inputs.C, 32, 3)(inputs)(nn.ReLU())
x = nn.Conv2d(x.C, 64, 3)(x)(nn.ReLU())
block_1_output = nn.MaxPool2d(3)(x)

x = nn.Conv2d(block_1_output.C, 64, 3, padding=1)(block_1_output)(nn.ReLU())
x = nn.Conv2d(x.C, 64, 3, padding=1)(x)(nn.ReLU())
block_2_output = x + block_1_output

x = nn.Conv2d(block_2_output.C, 64, 3, padding=1)(block_2_output)(nn.ReLU())
x = nn.Conv2d(x.C, 64, 3, padding=1)(x)(nn.ReLU())
block_3_output = x + block_2_output

x = nn.Conv2d(block_3_output.C, 64, 3)(block_3_output)(nn.ReLU())
x = nn.AvgPool2d(kernel_size=x.HW)(x)(nn.Flatten())
x = nn.Linear(x.features, 256)(x)(nn.ReLU())
x = nn.Dropout(0.5)(x)
outputs = nn.Linear(x.features, 10)(x)

model = SymbolicModel(inputs, outputs)

The same toy ResNet defined in "standard" PyTorch:

from torch import nn


class ToyResNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu = nn.ReLU()
        self.block1conv1 = nn.Conv2d(3, 32, 3)
        self.block1conv2 = nn.Conv2d(32, 64, 3)
        self.maxpool = nn.MaxPool2d(3)

        self.block2conv1 = nn.Conv2d(64, 64, 3, padding=1)
        self.block2conv2 = nn.Conv2d(64, 64, 3, padding=1)

        self.block3conv1 = nn.Conv2d(64, 64, 3, padding=1)
        self.block3conv2 = nn.Conv2d(64, 64, 3, padding=1)

        self.conv1 = nn.Conv2d(64, 64, 3)

        kernel_size = 7  # calculated by hand
        self.global_pool = nn.AvgPool2d(kernel_size)
        self.flatten = nn.Flatten()
        self.linear = nn.Linear(64, 256)
        self.dropout = nn.Dropout(0.5)
        self.classifier = nn.Linear(256, 10)

    def forward(self, x):
        x = self.relu(self.block1conv1(x))
        x = self.relu(self.block1conv2(x))
        block_1_output = self.maxpool(x)

        x = self.relu(self.block2conv1(block_1_output))
        x = self.relu(self.block2conv2(x))
        block_2_output = x + block_1_output

        x = self.relu(self.block3conv1(block_2_output))
        x = self.relu(self.block3conv2(x))
        block_3_output = x + block_2_output

        x = self.relu(self.conv1(block_3_output))
        x = self.global_pool(x)
        x = self.flatten(x)
        x = self.relu(self.linear(x))
        x = self.dropout(x)
        return self.classifier(x)


model = ToyResNet()

gahaalt OP t1_ivkmxtm wrote

Thanks for this question!

Pytorch Symbolic simplifies the definition of neural network models. It does indeed create a graph under the hood to do this; in this graph, every edge is an nn.Module.

torchdynamo looks great as a tool for optimizing existing models to perform better on the GPU by removing the CPU overhead entirely. Sometimes the improvement is really impressive.

Yes, torchdynamo does some kind of graph capture as well. It even rewrites Python bytecode to speed up execution. But in the end it is a wrapper around an nn.Module that makes it faster, and to speed up a model, you have to define it first.

So the two libraries are actually independent. You can use torchdynamo to speed up models created with Pytorch Symbolic. IMO it is a great combination.
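
For example, roughly like this (with PyTorch 2.x, where torchdynamo is exposed as torch.compile; with the older standalone torchdynamo package the equivalent would be torchdynamo.optimize):

import torch
from torch import nn
from pytorch_symbolic import Input, SymbolicModel

# Define a small model with Pytorch Symbolic
inputs = Input(shape=(3, 32, 32))
x = nn.Conv2d(inputs.C, 32, 3)(inputs)(nn.ReLU())
x = nn.Flatten()(x)
outputs = nn.Linear(x.features, 10)(x)
model = SymbolicModel(inputs, outputs)

# The result is a regular nn.Module, so torchdynamo can capture and optimize it
model = torch.compile(model)
predictions = model(torch.rand(16, 3, 32, 32))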


gahaalt OP t1_ivk3tvj wrote

Yeah! You have a lot of flexibility to do NAS here. You can create a huge graph of layers and sample a smaller path from it to create a SymbolicModel. The one non-standard thing you need to do to pull this off is to modify the ._children attribute of the symbolic data nodes when you want to rewire the connections in this graph.

I might add an example for a simple NAS soon.


gahaalt OP t1_ivjw8lx wrote

No, it has a different purpose than SymPy. As I understand it, SymPy is a library mainly for manipulating symbolic mathematical expressions.

Pytorch Symbolic uses symbolic variables to record (capture) the operations and later to replay them on arbitrary data. Under the hood, there's a graph with symbolic variables as nodes and transformations (e.g. layers) as edges.

Pytorch Symbolic can capture and replay arbitrary Python operations, but it cannot display them in notation as neat as SymPy does.
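
To make the record-and-replay idea concrete, here is a tiny example that only uses constructs from the ResNet snippet above (Input, SymbolicModel and the overloaded + operator):

import torch
from pytorch_symbolic import Input, SymbolicModel

# "Record": operations on the symbolic variable only build a graph
x = Input(shape=(4,))
y = x + x  # captured as a node, nothing is computed yet

# "Replay": the recorded operations run on real data
model = SymbolicModel(x, y)
print(model(torch.ones(2, 4)))  # a (2, 4) tensor full of twos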
