fix: remove legacy conv converter #3343
base: main
Conversation
run_aot_moria_2dunet.py
Outdated
# Assumed imports/setup for this snippet (the original script presumably provides these;
# the keyword arguments below match MONAI's UNet)
import torch
from monai.networks.nets import UNet

device = torch.device("cuda")

# Define the 2D U-Net model in FP16 inference mode
model = UNet(
    spatial_dims=2,
    in_channels=3,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
    act="relu",
    norm="batch",
    dropout=0.1,
).to(device).half().eval()
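For context, a minimal sketch of how such a script would typically exercise the converters via AOT compilation. The input shape and the torch.export / torch_tensorrt.dynamo.compile workflow are assumptions for illustration, not taken from this PR:

import torch_tensorrt

# Assumed example input for the 2D U-Net above
example_input = torch.randn(1, 3, 256, 256, device=device).half()

# Export the model ahead of time, then compile the exported program
# with the Torch-TensorRT dynamo backend
exported = torch.export.export(model, (example_input,))
trt_module = torch_tensorrt.dynamo.compile(
    exported,
    inputs=[example_input],
    enabled_precisions={torch.half},
)
output = trt_module(example_input)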
If you plan to add this script, examples/dynamo would be a better fit. Please refer to the existing examples if you want to add this model as part of the model zoo.
I added this script only to test deconv with output_padding. I’ve created a separate issue with the reproduction steps and code, so I’ll remove this code from this PR.
@chohk88 I think we already have conv and deconv converters (https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L2467), but neither implements the output_padding functionality.
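For reference, a minimal repro sketch of the rejected case (shapes are illustrative, not taken from the PR): a transposed convolution with non-zero output_padding, the pattern U-Net decoders use for exact 2x upsampling.

import torch

# stride=2 with output_padding=1 produces an exact 2x upsample
deconv = torch.nn.ConvTranspose2d(
    in_channels=16,
    out_channels=8,
    kernel_size=3,
    stride=2,
    padding=1,
    output_padding=1,  # the argument neither converter supports
).cuda().eval()

x = torch.randn(1, 16, 32, 32, device="cuda")
# H_out = (H_in - 1) * stride - 2 * padding + kernel_size + output_padding
#       = 31 * 2 - 2 + 3 + 1 = 64
print(deconv(x).shape)  # torch.Size([1, 8, 64, 64])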
Description
A RuntimeError occurs in many models when the output_padding argument for deconvolution is non-zero. The current CTX converter cannot handle this case, so the legacy FX converter is used instead. However, the legacy FX converter also raises a RuntimeError for deconvolutions with non-zero output_padding. If the legacy converter is removed, it results in graph breaks but avoids RuntimeErrors.

The ideal solution would be to implement a dedicated converter for deconvolutions with non-zero output_padding. However, TensorRT's Python API does not currently support output_padding as an input to tensorrt.IDeconvolutionLayer, making this implementation very challenging. It is recommended to create a separate issue to discuss and address this limitation.

Error message:
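As an illustration of the graph-break behavior described above, a hedged sketch (not part of this PR; the torch_executed_ops option is assumed from the public Torch-TensorRT dynamo API): forcing aten.convolution to execute in PyTorch produces the same eager fallback that removing the legacy converter yields, avoiding the converter-side RuntimeError at the cost of a graph break.

import torch
import torch_tensorrt

# Assumes model and example_input as defined in the review thread above.
# Note this is coarse: it forces every convolution to run in PyTorch,
# not only the deconvs with non-zero output_padding.
trt_module = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=[example_input],
    enabled_precisions={torch.half},
    torch_executed_ops={"torch.ops.aten.convolution.default"},
)
output = trt_module(example_input)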
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: