
fix: remove legacy conv converter #3343

Open
wants to merge 2 commits into main
Conversation

chohk88
Collaborator

@chohk88 chohk88 commented Jan 3, 2025

Description

A RuntimeError occurs in many models when the output_padding argument of a deconvolution is non-zero. The current CTX converter cannot handle this case, so conversion falls back to the legacy FX converter, which also raises a RuntimeError for deconvolutions with non-zero output_padding. Removing the legacy converter causes graph breaks for these deconvolutions but avoids the RuntimeError.
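A minimal sketch of the failing pattern, assuming only stock PyTorch (the MONAI UNet in the traceback below hits the same op):

```python
import torch
import torch.nn as nn

# A strided ConvTranspose2d with non-zero output_padding. Tracing this
# module yields aten.convolution.default with transposed=True and
# output_padding=[1, 1] -- the case neither converter handled.
deconv = nn.ConvTranspose2d(16, 16, kernel_size=3, stride=2,
                            padding=1, output_padding=1)
x = torch.randn(1, 16, 64, 64)
y = deconv(x)
print(tuple(y.shape))  # (1, 16, 128, 128)
```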

The ideal solution would be a dedicated converter for deconvolutions with non-zero output_padding. However, TensorRT's Python API does not currently expose output_padding on tensorrt.IDeconvolutionLayer, which makes such a converter very challenging to implement. It is recommended to open a separate issue to discuss and address this limitation.
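For context, output_padding exists because strided convolutions are many-to-one in spatial size: several input sizes produce the same output size, so the transposed (inverse) operation needs a tie-breaker. PyTorch's standard output-size formula for transposed convolutions, sketched below, shows why the argument cannot simply be dropped when lowering to a TensorRT deconvolution:

```python
def conv_transpose_out_size(n_in, kernel, stride, padding,
                            output_padding, dilation=1):
    # PyTorch's output-size formula for ConvTranspose layers
    return ((n_in - 1) * stride - 2 * padding
            + dilation * (kernel - 1) + output_padding + 1)

# With stride > 1, output_padding selects which of the valid inverse
# sizes the deconvolution produces.
print(conv_transpose_out_size(64, 3, 2, 1, 0))  # 127
print(conv_transpose_out_size(64, 3, 2, 1, 1))  # 128
```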

Error message:

  File "/usr/local/lib/python3.12/dist-packages/model_navigator/commands/execution_context.py", line 156, in _execute_function
    fire.Fire(func, unwrapped_args)
  File "/usr/local/lib/python3.12/dist-packages/fire/core.py", line 143, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/fire/core.py", line 477, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
                                ^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/fire/core.py", line 693, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/model_navigator/commands/convert/converters/ep2torchtrt.py", line 142, in convert
    with TimingCacheManager(model_name=model_name, cache_path=timing_cache_dir) as timing_cache:
  File "/usr/local/lib/python3.12/dist-packages/model_navigator/frameworks/tensorrt/timing_tactics.py", line 267, in __exit__
    raise exc_value
  File "/usr/local/lib/python3.12/dist-packages/model_navigator/commands/convert/converters/ep2torchtrt.py", line 149, in convert
    tr_model_compiled = torch_tensorrt.dynamo.compile(
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py", line 288, in compile
    trt_gm = compile_module(
             ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/_compiler.py", line 462, in compile_module
    trt_module = convert_module(
                 ^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 142, in convert_module
    interpreter_result = interpret_module_to_result(
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_conversion.py", line 121, in interpret_module_to_result
    interpreter_result = interpreter.run()
                         ^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 610, in run
    self._construct_trt_network_def()
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 347, in _construct_trt_network_def
    super().run()
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/interpreter.py", line 146, in run
    self.env[node] = self.run_node(node)
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 676, in run_node
    trt_node: torch.fx.Node = super().run_node(n)
                              ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch/fx/interpreter.py", line 203, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/dynamo/conversion/_TRTInterpreter.py", line 783, in call_function
    return converter(self.ctx.net, target, args, kwargs, self._cur_node_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/dist-packages/torch_tensorrt/fx/converters/aten_ops_converters.py", line 125, in aten_ops_convolution
    raise RuntimeError(f"Target {target} does not support `transposed=True` ")
RuntimeError: Target aten.convolution.default does not support `transposed=True` 

While executing %convolution_15 : [num_users=1] = call_function[target=torch.ops.aten.convolution.default](args = (%cat, %model_1_submodule_1_submodule_1_submodule_2_0_conv_weight, %model_1_submodule_1_submodule_1_submodule_2_0_conv_bias, [2, 2], [1, 1], [1, 1], True, [1, 1], 1), kwargs = {_itensor_to_tensor_meta: {<tensorrt.tensorrt.ITensor object at 0x7f4269bc4770>: ((1, 1, 128, 128), torch.float32, False, (16384, 16384, 128, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269695370>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bc9b30>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bbb570>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bb8d30>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bbb630>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bb9bb0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bb94b0>: ((16,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bb8130>: ((16,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a5a2b0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a5a370>: None, <tensorrt.tensorrt.ITensor object at 0x7f4269a58130>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor 
object at 0x7f4269a58030>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a5a2f0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdddf0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdd7b0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdf130>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdc0b0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bba730>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88130>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a5a4f0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a9b0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a3b0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a2f0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a370>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a170>: ((16,), torch.float32, False, (1,), 
torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89fb0>: ((16,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89d30>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89df0>: None, <tensorrt.tensorrt.ITensor object at 0x7f4269a89bb0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a881f0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89bf0>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89930>: ((1, 16, 1, 1), torch.float32, False, (16, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a898f0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a899f0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89830>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a894f0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8aaf0>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdf930>: ((1, 16, 64, 64), torch.float32, False, (65536, 4096, 64, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a5a0b0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), 
<tensorrt.tensorrt.ITensor object at 0x7f4269a591f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89430>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88e30>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a893b0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269b52ef0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88d70>: ((32,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88a30>: ((32,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a887f0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8ac70>: None, <tensorrt.tensorrt.ITensor object at 0x7f4269a89ff0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b5b0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bb9170>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b0b0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a596f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8bcf0>: ((1, 32, 32, 32), 
torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a58170>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a870>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b7f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f426962b5f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a8f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b3b0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b570>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a580b0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b1b0>: ((32,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a882f0>: ((32,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8ff0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b90b0>: None, <tensorrt.tensorrt.ITensor object at 0x7f4269a89230>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bb8fb0>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), 
torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9370>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bc6b30>: ((1, 32, 1, 1), torch.float32, False, (32, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9170>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b99f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b92f0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9c70>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a59d30>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a898b0>: ((1, 32, 32, 32), torch.float32, False, (32768, 1024, 32, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bba5b0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42688390b0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269694130>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8830>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b86f0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), 
<tensorrt.tensorrt.ITensor object at 0x7f4269a883b0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8730>: ((64,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8570>: ((64,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8070>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a5a230>: None, <tensorrt.tensorrt.ITensor object at 0x7f42696bae70>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb0f0>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b170>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb030>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8ad30>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8270>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696babf0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a470>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a7b0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a6b0>: ((1, 64, 16, 16), 
torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b3f0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88c70>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88b30>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696ba570>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696ba430>: ((64,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b89b0>: ((64,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bbcb0>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bc4c30>: None, <tensorrt.tensorrt.ITensor object at 0x7f42696bb6f0>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb530>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b8f0>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb670>: ((1, 64, 1, 1), torch.float32, False, (64, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb5f0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a883f0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), 
<tensorrt.tensorrt.ITensor object at 0x7f4269a8b370>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696baab0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb4b0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b85f0>: ((1, 64, 16, 16), torch.float32, False, (16384, 256, 16, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a896f0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bbb6b0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8ab30>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bbe70>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696ba2f0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a0f0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8ef0>: ((128,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb830>: ((128,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b81b0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a830>: None, <tensorrt.tensorrt.ITensor object at 
0x7f4269bbbeb0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a430>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b85b0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdcf30>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bbd30>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89a30>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8aa70>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9ef0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89af0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8bab0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a130>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8b8b0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b82f0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9330>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, 
{}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8cb0>: ((128,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9a70>: ((128,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a897b0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb4f0>: None, <tensorrt.tensorrt.ITensor object at 0x7f42696ba4b0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8c30>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269cdcfb0>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bbc30>: ((1, 128, 1, 1), torch.float32, False, (128, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bad70>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb130>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8230>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8a330>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bab30>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696149f0>: ((1, 128, 8, 8), torch.float32, False, (8192, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8bbf0>: ((1, 256, 8, 8), 
torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9e30>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696145f0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269615ab0>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696157f0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb370>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bcb530>: ((256,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb2f0>: ((256,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9670>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9030>: None, <tensorrt.tensorrt.ITensor object at 0x7f4269a5a470>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb5b0>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696babb0>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696ba970>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269615bb0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), 
<tensorrt.tensorrt.ITensor object at 0x7f4269a888f0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a89470>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696168b0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b90f0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a8abb0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb1b0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269a88c30>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269617b30>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696ba5f0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8cf0>: ((256,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269615030>: ((256,), torch.float32, False, (1,), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269617bb0>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b9430>: None, <tensorrt.tensorrt.ITensor object at 0x7f42696147f0>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb270>: ((1, 256, 
1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bbb70>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696165f0>: ((1, 256, 1, 1), torch.float32, False, (256, 1, 1, 1), torch.channels_last, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696160b0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8430>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bcb6f0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696bb7b0>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269bc6770>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f4269615e30>: ((1, 256, 8, 8), torch.float32, False, (16384, 64, 8, 1), torch.contiguous_format, False, {}), <tensorrt.tensorrt.ITensor object at 0x7f42696b8a70>: ((1, 384, 8, 8), torch.float32, False, (24576, 64, 8, 1), torch.contiguous_format, False, {})}})
Original traceback:
  File "/usr/local/lib/python3.12/dist-packages/monai/networks/nets/unet.py", line 297, in forward
    x = self.model(x)
  File "/usr/local/lib/python3.12/dist-packages/monai/networks/layers/simplelayers.py", line 129, in forward
    y = self.submodule(x)
  File "/usr/local/lib/python3.12/dist-packages/monai/networks/layers/simplelayers.py", line 129, in forward
    y = self.submodule(x)
  File "/usr/local/lib/python3.12/dist-packages/monai/networks/layers/simplelayers.py", line 129, in forward
    y = self.submodule(x)

Fixes # (issue)

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that the relevant reviewers are notified

@chohk88 chohk88 requested review from peri044 and zewenli98 January 3, 2025 11:17
@chohk88 chohk88 self-assigned this Jan 3, 2025
@github-actions github-actions bot added component: conversion Issues re: Conversion stage component: api [Python] Issues re: Python API component: fx component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Jan 3, 2025

py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py (outdated; comments resolved)
py/torch_tensorrt/fx/converters/aten_ops_converters.py (outdated; comments resolved)
Comment on lines 7 to 18
# Define the 2D U-Net model
# (imports and device shown for completeness; they are defined earlier in the original script)
import torch
from monai.networks.nets import UNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = UNet(
    spatial_dims=2,
    in_channels=3,
    out_channels=2,
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
    num_res_units=2,
    act="relu",
    norm="batch",
    dropout=0.1,
).to(device).half().eval()
Collaborator

If you plan to add this script, examples/dynamo would be a better fit. Please refer to the existing examples if you want to add this model as part of the model zoo.

Collaborator Author

I added this script only to test deconv with output_padding. I’ve created a separate issue with the reproduction steps and code, so I’ll remove this code from this PR.

@zewenli98
Collaborator

@chohk88 I think we already have conv and deconv converters (https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py#L2467), but, as you mentioned, neither implements output_padding. Is it possible to append a constant_pad after the current conv/deconv implementation?
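One way to see what output_padding has to produce, and why a plain constant pad after the deconvolution only matches in some configurations, is to sketch the transposed convolution directly. The sketch below (1-D, NumPy, all names hypothetical, not the converter's actual code) uses the standard insert-zeros-then-correlate formulation: output_padding simply extends the right-hand side of the intermediate padding. With padding=0 the extra output element is zero, so appending a constant pad would match; with padding > 0 the kernel can still reach the last input element, so the extra element can be nonzero and a plain zero pad would not reproduce it.

```python
import numpy as np

def conv_transpose1d(x, w, stride=1, padding=0, output_padding=0):
    """1-D transposed convolution (reference sketch, hypothetical names).

    Output length matches PyTorch's formula (dilation=1):
        (len(x) - 1) * stride - 2 * padding + len(w) + output_padding
    """
    k = len(w)
    # Insert (stride - 1) zeros between input elements.
    up = np.zeros((len(x) - 1) * stride + 1, dtype=float)
    up[::stride] = x
    # output_padding only extends the right-hand pad, lengthening the output.
    left = k - 1 - padding
    right = k - 1 - padding + output_padding
    padded = np.pad(up, (left, right))
    # Transposed conv is a full correlation with the flipped kernel.
    wf = w[::-1]
    return np.array([padded[i:i + k] @ wf for i in range(len(padded) - k + 1)])

x = np.array([1.0, 2.0])
w = np.array([1.0, 1.0, 1.0])

# padding=0: the element added by output_padding is zero, so it equals
# running without output_padding and zero-padding the result afterwards.
print(conv_transpose1d(x, w, stride=2, padding=0, output_padding=1))
# [1. 1. 3. 2. 2. 0.]

# padding=1: the extra element is 2.0, not zero, so a constant pad appended
# after the deconvolution would NOT reproduce it.
print(conv_transpose1d(x, w, stride=2, padding=1, output_padding=1))
# [1. 3. 2. 2.]
```

So the constant_pad approach looks workable for the padding=0 case, but would need a guard (or a different construction) whenever padding is nonzero.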

Labels
cla signed, component: api [Python], component: conversion, component: dynamo, component: fx