
GPTQ - Move quantized_model to CUDA device #1535

Open
wants to merge 3 commits into main
Conversation

samuel100
Contributor

Describe your changes

When using GPTQ, the quantized_model must be moved to the CUDA device to avoid the "Expected all tensors to be on the same device" error in auto-gptq. See AutoGPTQ/AutoGPTQ#729.

Checklist before requesting a review

  • Add unit tests for this change.
  • Make sure all tests can pass.
  • Update documents if necessary.
  • Lint and apply fixes to your code by running lintrunner -a
  • Is this a user-facing change? If yes, give a description of this change to be included in the release notes.
  • Does this PR include example changes? If yes, please remember to update the example documentation in a follow-up PR.

@xiaoyu-work
Contributor

According to the discussion thread, it seems this was already fixed by AutoGPTQ/AutoGPTQ#607?
