
Add Fedora CUDA Guide for Development in Toolbox Environment #11135

Open · wants to merge 2 commits into master

Conversation

@teihome commented Jan 8, 2025

This pull request introduces a comprehensive guide for setting up the NVIDIA CUDA toolkit on Fedora within a toolbox environment. Given that NVIDIA does not provide CUDA packages for in-maintenance versions of Fedora, this guide walks users through the steps required to install CUDA using the Fedora 39 repository.

Key features of the guide include:

  • Detailed steps for creating and entering a Fedora 39 toolbox environment.
  • Instructions for adding the NVIDIA CUDA repository and resolving potential package conflicts manually.
  • Guidance on installing essential development tools, NVIDIA driver libraries, and the CUDA meta-package.
  • Environment configuration to ensure CUDA binaries are accessible.
  • Verification steps to confirm successful installation.
  • Troubleshooting tips and additional notes on using CUDA in a Fedora toolbox.

This guide aims to simplify the process for users facing challenges with CUDA installation on Fedora. The instructions have been tested and should enable users to develop CUDA applications without affecting their host system.
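For reviewers, here is a condensed sketch of the flow the bullet points above describe. The repository URL, package names, and paths are assumptions based on this summary and NVIDIA's usual repo layout, not copied verbatim from the guide:

```bash
# Create and enter a Fedora 39 toolbox; the host system is left untouched.
toolbox create --distro fedora --release 39 cuda-dev
toolbox enter cuda-dev

# Add NVIDIA's CUDA repository for Fedora 39 (URL assumed from NVIDIA's repo layout).
sudo dnf config-manager --add-repo \
    https://developer.download.nvidia.com/compute/cuda/repos/fedora39/x86_64/cuda-fedora39.repo

# Install development tools and the CUDA meta-package; the guide also covers
# the matching NVIDIA driver libraries and any manual conflict resolution.
sudo dnf install -y @development-tools
sudo dnf install -y cuda

# Make the CUDA binaries and libraries visible in the shell.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

# Verify the installation.
nvcc --version
```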

Please review the guide and provide feedback. If everything looks good, I look forward to having it merged into this project's documentation.

Thank you!

Since NVIDIA does not release CUDA for in-maintenance versions of Fedora, the process of setting up the CUDA toolkit on Fedora has become quite involved. This guide should help mere mortals install CUDA for development in a Fedora 39 toolbox environment, without affecting the host system.
github-actions bot added the documentation label (Improvements or additions to documentation) on Jan 8, 2025
@ngxson (Collaborator) commented Jan 8, 2025

I had a quick look at your guide. I was using Fedora Silverblue until 39 (before switching to Mac). I have some questions:

  • AFAIK toolbox is a containerized env, so users still need to install the kernel driver on the host machine, right?
  • While toolbox is available for both mutable and immutable Fedora, this guide has only been tested on the mutable variant, right?

And btw, I used distrobox before. The good thing is that I could use Ubuntu as the base container image, so installing the CUDA libraries via apt is simple. I'm wondering if we can do the same with toolbox. Just remember that this installs user-space libraries, so in theory it shouldn't matter which base distro image is used.
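For reference, the distrobox/Ubuntu route described here looks roughly like this; the keyring URL and package name are taken from NVIDIA's apt instructions and may need checking against the current release:

```bash
# Create and enter an Ubuntu 22.04 container with distrobox.
distrobox create --name cuda-ubuntu --image ubuntu:22.04
distrobox enter cuda-ubuntu

# Add NVIDIA's CUDA apt repository via the cuda-keyring package.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update

# Install only the user-space toolkit; the kernel driver stays on the host.
sudo apt-get install -y cuda-toolkit
```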

@ericcurtin (Collaborator) left a comment

Typo

docs/cuda-feoda.md -> docs/cuda-fedora.md
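If the file is already committed under the misspelled name, a rename in this PR could look like:

```bash
git mv docs/cuda-feoda.md docs/cuda-fedora.md
git commit -m "docs: fix filename typo (cuda-feoda.md -> cuda-fedora.md)"
```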

@ericcurtin (Collaborator) commented

I had a quick look at your guide. I was using Fedora Silverblue until 39 (before switching to Mac). I have some questions:

  • AFAIK toolbox is a containerized env, so users still need to install the kernel driver on the host machine, right?

Yes, that's basically the one exception: kernel-space drivers. Everything else can run in a container.

  • While toolbox is available for both mutable and immutable Fedora, this guide has only been tested on the mutable variant, right?

And btw, I used distrobox before. The good thing is that I could use Ubuntu as the base container image, so installing the CUDA libraries via apt is simple. I'm wondering if we can do the same with toolbox. Just remember that this installs user-space libraries, so in theory it shouldn't matter which base distro image is used.

ramalama, toolbox, distrobox, podman, docker: they all do basically the same thing, run an OCI container.

I think ramalama is furthest ahead in terms of accelerated GPU/AI enablement, though.
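A quick way to see that split in practice, assuming the NVIDIA Container Toolkit is installed on the host (the nvidia-ctk command and CDI device syntax below follow its documentation and may differ between versions):

```bash
# On the host: the kernel driver is the only piece that has to live here.
nvidia-smi

# Generate a CDI spec so OCI runtimes can expose the GPU to containers.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# Any OCI runtime can then request the device; podman shown here.
podman run --rm --device nvidia.com/gpu=all fedora:39 nvidia-smi
```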

@ericcurtin (Collaborator) commented

We should consider using a free AI code review bot like:

https://sourcery.ai/

We've been using it with ramalama to some success. It's nice because it takes some of the effort out of code reviews; typos are the kind of thing it's excellent at. It's getting better all the time, and some of the things it has spotted have amazed me. Most of the time it doesn't offer any feedback, but when it does, it's a real time-saver.

@ericcurtin (Collaborator) commented

I had a quick look at your guide. I was using Fedora Silverblue until 39 (before switching to Mac). I have some questions:

If you install podman on macOS:

https://podman.io/docs/installation

it installs something called podman machine, which is basically a Linux VM running Fedora Silverblue (that's the OS podman uses).

I use podman machine as a generic Linux VM on macOS. And thanks to @slp, you can also run accelerated AI workloads on Apple Silicon via the krunkit hypervisor, using the kompute backend of llama.cpp.
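For anyone following along on macOS, the podman machine flow is roughly the following; the libkrun provider setting is an assumption about newer podman releases and may differ by version:

```bash
# Optional: use the libkrun provider so krunkit can expose GPU acceleration on Apple Silicon.
export CONTAINERS_MACHINE_PROVIDER=libkrun

# Create and start the Linux VM that backs podman on macOS.
podman machine init
podman machine start

# Use it as a generic Linux box, or run containers as usual.
podman machine ssh
podman run --rm fedora:39 cat /etc/os-release
```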

