Setting Up Trellis on Windows #3
I've spent a few hours trying to make the project run on WSL, and it works.
Tagging @JeffreyXiang to confirm that it runs in WSL with a bit of fiddling. |
Another solution to run the inference code on Windows.
|
Another example to run the inference code on Windows. #10 |
Can anyone share a WHL for diffoctreerast? |
Thanks! That works. Any chance you have a WHL for mip-splatting.git too? It has the same build errors here (it says torch is missing even though torch is installed). The rest of the packages install fine. |
@SoftologyPro There are wheels built from source here: https://github.com/iiiytn1k/sd-webui-some-stuff/releases/tag/diffoctreerast |
I successfully got Trellis working on Windows 10 without requiring WSL, using Anaconda and CUDA compilation tools release 12.4. The following command sequence replicates the setup:
conda create -n trellis python=3.10 -y
conda activate trellis
conda install pytorch==2.5.0 torchvision==0.20.0 pytorch-cuda=12.4 -c pytorch -c nvidia -y
pip install pillow imageio imageio-ffmpeg tqdm easydict opencv-python-headless scipy ninja rembg onnxruntime trimesh xatlas pyvista pymeshfix igraph transformers
pip install git+https://github.com/EasternJournalist/utils3d.git@9a4eb15e4021b67b12c460c7057d642626897ec8
pip install xformers==0.0.28.post2 --index-url https://download.pytorch.org/whl/cu124
# this will take 2hr+ to compile
pip install flash-attn
New-Item -ItemType Directory -Force -Path C:\tmp\extensions
git clone --recurse-submodules https://github.com/JeffreyXiang/diffoctreerast.git C:\tmp\extensions\diffoctreerast
pip install C:\tmp\extensions\diffoctreerast
pip install spconv-cu120
git clone https://github.com/autonomousvision/mip-splatting.git C:\tmp\extensions\mip-splatting
pip install C:\tmp\extensions\mip-splatting\submodules\diff-gaussian-rasterization\
pip install kaolin -f https://nvidia-kaolin.s3.us-east-2.amazonaws.com/torch-2.4.0_cu121.html
git clone https://github.com/NVlabs/nvdiffrast.git C:\tmp\extensions\nvdiffrast
pip install C:\tmp\extensions\nvdiffrast |
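As a quick sanity check after the installs above, a minimal Python sketch run inside the activated trellis environment (module names follow the packages installed above; spconv.pytorch is the usual import path for spconv):
import torch
print(torch.__version__, torch.version.cuda)  # should show a 2.5.0 build against CUDA 12.4
print(torch.cuda.is_available())              # must print True before running TRELLIS
# These imports fail immediately if the compiled extensions did not build correctly
import diffoctreerast
import diff_gaussian_rasterization
import nvdiffrast.torch
import spconv.pytorch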
@bezo97 I managed to run the app.py demo with the following commands on WSL2 version 2.3.26 running Ubuntu 24.04.1 with Miniconda installed:
git clone --recurse-submodules https://github.com/microsoft/TRELLIS.git
cd TRELLIS
conda create -n trellis python=3.10
conda activate trellis
conda install cuda -c nvidia/label/cuda-11.8.0
conda install pytorch=2.4.0 torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
conda install gxx_linux-64
./setup.sh --basic --xformers --flash-attn --diffoctreerast --spconv --vox2seq --mipgaussian --kaolin --nvdiffrast --demo
PyTorch was pinned to 2.4.0 since it was the maximum version supported by this repo's setup.sh at the time of writing. I added the "conda install gxx_linux-64" command because at some point during installation it failed to find the C++ compiler in the Conda environment. It installs GCC 11 confined to the Conda environment, so perhaps that would solve the issues you had with the GCC compiler version without needing to change the default system-wide version. I did not run into the GLIBCXX error. |
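Once setup.sh finishes, a minimal smoke test of the pipeline looks roughly like this (a sketch based on the repo's example at the time of writing; the pipeline class name, model ID, and output keys are assumptions and may differ):
from PIL import Image
from trellis.pipelines import TrellisImageTo3DPipeline

pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
pipeline.cuda()
image = Image.open("assets/example_image/T.png")  # any sample image shipped with the repo
outputs = pipeline.run(image, seed=1)             # dict of gaussian / radiance_field / mesh outputs
print(list(outputs.keys()))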
Here are my install.bat and run.bat for installing on Windows 11, with no WSL or Conda required. It does need CUDA 12 and will not work with the IoT LTSC version of Windows 11. Create an empty directory, save the 2 batch files into it, open a command line in that directory, and run install.bat then run.bat.
The only issue is that it seems to work only on a 4090 GPU. On a 3090 it gives an out-of-memory error, which is not correct, as there is plenty of VRAM and physical RAM free. It installs fine and the UI starts fine, but when you click Generate it will OOM if you do not have a 4090. I have never seen any AI/ML script that works only on a 4090 and not a 3090. If anyone has any ideas as to why this happens, please let me know. Maybe the WHLs have some 4090-only compatible code?
install.bat
run.bat
|
ERROR: flash_attn-2.7.1.post1+cu124torch2.5.1cxx11abiFALSE-cp310-cp310-win_amd64.whl is not a supported wheel on this platform. Windows 11 IoT LTSC. |
OK, I added a note saying it will not work with IoT LTSC. |
Finally the only working 1 click solution, Microsoft has something to learn from this guy: The end user is not interested in installing a ton of useless software or even a whole OS to run something, nowadays you either make a 1 click solution or go to the bottom with all your 1% of people who ran it under windows without problems. I have no idea how this took only 2 days of 1 person's free time when the entire Microsoft staff couldn't do it. No hate, just a statement of facts. |
Has anyone been able to get spconv-cu120 working on Windows? I got past everything else but keep getting hung up there.
|
I think this might have something to do with the Python version; I made a venv with Python 3.13 and got the same error. Then I made the venv again with an earlier Python version (3.11.8), which allowed spconv to install (pip install spconv-cu120), but diffoctreerast, diff_gaussian_rasterization and vox2seq complain in a similar way (...is not a supported wheel on this platform.) |
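As background on the "not a supported wheel" error: a wheel tagged cp310 only installs on Python 3.10, so a 3.11 or 3.13 venv rejects it. A quick way to see which tags your interpreter accepts (a minimal sketch; packaging is a small extra dependency, pip install packaging if it is missing):
import sys
from packaging.tags import sys_tags

print(sys.version)                # interpreter version, e.g. 3.11.8
for tag in list(sys_tags())[:5]:  # the first few (most specific) accepted wheel tags
    print(tag)                    # e.g. cp311-cp311-win_amd64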
For those on Windows with working Trellis, what GPU do you have? I get out-of-memory issues from diff_gaussian_rasterization on my 3090. Has anyone experienced the same error?
diff_gaussian_rasterization\__init__.py", line 94, in forward
num_rendered, color, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: CUDA out of memory. Tried to allocate 780.06 GiB. GPU 0 has a total capacity of 24.00 GiB of which 16.84 GiB is free. Of the allocated memory 5.52 GiB is allocated by PyTorch, and 309.87 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
|
Solved it by manually reinstalling, which makes it work on a 3090 as well:
set TORCH_CUDA_ARCH_LIST=6.0 7.0 7.5 8.0 8.6+PTX
git clone https://github.com/autonomousvision/mip-splatting.git /tmp/extensions/mip-splatting
pip install /tmp/extensions/mip-splatting/submodules/diff-gaussian-rasterization/
|
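For background: TORCH_CUDA_ARCH_LIST tells the extension build which GPU architectures to compile kernels for, and 8.6 is the RTX 3090's compute capability (the 4090 is 8.9). A quick check of what your own card reports (a minimal sketch):
import torch

print(torch.cuda.get_device_name(0))        # e.g. NVIDIA GeForce RTX 3090
print(torch.cuda.get_device_capability(0))  # e.g. (8, 6) -> include 8.6 in TORCH_CUDA_ARCH_LIST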
Do you mind me asking what your average generation times are with the 3090? Thinking about picking one up. |
Sure @wes-kay, 10s for the Gaussians and 20s for GLB creation from the Gaussians.
|
from cumm.core_cc import tensorview_bind
Does anyone know how to fix this problem? |
The single-click install didn't work. Here's what did:
System: Windows 10
In Windows Explorer, make a folder for where you want Trellis to live. I called mine "Trellis". Once in the folder, download the repo from git using the following command:
Set up a Conda environment. I have MiniConda installed; Anaconda will work as well. In your Command Prompt, paste in these commands one after the other:
My system had trouble seeing the CUDA toolkit, so I had to force the path.
From here, you're following the process outlined by Sicxu. Copy each command below and paste it into your command prompt:
pip install torch==2.5.1 torchvision --index-url=https://download.pytorch.org/whl/cu124
git clone https://github.com/NVlabs/nvdiffrast.git ./tmp/extensions/nvdiffrast
git clone --recurse-submodules https://github.com/JeffreyXiang/diffoctreerast.git ./tmp/extensions/diffoctreerast
git clone https://github.com/autonomousvision/mip-splatting.git ./tmp/extensions/mip-splatting
xcopy .\extensions\vox2seq .\tmp\extensions\vox2seq /E /I
pip install spconv-cu120
Each of these should install without error. If you get an error, copy it into ChatGPT and see if you can resolve it.
Copy the following text and paste it into a new text file:
@echo off
Save the text file and rename it to RunTrellis.bat. From here, you should be able to double-click the bat file. On first run, it will download the necessary models. You may get a message about xformers; I ignored this and everything still ran fine. You'll end up with a web address. Copy that and paste it into your browser. You should get the Gradio app interface and be able to drag some of the sample images in to test it.
EDIT 1: I tried that but it errored out. I ran it through ChatGPT and it told me that this command will not run in a Windows command prompt and that I should use xcopy instead. If you can't get xcopy to work, you can manually copy this extension yourself. See PeterSmoofwah's comment below.
EDIT 2: Open the app.py file in Notepad or Notepad++ and scroll to the bottom. You'll see a hashtag comment indicating that this is where it will launch the Gradio app. Delete the code after the "# Launch the Gradio app" comment, then replace it with the launch block sketched below.
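A minimal sketch of what that replacement block could look like, assuming the Gradio interface object is named demo and that the pipeline class and model ID match the repo's app.py at the time of writing; inbrowser=True is what makes Gradio open a browser tab automatically:
# Launch the Gradio app
if __name__ == "__main__":
    # Load the image-to-3D pipeline once before handing control to Gradio
    pipeline = TrellisImageTo3DPipeline.from_pretrained("JeffreyXiang/TRELLIS-image-large")
    pipeline.cuda()
    # inbrowser=True opens the UI in a new browser tab instead of only printing the URL
    demo.launch(inbrowser=True)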
Save the app.py file. The next time you launch the Run_Trellis.bat file, it will open the Gradio app automatically in a new browser tab. |
This was the fix for me after trying literally every other thing on the internet. xcopy didn't work for me either; I had to manually drag the extension to the tmp folder, and then the install command worked. |
worked perfectly at the first shot! Thanks so much! |
Thanks a lot @realstevewarner! With your detailed instructions I managed to install everything and make it work with the Gradio UI on my Windows 10 workstation.
I did not get it to work the first time, though. And like @PeterSmoofwah above, the xcopy command did not work; like him, I did the copy procedure manually that first time. When that first attempt did not result in a working piece of software at the end, the error message I got was about a missing module that was called
The second time, after deleting everything and starting from scratch, it did work. The key that made it work this time was to change the folder I was in. Basically, I went down into the new Trellis subfolder that is created when I first downloaded and installed trellis.git, which is pretty much the first step of the install procedure. By diving into the Trellis subfolder before installing everything else, all the links will work properly, including the xcopy command that I had to fix manually the first time.
TLDR: after completing the command |
Thanks for the instructions. Before I begin with this, I just wanted to double-check: all the command pasting and such is done with the command prompt opened from within the Trellis location on my PC? |
Correct. You're going to create a folder for this to live in. Then inside that folder, run a command prompt. In the command prompt, you're going to do a Git pull for the entire project, which is going to create a new folder inside the one you created. That folder will contain the contents of the Trellis code.
You're also going to create an environment using Conda. That environment will contain all of the dependent resources, basically everything you'll be installing after the initial Git pull. The Conda environment ensures that the files you install for this code to run do not impact any other Python-based projects you're running. The location of your Conda environment will depend on which version of Conda you're running. I'm using the free MiniConda tool. It stores its environments under your C:/ drive user name, in my case C:\Users\steve\miniconda3\envs.
Keep in mind that the dependent resources take up a good chunk of space. My Trellis conda environment takes up nearly 10GB of space. My actual Trellis install folder (the one you create at the beginning) takes up 7GB. So make sure you've got ample space on your drive before proceeding. |
The models have UVs. It's an atlas map, so not something you're going to easily edit in Photoshop. You can use Materialize 1.78 or the latest version of Stable Projectorz to remove embedded lighting in the texture. The mesh isn't detailed enough (even with the absolute minimum of simplification allowed by Trellis) to need to pull a normal map unless you're planning to do a seriously low-poly version for LOD4 or LOD5 game assets.
Something worth noting is that the meshes come in without merged verts. If you're going to do any editing on this, it's best to select everything and merge/fuse overlapping verts.
In Blender, you can use the Quad Remesh addon to quickly create quad topology from the triangle mesh. From there, you can create a usable UV map and bake the texture from the native triangle mesh onto the quad mesh. It's a few extra steps, but shouldn't take more than 10 minutes once you're familiar with the process. |
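If you would rather script the vert merge than do it by hand in Blender, trimesh (already in the pip install list above) can do it; a minimal sketch, assuming the exported file is named model.glb (a hypothetical filename):
import trimesh

# force="mesh" flattens the GLB scene graph into a single mesh
mesh = trimesh.load("model.glb", force="mesh")
print(len(mesh.vertices), "verts before merge")
mesh.merge_vertices()  # fuse duplicate vertices shared across triangles
print(len(mesh.vertices), "verts after merge")
mesh.export("model_merged.glb")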
After sampling, I get the following error in the console:
I managed to get this installed and running on my workstation computer without issue yesterday. I don't recall having to explicitly do anything like this in order to get it working. Now I'm home and trying to get it to run on my own machine, and I'm stuck at this point, unfortunately. Any help would be greatly appreciated.
EDIT: I'm an idiot. I didn't change the CUDA version in the .bat file, so it was pointing to something that wasn't on my machine (12.6 instead of 12.4). |
Try running any errors through ChatGPT. It's how I was able to resolve every error that popped up along the way. Here's what ChatGPT said about your issue and how to (potentially) fix it. In my case, when I got errors, I fed them into ChatGPT, tried the fix, and in several cases got new errors which sent me back to ChatGPT to get more answers. This was often the case when trying to get the One Click Install version working, which never ultimately worked. Hopefully the answer above from ChatGPT helps. |
Had to change the line to xcopy .\extensions\vox2seq .\tmp\extensions\vox2seq /E /I; otherwise, all the packages installed successfully with these commands. |
@DiamondGlassDrill I have the same problem on my 3090. Where should I put this code? In the installation bat? Thanks. |
Has anyone built these WHLs for Python 3.11? I'm trying to install it into Blender, and it runs 3.11.
|
Shoot me an email. |
This worked for me. I got a lot of errors with Command Prompt, but it was flawless with Miniconda on the first try. I got it running on my humble 3060 12GB. About 30 to 60 seconds per generation. Fun stuff. |
Are you adding the path in your startup.bat file? For me, even though I have CUDA 12.4 installed, it would not run unless I specified the path in my startup file. Here's what I have in my bat:
call conda activate trellis |
The download always times out; I used conda install pytorch==2.5.0 torchvision==0.20.0 pytorch-cuda=12.4 -c pytorch -c nvidia -y instead of pip install torch==2.5.1 torchvision --index-url=https://download.pytorch.org/whl/cu124, and it can run. |
Thank you! Copied your bat file and it works, and now I can render models! Thanks a lot once again! |
How do you guys compile without Windows Kits and Visual Studio? I see no one talking about it. |
I fully understand this is not an issue; I'm just making a thread in the event that someone has a working setup with Windows. The current dependencies are failing on the Microsoft C++ runtime on WSL and in all the other ways I've tried to set Trellis up.
This is above my pay grade, so if anyone wants to update with their working steps, I would appreciate it.