
The Blazor InputFile Component should handle chunking when the file is uploaded #84685

Closed
Aeroverra opened this issue Apr 8, 2023 · 14 comments
Labels
arch-wasm (WebAssembly architecture), area-System.Net.Http
Milestone

Comments

@Aeroverra

Aeroverra commented Apr 8, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Is your feature request related to a problem? Please describe the problem.

The feature request is related to the limitations and problems encountered when uploading large files using Blazor WebAssembly and the InputFile component. Due to the constraints of the underlying browser implementation and the fetch API, uploading large files can lead to memory issues and permission errors.

To work around these limitations, developers often need to implement custom JavaScript solutions that handle chunking of the file during the upload process. This can be challenging and time-consuming, and it would be beneficial if the InputFile component could automatically handle chunking when uploading files.

By adding built-in support for chunking in the InputFile component, it would make it easier for developers to work with large files and improve the overall experience of handling file uploads in Blazor WebAssembly applications. This feature would also help to mitigate the limitations imposed by the current browser implementation and the fetch API, making file uploads more efficient and reliable.

Describe the solution you'd like

I would like the Blazor InputFile component to automatically handle chunking of files during the upload process. The solution should seamlessly integrate with the existing component API. This enhancement would allow developers to upload large files without encountering memory limitations or permission issues caused by the browser or the fetch API.

Additional context

Although some may initially view enabling chunking in the InputFile component as outside its scope, I believe there are numerous benefits that make it a fitting addition. This is particularly relevant for Blazor developers, who typically value efficient and reliable development without having to rely on custom JavaScript solutions. I am fairly new to the JavaScript side of things myself, but I have been working on an upload feature since many of my sites require it. I have created a proof of concept that was able to successfully bypass the limit. I would be happy to provide it if needed, but because it's not a modified version of the InputFile component I won't post it here initially.

@SteveSandersonMS
Member

Could you provide a specific minimal sample showing that this doesn't already do the chunking you're expecting? How do you know for sure it's not already being streamed? I'm asking because the implementation does supply the data as a stream, so it's not 100% clear whether the behavior you're observing is due to the framework or due to how you're using it.

Once we can confirm the exact gap between what you want and what actually happens, we'll be in a better position to plan for this. Thanks!

@ghost

ghost commented Apr 10, 2023

Hi @Aeroverra. We have added the "Needs: Author Feedback" label to this issue, which indicates that we have an open question for you before we can take further action. This issue will be closed automatically in 7 days if we do not hear back from you by then - please feel free to re-open it if you come back to this issue after that time.

@Aeroverra
Author

Aeroverra commented Apr 11, 2023

Greetings, and thank you for your response. I apologize if there has been a misunderstanding on my part. When I attempt to upload files larger than 2 GB with the InputFile component and then pass that stream to a StreamContent using HttpClient, I encounter a System.OutOfMemoryException in my browser console. Upon further research, I came across this documentation, which appears to indicate that WebAssembly does not support streaming files and instead loads the entire file into the client first. Could you kindly clarify whether my interpretation of this information is accurate or if I have misunderstood the situation?

I will try to throw together a sample today.
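
For context, here is a minimal sketch of the pattern being described: an InputFile change handler that hands the file stream straight to StreamContent and HttpClient. The handler name, size limit, endpoint URL, and injected Http client are illustrative, not taken from the linked sample.

// Hypothetical Blazor WebAssembly handler: the InputFile stream is passed directly to HttpClient.
// On WebAssembly the request body is buffered before fetch is called, so files larger than the
// available 32-bit memory fail with System.OutOfMemoryException, as described above.
private const long MaxFileSize = 10L * 1024 * 1024 * 1024; // 10 GB, illustrative

private async Task OnInputFileChange(InputFileChangeEventArgs e)
{
    var file = e.File;
    await using var stream = file.OpenReadStream(maxAllowedSize: MaxFileSize);

    using var content = new MultipartFormDataContent();
    content.Add(new StreamContent(stream), "file", file.Name);

    // Http is an injected HttpClient; "api/uploads" is a placeholder endpoint.
    var response = await Http.PostAsync("api/uploads", content);
    response.EnsureSuccessStatusCode();
}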

@ghost

ghost commented Apr 11, 2023

Hi @Aeroverra. We have added the "Needs: Author Feedback" label to this issue, which indicates that we have an open question for you before we can take further action. This issue will be closed automatically in 7 days if we do not hear back from you by then - please feel free to re-open it if you come back to this issue after that time.

@Aeroverra
Author

Aeroverra commented Apr 11, 2023

https://github.com/Aeroverra/Example-big-upload-dotnet-blazor-wasm
Here are the results.

Wasm Console

info: BlazorApp1.Client.Pages.Index[0]
      2022-11-30 13-45-58.mp4 19094013 has started uploading.
blazor.webassembly.js:1 info: BlazorApp1.Client.Pages.Index[0]
      2022-11-30 13-45-58.mp4 19094013 has finished uploading.
blazor.webassembly.js:1 info: BlazorApp1.Client.Pages.Index[0]
      2022-10-10 17-41-17.mkv 3724265667 has started uploading.
blazor.webassembly.js:1 warn: BlazorApp1.Client.Pages.Index[0]
      2022-10-10 17-41-17.mkv not uploaded Out of memory
System.OutOfMemoryException: Out of memory
   at System.IO.MemoryStream.set_Capacity(Int32 value)
   at System.IO.MemoryStream.EnsureCapacity(Int32 value)
   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.Net.Http.HttpContent.LimitMemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)
   at System.IO.MemoryStream.WriteAsync(ReadOnlyMemory`1 buffer, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at System.IO.Stream.<CopyToAsync>g__Core|29_0(Stream source, Stream destination, Int32 bufferSize, CancellationToken cancellationToken)
   at System.Net.Http.StreamToStreamCopy.<CopyAsync>g__DisposeSourceAsync|1_0(Task copyTask, Stream source)
   at System.Net.Http.HttpContent.<CopyToAsync>g__WaitAsync|56_0(ValueTask copyTask)
   at System.Net.Http.MultipartContent.SerializeToStreamAsyncCore(Stream stream, TransportContext context, CancellationToken cancellationToken)
   at System.Net.Http.HttpContent.LoadIntoBufferAsyncCore(Task serializeToStreamTask, MemoryStream tempBuffer)
   at System.Net.Http.HttpContent.<WaitAndReturnAsync>d__82`2[[System.Net.Http.HttpContent, System.Net.Http, Version=6.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a],[System.Byte[], System.Private.CoreLib, Version=6.0.0.0, Culture=neutral, PublicKeyToken=7cec85d7bea7798e]].MoveNext()
   at System.Net.Http.BrowserHttpHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpClient.<SendAsync>g__Core|83_0(HttpRequestMessage request, HttpCompletionOption completionOption, CancellationTokenSource cts, Boolean disposeCts, CancellationTokenSource pendingRequestsCts, CancellationToken originalCancellationToken)
   at BlazorApp1.Client.Pages.Index.OnInputFileChange(InputFileChangeEventArgs e) in C:\Users\nick.halka\Desktop\Repos\Example big upload dotnet blazor wasm\BlazorApp1\BlazorApp1\Client\Pages\Index.razor.cs:line 34

Server Console

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: https://localhost:7231
info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://localhost:5283
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
      Content root path: C:\Users\nick.halka\Desktop\Repos\Example big upload dotnet blazor wasm\BlazorApp1\BlazorApp1\Server\
warn: BlazorApp1.Server.Controllers.UploadsController[0]
      Files are uploading...
warn: BlazorApp1.Server.Controllers.UploadsController[0]
      File read 2022-11-30 13-45-58.mp4

The upload never makes it to the server; I suspect the InputFile component loads the entire thing at once rather than chunking and streaming the request. Using JavaScript interop you should be able to emulate a stream to bypass that limit.

Edit:
Okay, that is super weird. I just did one more test to make sure I am not crazy, and it appears to work just fine when iterating through the OpenReadStream. Is HttpClient's StreamContent the culprit here? One would assume StreamContent would stream the content, though.
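
For illustration, a rough sketch of the loop that works: reading the InputFile stream in fixed-size chunks keeps only one buffer alive at a time, which is consistent with the buffering happening inside HttpClient rather than in InputFile. The chunk size and variable names are arbitrary.

// file is the IBrowserFile from the change event; only one 4 MB buffer is ever held in memory.
await using var stream = file.OpenReadStream(maxAllowedSize: long.MaxValue);
var buffer = new byte[4 * 1024 * 1024];
long totalRead = 0;
int bytesRead;
while ((bytesRead = await stream.ReadAsync(buffer)) > 0)
{
    totalRead += bytesRead;
    // Each chunk could be posted to the server, hashed, or written somewhere here.
}
Console.WriteLine($"Read {totalRead:N0} bytes without buffering the whole file.");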

@javiercn
Member

@Aeroverra thanks for the additional details.

What is happening here is that the content is being buffered by HttpClient, and it can't allocate enough memory for the whole file.

InputFile already handles chunking. It is your usage of HttpClient that is causing this, since it's trying to read the entire stream into memory.

In general, you'd be better off creating a specialized component for this.

I think uploading a large file like this without relying on the browser's native functionality is going to be challenging. I am not even sure it's possible even if you use the fetch API directly, as the challenge is that the content for the request body needs to be buffered before it is sent. (Browsers do not support streaming the request body; Chrome/Edge support it experimentally, but it's not widely supported.)

In addition to that, since HttpClient is buffering the content, there is a limit of 2 GB, because WebAssembly apps are 32-bit only. Even if you were able to do this, you'd be asking the browser to potentially allocate up to 15 GB in a contiguous memory block, which in most cases is going to fail.

Our InputFile component is designed to provide an easy out-of-the-box experience for the most common types of files; it's not meant as a workhorse that can handle every extreme situation (like everything, it has limits and constraints).

For something like what you are trying to do, I believe you are better off creating your own specialized component that coordinates with a purpose-built endpoint on the server to handle the upload. This likely means that you handle the chunking yourself, using multiple HTTP range requests instead of a single request.
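
To make that concrete, here is a rough sketch of the kind of chunked upload loop being suggested. The endpoint route, header names, and chunk size are hypothetical, and a real implementation would also need retry handling plus a server endpoint that reassembles the chunks.

// Sketch (hypothetical endpoint and headers): upload a large IBrowserFile one chunk per request.
async Task UploadInChunksAsync(IBrowserFile file, HttpClient http)
{
    const int chunkSize = 4 * 1024 * 1024;        // 4 MB per request
    var uploadId = Guid.NewGuid().ToString("N");  // lets the server group chunks of one upload
    await using var stream = file.OpenReadStream(maxAllowedSize: long.MaxValue);

    var buffer = new byte[chunkSize];
    long offset = 0;
    int bytesRead;
    while ((bytesRead = await stream.ReadAsync(buffer)) > 0)
    {
        // Only the current chunk is held in client memory.
        using var request = new HttpRequestMessage(HttpMethod.Post, "api/uploads/chunk")
        {
            Content = new ByteArrayContent(buffer, 0, bytesRead)
        };
        request.Headers.Add("X-Upload-Id", uploadId);
        request.Headers.Add("X-Chunk-Offset", offset.ToString());

        var response = await http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        offset += bytesRead;
    }
}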

@javiercn
Member

A separate consideration would be if we can find a way to prevent HttpClient from buffering before calling fetch, so I'll transfer this to the runtime repo so that they can chime in.

/cc: @lewing @marek-safar @pavelsavara

@dotnet-issue-labeler bot added the needs-area-label label Apr 12, 2023
@javiercn transferred this issue from dotnet/aspnetcore Apr 12, 2023
@ghost added the untriaged label Apr 12, 2023
@javiercn added the arch-wasm label Apr 12, 2023
@ghost

ghost commented Apr 12, 2023

Tagging subscribers to 'arch-wasm': @lewing
See info in area-owners.md if you want to be subscribed.

Issue Details


Author: Aeroverra
Assignees: -
Labels:

arch-wasm, untriaged, needs-area-label

Milestone: -

@pavelsavara
Member

I think this is a duplicate of #36634

@javiercn
Member

javiercn commented Apr 12, 2023

@pavelsavara I would not think of this as a duplicate per se; there are two aspects to it:

  1. Where the buffering is happening (currently on the .NET side).
  2. Actual streaming of the HTTP request body to the server.

I think it will be useful for us if we can perform the buffering at the JS level instead of the .NET level (as currently happens inside HttpClient), because then the memory is managed on the JS side rather than in the .NET heap, and it can be released back to the OS after we are done with it.

My understanding is that since we can't shrink the wasm memory, once we've allocated 1 GB we can reuse that 1 GB, but we can't give it back.

For these types of scenarios, if we are able to push the buffering to the JS layer, it should work (since our InputFile is already doing chunking internally).

Ideally, enabling 2 is the ultimate goal, but 1 is a prerequisite for 2, and it will actually address the problem as long as the provided stream does not allocate the entire response in memory.

@marek-safar added the area-System.Net.Http label and removed the untriaged and needs-area-label labels Apr 12, 2023
@ghost

ghost commented Apr 12, 2023

Tagging subscribers to this area: @dotnet/ncl
See info in area-owners.md if you want to be subscribed.

Issue Details


Author: Aeroverra
Assignees: -
Labels:

arch-wasm, area-System.Net.Http

Milestone: -

@pavelsavara added this to the Future milestone Apr 12, 2023
@Aeroverra
Author

Aeroverra commented Apr 12, 2023

> @pavelsavara I would not think of this as a duplicate per se; there are two aspects to it:
>
>   1. Where the buffering is happening (currently on the .NET side).
>   2. Actual streaming of the HTTP request body to the server.
>
> I think it will be useful for us if we can perform the buffering at the JS level instead of the .NET level (as currently happens inside HttpClient), because then the memory is managed on the JS side rather than in the .NET heap, and it can be released back to the OS after we are done with it.
>
> My understanding is that since we can't shrink the wasm memory, once we've allocated 1 GB we can reuse that 1 GB, but we can't give it back.
>
> For these types of scenarios, if we are able to push the buffering to the JS layer, it should work (since our InputFile is already doing chunking internally).
>
> Ideally, enabling 2 is the ultimate goal, but 1 is a prerequisite for 2, and it will actually address the problem as long as the provided stream does not allocate the entire response in memory.

Ahh okay, thanks for your response. Looks like I led myself down the wrong path. In that case it sounds like SignalR or a WebSocket may make the most sense.

Edit: This was transferred, so I'm not sure if I should close this or not. So my understanding is that the browser is responsible for sending the request from HttpClient, and it loads the entire thing into memory. Even setting a buffer size on HttpClient wouldn't fix this, right?

@Gabriel123N

It seems that .NET 9 Preview 5 and above now support this: #91699

Sample code I was using until now:

// Added with .NET 9: opt this request into streaming the request body through fetch.
var WebAssemblyEnableStreamingRequestKey = new HttpRequestOptionsKey<bool>("WebAssemblyEnableStreamingRequest");

var req = new HttpRequestMessage(HttpMethod.Post, uriBuilder.Uri);
req.Version = HttpVersion.Version20;                          // Added with .NET 9
req.Options.Set(WebAssemblyEnableStreamingRequestKey, true);  // Added with .NET 9
req.Content = new StreamContent(chunkedStream);

response = await _client.SendAsync(req);

// Read the response body once and reuse it for both logging and deserialization.
var json = await response.Content.ReadAsStringAsync();
Console.WriteLine(json);
uploadResult = JsonSerializer.Deserialize<GalleryUploadResult>(json, _jsonSerializerOptions);

In .NET 8, using several calls to my upload function and a file stream from the Fluent UI Blazor library's FluentInputFileEventArgs copied into a MemoryStream in chunks of 4 MB, the browser tab's memory usage would still slowly grow by 50-70 MB per 5 GB+ file upload, and the network bandwidth was capped at 120 Mbps.

After upgrading to .NET 9 Preview 6, the same code without any chunking, simply passing the FluentInputFileEventArgs file stream, saw the browser use 100% of the available bandwidth. CPU usage also went way down, since I went from several fetches per second to a single network request for the whole file.

The memory usage still slowly increases as the file is uploaded, but overall the increase is similar to the chunking method, which is a massive improvement considering that without streaming it would attempt to buffer the whole file.
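
As a side note, the streaming only pays off end to end if the server also consumes the body as a stream instead of buffering it; below is a minimal sketch of such an endpoint (controller name, route, and destination path are illustrative, not taken from the sample above). Browser request streaming also requires HTTP/2 or later, which is presumably why the snippet sets req.Version to HttpVersion.Version20.

using Microsoft.AspNetCore.Mvc;

// Hypothetical endpoint that copies the raw request body to disk as it arrives,
// so nothing is buffered in server memory.
[ApiController]
[Route("api/uploads")]
public class StreamingUploadsController : ControllerBase
{
    [HttpPost("stream")]
    [DisableRequestSizeLimit] // lift the default ~30 MB request body limit for this action
    public async Task<IActionResult> UploadStream(CancellationToken cancellationToken)
    {
        var path = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

        await using var target = System.IO.File.Create(path);
        await Request.Body.CopyToAsync(target, cancellationToken);

        return Ok(new { path, bytes = target.Length });
    }
}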

@pavelsavara
Member

Here is the documentation PR dotnet/AspNetCore.Docs#33693

I think this is done, closing

@github-actions bot locked and limited conversation to collaborators Nov 9, 2024