Large File Upload Issue (still) with MultipartReader in ASP.NET Core 8+ #58233

Open
SomeProgrammerGuy opened this issue Oct 4, 2024 · 7 comments
Labels
area-networking

Comments

@SomeProgrammerGuy

Is there an existing issue for this?

  • I have searched the existing issues

Describe the bug

Large File Upload Issue (still) with MultipartReader in ASP.NET Core 8+


Problem (one of many) Summary

The real problem, even noting the details below, is that .NET / ASP.NET is simply incapable of uploading large files (4GB+), which frankly is rather a joke in 2025. .NET 9 is on the horizon, and yet this fundamental web task still seems to be ignored.

Note: This is running in Visual Studio 2022 using only Kestrel; no other web server is involved. Before anybody starts commenting about the various Kestrel settings that can be changed: I have tried pretty much all of them in some form (unless there is some obscure one that isn't commonly known).

  • LengthLimit is hard-coded to HeadersLengthLimit (16KB) in the constructor of MultipartReader.
  • Developers cannot modify HeadersLengthLimit before it is used, because the property can only be set after the constructor completes.
  • As a result, any multipart data (header) exceeding 16KB causes an InvalidDataException, preventing large file uploads.
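
A minimal sketch of the limitation described above (boundary, httpRequest, and cancellationToken are placeholders, matching the repro further below):

// Sketch only: shows that the property cannot take effect in time.
MultipartReader reader = new MultipartReader(boundary, httpRequest.Body);

// Too late: the constructor has already created the internal preamble stream
// with LengthLimit = HeadersLengthLimit (16KB default); see the TODO quoted below.
reader.HeadersLengthLimit = int.MaxValue;

// If the preamble/header region exceeds 16KB (for example, when no valid
// boundary is found in the body), this still throws InvalidDataException.
MultipartSection? section = await reader.ReadNextSectionAsync(cancellationToken);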

Why This Is an Issue

  • The inability to adjust HeadersLengthLimit before it is enforced means developers cannot handle large file uploads. Given the violation of the Single Responsibility Principle discussed below, this limitation shouldn't even exist.
  • According to RFC 7578, which governs multipart/form-data, there is no specified maximum size for multipart uploads or headers. This hard-coded limit therefore imposes an unnecessary restriction that the RFC does not call for.
  • Uploading large files is fundamental to many web applications. The lack of configuration on the MultipartReader class prevents developers from implementing essential functionality.

Previous Reports of This Issue (still after seven years, I mean really)

This issue has been reported to you multiple times over the years:

Violation of the Single Responsibility Principle

Now, I always note that "rules are for the obedience of fools and the guidance of wise men", but in this case the rule seems a good call.

  • The MultipartReader class is responsible for parsing multipart data, but it also imposes hard-coded limits on header (and possibly body) sizes without allowing developers to adjust them.
  • This mixing of responsibilities violates the Single Responsibility Principle.
    • Reasoning: The class should focus solely on parsing multipart data. Enforcing limits should be a separate concern, configurable by the developer or handled by another component (e.g., the web server, as is generally the case).
  • By hard-coding these limits and not providing a way to configure them, the class restricts legitimate use cases (e.g., large file uploads) and reduces flexibility.
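
To illustrate the separation being argued for, a purely hypothetical sketch of a configurable reader. MultipartReaderOptions does not exist in the framework; it is invented here for illustration:

// Hypothetical API shape, not an existing one: limits become explicit, optional inputs.
var reader = new MultipartReader(boundary, httpRequest.Body, new MultipartReaderOptions
{
    HeadersLengthLimit = 64 * 1024, // configurable before any read occurs
    BodyLengthLimit = null          // no limit; enforcement left to the server or firewall
});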

Issue Description:

try
{
    // Code removed for brevity

    // Initialize the MultipartReader with the boundary, request body stream, and custom buffer size
    MultipartReader multipartReader = new MultipartReader(
        HeaderUtilities.RemoveQuotes(contentType.Boundary).Value!, // The multipart boundary
        httpRequest.Body,                                          // Stream containing multipart data
        customBufferSize                                           // The minimum buffer size to use
    );

    // When MultipartReader is instantiated, the following defaults are set:
    // - HeadersCountLimit is set to DefaultHeadersCountLimit (16)
    // - HeadersLengthLimit is set to DefaultHeadersLengthLimit (16,384 bytes or 16KB)
    //
    // These defaults are established as:
    // public const int DefaultHeadersCountLimit = 16;
    // public const int DefaultHeadersLengthLimit = 1024 * 16;
    // public int HeadersLengthLimit { get; set; } = DefaultHeadersLengthLimit;
    //
    // Also, in the constructor, the private field _currentStream is initialized:
    // private MultipartReaderStream _currentStream;
    //
    // The initialization is done with LengthLimit set to HeadersLengthLimit:
    // _currentStream = new MultipartReaderStream(_stream, _boundary)
    // {
    //     LengthLimit = HeadersLengthLimit
    // };
    //
    // Note: There's a comment in the code indicating a known issue:
    // // TODO: HeadersLengthLimit can't be modified until after the constructor.
    //
    // This means we cannot adjust HeadersLengthLimit before it's used in _currentStream.

    // **** THIS IS WHERE IT FAILS IF THE FILE UPLOAD IS TOO LARGE ****
    MultipartSection? multipartSection = await multipartReader.ReadNextSectionAsync(cancellationToken);

    // Explanation of the failure:
    //
    // Inside ReadNextSectionAsync(), the first operation is to drain any preamble data:
    // await _currentStream.DrainAsync(cancellationToken);
    //
    // The DrainAsync() method is defined in StreamHelperExtensions.cs:
    //
    // public static Task DrainAsync(this Stream stream, CancellationToken cancellationToken)
    // {
    //     return stream.DrainAsync(ArrayPool<byte>.Shared, null, cancellationToken);
    // }
    //
    // This calls the overloaded DrainAsync() method:
    //
    // public static async Task DrainAsync(this Stream stream, ArrayPool<byte> bytePool, long? limit, CancellationToken cancellationToken)
    // {
    //     cancellationToken.ThrowIfCancellationRequested();
    //     var buffer = bytePool.Rent(_maxReadBufferSize);
    //     long total = 0;
    //     try
    //     {
    //         var read = await stream.ReadAsync(buffer.AsMemory(), cancellationToken); // ** NOTE THIS CALL **
    //
    //         while (read > 0)
    //         {
    //             cancellationToken.ThrowIfCancellationRequested();
    //             if (limit.HasValue && limit.GetValueOrDefault() - total < read)
    //             {
    //                 throw new InvalidDataException($"The stream exceeded the data limit {limit.GetValueOrDefault()}.");
    //             }
    //             total += read;
    //             read = await stream.ReadAsync(buffer.AsMemory(), cancellationToken);
    //         }
    //     }
    //     finally
    //     {
    //         bytePool.Return(buffer);
    //     }
    // }
    //
    // During the first read operation, the stream's ReadAsync method is called, which eventually leads to MultipartReaderStream's ReadAsync.

    // Inside MultipartReaderStream.cs:

    // private int UpdatePosition(int read)
    // {
    //     _position += read;
    //
    //     if (_observedLength < _position)
    //     {
    //         _observedLength = _position;
    //
    //         // The LengthLimit is set to HeadersLengthLimit (16KB) in the constructor and cannot be modified before this point.
    //         // Since we cannot adjust HeadersLengthLimit before it's enforced, any large file upload exceeding 16KB will cause an exception.
    //
    //         if (LengthLimit.HasValue && _observedLength > LengthLimit.GetValueOrDefault())
    //         {
    //             throw new InvalidDataException($"Multipart body length limit {LengthLimit.GetValueOrDefault()} exceeded.");
    //         }
    //     }
    //
    //     return read;
    // }
    //
    // This is where the exception is thrown above when a large file is uploaded:
    //     if (LengthLimit.HasValue && _observedLength > LengthLimit.GetValueOrDefault())
    //
    
    // Code removed for brevity
}
catch (Exception ex)
{
    // Handle the exception appropriately
    // ...
}

Struggles and Final Thoughts

After struggling with this issue, I decided to write my own Multipart handler. But first, I wrote some code to directly take the uploaded stream and write it straight to a file without loading everything into memory. I tested it by attempting to upload a 5 GB file using Postman.

The following overly commented and overly verbose (not for production) code exists because if you cancel the upload in Postman or simply close Postman halfway through, Kestrel just keeps chugging along like nothing is wrong. It continues until it empties what I assume is its cache, and then just freezes. I’m sure there must be some timeout configuration to address this, but whatever it is, it’s incredibly long—almost like a DDoS/hacker's dream.
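
For what it's worth, the timeout being hunted for here is most likely Kestrel's minimum request body data rate. A sketch of tuning it in Program.cs, assuming the documented defaults of 240 bytes/second with a 5-second grace period:

using Microsoft.AspNetCore.Server.Kestrel.Core;

var builder = WebApplication.CreateBuilder(args);
builder.WebHost.ConfigureKestrel(options =>
{
    // Abort request bodies that arrive more slowly than this once the grace period ends.
    options.Limits.MinRequestBodyDataRate =
        new MinDataRate(bytesPerSecond: 240, gracePeriod: TimeSpan.FromSeconds(5));
});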

Once I got the code running somewhat correctly, it still would not allow the file to progress past 3,854,123 bytes. It just sits there until my own timer runs out.

[HttpPost]
[Route("api/testupload")]
// Every possible combination of size-limit and form-disabling attributes was tried
// here (e.g. [DisableRequestSizeLimit]), along with matching settings in Program.cs.
[DisableRequestSizeLimit]
public async Task<IActionResult> TestLargeSingleFileUploadAsync()
{
    // There is a lot of stuff for logging and just trying to test and bodge the thing to work.
    // Don't use this in production without going through it carefully first.

    CancellationToken cancellationToken = HttpContext.RequestAborted;
    
    bool uploadCompleted = false;

    try
    {
        Log.Information("User is attempting to upload a file.");

        // Define the path and filename where the incoming file will be saved.
        string uploadingFilePath = Path.Combine(_appsettings.PublicUploadsDirectoryPath, "uploadedFile.txt");

        // Use a reasonable buffer size for large file uploads.
        const int bufferSize = 16 * 1024 * 1024; // 16 MB buffer size.

        // Set a logging threshold of 200 MB.
        const long loggingThreshold = 200 * 1024 * 1024; // 200 MB

        // Keep track of the next logging threshold.
        long nextLoggingThreshold = loggingThreshold;

        // Use a FileStream to write the incoming data with a specified buffer size.
        using (FileStream fileStream = new(
            uploadingFilePath,
            FileMode.Create,
            FileAccess.Write,
            FileShare.Read,
            bufferSize: bufferSize, // Use the defined buffer size.
            FileOptions.Asynchronous)) // Asynchronous I/O without WriteThrough for better performance
        {
            PipeReader bodyReader = HttpContext.Request.BodyReader;
            long totalBytesRead = 0;
            long totalBytesWritten = 0;

            while (true)
            {
                // Read from the body reader using the cancellation token.
                Task<ReadResult> readTask = bodyReader.ReadAsync(cancellationToken).AsTask();

                // Check if the read task completes within the timeout (10 seconds).
                if (await Task.WhenAny(readTask, Task.Delay(10000, cancellationToken)) != readTask)
                {
                    Log.Warning("Read operation timed out, possibly because the client has stopped sending data.");
                    throw new TimeoutException("Read operation timed out, possibly because the client has stopped sending data.");
                }

                ReadResult readResult = await readTask;
                ReadOnlySequence<byte> buffer = readResult.Buffer;
                long bytesRead = buffer.Length;

                if (readResult.IsCompleted && buffer.IsEmpty)
                {
                    Log.Information("Completed reading from the request body. Total bytes read: {TotalBytesRead} bytes.", totalBytesRead);
                    break;
                }

                totalBytesRead += bytesRead;

                // Write the buffer segments to the file stream.
                foreach (ReadOnlyMemory<byte> segment in buffer)
                {
                    await fileStream.WriteAsync(segment, cancellationToken);
                    totalBytesWritten += segment.Length;

                    // Log only after exceeding the next 200 MB threshold.
                    if (totalBytesWritten >= nextLoggingThreshold)
                    {
                        Log.Information("Total bytes written to file so far: {TotalBytesWritten} bytes.", totalBytesWritten);
                        nextLoggingThreshold += loggingThreshold; // Update to the next 200 MB threshold.
                    }
                }

                // Mark the buffer as consumed.
                bodyReader.AdvanceTo(buffer.End);

                // Check for cancellation before continuing the next read.
                cancellationToken.ThrowIfCancellationRequested();
            }

            // Perform a single flush at the end of the upload.
            await fileStream.FlushAsync(cancellationToken);
            Log.Information("Final flush completed. Total bytes written to disk: {TotalBytesWritten} bytes.", totalBytesWritten);

            Log.Information("User has successfully uploaded the file. Total bytes written: {TotalBytesWritten} bytes.", totalBytesWritten);
            uploadCompleted = true; // Mark the upload as successfully completed.
        }

        return NoContent();
    }
    catch (OperationCanceledException)
    {
        Log.Warning("File upload was cancelled by the client.");
        return StatusCode(StatusCodes.Status499ClientClosedRequest, "File upload cancelled by client.");
    }
    catch (TimeoutException timeoutException)
    {
        Log.Warning(timeoutException, "File upload timed out.");
        return StatusCode(StatusCodes.Status408RequestTimeout, "File upload timed out.");
    }
    catch (Exception exception)
    {
        Log.Error(exception, "An unexpected error occurred during file upload.");
        return StatusCode(StatusCodes.Status500InternalServerError, exception.Message);
    }
    finally
    {
        // Delete the incomplete file only if the upload did not complete successfully.
        // if (!uploadCompleted)
        // {
        //     string incompleteFilePath = Path.Combine(_appsettings.PublicUploadsDirectoryPath, "uploadedFile.txt");
        //     if (System.IO.File.Exists(incompleteFilePath))
        //     {
        //         try
        //         {
        //             System.IO.File.Delete(incompleteFilePath);
        //             Log.Information("Incomplete file '{FilePath}' has been deleted.", incompleteFilePath);
        //         }
        //         catch (Exception fileException)
        //         {
        //             Log.Error(fileException, "Failed to delete incomplete file '{FilePath}'.", incompleteFilePath);
        //         }
        //     }
        // }
    }
}
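
For reference, here is a sketch of the Program.cs settings that the attribute placeholder above alludes to. This is an assumption about the combination tried, not a confirmed fix:

using Microsoft.AspNetCore.Http.Features;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(options =>
{
    options.Limits.MaxRequestBodySize = null; // lift Kestrel's ~30 MB default cap
});

builder.Services.Configure<FormOptions>(options =>
{
    options.MultipartBodyLengthLimit = long.MaxValue; // lift the multipart form body limit
});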

Conclusion

After spending a week battling with this, I realised that ASP.NET and Kestrel fall short when it comes to large file uploads. The lack of clear documentation and flexibility makes it evident that they are not well-suited for handling this use case. This isn’t just a small oversight—it’s a fundamental flaw that I think requires a complete redesign, not just a minor fix.

What Did I Do in the End?

Although my Rust programming skills are a bit rough around the edges, I used Axum, and it just works. Hardly any code, really fast, and even though Rust is difficult to program, it didn’t take long to get it working.

I only add this for anyone in 2024 trying to do this with .NET: don't waste a week as I did trying to get it to work. Consider alternatives like Axum if you need reliable large file upload support.

If somebody has managed to get this to work in .NET (without loading the whole file into memory, note), I would love to see a full working example including all the configuration around it, for .NET and Kestrel.

References

Expected Behavior

To be able to upload large files...

Steps To Reproduce

No response

Exceptions (if any)

No response

.NET Version

.NET 8

Anything else?

No response

@gfoidl added the area-networking label Oct 4, 2024
@amcasey
Member

amcasey commented Oct 7, 2024

Thanks for writing that up and sorry for the time and frustration that led to it. It's true that huge files don't "just work", at least in MVC, but the particular bottleneck will vary from app to app. The docs you linked pointed to a file upload sample that can handle 5GB files with minor adjustments.

I had some trouble following where HeadersLengthLimit came into the problem - can you tell me more about that? Are you putting the large file contents in the request header?

@WHumphreys

WHumphreys commented Oct 8, 2024

Hi @amcasey, if you read through the code snippet I posted, the comments explain (hopefully clearly) exactly where HeadersLengthLimit comes into the problem. In a nutshell: it "drains" the _currentStream first using the hard-coded HeadersLengthLimit = DefaultHeadersLengthLimit (which you cannot change) as the LengthLimit, and it throws an exception before it ever gets to LengthLimit = BodyLengthLimit.

This is using Postman to simply post a multipart file, with nothing special in the Postman setup. It works fine with files roughly up to 2GB, from my memory. And even if Postman is sending something strange (which I doubt it is), this should still never happen.

Again, as stated above, the header and the body can technically be any size per the RFC, so they shouldn't be restricted at all here. (In firewalls, web servers, etc., yes, restrict away, as unbounded sizes are a security risk.)

I will have a look at your adjustments when I have a little time later.

@WHumphreys

WHumphreys commented Oct 8, 2024

Hi @amcasey,

I had a quick look at your link, but I couldn't fully understand what specific solution you were suggesting (admittedly, it was a quick glance). I attempted to make similar changes in my setup, but unfortunately, they didn't resolve the problem in my simple test case.

The existing code is quite complex, and I think it would be very beneficial to have a straightforward example, as I outlined in my original post. Here's what I'm looking for in basic terms:

  1. Environment Setup: Demonstrate this in Visual Studio 2022 or later, using only Kestrel and no other web server.
  2. Client Setup: Use Postman as the client for file uploads, even if Postman might not perfectly align with ideal client behaviour. This is important because any solution should be robust enough to handle potentially non-standard or even malicious client behaviours.
  3. File Size: Upload a file larger than 6GB.
  4. Streaming Requirement: Stream that file directly to server disk. This doesn't need to create a file with a specific type; it can simply be a binary file representing the uploaded data. The purpose is to demonstrate that we can upload a large file and have it stored exactly as it was received.
  5. Kestrel Configuration: Provide explicit details on what should be added to Program.cs for Kestrel configurations to handle such uploads.
  6. API Implementation: Show a simple, well-commented REST API endpoint that handles the upload. This should work without requiring any refactoring or additional abstractions—the focus should be on simplicity.

I've already started this process, as seen in my original comment.

If we can get a basic scenario like this to work, I believe it would serve as an essential demonstration for documentation—showing that .NET can handle a large direct file stream to disk in the simplest form possible. From there, developers can decide on best practices or more advanced implementations, but the foundational "walk before you run" approach needs to work first.

In conclusion, note the code in my original post under "Once I got the code running somewhat correctly, it still would not allow the file to progress past 3,854,123 bytes. It just sits there until my own timer runs out."

@amcasey
Member

amcasey commented Oct 8, 2024

I agree that the existing sample is inadequate but, as you say, let's walk before we run. If you grab the branch from my PR, you should be able to compile a simple web app that has the basic characteristics you want - from a page, you can post a large file and have it written directly to disk on the server. I happen to have tested 5GB, but the difference between 5GB and (e.g.) 10GB shouldn't matter - the 32 bit boundaries are at 2/4GB.

In that branch you can open, build, and run aspnetcore\mvc\models\file-uploads\samples\3.x\SampleApp\SampleApp.csproj in VS 2022. (I happen to be on 17.12.0 Preview 3, but you should be able to use an older version, downgrading the TFM to 8.0, if necessary.) You should see a page like this:

[screenshot: the sample app's upload page]

The interesting link is the last one. The handler is in UploadPhysicalFile. It doesn't require a custom MultipartReader or modify (AFAICT) the HeadersLengthLimit.

My best guess is that your app is missing an update to KestrelServerOptions.Limits.MaxRequestBodySize, but it's hard to say without a buildable repro.
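
For concreteness, a sketch of that adjustment (global limit shown; [DisableRequestSizeLimit] on the action is the per-endpoint alternative):

builder.WebHost.ConfigureKestrel(options =>
{
    // null removes the limit entirely; a concrete value such as 10 GB is safer.
    options.Limits.MaxRequestBodySize = 10L * 1024 * 1024 * 1024;
});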

@WHumphreys
Copy link

@amcasey Your new code above errors exactly as in my original post: "Multipart body length limit 16384 exceeded." The reason why is explained in great detail in that post.

Did you attempt to upload a very big file to your new API endpoint using Postman as described?

Again, as per my last post, this should start from the basics.

Just stream any file, in chunks, directly to disk without using any of the multipart upload library:

  1. "Environment Setup: Demonstrate this in Visual Studio 2022 or later Using only Kestrel and no other web server,
  2. Client Setup: Use Postman as the client for file uploads, even if Postman might not perfectly align with ideal client behaviour. This is important because any solution should be robust enough to handle potentially non-standard or even malicious client behaviours.
  3. File Size: Upload a file larger than 6GB.
  4. Streaming Requirement: Stream that file directly to server disk. This doesn't need to create a file with a specific type; it can simply be a binary file representing the uploaded data. The purpose is to demonstrate that we can upload a large file and have it stored exactly as it was received.
  5. Kestrel Configuration: Provide explicit details on what should be added to Program.cs for Kestrel configurations to handle such uploads.
    API Implementation: Show a simple, well-commented REST API endpoint that handles the upload. This should work without requiring any refactoring or additional abstractions—the focus should be on simplicity."

Now this doesn't even need to be in a project.

This code can simply be two code windows here.

The first containing any Kestrel etc. settings in Program.cs.

And an API class with one method that does the above.

I have already posted a good start to this in one of my earlier posts: see the TestLargeSingleFileUploadAsync action above.

@amcasey
Member

amcasey commented Oct 14, 2024

Please remember that you've been thinking about your scenario for much longer than we have, so things that seem obvious with your experience and context are less so for us.

Posting code fragments is useful for illustration purposes, but working programs are much easier to validate and debug. A toy repo on GitHub would be ideal if there's something you'd like to demonstrate.

I'm having some trouble following what limit you're currently encountering: is it at 16 KB, 1 GB, or 6 GB? Do they all fail in the same way or are there multiple failure modes?

The purpose of the example PR I shared was to demonstrate that Kestrel can handle very large file uploads - it may not be directly applicable to your scenario. I happen to have used the upload page already present in the sample, but I agree that it's important to work with multiple clients. If you use the page in the sample, does that work on your box? Are you only seeing problems with Postman, or have you tried other clients as well?

For security reasons, we're not allowed to run Postman on our network. If the problem can't be reproduced with, say, curl, a pcap (without TLS) would be a useful investigative resource.

@r-Larch

r-Larch commented Nov 26, 2024

I encountered this issue while investigating another problem with large file uploads.

After diving into MultipartReader and FormFileModelBinder, I have a question for the reporter of this issue: @SomeProgrammerGuy, @WHumphreys

Are you certain that Postman is sending the file with the Content-Type: multipart/form-data header and a valid multipart boundary in the request body?

One possible explanation for your issue could be the following:
Postman is sending the header Content-Type: multipart/form-data; boundary=--XXXX, but the body contains plain file bytes without the required boundary.
This mismatch might be causing the HeadersLengthLimit to be exceeded.

Explanation:
This is expected behavior. The MultipartReader relies on the presence of a valid boundary as specified in the Content-Type header to parse multipart request bodies. If the body does not include a valid boundary, the reader cannot locate where individual parts of the form data begin and end. In such cases, the MultipartReader continues reading the stream until it surpasses the HeadersLengthLimit, leading to an error. This behavior ensures that improperly formatted requests are rejected rather than silently failing or producing undefined results.
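
One quick way to rule this out is to upload with a client that is known to write the boundary delimiters, e.g. HttpClient with MultipartFormDataContent. A sketch, where the URL and file name are placeholders:

using var client = new HttpClient();
using var content = new MultipartFormDataContent(); // generates and writes a valid boundary
await using var file = File.OpenRead("huge.bin");
content.Add(new StreamContent(file), name: "file", fileName: "huge.bin");

HttpResponseMessage response = await client.PostAsync("https://localhost:5001/api/testupload", content);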


To share the solution to my (related but different) problem: enabling BufferBody resolved the memory issue because it ensures that the request body stream is buffered in a file for large file uploads rather than being held entirely in memory. This prevents memory exhaustion when handling large payloads.

Here’s how I implemented it using the RequestFormLimitsAttribute:

[HttpPost("upload"), RequestFormLimits(BufferBody = true), DisableRequestSizeLimit]
public async Task<ActionResult> Upload([FromForm] IFormFileCollection files)
