Reorder requirements file decoding #12795
base: main
Conversation
Force-pushed a3f1cac to aa0f744
Force-pushed aa0f744 to 7df3500
Force-pushed 7df3500 to b4c3255
src/pip/_internal/req/req_file.py (Outdated)

            exc.encoding,
            fallback_encoding,
        )
        content = raw_content.decode(fallback_encoding)
It may be a good idea to use `errors="backslashreplace"` here. Most of the time, the offending bytes would just be a part of a comment anyway and would not make a difference.
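As a sketch of that suggestion (note the keyword argument is spelled `errors=`), assuming a stray non-UTF-8 byte inside a comment:

```python
raw = b"# comment with a stray latin-1 byte: caf\xe9\nrequests==2.32.0\n"

# errors="backslashreplace" keeps decoding and renders the offending
# byte as a visible escape sequence instead of raising UnicodeDecodeError.
content = raw.decode("utf-8", errors="backslashreplace")
print(content.splitlines()[0])  # # comment with a stray latin-1 byte: caf\xe9
```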
> It may be a good idea to use `errors="backslashreplace"` here. Most of the time, the offending bytes would just be a part of a comment anyway and would not make a difference.
I've been hesitating on this a bit; specifically, I'm wondering if this could be abused for nefarious purposes, where the contents of the file you 'see' (not well defined, since this is the case where the data won't fully decode) aren't the same contents that pip will process. Though I'm having a hard time finding a vulnerable use case (something like injecting an extra element, or adding a `.` to a domain name in a requirement URL).
Generally, we're trying to enforce compliance with the documentation, so I don't think we should introduce another hack to support technically mal-encoded requirement files.
Hmm, since the documentation says it's UTF-8 unless there is a PEP 263 style comment, shouldn't we rather decode as UTF-8 if there is no such comment, and if that fails, fall back to the current behaviour? That way we have a (more or less) non-breaking path to compliance with the docs. Also, I'd put all that in `auto_decode`, with a docstring comment that the function is meant for requirements.txt decoding as per the docs.
This sounds reasonable, though I think this change would need to be made in
Sounds good. I've removed this from the 24.3 milestone. Feel free to ping me when you get back to this.
Force-pushed b4c3255 to d0bf895
👍 I've updated the change and the title+description. It was basically a re-do, so I just stomped my previous commits. Per the description: I found and fixed another bug while testing this: requirements files starting with a UTF-32 LE BOM would always be decoded as UTF-16 LE.
Looks great so far! I love your tests. They're extensive and so easy to read! Also good catch with the UTF-32 LE vs UTF-16 LE BOM bug.
It is simplest to encode your requirements files with UTF-8.

The process for decoding requirements files is:

- Check for any Byte Order Mark at the start of the file and, if found, use
  the corresponding encoding to decode the file.
- Check for any {pep}`263` style comment (e.g. `# -*- coding: <encoding name> -*-`)
  and, if found, decode with the given encoding.
- Try to decode with UTF-8, and if that fails,
- fall back to trying to decode using the locale defined encoding.
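As a rough sketch of that documented order (the helper name `decode_requirements_bytes` is hypothetical, not pip's actual API, and the PEP 263 scan is simplified):

```python
import codecs
import locale
import re

# UTF-32 BOMs must be checked before UTF-16: codecs.BOM_UTF32_LE begins
# with the same two bytes as codecs.BOM_UTF16_LE.
BOMS = [
    (codecs.BOM_UTF8, "utf-8"),
    (codecs.BOM_UTF32_LE, "utf-32-le"),
    (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF16_LE, "utf-16-le"),
    (codecs.BOM_UTF16_BE, "utf-16-be"),
]

# PEP 263 allows the coding comment on the first or second line.
CODING_RE = re.compile(rb"coding[:=][ \t]*([-\w.]+)")

def decode_requirements_bytes(raw: bytes) -> str:
    for bom, encoding in BOMS:
        if raw.startswith(bom):
            return raw[len(bom):].decode(encoding)
    for line in raw.splitlines()[:2]:
        match = CODING_RE.search(line)
        if match and line.lstrip().startswith(b"#"):
            return raw.decode(match.group(1).decode("ascii"))
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # Deprecated quirk: fall back to the locale defined encoding.
        return raw.decode(locale.getpreferredencoding(False))
```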
This is the specification for requirement files; we should be tighter.

The default encoding for requirement files is `UTF-8` unless a different encoding
is specified using a {pep}`263` style comment (e.g. `# -*- coding: <encoding name> -*-`).

```{warning}
pip will fall back to the locale defined encoding if `UTF-8` decoding fails. This is a quirk
of pip's parser. This behaviour is *deprecated* and should not be relied upon.
```
While we're here, should we expand the specification to formally allow BOMs?
> While we're here, should we expand the specification to formally allow BOMs?

How much do we want to encourage this usage? Do you have an idea of how much BOMs are relied on? Specifically, I'm wondering if we can just say: UTF-8 by default, PEP 263 style comments supported, other bits deprecated; i.e. if someone is relying on a locale specific encoding or a BOM, we suggest they switch to a PEP 263 comment.
I would prefer to not allow BOMs, as @matthewhughes934 says. Ideally, as it's a backward incompatible change, we would initially issue a deprecation warning if a BOM is found, but given that the use of a BOM is undocumented (in the user docs), I'd be fine with simply removing that support if that's easier.
However, PEP 263 states that the UTF-8 signature `'\xef\xbb\xbf'` should signal UTF-8, so that one (which is treated as a BOM in `auto_decode`) should stay.
It's worth noting that UTF16 and UTF32 encodings can't be handled with a PEP 263 style comment, as those encodings are not ASCII supersets. But I think it's fine to stop supporting them.
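A quick illustration of why: the PEP 263 scan has to pattern-match raw bytes before the encoding is known, which only works when the ASCII bytes survive encoding:

```python
comment = "# -*- coding: utf-16 -*-\n"

# In an ASCII superset (UTF-8, latin-1, ...) the marker is findable:
print(b"coding" in comment.encode("utf-8"))  # True

# In UTF-16 every other byte is NUL, so a byte-level scan never sees it:
raw = comment.encode("utf-16-le")
print(b"coding" in raw)                      # False
print(raw[:12])                              # b'#\x00 \x00-\x00*\x00-\x00 \x00'
```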
> It's worth noting that UTF16 and UTF32 encodings can't be handled with a PEP 263 style comment, as those encodings are not ASCII supersets. But I think it's fine to stop supporting them.
I'm not following this issue closely, but in case it's helpful, I'll note that uv initially only supported UTF-8 but then found they had user issues from not supporting UTF-16: astral-sh/uv#2283. I think because redirecting from stdout to a file on Windows would end up with a UTF-16 file, e.g. `pip freeze > requirements.txt`.
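For context, a small sketch of that failure mode, simulating the UTF-16 LE output a Windows PowerShell redirect produces:

```python
import codecs

# What a `pip freeze > requirements.txt` redirect under Windows
# PowerShell 5.x typically produces: UTF-16 LE with a BOM.
data = codecs.BOM_UTF16_LE + "requests==2.32.0\n".encode("utf-16-le")

try:
    data.decode("utf-8")  # fails on the 0xFF BOM byte
except UnicodeDecodeError as exc:
    print("plain UTF-8 decode fails:", exc)

# BOM sniffing recovers the content:
if data.startswith(codecs.BOM_UTF16_LE):
    print(data[len(codecs.BOM_UTF16_LE):].decode("utf-16-le"))
```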
I'm not really in the mood to break our Windows users over requirements file decoding, honestly. I'd prefer documenting BOMs as supported as I don't like to have implementation-defined behaviour here, but I also recognize that would promote their proliferation so leaving this as implementation-defined behaviour is a fair compromise. Using the old docs text or my suggestion (without any changes) is fine.
Thanks @notatallshaw for letting us know that UTF-16 can be pretty common on Windows.
> I'm not really in the mood to break our Windows users over requirements file decoding, honestly. I'd prefer documenting BOMs as supported as I don't like to have implementation-defined behaviour here, but I also recognize that would promote their proliferation, so leaving this as implementation-defined behaviour is a fair compromise. Using the old docs text or my suggestion (without any changes) is fine.
👍 I like your more concise docs, and the warning, so I went with that: 5c49381
> I think because redirecting from stdout to a file on Windows would end up with a UTF-16 file
In Windows 11, cmd seems to use UTF-8[^1], as does PowerShell Core. Windows PowerShell does use UTF-16, though, so I guess there's a proportion of users who will encounter this (I don't have a feel for the relative usage of the 3 shells).

Given this, +1 on retaining the existing support but not documenting it.

[^1]: This may depend on the console codepage; I'm not sure when UTF-8 became the default.
👍 I added a code comment explaining why we still support BOMs: f1dbf49
-            content = auto_decode(f.read())
+            raw_content = f.read()
     except OSError as exc:
         raise InstallationError(f"Could not open requirements file: {exc}")
+
+    content = _decode_req_file(raw_content, url)
Hmm, if the decoding with the locale encoding also fails, the exception won't be caught. Any objections to calling `_decode_req_file` within the try-except?
> Hmm, if the decoding with the locale encoding also fails, the exception won't be caught. Any objections to calling `_decode_req_file` within the try-except?

No objection to adding it there. My original assumption, and the reason I moved it out, was that the catching of `OSError` was only really relevant for reading the content, and not the actual decoding.
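One possible shape for that (a sketch reusing names from the diff; since `UnicodeDecodeError` is a `ValueError` rather than an `OSError`, the `except` clause would also need to name it):

```python
try:
    with open(filename, "rb") as f:  # `filename` is illustrative here
        raw_content = f.read()
    # Decoding now happens inside the try block, so a failure in the
    # locale-encoding fallback is reported as an InstallationError too
    # (the message would likely be adjusted to mention decoding).
    content = _decode_req_file(raw_content, url)
except (OSError, UnicodeDecodeError) as exc:
    raise InstallationError(f"Could not open requirements file: {exc}")
```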
Oh, and sorry for taking so long to re-review your PR. It fell off our radar, and then we honestly haven't had the time to review it.
Force-pushed fd2801e to 60c6bb9
We could make the fallback to the locale/system encoding raise a deprecation warning instead of a plain warning, but I'm honestly not very interested in going through a deprecation cycle over this. If other maintainers feel differently, I have no issues with adding one.
Thank you again for your patience! Good work!
This changes the decoding process to be more in line with what was
previously documented. The new process is outlined in the updated docs.

The `auto_decode` function was removed and all decoding logic moved to
the `pip._internal.req.req_file` module because:

* This function was only ever used to decode requirements files
* It was never really a generic 'util' function; it was always tied to
  the idiosyncrasies of decoding requirements files.
* The module lived under `_internal`, so I felt comfortable removing it

A warning was added when we _do_ fall back to using the locale defined
encoding, to encourage users to move to an explicit encoding definition
via a coding style comment.

This fixes two existing bugs. Firstly, when:

* a requirements file is encoded as UTF-8, and
* some bytes in the file are incompatible with the system locale

Previously, assuming no BOM or PEP-263 style comment, we would default
to using the encoding from the system locale, which would then fail (see
issue pypa#12771).

Secondly, when decoding a file starting with a UTF-32 little endian Byte
Order Marker. Previously this would always fail, since
`codecs.BOM_UTF32_LE` is `codecs.BOM_UTF16_LE` followed by two null
bytes, and because of the ordering of the list of BOMs the UTF-16 case
would be run first and match the file prefix, so we would incorrectly
deduce that the file was UTF-16 little endian encoded. I can't imagine
this is a popular encoding for a requirements file.

Fixes: pypa#12771
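The BOM-ordering bug described above is easy to demonstrate with the stdlib constants:

```python
import codecs

# BOM_UTF32_LE is BOM_UTF16_LE followed by two NUL bytes, so any
# startswith() scan has to test the UTF-32 BOMs first.
assert codecs.BOM_UTF32_LE == codecs.BOM_UTF16_LE + b"\x00\x00"

data = codecs.BOM_UTF32_LE + "pip\n".encode("utf-32-le")
print(data.startswith(codecs.BOM_UTF16_LE))  # True -- the false match
print(data.startswith(codecs.BOM_UTF32_LE))  # True -- the correct match
```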
Force-pushed f1dbf49 to ef986d2
Latest change was just the result of me rebasing on `main`.
There's a test failure https://github.com/pypa/pip/actions/runs/12528829883/job/34943820232?pr=12795, error:

It looks like
I hit "rerun failed jobs". Let's see if that fixes it.