Support for interim responses (1xx) #118
Great question! Does anyone have any links to good examples of how this is exposed in any standard library HTTP interfaces?
Looking at other APIs is a good idea! I have worked with interim responses in Go and Node.js, so I can share their approaches.

**Go**

When handling an incoming request on the server side, the handler receives an `http.ResponseWriter`. Calling `WriteHeader` with a 1xx status code sends an interim response, and it can be called again with the final status:

```go
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
	w.Header().Set("My-Interim-Header", "hello")
	w.WriteHeader(105) // interim response
	w.Header().Del("My-Interim-Header")
	w.Header().Set("My-Final-Header", "hello")
	w.WriteHeader(200) // final response
})
```

Retrieving interim responses as a client is a bit more tedious. You have to attach a client tracer to the outgoing HTTP request. This tracer allows you to define a callback that will be invoked for every received interim response:

```go
ctx := context.TODO()
ctx = httptrace.WithClientTrace(ctx, &httptrace.ClientTrace{
	Got1xxResponse: func(code int, header textproto.MIMEHeader) error {
		fmt.Printf("Got %d response\n", code)
		return nil
	},
})
req, _ := http.NewRequestWithContext(ctx, "GET", "https://example.com", nil)
res, _ := http.DefaultClient.Do(req)
```

**Node.js**

For clients in Node.js, receiving interim responses is as easy as listening for the `information` event:

```js
const req = http.request(options);
req.end();
req.on('information', (info) => {
  console.log(`Got information prior to main response: ${info.statusCode}`);
});
```

For servers in Node.js, the support for generic 1xx responses is not great. Node.js offers dedicated methods for sending specific 1xx responses (such as `writeContinue()`, `writeProcessing()`, and `writeEarlyHints()`).
Other than that, there are no methods for sending generic 1xx responses with custom headers. That being said, the approaches from Go and Node.js are similar: the server sends any number of interim responses through the same writer before committing to the final response, and the client registers a callback or event listener that is invoked for each interim response it receives.
Do you think these concepts are transferable to wasi-http?
Thanks so much for the detailed examples in 2 languages; that helps create a picture of what we'd need at the WASI level.

I think one thing I was wondering about that your examples help to explain is how to fall back gracefully when the receiver of a response doesn't know or care about interim responses: it sounds like you just get the non-interim response and body-stream like normal and silently ignore all the interim responses.

Just to confirm: is it the case that interim responses can only be received before a single final non-interim (non-1xx) response (followed by the body stream)? If so, that suggests to me (perhaps in a 0.3 release, when we're making breaking changes to wasi-http anyways) that
Yep! (cite)
Do you mean that
I was thinking that (again, in a 0.3 breaking-change timeframe) the resource type returned by

Thinking through direct component-to-component composition scenarios, it seems like the ideal here is that we're not bifurcating the types (or

One requirement that standard libraries have that we don't (yet) is that they have to maintain backwards compatibility with existing users, whereas we only need to be able to implement these library interfaces (with library impl code that can know the full 0.3 interface), which is nice.
Ah, I see, I was still thinking about it in the framework of the current
@lukewagner would you mind sketching out what you think the API would look like here? We're also interested in supporting interim responses in Spin, and this seems like a good time to update the

Naively, I could imagine something like
Taking advantage of the restricted response structure I tried to summarize above, and trying to optimize for the common case where there are no informational responses (or there are, but they're ignored), I was wondering if we could just get away with simply adding one extra method:

```wit
interface wasi:http/types {
  resource responses {
    informational: func() -> stream<informational-response>;
    final-status-code: func() -> status-code;
    body: func() -> option<body>;
  }
}

interface wasi:http/handler {
  handle: func(r: request) -> result<responses, error-code>;
}
```

noting that a

The question with this interface is: what happens if the client calls

This is just an idea though; WDYT?
Is it possible to ignore received informational responses until

Would that be possible?
@Acconut The challenge I believe is avoiding race conditions where informational responses are lost because

But thinking more about how chaining is supposed to work in practice makes me think that my previous sketch is overly simplistic and that probably we want something more like what @dicej wrote. In particular, combining all the responses into 1 exclusively-owned

FWIW, we could tighten up

```wit
handle: func(r: request) -> result<stream<interim-response, response>, error-code>;
```

where
FWIW: I added experimental, WASIp2-compatible informational response support to a fork of Spin (which also required forking Hyper, since upstream doesn't support 1xx responses yet): https://github.com/dicej/spin-informational-demo

It uses this WIT interface:

```wit
package spin:http@3.0.0;

interface http {
  use wasi:http/types@0.2.0.{response-outparam, headers, status-code};

  /// Send an HTTP 1xx response.
  ///
  /// Unlike `response-outparam.set`, this does not consume the `response-outparam`, allowing the
  /// guest to send an arbitrary number of informational responses before sending the final response
  /// using `response-outparam.set`.
  send-informational: func(out: borrow<response-outparam>, status: status-code, headers: headers);
}
```
This addresses the outbound response half of 1xx support: #139, based on the prototype I mentioned above.
This looks like a great proposal as it uses the type system to disallow semantically incorrect operations (e.g. sending an informational response after a final one). One thing to keep in mind is that informational responses are still a niche use case, so one might want to optimize the API for the typical use case of sending one final response while also allowing advanced use cases with informational responses. Would your proposed API be a hindrance for the typical use case?
@Acconut I agree that informational responses will be rare and thus it's worth trying to avoid hurting perf in the common case; that was the origin of my original suggestion. The tricky thing seems to be supporting the incremental streaming of interim responses in a multi-proxy/component chain while allowing individual components to ignore the existence of interim responses. But I won't claim to have exhausted the design space here, so other ideas are welcome.

One saving grace may be that most folks are going to be using a standard HTTP library implemented on top of wasi-http, and this library's implementation can hide interim responses (by doing the automatic forwarding I mentioned above) unless the client code indicates interest (by registering the interim-response callbacks, as you showed above). My assumption is that the (to be added) zero-copy
Another possible approach:

```wit
resource response {
  ...
  /// Create a new informational response; `status` must be in the range [100-199]
  new-informational: static func(status: status-code, headers: headers, next: future<response>) -> result<response>;

  /// Returns the next response if this is an informational response; otherwise return an error
  next: func() -> result<future<response>>;
  ...
}
```
A server can generate one final response and multiple interim responses for a single request. From RFC 9110:

Some informational status codes, like 100 Continue, are typically handled by the HTTP implementations themselves, but other interim responses are useful to applications. For example, a server may want to generate a 103 Early Hints interim response to allow the client to preload resources. Alternatively, a server may want to repeatedly generate interim responses for a long-running request to update the client on the processing progress. The client, on the other hand, may be interested in consuming those interim responses.

As far as I understand - and please correct me here if I am wrong - the interface currently does not expose capabilities for clients to receive or for servers to generate interim responses. Would there be interest in adding such features?