Detect schema evolution or partition evolution for append DataFile #777
This is a very interesting question that I'm happy to elaborate on.
This is true for V1 tables, where the field-IDs are omitted and not written to the files. Because of that, V1 tables come with special rules for partition evolution. There is an issue to create a dedicated API to evolve the partition spec that enforces these rules for V1 tables: #732. I think this would be a great thing to have, since violating these rules might brick the table or, even worse, corrupt data.

For V2 tables, we use field-ID projection, where the reader reads and projects the files into the structs based on the field-IDs. This allows for re-ordering: when reading the files, the fields are placed into the correct position of the struct. The write order of fields doesn't make any difference for V2, as they are re-ordered on read. Of course, I would suggest keeping the same order as the partition spec, to keep everyone sane.
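To make the V2 behaviour concrete, here is a minimal sketch of projection by field-ID; the types and function below are illustrative stand-ins, not the actual iceberg-rust reader code:

```rust
use std::collections::HashMap;

/// Illustrative only: map columns from a file schema to the table schema by
/// field-ID instead of by position, so reordered fields still land in the
/// right slot of the struct. `(field_id, name)` pairs stand in for real
/// schema field types.
fn project_by_field_id(
    file_fields: &[(i32, &str)],
    table_fields: &[(i32, &str)],
) -> Vec<Option<usize>> {
    // Position of each field-ID inside the file.
    let file_index: HashMap<i32, usize> = file_fields
        .iter()
        .enumerate()
        .map(|(pos, (id, _))| (*id, pos))
        .collect();

    // For every field the table expects, find where it sits in the file
    // (None = the column is missing from this file, e.g. it was added later).
    table_fields
        .iter()
        .map(|(id, _)| file_index.get(id).copied())
        .collect()
}
```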
This ties in with a discussion I had earlier this week with @c-thiel that resulted in #771. My suggestion was to make the field-ID required regardless of the version (see #763). This is safe to do if we adhere to the limitations of partition evolution mentioned above. When reading V1 tables, we can sequentially assign the IDs to each of the partition specs, starting at 1000.
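A minimal sketch of that back-fill, assuming simplified stand-in types rather than the actual iceberg-rust partition-spec structs:

```rust
/// Simplified stand-in for a partition field; in V1 metadata the field-ID
/// may be absent. Not the actual iceberg-rust type.
struct PartitionField {
    source_id: i32,
    field_id: Option<i32>,
    name: String,
}

/// Back-fill field-IDs for a V1 partition spec: assign 1000, 1001, ...
/// in the order the fields appear in the spec.
fn assign_v1_field_ids(fields: &mut [PartitionField]) {
    const PARTITION_DATA_ID_START: i32 = 1000;
    for (idx, field) in fields.iter_mut().enumerate() {
        if field.field_id.is_none() {
            field.field_id = Some(PARTITION_DATA_ID_START + idx as i32);
        }
    }
}
```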
I think that's a great thing to do anyway. It isn't super expensive, and it will prevent folks from bricking their table. Preferably match by field-ID for both V1 and V2; otherwise, by order for V1 and by field-ID for V2.
I'm not sure if I fully understand this one. We know the type in the file, and we know what to project to. Iceberg currently has a fairly limited set of promotions. This is because we encode the upper and lower bounds into binary. Based on the number of bytes, we can safely determine whether it is a float (4 bytes) or a double (8 bytes), and whether we need to promote the type based on the current schema. We can do some cool stuff here, for example, if you query …

I know that this is a lot of text; I hope this helps, and I'm always happy to elaborate.

Edit: I did some checks today, and it seems to work pretty well with minor modifications: #786
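A rough sketch of that byte-width check; the function below is purely illustrative (the name and signature are assumptions, not the actual iceberg-rust bounds/Datum code):

```rust
/// Illustrative only: decode a serialized lower/upper bound based on its
/// byte width, promoting a 4-byte float to double when the current schema
/// expects a double.
fn decode_float_bound(bytes: &[u8], schema_wants_double: bool) -> Option<f64> {
    match bytes.len() {
        // 4 bytes: the file wrote a float; promote it to double if needed.
        4 => {
            let v = f32::from_le_bytes(bytes.try_into().ok()?);
            Some(v as f64)
        }
        // 8 bytes: already a double in the file.
        8 if schema_wants_double => Some(f64::from_le_bytes(bytes.try_into().ok()?)),
        // Anything else does not match a float/double layout.
        _ => None,
    }
}
```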
Thanks for looping me in @Fokko, I'd not seen AboveMax / BelowMin in the other impls before - pretty cool. I'll think about how we would go about implementing this in Rust - we don't do many conversions at the moment in Datum::to anyway - maybe we could add …
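For illustration, one hypothetical Rust shape for the AboveMax / BelowMin idea (a sketch of the concept, not the actual Datum::to API):

```rust
/// Hypothetical result of converting a bound to a narrower type: either an
/// exact value, or a marker that the source value falls outside the target
/// type's range, so pruning logic can still reason about it safely.
enum Converted<T> {
    Value(T),
    AboveMax,
    BelowMin,
}

/// Example: converting an i64 bound down to i32.
fn i64_to_i32(v: i64) -> Converted<i32> {
    if v > i32::MAX as i64 {
        Converted::AboveMax
    } else if v < i32::MIN as i64 {
        Converted::BelowMin
    } else {
        Converted::Value(v as i32)
    }
}
```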
I've actually started adding some more conversions - a good beginner task for a Rust novice like me :)
Thanks @ZENOTME for raising this.
Do you mean we should maintain a partition schema in …?
I'm also confused about this part. I think it's reasonable to validate that the Arrow record batch's schema matches the current table schema we are using. But doing schema evolution at write time doesn't sound like a good idea to me. They should be two separate transaction updates.
After #349, we now support appending a DataFile. But I found that some checks may be missing: when we append a DataFile, schema evolution or partition evolution may have happened in the table after we generated the DataFile, which makes the info in the DataFile invalid. E.g. the partition value in the DataFile becomes invalid when partition evolution happens, and lower_bound/upper_bound become invalid when schema evolution happens. So we need to detect the case where a DataFile is incompatible with the table.
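As a rough illustration of the kind of check this is asking for, here is a sketch; the types and field names below are hypothetical simplifications, not the actual iceberg-rust DataFile/metadata API:

```rust
/// Hypothetical, simplified view of the metadata needed for the check.
struct PendingDataFile {
    partition_spec_id: i32, // spec the partition values were written against
    schema_id: i32,         // schema the column bounds were collected against
}

struct TableState {
    default_spec_id: i32,
    current_schema_id: i32,
}

/// Reject an append if the table evolved after the DataFile was built.
fn validate_append(file: &PendingDataFile, table: &TableState) -> Result<(), String> {
    if file.partition_spec_id != table.default_spec_id {
        return Err(format!(
            "partition spec evolved ({} != {}); partition values may be invalid",
            file.partition_spec_id, table.default_spec_id
        ));
    }
    if file.schema_id != table.current_schema_id {
        return Err(format!(
            "schema evolved ({} != {}); lower/upper bounds may be invalid",
            file.schema_id, table.current_schema_id
        ));
    }
    Ok(())
}
```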
For partition evolution, we have two ways to detect it:
For schema evolution:
Based on the above analysis, we need to make the following fixes:
I'm not sure whether my understanding is correct; please correct me if something is wrong. cc @Fokko @liurenjie1024 @Xuanwo