RAIDZ Expansion feature discussion #15232
23 comments · 83 replies
-
So to summarize: AFAIK, a copy command will rewrite the data at the new width after expansion. I think I saw a script circulating that does basically that for an entire pool, cleaning up after itself. It works file by file for space reasons. It takes a while, but you only have to do it once. From my perspective that's plenty good enough.

RAID expansion is a niche feature, if much beloved by that niche, and adjusting the width afterwards is a niche within the niche. Handling that with a community script sounds like an entirely reasonable thing to do. Permissions and paths stay as they are, filenames stay, last-accessed and creation dates change, and the width changes.

To explain the niche comments: I am part of that niche. I'm a home user, but without the funds to start at full width, so I eventually did a poor man's RAID expansion from 5 disks to 8. The niche is people who start at a smaller width than they know they'll need and will only ever run one vdev: by definition home users, and then only a fraction of those. The niche within the niche is users for whom rewriting archival data makes an appreciable difference, or who care even though it doesn't. That need not be handled in the file system; handling it with good UX outside seems great. Setups that use ZFS, like TrueNAS and Proxmox, could even integrate such a script into their UI.
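A minimal sketch of what such a file-by-file rewrite could look like (the path is hypothetical; a real script, like the one mentioned in the next comment, should also check for open file handles, preserve xattrs/ACLs, and verify checksums):

```sh
#!/bin/sh
# Rewrite each file in place so its blocks are re-allocated at the
# post-expansion stripe width. Needs free space for the largest file;
# assumes nothing else is writing to the tree while it runs.
find /tank/data -type f | while IFS= read -r f; do
    cp -p "$f" "$f.rewrite" &&   # -p keeps mode, owner, timestamps
    mv "$f.rewrite" "$f"         # swap the copy into place
done
```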
-
This script, with caveats (read the README), helps automate rewriting your data by copying/pasting/deleting/moving at the filesystem level.
-
@Dulanic you can't remove a raidz vdev from a pool; only mirror vdevs (a single-disk vdev being a special case of a mirror) can be removed. You may be able to create a new pool with your 18TB drives, copy the data over, then expand that. It really depends on how much space you're already using, though. This is why a lot of home users stick with one vdev, lest they be "stuck" with two to manage. I have one 8x8 raidz2 which will likely last me forever. But in case it ever doesn't, I intend to replace drives as they fail with larger-capacity ones. Once the last drive has been replaced, I'll have more space.
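For reference, a minimal sketch of that replace-as-they-fail path (pool and device names are hypothetical):

```sh
# Let the vdev grow automatically once every member has been enlarged
zpool set autoexpand=on tank

# Replace a failed 8TB member with a larger drive; repeat per disk over time
zpool replace tank /dev/disk/by-id/old-8tb /dev/disk/by-id/new-16tb

# If autoexpand was off during the swaps, expand members manually
zpool online -e tank /dev/disk/by-id/new-16tb
```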
-
My current take on this: the stripe-width issue seems less urgent than the fact that expansion only adds one drive at a time.
-
Is RAIDZ "contraction" support included in this PR? 🤔
-
To @farkeytron: Discussion is here, not in the PR. iXsystems is paying for this work, so it'll definitely come to TrueNAS. SCALE first, most likely, since that is where canary features land, then Core in due time.
-
Hah, I take it you don't follow the issue tracker on GitHub too carefully, then. The number of issues I see with the encryption support (and not just performance problems, but "swallowed my data" or "corrupted it on replication" ones) has made me pretty hesitant to recommend ZFS's native encryption to anyone.
-
@don-brady @ahrens Thank you SO much for getting the ZFS expansion PR across the line. It'll likely be a key capability of ZFS from here on (from the next release onwards, of course). 😄
-
Is the expansion option available for all setups, including stripe?
-
Quick question: if I update to zfs master and enable the corresponding feature flag for raidz expansion, will it be "active" or "enabled" after the expansion completes? In other words, can I downgrade to the stable branch afterwards, or will there be incompatibilities in the on-disk layout that keep me on master until the next stable release?
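For anyone who wants to check this on their own pool, a minimal sketch (the pool name `tank` is hypothetical; whether the flag reads `enabled` or `active` after the reflow is exactly the question being asked):

```sh
# 'enabled' = the flag is on but has never been used
# 'active'  = the feature is in use on disk; software that lacks it
#             can no longer import the pool read-write
zpool get feature@raidz_expansion tank

# Or list the state of every feature flag at once
zpool get all tank | grep 'feature@'
```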
-
A couple of days ago @mmatuska updated OpenZFS in FreeBSD main, including RAIDZ Expansion. You can try this out with the next set of 15-CURRENT snapshot images.
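If you do try it out, the expansion itself is a single `zpool attach` against the raidz vdev; a minimal sketch with hypothetical pool, vdev, and device names:

```sh
# Attach one new disk to an existing raidz vdev; the reflow runs online
zpool attach tank raidz1-0 /dev/da4

# Watch reflow progress and the new capacity
zpool status tank
zpool list tank
```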
-
I tried to read through the docs here but I'm still not sure: I have a 2-disk vdev (the only option at creation was a mirror). Can this be expanded with a 3rd disk into a RAIDZ1?
-
This is arriving at a really good moment for me: I was about to expand a 4-disk raidz2 into a 6-disk one by the old method (create an entirely new pool, then copy everything over). So I must ask:
TIA!
-
How does expansion address fragmentation? If my capacity is under 50%, does the end result remove most/all fragmentation? And if my capacity is over 50%, does it end up with more fragmentation than before?
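One way to observe this empirically is to compare the pool's fragmentation metric before and after the expansion (pool name hypothetical; note the FRAG column measures free-space fragmentation, not file fragmentation):

```sh
# Per-vdev capacity and free-space fragmentation
zpool list -v tank
```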
-
I'm considering upgrading the disks in my system to something faster, and adding an extra disk in the process. So let's say I first add the extra disk and let ZFS expand the pool, then swap out the pre-existing disks one by one (or perhaps connect each faster one alongside) and let it resilver each time. Would the resulting pool have the updated data-parity ratio? I would assume so, since eventually all disks would have newly written data, but because it happened in response to a resilver it may not take the earlier expansion (fully) into account. As in: disk X is pretty much copied bit-for-bit to replacement disk X, resulting in the same original ratio (only that disk has to be rebuilt, so there's no (re)balancing of data and parity). I don't think there's enough room to connect all the new disks alongside the original ones and just send everything over. :DD
-
I'm planning to use ZFS for my home server with Proxmox. Would it be possible to set up a 2-drive RAIDZ1 right now and add a third drive later using this method when it releases?
-
A 2-drive raidz is just a mirror with crippled performance. You're better off creating a real mirror now and, once you have all 3 drives, splitting the mirror, creating a degraded raidz from 2 of the 3 drives, and zfs send-ing all the data from the 3rd. After that you can add the 3rd back in to make it redundant again. You will sacrifice redundancy during the migration, though.
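For reference, a hedged sketch of one common way to carry out that migration, using a sparse file as a stand-in for the missing raidz member (all pool, device, and file names are hypothetical, and the pool runs without redundancy until the final replace finishes):

```sh
# 1. Detach one disk from the 2-way mirror; 'tank' keeps running on /dev/sda
zpool detach tank /dev/sdb

# 2. Build a degraded 3-wide raidz1 from the freed disk, the new disk,
#    and a sparse placeholder file at least as large as the real disks
truncate -s 4T /var/tmp/placeholder.img
zpool create newtank raidz1 /dev/sdb /dev/sdc /var/tmp/placeholder.img
zpool offline newtank /var/tmp/placeholder.img   # never actually written

# 3. Copy everything across
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs recv -Fu newtank

# 4. Retire the old pool and heal the raidz with its last real disk
zpool destroy tank
zpool replace newtank /var/tmp/placeholder.img /dev/sda
```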
> On Thu, Feb 29, 2024, 02:55 Quadrubo wrote:
> Awesome, thank you! Wasn't even sure if RAIDZ1 with just 2 drives is a valid configuration.
-
Hi, when is this feature going to be available? From what I know it was said it will be added in the OpenZFS 2.3 release, but do we have an ETA? Thanks.
-
IMO "rewriting the data" should be part of the expansion process and part of this feature before release. I suppose making it optional at the time of expansion would be fine, but not too happy its up to users to manage/delete snapshops and copy/delete data to be able to take advantage of the expanded free space properly |
-
I was wondering if someone can comment on the state of this feature? The PR is complete, but it's not in OpenZFS 2.2 and it looks like it's slated for 2.3. iX is, however, shipping this feature in their next release, which is tentatively end of October 2024. Is iX shipping a feature not yet ready for general use? Or is OpenZFS simply holding it back for version reasons, waiting on other changes slated for 2.3?
-
Not exactly related to raidz expansion, but more about changing the form of a ZFS pool: what are the blockers on allowing users to switch their pool from a mirrored setup to a raidz setup (either offline or online)? It seems like it should be possible to go from mirror to raidz, but obviously not in reverse.
-
Would it be possible to use a similar approach to implement RAIDZ shrinking? Then, if less space is used, we could remove a disk permanently without losing parity.
-
My understanding is that after an expansion, old data remains at the old logical stripe width and only new data gets the benefit of the larger logical stripes. Is that still correct now that this feature is being released? If yes, is there a way to convert old data to use the new, larger stripe width?
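As discussed earlier in the thread, the usual answer is to rewrite the data. A hedged sketch of doing that per dataset with send/receive (dataset names are hypothetical; this needs enough free space to hold a second full copy while both exist, and any existing snapshots will keep pinning the old-width blocks):

```sh
# Re-allocate every block of one dataset at the new stripe width
zfs snapshot -r tank/media@rebalance
zfs send -R tank/media@rebalance | zfs recv tank/media-new
zfs destroy -r tank/media
zfs rename tank/media-new tank/media
```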
-
This is carrying on from PR #15022, so we don't crowd the PR out with general discussion traffic. 😄