Add a new blackbar detection mode for subtitles #821
I was looking into this a little for myself, as I was having the same issue. I switched my blackbar detector to the `letterbox` mode; the original PR (here) that implemented it is worth a read.

However, I propose that the scanline positions at the bottom should be user configurable: 25%/75% is a reasonable default, but I'd like to be able to use 15%/85% or 20%/80% for some content. And perhaps the documentation can clarify that this mode does actually make an effort to avoid the subtitle region; I almost think there's an argument to rename it to make that clearer.
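For illustration, here's a minimal sketch of what configurable scanline positions could look like. Everything here is hypothetical (`Frame`, `Rgb`, `isBlack`, and `topBorderHeight` are made-up names, not HyperHDR's actual detector API):

```cpp
#include <cstdint>
#include <vector>

struct Rgb { uint8_t r, g, b; };

struct Frame {
    int width = 0, height = 0;
    std::vector<Rgb> pixels;                              // row-major
    Rgb at(int x, int y) const { return pixels[y * width + x]; }
};

// A pixel counts as "black" below a small per-channel threshold,
// since blackbars are rarely a perfect 0/0/0 after compression.
static bool isBlack(Rgb c, uint8_t threshold = 16) {
    return c.r <= threshold && c.g <= threshold && c.b <= threshold;
}

// Scan downward from the top along vertical lines placed at the given
// width fractions, e.g. {0.25, 0.75} as today, or {0.15, 0.85} for
// content whose subtitles reach further toward the edges.
// Returns the first row where any scanline leaves the black bar.
int topBorderHeight(const Frame& f, const std::vector<double>& xFractions) {
    for (int y = 0; y < f.height / 2; ++y)
        for (double fx : xFractions) {
            int x = static_cast<int>(fx * (f.width - 1));
            if (!isBlack(f.at(x, y)))
                return y;                                 // border ends here
        }
    return f.height / 2;                                  // whole top half is black
}
```

With `{0.25, 0.75}` this matches the current default; exposing the fractions in the configuration would cover the 15%/85% and 20%/80% cases.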
Sorry for the silence on the subject, but currently I'm working on LUT calibration. I do have this feature in mind, and in fact: why should we limit ourselves? Let the user activate as many scanlines as they need from the available pool (every 10%) at the top and bottom; this doesn't burden the processor much, even on an RPi. Blackborder detection would have to be completely rewritten, but that's even better.

Since I don't use this module myself, I have one doubt: should we assume that the blackborder at the top has exactly the same height as the one at the bottom (+/- 2-3 pixels)? And are many vertical scanlines (from the top and bottom) enough, or are oblique ones still necessary?
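As a rough sketch of that scanline-pool idea, assuming probes every 10% of the width that the user can toggle individually (reusing the hypothetical `Frame` and `isBlack` helpers from the sketch above):

```cpp
#include <algorithm>
#include <bitset>

// Pool of 11 vertical probes at 0%, 10%, ..., 100% of the width.
// Returns the shallowest border height among the enabled probes,
// scanning only the top half of the frame.
int borderHeightFromPool(const Frame& f, const std::bitset<11>& enabled) {
    int best = f.height / 2;                  // assume the bar covers at most half
    for (int i = 0; i <= 10; ++i) {
        if (!enabled.test(i))
            continue;
        int x = std::min(f.width - 1, f.width * i / 10);
        int y = 0;
        while (y < f.height / 2 && isBlack(f.at(x, y)))
            ++y;
        best = std::min(best, y);             // the shallowest probe wins
    }
    return best;
}
```

Running the same pool upward from the bottom would also sidestep the symmetry question: compare the two measured heights instead of assuming they are equal.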
I know it's not what's being explicitly discussed in this issue, but I would love a solution to this, as I watch most things with subtitles enabled.

A naive implementation might be for the user to specify the color of the subtitle and its outline, and have HyperHDR ignore any pixels near the bottom of the screen that match those two colors when calculating the color to send to the LEDs. Maybe only do it when pixels matching those colors are grouped together/adjacent and encountered in the same block. Or, instead of ignoring those pixels, it could "borrow" colors from an adjacent block, or extend the reach of the block to take pixels from further up in the content.

A local AI-based approach would probably fix this problem completely, but I wouldn't presume that's a route awawa-dev would want to go down.
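As a sketch of that naive color-masking idea (again with the hypothetical `Frame`/`Rgb` helpers from above; the per-channel tolerance is an assumption, since subtitle edges are anti-aliased and exact matches would miss them):

```cpp
#include <cstdlib>

// True if two colors are within a small per-channel tolerance of each other.
static bool nearColor(Rgb a, Rgb b, int tol = 24) {
    return std::abs(a.r - b.r) <= tol &&
           std::abs(a.g - b.g) <= tol &&
           std::abs(a.b - b.b) <= tol;
}

// Average the bottom strip of the frame, skipping pixels that look like
// the configured subtitle fill or outline color.
Rgb bottomZoneColor(const Frame& f, Rgb subFill, Rgb subOutline) {
    long r = 0, g = 0, b = 0, n = 0;
    for (int y = f.height * 9 / 10; y < f.height; ++y)    // bottom 10%
        for (int x = 0; x < f.width; ++x) {
            Rgb c = f.at(x, y);
            if (nearColor(c, subFill) || nearColor(c, subOutline))
                continue;                                 // skip subtitle-like pixels
            r += c.r; g += c.g; b += c.b; ++n;
        }
    if (n == 0)
        return {0, 0, 0};              // everything matched; fall back to black
    return { static_cast<uint8_t>(r / n),
             static_cast<uint8_t>(g / n),
             static_cast<uint8_t>(b / n) };
}
```

The grouping/adjacency check would be the harder part; a plain per-pixel mask like this would also drop legitimate content that happens to match the subtitle color.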
Feature request
What problem does this feature solve?
When watching a movie with blackbars, subtitles un-trigger the detection, and the top and bottom LEDs are turned off, except for the white LEDs where the subtitles are.
What does the proposed API look like?
How should this be implemented in your opinion?
Instead of detecting blackbars based on the whole bottom strip (the red area in the screenshot), the bottom zone should be split in the middle, something like the green areas, so the subtitle region in the center is skipped.
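A minimal sketch of that split, once more with the hypothetical `Frame`/`isBlack` helpers from the comments above: only the outer quarters of the bottom strip (the green areas) are sampled, so subtitles in the center never break detection. The quarter-width segments are an assumption; the split point could just as well be configurable:

```cpp
// Check whether a given row still belongs to the bottom blackbar by sampling
// only the left and right quarters and ignoring the center half, where
// subtitles appear.
bool rowIsBlackBar(const Frame& f, int row) {
    int quarter = f.width / 4;
    for (int x = 0; x < quarter; ++x)                     // left green segment
        if (!isBlack(f.at(x, row)))
            return false;
    for (int x = f.width - quarter; x < f.width; ++x)     // right green segment
        if (!isBlack(f.at(x, row)))
            return false;
    return true;                                          // center is never sampled
}
```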
Are you willing to work on this yourself?
Sure, I can write a little C++, but testing is a bit more complicated since HyperHDR is running on my webOS TV.