
TEST: using XFAIL instead of SKIP where it is required #2099

Draft: wants to merge 2 commits into main

Conversation

samir-nasibli (Contributor)

Description

[in progress]
Tests will be checked.
Assuming that skip should be used only for environment limitations or SIGSEGVs; known failures should be marked with xfail instead. A sketch of that split follows below.
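As an illustration of the intended split (the environment variable and marker arguments below are hypothetical, not taken from this PR's diff; note that a SIGSEGV kills the test process, so a crashing test can only be skipped, not xfailed):

import os
import pytest

# Environment limitation: the test cannot run here at all, so SKIP it.
@pytest.mark.skipif("SYCL_QUEUE" not in os.environ,  # hypothetical env var
                    reason="requires a device queue configured in the environment")
def test_requires_device():
    ...

# Known product bug: the test still runs and the failure stays tracked, so XFAIL it.
@pytest.mark.xfail(reason="known bug, expected to fail until fixed")
def test_known_bug():
    ...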


Checklist to comply with before moving PR from draft:

PR completeness and readability

  • I have reviewed my changes thoroughly before submitting this pull request.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have updated the documentation to reflect the changes, or created a separate PR with the update and provided its number in the description, if necessary.
  • Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • I have added the respective label(s) to the PR if I have permission to do so.
  • I have resolved any merge conflicts that might occur with the base branch.

Testing

  • I have run it locally and tested the changes extensively.
  • All CI jobs are green or I have provided justification why they aren't.
  • I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • I have provided justification why performance has changed or why changes are not expected.
  • I have provided justification why quality metrics have changed or why changes are not expected.
  • I have extended the benchmarking suite and provided the corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

@icfaust (Contributor)

icfaust commented Oct 9, 2024

just be careful, from the pytest docs: "Note that no other code is executed after the pytest.xfail() call, differently from the marker. That’s because it is implemented internally by raising a known exception"
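A minimal sketch of that difference (self-contained; the failing assertion just stands in for a real check):

import pytest

def test_imperative():
    pytest.xfail("known bug")  # raises an internal XFailed exception here
    assert 1 == 2              # never executed

@pytest.mark.xfail(reason="known bug")
def test_marker():
    assert 1 == 2              # executed; the failure is reported as XFAIL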

@samir-nasibli (Contributor, Author)

/intelci: run

@samir-nasibli (Contributor, Author)

samir-nasibli commented Oct 9, 2024

> just be careful, from the pytest docs: "Note that no other code is executed after the pytest.xfail() call, differently from the marker. That's because it is implemented internally by raising a known exception"

Thank you @icfaust! It would be better to use pytest.mark.xfail instead. It's not applicable in all cases; I'll think about it and suggest an approach.
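For conditional cases, the marker also accepts a condition as its first argument, which may cover some of them (the platform check here is purely illustrative):

import sys
import pytest

@pytest.mark.xfail(sys.platform == "win32", reason="fails on Windows builds")
def test_platform_sensitive():
    ...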

@syakov-intel

@samir-nasibli what needs to be done to get out of Draft state?

@samir-nasibli (Contributor, Author)

pytest.xfail() isn't working as I expected: no code after the call is executed, so the proposed changes lose their value and meaning. I suggest closing the PR. In the future, if something needs to be marked as xfail, the expected error type and the reason for it should be specified. For example:

# Don't know the exact error type or message, and there is no way to check the
# exact message here: raises=RuntimeError("the error message") doesn't work,
# since raises= only accepts the exception type.
@pytest.mark.xfail(raises=RuntimeError, reason="Expected to fail due to a bug in histogram merges, fixed in 2025.2.")
@pytest.mark.parametrize("dataframe,queue", get_dataframes_and_queues())
def test_sklearnex_import_et_classifier(dataframe, queue):
    ...
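(A note on raises= semantics: pytest reports the test as xfailed only when the raised exception is an instance of the declared type; a failure with any other exception is still reported as a regular failure, so the declared type documents the expected bug.)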

@icfaust it makes sense to close this and rethink the design.
