
Fixing an issue

BeeWare Docs Tools tracks a list of known issues. Any of these issues are candidates to be worked on.

This list can be filtered in various ways. For example, you can filter by platform, so you can focus on issues that affect the platforms you're able to test on; or you can filter by issue type, such as documentation bugs. There's also a filter for good first issues - these are issues that have been identified as problems that have a known cause, and we believe the fix should be relatively straightforward (although we might be wrong in our analysis).

If an issue is more than 6 months old, it's entirely possible that the issue has been resolved, so the first step is to verify that you can reproduce the problem. Use the information provided in the bug report to try and reproduce the problem. If you can't reproduce the problem, report what you have found as a comment on the issue, and pick another issue.

If you can reproduce the problem - try to fix it! Work out what code is implementing the feature, and see if you can identify what isn't working correctly.

Even if you can't fix the problem, reporting anything you discover during the process as a comment on the issue is worthwhile. If you can find the source of the problem, but not the fix, that knowledge will often be enough for someone who knows more about a platform to solve the problem. If the issue doesn't already provide a good reproduction case (a small sample app that does nothing but reproduce the problem), providing one can be a huge help.

Contributing an issue fix

Set up a development environment

Contributing to BeeWare Docs Tools requires you to set up a development environment.

Prerequisites

You'll need to install the following prerequisites.

BeeWare Docs Tools requires Python 3.10+. You will also need a method for managing virtual environments (such as venv).

You can verify the version of Python that you have installed by running:

$ python3 --version

If you have more than one version of Python installed, you may need to replace python3 with a specific version number (e.g., python3.13).

We recommend avoiding recently released versions of Python (i.e., versions that have a ".0" or ".1" micro version number, e.g., 3.14.0). This is because the tools needed to support Python on macOS often lag behind new releases, and usually aren't available for recently released stable Python versions.

BeeWare Docs Tools requires Python 3.10+. You will also need a method for managing virtual environments (such as venv).

You can verify the version of Python that you have installed by running:

$ python3 --version

If you have more than one version of Python installed, you may need to replace python3 with a specific version number (e.g., python3.13).

We recommend avoiding recently released versions of Python (i.e., versions that have a ".0" or ".1" micro version number, e.g., 3.14.0). This is because the tools needed to support Python on Linux often lag behind new releases, and usually aren't available for recently released stable Python versions.

BeeWare Docs Tools requires Python 3.10+. You will also need a method for managing virtual environments (such as venv).

You can verify the version of Python that you have installed by running:

C:\...>py -3 --version

If you have more than one version of Python installed, you may need to replace the -3 with a specific version number (e.g., -3.13).

We recommend avoiding recently released versions of Python (i.e., versions that have a ".0" or ".1" micro version number, e.g., 3.14.0). This is because the tools needed to support Python on Windows often lag behind new releases, and usually aren't available for recently released stable Python versions.

Set up your development environment

The recommended way of setting up your development environment for BeeWare Docs Tools is to use a virtual environment, and then install the development version of BeeWare Docs Tools and its dependencies.

Clone the BeeWare Docs Tools repository

Go to the BeeWare Docs Tools page on GitHub, and, if you haven't already, fork the repository into your own account. Then, click on the "<> Code" button on your fork. If you have the GitHub desktop application installed on your computer, you can select "Open with GitHub Desktop"; otherwise, copy the HTTPS URL provided, and use it to clone the repository to your computer using the command line:

Fork the BeeWare Docs Tools repository, and then:

$ git clone https://github.com/<your username>/beeware-docs-tools.git

(substituting your GitHub username)

Fork the BeeWare Docs Tools repository, and then:

$ git clone https://github.com/<your username>/beeware-docs-tools.git

(substituting your GitHub username)

Fork the BeeWare Docs Tools repository, and then:

C:\...>git clone https://github.com/<your username>/beeware-docs-tools.git

(substituting your GitHub username)

Create a virtual environment

To set up a virtual environment and upgrade pip, run:

$ cd beeware-docs-tools
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv) $ python -m pip install -U pip
$ cd beeware-docs-tools
$ python3 -m venv .venv
$ source .venv/bin/activate
(.venv) $ python -m pip install -U pip
C:\...>cd beeware-docs-tools
C:\...>py -3 -m venv .venv
C:\...>.venv\Scripts\activate
(.venv) $ python -m pip install -U pip

Your prompt should now have a (.venv) prefix in front of it.

Install BeeWare Docs Tools

Now that you have the source code, you can do an editable install of BeeWare Docs Tools into your development environment. Run the following command:

(.venv) $ python -m pip install -U -e . --group dev
(.venv) $ python -m pip install -U -e . --group dev
(.venv) C:\...>python -m pip install -U -e . --group dev

Enable pre-commit

BeeWare Docs Tools uses a tool called pre-commit to identify simple issues and standardize code formatting. It does this by installing a git hook that automatically runs a series of code linters prior to finalizing any git commit. To enable pre-commit, run:

(.venv) $ pre-commit install
pre-commit installed at .git/hooks/pre-commit
(.venv) $ pre-commit install
pre-commit installed at .git/hooks/pre-commit
(.venv) C:\...>pre-commit install
pre-commit installed at .git/hooks/pre-commit

Now you are ready to start hacking on BeeWare Docs Tools!

Work from a branch

Before you start working on your change, make sure you've created a branch. By default, when you clone your repository fork, you'll be checked out on your main branch. This is a direct copy of BeeWare Docs Tools's main branch.

While you can submit a pull request from your main branch, it's preferable if you don't do this. If you submit a pull request that is almost right, the core team member who reviews your pull request may be able to make the necessary changes, rather than giving feedback asking for a minor change. However, if you submit your pull request from your main branch, reviewers are prevented from making modifications.

Working off your main branch also makes it difficult for you after you complete your first pull request. If you want to work on a second pull request, you will need to have a "clean" copy of the upstream project's main branch on which to base your second contribution; if you've made your first contribution from your main branch, you no longer have that clean version available.

Instead, you should make your changes on a feature branch. A feature branch has a simple name to identify the change that you've made. For example, if you're fixing a bug that causes build issues on Windows 11, you might create a feature branch fix-win11-build. If your bug relates to a specific issue that has been reported, it's also common to reference that issue number in the branch name (e.g., fix-1234).

To create a fix-win11-build feature branch, run:

(.venv) $ git switch -c fix-win11-build
(.venv) $ git switch -c fix-win11-build
(.venv) C:\...>git switch -c fix-win11-build

Reproduce the issue

You can't fix a problem if you don't have the problem in the first place. Therefore, reproducing the issue is a prerequisite to fixing it. In software, problems are commonly referred to as "bugs", and issues are often called "bug reports".

Someone has provided a bug report. You need to validate that the steps the reporter describes are resulting in the bug being reported. Can you reproduce the same result by doing exactly what was described in the report? If you can't, you need to figure out why.

Bugs in code

In an ideal situation, you will have the same setup as the person who reported the bug, you will follow the steps, and you will be able to reproduce the bug as described. In many cases, though, it won't be so straightforward. Many bug reports include only vague explanations, and a vague set of conditions. The problem is that many bugs vary based on the set of conditions involved, including how they're interacted with, various preconditions, operating system, operating system version, CPU architecture, or whether the user's machine is old and slow or new and fast. The more information we have about the situation surrounding the bug, the better. Try and reproduce the set of conditions that the reporter has provided. If you're unable to do so, your next step may need to be requesting more information from the person who reported the bug.

The best way to reproduce a bug is with the smallest possible example that still exhibits the issue. Most of the time reporters will not provide a minimal example; if they provide any example at all, it will be copied directly from their "real world" application. Your aim will be to reduce the report down to the simplest possible form that exhibits the issue - the best reproduction case is the smallest possible program. This reduction is itself helpful because it narrows down where the actual problem is. Anyone can then take the minimal example, run it, and observe the bug that is described.
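
As a sketch of what this looks like in practice, a good reproduction case is a script that contains nothing but the failing call. The function below is a hypothetical stand-in, not part of the BeeWare Docs Tools API; in a real report you would import and call the project code that exhibits the bug.

# repro.py - the shape of a minimal reproduction case: no application
# framework, no unrelated setup, just the failing call.

def render_title(title: str) -> str:
    # Hypothetical stand-in for the buggy behavior being reported.
    return title.split(":")[1].strip()

# Expected: "My App" is printed; actual: an IndexError is raised because
# the title contains no colon.
print(render_title("My App"))

A script like this can be attached to the issue, and anyone can run it with python repro.py to see the failure for themselves.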

Bugs in documentation

Bugs in documentation can manifest in different ways. Some are formatting problems that result in rendering issues. Sometimes it's not even a bug; the person may have misread the documentation, or made a genuine mistake. That doesn't necessarily mean there isn't an issue with the documentation: the content may be unclear or imprecise, leaving room for confusion or misinterpretation. It's also possible that a concept that should be discussed isn't, because it is completely undocumented.

When a bug is filed for a documentation issue, you'll want to verify that the issue reported actually still exists. In the case of rendering issues, you'll need to build the documentation to see if you can reproduce the issue. Content issues are a matter of reading to verify that no one has submitted an update.

Update the issue

The final step in the triage process is to document your findings by leaving a comment on the issue.

If you're able to reproduce the issue exactly as described, that's all you need to say. Leave a comment saying that you've confirmed that you're seeing the same problem, in the exact way the original reporter describes.

If you're able to provide any additional context, then include details of that context. This might include being able to reproduce the problem on a different operating system, or with a different version of some of the software involved, or anything else that varies from the original report.

If the original report was missing details that you needed to reproduce the problem, include those details. This might include operating system or version details that the original report didn't include, more complete logs or stack traces, or clearer instructions on the exact sequence of operations needed to reproduce the problem. If you've developed a simpler way to reproduce the problem (or the original reporter didn't provide a reproduction case), you can include details of that reproduction methodology.

If you can't reproduce the issue, then you should also leave a comment detailing what you tried. Knowing where a problem doesn't exist is almost as important as knowing where it does exist, because that helps to narrow down the possible cause. If you have any theories about why you can't reproduce the problem - for example, if you think it's a usage error, or that the problem has been resolved by a recent operating system update - include that speculation as part of your comment.

Lastly, you can provide any recommendations you may have to the core team. If you think the original report is in error, suggest that the issue should be closed; if you have a theory about the cause of the issue, you can suggest that as well. Your comments will help the core team work out how to progress the issue to the next step.

If fixing the issue requires changes to code:

Write, run, and test code

Fixing a bug or implementing a feature will require you to write some new code.

We have a code style guide that outlines our guidelines for writing code for BeeWare.

Test-driven development

A good way to ensure your code is going to do what you expect it to, is to first write a test case to test for it. This test case should fail initially, as the code it is testing for is not yet present. You can then write the code changes needed to make the test pass, and know that what you've written is solving the problem you are expecting it to.
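
As a minimal sketch of that loop using pytest: the slugify function here is hypothetical and is defined inline only so the example is self-contained; in a real contribution the test would import the project code being fixed.

# test_tdd_example.py - a self-contained sketch of the test-first workflow.

def slugify(title: str) -> str:
    # Hypothetical stand-in for the code being fixed. The current (buggy)
    # behavior replaces spaces, but does not lower-case the result.
    return title.replace(" ", "-")


def test_slugify_lowercases_the_title():
    # Written first: this test fails against the buggy implementation above,
    # and passes once the fix (adding .lower()) is applied.
    assert slugify("My App") == "my-app"

Running pytest test_tdd_example.py shows the failure first; once the code change is made, the same test passing confirms that the change solves the problem.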

Run your code

Once your code is written, you need to ensure it runs. You'll need to manually run your code to verify it is doing what you expect. If you haven't already, you'll want to write a test case for your changes; as mentioned above, this test should fail if your code is commented out or not present.

You'll add your test case to the test suite, so it can be run alongside the other tests. The next step is to run the test suite.

Running tests and coverage

BeeWare Docs Tools uses tox to manage the testing process and pytest for its own test suite.

The default tox command includes running:

  • pre-commit hooks
  • towncrier release note check
  • documentation linting
  • test suite for available Python versions
  • code coverage reporting

This is essentially what is run by CI when you submit a pull request.

To run the full test suite, run:

(.venv) $ tox
(.venv) $ tox
(.venv) C:\...>tox

The full test suite can take a while to run. You can speed it up considerably by running tox in parallel with tox p (or tox run-parallel). When you run the test suite in parallel, you'll get less feedback on its progress as it runs, but you'll still get a summary of any problems found at the end of the run.

You should get some output indicating that tests have been run. You may see SKIPPED tests, but you shouldn't ever get any FAIL or ERROR test results. We run our full test suite before merging every patch; if that process discovers any problems, we don't merge the patch. If you do find a test error or failure, either there's something odd in your test environment, or you've found an edge case that we haven't seen before - either way, let us know!

As with the full test suite, the parallel run should also report 100% test coverage.

Running test variations

Run tests for multiple versions of Python

By default, many of the tox commands will attempt to run the test suite multiple times, once for each Python version supported by BeeWare Docs Tools. To do this, though, each of the Python versions must be installed on your machine and available to tox's Python discovery process. In general, if a version of Python is available via PATH, then tox should be able to find and use it.

Run only the test suite

If you're rapidly iterating on a new feature, you don't need to run the full test suite; you can run only the unit tests. To do this, run:

(.venv) $ tox -e py
(.venv) $ tox -e py
(.venv) C:\...>tox -e py

Run a subset of tests

By default, tox will run all tests in the unit test suite. When you're developing your new test, it may be helpful to run just that one test. To do this, you can pass any pytest specifier as an argument to tox. These test paths are relative to the root of the repository. For example, to run only the tests in a single file, run:

(.venv) $ tox -e py -- tests/path_to_test_file/test_some_test.py
(.venv) $ tox -e py -- tests/path_to_test_file/test_some_test.py
(.venv) C:\...>tox -e py -- tests/path_to_test_file/test_some_test.py

You'll still get a coverage report when running a part of the test suite - but the coverage results will only report the lines of code that were executed by the specific tests you ran.

Run the test suite for a specific Python version

By default, tox -e py will run using whatever interpreter resolves as python on your machine. If you have multiple Python versions installed and want to test against a particular version, you can specify it explicitly. For example, to run the test suite on Python 3.10, run:

(.venv) $ tox -e py310
(.venv) $ tox -e py310
(.venv) C:\...>tox -e py310

A subset of tests can be run by adding -- and a test specification to the command line.

Run the test suite without coverage (fast)

By default, tox will run the pytest suite in single-threaded mode. You can speed up the execution of the test suite by running it in parallel. This mode does not produce coverage files, due to complexities in capturing coverage within spawned processes. To run a single Python version in "fast" mode, run:

(.venv) $ tox -e py-fast
(.venv) $ tox -e py-fast
(.venv) C:\...>tox -e py-fast

A subset of tests can be run by adding -- and a test specification to the command line; a specific Python version can be used by adding the version to the test target (e.g., py310-fast to run fast on Python 3.10).

Code coverage

BeeWare Docs Tools maintains 100% branch coverage in its codebase. When you add or modify code in the project, you must add test code to ensure coverage of any changes you make.

However, BeeWare Docs Tools targets multiple platforms, as well as multiple versions of Python, so full coverage cannot be verified on a single platform and Python version. To accommodate this, several conditional coverage rules are defined in the tool.coverage.coverage_conditional_plugin.rules section of pyproject.toml (e.g., no-cover-if-is-windows can be used to flag a block of code that won't be executed when running the test suite on Windows). These rules are used to identify sections of code that are only covered on particular platforms or Python versions.
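
In the source code itself, these rules are applied as coverage pragma comments. The following is a minimal sketch using the no-cover-if-is-windows rule mentioned above; the function is hypothetical, and the full set of available rule names is defined in pyproject.toml.

import os

def ensure_executable(path: str) -> None:  # pragma: no-cover-if-is-windows
    # Hypothetical helper that is only ever called on POSIX platforms.
    # When the test suite runs on Windows this function is never executed,
    # so the rule excludes it from the coverage requirement there; on other
    # platforms it must still be exercised by a test.
    os.chmod(path, 0o755)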

Of note, coverage reporting across Python versions can be a bit quirky. For instance, if coverage files are produced using one version of Python but coverage reporting is done on another, the report may include false positives for missed branches. Because of this, coverage reporting should always use the oldest version of Python used to produce the coverage files.

Understanding coverage results

At the end of the coverage test output there should be a report of the coverage data that was gathered:

Name    Stmts   Miss Branch BrPart   Cover   Missing
----------------------------------------------------
TOTAL    7540      0   1040      0  100.0%

This tells us that the test suite has executed every possible branching path in the code. This isn't a 100% guarantee that there are no bugs, but it does mean that we're exercising every line of code in the codebase.

If you make changes to the codebase, it's possible you'll introduce a gap in this coverage. When this happens, the coverage report will tell you which lines aren't being executed. For example, let's say we made a change to some/interesting_file.py, adding some new logic. The coverage report might look something like:

Name                                 Stmts   Miss Branch BrPart  Cover   Missing
--------------------------------------------------------------------------------
src/some/interesting_file.py           111      1     26      0  98.1%   170, 302-307, 320->335
--------------------------------------------------------------------------------
TOTAL                                 7540      1   1726      0  99.9%

This tells us that line 170, lines 302-307, and a branch jumping from line 320 to line 335, are not being executed by the test suite. You'll need to add new tests (or modify an existing test) to restore this coverage.
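
For instance (with hypothetical names throughout), if the new logic added an early return for empty input, restoring coverage means making sure the test suite exercises both the new path and the pre-existing one. A real test would import the changed function from the project rather than defining a stand-in inline.

# tests/test_interesting_file.py - hypothetical tests restoring branch coverage.
import pytest

def describe(items: list) -> str:
    # Stand-in for the changed function in some/interesting_file.py.
    if not items:  # the newly added branch
        return "nothing"
    return ", ".join(items)


@pytest.mark.parametrize(
    "items, expected",
    [
        ([], "nothing"),        # exercises the new early-return branch
        (["a", "b"], "a, b"),   # exercises the pre-existing branch
    ],
)
def test_describe_covers_both_branches(items, expected):
    assert describe(items) == expected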

Coverage report for host platform and Python version

You can generate a coverage report for your platform and version of Python. For example, to run the test suite and generate a coverage report on Python 3.10, run:

(.venv) $ tox -m test310
(.venv) $ tox -m test310
(.venv) C:\...>tox -m test310

Coverage report for host platform

If all supported versions of Python are available to tox, then coverage for the host platform can be reported by running:

(.venv) $ tox p -m test-platform
(.venv) $ tox p -m test-platform
(.venv) C:\...>tox p -m test-platform

Coverage reporting in HTML

An HTML coverage report can be generated by appending -html to any of the coverage tox environment names, for instance:

(.venv) $ tox -e coverage-platform-html
(.venv) $ tox -e coverage-platform-html
(.venv) C:\...>tox -e coverage-platform-html

It's not just about writing tests!

Although we ensure that we test all of our code, the task isn't just about maintaining that level of testing. Part of the task is to audit the code as you go. You could write a comprehensive set of tests for a concrete life jacket... but a concrete life jacket would still be useless for the purpose for which it was intended!

As you develop tests, you should be checking that the codebase is internally consistent as well. If you notice any method names that aren't internally consistent (e.g., something called on_select in one module, but called on_selected in another), or places where data isn't being handled consistently, flag it and bring it to our attention by raising a ticket. Or, if you're confident that you know what needs to be done, create a pull request that fixes the problem you've found.

If fixing the issue requires changes to documentation:

Build documentation

Before making any changes to BeeWare Docs Tools's documentation, it is helpful to confirm that you can build the existing documentation.

You must have a Python 3.13 interpreter installed and available on your path (i.e., python3.13 must start a Python 3.13 interpreter).

BeeWare Docs Tools uses tox for building documentation. The following tox commands must be run from the same location as the tox.ini file, which is in the root directory of the project.

Live documentation preview

To support rapid editing of documentation, BeeWare Docs Tools has a "live preview" mode.

The live preview will build with warnings!

The live server is available for iterating on your documentation updates. While you're in the process of updating things, you may introduce a markup issue. Issues considered a WARNING will cause a standard build to fail; however, the live server is set up to report warnings in the console output while continuing to build. This allows you to iterate without needing to restart the live preview.

A WARNING is different from an ERROR. If you introduce an issue that is considered an ERROR, the live server will fail, and require a restart. It will not start up again until the ERROR is resolved.

To start the live server:

(venv) $ tox -e docs-live
(venv) $ tox -e docs-live
(venv) C:\...>tox -e docs-live

This will build the documentation, start a web server to serve the documentation, and watch the file system for any changes to the documentation source.

Once the server is started, you'll see something like the following in the console output:

INFO    -  [11:18:51] Serving on http://127.0.0.1:8000/

Open a browser, and navigate to the URL provided. Now you can begin iterating on the documentation. If a change is detected, the documentation will be rebuilt, and any browser viewing the modified page will be automatically refreshed.

docs-live is an initial step

Running docs-live to work with the live server is intended for initial iteration. You should always run a local build before submitting a pull request.

Local build

Once you're done iterating, you'll need to do a local build of the documentation. This build process is designed to fail if there are any markup problems. This allows you to catch anything you might have missed with the live server.

Generating a local build

To generate a local build:

(venv) $ tox -e docs
(venv) $ tox -e docs
(venv) C:\...>tox -e docs

The output of this build will be in the _build directory in the root of the project.

Generating a local translated build

BeeWare Docs Tools's documentation is translated into multiple languages. Updates to the English documentation have the potential to lead to issues in the other language builds. It is important to verify that all builds are working before submitting a pull request.

To generate a build of all available translations:

(venv) $ tox -e docs-all
(venv) $ tox -e docs-all
(venv) C:\...>tox -e docs-all

The output of each language build will be in the associated _build/html/<languagecode> directory, where <languagecode> is the two- or five-character language code associated with the specific language (e.g. fr for French, it for Italian, etc.).

If you find an issue with a single build, you can run that individual build separately by running tox -e docs-<languagecode>. For example, to build only the French documentation, run:

(venv) $ tox -e docs-fr
(venv) $ tox -e docs-fr
(venv) C:\...>tox -e docs-fr

The output of a single-language build will be in the _build directory.

Documentation linting

The build process will identify Markdown problems, but BeeWare Docs Tools performs some additional checks for style and formatting, known as "linting". To run the lint checks:

(venv) $ tox -e docs-lint
(venv) $ tox -e docs-lint
(venv) C:\...>tox -e docs-lint

This will validate the documentation does not contain:

  • dead hyperlinks
  • misspelled words

If a valid spelling of a word is identified as misspelled, then add the word to the list in docs/spelling_wordlist. This will add the word to the spellchecker's dictionary. When adding to this list, remember:

  • We prefer US spelling, with some liberties for programming-specific colloquialisms (e.g., "apps") and verbing of nouns (e.g., "scrollable")
  • Any reference to a product name should use the product's preferred capitalization (e.g., "macOS", "GTK", "pytest", "Pygame", "PyScript").
  • If a term is being used "as code", then it should be quoted as a literal (like this) rather than being added to the dictionary.

Write documentation

These are the steps to follow to write your documentation contribution to BeeWare Docs Tools.

Updating existing documentation

If you're editing the existing docs, you'll need to locate the file in the docs/en directory. The file structure follows the page structure, so you can locate the file using the documentation URL.

Adding new documentation

If you're adding a new document, there are a few more steps involved.

You'll need to create the document in the appropriate location within the docs/en directory. For discussion, we'll say you're adding a new document with the filename new_doc.md.

Then, you'll need to update the docs/en/SUMMARY.md file to include your new file. SUMMARY.md broadly reflects the docs/en directory structure, but, more importantly, it directly determines the structure of the left sidebar. Locate the section where you intend to include new_doc.md; if that section uses a wildcard path, you do not need to change anything in SUMMARY.md. For example:

- ./path/to/directory/*

If the section where you intend to include new_doc.md is a list of individual Markdown links, you'll need to add an explicit link to yours. For example:

- [My new document](new_doc.md)

Writing your documentation

You can now open the desired file in your editor, and begin writing.

We have a documentation style guide that outlines our guidelines for writing documentation for BeeWare.

When you're ready to submit your contribution:

Add a change note

BeeWare Docs Tools uses towncrier to assist in building the release notes for each release. When you submit a pull request, it must include a change note - this change note will become the entry in the release notes describing the change that has been made.

Every pull request must include at least one file in the changes/ directory that provides a short description of the change implemented by the pull request. The change note should be in Markdown format, in a file with a name of the format <id>.<fragment type>.md. If the change you are proposing will fix a bug or implement a feature for which there is an existing issue number, the ID will be the number of that ticket. If the change has no corresponding issue, the PR number can be used as the ID. You won't know this PR number until you create the pull request, so the first CI run will fail the towncrier check; add the change note, push an update to your PR, and CI should then pass.

There are five fragment types:

  • feature: The PR adds a new behavior or capability that wasn't previously possible (e.g., adding support for a new packaging format, or a new feature in an existing packaging format);
  • bugfix: The PR fixes a bug in the existing implementation;
  • doc: The PR is a significant improvement to documentation;
  • removal: The PR represents a backwards incompatible change in the BeeWare Docs Tools API; or
  • misc: A minor or administrative change (e.g., fixing a typo, a minor language clarification, or updating a dependency version) that doesn't need to be announced in the release notes.

This description in the change note should be a high level "marketing" summary of the change from the perspective of the user, not a deep technical description or implementation detail. It is distinct from a commit message - a commit message describes what has been done so that future developers can follow the reasoning for a change; the change note is a description for the benefit of users, who may not have knowledge of internals.

For example, if you fix a bug related to project naming, the commit message might read:

Apply stronger regular expression check to disallow project names that begin with digits.

The corresponding change note would read something like:

Project names can no longer begin with a number.

Some PRs will introduce multiple features and fix multiple bugs, or introduce multiple backwards incompatible changes. In that case, the PR may have multiple change note files. If you need to associate two fragment types with the same ID, you can append a numerical suffix. For example, if PR 789 added a feature described by ticket 123, closed a bug described by ticket 234, and also made two backwards incompatible changes, you might have 4 change note files:

  • 123.feature.md
  • 234.bugfix.md
  • 789.removal.1.md
  • 789.removal.2.md

For more information about towncrier and fragment types see News Fragments. You can also see existing examples of news fragments in the changes directory of the BeeWare Docs Tools repository. If this folder is empty, it's likely because BeeWare Docs Tools has recently published a new release; with each release, the change note files are combined into the release notes and then deleted. You can look at the release notes to see the style of entry that is required, or look at recently merged PRs to see how to format your change notes.

Submit a pull request

Now that you've committed all your changes, you're ready to submit a pull request. To ensure you have a smooth review process, there are a number of steps you should take.

Working with pre-commit

When you commit any change, pre-commit will run automatically. If there are any issues found with the commit, this will cause your commit to fail. Where possible, pre-commit will make the changes needed to correct the problems it has found. In the following example, a code formatting issue was found by the ruff format hook:

(.venv) $ git add some/interesting_file.py
(.venv) $ git commit -m "Minor change"
check toml...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check docstring is first.................................................Passed
fix end of files.........................................................Passed
trim trailing whitespace.................................................Passed
ruff format..............................................................Failed
- hook id: ruff-format
- files were modified by this hook

1 file reformatted, 488 files left unchanged

ruff check...............................................................Passed
codespell................................................................Passed
(.venv) $ git add some/interesting_file.py
(.venv) $ git commit -m "Minor change"
check toml...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check docstring is first.................................................Passed
fix end of files.........................................................Passed
trim trailing whitespace.................................................Passed
ruff format..............................................................Failed
- hook id: ruff-format
- files were modified by this hook

1 file reformatted, 488 files left unchanged

ruff check...............................................................Passed
codespell................................................................Passed
(.venv) C:\...>git add some/interesting_file.py
(.venv) C:\...>git commit -m "Minor change"
check toml...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check docstring is first.................................................Passed
fix end of files.........................................................Passed
trim trailing whitespace.................................................Passed
ruff format..............................................................Failed
- hook id: ruff-format
- files were modified by this hook

1 file reformatted, 488 files left unchanged

ruff check...............................................................Passed
codespell................................................................Passed

In this case, ruff automatically fixed the problem; so you can then re-add any files that were modified as a result of the pre-commit checks, and re-commit the change. However, some checks will require you to make manual modifications. Once you've made those changes, re-add any modified files, and re-commit.

(.venv) $ git add some/interesting_file.py
(.venv) $ git commit -m "Minor change"
check toml...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check docstring is first.................................................Passed
fix end of files.........................................................Passed
trim trailing whitespace.................................................Passed
ruff format..............................................................Passed
ruff check...............................................................Passed
codespell................................................................Passed
[bugfix e3e0f73] Minor change
1 file changed, 4 insertions(+), 2 deletions(-)
(.venv) $ git add some/interesting_file.py
(.venv) $ git commit -m "Minor change"
check toml...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check docstring is first.................................................Passed
fix end of files.........................................................Passed
trim trailing whitespace.................................................Passed
ruff format..............................................................Passed
ruff check...............................................................Passed
codespell................................................................Passed
[bugfix e3e0f73] Minor change
1 file changed, 4 insertions(+), 2 deletions(-)
(.venv) C:\...>git add some\interesting_file.py
(.venv) C:\...>git commit -m "Minor change"
check toml...............................................................Passed
check yaml...............................................................Passed
check for case conflicts.................................................Passed
check docstring is first.................................................Passed
fix end of files.........................................................Passed
trim trailing whitespace.................................................Passed
ruff format..............................................................Passed
ruff check...............................................................Passed
codespell................................................................Passed
[bugfix e3e0f73] Minor change
1 file changed, 4 insertions(+), 2 deletions(-)

Once everything passes, you'll see a message indicating the commit has been finalized, and your git log will show your commit as the most recent addition. You're now ready to push to GitHub.

Push your changes to GitHub and create your pull request

The first time you push to GitHub, you'll be provided a URL that takes you directly to the GitHub page to create a new pull request. Follow the URL and create your pull request.

The following shows an example of what to expect on push, with the URL highlighted.

(.venv) $ git push
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 24 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 689 bytes | 689.00 KiB/s, done.
Total 8 (delta 4), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
remote:
remote: Create a pull request for 'fix-win11-build' on GitHub by visiting:
remote:      https://github.com/<your GitHub username>/beeware-docs-tools/pull/new/fix-win11-build
remote:
To https://github.com/<your GitHub username>/beeware-docs-tools.git
 * [new branch]      fix-win11-build -> fix-win11-build
(.venv) $ git push
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 24 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 689 bytes | 689.00 KiB/s, done.
Total 8 (delta 4), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
remote:
remote: Create a pull request for 'fix-win11-build' on GitHub by visiting:
remote:      https://github.com/<your GitHub username>/beeware-docs-tools/pull/new/fix-win11-build
remote:
To https://github.com/<your GitHub username>/beeware-docs-tools.git
 * [new branch]      fix-win11-build -> fix-win11-build
(.venv) C:\...>git push
Enumerating objects: 15, done.
Counting objects: 100% (15/15), done.
Delta compression using up to 24 threads
Compressing objects: 100% (6/6), done.
Writing objects: 100% (8/8), 689 bytes | 689.00 KiB/s, done.
Total 8 (delta 4), reused 0 (delta 0), pack-reused 0 (from 0)
remote: Resolving deltas: 100% (4/4), completed with 4 local objects.
remote:
remote: Create a pull request for 'fix-win11-build' on GitHub by visiting:
remote:      https://github.com/<your GitHub username>/beeware-docs-tools/pull/new/fix-win11-build
remote:
To https://github.com/<your GitHub username>/beeware-docs-tools.git
 * [new branch]      fix-win11-build -> fix-win11-build

If you've previously pushed the current branch to GitHub, you won't receive the URL again. However, there are other ways to get to the PR creation URL:

  • Navigate to the upstream repository, click on "Pull Requests" followed by "New pull request", and choose the branch from which you want to submit your pull request.
  • If you pushed recently, navigate to the upstream repository, locate the banner above the list of files that indicates the repo has "had recent pushes", and click the "Compare & pull request" button.
  • Use the GitHub CLI gh pr create command, and fill out the prompts.
  • Use the GitHub CLI gh pr create --web command to open a web browser to the PR creation page.

Any of these options will enable you to create your new pull request.

The GitHub CLI: gh

GitHub provides the GitHub CLI, which gives you access to many of the features of GitHub from your terminal, through the gh command. The GitHub CLI documentation covers all the features.

Pull request content

A pull request title must be informative, clear, and concise. Try to keep it short if possible, but longer titles are acceptable, if needed. A good PR title should give a person without any context a reasonably solid idea of what bug or feature is implemented by your PR.

The PR description must clearly reflect the changes in the PR. A person without any context should be able to read your description, and gain a relatively complete understanding of why the change is being made. Avoid jokes, idioms, colloquialisms, and unnecessary formatting, such as using all caps or excessive punctuation; this is meant to be a straightforward explanation of what is happening in your PR, and avoiding those things makes the description more accessible to others.

If there are any reproduction cases, or any testing regimen you used, that are not already part of the changes present in the PR, they should be explained and included in the PR. The explanation should include how to run them, and what to do to reproduce the desired outcome.

If your pull request will resolve issue #1234, you should include the text Fixes #1234 in your pull request description. This will cause the issue to be automatically closed when the pull request is merged. You can refer to other discussions, issues or pull requests using the same #1234 syntax. You can refer to an issue on a different repository by prefixing the number with the repository name - for example, python/cpython#1234 would refer to issue 1234 on the CPython repository.

Continuous integration

Continuous integration, or CI, is the process of running automated checks on your pull request. This can include simple checks like ensuring code is correctly formatted; but it also includes running the test suite, and building documentation.

There are any number of changes that can result in CI failures. Broadly speaking, we won't review a PR that isn't passing CI. If you create a pull request and CI fails, we won't begin your review until it is passing. If your changes result in a failure, it is your responsibility to look into the reason, and resolve the issue.

When CI fails, the failure links will show up at the bottom of the PR page, under the heading "Some checks were not successful". You'll see a list of failed checks, which will show up at the top of the list of all checks if there are passing checks as well. If you click on a failure link, it will take you to the log. The log often provides all the information you need to figure out what caused the failure. Read through the log and try to figure out why the failure is occurring, and then do what's necessary to resolve it.

Occasionally, a CI check will fail for reasons that are unrelated to your changes. This could be due to an issue on the machine that runs the CI check; or because a CI check is unstable. If you see a failure, and you're fairly certain it's unrelated to your changes, add a comment to your PR to that effect, and we will look into it.

To trigger a new CI run, you need to push new changes to your branch.

If you find yourself in a situation where you need help getting CI to pass, leave a comment on the PR letting us know and we'll do what we can to help.

The pre-commit and towncrier checks

If either the pre-commit or towncrier checks fail, it will block most of the rest of the CI checks from running. You'll need to resolve the applicable issues before the full set of checks will run.

We have limited CI resources. It is important to understand that every time you push to your branch, CI will start. If you're going to make a number of changes, it's better to make those changes locally and push them all at once; CI will only run against the most recent commit in the batch, minimizing the load on our CI system.

The process of submitting your PR is not done until it's passing CI, or you can provide an explanation for why it's not.