Keep the message window around until all remaining threads complete. (#16944)

Improves the fix for #16933.
Improves upon PR #16934.

Summary of the issue:
PR #16934 ensured that NVDA's mutex is not released until all remaining non-daemon threads have been joined and completed; otherwise the NVDA process could stay alive because of lingering background threads, such as the braille display auto-detection worker thread.
However, the threads were joined after NVDA's message window had been destroyed, making it impossible for a new instance of NVDA to locate and kill the old one if it really was taking too long to exit.
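As a minimal, self-contained illustration of that behaviour (not NVDA code; the worker function and the sleep duration are made up), a single lingering non-daemon thread is enough to keep a Python process alive after the main thread finishes:

import threading
import time


def background_worker() -> None:
    # Simulates something like the braille auto-detection worker still running.
    time.sleep(5)
    print("worker finished")


worker = threading.Thread(target=background_worker, name="worker", daemon=False)
worker.start()
print("main thread done")
# Even without an explicit worker.join(), the interpreter waits for this
# non-daemon thread before the process can exit; NVDA joins such threads
# explicitly so it can log them and control when that wait happens.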

Description of user-facing changes
An old NVDA instance is more likely to be killed off if it takes too long to exit while a new copy of NVDA is trying to start.

Description of development approach
Move the joining of the non-daemon threads out of nvda.pyw and into the bottom of core.main, and ensure that destroying the message window is the very last action taken. The ordering at the end of core.main is now (see the sketch after this list):

1. Terminate all subsystems.
2. Join any remaining non-daemon threads.
3. Destroy the message window.
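A minimal, self-contained sketch of that ordering; terminate_subsystems and destroy_message_window are hypothetical stand-ins for NVDA's chain of _terminate(...) calls and messageWindow.destroy(), while the thread-joining loop mirrors the one added to core.main in the diff below:

import logging
import threading

log = logging.getLogger(__name__)


def terminate_subsystems() -> None:
    # Stand-in for the chain of _terminate(...) calls at the end of core.main.
    log.debug("subsystems terminated")


def destroy_message_window() -> None:
    # Stand-in for messageWindow.destroy(); kept as the very last action so a
    # new NVDA instance can still locate (and kill) this one if exit hangs.
    log.debug("message window destroyed")


def shutdown() -> None:
    # 1. Terminate all subsystems.
    terminate_subsystems()
    # 2. Join any remaining non-daemon threads before the mutex is released,
    #    otherwise the process may linger (see #16933).
    for thr in threading.enumerate():
        if not thr.daemon and thr is not threading.current_thread():
            log.info(f"Waiting on {thr}...")
            thr.join()
            log.info(f"Thread {thr.name} complete")
    # 3. Destroy the message window as the very last action.
    destroy_message_window()


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    shutdown()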
michaelDCurran authored Aug 2, 2024
1 parent fbb7437 commit 5ecc243
Showing 2 changed files with 13 additions and 15 deletions.
15 changes: 13 additions & 2 deletions source/core.py
@@ -1084,10 +1084,21 @@ def _doPostNvdaStartupAction():
pass
# We cannot terminate nvwave until after we perform nvwave.playWaveFile
_terminate(nvwave)
# #5189: Destroy the message window as late as possible
_terminate(NVDAHelper)
# Log and join any remaining non-daemon threads here,
# before releasing our mutex and exiting.
# In a perfect world there should be none.
# If we don't do this, the NVDA process may stay alive after the mutex is released,
# which would cause issues for rpc / nvdaHelper.
# See issue #16933.
for thr in threading.enumerate():
if not thr.daemon and thr is not threading.current_thread():
log.info(f"Waiting on {thr}...")
thr.join()
log.info(f"Thread {thr.name} complete")
# #5189: Destroy the message window as the very last action
# so new instances of NVDA can find this one even if it freezes during exit.
messageWindow.destroy()
_terminate(NVDAHelper)
log.debug("core done")


13 changes: 0 additions & 13 deletions source/nvda.pyw
@@ -13,7 +13,6 @@ It sets up logging, and then starts the core.
import logging
import sys
import os
import threading

from typing import IO

@@ -534,18 +533,6 @@ finally:
if globalVars.appArgs.changeScreenReaderFlag:
winUser.setSystemScreenReaderFlag(False)

# Log and join any remaining non-daemon threads here,
# before releasing our mutex and exiting.
# In a perfect world there should be none.
# If we don't do this, the NVDA process may stay alive after the mutex is released,
# which would cause issues for rpc / nvdaHelper.
# See issue #16933.
for thr in threading.enumerate():
if not thr.daemon and thr is not threading.current_thread():
log.info(f"Waiting on {thr}...")
thr.join()
log.info(f"Thread {thr.name} complete")

# From MS docs; "Multiple processes can have handles of the same mutex object"
# > Use the CloseHandle function to close the handle.
# > The system closes the handle automatically when the process terminates.
