This can be a double-edged sword: it's important to remember that a passing test is not a guarantee of correctness. Tests are written by developers, and developers are fallible. Test cases may not exercise the behavior in precisely the same way as the code you're troubleshooting, and they may be missing entirely for the particular scenario you're operating under.
By providing a solid foundation of trustworthy assumptions, backed by empirical proof of their validity, tests can eliminate many possible points of failure while troubleshooting, allowing you to focus on what remains. You must still verify that you actually have test coverage for the situation you're looking at in order to have confidence in the test results. If you find missing coverage, you can add a test to fill the gap; it will either pass, eliminating another possible point of failure, or fail, indicating a probable source of the issue.
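As a minimal sketch of filling such a gap, suppose the code under suspicion includes a hypothetical rounding helper whose existing tests (not shown) only cover positive values. The helper and test names here are invented for illustration:

```python
import math

def round_half_up(value):
    # Hypothetical helper under suspicion: rounds halves toward
    # positive infinity, e.g. 2.5 -> 3, -2.5 -> -2.
    return math.floor(value + 0.5)

def test_round_half_up_negative():
    # New test filling the coverage gap for negative inputs.
    # If it passes, this code path is eliminated as a suspect;
    # if it fails, we've found a probable source of the issue.
    assert round_half_up(-2.5) == -2

test_round_half_up_negative()  # raises AssertionError on failure
```

Either outcome is useful information: a pass narrows the search, and a failure ends it.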
Just don't take unit test results as gospel: tests must be maintained like any other code, and like any other code, they can contain errors and oversights. Trust the results, but not the tests, and learn the difference between the two. The results will reliably tell you whether the test you've written passed or failed; it is the test, however, that executes the code and judges pass or fail. That execution may not cover everything you need, and that judgement may be incorrect, or may not check every aspect of the result.
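To illustrate an incomplete judgement, consider a hypothetical deduplication helper (the function and test below are invented for this sketch). The test passes, yet it checks only the length of the result, not its order, so the suite stays green while the behavior is wrong:

```python
def dedupe(items):
    # Bug: set() discards the original ordering of items.
    return list(set(items))

def test_dedupe_removes_duplicates():
    result = dedupe(["b", "a", "b"])
    assert len(result) == 2  # passes: duplicates were removed
    # Missing factor: the test never asserts that order is
    # preserved, e.g.  assert result == ["b", "a"]

test_dedupe_removes_duplicates()  # green, despite the ordering bug
```

A green result here truthfully reports that the test passed; it is the test's judgement, not the result, that failed to check every factor.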