Instead of testing everything, it can be more efficient not to write tests for some code at all.
A sensible first step is to test only the public interface. Then, only when a bug turns up that is easier to track down with test code than with the debugger alone, write tests for the affected parts of the protected and private interfaces.
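As a minimal sketch of this idea, assume a hypothetical Stack class (not from the source) whose private helper is never tested by name; the test exercises only the public methods, and the helper is covered indirectly:

```python
class Stack:
    """Toy stack whose internal capacity handling is a private detail."""

    def __init__(self):
        self._items = []

    def push(self, item):           # public interface
        self._ensure_capacity()
        self._items.append(item)

    def pop(self):                  # public interface
        return self._items.pop()

    def _ensure_capacity(self):     # private detail: no dedicated test
        pass                        # (real code might resize a buffer here)


def test_public_interface():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.pop() == 2             # the private helper ran, untested by name
    assert s.pop() == 1
```

A dedicated test for `_ensure_capacity` would be added only if a bug in it proved hard to chase with the debugger alone.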
An even more lightweight solution is to test only the typical “use cases” of a class, not every public method. Tests are then structured one per use case rather than one per public method of the tested class, which improves their coherence. Normally, the use cases are the most abstract functionality a class provides, so exercising all of them also calls the other methods (private methods and public methods at lower levels of abstraction). Errors in those other methods will therefore still surface, and you write an additional test for one of them only when you need it for debugging such an error. Place these tests above the use-case tests, so that the order of failing is the order of meaningful correction: correcting tests at low abstraction levels first may fix the higher-level failures automatically.
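To illustrate, here is a sketch built around a hypothetical ShoppingCart class (all names are assumptions, not from the source). The use-case test drives the most abstract method, `total()`, which internally calls the lower-level `_subtotal()` and `_tax()`; a narrower test for `_subtotal()` is only added when debugging demands it, and it is placed above the use-case test so it fails (and gets fixed) first:

```python
class ShoppingCart:
    """Prices are kept in integer cents to avoid float rounding issues."""

    def __init__(self):
        self._prices = []

    def add(self, price_cents):
        self._prices.append(price_cents)

    def total(self):                       # most abstract use case
        return self._subtotal() + self._tax()

    def _subtotal(self):                   # lower abstraction level
        return sum(self._prices)

    def _tax(self):                        # 10% tax, integer math
        return self._subtotal() // 10


# Added only when a failing use-case test was hard to debug; placed
# ABOVE the use-case test so low-level failures are reported first.
def test_subtotal():
    c = ShoppingCart()
    c.add(1000)
    assert c._subtotal() == 1000


def test_use_case_checkout():
    c = ShoppingCart()
    c.add(1000)
    c.add(2000)
    assert c.total() == 3300               # 3000 subtotal + 300 tax
```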
The above idea of testing use cases can even be extended to packages and whole libraries: test only the typical mode of application of a package or library, and write tests for lower-level code only when errors occur. There is, however, a limit to this mode of testing: whenever singling out the broken part is, or becomes, on average more time-consuming or frustrating than writing tests for the code one abstraction level lower, write those tests. The reasoning is the same as for testing in general: you write tests before using code in a real-world application because finding bugs through tests is less time-consuming and less frustrating than finding them in the application itself, since the testing environment is reduced in distracting complexity as much as possible.
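At the library level, this might look like the following sketch (the `textstats` package name and its functions are invented for illustration): a single test drives the library's typical entry point, and the module-level helpers are covered only indirectly until isolating a failure there gets too expensive.

```python
# --- hypothetical library code (would live in textstats/__init__.py) ---

def _tokenize(text):
    """Low-level helper: no dedicated test until one is needed."""
    return text.split()


def _count(tokens):
    """Another helper, also covered only indirectly."""
    return len(tokens)


def word_count(text):
    """The library's typical entry point: the one thing users call."""
    return _count(_tokenize(text))


# --- the single library-level test: its typical mode of application ---

def test_typical_usage():
    assert word_count("to be or not to be") == 6
```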
For structuring the tests themselves, it seems a good idea to use an array of test data ordered from simple to advanced, hard cases. When a test then fails, one knows that the simpler, underlying basic functionality up to that point is still OK.
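A sketch of this ordering, using a hypothetical Roman-numeral converter as the unit under test (the function and the case list are assumptions for illustration): the cases run in order from trivial to combined, so the first failure pinpoints the level of functionality that broke.

```python
def roman(n):
    """Convert a positive integer to a Roman numeral string."""
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in values:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)


# Test data ordered from simple to hard.
CASES = [
    (1, "I"),            # simplest possible input
    (3, "III"),          # repetition of one symbol
    (6, "VI"),           # additive combination
    (4, "IV"),           # subtractive form (harder)
    (1994, "MCMXCIV"),   # everything combined
]


def test_cases_simple_to_hard():
    for n, expected in CASES:
        assert roman(n) == expected, f"first failure at input {n}"
```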