These are some good questions.
First, is there a reason why you encoded xxA2, xxA5, xxA7, and xxAA atomically, instead of as A1/A4/A6/A9 + a combining "outside depth area" mark?
The goal in this proposal was not to come up with the smallest number of code points, but, where possible, to match longstanding industry and official legacy practice in this field.
The dotted circle, indicating "outside depth area", is used systematically, but it forms only a small number of combinations with other symbols. In addition, while the "semantics" in these cases do combine, the glyphs actually do not: in actual fonts, each symbol is rendered a tad smaller inside the circle than outside it.
Given that, going to a combining model would put up two stumbling blocks to migration for no real gain in expressive power: first, everything would have to be re-coded from a single PUA code or ASCII character to a sequence, and then the font would have to be upgraded to recognize the sequence and produce a combined glyph (simple overlays give a substandard appearance - I won't even mention xxAA).
Contrast this with the buoys (cols xx0 and xx1) and topmarks (cols xx2 and xx3). Here, the number of combinations (excluding supposedly unrealistic ones) is on the order of a hundred, and it is proving tricky to be sure whether any particular one of them can safely be left out. In that instance, the combining approach reduces the character count appreciably, while giving the system some needed expressiveness and flexibility.
That's why we followed the legacy approach that does use combining marks here, rather than the one that uses a precomposed subset. What legacy practice uses here is overlays: base symbols and marks have no need for attachment points or the similar fancy footwork needed, for example, to make combining accents work - the glyphs are regular enough to use one consistent set of metrics for inclined and upright navigation mark symbols, respectively.
Hence, formally upgrading these overlays to combining marks changes essentially nothing but the code point mapping for an important part of the legacy base. That's a win-win.
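To make the migration cost under the two models concrete, here is a minimal Python sketch. The PUA values U+E0A1 and U+E0A2 are invented for illustration (standing in for the proposal's xxA1/xxA2); only U+20DD COMBINING ENCLOSING CIRCLE is a real combining mark.

```python
# Hypothetical precomposed model: one code point per combined symbol.
ATOMIC = "\uE0A2"           # e.g. "mark inside depth-area circle" (invented PUA)

# Hypothetical combining model: base symbol + enclosing combining mark.
COMBINING = "\uE0A1\u20DD"  # invented PUA base + U+20DD COMBINING ENCLOSING CIRCLE

# Under a combining model, legacy single-code data has to be re-coded
# character by character into sequences:
legacy_to_sequence = {"\uE0A2": "\uE0A1\u20DD"}

def migrate(text: str) -> str:
    """Re-code legacy atomic symbols as base + combining-mark sequences."""
    return "".join(legacy_to_sequence.get(ch, ch) for ch in text)

# One stored character becomes a two-code-point sequence,
# which the font must then map back to a single combined glyph.
assert len(ATOMIC) == 1
assert len(migrate(ATOMIC)) == 2
```

The sketch only shows the data side; the second stumbling block (teaching the font to substitute a combined glyph for the sequence) has no analogue this short.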
Second, I have a few possible unifications. Maybe the domain or glyph shapes are too disparate, but I don't think these should be dismissed out of hand (well, maybe the fish).
xxB3 -> U+1F41F
xx96 -> U+0305
xxD1 -> U+20DD
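For reference, both U+0305 and U+20DD are existing combining marks, which is easy to confirm from the Unicode character database via Python's standard library:

```python
import unicodedata

# U+0305 COMBINING OVERLINE: a non-spacing mark rendered above its base.
assert unicodedata.name("\u0305") == "COMBINING OVERLINE"
assert unicodedata.category("\u0305") == "Mn"    # Mark, non-spacing
assert unicodedata.combining("\u0305") == 230    # combining class: above

# U+20DD COMBINING ENCLOSING CIRCLE: an enclosing mark drawn around its base.
assert unicodedata.name("\u20DD") == "COMBINING ENCLOSING CIRCLE"
assert unicodedata.category("\u20DD") == "Me"    # Mark, enclosing

# Either one attaches to the preceding base character:
print("A\u0305", "A\u20DD")
```

Of course, whether the glyphs in an actual chart font meet the set's metric requirements is a separate question from these properties, as discussed below.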
I'd agree with you that the fish you mention is clearly not an appropriate unification target, because U+1F41F encodes a picture of a fish, whereas xxB3 encodes an abstract, highly stylized symbol of one. And xxB2 must match xxB3.
However, take another example, the counterpart to xxB1. This is clearly the existing anchor symbol at U+2693. I can see no reason not to make that particular unification: the symbols are clearly constructed on the same principle.
On the other two you mention, I'm in principle open to unification - but I want to be sure that the unification doesn't destroy implicit dependencies within the set. For example, xxD0 and xxD1 are exactly the same in line weight and radius, and both have a certain size relative to the buoy symbols. If U+20DD - after careful review - were found to fit that bill, then it might make a unification target.
Same for xx96. It needs to match xx97 and xx98 in line weight, height, and graduated length. Again, I have no problem investigating to what extent U+0305 fits these criteria. However, these characters are not "decorative", and they are used in an environment where some precision of representation is essential. They are, in this sense, much closer to math symbols than to emoji. For that reason, unless it can be established that the characters suggested for unification are in fact indistinguishable, the wiser course would be not to engage in any hasty conflations.