Moreover, as I understand it, there is no concept of an invalid code‑point.
The Unicode Standard, in §2.4, says:
Unicode Standard 6.2 wrote:
The range of integers used to code the abstract characters is called the codespace. A particular integer in this set is called a code point. When an abstract character is mapped or assigned to a particular code point in the codespace, it is then referred to as an encoded character.
In the Unicode Standard, the codespace consists of the integers from 0 to 10FFFF, comprising 1,114,112 code points available for assigning the repertoire of abstract characters.
A code‑point is just any element of the code‑space, which is the type's domain. If the code‑space contains 1,114,112 integers and there are 1,114,112 code‑points, then every integer in the code‑space is a code‑point, which implies there is no such thing as an “invalid code‑point”.
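As a quick check of the figure quoted from the standard (using Python for the arithmetic), the size of the range 0 .. 10FFFFh is exactly the 1,114,112 mentioned:

```python
# The codespace is the inclusive integer range 0..0x10FFFF;
# its size matches the 1,114,112 figure from the standard.
codespace_size = 0x10FFFF + 1
print(codespace_size)  # → 1114112
```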
However, some code‑points, the ones in the surrogate range D800h .. DFFFh, cannot be encoded in a UTF‑8/16/32 sequence (otherwise the sequence is ill‑formed). That implies these code‑points cannot be transmitted in a well‑formed encoding, but it still does not mean they are invalid code‑points.
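This distinction is easy to observe in practice. A sketch in Python (one conforming implementation among many): a lone surrogate is a perfectly representable code‑point value, yet serializing it to UTF‑8 is rejected, while any scalar value encodes fine.

```python
# A surrogate code point exists as a value in the codespace...
s = chr(0xD800)
print(len(s))  # → 1  (a single code point)

# ...but a conforming UTF-8 encoder must refuse it.
try:
    s.encode("utf-8")
except UnicodeEncodeError:
    print("surrogate cannot be encoded to UTF-8")

# A non-surrogate code point, even the last one, encodes normally.
print(len(chr(0x10FFFF).encode("utf-8")))  # → 4  (bytes)
```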
A code‑point may simply be valid or not valid depending on the context and intended usage.