The guarantee that references enforce is "never null". This is a very important and useful feature of C++. We can completely skip any checks such as:
void func1(LEX_CSTRING *str)
{
  if (!str) { ... }       /* runtime null check */
  DBUG_ASSERT(str);       /* or a debug-build assertion */
  ...
}
The code with *const references* is self-documenting in this sense. It makes it very clear that the function does not own the object it is passed (no question of whether a pointer should be freed or not) and that the reference is valid.
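As a rough sketch, here is the same function taking a const reference. The body and the stand-in struct are only for illustration (they are not actual server code), but they show that no null handling is needed at all:

#include <cstdio>
#include <cstddef>

/* Stand-in for LEX_CSTRING, for illustration only */
struct lex_cstring_example
{
  const char *str;
  size_t length;
};

/*
  No null check and no DBUG_ASSERT needed: a reference cannot be null,
  and const documents that func1 neither modifies nor owns the object.
*/
void func1(const lex_cstring_example &s)
{
  printf("%.*s\n", (int) s.length, s.str);
}

int main()
{
  lex_cstring_example name= { "hello", 5 };
  func1(name);   /* pass the object itself; no & at the call site, no NULL possible */
  return 0;
}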
Other arguments, not strictly related to this use case, that are in favor of const references:
* By limiting our use of references to const references we avoid the ambiguity between regular pass-by-value arguments and non-const reference arguments. You work with them as if they were pass-by-value arguments: since they are const, you cannot modify them, so you are never changing any of the caller's data.
* With a pointer it is unclear whether it points to a buffer/array or to a single item. A reference does not let you do ptr[0], ptr[1], ptr[2], etc.
* If we choose to make use of STL algorithms or C++11 features down the line, references tend to be easier to work with than pointers. Most STL algorithms take their arguments by value or by reference (see also the sketch after this list), e.g.:
std::max(*int_ptr_a, *int_ptr_b) vs std::max(int_a, int_b)
* If some legacy code does need a pointer parameter, applying the unary & operator to the reference produces the required pointer type, as shown in the sketch below.
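Here is a small sketch tying the last two points together. The legacy_print function is hypothetical, only there to stand in for an old pointer-based API:

#include <algorithm>
#include <cstdio>

/* Hypothetical legacy function that still wants a pointer */
static void legacy_print(const int *value)
{
  if (value)                        /* the old API still has to check */
    printf("%d\n", *value);
}

/* New-style helper taking const references: callers cannot pass NULL */
static int largest(const int &a, const int &b)
{
  return std::max(a, b);            /* no dereferencing needed */
}

int main()
{
  int x= 3, y= 7;

  /* Pointer style: every use needs a dereference or a check */
  int *px= &x, *py= &y;
  printf("%d\n", std::max(*px, *py));

  /* Reference/value style: reads like plain pass-by-value */
  int big= largest(x, y);

  /* Interfacing with legacy code: & on the reference gives a pointer */
  const int &ref= big;
  legacy_print(&ref);
  return 0;
}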
If we are doing refactoring anyway, I strongly request that we reconsider the policy of avoiding references completely. I suggest we at least make use of them as input parameters to functions and methods in new code.
Vicențiu