My question: is there an advanced option that will change the SDRS algorithm to look at the reported total/used space of the datastore via the NFS RPC call, rather than calculating used space per VM?
We're basically running into an issue where we have 1 POD / 2 datastores with roughly 40% dedupe, and about 2 TB per datastore sitting empty that we can't use.
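To show the gap I'm talking about, here's a rough pyVmomi sketch (vCenter hostname and credentials are placeholders, and this is not how SDRS does its math internally; it just contrasts the two views of "used"): the datastore summary is what the NFS server reports back, while summing each VM's committed storage gives the per-VM view, which on a deduped volume can add up to far more than the array is actually burning.

```python
#!/usr/bin/env python
# Rough illustration only: compare the datastore-reported usage with the
# sum of per-VM committed space. vCenter name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        if ds.summary.type != "NFS":
            continue
        # What the NFS server reports back (post-dedupe numbers)
        reported_used = ds.summary.capacity - ds.summary.freeSpace
        # What you get by adding up each VM's committed space
        # (file-level sizes, which don't shrink when the array dedupes underneath)
        per_vm_used = sum(vm.summary.storage.committed for vm in ds.vm)
        gb = 1024 ** 3
        print(f"{ds.summary.name}: reported used {reported_used / gb:.0f} GB, "
              f"sum of per-VM committed {per_vm_used / gb:.0f} GB")
    view.Destroy()
finally:
    Disconnect(si)
```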
I feel your pain, but there's nothing in the product today, nor is any fix/workaround planned for any 5.x release. It's rumored to be addressed in 6.x, but that requires a redesign of SDRS. Apparently the developers never envisioned anyone using SDRS with either de-duped datastores or thin-provisioned VMDKs.
I've got some volumes with over 80% de-dupe. I calculated at one point that if I wanted to use SDRS in our environment without thin provisioning anything (at either the controller or the guest level), I'd have to buy another 80 TB of disk space. SDRS is not worth another quarter-million bucks of enterprise storage.
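For anyone curious how a number like that falls out, here's the back-of-the-envelope shape of the math; the 20 TB physical figure is a made-up round number, not my actual array:

```python
# Back-of-the-envelope only: at a dedupe ratio d (the fraction of logical data
# the array eliminates), the logical footprint you'd have to hold un-deduped is
# physical / (1 - d). The 20 TB physical figure is a stand-in, not a real volume.
dedupe = 0.80
physical_used_tb = 20
logical_tb = physical_used_tb / (1 - dedupe)      # 100 TB of "logical" data
extra_disk_tb = logical_tb - physical_used_tb     # 80 TB of additional raw disk
print(f"{logical_tb:.0f} TB logical -> {extra_disk_tb:.0f} TB extra disk to hold it un-deduped")
```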