Hi Forum
A question for all the GPS experts on this forum. I have managed to read in coordinates from a GPS receiver and then use those coordinates to log tachy data or set out points in the field.
But if you look at the software from companies like Trimble, Leica, Topcon, etc., a correction is applied to the coordinates that come from the GPS.
What we do is set up our base station on a known point and then calibrate the rover on a minimum of two known points. The program then calculates a correction which is applied to all GPS coordinates in that area.
Does anyone know the formula or method used to calculate this correction?
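
My own guess, and I would appreciate confirmation from someone who knows, is that the correction is a 2D similarity (Helmert) transformation fitted by least squares to the calibration point pairs, possibly with a separate vertical shift on top. Here is a minimal NumPy sketch of that idea; the function names are my own and I am only assuming this is roughly what the vendor software does:

```python
import numpy as np

def fit_helmert_2d(local, grid):
    """Least-squares 2D similarity (Helmert) fit: grid ~ s*R(theta)*local + t.

    local, grid: sequences of matching (E, N) pairs, at least two of each.
    Returns (a, b, tE, tN), where a = s*cos(theta) and b = s*sin(theta).
    """
    A, L = [], []
    for (e, n), (E, N) in zip(local, grid):
        A.append([e, -n, 1.0, 0.0]); L.append(E)   # easting equation
        A.append([n,  e, 0.0, 1.0]); L.append(N)   # northing equation
    params, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(L), rcond=None)
    return params

def apply_helmert_2d(params, e, n):
    """Transform one raw GPS-derived (e, n) into the local grid."""
    a, b, tE, tN = params
    return a * e - b * n + tE, b * e + a * n + tN

# Hypothetical example: two calibration pairs, then transform a new point.
local = [(1000.0, 2000.0), (1500.0, 2600.0)]   # raw GPS-derived coordinates
grid  = [(1003.2, 1998.9), (1503.1, 2599.5)]   # known control coordinates
p = fit_helmert_2d(local, grid)
print(apply_helmert_2d(p, 1200.0, 2300.0))
```

With exactly two calibration points the fit is exact; with more, the residuals at the known points would give a check on the calibration quality. Is this on the right track?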
When using a Total Station, the correction applied depends on the distance from the setup point. We use the Q-Point method: the intersecting rays from the beacons to the Total Station form a polygon, and the most accurate result is taken as the centroid of this polygon. Does something similar apply to GPS?
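
For reference, the Q-Point centroid we use with the Total Station boils down to something like this (a simplified sketch; representing each ray as a beacon position plus a direction vector is my own simplification):

```python
import itertools

def ray_intersection(p1, d1, p2, d2):
    """Intersection of two 2D lines, each given as point + direction vector."""
    denom = d1[0] * d2[1] - d1[1] * d2[0]      # cross product of directions
    if abs(denom) < 1e-12:
        return None                            # (nearly) parallel rays
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def q_point(rays):
    """Centroid of all pairwise intersections of the beacon rays.

    rays: list of ((E, N), (dE, dN)) -- beacon position and ray direction.
    """
    pts = [ray_intersection(p1, d1, p2, d2)
           for (p1, d1), (p2, d2) in itertools.combinations(rays, 2)]
    pts = [p for p in pts if p is not None]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

I am not sure whether a distance-dependent correction like this carries over to GPS, which is really the heart of my question.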
Thanks
Michael