Geodetic Framework – Company Policy

Corporate Policy

Quality assurance of geodetic parameters, seismic navigation data and well surface positions, amongst others, is too frequently overlooked during the exploration cycle, often resulting in inferior geo-spatial data being used within interpretation projects.  Many organizations lack the internal resources to address these issues, and so the data is assumed to be correct or the issues are ignored entirely.  Establishing a basic geodetic framework within an organization can be far less onerous than it might first appear.  Its establishment will help boost confidence in software performance and geo-spatial data quality, ensuring that the data is fit for use or, where it is not, that the errors within it can be quantified.  This document examines some of the fundamental building blocks of an integrated geodetic framework, along with the issues each block attempts to address.

 

Introduction

An exploration project typically involves data acquisition, data processing, interpretation and the drilling of an exploratory well.  Rarely is the geo-spatial component of the data adequately controlled as it passes through each stage of a project.  The potential for amplifying the error budget, in both the horizontal and vertical components, rises accordingly, often without the awareness of the asset teams.  What are some of the main sources of error?

Is the geo-spatial data appropriately assessed to determine its usefulness to the project?  What quality control systems are adopted?  Is the data fit for purpose?  Have the errors within the data been quantified, so that users are given guidance on whether the data is worthy of integration into a project?

Software applications rarely share common geodetic libraries or geodetic algorithms for performing coordinate operations (conversions and transformations).  Users all too often place blind faith in the software performing these operations correctly.  Furthermore, coordinate operations are often conducted by technicians who are under-qualified to appreciate the error that an incorrectly performed operation can introduce into the project data.  Many applications allow users to create new user-defined coordinate reference systems and coordinate transformations.  Are their definitions correct?  From which authority were the parameters taken?  What is the potential effect on data integrity should the wrong values be applied?
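One way to keep user-defined definitions honest is to compare them against the authority record.  The sketch below, assuming the pyproj library (which carries the EPSG registry via PROJ), transforms a test point with both the authority-defined ED50 and a hypothetical user-defined copy built from hand-entered Helmert parameters; a typo in the user-defined entry shows up as a positional offset.  The towgs84 values and the test point are illustrative only.

```python
# A minimal sketch, assuming pyproj is available.  It compares a
# user-defined ED50 definition against the EPSG authority definition
# so that mis-entered parameters show up as a positional offset.
from pyproj import CRS, Transformer

# Authority definitions taken directly from the EPSG registry.
ed50_epsg = CRS.from_epsg(4230)     # ED50 (geographic 2D)
wgs84 = CRS.from_epsg(4326)         # WGS 84

# Hypothetical user-defined copy of ED50 with a 3-parameter shift to WGS 84.
# The towgs84 values are illustrative only; real parameters must be taken
# from the governing authority for the area of use.
ed50_user = CRS.from_proj4(
    "+proj=longlat +ellps=intl +towgs84=-87,-96,-120 +no_defs"
)

lon, lat = 2.0, 56.0    # illustrative test point

to_wgs84_epsg = Transformer.from_crs(ed50_epsg, wgs84, always_xy=True)
to_wgs84_user = Transformer.from_crs(ed50_user, wgs84, always_xy=True)

lon_a, lat_a = to_wgs84_epsg.transform(lon, lat)
lon_b, lat_b = to_wgs84_user.transform(lon, lat)

# Roughly 111 km per degree: report the discrepancy in metres.
print(f"delta lat ~ {abs(lat_a - lat_b) * 111_000:.1f} m, "
      f"delta lon ~ {abs(lon_a - lon_b) * 111_000:.1f} m")
```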

Metadata and data attributes are often overlooked, discarded or lost, which leads to assumptions being made about coordinate referencing.  This is particularly prevalent with legacy data, but is not confined to it.  The incorrect use of record identifiers within a navigation data exchange file is another source of potential confusion, e.g. vessel positions being confused with CDP positions.  Such mistakes lead to in-line positional errors which are trickier to isolate.
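A simple loading check can catch many of these mix-ups.  The sketch below assumes a P1/90-style navigation exchange file in which the first character of each non-header record is a record identifier; the identifier-to-meaning mapping shown is illustrative and must be taken from the format specification and the file's own header records.

```python
# A minimal sketch: count record identifiers in a P1/90-style file and flag
# any that differ from the type the loading workflow expects (e.g. CDP
# positions rather than vessel fixes).  The mapping below is illustrative.
from collections import Counter

ILLUSTRATIVE_RECORD_TYPES = {
    "V": "vessel reference point",
    "S": "source",
    "G": "receiver group",
    "C": "CDP/CMP",
}

def summarise_record_types(path: str, expected: str) -> None:
    """Print a count per record identifier and flag unexpected types."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith("H"):   # skip header records
                continue
            counts[line[0]] += 1

    for ident, n in sorted(counts.items()):
        label = ILLUSTRATIVE_RECORD_TYPES.get(ident, "unknown - check the spec")
        flag = "" if ident == expected else "  <-- not the expected type"
        print(f"{ident}  {label:<22} {n:>8} records{flag}")

# Example: the interpretation project expects CDP positions only.
# summarise_record_types("line_001.p190", expected="C")
```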

A quality assurance system will ensure that basic workflows are established.  These workflows will capture essential geo-spatial parameters and ensure that the data has been subjected to a quality control procedure built on the business rules necessary to safeguard data integrity.
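The sketch below illustrates the kind of business rules such a workflow might enforce: every dataset entering a project carries a small set of geo-spatial attributes, and loading is refused if any are missing or inconsistent.  The field names and rules here are assumptions for illustration, not a prescribed standard.

```python
# A minimal sketch of QC business rules for geo-spatial metadata capture.
# Field names and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class GeoSpatialRecord:
    dataset_name: str
    crs_authority: str                 # e.g. "EPSG"
    crs_code: int                      # e.g. 23031 for ED50 / UTM zone 31N
    transformation_code: int | None    # EPSG code of the transformation used, if any
    vertical_datum: str | None         # e.g. "MSL", "LAT"
    units: str                         # e.g. "metre"
    source_document: str               # where the parameters were taken from

    def validate(self) -> list[str]:
        """Return a list of rule violations; an empty list means 'fit to load'."""
        problems: list[str] = []
        if self.crs_authority.upper() != "EPSG":
            problems.append("CRS not referenced to the EPSG registry")
        if self.crs_code <= 0:
            problems.append("missing or invalid CRS code")
        if self.units.lower() not in {"metre", "meter", "foot", "us survey foot"}:
            problems.append(f"unrecognised unit of measure: {self.units!r}")
        if not self.source_document:
            problems.append("no authority/source document recorded for the parameters")
        return problems

record = GeoSpatialRecord(
    dataset_name="Survey_2001_reproc",
    crs_authority="EPSG",
    crs_code=23031,
    transformation_code=None,
    vertical_datum="MSL",
    units="metre",
    source_document="",        # deliberately incomplete to show a violation
)
for issue in record.validate():
    print("QC violation:", issue)
```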

IT limitations often mean that geo-spatial data is not held in an appropriate data store, or is held in an inadequate software application ‘database’.  One consequence is that metadata and other essential data attributes are not preserved, degrading the worth of the data.  The IT function will rarely understand the quality assurance checks that should be applied to the data before it is released for general use.  Without proper rules and workflows, poor-quality data finds its way into interpretation projects, potentially compromising the business decisions made from it.

 
