In Part 1 of this blog series we explored how the world of enterprise and application architecture is growing increasingly complex, and why organizations need a solution that simplifies their landscape while ensuring high levels of data security and governance. To become a successful digital business, you need to select a data and application integration strategy that enables you to unify all your company’s data assets and analyze them in the context of your business’s bigger picture. If you haven’t read that post, make sure to check it out here.

Now that you understand why data virtualization will become a major element of both API management and integration disciplines, you’ll want to know what is required to virtualize your data. What tools do you need to establish a consistent, virtualized view of your data across your application ecosystem?
We’ve identified 5 critical capabilities that are required to effectively virtualize your data in a RESTful paradigm:
 API NORMALIZATION
The first challenge in data virtualization is to normalize access to the variety of endpoints that contain a given set of data objects, so that they can be invoked through a standardized set of URLs regardless of the underlying product. Normalization of APIs includes the following:
- Convert older protocols such as SOAP and WSDL to more easily accessible RESTful APIs. This can be done without changing the underlying service by providing a RESTful abstraction layer on top of the existing API. Standardize the payload to JSON to further insulate users from the underlying XML or other more complex structures.
- Standardize the URI structure and normalize paging, error codes, and even searching to minimize the endpoint-specific knowledge a developer needs.
- By default, provide access to the complete native payload, including the return of custom fields.
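The paging and error-code normalization above can be sketched in a few lines. This is a minimal illustration, not a specific product’s API: the provider names, field names, and error shapes are all hypothetical assumptions.

```python
# Sketch: hiding provider-specific pagination and error dialects behind one
# standard convention. Provider and field names here are hypothetical.

def normalize_query(provider: str, page: int, page_size: int) -> dict:
    """Translate a standard page/pageSize request into each provider's dialect."""
    if provider == "legacy_soap_bridge":
        # The underlying service uses offset-based paging.
        return {"startRecord": (page - 1) * page_size, "maxRecords": page_size}
    if provider == "modern_rest":
        return {"page": page, "per_page": page_size}
    raise ValueError(f"unknown provider: {provider}")

def normalize_error(provider: str, raw: dict) -> dict:
    """Map a provider-specific error payload onto one standard shape."""
    if provider == "legacy_soap_bridge":
        return {"code": raw.get("faultcode", "UNKNOWN"),
                "message": raw.get("faultstring", "")}
    return {"code": raw.get("error", "UNKNOWN"),
            "message": raw.get("message", "")}
```

Callers always ask for `page` and `page_size` and always receive a `{"code", "message"}` error, so only the abstraction layer has to know each endpoint’s quirks.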
Learn how our catalog of over 150 Elements, our pre-built API integrations, offers a wide range of features including normalized pagination, error codes, search, authentication, eventing, and more.
 ENRICHED CATALOG OF METADATA
Most modern APIs offer very limited metadata to describe the structure of the objects, the fields, and the relationships between objects at the endpoint. For example, which fields are required when posting to the endpoint? Which fields can I search on? A data virtualization strategy should insulate the user from needing in-depth knowledge of the endpoint, and therefore requires the ability to catalog richer metadata about each endpoint.
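One way to picture such a catalog is a per-endpoint descriptor that records what the raw API does not tell you. This is an illustrative sketch only; the field names and catalog shape are assumptions, not a particular vendor’s schema.

```python
# Sketch: an enriched metadata record for one hypothetical 'contact' endpoint.
CONTACT_METADATA = {
    "object": "contact",
    "fields": {
        "email":      {"type": "string",   "required": True,  "searchable": True},
        "first_name": {"type": "string",   "required": False, "searchable": True},
        "created_at": {"type": "datetime", "required": False, "searchable": False},
    },
}

def missing_required(metadata: dict, payload: dict) -> list:
    """Answer 'which fields are required when posting?' from the catalog."""
    return [name for name, spec in metadata["fields"].items()
            if spec["required"] and name not in payload]

def searchable_fields(metadata: dict) -> list:
    """Answer 'which fields can I search on?' from the catalog."""
    return [name for name, spec in metadata["fields"].items()
            if spec["searchable"]]
```

With this in place, a client can validate a POST body or build a search UI without ever reading the endpoint’s native documentation.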
Visit our Developer Portal, the one-stop shop for all things Cloud Elements. You'll find API reference documentation, quick start guides, and templates.
 CANONICALIZED DATA MODELS
The foundation of data virtualization is a centralized, or canonicalized, view of each data object. The canonicalized view provides a common perspective of each object regardless of the physical representation at each endpoint. The canonical model will contain all of the fields that you want to ensure are shared across every endpoint.
Within the enterprise, the challenge is that the ‘company’ object may be represented by dozens, hundreds, or thousands of endpoints. Enter the canonicalized data model, which provides a common perspective of each object.
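To make the idea concrete, here is a minimal sketch of projecting two endpoint-specific ‘company’ records onto one canonical shape. The source systems and field mappings are invented for illustration.

```python
# Sketch: one canonical 'company' object fed by two hypothetical endpoints
# whose physical field names differ. Mappings are illustrative assumptions.
FIELD_MAPS = {
    "crm_a": {"CompanyName": "name", "Phone1": "phone"},
    "erp_b": {"org_title": "name", "contact_number": "phone"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Project an endpoint-specific record onto the shared canonical fields."""
    mapping = FIELD_MAPS[source]
    return {canonical: record[native]
            for native, canonical in mapping.items() if native in record}
```

Both systems now yield the same `{"name", "phone"}` shape, so downstream consumers are insulated from each endpoint’s physical representation.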
Mark Geene, CEO & Co-Founder of Cloud Elements
 DATA TRANSFORMATION SERVICES
Each endpoint will be mapped to the virtualized object and transformed to a consistent request and response payload. Transformation services include the ability to map fields from each endpoint to the canonicalized object while also including the ability to transform payload types and values. For example, one system may use High, Medium and Low while another uses 1, 2 or 3 to denote severity level. A transformation service will transform the values to a consistent model.
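The severity example above can be sketched as a simple value-mapping table. The system names and the canonical vocabulary are assumptions for illustration.

```python
# Sketch of a value-level transformation: one system denotes severity as
# High/Medium/Low, another as 1/2/3; both map onto one canonical model.
SEVERITY_TO_CANONICAL = {
    "system_a": {"High": "high", "Medium": "medium", "Low": "low"},
    "system_b": {1: "high", 2: "medium", 3: "low"},
}

def transform_severity(source: str, value):
    """Map a source-specific severity value onto the canonical vocabulary."""
    return SEVERITY_TO_CANONICAL[source][value]
```

In practice the same table-driven approach extends to dates, currencies, picklists, and any other field whose values differ between endpoints.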
 MULTI-LEVEL VIEWS
Departments, divisions, partners, or even customers need to incorporate their custom data fields into their view of the virtualized resource. A data virtualization strategy is not one size fits all: departments will need to accommodate the unique data attributes that they care about but that aren’t necessarily required by other teams.
For example, marketing’s view of a contact will be different from the customer success team’s view. However, they will each need a core set of information to be consistent in order to move this data across your organization. A best practice identified by Gartner is to have at least three levels of views into a data object: a corporate view, a division or department view, and an individual user view. Each of these views rolls up to the next but is not dependent on the lower level. You can learn why Gartner named us a Visionary here.
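The three-level pattern can be sketched as views that extend a shared core without the core depending on them. The field names and department views below are illustrative assumptions.

```python
# Sketch: a corporate view of 'contact' extended by department-level views.
# Lower levels add fields; the corporate view never depends on them.
CORPORATE_VIEW = ["id", "email", "name"]

def extend_view(parent: list, extra: list) -> list:
    """Build a department- or user-level view by adding fields to its parent."""
    return parent + [field for field in extra if field not in parent]

marketing_view = extend_view(CORPORATE_VIEW, ["campaign_source"])
success_view = extend_view(CORPORATE_VIEW, ["renewal_date", "health_score"])
```

Marketing and customer success each see their own attributes, yet both views contain the corporate core, so the shared fields stay consistent as data moves across the organization.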
In order to become a successful business in the digital world, it’s imperative that you choose the data and application integration strategy that is right for your business. After all, Forrester estimates that enterprise IT spent over $32 billion on integration software alone in 2017. The right data and app integration strategy will empower you to unify all your company’s data assets and analyze them in context to get the bigger picture of your business across each department, product line, and asset.
Data-driven app integration is the future of enterprise architecture and data is at the heart of this transformation. Data virtualization is becoming a key component of both API management and integration disciplines in order to bring order to the fragmentation and proliferation of data across an enterprise. With Virtual Data Resources you can re-architect and re-imagine your integration strategy with a one-to-many hub that places the data you care about at the center of your app ecosystem.
Gain more insights in our latest whitepaper on how enterprises are shifting to a model that places the data they care about at the center of their integration strategy through data virtualization. Get your copy by clicking below.