Wednesday, May 27, 2015

Preview for providing your Data Services to Mobile and Node.js


Three easy steps to connect your Data Virtualization OData service to your FeedHenry mobile Node.js Cloud App.

STEP 1: Openshift Application

Create your free OpenShift user account - instructions here

Deploy the cartridge via the OpenShift Web Console. On the Applications tab, choose 'Add Application...', then select the 'JBoss Data Virtualization 6' cartridge under the xPaaS section. Full installation instructions are provided.

Alternatively, if you have the OpenShift command-line tools installed, you can deploy the Data Virtualization cartridge from the command line:

rhc app-create <myApp> jboss-dv-6.0.0 

When the installation completes, you will be presented with a list of generated users and passwords similar to the screencap below. Make sure you save them!

STEP 2: WebUI Data Services

Use the WebUI (thanks to Mark Drilling) to create a Mashup Data Service with MySQL and Salesforce.


STEP 3: FeedHenry Mobile Application

Create a FeedHenry AngularJS Hello World quickstart for a Client and Cloud App.


Then, in the Cloud App, create the Node.js request to the OData URL using Basic Authentication.


Then modify the Client App to display the data.

A quick video describing the project.




References:
Data Virtualization Cartridge - https://github.com/jboss-datavirtualization/openshift-cartridge-datavirtualization

Monday, May 25, 2015

Data Virtualization Primer - The Architecture


This is the third in our Data Virtualization Primer Basics series. I will cover the Data Virtualization architecture and components, which are included in the presentation below. You can also check out the Data Virtualization Product Documentation. We will also highlight the architecture in this article.

The diagram to the right shows our Connect, Compose, Consume model, which highlights the Consumers, the Product, and the Sources. Data Virtualization gives us the tools and components to create, deploy, execute, and monitor aggregated, federated data services.

The three main areas of Data Virtualization are:
  • Server - The server is positioned between business applications/consumers and one or more data sources. It is an enterprise-ready, scalable, manageable runtime for the query engine that runs inside JBoss AS and provides additional security, fault-tolerance, and administrative features.
  • Design Tools - The design tools assist users in setting up Red Hat JBoss Data Virtualization for their desired data integration solution.
  • Administration Tools - The administration tools allow administrators to configure and monitor Red Hat JBoss Data Virtualization.



A JBoss Data Virtualization Server manages the following components:
  • Virtual Database - A virtual database (VDB) provides a unified view of data residing in multiple physical repositories. A VDB is composed of various data models and configuration information that describes which data sources are to be integrated and how. In particular, source models are used to represent the structure and characteristics of the physical data sources, and view models represent the structure and characteristics of the integrated data exposed to applications.
  • Access Layer - The access layer is the interface through which applications submit queries (relational, XML, XQuery and procedural) to the VDB via JDBC, ODBC or Web services.
  • Query Engine - When applications submit queries to a VDB via the access layer, the query engine produces an optimized query plan to provide efficient access to the required physical data sources as determined by the SQL criteria and the mappings between source and view models in the VDB. This query plan dictates processing order to ensure physical data sources are accessed in the most efficient manner.
  • Connector Architecture - Translators and resource adapters are used to provide transparent connectivity between the query engine and the physical data sources. A translator is used to convert queries into source-specific commands, and a resource adapter provides communication with the source.
The following design tools are available to assist users in setting up Red Hat JBoss Data Virtualization for their desired data integration solution:
  • Teiid Designer - Teiid Designer is a plug-in for JBoss Developer Studio, providing a graphical user interface to design and test virtual databases (VDBs).
  • Connector Development - The Connector Development Kit is a Java API that allows users to customize the connector architecture (translators and resource adapters) for specific integration scenarios.
  • WebUI - The WebUI allows data services to be built through the browser, so no local install is required. It is a developer preview and currently supports simple use cases.
The following administration tools are available for administrators to configure and monitor Red Hat JBoss Data Virtualization.
  • AdminShell - AdminShell provides a script-based programming environment enabling users to access, monitor and control JBoss Data Virtualization.
  • Management Console - The Management Console provided by the Red Hat JBoss Enterprise Application Platform (EAP) is a web-based tool allowing system administrators to monitor and configure services deployed within a running JBoss EAP instance, including JBoss Data Virtualization.
  • Management CLI - The Management CLI (command-line interface) is provided by JBoss EAP to manage services deployed within a JBoss EAP instance. Operations can be performed in batch modes, allowing multiple tasks to be run as a group.
  • JBoss Operations Network - Red Hat JBoss Operations Network provides a single interface to deploy, manage, and monitor an entire deployment of Red Hat JBoss Middleware applications and services, including JBoss Data Virtualization.
  • Admin API - JBoss Data Virtualization includes a Java API ( org.teiid.adminapi ) that enables developers to connect to and configure JBoss Data Virtualization at runtime from within other applications.
  • Dashboard - The Dashboard builder allows connection to VDBs through the DV JDBC driver to visualize the data for testing and Business Analytics.



You can check out the product documentation as well at https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/6/html/Security_Guide/chap-Red_Hat_JBoss_Data_Virtualization.html#Data_Services_Platform_Overview.

Stay tuned for the next Data Virtualization Primer topic!

Series 1 - The Basics
  1. Introduction
  2. The Concepts (SOAs, Data Services, Connectors, Models, VDBs)
  3. Architecture
  4. On Premise Server Installation
  5. JBDS and Integration Stack Installation
  6. WebUI Installation
  7. Teiid Designer - Using simple CSV/XML Datasources (Teiid Project, Perspective, Federation, VDB)
  8. JBoss Management Console
  9. The WebUI
  10. The Dashboard Builder
  11. OData with VDB
  12. JDBC Client
  13. ODBC Client
  14. DV on Openshift
  15. DV on Containers (Docker)

Thursday, May 21, 2015

Geo-spatial processing capabilities with Open Source Products


In this article we have a guest author, Rich Lucente. Rich is a Red Hat pre-sales engineer focusing on middleware and cloud computing initiatives for federal government customers. He discusses geo-spatial processing capabilities with open source products, including Fuse, BRMS, Data Virtualization, and EAP. You can find Rich on LinkedIn at https://www.linkedin.com/profile/view?id=50013729 or email him at rlucente@redhat.com.

Overview

Geo-spatial processing permeates the Department of Defense (DoD), with many solutions offered for tasks such as sensor and track fusion and correlation. Geo-spatial tasks encompass a specialized knowledge domain, often requiring input from subject matter experts for an effective solution. This article offers recommendations to modernize geo-spatial applications by leveraging current features and capabilities in popular open source products. It does not go into sufficient detail to create a "fully baked" solution, since that would require a full understanding of the prerequisites and dependencies, as well as access to key stakeholders and existing software capabilities.

A number of DoD programs have expressed an interest in modernization and Red Hat believes that several products in our middleware portfolio can be a key foundation to this effort.  Each product will be briefly described below with its applicability to this problem domain.

Red Hat JBoss Fuse 6.1

Red Hat JBoss Fuse 6.1 includes Apache Camel 2.12.0, which enables the definition of "routes" that specify chains, or pipelines, of activity on a message as it flows from a producer to a consumer. These routes can include mediation, transformation, and various other processors. Out of the box, Apache Camel includes a broad range of components that implement the protocols for endpoints. Examples include common endpoints like filesystems and the File Transfer Protocol (FTP), as well as more complicated interfaces like Java Database Connectivity (JDBC) and web services (both REST and SOAP).

Traditional application and data flow when processing sensor measurements and tracks can be externalized into Camel routes, enabling a more flexible processing solution. The highly specialized processing for sensor and track fusion and correlation can still be embodied in specialized libraries that are accessed via custom message processors and/or custom Camel components. This approach provides more modularity by bubbling up the processing flow to a higher abstraction layer.

These routes can be combined with specialized geo-spatial persistence stores like PostGIS or MySQL with geo-spatial extensions. Since Camel components already exist for database interactions, the results of the specialized library components can be persisted to geo-spatial data stores. Camel routes can manage the flow of the data through a larger integrated system, including subsystems and subcomponents that persist sensor measurements, track data, and correlation/fusion statistics into geo-spatial and other data sources.

Red Hat JBoss Business Rules Management System 6.1

Within complex specialized problem domains, many decision points exist on the type of data, the results of various statistical tests, and other heuristics to optimize the processing of the data.  These decisions are often buried in the implementation of the various libraries and sometimes are duplicated across software components, complicating any modernization and maintenance efforts.

Red Hat Business Rules Management System (BRMS) 6.1 specifically addresses the need to externalize various logical decisions into a versioned rules knowledge base. Facts can be asserted into the knowledge session, and then rules can be applied to prune the solution search space and create inferences on the data. This externalization of key decision logic enables more flexibility and modularity in implementations.

Fusion and correlation algorithms for sensor measurements and tracks are replete with heuristics and decision logic to optimize the processing of this data.  Rather than bury decisions within the library implementations, BRMS can enable externalization of those decision points, providing for a greater level of flexibility in how tracks and sensor measurements are processed.

Red Hat JBoss Data Virtualization 6.1

Red Hat JBoss Data Virtualization (DV) 6.1 enables federation of multiple physical data sources into a single virtual database, which may be exposed to an application as one or more logical views. Client applications can access each view as a web service (REST or SOAP), a JDBC/ODBC connection, or OData (using Atom XML or JSON). The DV tool offers an optimized query engine and a broad range of connectors to efficiently execute queries to populate the views.

Additionally, DV enables native query pass-throughs [1] to the underlying physical data source for those data sources that provide specialized query capabilities.  For example, databases with geo-spatial extensions can execute specialized queries like whether one object contains another.  By using query pass-throughs the DV query engine will not attempt further processing of the query but instead pass it as-is to the underlying geo-spatial datasource.  This pass-through query processing can be combined with standard SQL queries from other data sources so that DV can provide a highly customizable, flexible data access layer for client applications.  This data access layer can then be accessed as JDBC/ODBC, REST/SOAP web services and OData sources.

The Oracle and MongoDB translators within DV 6.1 also support geo-spatial operators. Specifically, the MongoDB translator [2] supports geo-spatial query operators in the "WHERE" clause when the data is stored in GeoJSON format in the MongoDB document. These functions are supported:

  • CREATE FOREIGN FUNCTION geoIntersects (columnRef string, type string, coordinates double[][]) RETURNS boolean;
  • CREATE FOREIGN FUNCTION geoWithin (columnRef string, type string, coordinates double[][]) RETURNS boolean;
  • CREATE FOREIGN FUNCTION near (columnRef string, coordinates double[], maxdistance integer) RETURNS boolean;
  • CREATE FOREIGN FUNCTION nearSphere (columnRef string, coordinates double[], maxdistance integer) RETURNS boolean;
  • CREATE FOREIGN FUNCTION geoPolygonIntersects (ref string, north double, east double, west double, south double) RETURNS boolean;
  • CREATE FOREIGN FUNCTION geoPolygonWithin (ref string, north double, east double, west double, south double) RETURNS boolean;
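For illustration only, a hypothetical query sketch showing how one of these functions might appear in a WHERE clause against a VDB backed by the MongoDB translator (the poi table, the name and location columns, and the coordinates are all invented):

```sql
-- Hypothetical: find points of interest inside an invented bounding polygon.
SELECT name
FROM poi
WHERE geoWithin(location, 'Polygon',
    ((1.0, 1.0), (1.0, 5.0), (5.0, 5.0), (5.0, 1.0), (1.0, 1.0)));
```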

The Oracle translator [3] supports the following geo-spatial functions:

  • Relate = sdo_relate
  • CREATE FOREIGN FUNCTION sdo_relate (arg1 string, arg2 string, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_relate (arg1 Object, arg2 Object, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_relate (arg1 string, arg2 Object, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_relate (arg1 Object, arg2 string, arg3 string) RETURNS string;
  • Nearest_Neighbor = sdo_nn
  • CREATE FOREIGN FUNCTION sdo_nn (arg1 string, arg2 Object, arg3 string, arg4 integer) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_nn (arg1 Object, arg2 Object, arg3 string, arg4 integer) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_nn (arg1 Object, arg2 string, arg3 string, arg4 integer) RETURNS string;
  • Within_Distance = sdo_within_distance
  • CREATE FOREIGN FUNCTION sdo_within_distance (arg1 Object, arg2 Object, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_within_distance (arg1 string, arg2 Object, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_within_distance (arg1 Object, arg2 string, arg3 string) RETURNS string;
  • Nearest_Neighbour_Distance = sdo_nn_distance
  • CREATE FOREIGN FUNCTION sdo_nn_distance (arg integer) RETURNS integer;
  • Filter = sdo_filter
  • CREATE FOREIGN FUNCTION sdo_filter (arg1 Object, arg2 string, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_filter (arg1 Object, arg2 Object, arg3 string) RETURNS string;
  • CREATE FOREIGN FUNCTION sdo_filter (arg1 string, arg2 Object, arg3 string) RETURNS string;

Hibernate Search in Enterprise Application Platform (EAP)

Besides the above, a canvass of activities across Red Hat shows that the handling of geo-spatial information is also incorporated into other products. Hibernate Search, which is part of Red Hat JBoss Enterprise Application Platform (EAP) and the Red Hat JBoss Web Framework Kit (WFK), implements geo-spatial query capabilities atop Apache Lucene. The implementation enables either a classical range query on longitude/latitude or a hash/quad-tree indexed search when the data set is large.

The Geological Survey of the Netherlands (TNO) is using JBoss EAP 6 in conjunction with Hibernate Spatial to process geo-spatial data.  More information on this is available at https://www.tno.nl/en/focus-area/energy/geological-survey-of-the-netherlands/

Other programs within the Department of Defense are actively applying Red Hat technology as well. Programs often leverage EAP, as well as Apache Tomcat and Apache httpd within Enterprise Web Server, to connect to backends in MySQL and MongoDB for basic track fusion, geo-spatial processing/querying, and displaying tracks on a map.

Conclusion

Geo-spatial processing is a key component of many DoD systems, at both the strategic and tactical level.  This article presented some alternatives to traditional implementations to more flexibly implement solutions that leverage features and capabilities in modern software frameworks.


To find out more examples and articles on each of the products you can also check out the resources from the Technical Marketing Managers:


  • EAP/JDG - Thomas Qvarnström - @tqvarnst
  • DV/Feedhenry - Kenny Peeples - @ossmentor
  • BRMS/BPMS - Eric D. Schabell - @ericschabell
  • Fuse/A-MQ - Christina Lin - @christina_wm
    Wednesday, May 20, 2015

    Data Virtualization Primer - The Concepts


    Before we move on to Data Virtualization (DV) Architecture and jump into our first demo for the Primer, let's talk about the concepts and examine how and why we want to add a Data Abstraction Layer.

    This is the second in our Data Virtualization Primer Basics series. I cover the concepts in the presentation below, which is also available at http://teiid.jboss.org/basics/. We will also highlight some of the concepts in this article.

    We have some main concepts that we should highlight which are:
    • Source Models
    • View Models
    • Translators
    • Resource Adapters
    • Virtual Databases
    • Modeling and Execution Environments
    Source Models represent the structure and characteristics of physical data sources, and a source model must be associated with a translator and a resource adapter.  View Models represent the structure and characteristics you want to expose to your consumers.  These view models are used to define a layer of abstraction above the physical layer.  This enables information to be presented to consumers in business terms rather than as it is physically stored.  The views are defined using transformations between models.  The business views can be in a variety of forms: relational, XML or Web Services.

    A Translator provides an abstraction layer between the DV query engine and the physical data source; it knows how to convert DV-issued query commands into source-specific commands and execute them using the Resource Adapter.  DV provides pre-built translators for Oracle, DB2, MySQL, Postgres, etc.  The resource adapter provides the connectivity to the physical data source, giving DV a way to natively issue commands and gather results.  A resource adapter can connect to a relational data source, web service, text file, mainframe connection, etc.


    A Virtual Database (VDB) is a container for components used to integrate data from multiple data sources, so they can be accessed in an integrated manner through a single, uniform API.  The VDB contains the models.  There are two different types of VDBs.  The first is a dynamic VDB, which is defined using a simple XML file.  The second is a VDB built through the DV Designer in Eclipse, which is part of the integration stack; this VDB is in Java Archive (JAR) format.  The VDB is deployed to the Data Virtualization server, and then the data services can be accessed through JDBC, ODBC, REST, SOAP, OData, etc.
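    As a sketch of the first type, a minimal dynamic VDB XML file might look like the following (the VDB, model, translator, and JNDI names here are invented for illustration):

```xml
<vdb name="CustomerVDB" version="1">
  <description>Illustrative dynamic VDB; all names are placeholders.</description>
  <model name="Customers" type="PHYSICAL">
    <!-- bind the model to a translator and a JCA connection (resource adapter) -->
    <source name="mysql-source" translator-name="mysql5"
            connection-jndi-name="java:/CustomerMySQLDS"/>
  </model>
</vdb>
```

    Deploying a file like this (conventionally named with a -vdb.xml suffix) to the Data Virtualization server makes its models available through the access layer.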


    The two main high level components are the Modeling and Execution Environments.  The Modeling Environment is used to define the abstraction layers.  The Execution Environment is used to actualize the abstract structures from the underlying data, and expose them through standard APIs. The DV query engine is a required part of the execution environment, to optimally federate data from multiple disparate sources.

    Now that we have highlighted the concepts, the last topic to cover is why data abstraction, in the form of data services, is good for SOA and microservices.  Below are some of the reasons why data services are important in these architectures:
    • Expose all data through a single uniform interface
    • Provide a single point of access to all business services in the system
    • Expose data using the same paradigm as business services - as "data services"
    • Expose legacy data sources as data services
    • Provide a uniform means of exposing/accessing metadata
    • Provide a searchable interface to data and metadata
    • Expose data relationships and semantics
    • Provide uniform access controls to information



    Stay tuned for the next Data Virtualization Primer topic!

    Series 1 - The Basics
    1. Introduction
    2. The Concepts (SOAs, Data Services, Connectors, Models, VDBs)
    3. Architecture
    4. On Premise Server Installation
    5. JBDS and Integration Stack Installation
    6. WebUI Installation
    7. Teiid Designer - Using simple CSV/XML Datasources (Teiid Project, Perspective, Federation, VDB)
    8. JBoss Management Console
    9. The WebUI
    10. The Dashboard Builder
    11. OData with VDB
    12. JDBC Client
    13. ODBC Client
    14. DV on Openshift
    15. DV on Containers (Docker)

    Monday, May 18, 2015

    Data Virtualization Primer - Introduction

    This week we are starting the Data Virtualization Primer, which I am splitting into three series - The Basics, The Connectors, and The Solutions.  My goal is to publish one or two articles a week, each covering a topic that can be reviewed in a short amount of time.  Demos and examples will be included, and some of the topics will be broken into multiple parts to make them easy to digest.  The planned outline is below, along with our first topic, the Data Virtualization introduction.

    Series 1 - The Basics
    1. Introduction
    2. The Concepts (SOAs, Data Services, Connectors, Models, VDBs)
    3. Architecture
    4. On Premise Server Installation
    5. JBDS and Integration Stack Installation
    6. WebUI Installation
    7. Teiid Designer - Using simple CSV/XML Datasources (Teiid Project, Perspective, Federation, VDB)
    8. JBoss Management Console
    9. The WebUI
    10. The Dashboard Builder
    11. OData with VDB
    12. JDBC Client
    13. ODBC Client
    14. DV on Openshift
    15. DV on Containers (Docker)
    Series 2 - The Connectors

    This series will cover each connector including example demos of each.

    Series 3 - The Solutions
    1. Big Data Example
    2. IoT Example
    3. Cloud Example
    4. Mobile Example
    5. JBoss Multi-product examples (Fuse/BPMS/BRMS/Feedhenry/DG)

    Thursday, May 14, 2015

    Unlock the value of SaaS within your enterprise



    Connecting systems of engagement, like CRM, with systems of record, such as ERP, can be challenging.  Some systems reside on-premise, but more and more are moving to the cloud.  And adoption of SaaS services, such as Salesforce, can make integration seem even more daunting.  But it doesn't have to be....

    You can quickly connect SaaS and on-premise applications to expand the value of both with Red Hat® JBoss® Fuse.  This lightweight enterprise service bus includes Apache Camel and makes these connections not only possible, but easy as well. 


    Join this webinar to learn about:
    • Connectors included with JBoss Fuse, specifically the Salesforce and SAP connectors.
    • Connecting Salesforce with SAP through a demo.
    • How to easily expand the solution to mobile devices using a mobile application platform, FeedHenry™ by Red Hat.


    Speakers: 
    Kenny Peeples, JBoss technology evangelist, Red Hat
    Luis Cortes, Partner marketing manager, Red Hat

    Join the live event:
    Wednesday, May 27, 2015 | 11 a.m. EDT | 8 a.m. PDT

    Register here for the webinar.

    Wednesday, May 13, 2015

    Moving to Data Services for Microservices

    There have been a lot of discussions about microservices lately.  A lot of the concentration has been on the services themselves.  But what about the data that these services need and use?  Should data be tightly coupled to the microservice?  Should there be abstraction between the service and the data?  In this blog we will touch on micro data services and how I think they can be created.

    A microservice is a software architecture style, a form of application development in which applications are built and composed as a suite of services.  The services are small, independent, self-contained, independently deployable, and scalable.  They are highly decoupled and focus on a small task or capability.  So a formal definition:
    Microservices is an architectural approach that emphasizes the functional decomposition of applications into single-purpose, loosely coupled services managed by cross-functional teams, for delivering and maintaining complex software systems with the velocity and quality required by today’s digital business.
    One of the characteristics of microservices that Martin Fowler describes in his microservices article is Decentralized Data Management.  He describes this as letting each service manage its own database: either different instances of the same database technology or entirely different database systems.  As he indicates, this is an approach called Polyglot Persistence.  In the context of the database, this refers to services using a mix of databases to take advantage of the fact that different databases are suitable for different types of programs.  Of course, there may already be existing silos or monolithic databases that the microservices need to use.

    So first let's talk about going from Monolith to Microservices visually and then let's talk about how Data Virtualization can help Enterprises move to microservices.  


    The monolith application is single-tiered: the user interface and data access code are combined in a single program on a single platform.  Usually a monolith describes mainframe-type applications with tight coupling of the components instead of reuse and modularity.  There are several disadvantages to using the monolith approach:
    • Less iteration due to large code base and complex integration points with many dependencies
    • Maintenance  of the large code base
    • Code quality can be poor with the large code base







    The microservice architecture packages the application components, including data access, into small, independent services.  I wanted to highlight some of the advantages of using microservices:

    • Microservice architecture gives developers the freedom to independently develop and deploy services
    • A microservice can be developed by a fairly small team
    • Code for different services can be written in different languages (though many practitioners discourage it)
    • Easy integration and automatic deployment (using open-source continuous integration tools such as Jenkins, Hudson, etc.)
    • Easy to understand and modify for developers, thus can help a new team member become productive quickly
    • The developers can make use of the latest technologies
    • The code is organized around business capabilities
    • Starts the web container more quickly, so the deployment is also faster
    • When change is required in a certain part of the application, only the related service can be modified and redeployed - no need to modify and redeploy the entire application
    • Better fault isolation: if one microservice fails, the others will continue to work (although one problematic area of a monolith application can jeopardize the entire system)
    • Easy to scale and integrate with third-party services
    • No long-term commitment to technology stack


    Now let’s move toward the data discussion with microservices.  How can I create a micro data service so the microservice has access to the data it needs, and only the data it needs?  That is where we can pull in JBoss Data Virtualization to allow easy migration and adoption of microservices.  As seen in the diagram below, we have a lot of different data sources that microservices may need.  So we can use Data Virtualization to add micro data services for each of the microservices.  We can also add security, such as row-level security and column masking, to the Virtual Database (VDB).  A VDB can be created for each microservice, or we can create multiple micro views in the VDBs.  What are the benefits of using Data Virtualization for micro data services?
    • Connect to many data sources
    • Create VDBs and views according to capabilities
    • Expose the VDBs through different standards (ODBC, JDBC, OData, REST, SOAP) for the microservices
    • Ability to place your micro data service in the xPaaS on OpenShift
    • Create access levels based on roles for fine-grained access
    • Keep your data stores as they are with new DV views, and migrate to new sources easily with DV
    • Provide the same data services used in the microservices to business intelligence analytic tools
    Now that you see the advantages and I have piqued your curiosity, check out the videos, documentation, and downloads to start your first data service for use with your microservices:


    References: