
Friday, October 30, 2015

Webinar: How to become a data-driven organization to achieve more and gain a competitive edge







Data-driven companies share characteristics that help them achieve more and gain a competitive advantage within their industry. The importance of data-driven thinking is not new, but what does it mean in practice?

Red Hat and Mammoth Data deliver a modern, cloud-ready architecture using Hadoop and Spark to transform your organization into a data-driven one.

Join this webinar to learn about:
  • What a data-driven company is and the challenges to becoming one
  • Moving to real time—the stages of data-driven transformation
  • Data consolidation and analytics
  • Computer-aided decision making
  • Real-time decision making:
    • The semantic web, natural language, and “everything-as-a-service”
    • Modern data architecture and solutions
A case study, using a real-world situation, will also be a part of this webinar.
    Speakers: 

    Syed Rasheed, sr. product marketing manager, Red Hat
    Andrew C. Oliver, president and founder, Mammoth Data

    Join the live event:
    Time zone converter
    • Tuesday, November 17, 2015 | 11 a.m. EDT | 8 a.m. PDT

    Thursday, July 23, 2015

    Connecting to Cloudera Quickstart Virtual Machine from Data Virtualization and SQuirreL

    One of the great capabilities of JBoss Data Virtualization is the ability to connect to Hadoop through Hive, which was added in Data Virtualization (DV) 6.0.  This gives us the ability to aggregate data from multiple data sources, including big data, and to analyze that data through many different tools and standards.  According to the Re-Think Data Integration infographic (http://www.redhat.com/en/files/resources/en-rhjb-ventana-research-infographic.pdf), more than one-quarter of companies see virtualizing data as a critical approach to integrating big data.  With DV 6.1, Cloudera Impala was added for fast SQL query access to data stored in Hadoop, so I wanted to add an example of how to use Cloudera Impala as a data source for DV.  To find out how Cloudera Impala fits into the Hadoop ecosystem, take a look at the Impala Concepts and Architecture documentation.

    Differences between Hive and Impala

    First let's take a look at an overview of Hive and Impala.  Cloudera Impala is a native massively parallel processing (MPP) query engine which enables users to perform interactive analysis of data stored in HBase or HDFS.  The Apache Hive data warehouse software facilitates querying and managing large data sets residing in distributed storage, providing a mechanism to project structure onto this data and query it using a SQL-like language called HiveQL.  I grabbed some of the information below from the Cloudera Impala FAQ.

    How does Impala compare to Hive (and Pig)?  Impala is different from Hive because it uses its own daemons that are spread across the cluster to run queries.  Since Impala does not rely on MapReduce it avoids the startup overhead of MapReduce jobs, which allows Impala to return results in real time.
    Can any Impala query also be executed in Hive? Yes.  There are some minor differences in how some queries are handled, but Impala queries can also be completed in Hive.  Impala SQL is a subset of HiveQL.  Impala is maintained by Cloudera while Hive is maintained by Apache.
    Can I use Impala to query data already loaded into Hive and HBase? There are no additional steps to allow Impala to query tables managed by Hive, whether they are stored in HDFS or HBase.  Impala is configured to access the Hive metastore.
    Is Hive an Impala requirement?  The Hive metastore service is a requirement.  Impala shares the same metastore database as Hive, allowing Impala and Hive to access the same tables transparently.  Hive itself is optional.
    What are good use cases for Impala as opposed to Hive or MapReduce?  Impala is well-suited to executing SQL queries for interactive exploratory analytics on large data sets.  Hive and MapReduce are appropriate for very long running batch-oriented tasks such as ETL.

    Cloudera Setup

    There are several options that Cloudera offers to test their product: quickstart VMs, Cloudera Live, and a local install of CDH.  For quick setup I chose the quickstart VirtualBox virtual machine for CDH 5.4.x, a single-node Hadoop cluster with examples for easy learning.  The VMs run CentOS 6.4 and are available for VMware, VirtualBox and KVM; all of them require a 64-bit host OS.  I also chose a bridged network and increased the memory and CPUs.  Once the VM is downloaded, you extract the files from the ZIP, import the VM, make the network/memory/CPU setting changes, and then start the VM.


    Once you launch the VM, you are automatically logged in as the cloudera user. The account details are:
    • username: cloudera
    • password: cloudera
    The cloudera account has sudo privileges in the VM. The root account password is cloudera.  The root MySQL password (and the password for other MySQL user accounts) is also cloudera.  Hue and Cloudera Manager use the same credentials.  Next we browse to the Cloudera Manager at quickstart.cloudera:7180/cmf/home and make sure services such as Impala, Hive and YARN are started.


    Now that we have the VM set up we want to make sure we can add data.  I ran through Cloudera's Tutorial Exercises 1-3 at quickstart.cloudera/tutorial/home.  Then we can see the tables and data through the Hue UI (quickstart.cloudera:8888) through Impala and Hive.


    Now we have our big data environment running, so let's move on to testing Impala and Hive.

    SQuirreL Testing

    SQuirreL SQL Client is a free, open source graphical SQL client written in Java that lets you view the structure of a JDBC-compliant database, browse the data in tables, issue SQL commands, and more.  First we have to download the JDBC drivers for Impala and Hive; we will start with Impala in SQuirreL.  I downloaded the Impala JDBC v2.5.22 driver and unzipped the jdbc4 zip file.  We add the JDBC driver by clicking on the Drivers tab, then the Add Driver button.  In the Extra Class Path tab click Add, browse to the extracted jar files, and add them along with the Name and Class Name.


    Once the driver is added, it should have a green check mark.


    Next we click on the Aliases tab and add a new connection.  


    We add the driver and URL and then click connect.  Once connected we can browse to the tables, select a table and preview content.


    Now that we have previewed the data through Impala in SQuirreL, we want to test Hive as well.  We download the Hive 1.2.1 driver from Apache Hive.  We do the same as above, adding the driver by pointing to the jars in the lib directory of the download and using the Driver Class org.apache.hive.jdbc.HiveDriver.  Once the driver is added we create a session connecting to Hive.
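    The same connection details used in SQuirreL can also be exercised from plain JDBC code, which is a handy sanity check before moving on to DV.  The sketch below is a minimal example assuming the quickstart VM hostname (quickstart.cloudera), the default Impala daemon port (21050) and HiveServer2 port (10000), and the customers table from Cloudera's tutorial; adjust the host, table and credentials for your environment and put the Impala and Hive driver jars on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class QuickstartJdbcCheck {
        public static void main(String[] args) throws Exception {
            // Impala: Cloudera JDBC4 driver, no SASL on the quickstart VM
            Class.forName("com.cloudera.impala.jdbc4.Driver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:impala://quickstart.cloudera:21050/;auth=noSasl");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customers")) {
                while (rs.next()) {
                    System.out.println("Impala row count: " + rs.getLong(1));
                }
            }

            // Hive: Apache Hive JDBC driver against HiveServer2
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://quickstart.cloudera:10000/default", "cloudera", "cloudera");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM customers")) {
                while (rs.next()) {
                    System.out.println("Hive row count: " + rs.getLong(1));
                }
            }
        }
    }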


    We can view the tables and content, which shows that the drivers work and that we will be able to use Hive and Impala through DV to access the data in Cloudera.


    Data Virtualization Testing

    Now we can move onto testing Cloudera Impala with DV.  DV 6.1 GA can be downloaded from jboss.org.  We will also use JBoss Developer Studio and the JBoss Integration Stack.

    Data Virtualization Setup

    Run through the DV install instructions to install the product.  Then we set up the module and driver configuration for the Cloudera Impala JDBC driver.

    To configure Cloudera Impala as a data source with DV:

    1) Make sure the server is not running.
    2) In the modules directory create the org/apache/hadoop/impala/main folder.


    3) Within the folder we want to create the module.xml file.

    <?xml version="1.0" encoding="UTF-8"?>
    <module xmlns="urn:jboss:module:1.0" name="org.apache.hadoop.impala">
        <resources>
          <resource-root path="ImpalaJDBC4.jar"/>
          <resource-root path="hive_metastore.jar"/>
          <resource-root path="hive_service.jar"/>
          <resource-root path="libfb303-0.9.0.jar"/>
          <resource-root path="libthrift-0.9.0.jar"/>
          <resource-root path="TCLIServiceClient.jar"/> 
          <resource-root path="ql.jar"/>        
        </resources>
    
        <dependencies>
            <module name="org.apache.log4j"/>
            <module name="org.slf4j"/>
     <module name="org.apache.commons.logging"/>
            <module name="javax.api"/>
            <module name="javax.resource.api"/>        
        </dependencies>
    </module>
    

    Note that the resources section points to the jar files included with the Impala driver.

    4) Now copy the JDBC jar files listed in the resources section into the same folder.
    5) Next we update the standalone.xml file in the standalone/configuration folder, adding the datasource under the <datasources> element and the driver under <drivers> in the datasources subsystem.

    <datasource jndi-name="java:/impala-ds" pool-name="ImpalaDS" enabled="true" use-java-context="true">
         <connection-url>jdbc:impala://10.1.10.168:21050/;auth=noSasl</connection-url>
         <driver>impala</driver>
    </datasource>
    

    <driver name="impala" module="org.apache.hadoop.impala">
         <driver-class>com.cloudera.impala.jdbc4.Driver</driver-class>
    </driver>
    

    6) Now we can start the server.

    JBoss Developer Studio Setup

    Run through the install instructions for installing JBoss Developer Studio and the Teiid components from the Integration Stack.  Create a new workspace, e.g., dvdemo-cloudera.  Then we will create an example view.

    1. Create a new Teiid Model Project, e.g., clouderaimpalatest


    2. Next, import metadata using JDBC from a database into a new or existing relational model using a Data Tools JDBC data source connection profile.


    3.  Next we create a new connection profile with the Generic JDBC Profile Type.


    4.  We create a new driver, e.g., Impala Driver, by adding all the jars and setting the connection settings.  The username/password are ignored.







    5. Next we import the metadata and select the types of objects in the database to import.  We will just choose the tables and then the sources folder.




    6. We add a new server and set it as externally managed.  We start the server externally and then click the start button within JBDS.


    7. Within the Sources folder in the Teiid Perspective we right-click one of the tables, then choose Modeling and Preview Data.  If everything is set up properly, the data will display.


    8. Now we can create a view to add an abstraction layer.  In our case we are just going to create a view that maps one-to-one to the source.  To show the full power of DV we would normally aggregate or federate multiple sources into a view, either in this layer or in another layer above it that uses the lower layers for greater abstraction and flexibility.  We will test with the customers table.  After creating our view we tie the source table to it.  We also set the primary key on customerid so that OData is available when we create the VDB.  We can also preview the data on the view.


    9.  We create a VDB that we can deploy to the server and execute.


    10.  After right-clicking on clouderaimpalatest.vdb we click Deploy so it is deployed to the server.  Next we can browse to the OData service to view the data as a consumer.

    -First we take a look at the metadata


    -Then we can list all the customers
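
    Besides OData, the deployed VDB can also be queried directly over the Teiid JDBC driver that ships with DV.  The following is a minimal sketch, assuming the default Teiid JDBC port (31000), the VDB name used above, and a DV application user with data roles; the host, user and password are placeholders, and the Teiid client jar from the DV installation must be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class VdbQuery {
        public static void main(String[] args) throws Exception {
            // Teiid JDBC driver class shipped with Data Virtualization
            Class.forName("org.teiid.jdbc.TeiidDriver");
            // jdbc:teiid:<vdb-name>@mm://<host>:<port>; the values here are assumptions
            String url = "jdbc:teiid:clouderaimpalatest@mm://localhost:31000";
            try (Connection conn = DriverManager.getConnection(url, "teiidUser", "password");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM customers")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }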


    References

    https://hive.apache.org/
    https://developer.jboss.org/wiki/ConnectToAHadoopSourceUsingHive2
    http://www.cloudera.com/content/cloudera/en/downloads/connectors/impala/jdbc/impala-jdbc-v2-5-5.html
    https://github.com/onefoursix/Cloudera-Impala-JDBC-Example

    Monday, March 2, 2015

    Using a Customer Context with the Camel Components and Data Virtualization


    Overview


    Cojan van Ballegooijen, Red Hat Senior Solution Architect, Bill Kemp, Red Hat Senior Solution Architect, and I have created an example around a Customer Context use case to show how to use the Camel components in Fuse to access a Data Virtualization virtual database (VDB).  The data service provides the customer context, which is data aggregated from an XML file and a CSV file.  The data on each customer provides the name, the credit score, the number of calls the customer has placed to customer support and the sentiment (Hot, Cold, Warm) toward the company from social media.  We will review the components and show how to run the demo.  The demo repository is located in jbossdemocentral on GitHub.  In our project directory we have the individual use cases, which are built and deployed when running the scripts.  The Teiid JDBC jar is loaded into the profile via wrap during the run script.

    Use Case 1 - JDBC Component

    In the first use case we set up a bean for the SQL query that we want to execute and a bean for the datasource properties.  A timer component runs the query every 60 seconds; the results from the query are then split into individual records and sent to the log.  We are using the Blueprint DSL in the blueprint.xml.

    blueprint.xml design view
    blueprint.xml source view with the query property and datasource properties
    Note the URL that accesses the CustomerContext virtual database.  Also note that the query is set in the message body and the datasource name is part of the jdbc URI.

    JDBC Component excerpt from the Camel Component Page:
    The jdbc component enables you to access databases through JDBC, where SQL queries (SELECT) and operations (INSERT, UPDATE, etc.) are sent in the message body. This component uses the standard JDBC API, unlike the Camel SQL component, which uses spring-jdbc.

    Maven users will need to add the camel-jdbc dependency to their pom.xml for this component.  This component can only be used to define producer endpoints, which means that you cannot use the JDBC component in a from() statement.  The URI format for the JDBC component is:

    jdbc:dataSourceName[?options]

    This component only supports producer endpoints.   You can append query options to the URI in the following format, ?option=value&option=value&...
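
    The demo itself uses the Blueprint DSL, but the same route can be sketched in Camel's Java DSL, which may make the flow easier to follow.  The datasource bean name and query below are placeholders for this sketch, not the exact names used in the demo.

    import org.apache.camel.builder.RouteBuilder;

    public class JdbcPollRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Every 60 seconds: put the SQL query in the message body, send it to the
            // jdbc endpoint (the datasource is looked up in the registry by name),
            // split the list of result rows and log each record.
            from("timer:customerContext?period=60000")
                .setBody(constant("SELECT * FROM customers"))
                .to("jdbc:customerContextDS")
                .split(body())
                .log("${body}");
        }
    }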

    Use Case 2 - SQL Component

    The second use case is similar to the first in that a timer component runs a query every 60 seconds; the results from the query are then split into individual records and sent to the log.  We are again using the Blueprint DSL in the blueprint.xml.  The difference with the SQL component is that the query is part of the endpoint URI.  Also, we are loading the datasource into the SqlComponent class.

    blueprint.xml design view
    blueprint.xml source view of SqlComponent with datasource reference
    SQL Component excerpt from the Camel Component Page:
    The sql: component allows you to work with databases using JDBC queries. The difference between this component and JDBC component is that in case of SQL the query is a property of the endpoint and it uses message payload as parameters passed to the query.   From Camel 2.11 onwards this component can create both consumer (e.g. from()) and producer endpoints (e.g. to()).  In previous versions, it could only act as a producer.

    This component uses spring-jdbc behind the scenes for the actual SQL handling.  Maven users will need to add the camel-sql dependency to their pom.xml for this component.  The SQL component uses the following endpoint URI notation:

    sql:select * from table where id=# order by name[?options]

    You can append query options to the URI in the following format, ?option=value&option=value&...
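
    For comparison with use case 1, here is a rough Java DSL sketch of the same flow with the sql component; note how the query moves out of the message body and into the endpoint URI, and how the datasource is referenced by bean name.  The bean name and query are again placeholders, not the exact names used in the demo.

    import org.apache.camel.builder.RouteBuilder;

    public class SqlPollRoute extends RouteBuilder {
        @Override
        public void configure() {
            // The query is part of the URI; the dataSource option points at a bean
            // in the registry, mirroring the SqlComponent setup in the Blueprint file.
            from("timer:customerContext?period=60000")
                .to("sql:select * from customers?dataSource=#customerContextDS")
                .split(body())
                .log("${body}");
        }
    }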

    Use Case 3 - Olingo Component

    The Olingo component will be a part of Fuse 6.2, so we decided to wait until that release to document and add it to this demo.  You can try an example with Camel 2.14, which we have in the https://github.com/jbossdemocentral/dv-fuse-integration-demo/tree/master/projects/DVWorkspacewithFuseTest/olingo2 folder of the project.  We will cover the Olingo component in more detail in a follow-up article.

    Olingo Component excerpt from Camel Component Page:
    The Olingo2 component utilizes Apache Olingo version 2.0 APIs to interact with OData 2.0 and 3.0 compliant services. A number of popular commercial and enterprise vendors and products support the OData protocol. A sample list of supporting products can be found on the OData website.

    Maven users will need to add the camel-olingo2 dependency to their pom.xml for this component.  The URI format for the Olingo component is:

    olingo2://endpoint/?[options]

    Use Case 4 - JETTY Component for a REST Service

    For Use Case 4 we use a REST service to expose the OData Data Virtualization Service.

    blueprint.xml design view
    blueprint.xml source view of the route
    Note the CustomerContextVDB OData service is being used with the DV Username and Password as parameters.  This returns all the data when accessing the Jetty URL, http://localhost:9000/usecase4.

    Jetty Component excerpt from the Camel Component Page:
    The jetty component provides HTTP-based endpoints for consuming and producing HTTP requests. That is, the Jetty component behaves as a simple web server. Jetty can also be used as an HTTP client, which means you can also use it with Camel as a producer.

    Maven users will need to add the camel-jetty dependency to their pom.xml for this component.  The URI format is:

    jetty:http://hostname[:port][/resourceUri][?options]

    You can append query options to the URI in the following format, ?option=value&option=value&...
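
    A rough Java DSL sketch of the use case 4 route is shown below: a Jetty endpoint exposes http://localhost:9000/usecase4 and bridges each request to the CustomerContext VDB's OData service on the DV server.  The OData path, username and password here are placeholders, not the exact values from the demo.

    import org.apache.camel.builder.RouteBuilder;

    public class ODataProxyRoute extends RouteBuilder {
        @Override
        public void configure() {
            // Consume HTTP requests on port 9000 and forward them to the OData
            // endpoint of the VDB, passing DV credentials with basic authentication.
            from("jetty:http://0.0.0.0:9000/usecase4")
                .to("http://localhost:8080/odata/CustomerContextVDB/Customer"
                    + "?bridgeEndpoint=true&authMethod=Basic"
                    + "&authUsername=teiidUser&authPassword=password");
        }
    }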

    Running the Project

    Step 1: Download and unzip the repository or clone the repository. If running on Windows, it is recommended the project be extracted to a location near the root drive path due to limitations on the length of file/path names.

    Step 2: Add the DV and Fuse Products to the software directory.  You can download them from the Customer Support Portal (CSP) or jboss.org.

    Step 3: Run 'init.sh' or 'init.bat' to setup the environment locally. 'init.bat' must be run with Administrative privileges.

    Step 4: Run 'run.sh' or 'run.bat' to start the servers, create the container and deploy the bundles.

    Step 5: Sign onto the Fuse Management console, http://localhost:8181, with the admin user and check the console log to see the output from the routes for the use cases. You can also view the Camel Diagrams.  Browse to http://localhost:9000/usecase4 to see the data for Use Case 4 through Jetty.

    The demo can be run in a docker container in addition to a local install. Full instructions can be found in support/docker/README.md of the project.

    Sunday, February 15, 2015

    JBoss Data Virtualization Sizing Architecture Tool


    The JBoss Data Virtualization Sizing Architecture Tool is a simple web application that has around 10 - 15 questions.  After all questions are answered and submitted, corresponding recommendations for Data Virtualization will be presented.  The recommendations include:
    • How many servers are needed, with how many cores?
    • How much memory/JVM size for each node?
    • Suggestions of configuration changes for any performance improvement.
    Follow the link, sign on with your Red Hat account, and click Start to enter responses to the questions and get a recommendation.



    Friday, January 9, 2015

    JBoss Data Virtualization 6.1 Beta Available


    JDV 6.1 beta is available for download from
    - JBoss.org at http://www.jboss.org/products/datavirt/overview/
    - Customer Portal at  https://access.redhat.com/products/red-hat-jboss-data-virtualization

    JDV 6.1 beta Documentation is available at https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/

    For JDV 6.1, we focused on three major areas:

        •    Big Data
        •    Cloud
        •    Development and Deployment Improvements

    with the following new features and enhancements

    BIG DATA

     - Cloudera Impala
    In addition to the Apache Hive support released in JDV 6.0, JDV 6.1 will also offer support for Cloudera Impala for fast SQL query access to data stored in Hadoop.  Support of Impala is aligned with our growing partnership with Cloudera that was announced in October.

    - Apache Solr
    New in JDV 6.1 is support for Apache Solr as a data source.  With Apache Solr, JDV customers will be able to take advantage of enterprise search capabilities for organized retrieval of structured and unstructured data.

    - MongoDB
    Support for MongoDB as a NoSQL data source was released in Technical Preview in JDV 6.0 and will be fully supported in JDV 6.1. Support of MongoDB brings support for a document-oriented NoSQL database to JDV customers.

    - JDG 6.3
    Support for JDG as a data source was new in JDV 6.0.  We expand on this support in JDV 6.1, with the ability to perform writes in addition to reads.  JDV 6.1 users can also take advantage of JDG Library mode as an embedded cache in addition to the support as a remote cache that was previously available.

    - Apache Cassandra (Tech Preview)
    Apache Cassandra will be released as a Technical Preview in JDV 6.1.  Support of Apache Cassandra brings support for the popular columnar NoSQL database to JDV customers.

    CLOUD

    - OpenShift Online with new WebUI
    We introduced JDV in OpenShift Online as Developer Preview with the JDV 6.0 release and will update our Developer Preview cartridge for JDV 6.1. With JDV 6.1, we are adding a WebUI that focuses on ease of use for web and mobile developers.  This lightweight user interface allows users to quickly access a library of existing data services, or create one of their own in a top-down manner.  Getting Started instructions can be found here:  https://developer.jboss.org/wiki/IntroToTheDataVirtualizationWebInterfaceOnOpenShift

    - SFDC Bulk API
    With JDV 6.1 we improve support for the Salesforce.com Bulk API with a more RESTful interface and better resource handling.  The SFDC Bulk API is optimized for loading very large sets of data.

    - Cloud Enablement
    With JDV 6.1 we will have full support of JBoss Data Virtualization on Amazon EC2 and Google Compute Engine.


    PRODUCTIVITY AND DEPLOYMENT IMPROVEMENTS

    -Security audit log dashboard
    Consistent centralized security capabilities across multiple heterogeneous data sources is a key value proposition for JDV.  In JDV 6.1 we add a security audit log dashboard that can be viewed in the dashboard builder which is included with JDV.   The security audit log works with JDV’s RBAC feature and displays who has been accessing what data and when.

    - Custom Translator improvements
    JDV offers a large number of supported data sources out of box and also provides the capability for users to build their own custom translators. In JDV 6.1 we are providing features to improve usability including archetype templates that can be used to generate a starting maven project for custom development.  When the project is created, it will contain the essential classes and resources to begin adding custom logic.

    - Azul Zing JVM
    JDV 6.1 will provide support for Azul Zing JVM.  Azul Zing is optimized for Linux server deployments and designed for enterprise applications and workloads that require any combination of large memory, high transaction rates, low latency, consistent response times or high sustained throughput.

    - MariaDB
    JDV 6.1 will support MariaDB as a data source.  MariaDB is the default implementation of MySQL in Red Hat Enterprise Linux 7. MariaDB is a community-developed fork of the MySQL database project, and provides a replacement for MySQL. MariaDB preserves API and ABI compatibility with MySQL and adds several new features.

    - Apache POI Connector for Excel
    JDV has long supported Microsoft Excel as a data source.  In JDV 6.1, we add support for the Apache POI connector that allows reading of Microsoft Excel documents on all platforms.

    - Performance Improvements
    We continue to invest in improved performance with every release of JDV.  In JDV 6.1, we focused particularly on improving performance with dependent joins including greater control over full dependent join pushdown to the datasource(s).

    - EAP 6.3
    JDV 6.1 will be based on EAP 6.3 and take advantage of the new patching capabilities provided by EAP.

    Monday, October 27, 2014

    The past week in review for the JBoss Community

    This week is my first time posting the weekly editorial and I am excited to join the team. There is a lot to highlight in this week's editorial. Autumn is now upon us in the Northern Hemisphere, which marks the transition from summer to winter. Night arrives earlier, temperatures are cooling, and leaves are turning color and falling. I was at the Boston office a couple of weeks ago and the area was beautiful with the changing colors of the leaves and the cool weather. With the transition from warm to cold weather, autumn is known as the primary harvest season, with many harvest festivals celebrated across the globe. Whether you celebrate Labor Thanksgiving Day in Japan, the Dutch Feast of Saint Martin of Tours, American Thanksgiving, Canadian Thanksgiving, German Martinmas, the Czech Posviceni/Obzinky, the Chinese Harvest Moon Festival, etc., have a great autumn.

    Now on to our exciting JBoss weekly content

    Job Opening


    Red Hat is the best company in the world to work for. I have enjoyed Red Hat since day one and continue to enjoy the work, the people and the open source culture. We have a current job opening for a Software Sustaining Engineer who will help improve the quality of the BRMS and BPM Suite platforms, which are the productised versions of the Drools and jBPM open source projects. So if you love Drools and jBPM, and want to help make them even better and even more robust, then this is the job for you. The role is remote, so you can be based almost anywhere.


    Events


    We had several events that took place plus some coming up:
    • 300+ kids, 16 speakers (4 from middle/high school), 6 rooms, 24 sessions of 75 mins each = extremely rewarding weekend + inspired kids! Silicon Valley Code Camp Kids (SVCC.kids) is a one-day event that is a new addition to the famous Silicon Valley Code Camp (SVCC) event. The event was held at Foothill College in Los Altos Hills, CA on October 12th.
    • DecisionCAMP 2014 took place in San Jose on October 13-15; it is a free conference in the San Jose area for business rules and decision management practitioners. The conference concentrates on Business Rules and Decision Management. Decision Management is the art or science, depending on your perspective, of automating decisions in your systems.
    • Last week was the Openslava 2014 Conference for emerging technologies and open-source in Bratislava, Slovakia. Videos from the talks will be published soon. Markus Eisele posted a Trip Report which also included a video and presentation on 50 best features of Java EE 7.
    • Coming up in November we have several people from Red Hat involved at Devoxx BE 2014. Devoxx has grown to be one of the most popular Java conference series in Europe. This year we are excited to announce that JBoss will be presenting a keynote on the future capabilities of PaaS. We have several speakers who are speaking on a variety of topics. Visit JBoss Community members at Devoxx University, the Hackergarten, the sessions or have a drink with us at Nox!

    Blog/Articles


    A lot of blogs and articles were posted the last couple of weeks so I listed them here for your reading pleasure:
    1. JBoss Teiid
    2. JBoss BRMS and JBoss BPM Suite
    3. JBoss Fuse
    4. JBoss Wildfly
    5. JBoss Aerogear
    • Markus Eisele provided another episode of his developer interview series, this time with Matthias Wessendorf. Matthias is working at Red Hat where he is leading the AeroGear project. Previously, he was the PMC Chair of the Apache MyFaces project. Matthias is a regular conference speaker.

    Releases


    The last couple of weeks we had several new project releases. Take all of them for a spin and enjoy!
    • JBoss Tools and Developer Studio for Eclipse Luna! There have been many feature additions and a lot of bug fixing polish going into this main release and these have been documented/described in detail at What’s New.
    • Immutant 2 (The Deuce) Alpha2 Released! We're as happy as a cat getting vacuumed to announce our second alpha release of The Deuce, Immutant 2.0.0-alpha2. Big, special thanks to all our early adopters who provided invaluable feedback on alpha1 and our incremental releases.
    • Infinispan 7.0.0.CR2 released! As we approach final release, the main themes of this CR were bug fixes and enhancements, many related to Partition Handling.
    • JGroups 3.6.0.Final released! We just released 3.6.0.Final to SourceForge [1] and Nexus. It contains a few new features, but mostly optimizations and a few bug fixes. It is a small release before starting work on the big 4.0.
    • RichFaces 4.5.0.CR2 Release Announcement! We have a second candidate release for RichFaces 4.5 (4.5.0.CR2) available. We’ve fixed a couple of regressions uncovered by both our community and QA team.
    • Teiid 8.9 CR1 Posted! After a small delay Teiid 8.9 CR1 has been posted to the maven repository and the download page.
    • SwitchYard 2.0.0.Alpha3 Now Available! The SwitchYard team has been making steady progress on the 2.0 release and I'm pleased to announce the latest preview of SwitchYard 2.0, Alpha3. We're rapidly approaching beta quality and the only thing keeping this release from being called a beta is the lack of support for BPM and rules components on WildFly. Overall, the team has made great progress improving stability, especially on Fuse/Karaf.

    That's all for this week, please join us next week when we will share more news about the JBoss Community.

    Thursday, October 16, 2014

    Government and Industry Partnership Summit


    I am doing an Ignite talk at the C5ISR (Command, Control, Communications, Computers, Combat Systems, Intelligence, Surveillance, and Reconnaissance) Conference this year in Charleston, SC on November 19.   Over the past seven years, the Annual C5ISR Government/Industry Partnership Summit has grown into the premier East Coast technical event, and the 8th Annual 2014 Summit is shaping up to be the best event yet!  This year's theme is “Technologies Enabling Information Dominance”, and the summit will feature national-level speakers, interactive workshops, specialized technical tracks, receptions, exhibits, and as always, unparalleled networking opportunities!

    You can find the agenda here: http://www.cvent.com/events/eighth-annual-c5isr-government-and-industry-partnership-summit/agenda-9049a22c7fca4e858a6c70acfd06c611.aspx

    You can find the registration information here: http://www.cvent.com/events/eighth-annual-c5isr-government-and-industry-partnership-summit/invitation-9049a22c7fca4e858a6c70acfd06c611.aspx

    Speakers for the 8th Annual Summit include:

    • VADM Ted Branch, Deputy Chief of Naval Operations for Information Dominance, and Director of Naval Intelligence. (Accepted)
    • VADM Jan Tighe, Commander, U.S. Fleet Cyber Command (U.S. 10th Fleet) (Accepted)
    • RADM David Lewis, SPAWARSYSCOM (Accepted)
    • Major General Vincent Stewart, Commander, MARFORCYBERCOM (Accepted)
    • BGen Kevin Nally, USMC CIO (Accepted)
    • RDML Christian Becker, PEO C4I; PEO Space Systems (Accepted)
    • Mr. David DeVries, Acting DoD CIO (Accepted)
    • Ms. Janice Haith, DoN, Deputy CIO (Accepted)
    Find out more information by visiting the website: http://www.cvent.com/events/eighth-annual-c5isr-government-and-industry-partnership-summit/event-summary-9049a22c7fca4e858a6c70acfd06c611.aspx

    Wednesday, September 3, 2014

    Discover Red Hat and Apache Hadoop for the Modern Data Architecture


    I will be doing 2 joint webinars in September with Hortonworks.  Please register here and join us for Hadoop and Data Virtualization  Use Cases.

    As the enterprise's big data program matures and Apache Hadoop becomes more deeply embedded in critical operations, the ability to support and operate it efficiently and reliably becomes increasingly important. To aid enterprises in operating a modern data architecture at scale, Red Hat and Hortonworks have collaborated to integrate HDP with Red Hat's proven platform technologies.

    Join us in this interactive series, as we'll demonstrate how Red Hat JBoss Data Virtualization can integrate with Hadoop through Hive and provide users easy access to data.

    Here's what you'll be signing up for:

    Webinar 1, September 3 @10am PST: Red Hat and Hortonworks: Delivering the open modern data architecture
    Webinar 2, September 10 @10am PST: Red Hat JBoss Data Virtualization and HDP: Evolving your data into strategic asset (demo/deep dive)
    Webinar 3, September 17 @10am PST: Red Hat JBoss and Hortonworks: Enabling the Data Lake (demo/deep dive)

    You'll receive an individualized email for each webinar (3 in total) upon registration.

    Speakers:
    John Kreisa, VP Strategic Marketing, Hortonworks
    Raghuram Thiagarajan, Director, Product Management, Hortonworks
    Robert Cardwell, VP Strategic Partnerships and Alliances, Red Hat
    Syed Rasheed, Senior Principal Product Marketing, Red Hat
    Kim Palko, Principal Product Manager, Middleware, Red Hat
    Kenneth Peeples, Principal Product Marketing Manager, Middleware, Red Hat