Tuesday, April 21, 2015

Connecting to SAP from Fuse 6.1 Part 2 - Java Connector (JCo) Component

In the first part of connecting to SAP from Fuse 6.1 we showed how to use the Camel SAP Netweaver Gateway component with a sample developer account and sample data.  The Netweaver Gateway component has been part of Camel since 2.12.  In this second part we examine the Camel SAP JCo (Java Connector) component.  This component is included in the Fuse 6.1 Enterprise product and is supported by Red Hat, but it is not yet part of the Camel community distribution.

The diagram shows the technical schema of data conversion in the SAP JCo (standalone version). Starting from a Java application, a Java method call is forwarded via the JCo Java API (Application Programming Interface) and an additional middleware interface to the RFC middleware, where it is converted to an RFC (ABAP) call using the JNI (Java Native Interface) layer and sent to the SAP system. In the other direction, an RFC call is converted to Java by the same mechanism and forwarded to the Java application.

SAP provides the SAP Java Connector as a standalone software component that can be installed independently of the SAP system. You can access the installation files at service.sap.com/connectors.  To download the Java Connector you must have an S-user or service account, which is explained in this article.



Overview

The SAP component enables outbound and inbound communication to and from SAP systems using synchronous remote function calls (sRFC).  The component uses the SAP Java Connector (SAP JCo) library to facilitate bidirectional communication with SAP and supports two types of endpoints: destination endpoints and server endpoints.  You can find more on the Fuse 6.1 Camel JCo component in the Red Hat documentation.  We will give a quick comparison of the two components, discuss the demonstration setup, and then show how to run the demonstration.

SAP Netweaver Gateway Component

Pros
  • Familiar tools and technologies for Java developers
  • Existing ABAP functions/dialogs can easily be exposed as a gateway service
Cons
  • Netweaver Gateway needs to be installed in the SAP backend, or separately at a cost
  • Creating services in ABAP is not trivial for more complex scenarios
  • Not transactional
JCo Camel Component

Pros
  • Fits well into the Java EE world
  • No additional installs on SAP backend
  • Bidirectional communication (Java Calls SAP, SAP calls Java)
  • Transactional
Cons
  • Proprietary protocol
  • Complexity
Demonstration Overview

For our sample we will start a timer that fires only once, get the customers, log them, and then save the customers to a file.  The Camel route is shown in this diagram.  Let's take a quick look at the setup of the project in order to use the JCo component.


Project Setup

The following will be required in the pom.xml:

<dependency>
  <groupId>com.sap.conn.jco</groupId>
  <artifactId>sapjco3</artifactId>
  <version>3.0.11</version>
  <scope>system</scope>
  <systemPath>/home/kpeeples/sapjco3/sapjco3.jar</systemPath>
</dependency>
<dependency>
  <groupId>org.fusesource</groupId>
  <artifactId>camel-sap</artifactId>
  <version>1.0.0.redhat-379</version>
  <exclusions>
    <exclusion>
      <groupId>com.sap.conn.jco</groupId>
      <artifactId>sapjco3</artifactId>
    </exclusion>
  </exclusions>
</dependency>

The first dependency defines the location of sapjco3.jar; the sapjco3 folder also contains the native library libsapjco3.so.  The second dependency is required for the camel-sap component.  Now we can define our route in our Camel context.  The URI scheme of the component is:

sap:[destination:destinationName|server:serverName]rfcName?options

The destination: prefix designates a destination endpoint and destinationName is the name of a specific outbound connection to an SAP instance. Outbound connections are named and configured at the component level. The rfcName in a destination endpoint URI is the name of the RFC invoked by the endpoint in the connected SAP instance.

The server: prefix designates a server endpoint and serverName is the name of a specific inbound connection from an SAP instance. Inbound connections are named and configured at the component level.  The rfcName in a server endpoint URI is the name of the RFC handled by the endpoint when invoked from the connected SAP instance.
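
For example, the demo below invokes an RFC through a destination endpoint; a server endpoint follows the same pattern (the server and RFC names in the second line are hypothetical, shown only to illustrate the format):

```
sap:destination:NPL:BAPI_FLCUST_GETLIST    (invoke BAPI_FLCUST_GETLIST in the NPL system)
sap:server:MYSERVER:ZEXAMPLE_RFC           (handle calls to ZEXAMPLE_RFC from an SAP system)
```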

The SAP component maintains three maps to store destination data, server data and repository data. The component property destinationDataStore stores destination data keyed by destination name, serverDataStore stores server data keyed by server name, and repositoryDataStore stores repository data keyed by repository name. These configurations must be passed to the component during its initialization.  For our demo, since we are only going to retrieve customer data, we have the following:

  <bean id="sap" class="org.fusesource.camel.component.sap.SAPComponent">
   <property name="destinationDataStore">
    <map>
     <entry key="NPL" value-ref="nplDestinationData" />
    </map>
   </property>
   <property name="serverDataStore">
    <map />
   </property>
   <property name="repositoryDataStore">
    <map />
   </property>
  </bean>

The configurations for destinations are maintained in the destinationDataStore property of the SAP component. Each entry in this map configures a distinct outbound connection to an SAP instance. The key for each entry is the name of the outbound connection and is used in the destinationName component of a destination endpoint URI as described in the URI format section.
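
Additional outbound connections can be registered in the same map. As a sketch, a second (hypothetical) system named QAS would be added alongside NPL like this, each entry backed by its own DestinationData bean:

```xml
<property name="destinationDataStore">
 <map>
  <entry key="NPL" value-ref="nplDestinationData" />
  <!-- hypothetical second SAP system -->
  <entry key="QAS" value-ref="qasDestinationData" />
 </map>
</property>
```

A route could then target that system with a URI of the form sap:destination:QAS:rfcName.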

  <bean id="nplDestinationData" class="org.fusesource.camel.component.sap.model.rfc.impl.DestinationDataImpl">
   <property name="ashost" value="nplhost" />
   <property name="sysnr" value="00" />
   <property name="client" value="001" />
   <property name="user" value="developer" />
   <property name="passwd" value="password" />
   <property name="lang" value="en" />
  </bean>

Now we can look at our camel context.


<camel:camelContext xmlns="http://camel.apache.org/schema/spring">
  <camel:route>
    <camel:from uri="timer:runOnce?repeatCount=1" />
    <camel:to uri="sap:destination:NPL:BAPI_FLCUST_GETLIST" />
    <camel:to uri="log:sapintegration?level=INFO" />
    <camel:to uri="file:target?fileName=BAPI_FLCUST_GETLIST.xml" />
  </camel:route>
</camel:camelContext>

We start the route with a timer that fires only once.  We use the SAP component with a destination endpoint using the destination name NPL and the rfcName BAPI_FLCUST_GETLIST, whose data we populate during the SAP setup, which in our case is the cloud appliance described below.  Once we retrieve the customer list from SAP, we log the response and save the data to a file.

The full source code is at https://github.com/jbossdemocentral/fuse-components-sap.  Before we can run the Camel context, however, we have to set up the SAP server with the data.  As stated before, our demonstration uses a Cloud Appliance.

I'll assume you have access to an SAP instance.  If that is not the case, see the article below on creating your own SAP Cloud Appliance in AWS, which covers what's needed for running this demo:

- Amazon account creation and prerequisites for configuring it
- Using the Access and Secret keys
- The 5 steps to get the SAP Cloud Appliance ready in cal.sap.com
- Create and install a Minisap license
- Create example data to work with (380 entries on CUSTOMER_LIST)

Run the Project to list the Customer Data
After creating the project as described above, or cloning the repository and importing it into JBDS, right-click camel-context.xml under src/main/resources/META-INF/spring/, then select Run As, then Camel Context (without tests).


You will see the Camel Context start with the results below.  You can turn on trace to get more information.


You can view the BAPI_FLCUST_GETLIST.xml file to find all the data returned from SAP.

<bapi_flcust_getlist:response xmlns:bapi_flcust_getlist="http://sap.fusesource.org/rfc/NPL/BAPI_FLCUST_GETLIST">
  <customer_list>
    <row city="Walldorf" countr="DE" countr_iso="DE" custname="SAP AG" customerid="00000001" email="info@sap.de" form="Firma" phone="06227-34-0" pobox="" postcode="69190" region="" street="Dietmar-Hopp-Allee 16"/>
    <row city="Walldorf" countr="DE" countr_iso="DE" custname="Andreas Klotz" customerid="00000002" email="Andreas.Klotz@sap.com" form="Herr" phone="05344-676792" pobox="" postcode="69190" region="" street="Pimpinellenweg 9"/>
    ...
    <row city="Mannheim" countr="DE" countr_iso="DE" custname="Christine Pan" customerid="00004686" email="Christine_Pan@Mannheim.net" form="Frau" phone="0621/812547" pobox="" postcode="68163" region="" street="Emil-Heckel-Str. 102"/>
    <row city="Emmendingen" countr="DE" countr_iso="DE" custname="Horst Mechler" customerid="00004687" email="Horst_Mechler@Emmendingen.net" form="Herr" phone="07641 927813" pobox="" postcode="79312" region="" street="Elzstrasse 27"/>
  </customer_list>
  <customer_range/>
  <extension_in/>
  <extension_out/>
  <return>
    <row field="" id="BC_IBF" log_msg_no="000000" log_no="" message="Method was executed successfully" message_v1="" message_v2="" message_v3="" message_v4="" number="000" parameter="" system="NPLCLNT001" type="S"/>
  </return>
</bapi_flcust_getlist:response>
 

Setting up a SAP Cloud Appliance for Testing with the JBoss Fuse Camel JCo Component


This article will help you set up an SAP Cloud Appliance, with an SAP Netweaver Application Server ABAP 7.4 on SAP MaxDB [trial edition] instance running on AWS, so you can test the JBoss Fuse Camel JCo component.  An example usage with Fuse is in this article.

We will go through the steps to get the instance ready, with the license and some example data, so you can begin testing.  The main steps are as follows:

- Amazon account creation and prerequisites for configuring it
- Using the Access and Secret keys
- The 5 steps to get the SAP Cloud Appliance ready in cal.sap.com
- Create and install a Minisap license
- Create example data to work with (380 entries on CUSTOMER_LIST)

Cloud Appliance
First we must have an Amazon account at aws.amazon.com.  Amazon will be our cloud provider, and we must have the security credentials (Access Key and Secret Key) to add to the SAP CAL account. These types of accounts can be created using consolidated billing in AWS. Note that there are several prerequisites for configuring your AWS account:
  • Enable the Amazon EC2 service for your AWS account
  • Your IAM user has the following roles: AmazonEC2FullAccess, AmazonVPCFullAccess, ReadOnlyAccess, and AWSAccountUsageReportAccess
For more information about specific questions for the AWS cloud provider for SAP, see the FAQ.

After adding the EC2 service on AWS, you need the following:
  • The Access Key and the Secret Key of your AWS account:
  1. Navigate to http://aws.amazon.com.
  2. Log on to your account.
  3. Choose Account, then Security Credentials.
  4. In the Access Credentials section:
    1. To see your access key, choose the Access Keys tab.
    2. To see your secret key, choose the Secret Access Key tab and then choose the Show link.
  • A virtual private cloud (VPC) configured in the AWS US-East (Virginia) region.


Now that we have the Access Keys we can get the SAP Cloud Appliance ready.  We have to go to cal.sap.com to go through 5 steps:


1. Register


2. Add the security credentials


3. Add the solution; in our case we chose the SAP Netweaver Application Server ABAP 7.4 on SAP MaxDB [trial edition].


4. Create the instance, which can take a while to activate/start


5. Now we can connect to the SAP system and add the sample data.  Click Connect under Operations, which brings up the access point.


You can click on SAP GUI (NPL,001,00), but before doing so you need the SAP GUI installed in order to connect to the instance.  You can get the SAP GUI from the service marketplace or from the appliance with ssh/scp.


From the getting started guide which can be accessed through the connection access point screen above:

You need a SAP GUI 7.20 Patch level 9 or above.
For the Windows OS (32 bit and 64 bit) you can find the SAPGUI software package on the server at: /sapmnt/A4H/custom/SAP_GUI_for_Windows_7.30_Patchlevel_4_Hotfix_1_for _SAP_SCN_(Trial)_20130611_0830.exe
You have to copy the corresponding file to your computer and start the self-extraction.
A SAPGUI for the Java Environment can be found on the server at: /sapmnt/A4H/custom/SAP_GUI_FOR_JAVA_730.zip
You have to copy the corresponding file to your computer, unpack the archive and follow the installation instructions.


Create License

The ABAP system comes with a temporary license that allows you to log on to the system. As a first step, before using the system, you need to install a 90-day Minisap license as follows:
1. Log on to ABAP via SAP GUI with user SAP* in client 000 (SAP*/ch4ngeme)




2. Start transaction SLICENSE 
3. Get a “Minisap” license at http://www.sap.com/minisap . As system ID choose NPL - SAP NetWeaver 7.x (MaxDB). As hardware key use the hardware key shown in transaction SLICENSE.


4. Under Edit, click "Install new License" and select the license downloaded in step 3.


5. After license installation, call transaction SECSTORE and run a check for all entries using F8.

This is needed to re-enable RFC, because installing the Minisap license changes the installation number from INITIAL to DEMOSYSTEM. The developer access key for user DEVELOPER and installation number DEMOSYSTEM is already in the system, so you can start developing in the customer name range (Z*, Y*).

Create the Data
Now we can create the data on SAP.
Step 1: Navigate to transaction “SE38”
Step 2: Run the program SAPBC_DATA_GENERATOR clicking the clock with green checkmark
Step 3: Hit the clock/checkmark again
Step 4: Start Transaction SE37 and launch function module “BAPI_FLCUST_GETLIST” by clicking on the wrench
Step 5: Provide a search pattern in the CUSTOMER_NAME field (such as S*) and hit the execute button
Step 6: You should have 380 entries in the resulting CUSTOMER_LIST
Step 7: Click on the list icon to browse the list

Monday, April 20, 2015

Integration Series 1 - JBoss Fuse integration bridges the gap between SAP, SalesForce and mobile apps


We have a guest blogger this week. Luis Cortes, Principal Manager of Product Marketing at Red Hat (@licortes_redhat), will give us an overview of our Salesforce, SAP, Fuse and FeedHenry integration series.

A common need of JBoss Fuse enterprise customers is the creation of business solutions that integrate complex software products such as CRM or ERP systems (think SAP). To this day many of them reside on-premise in the companies’ data centers, although more and more companies are moving them to PaaS and private clouds. In addition, the ever-growing adoption of SaaS services adds new demands to integrate with 3rd party services hosted in public clouds, such as Salesforce.

But we're not done yet. Adding to the always-on, ubiquitous nature of business, the enterprise is going mobile at growing speed, and this requires real-time access from all types of devices to critical information that resides in, and interacts with, the solutions mentioned above.

In the next four blogs of this series, Kenny Peeples will guide us on how JBoss Fuse can be a key element in easily integrating your systems regardless of whether they reside on premise or in the cloud, including mobile interaction.

For this we have decided to showcase Fuse-SAP connectivity via Fuse JCo connector and Fuse NetWeaver Gateway connector; Fuse-SalesForce connectivity via the Fuse SalesForce connector; and Fuse-mobile connectivity via FeedHenry (Red Hat mobile application platform) via its REST API.

Due to the variety of ways our customers run JBoss products, we also want to show you different scenarios, with Fuse running on premise and in the cloud. In the first articles of the series, Fuse will be running on premise and the rest of the pieces in the cloud as services: FeedHenry in the cloud, SAP in the SAP Cloud, and SalesForce, well, in the SalesForce cloud :-) In addition, the last article of the series will showcase the same demo with Fuse also running in the cloud, as iPaaS in OpenShift. We'll give you instructions to run both on premise and in the cloud.





With this, we will highlight four use cases:

1. SalesForce to SAP: The personal data in Salesforce of a customer that has confirmed a purchase will be used to create a new customer record in SAP.

2. Mobile to SalesForce to SAP: Using a smartphone, a sales person closes a sales opportunity, the associated opportunity in SalesForce is updated accordingly and the personal data of the customer is used to create a new customer record in SAP.

3. SAP to SalesForce: A customer is late on payments and gets flagged in SAP, and the Salesforce record is accordingly updated to alert the sales team of a potential sales risk.

4. SAP to Mobile to SalesForce: A customer is late on payments and gets flagged in SAP, an alert appears on the smartphone of the account manager, who puts the customer "on hold", and the Salesforce record is accordingly updated to alert the sales team of a potential sales risk.

As you go through them, think of all the possibilities this opens to integrate these or additional systems using Camel routes and the more than 150 connectors offered by Fuse, and how to use this on your next projects to integrate systems in disparate environments.

References:
Integration Series 1 - Overview from Luis Cortes
Integration Series 1 Use Case 1 - SalesForce to SAP 
Integration Series 1 Use Case 2 - Mobile to SalesForce to SAP
Integration Series 1 Use Case 3 - SAP to SalesForce
Integration Series 1 Use Case 4 - SAP to Mobile to SalesForce



Thursday, April 16, 2015

Red Hat JBoss Enterprise Application Platform 6.2 Achieves Highest Level Common Criteria Certification

JBoss Enterprise Application Platform 6.2 has been awarded the Common Criteria Certification at
Evaluation Assurance Level (EAL) 4+ which is the highest level of assurance for a commercial middleware platform.  

Achieving the Common Criteria Certification for JBoss Enterprise Application Platform 6.2 supports Red Hat's reputation as an industry leader in technology and showcases the company's ongoing commitment to security. In 2012, JBoss Enterprise Application Platform 5.1.0 and 5.1.1 also achieved Common Criteria certification at the EAL4+ assurance level.

The Common Criteria is an internationally recognized set of standards used by the federal government and organizations to assess the security and assurance of technology products. EAL categorizes the depth and rigor of the evaluation, and EAL4+ assures consumers that the software has been methodically designed, tested, and reviewed to meet the evaluation criteria.

As with past Common Criteria certifications, Red Hat worked with atsec information security, a government accredited laboratory in the United States and Germany. atsec tested and validated the security, performance and reliability of the solution against the Common Criteria Standard for Information Security Evaluation (ISO/IEC 15408) at EAL4+.

You can find the Certificate, Security Target and Certification Report at:


You can also find more information on Common Criteria at:


This new certification underlines Red Hat's commitment to providing the highest possible level of security for our products and adds to existing certifications and accreditations for other middleware and infrastructure products:

Internet of Things MQTT Quality of Service Levels


Next week Red Hat is hosting a virtual event, Building Data-driven Solutions for the Internet of Things.  I am presenting a session on connecting to the IoT with a lightweight protocol, MQTT, so I wanted to write some articles on MQTT basics this week.  Also, you can visit the Red Hat IoT pages for more insight on IoT.

Message Queue Telemetry Transport (MQTT) is a client-server publish/subscribe messaging transport protocol. It is lightweight, open, simple, and designed to be easy to implement. These characteristics make it ideal for use in many situations, including constrained environments such as communication in Machine to Machine (M2M) and Internet of Things (IoT) contexts where a small code footprint is required and/or network bandwidth is at a premium.  The protocol runs over TCP/IP, or over other network protocols that provide ordered, lossless, bidirectional connections.

MQTT supports three quality of service levels as seen in the diagram above:
  1. QoS 0 - delivered at most once (fire and forget), meaning no confirmation
  2. QoS 1 - delivered at least once, meaning confirmation is required
  3. QoS 2 - delivered exactly once, meaning a four-step handshake is done
The QoS level defines how hard the broker/client will work to ensure that a message is received. Messages can be published at any QoS level, and clients may subscribe to topics at any QoS level, which means that the client chooses the maximum QoS level at which it will receive messages.

For example, if a message is published at QoS 2 and a client is subscribed with QoS 0, the message will be delivered to that client with QoS 0. If a second client is also subscribed to the same topic, but with QoS 2, then it will receive the same message but with QoS 2.

Another example could be if a client is subscribed with QoS 2 and a message is published on QoS 0, the client will receive it on QoS 0. Higher levels of QoS are more reliable, but involve higher latency and have higher bandwidth requirements.
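
The downgrade rule in both examples is simply the minimum of the two levels. As a minimal illustration (effective_qos is an invented helper, not part of any MQTT client API):

```python
def effective_qos(publish_qos: int, subscribe_qos: int) -> int:
    """Return the QoS level at which a message is actually delivered:
    the lower of the publish QoS and the subscription's maximum QoS."""
    for qos in (publish_qos, subscribe_qos):
        if qos not in (0, 1, 2):
            raise ValueError(f"invalid QoS level: {qos}")
    return min(publish_qos, subscribe_qos)

# The scenarios from the text:
assert effective_qos(2, 0) == 0  # published at QoS 2, subscriber at QoS 0
assert effective_qos(2, 2) == 2  # second subscriber at QoS 2 receives QoS 2
assert effective_qos(0, 2) == 0  # published at QoS 0, subscriber at QoS 2
```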

More detail on each QoS level is below.  The MQTT Control Packet table at the bottom of the article describes the control packets in each QoS flow.

Quality of Service Level 0
The message is delivered at most once, or not at all: delivery across the network is not acknowledged and the message is not stored.  The message might be lost if the client is disconnected or if the server fails.  This is the fastest mode of transfer.  The MQTT protocol does not require servers to forward QoS 0 publications to a client; if the client is disconnected at the time the server receives the publication, the publication might be discarded, depending on the server. (The telemetry (MQXR) service, for instance, does not discard messages sent with QoS 0; they are stored as nonpersistent messages and are only discarded if the queue manager stops.)

In the QoS 0 delivery protocol, the Sender:  MUST send a PUBLISH packet with QoS=0, DUP=0

In the QoS 0 delivery protocol, the Receiver:  Accepts ownership of the message when it receives the PUBLISH packet.

Quality of Service Level 1
The message is always delivered at least once.  If the sender does not receive an acknowledgment, the message is sent again with the DUP flag set until an acknowledgment is received.  As a result, the receiver can be sent the same message multiple times and might process it multiple times.  The message must be stored locally at the sender and the receiver until it is processed.  The receiver deletes the message after it has processed it: if the receiver is a broker, the message is published to its subscribers; if the receiver is a client, the message is delivered to the subscriber application.  After the message is deleted, the receiver sends an acknowledgment to the sender, and the sender deletes the message once it has received that acknowledgment.

This level could be used, for example, where a lost message is unacceptable but a duplicate can be tolerated; purely best-effort data such as ambient sensor readings, where the next reading will be published soon after a lost one, is better suited to QoS 0.

In the QoS 1 delivery protocol, the Sender:
-MUST assign an unused Packet Identifier each time it has a new Application Message to publish.
-MUST send a PUBLISH Packet containing this Packet Identifier with QoS=1, DUP=0.
-MUST treat the PUBLISH Packet as “unacknowledged” until it has received the corresponding PUBACK packet from the receiver.

The Packet Identifier becomes available for reuse once the Sender has received the PUBACK Packet.  Note that a Sender is permitted to send further PUBLISH Packets with different Packet Identifiers while it is waiting to receive acknowledgements.

In the QoS 1 delivery protocol, the Receiver:
-MUST respond with a PUBACK Packet containing the Packet Identifier from the incoming PUBLISH Packet, having accepted ownership of the Application Message
-After it has sent a PUBACK Packet the Receiver MUST treat any incoming PUBLISH packet that contains the same Packet Identifier as being a new publication, irrespective of the setting of its DUP flag.
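
The QoS 1 rules above can be sketched as a small state machine. This is an illustrative sketch, not a real MQTT client; the class names and packet tuples are invented for the example:

```python
class QoS1Sender:
    """Tracks messages that remain 'unacknowledged' until a PUBACK arrives."""

    def __init__(self):
        self._next_id = 1
        self.unacked = {}  # packet identifier -> application message

    def publish(self, message):
        # Assign an unused packet identifier; send PUBLISH with QoS=1, DUP=0.
        pid, self._next_id = self._next_id, self._next_id + 1
        self.unacked[pid] = message
        return ("PUBLISH", pid, message, 0)  # last field is the DUP flag

    def retransmit(self, pid):
        # No PUBACK yet: resend the same packet identifier with DUP=1.
        return ("PUBLISH", pid, self.unacked[pid], 1)

    def on_puback(self, pid):
        # The packet identifier becomes available for reuse.
        self.unacked.pop(pid, None)


class QoS1Receiver:
    """Responds to every PUBLISH with a PUBACK; duplicates may be delivered."""

    def __init__(self):
        self.delivered = []

    def on_publish(self, pid, message, dup):
        # Every incoming PUBLISH is treated as a new publication,
        # irrespective of its DUP flag, so duplicates can reach the app.
        self.delivered.append(message)
        return ("PUBACK", pid)
```

A lost PUBACK therefore means the receiver can process the same message twice, which is exactly the at-least-once guarantee.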

Quality of Service Level 2
The message is always delivered exactly once, and it must be stored locally at the sender and the receiver until it is processed.  QoS 2 is the safest but slowest mode of transfer: it takes at least two pairs of transmissions between the sender and receiver before the message is deleted from the sender, though the message can be processed at the receiver after the first transmission.

In the first pair of transmissions, the sender transmits the message and gets an acknowledgment from the receiver that it has stored the message.  If the sender does not receive this acknowledgment, the message is sent again with the DUP flag set until an acknowledgment is received.  In the second pair, the sender tells the receiver that it can complete processing the message (PUBREL).  If the sender does not receive an acknowledgment of the PUBREL message, the PUBREL message is sent again until an acknowledgment is received; the sender deletes its saved copy of the message when the acknowledgment to the PUBREL arrives.

The receiver can process the message in the first or second phase, provided that it does not reprocess it.  If the receiver is a broker, it publishes the message to subscribers; if the receiver is a client, it delivers the message to the subscriber application.  The receiver sends a completion message back to the sender when it has finished processing the message.

This level could be used, for example, with billing systems where duplicate or lost messages could lead to incorrect charges being applied.

In the QoS 2 delivery protocol, the Sender:
-MUST assign an unused Packet Identifier when it has a new Application Message to publish.
-MUST send a PUBLISH packet containing this Packet Identifier with QoS=2, DUP=0.
-MUST treat the PUBLISH packet as “unacknowledged” until it has received the corresponding PUBREC packet from the receiver.
-MUST send a PUBREL packet when it receives a PUBREC packet from the receiver. This PUBREL packet MUST contain the same Packet Identifier as the original PUBLISH packet.
-MUST treat the PUBREL packet as “unacknowledged” until it has received the corresponding PUBCOMP packet from the receiver.
-MUST NOT re-send the PUBLISH once it has sent the corresponding PUBREL packet.

The Packet Identifier becomes available for reuse once the Sender has received the PUBCOMP Packet.  Note that a Sender is permitted to send further PUBLISH Packets with different Packet Identifiers while it is waiting to receive acknowledgements.

In the QoS 2 delivery protocol, the Receiver:
-MUST respond with a PUBREC containing the Packet Identifier from the incoming PUBLISH Packet, having accepted ownership of the Application Message.
-Until it has received the corresponding PUBREL packet, the Receiver MUST acknowledge any subsequent PUBLISH packet with the same Packet Identifier by sending a PUBREC. It MUST NOT cause duplicate messages to be delivered to any onward recipients in this case.
-MUST respond to a PUBREL packet by sending a PUBCOMP packet containing the same Packet Identifier as the PUBREL.
-After it has sent a PUBCOMP, the receiver MUST treat any subsequent PUBLISH packet that contains that Packet Identifier as being a new publication.
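
The receiver side of this handshake can be sketched as follows (an illustrative sketch only; the class and method names are invented, not any real client API):

```python
class QoS2Receiver:
    """Delivers each packet identifier to the application exactly once."""

    def __init__(self):
        self.awaiting_pubrel = set()  # ids accepted but not yet released
        self.delivered = []           # messages handed onward exactly once

    def on_publish(self, pid, message):
        # A repeated PUBLISH with the same id (e.g. after a lost PUBREC)
        # must be re-acknowledged but must NOT be delivered onward again.
        if pid not in self.awaiting_pubrel:
            self.awaiting_pubrel.add(pid)
            self.delivered.append(message)
        return ("PUBREC", pid)

    def on_pubrel(self, pid):
        # After PUBCOMP, the id may be reused for a brand-new publication.
        self.awaiting_pubrel.discard(pid)
        return ("PUBCOMP", pid)
```

Tracking the packet identifier between PUBLISH and PUBREL is what lets the receiver suppress duplicates while still re-sending PUBREC, giving the exactly-once behavior described above.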

MQTT Control Packet Descriptions
References:
OASIS Documents:
-http://docs.oasis-open.org/mqtt/mqtt/v3.1.1
-https://www.oasis-open.org/standards#mqttv3.1.1
Some info pulled from BB Smartworx and IBM Developer Works

Monday, April 13, 2015

Data Virtualization 6.1 Getting Started Videos


Last month JBoss Data Virtualization 6.1 was released.  It is a release packed with goodness around three major areas: Big Data, Cloud and Development/Deployment Improvements.  To get you started with an initial JDV video series, Blaine Mincey, Senior Solutions Architect, walks you through a soup-to-nuts 3-part series.  Look for more videos soon.  I have also included some new features and links on JDV below.

Getting Started Part 1 - Installing JDV and configuring JBDS with JDV and the Teiid Designer components



Getting Started Part 2 - Creating a Teiid project and a relational model from an XML file



Getting Started Part 3 - Deploying the project created in Part 2 to the JDV server, then accessing the VDB from a Java application using the Teiid JDBC driver



JDV 6.1 Overview

JDV 6.1 GA is available for download from
- JBoss.org at http://www.jboss.org/products/datavirt/overview/
- Customer Portal at https://access.redhat.com/products/red-hat-jboss-data-virtualization

JDV 6.1 Documentation is available at https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Data_Virtualization/

JDV 6.1 WebUI (Developer Preview) is available for download at: https://www.jboss.org/products/datavirt/download/

For JDV 6.1, we focused on three major areas:

• Big Data
• Cloud
• Development and Deployment Improvements

with the following new features and enhancements

BIG DATA

- Cloudera Impala
In addition to the Apache Hive support released in JDV 6.0, JDV 6.1 also supports Cloudera Impala for fast SQL query access to data stored in Hadoop. Support of Impala is aligned with our growing partnership with Cloudera that was announced in October.

- Apache Solr
New in JDV 6.1 is support for Apache Solr as a data source. With Apache Solr, JDV customers will be able to take advantage of enterprise search capabilities for organized retrieval of structured and unstructured data.

- MongoDB
Support for MongoDB as a NoSQL data source was released in Technical Preview in JDV 6.0 and is fully supported in JDV 6.1. Support of MongoDB brings support for a document-oriented NoSQL database to JDV customers.

- JDG 6.4
JDV 6.0 introduced Red Hat JBoss Data Grid (JDG) as a read datasource. We expand on this support in JDV 6.1, with the ability to perform richer queries as well as writes, on both Embedded caches (JDG Library mode) and Remote caches (over Hot Rod protocol).

- Apache Cassandra (Tech Preview)
Apache Cassandra will be released as a Technical Preview in JDV 6.1. Support of Apache Cassandra brings support for the popular columnar NoSQL database to JDV customers.

CLOUD

- OpenShift Online with new WebUI
We introduced JDV in OpenShift Online as Developer Preview with the JDV 6.0 release and have updated our Developer Preview cartridge for JDV 6.1. Also with JDV 6.1, we are adding a WebUI that focuses on ease of use for web and mobile developers. This lightweight user interface allows users to quickly access a library of existing data services, or create one of their own in a top-down manner. Getting Started instructions can be found here: https://developer.jboss.org/wiki/IntroToTheDataVirtualizationWebInterfaceOnOpenShift

Note that the JDV WebUI is also available for use with JDV on premise as a Developer Preview and can be downloaded from JBoss.org at the link above.

- SFDC Bulk API
With JDV 6.1 we improve support for the Salesforce.com Bulk API with a more RESTful interface and better resource handling. The SFDC Bulk API is optimized for loading very large sets of data.

- Cloud Enablement
With JDV 6.1 we will have full support of JBoss Data Virtualization on Amazon EC2 and Google Compute Engine.

PRODUCTIVITY AND DEPLOYMENT IMPROVEMENTS

- Security audit log dashboard
Consistent, centralized security capabilities across multiple heterogeneous data sources are a key value proposition for JDV. In JDV 6.1 we add a security audit log dashboard that can be viewed in the dashboard builder included with JDV. The security audit log works with JDV's RBAC feature and displays who has been accessing what data and when.

- Custom Translator improvements
JDV offers a large number of supported data sources out of box and also provides the capability for users to build their own custom translators. In JDV 6.1 we are providing features to improve usability including archetype templates that can be used to generate a starting maven project for custom development. When the project is created, it will contain the essential classes and resources to begin adding custom logic.

- Azul Zing JVM
JDV 6.1 will provide support for Azul Zing JVM. Azul Zing is optimized for Linux server deployments and designed for enterprise applications and workloads that require any combination of large memory, high transaction rates, low latency, consistent response times or high sustained throughput.

- MariaDB
JDV 6.1 will support MariaDB as a data source. MariaDB is the default implementation of MySQL in Red Hat Enterprise Linux 7. MariaDB is a community-developed fork of the MySQL database project, and provides a replacement for MySQL. MariaDB preserves API and ABI compatibility with MySQL and adds several new features.

- Apache POI Connector for Excel
JDV has long supported Microsoft Excel as a data source. In JDV 6.1, we add support for the Apache POI connector that allows reading of Microsoft Excel documents on all platforms.

- Performance Improvements
We continue to invest in improved performance with every release of JDV. In JDV 6.1, we focused particularly on improving performance with dependent joins including greater control over full dependent join pushdown to the datasource(s).

- EAP 6.3
JDV 6.1 will be based on EAP 6.3 and take advantage of the new patching capabilities provided by EAP.

- Java 8
With JDV 6.1 we offer support for Java 8 in addition to Java 7 and Java 6.









Tuesday, April 7, 2015

Building Data-Driven Solutions for the Internet of Things

I am excited to be a part of the Red Hat Internet of Things Virtual Event. 

The virtual event will include a keynote featuring Red Hat executive insight, remarks from an industry analyst, and strategic partner perspectives.

Following the keynote, you can choose from 9 breakout sessions organized into 3 unique tracks as outlined below. Join discussions led by Red Hat subject matter experts and strategic partners and deep dive into aggregating, analyzing, and acting on enterprise IoT data.

Everything is connected in the Internet of Things. People, devices, machines, and more are all part of a network - sending and receiving data to and from other "things." What new opportunities can the IoT create for your business?

Join the Red Hat virtual event Building Data-driven Solutions for the Internet of Things on April 23 at 11 a.m. (EST) / 15:00 (GMT) and hear how open source solutions can help you unlock the value of your enterprise data.

For more information visit the event site.