Monday, March 2, 2015

Using a Customer Context with the Camel Components and Data Virtualization


Overview


Cojan van Ballegooijen, Red Hat Senior Solution Architect, Bill Kemp, Red Hat Senior Solution Architect, and I have created an example around a Customer Context use case to show how to use the Camel components in Fuse to access a Data Virtualization Virtual Database (VDB).  The data service provides the customer context, which is data aggregated from an XML file and a CSV file.  The data on each customer provides the name, the credit score, the number of calls the customer has placed to customer support, and the sentiment (Hot, Cold, Warm) toward the company from social media.  We will review the components and show how to run the demo.  The demo repository is located in jbossdemocentral on GitHub.  In our project directory we have the individual use cases, which are built and deployed when running the scripts.  The Teiid JDBC jar is loaded into the profile with the wrap protocol during the run script.

Use Case 1 - JDBC Component

In the first use case we set up a bean for the SQL query that we want to execute and a bean for the datasource properties.  A timer component runs a query every 60 seconds; the results from the query are then split into individual records and sent to the log.  We are using the Blueprint DSL in the blueprint.xml.

blueprint.xml design view
blueprint.xml source view with the query property and datasource properties
Note the URL that accesses the CustomerContext Virtual Database.  Also note that the query is set in the message body and the datasource name is part of the jdbc URI.

JDBC Component excerpt from the Camel Component Page:
The jdbc component enables you to access databases through JDBC, where SQL queries (SELECT) and operations (INSERT, UPDATE, etc.) are sent in the message body. This component uses the standard JDBC API, unlike the Camel SQL Component, which uses spring-jdbc.

Maven users will need to add the camel-jdbc dependency to their pom.xml for this component.  This component can only be used to define producer endpoints, which means that you cannot use the JDBC component in a from() statement.  The URI format for the JDBC component is:

jdbc:dataSourceName[?options]

You can append query options to the URI in the following format: ?option=value&option=value&...
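Putting these pieces together, a minimal Blueprint sketch of the route might look like the following. The bean IDs, the Teiid connection properties, and the query are hypothetical placeholders; the demo's actual blueprint.xml is in the repository.

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <!-- Datasource bean; the jdbc: endpoint looks it up by this id -->
  <bean id="customerDS" class="org.teiid.jdbc.TeiidDataSource">
    <property name="databaseName" value="CustomerContext"/>
    <property name="serverName" value="localhost"/>
    <property name="portNumber" value="31000"/>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route id="usecase1-jdbc">
      <!-- fire every 60 seconds -->
      <from uri="timer://usecase1?period=60000"/>
      <!-- the jdbc component reads the SQL statement from the message body -->
      <setBody>
        <constant>SELECT * FROM CustomerContextView</constant>
      </setBody>
      <!-- producer-only endpoint; the datasource name is part of the URI -->
      <to uri="jdbc:customerDS"/>
      <!-- split the result list into individual records and log them -->
      <split>
        <simple>${body}</simple>
        <to uri="log:usecase1"/>
      </split>
    </route>
  </camelContext>
</blueprint>
```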

Use Case 2 - SQL Component

The second use case is similar to the first: a timer component runs a query every 60 seconds, and the results from the query are then split into individual records and sent to the log.  Again we are using the Blueprint DSL in the blueprint.xml.  The difference with the SQL component is that the query is part of the component's URI.   Also, we load the datasource into the SqlComponent class.

blueprint.xml design view
blueprint.xml source view of SqlComponent with datasource reference
SQL Component excerpt from the Camel Component Page:
The sql: component allows you to work with databases using JDBC queries. The difference between this component and the JDBC component is that with SQL the query is a property of the endpoint, and the message payload is used as parameters passed to the query.   From Camel 2.11 onwards this component can create both consumer (e.g. from()) and producer (e.g. to()) endpoints.  In previous versions, it could only act as a producer.

This component uses spring-jdbc behind the scenes for the actual SQL handling.  Maven users will need to add the camel-sql dependency to their pom.xml for this component.  The SQL component uses the following endpoint URI notation:

sql:select * from table where id=# order by name[?options]

You can append query options to the URI in the following format: ?option=value&option=value&...
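For comparison with use case 1, a minimal Blueprint sketch of this route might look like the following; here the query lives in the sql: URI and the datasource is injected into the SqlComponent. Bean IDs, connection properties, and the view name are hypothetical placeholders; see the demo's blueprint.xml for the real route.

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

  <bean id="customerDS" class="org.teiid.jdbc.TeiidDataSource">
    <property name="databaseName" value="CustomerContext"/>
    <property name="serverName" value="localhost"/>
    <property name="portNumber" value="31000"/>
  </bean>

  <!-- registering the component under id "sql" binds it to the sql: scheme -->
  <bean id="sql" class="org.apache.camel.component.sql.SqlComponent">
    <property name="dataSource" ref="customerDS"/>
  </bean>

  <camelContext xmlns="http://camel.apache.org/schema/blueprint">
    <route id="usecase2-sql">
      <from uri="timer://usecase2?period=60000"/>
      <!-- with the sql component the query is part of the endpoint URI -->
      <to uri="sql:select * from CustomerContextView"/>
      <split>
        <simple>${body}</simple>
        <to uri="log:usecase2"/>
      </split>
    </route>
  </camelContext>
</blueprint>
```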

Use Case 3 - Olingo Component

The Olingo component will be part of Fuse 6.2, so we decided to wait until that release to document and add this component to the demo.  You can try an example with Camel 2.14, which we have in the https://github.com/jbossdemocentral/dv-fuse-integration-demo/tree/master/projects/DVWorkspacewithFuseTest/olingo2 folder of the project.  We will cover the Olingo component in more detail in a follow-up article.

Olingo Component excerpt from Camel Component Page:
The Olingo2 component utilizes Apache Olingo version 2.0 APIs to interact with OData 2.0 and 3.0 compliant services. A number of popular commercial and enterprise vendors and products support the OData protocol. A sample list of supporting products can be found on the OData website.

Maven users will need to add the camel-olingo2 dependency to their pom.xml for this component.  The URI format for the Olingo component is:

olingo2://endpoint/?[options]
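As a rough sketch of what a route could look like, the fragment below polls an OData entity set exposed by the VDB. The resource path, the serviceUri value, and the option names are assumptions for illustration; check them against the camel-olingo2 documentation for the Camel version in use.

```xml
<route id="olingo-read">
  <from uri="timer://olingo?period=60000"/>
  <!-- read an OData entity set from the VDB's OData service
       (resource name and serviceUri are hypothetical) -->
  <to uri="olingo2://read/CustomerContextView?serviceUri=http://localhost:8080/odata/CustomerContext"/>
  <to uri="log:olingo"/>
</route>
```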

Use Case 4 - Jetty Component for a REST Service

For Use Case 4 we use a REST service to expose the OData Data Virtualization Service.

blueprint.xml design view
blueprint.xml source view of the route
Note the CustomerContextVDB OData service is being used with the DV Username and Password as parameters.  This returns all the data when accessing the Jetty URL, http://localhost:9000/usecase4.

Jetty Component excerpt from the Camel Component Page:
The jetty component provides HTTP-based endpoints for consuming and producing HTTP requests. That is, the Jetty component behaves as a simple Web server. Jetty can also be used as an HTTP client, which means you can also use it with Camel as a producer.

Maven users will need to add the camel-jetty dependency to their pom.xml for this component.  The URI format is:

jetty:http://hostname[:port][/resourceUri][?options]

You can append query options to the URI in the following format: ?option=value&option=value&...
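A minimal sketch of such a route in Blueprint XML is shown below: Jetty consumes the incoming HTTP request and an http endpoint forwards it to the VDB's OData service. The OData path, the credentials, and the authUsername/authPassword option names are assumptions to illustrate the idea; see the demo's blueprint.xml for the actual route.

```xml
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
  <route id="usecase4-rest">
    <!-- Jetty acts as a simple web server for incoming requests -->
    <from uri="jetty:http://0.0.0.0:9000/usecase4"/>
    <!-- forward to the CustomerContextVDB OData service with the DV
         username and password as parameters (values are placeholders) -->
    <to uri="http://localhost:8080/odata/CustomerContextVDB?authUsername=dvUser&amp;authPassword=dvPass"/>
  </route>
</camelContext>
```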

Running the Project

Step 1: Download and unzip the repository, or clone it. If running on Windows, it is recommended that the project be extracted to a location near the root drive path due to limitations on the length of file/path names.

Step 2: Add the DV and Fuse Products to the software directory.  You can download them from the Customer Support Portal (CSP) or jboss.org.

Step 3: Run 'init.sh' or 'init.bat' to setup the environment locally. 'init.bat' must be run with Administrative privileges.

Step 4: Run 'run.sh' or 'run.bat' to start the servers, create the container and deploy the bundles.

Step 5: Sign onto the Fuse Management console, http://localhost:8181, with the admin user and check the console log to see the output from the routes for the use cases. You can also view the Camel Diagrams.  Browse to http://localhost:9000/usecase4 to see the data for Use Case 4 through Jetty.

The demo can be run in a docker container in addition to a local install. Full instructions can be found in support/docker/README.md of the project.

Friday, February 20, 2015

SOA and API Summit February 26

This week I prepared our material for the SOA and API Summit that will be presented during a live session February 26. You can download the slides and whitepapers now, then attend the session on February 26th by registering at idevnews.  Click on Reserve a Seat, which will enable you to download the whitepapers and slide deck.  My presentation is titled "Success in the API Economy with Red Hat JBoss".
Title: SOA & APIs Summit
Speakers: Oracle, Red Hat, Axway, Talend
Date/Time: February 26, 2015
10am PT / 1pm ET - Online Conference

SOA & APIs Summit is a multi-vendor online event where industry experts will show how SOA and APIs are transforming the way F1000s think about IT and business models.

Topics to include:
  • SOA & APIs Power the ‘Extended Enterprise’ 
    How today’s SOA and API architectures help IT more easily adopt end-to-end solutions for Big Data, Cloud, Mobile and SaaS  
  • Integration for Real-Time Business 
    New architectures (API, JSON, REST, SOA) are delivering real-time integration for decisions, analytics and more.  
  • Learn from F1000 Success Stories 
    How savvy API / SOA investments reward F1000s with happier customers, thriving partner networks,  growing revenues, smarter and quicker apps.   
  • Cut Coding with SOA / API Strategies 
    Every day, 1000s of app ideas come to life quicker thanks to smart integration platforms that reduce coding – even eliminate it
  • The API-Driven Business 
    API platforms power and secure new ways to share, communicate and innovate – with internal teams and outside partners.


Maximize information exchange in your enterprise with AMQP

This week I presented a webinar on maximizing information exchange in your enterprise with AMQP.  We went through an AMQP overview, a comparison of related technologies, AMQP with Fuse and A-MQ, and a simple demo showing a producer, a consumer and a broker.  The main features of AMQP include interoperability, queueing, routing, reliability and security.

The demo was a simple example of the ease of use of AMQP with JBoss A-MQ.  I have included the steps below so that you can give it a try.

Step 1: Download and unzip JBoss A-MQ 6.1 from http://www.jboss.org/products/amq/overview/
Step 2: Clone the repository from https://github.com/fusebyexample/activemq-amqp-example
Step 3: Add the transport to the transportConnectors section in the activemq.xml file in /etc of the A-MQ install directory
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672"/>
Step 4: Start the Fuse server by running amq in /bin of the A-MQ install directory
Step 5: Run mvn -P consumer from the cloned repository
Step 6: Run mvn -P producer from the cloned repository
Note: you should see the messages received by the consumer which are sent by the producer
Step 7: Browse the Management Console at http://localhost:8181 to take a look at the statistics for the producer and consumer

Sunday, February 15, 2015

JBoss Data Virtualization Sizing Architecture Tool


The JBoss Data Virtualization Sizing Architecture Tool is a simple web application that has around 10 - 15 questions.  After all questions are answered and submitted, corresponding recommendations for Data Virtualization will be presented.  The recommendations include:
  • How many servers are needed, and with how many cores?
  • How much memory/JVM size for each node?
  • Suggested configuration changes for performance improvement.
Follow the link, sign on with your Red Hat account, and click Start to enter responses to the questions and get a recommendation.



Tuesday, February 10, 2015

Web Application Security Top 10

OWASP (the Open Web Application Security Project) is an organization focused on improving the security of software.  Its mission is to make software security visible so that individuals and organizations can make informed decisions about software security risks.  It published a Top Ten document to promote awareness of web application security.   The Top Ten represents the most critical web application security flaws.  A couple of points on the Top Ten:
  • They have many international versions of the Top 10 list.  
  • The Top 10 continues to change and evolve.  
  • There are hundreds of issues that can possibly affect Web Application Security so don't stop with mitigating the top 10.  OWASP has several resources that can assist such as the OWASP Developer's Guide, OWASP Cheat Sheet Series, OWASP Testing Guide and the OWASP Code Review Guide.
The OWASP Top 10 is a list of the 10 Most Critical Web Application Security Risks and for each Risk it provides:
  • A description
  • Example vulnerabilities
  • Example attacks
  • Guidance on how to avoid
  • References to OWASP and other related resources
You can see the details of each risk at the OWASP Project site here.  I included the overview list below, which is also here.




API Management Part 1 with Fuse on Openshift and 3scale on Amazon Web Services


Introduction


One way organizations deal with the progression toward a more connected and API-driven world is by implementing a lightweight SOA/REST API architecture for application services to simplify the delivery of modern apps and services.

In the following blog series, we're going to show how solutions based on 3scale and Red Hat JBoss Fuse enable organizations to create the right interfaces to their internal systems, enabling them to thrive in the networked, integrated economy.

Among the API Management scenarios that can be addressed by 3scale and Red Hat with JBoss Fuse on OpenShift, we have selected to showcase the following:

• Scenario 1 – Fuse on Openshift with 3scale on Amazon Web Services (AWS)
http://www.ossmentor.com/2015/02/apimanagement-fuse-3scale-scenario1.html
• Scenario 2 – Fuse on Openshift with APICast (3scale’s cloud hosted API gateway)
http://www.ossmentor.com/2015/02/apimanagement-fuse-3scale-scenario2.html
• Scenario 3 – Fuse on Openshift and 3scale on Openshift
http://www.ossmentor.com/2015/02/apimanagement-fuse-3scale-scenario3.html

The illustration below depicts an overview of the 3scale API Management solution integrated with JBoss.  Conceptually the API Management sits in between the API backend that provides the data, service or functionality and the API Consumers (developers) on the other side.  The 3scale API Management solution subsumes: specification of access control rules and usage policies (such as rate limits), API Analytics and reporting, documentation of the API on developer portals (including interactive documentation), and monetization including end-to-end billing.
This article covers scenario 1 which is 3scale on AWS and Fuse on Openshift. We split this article into four parts:
  • Part 1: Fuse on Openshift setup to design and implement the API
  • Part 2: 3scale setup for API Management using the nginx open-source API gateway
  • Part 3: AWS setup for API gateway hosting
  • Part 4: Testing the API and API Management 
The diagram below shows what role the various parts play in our configuration.

Part 1: Fuse on Openshift setup


We will create a Fuse application that contains the API to be managed. We will use the REST quickstart that is included with Fuse 6.1. This requires a medium or large gear; using the small gear will result in out-of-memory errors and/or poor performance.

Step 1: Sign onto your Openshift Online account. You can sign up for an Openshift Online account if you don’t have one.
loginopenshift.png

Step 2: Click the Add Application button after signing on.


Step 3: Under xPaaS select the Fuse type for the application
fuseopenshift.png


Step 4: Now we will configure the application. Enter a public URL, such as restapitest, which gives the full URL as appname-domain.rhcloud.com, as in the example below: restapitest-ossmentor.rhcloud.com. Change the gear size to medium or large, which is required for the Fuse cartridge.


Step 5: Click Create Application

Step 6: Browse to the application hawtio console and sign on

Step 7: After signing on click on the Runtime tab and the container. We will add the REST API example.

Step 8: Click on Add a Profile button
Step 9: Scroll down to examples/quickstarts and click the rest checkbox then add. The REST profile should show on the container associated profile page.



Step 10:  Click on the Runtime/APIs tab to verify the REST API profile.

Step 11: Verify the REST API is working. Browse to customer 123 which will return the ID and name in XML format.

Part 2: 3scale setup



Once we have our API set up on Openshift we can start setting it up on 3scale to provide the management layer for access control and usage monitoring.

Step 1: Log in to your 3scale account. You can sign up for a 3scale account for free at www.3scale.net if you don’t already have one. When you log in to your account for the first time you will see a to-do list to guide you through setting up your API with 3scale.



Step 2: If you click on the first item in the list “Add your API endpoint and hit save & test” you’ll be taken directly to the 3scale Integration page, where you can enter the public URL for the Fuse application on Openshift that you have just created, e.g. restapitest-ossmentor.rhcloud.com, and click on “Update & test.” This will test your setup against the 3scale sandbox proxy. The sandbox proxy allows you to test your 3scale setup before deploying your proxy configuration to AWS.


Step 3: The next step is to set up the API methods that you want to monitor and rate limit. You will do this by creating Application Plans that define which methods are available for each type of user and any usage limits you want to enforce for those users. You can get there from the left hand menu by clicking Application Plans.

and clicking on one of the Application Plans set up by default for your account. In this case we will click on “Basic.”

Which will take you to the following screen where you can start creating your API methods

for each of the calls that users can make on the API:

e.g. Get Customer for GET and Update Customers for PUT, etc.


Step 4: Once you have all of the methods that you want to monitor and control set up under the application plan, you will need to map these to actual http methods on endpoints of your API. We do this by going back to the Integration page and expanding the “Mapping Rules” section.



And creating proxy rules for each of the methods we created under the Application Plan.

Once you have done that, your mapping rules will look something like this:



Step 5: Once you have clicked “Update and Test” to save and test your configuration, you are ready to download the set of configuration files that will allow you to configure your API gateway on AWS. As the API gateway we use a high-performance open-source proxy called nginx. You will find the necessary configuration files for nginx on the same Integration page by scrolling down to the “Production” section




The final section will now take you through installing these configuration files on your Nginx instance on Amazon Web Services (AWS) for hosting.

Part 3: Amazon Web Services (AWS) Setup


We assume that you have already completed these steps:
  • You have your Amazon Cloud account. 
  • You have created your application and are ready to deploy it to Amazon Cloud. 
  • You have created your proxy on 3scale. 
With that accomplished we are ready to setup our Amazon Cloud Server and deploy our application.

STEP 1. Open Your EC2 Management Console

Screen Shot 2014-12-29 at 1.10.25 PM.png

In the left hand side bar you will see “AWS Marketplace”. Select this, type 3scale into the Search and you will see the 3scale Proxy AMI (Amazon Machine Image) show up in the results. The 3scale Proxy AMI implicitly uses and runs an nginx gateway.

Screen Shot 2014-12-29 at 1.17.07 PM.png

Click “Select”

Screen Shot 2014-12-29 at 5.04.48 PM.png



Click “Continue”

Screen Shot 2014-12-29 at 5.06.24 PM.png



Select the plan that is most appropriate for your application. Then you can either select “Review and Launch” for a simple launch with 3scale, or “Next: Configure Instance Details” to add additional configuration detail, such as shutdown, storage and security.

Screen Shot 2014-12-29 at 5.47.00 PM.png



And click “Launch”. The next screen will ask you to create a new public-private key pair or select an existing one.

Screen Shot 2014-12-29 at 5.57.48 PM.png
If you already have a public-private key you created on AWS, you can choose to use it.

If you do not already have a public-private key pair you should choose to create a new pair.

Screen Shot 2014-12-30 at 6.44.04 PM.png

Your 3scale proxy is now running on AWS, but we still need to update the 3scale AWS instance with the nginx config.  Download the nginx config files from 3scale and upload them to AWS. Once they are uploaded and placed in the correct directory, restart your proxy instance.  Upload instructions are found at http://www.amazon.com/gp/help/customer/display.html?nodeId=201376650  The instructions below help you manage your proxy.

  1. Head over to your AWS Management Console and go into the running instances list in the EC2 section.
  2. Check that your instance is ready to be accessed, indicated by a green check mark icon in the Status Checks column.
  3. Click on the instance in the list to find its public DNS and copy it.
  4. Log in through SSH using the ubuntu user and the private key you chose before. The command will look more or less like: ssh -i privateKey.pem ubuntu@ec2-12-34-56-78.compute-1.amazonaws.com
  5. Once you log in, read the instructions that are printed to the screen: all the necessary commands to manage your proxy are described there. In case you want to read them later, these instructions are located in a file named 3SCALE_README in the home directory.
Note: Remember that the 3scale instance runs on Ubuntu on Amazon, hence the ubuntu login.

In the next section, we will show how your API and API Management can be tested.

Part 4: Testing the API and API Management



Use your favorite REST client and run the following commands

1. Retrieve the customer instance with id 123 (GET)

http://54.149.46.234/cxf/crm/customerservice/customers/123?user_key=b9871b41027002e68ca061faeb2f972b

2. Create a customer (POST)

http://54.149.46.234/cxf/crm/customerservice/customers?user_key=b9871b41027002e68ca061faeb2f972b

3. Update the customer instance with id 123 (PUT)

http://54.149.46.234/cxf/crm/customerservice/customers?user_key=b9871b41027002e68ca061faeb2f972b

4. Delete the customer instance with id 123 (DELETE)

http://54.149.46.234/cxf/crm/customerservice/customers/123?user_key=b9871b41027002e68ca061faeb2f972b

5. Check the analytics of the API Management of your API

If you now log back in to your 3scale account and go to Monitoring > Usage you can see the various hits of the API endpoints represented as graphs.
Usage_-_Index___3scale_API_Management.png
This is just one element of API Management that brings you full visibility and control over your API. Other features are:
  1. Access control 
  2. Usage policies and rate limits 
  3. Reporting 
  4. API documentation and developer portals 
  5. Monetization and billing 
For more details about the specific API Management features and their benefits, please refer to the 3scale product description.

For more details about the specific Red Hat JBoss Fuse Product features and their benefits, please refer to the Fuse Product description.

For more details about running Red Hat JBoss Fuse on OpenShift, please refer to the xPaaS with Fuse on Openshift description.

API Management Part 3 with Fuse on Openshift and 3scale on Openshift

Introduction


One way organizations deal with the progression toward a more connected and API-driven world is by implementing a lightweight SOA/REST API architecture for application services to simplify the delivery of modern apps and services.

In the following blog series, we're going to show how solutions based on 3scale and Red Hat JBoss Fuse enable organizations to create the right interfaces to their internal systems, enabling them to thrive in the networked, integrated economy.

Among the API Management scenarios that can be addressed by 3scale and Red Hat with JBoss Fuse on OpenShift, we have selected to showcase the following:

• Scenario 1 – Fuse on Openshift with 3scale on Amazon Web Services (AWS)
http://www.ossmentor.com/2015/02/apimanagement-fuse-3scale-scenario1.html
• Scenario 2 – Fuse on Openshift with APICast (3scale’s cloud hosted API gateway)
http://www.ossmentor.com/2015/02/apimanagement-fuse-3scale-scenario2.html
• Scenario 3 – Fuse on Openshift and 3scale on Openshift
http://www.ossmentor.com/2015/02/apimanagement-fuse-3scale-scenario3.html

The illustration below depicts an overview of the 3scale API Management solution integrated with JBoss. Conceptually the API Management sits in between the API backend that provides the data, service or functionality and the API Consumers (developers) on the other side. The 3scale API Management solution subsumes: specification of access control rules and usage policies (such as rate limits), API Analytics and reporting, documentation of the API on developer portals (including interactive documentation), and monetization including end-to-end billing.
This article covers scenario 3, which is 3scale on Openshift and Fuse on Openshift. We split this article into four parts:
  • Part 1: Fuse on Openshift setup to design and implement the API
  • Part 2: 3scale setup for API Management using the nginx open-source API gateway
  • Part 3: Openshift setup for API gateway hosting
  • Part 4: Testing the API and API Management
NOTE: If you followed article 1 and/or 2 of this series, then Part 1 and Part 2 are already done and you can start at Part 3.

Part 1: Fuse on Openshift setup

We will create a Fuse application that contains the API to be managed. We will use the REST quickstart that is included with Fuse 6.1. This requires a medium or large gear; using the small gear will result in out-of-memory errors and/or poor performance.

Step 1: Sign onto your Openshift Online account. You can sign up for an Openshift Online account if you don’t have one.
loginopenshift.png

Step 2: Click the Add Application button after signing on.

Step 3: Under xPaaS select the Fuse type for the application
fuseopenshift.png

Step 4: Now we will configure the application. Enter a public URL, such as restapitest, which gives the full URL as appname-domain.rhcloud.com, as in the example below: restapitest-ossmentor.rhcloud.com. Change the gear size to medium or large, which is required for the Fuse cartridge.

Step 5: Click Create Application

Step 6: Browse to the application hawtio console and sign on

Step 7: After signing on click on the Runtime tab and the container. We will add the REST API example.

Step 8:  Click on Add a Profile button
Step 9:  Scroll down to examples/quickstarts and click the rest checkbox then add. The REST profile should show on the container associated profile page.

Step 10: Click on the Runtime/APIs tab to verify the REST API profile.


Step 11:  Verify the REST API is working. Browse to customer 123 which will return the ID and name in XML format.

Part 2: 3scale setup

Once we have our API set up on Openshift we can start setting it up on 3scale to provide the management layer for access control and usage monitoring.

Step 1: Log in to your 3scale account. You can sign up for a 3scale account for free at www.3scale.net if you don’t already have one. When you log in to your account for the first time you will see a to-do list to guide you through setting up your API with 3scale.

Step 2: If you click on the first item in the list “Add your API endpoint and hit save & test” you’ll be taken directly to the 3scale Integration page, where you can enter the public URL for the Fuse application on Openshift that you have just created, e.g. restapitest-ossmentor.rhcloud.com, and click on “Update & test.” This will test your setup against the 3scale sandbox proxy. The sandbox proxy allows you to test your 3scale setup before deploying your proxy configuration to Openshift.

Step 3: The next step is to set up the API methods that you want to monitor and rate limit. You will do this by creating Application Plans that define which methods are available for each type of user and any usage limits you want to enforce for those users. You can get there from the left hand menu by clicking Application Plans.
and clicking on one of the Application Plans set up by default for your account. In this case we will click on “Basic.”
Which will take you to the following screen where you can start creating your API methods
for each of the calls that users can make on the API:
e.g. Get Customer for GET and Update Customers for PUT, etc.
Step 4: Once you have all of the methods that you want to monitor and control set up under the application plan, you will need to map these to actual http methods on endpoints of your API. We do this by going back to the Integration page and expanding the “Mapping Rules” section.

And creating proxy rules for each of the methods we created under the Application Plan.
Once you have done that, your mapping rules will look something like this:

Step 5: Once you have clicked “Update and Test” to save and test your configuration, you are ready to download the set of configuration files that will allow you to configure your API gateway on Openshift. As the API gateway we use a high-performance open-source proxy called nginx. You will find the necessary configuration files for nginx on the same Integration page by scrolling down to the “Production” section


The final section will now take you through installing these configuration files on your Nginx instance on OpenShift.
Part 3: NGINX on OpenShift Instance

We assume that you have already completed these steps:
  • You have your Openshift account.
  • You have created your application and are ready to deploy it to Openshift.
  • You have created your proxy on 3scale.
With that accomplished we are ready to setup our Openshift Application and deploy our configuration.

Step 1: Create an application with the DIY cartridge, either with the client tools (rhc) or through the console.

Step 2: Stop the Openshift application so you do not get port binding errors, i.e. rhc app stop diytestnginix --namespace ossmentor

Step 3: Use SSH to get to the OpenShift shell, i.e. ssh 54c67sdfsda63fe0b8cd8484000@diytestnginix-ossmentor.rhcloud.com

Step 4: Set up the PATH variable for ldconfig, or you will get a "PATH env when enabling luajit" error, i.e. export PATH=$PATH:/sbin

Step 5: Install the PCRE module
  • cd $OPENSHIFT_TMP_DIR 
  • wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.36.tar.bz2
  • tar jxf pcre-8.36.tar.bz2 
Step 6: Install and build the ngx_openresty package
  • wget http://openresty.org/download/ngx_openresty-1.7.7.1.tar.gz
  • tar xzvf ngx_openresty-1.7.7.1.tar.gz 
  • cd ngx_openresty-1.7.7.1
  • ./configure --prefix=$OPENSHIFT_DATA_DIR --with-pcre=$OPENSHIFT_TMP_DIR/pcre-8.36 --with-pcre-jit --with-ipv6 --with-http_iconv_module -j2
  • Run gmake
  • Run gmake install 
Step 7: Go to 3scale and download the nginx config proxy_configs.zip which contains the conf and lua files

Step 8: Copy the two files to the Openshift application's $OPENSHIFT_TMP_DIR using scp, i.e. scp nginx_2445581129832.lua 54c6763fe0b8cd8484000020@diytestnginix-ossmentor.rhcloud.com:/tmp/nginix_2445581129832.lua

Step 9: Copy the files to the nginx conf directory, ie cp $OPENSHIFT_TMP_DIR/nginix_244* $OPENSHIFT_DATA_DIR/nginx/conf

Step 10: Rename and Update the nginx.conf file

Use the mv command to rename the nginx config file to nginx.conf.
Run env to get OPENSHIFT_DIY_IP and OPENSHIFT_DIY_PORT.
Change the server name, IP and port:
listen 127.13.112.1:8080;
## CHANGE YOUR SERVER_NAME TO YOUR CUSTOM DOMAIN OR LEAVE IT BLANK IF ONLY HAVE ONE
#server_name diytestnginix-ossmentor.rhcloud.com;
Change the lua file name
## CHANGE THE PATH TO POINT TO THE RIGHT FILE ON YOUR FILESYSTEM IF NEEDED
access_by_lua_file /var/lib/openshift/54c6763fe0b8cd8484000020/app-root/data/nginx/conf/nginix_2445581129832.lua;

Step 11: Start nginx from $OPENSHIFT_DATA_DIR/nginx/sbin

./nginx -p $OPENSHIFT_DATA_DIR/nginx/ -c $OPENSHIFT_DATA_DIR/nginx/conf/nginx.conf

STEP 12: If you need to stop nginx use ./nginx -s stop

Part 4: Testing the API and API Management


Use your favorite REST client and run the following commands

1. Retrieve the customer instance with id 123 (GET)

http://54.149.46.234/cxf/crm/customerservice/customers/123?user_key=b9871b41027002e68ca061faeb2f972b

2. Create a customer (POST)

http://54.149.46.234/cxf/crm/customerservice/customers?user_key=b9871b41027002e68ca061faeb2f972b

3. Update the customer instance with id 123 (PUT)

http://54.149.46.234/cxf/crm/customerservice/customers?user_key=b9871b41027002e68ca061faeb2f972b

4. Delete the customer instance with id 123 (DELETE)

http://54.149.46.234/cxf/crm/customerservice/customers/123?user_key=b9871b41027002e68ca061faeb2f972b

5. Check the analytics of the API Management of your API

If you now log back in to your 3scale account and go to Monitoring > Usage you can see the various hits of the API endpoints represented as graphs.
Usage_-_Index___3scale_API_Management.png
This is just one element of API Management that brings you full visibility and control over your API. Other features are:
  • Access control
  • Usage policies and rate limits
  • Reporting
  • API documentation and developer portals
  • Monetization and billing
For more details about the specific API Management features and their benefits, please refer to the 3scale product description.

For more details about the specific Red Hat JBoss Fuse Product features and their benefits, please refer to the Fuse Product description.

For more details about running Red Hat JBoss Fuse on OpenShift, please refer to the xPaaS with Fuse on Openshift description.