Deploying an OAM Cluster with Kubernetes in 60 Minutes
Technical series on the Oracle IDM Suite, July 2021
One of the exciting things about working in stealth mode from time to time is the opportunity to be fully immersed in and dedicated to specific projects, such as finding new ways to innovate on an existing product, performing research and development, or evaluating new capabilities to support container orchestration platforms like Kubernetes.
The industry has some reservations about Kubernetes, but as IT professionals we need to balance the pros and cons of new products and technologies. Most large enterprises use Kubernetes, and although complexity is a clear drawback of this technology, the benefits can outweigh the hurdles associated with its management.
The Oracle Identity and Access Management Suite (IDM) is one of the technologies many companies have long relied on as the foundation of their enterprise security. As technology evolves, the ability to adapt becomes key to remaining competitive. Moving to a micro-services architecture that supports container orchestration platforms like Kubernetes has therefore become a priority and the main driver of Oracle's IDM strategy.
This note begins a series of articles on new capabilities and enhancements in the Oracle IDM Suite of products, starting with how to deploy Oracle Access Manager (OAM) in a Kubernetes environment.
Some Assumptions
To deploy an OAM cluster with Kubernetes, this article assumes the following:
- A supported Oracle database is running on a separate server.
- A Cloud Native Computing Foundation (CNCF) certified Kubernetes (K8s) distribution, version 1.18 or later, is installed, e.g., Rancher (RKE) or Oracle Linux Cloud Native Environment.
- Each node in the K8s cluster has at least 32 GB of RAM and 4 CPUs.
- For storage, each node has one disk with two partitions, or two disk drives. The first drive or partition stores Rancher and Kubernetes files, while the second is dedicated to block storage.
- A separate Oracle HTTP Server fronts the web application to be protected.
About Rancher
Rancher is probably one of the most popular open-source multi-cloud Kubernetes (K8s) management platforms on the market. The rationale for choosing it in this article is straightforward: it is easy to install and use, making management of K8s deployments a breeze.
Rancher is not just a management platform; it also offers a CNCF-certified Kubernetes distribution known as Rancher Kubernetes Engine (RKE), which focuses on security and compliance within the U.S. Federal Government sector.
About OAM Kubernetes Architecture
It is no secret that WebLogic is the application server platform behind most Oracle enterprise applications on the market, including the Oracle IDM Suite.
A few years back, Oracle started developing a WebLogic Kubernetes operator with the goal of supporting the execution of WebLogic Server and Fusion Middleware Infrastructure domains on K8s, so it was only a matter of time before support for the Oracle IDM Suite became official.
The WebLogic operator's main features include:
- Manage lifecycle operations in K8s, e.g., start/stop servers, scale-up/down clusters, and rolling restarts.
- Automate configuration, e.g., domains, clusters, configuration overrides.
- Support standard K8s features such as sidecars, init containers, and custom resources.
Figure 2 depicts the WebLogic operator, based on the Kubernetes Operator Pattern, functioning as a custom controller that extends Kubernetes to create, configure, and manage WebLogic domains. Two pieces of the operator are essential: Custom Resource Definitions (CRDs) and a controller that manages them.
The latest version of the Oracle IDM Suite, 12c PS4 BP6, was released a couple of months ago. With K8s support came new features like Oracle Adaptive Authentication, Oracle Radius Agent, and Oracle Role Intelligence, all based on a micro-services architecture. Future articles will review their deployment in K8s and their primary capabilities.
Figure 3 depicts OAM in K8s, where the WebLogic operator plays a vital role in the entire architecture. The current implementation of OAM in K8s supports the topology model "Domain in Persistent Volume/Claim," or Domain in PV/C, which means the domain configuration resides in a persistent volume shared across the servers in the domain.
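To make the Domain in PV/C model concrete, the following is a minimal sketch of a Domain custom resource as the WebLogic operator's CRD defines it. The domain, namespace, secret, and claim names are placeholders for illustration, and field details may vary between operator versions:

```yaml
# Hypothetical Domain resource illustrating the "Domain in PV/C" model.
# All names are placeholders; field names follow the weblogic.oracle
# Domain CRD managed by the WebLogic Kubernetes operator.
apiVersion: weblogic.oracle/v8
kind: Domain
metadata:
  name: accessdomain
  namespace: oamns
spec:
  domainHome: /u01/oracle/user_projects/domains/accessdomain
  domainHomeSourceType: PersistentVolume   # domain config lives on the shared PV
  image: oracle/oam:12.2.1.4.0             # placeholder image reference
  webLogicCredentialsSecret:
    name: accessdomain-credentials
  serverPod:
    volumes:
      - name: weblogic-domain-storage-volume
        persistentVolumeClaim:
          claimName: accessdomain-domain-pvc
    volumeMounts:
      - mountPath: /u01/oracle/user_projects/domains
        name: weblogic-domain-storage-volume
```

Because the domain home sits on the persistent volume rather than inside the container image, every server pod in the domain mounts the same claim and sees the same configuration.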
The following table highlights the main differences between the two available topology models for WebLogic domains: Domain in PV/C and Domain in Image.
Deploying OAM in a Kubernetes Cluster
By default, the Helm charts that come with OAM use Network File System (NFS) volumes to store the domain configuration. This post deviates from that by using block storage in the form of persistent volumes that can be shared and replicated across nodes. The reason is simple: performance. Adjustments to the out-of-the-box charts were therefore needed to achieve this goal.
The official Oracle IDM container images can be downloaded from My Oracle Support (Doc ID 2723908.1) and then imported into a Docker registry, public or private.
As indicated previously, Rancher and RKE were chosen as the K8s management platform to deploy OAM. It is essential to mention that, for production environments, Rancher and OAM must reside in separate K8s clusters; for simplicity, however, this article deploys Rancher and OAM within the same K8s cluster.
The following diagram depicts a sample OAM deployment. The Rancher components are omitted to favor visualization of the OAM components and topology.
Step 1: Create Persistent Volume Claims (PVCs) ~ 10 min
Use the Rancher web console to install OpenEBS, a K8s storage solution that defines persistent volumes on local storage synchronously replicated across nodes. Once OpenEBS is installed, configure it to use the second partition or disk as block storage, and then create the PVCs for Oracle Unified Directory and OAM.
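A PVC for the OAM domain home might look like the following sketch. The claim name is a placeholder, and the StorageClass name stands in for whatever replicated block-storage class the OpenEBS setup defines; the access mode ultimately depends on the storage engine chosen:

```yaml
# Hypothetical PVC for the OAM domain home. The StorageClass name is a
# placeholder for the replicated block-storage class created during the
# OpenEBS configuration.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: accessdomain-domain-pvc
  namespace: oamns
spec:
  storageClassName: openebs-replicated   # placeholder StorageClass
  accessModes:
    - ReadWriteMany     # domain home must be readable by all servers in the domain
  resources:
    requests:
      storage: 10Gi     # size the claim for your domain and logs
```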
Step 2: Deploy Oracle Unified Directory (OUD) ~ 15 min
Out of the box, OUD comes with Helm charts for deploying multi-master replicas in a K8s environment, so the charts can be uploaded to a GitHub repository and then deployed through the Rancher web console. A two-replica topology will hold the OAM identity store.
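An illustrative Helm values override for the two-replica topology is shown below. The parameter names are placeholders and may differ from the chart's actual schema, as are the image tag and storage class:

```yaml
# Illustrative values override for a two-replica OUD deployment.
# Parameter names, image tag, and storage class are placeholders.
image:
  repository: oracle/oud
  tag: 12.2.1.4.0
replicaCount: 2            # two multi-master replicas hold the OAM identity store
persistence:
  storageClass: openebs-replicated   # placeholder block-storage class from Step 1
```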
Step 3: Deploy WebLogic operator ~ 5 min
Upload the Helm charts available with the WebLogic operator to a GitHub repository and then use the Rancher web console to deploy the operator.
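The operator chart needs to know which namespaces to watch for Domain resources. A minimal values sketch might look like the following; the image tag, service account, and namespace are illustrative placeholders:

```yaml
# Illustrative values for the WebLogic operator Helm chart. The
# domainNamespaces list tells the operator which namespaces to watch
# for Domain resources; image tag and names are placeholders.
image: oracle/weblogic-kubernetes-operator:3.2.0   # example image tag
serviceAccount: weblogic-operator-sa
domainNamespaces:
  - oamns        # namespace where the OAM domain will be created
```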
Step 4: Prepare OAM database schema ~ 5 min
To create the OAM database schema, the Repository Creation Utility (RCU) must be available. This tool comes with OAM, so one option is to deploy the OAM image as a temporary K8s workload, or helper pod, to run RCU and create the OAM schema. Since the database resides on a remote server, the workload should be deployed using the node's host network, or with host aliases, to communicate with the database.
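The host-alias approach can be sketched as the helper pod below. The pod name, database IP, and hostname are placeholders; the pod simply stays alive so RCU can be run interactively inside it:

```yaml
# Hypothetical helper pod used only to run RCU against the remote database.
# The host alias maps the database hostname to its IP so the pod can
# resolve it; IP and hostname are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: oam-rcu-helper
  namespace: oamns
spec:
  restartPolicy: Never
  hostAliases:
    - ip: "192.0.2.10"             # placeholder database server IP
      hostnames:
        - "oamdb.example.com"      # placeholder database hostname
  containers:
    - name: rcu
      image: oracle/oam:12.2.1.4.0          # placeholder OAM image
      command: ["sleep", "infinity"]        # keep pod alive; run RCU via kubectl exec
```

Once the schema is created, the helper pod can simply be deleted.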
Step 5: Deploy OAM cluster ~ 25 min
As with OUD, the OAM Helm charts can be uploaded to a GitHub repository and then deployed through the Rancher web console. By default, the Helm charts deploy OAM clusters with one active policy server and two OAM servers; however, this can scale up to a maximum of five policy and OAM servers.
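The default and maximum sizing can be pictured as the clusters section of the Domain resource, sketched below. Cluster names follow the usual OAM naming but should be treated as placeholders:

```yaml
# Illustrative clusters section of the OAM Domain resource. Cluster names
# are placeholders; replica counts can be raised up to five per cluster.
spec:
  clusters:
    - clusterName: policy_cluster
      replicas: 1      # one active policy server by default
    - clusterName: oam_cluster
      replicas: 2      # two OAM managed servers by default; scalable up to 5
```

Scaling up or down is then just a matter of editing the replica counts; the WebLogic operator performs the corresponding lifecycle operations.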
Conclusion
This article provided insights into the new Oracle IDM Suite strategy based on a micro-services architecture and container orchestration platforms like Kubernetes. Stay tuned: upcoming articles will focus on actual implementation cases, following a step-by-step approach.
About the Author
Ricardo Gutierrez is a Lead Security Architect at Oracle with over 12 years of experience in Identity and Access Management, and is familiar with cloud infrastructure, databases, application security, cryptography, and enterprise application integration. In his spare time, Ricardo does research and software development with new technologies. He is the creator of the E-Business Suite Asserter (EBS Asserter), an SSO component bundled with Oracle Identity Cloud Service, and the Dynamic Authenticator, an MFA solution for Oracle databases.