May 20, 2014

Essbase Clustering Part 2: (Active/Passive Clusters)

If you read my earlier post on Essbase clustering, you will have realized that Essbase offers two ways to cluster a server:

  1. Active/Passive, which is best for BSO clusters as it supports write-back
  2. Active/Active, which is best for ASO clusters as it also allows for load balancing

In this post, I will outline how to configure an Essbase active/passive cluster.

What You Will Need

If you want to create your own cluster, you will need:
  1. Two servers
  2. A load balancer, where you can configure a VIP (Virtual IP) and which can "sense" when one node is not available so that traffic fails over to the passive node.
  3. A high-speed shared disk (your cluster would not be much of a cluster if Essbase cannot access its database files)
  4. Knowledge of how to install Hyperion

My Setup

I'm using the latest version of Oracle EPM, 11.1.2.3, and I'm installing it on two virtual machines running CentOS 6.5 Linux, a RedHat variant, so I don't have to mess around with Microsoft cluster services. For this demonstration I'm using an NFS share exported from one of the servers as my shared repository, and I'm not using a load balancer; in real life neither of those would be an ideal setup.
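
For reference, this is roughly how a shared NFS mount like mine can be set up on CentOS. The host names essnode1 and essnode2 are just placeholders for my two servers; in production you would use proper shared storage instead:

    # On the server exporting the share (essnode1), add a line like this to /etc/exports:
    #   /u01/SHARED    essnode2(rw,sync,no_root_squash)
    # and reload the export table:
    exportfs -ra

    # On the other server (essnode2), mount the share at the same path:
    mkdir -p /u01/SHARED
    mount -t nfs essnode1:/u01/SHARED /u01/SHARED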

Configuration Process on First Node

You need to install your first Essbase node normally, as you would any other Essbase server. In the Essbase configuration step, specify the shared disk as the ARBORPATH; you can also change the name of the cluster as you see fit:

[Screenshot: Essbase Server configuration panel showing the cluster name and ARBORPATH]

As you can see from the screenshot above, I have called my cluster "EssbaseCluster-LAB" and changed the ARBORPATH to "/u01/SHARED/EssbaseServer/essbaseserver1".

After you have configured your first node you can move on to installing and configuring your second node.

Configuration Process on Second Node

You need to install your second node normally, but when it comes to configuring the Essbase part you will need to make a change. When you enter the Configure Essbase task, you will see all the default values:

[Screenshot: Essbase Server configuration panel with the default cluster name and ARBORPATH]

You will have to click the "Assign To Existing Cluster" button in the top right corner so you can select the cluster that was previously configured and assign this node to it:

You can select the cluster from the drop-down. Then, when you click OK, you will see that you cannot change the cluster name or the ARBORPATH, as these need to be the same on both nodes.

[Screenshot: the cluster name and ARBORPATH fields greyed out after assigning the node to the existing cluster]

Now you can continue with the configuration process and finish all tasks.

Post-configuration Tasks

After you have installed and configured both Essbase nodes, you are still not ready to use the cluster. You first need to make some changes to the opmn.xml file so that OPMN can sense when Essbase crashes on one server and bring up the Essbase process on the passive node. I followed the steps from the Oracle Deployment Options Guide.

You need to make the following changes in the opmn.xml on both nodes:

Add the topology:

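Here is a minimal sketch of that change, following the Deployment Options Guide; essnode1 and essnode2 are placeholder host names for my two servers. The topology element goes inside the existing notification-server section of opmn.xml:

    <notification-server interface="any">
      ...
      <!-- list every OPMN node in the Essbase cluster, using the OPMN remote port -->
      <topology>
        <nodes list="essnode1:6712,essnode2:6712"/>
      </topology>
    </notification-server>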

Make sure that the port you use is the OPMN remote port (6712) and not the Essbase port (1423). This tells OPMN which nodes belong to this cluster.

Add the service-failover and service-weight directives. Make sure that the node you want to be primary has a higher weight (101) than your passive node (100):

Active Node:

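Here is a sketch of the relevant attributes on the active node. In my opmn.xml they sit on the EssbaseAgent process-type inside the cluster's ias-component, but verify the element and attribute names against your own file and the guide:

    <ias-component id="EssbaseCluster-LAB">
      <!-- the higher service-weight makes this node the preferred (active) one -->
      <process-type id="EssbaseAgent" module-id="ESS"
                    service-failover="1" service-weight="101">
        ...
      </process-type>
    </ias-component>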

Passive Node:

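The same attributes on the passive node, with the lower weight:

    <ias-component id="EssbaseCluster-LAB">
      <process-type id="EssbaseAgent" module-id="ESS"
                    service-failover="1" service-weight="100">
        ...
      </process-type>
    </ias-component>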

You also need to make sure that the name of the cluster is in the ias-component id directive, like so:

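With my cluster name, that opening tag looks like this on both nodes:

    <ias-component id="EssbaseCluster-LAB">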

Lastly, you have to decide how to set the "restart-on-death" directive. When it is set to true, OPMN will attempt to restart the active node's Essbase process on death before passing the baton to the passive node. When it is set to false, OPMN will not try a restart on the active node and will immediately activate the passive node's Essbase process. I have set mine to 'false' because, for test purposes, I want the failover to happen immediately rather than having Essbase restarted in place.

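Mine ends up looking roughly like this; the process-set id and the timeout and retry values are just what I have in my lab, so treat them as placeholders and check the guide for the exact placement in your release:

    <process-type id="EssbaseAgent" module-id="ESS"
                  service-failover="1" service-weight="101"
                  restart-on-death="false">
      ...
      <process-set id="AGENT">
        <!-- how many restart attempts OPMN makes and how long it waits for each one -->
        <restart timeout="1800" retry="3"/>
      </process-set>
    </process-type>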

Also, as you can see from the "restart" directive, you can specify other parameters, such as the number of retries and the timeout.

We are now finally ready to test our configuration and start Essbase on both nodes. As you start Essbase, you will see that one node has only the OPMN processes running, while the other has the OPMN process as well as the ESSBASE process:

Passive Node:

[Screenshot: process list on the passive node, showing only the OPMN processes]

Active Node:

[Screenshot: process list on the active node, showing the OPMN and ESSBASE processes]

You can also run opmnctl commands to check the status or to start and stop processes. For example, opmnctl status will tell you the status of the cluster:

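The status command prints a small table listing each ias-component, its process-type, the process id, and whether it is Alive or Down. These are the commands I use the most; note that the ias-component name is the cluster name we configured earlier:

    ./opmnctl status                                       # show what OPMN is managing and its state
    ./opmnctl startproc ias-component=EssbaseCluster-LAB   # start the Essbase agent for the cluster
    ./opmnctl stopproc ias-component=EssbaseCluster-LAB    # stop it again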

Failover Test

So now that it's all configured, you can test that the failover works. To simulate a failure, kill the ESSBASE process on the active node and you should see how it automagically starts the ESSBASE process on the passive node. This is where having a separate shared filesystem and a load balancer comes in handy: you would configure your Planning data source using the VIP from the load balancer instead of pointing at either node directly.

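If you want to try the same test, something along these lines works; the grep is just how I spot the agent process, and the PID is whatever shows up on your box:

    # on the active node: find the Essbase agent and kill it to simulate a crash
    ps -ef | grep -i essbase
    kill -9 <pid_of_the_ESSBASE_process>

    # on the other node: after a short wait, the agent should show up as Alive here
    ./opmnctl status
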
In the end, you should end up with something like the following depiction:

[Diagram: clients such as Planning connect to the VIP on the load balancer, which routes to whichever node currently runs the ESSBASE process; both nodes use the same ARBORPATH on the shared disk]

Hope this helps explain how Essbase clusters work and how they are configured.

Pablo