Learn how to set up and configure an Oracle RAC 10g

The hangcheck-timer.o Module

The hangcheck-timer module uses a kernel-based timer that periodically checks the system task scheduler to catch delays in order to determine the health of the system. If the system hangs or pauses, the timer resets the node. The hangcheck-timer module uses the Time Stamp Counter (TSC) CPU register, which is incremented at each clock signal. The TSC offers much more accurate time measurements because this register is updated automatically by the hardware.

Much more information about the hangcheck-timer project can be found here.

Installing the hangcheck-timer.o Module

The hangcheck-timer was originally shipped only by Oracle; however, this module is now included with Red Hat Linux starting with kernel versions 2.4.9-e.12 and higher. If you followed the steps in Section 8 ("Obtain and Install a Proper Linux Kernel"), then the hangcheck-timer is already included for you. Use the following to confirm:

# find /lib/modules -name "hangcheck-timer.o"

The output should include the hangcheck-timer object file (hangcheck-timer.o) in the /lib/modules/2.4.21-27.0.2.ELorafw1/kernel/drivers/char directory.

Configuring and Loading the hangcheck-timer Module

There are two key parameters to the hangcheck-timer module:

  • hangcheck-tick: This parameter defines the period of time between checks of system health. The default value is 60 seconds; Oracle recommends setting it to 30 seconds.

  • hangcheck-margin: This parameter defines the maximum hang delay that should be tolerated before hangcheck-timer resets the RAC node. It defines the margin of error in seconds. The default value is 180 seconds; Oracle recommends setting it to 180 seconds.

NOTE: The two hangcheck-timer module parameters indicate how long a RAC node must hang before it will reset the system. A node reset will occur when the following is true:

system hang time > (hangcheck_tick + hangcheck_margin)
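The reset condition above is easy to sanity-check with shell arithmetic. This is just a sketch using the values Oracle recommends in this guide (tick=30, margin=180):

```shell
# Reset threshold for hangcheck-timer: a node is reset only when the
# system hang time exceeds hangcheck_tick + hangcheck_margin.
tick=30        # hangcheck_tick (seconds)
margin=180     # hangcheck_margin (seconds)
threshold=$((tick + margin))
echo "node reset after a hang of more than ${threshold} seconds"
```

With these values, any hang shorter than 210 seconds is tolerated without a reset.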

Configuring Hangcheck Kernel Module Parameters

Each time the hangcheck-timer kernel module is loaded (manually or by Oracle), it needs to know what value to use for each of the two parameters we just discussed: hangcheck-tick and hangcheck-margin. These values need to be available after each reboot of the Linux server. To do that, make an entry with the correct values in the /etc/modules.conf file as follows:

# su -

# echo "options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180" >> /etc/modules.conf

Each time the hangcheck-timer kernel module gets loaded, it will use the values defined by the entry I made in the /etc/modules.conf file.
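One caveat with the `echo ... >>` approach is that rerunning it duplicates the line. A small sketch of the same step, written so the append is skipped if the line already exists; it targets a scratch file here, whereas the guide appends to /etc/modules.conf:

```shell
# Append the hangcheck-timer options only if the exact line is not
# already present, so rerunning setup does not duplicate it.
conf=/tmp/modules.conf.sketch          # the guide uses /etc/modules.conf
: > "$conf"
line="options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180"
grep -qxF "$line" "$conf" || echo "$line" >> "$conf"
grep -qxF "$line" "$conf" || echo "$line" >> "$conf"   # second run is a no-op
grep -c "hangcheck-timer" "$conf"                      # prints 1, not 2
```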

Manually Loading the Hangcheck Kernel Module for Testing

Oracle is responsible for loading the hangcheck-timer kernel module when required. For that reason, it is not required to perform a modprobe or insmod of the hangcheck-timer kernel module in any of the startup files (i.e. /etc/rc.local).

It is only out of pure habit that I continue to include a modprobe of the hangcheck-timer kernel module in the /etc/rc.local file. Someday I will get over it, but realize that it does not hurt to include a modprobe of the hangcheck-timer kernel module during startup.

So to keep myself sane and able to sleep at night, I always configure the loading of the hangcheck-timer kernel module on each startup as follows:

# echo "/sbin/modprobe hangcheck-timer" >> /etc/rc.local

(Note: You don't have to manually load the hangcheck-timer kernel module using modprobe or insmod after each reboot. The hangcheck-timer module will be loaded by Oracle automatically when needed.)

Now, to test the hangcheck-timer kernel module to verify it is picking up the correct parameters we defined in the /etc/modules.conf file, use the modprobe command. Although you could load the hangcheck-timer kernel module by passing it the appropriate parameters (e.g. insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180), we want to verify that it is picking up the options we set in the /etc/modules.conf file.

To manually load the hangcheck-timer kernel module and verify it is using the correct values defined in the /etc/modules.conf file, run the following command:

# su -

# modprobe hangcheck-timer

# grep Hangcheck /var/log/messages | tail -2

Jan 30 22:11:33 linux1 kernel: Hangcheck: starting hangcheck timer 0.8.0 (tick is 30 seconds, margin is 180 seconds).

Jan 30 22:11:33 linux1 kernel: Hangcheck: Using TSC.
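If you prefer to check the values programmatically rather than by eye, the tick and margin can be pulled back out of that log line with sed. A sketch, using the exact message shown above:

```shell
# Extract tick and margin from the Hangcheck kernel log line to confirm
# the /etc/modules.conf options took effect.
msg="Hangcheck: starting hangcheck timer 0.8.0 (tick is 30 seconds, margin is 180 seconds)."
tick=$(echo "$msg" | sed -n 's/.*tick is \([0-9]*\) seconds.*/\1/p')
margin=$(echo "$msg" | sed -n 's/.*margin is \([0-9]*\) seconds.*/\1/p')
echo "tick=${tick} margin=${margin}"
```

On a live node you would feed it the real line, e.g. `msg=$(grep Hangcheck /var/log/messages | tail -2 | head -1)`.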

I also like to verify that the correct hangcheck-timer kernel module is being loaded. To confirm, I typically remove the kernel module (if it was loaded) and then re-load it using the following:

# su -

# rmmod hangcheck-timer

# insmod hangcheck-timer

Using /lib/modules/2.4.21-27.0.2.ELorafw1/kernel/drivers/char/hangcheck-timer.o
13. Configure RAC Nodes for Remote Access

Perform the following configuration procedures on all nodes in the cluster!

When running the Oracle Universal Installer on a RAC node, it will use the rsh (or ssh) command to copy the Oracle software to all other nodes within the RAC cluster. The oracle UNIX account on the node running the Oracle Installer (runInstaller) must be trusted by all other nodes in your RAC cluster. Therefore you should be able to run r* commands like rsh, rcp, and rlogin on the Linux server you will be running the Oracle installer from, against all other Linux servers in the cluster without a password. The rsh daemon validates users using the /etc/hosts.equiv file or the .rhosts file found in the user's (oracle's) home directory. (The use of rcp and rsh is not required for normal RAC operation; however, rcp and rsh should be enabled for RAC and patchset installation.)

Oracle added support in 10g for using the Secure Shell (SSH) tool suite for setting up user equivalence. This article, however, uses the older method of rcp for copying the Oracle software to the other nodes in the cluster. When using the SSH tool suite, the scp (as opposed to the rcp) command would be used to copy the software in a very secure manner.

First, let's make sure that we have the rsh RPMs installed on each node in the RAC cluster:

# rpm -q rsh rsh-server

The output should show that both the rsh and rsh-server packages are installed. Were rsh not installed, we would run the following command from the CD where the RPM is located:

# su -

# rpm -ivh rsh-0.17-17.i386.rpm rsh-server-0.17-17.i386.rpm

To enable the "rsh" service, the "disable" attribute in the /etc/xinetd.d/rsh file must be set to "no" and xinetd must be reloaded. Do that by running the following commands on all nodes in the cluster:

# su -

# chkconfig rsh on

# chkconfig rlogin on

# service xinetd reload

Reloading configuration: [ OK ]

To allow the "oracle" UNIX user account to be trusted among the RAC nodes, create the /etc/hosts.equiv file on all nodes in the cluster:

# su -

# touch /etc/hosts.equiv

# chmod 600 /etc/hosts.equiv

# chown root.root /etc/hosts.equiv

Now add all RAC nodes to the /etc/hosts.equiv file similar to the following example for all nodes in the cluster:

# cat /etc/hosts.equiv

+linux1 oracle

+linux2 oracle

+int-linux1 oracle

+int-linux2 oracle

(Note: In the above example, the second field permits only the oracle user account to run rsh commands on the specified nodes. For security reasons, the /etc/hosts.equiv file should be owned by root and the permissions should be set to 600. In fact, some systems will only honor the content of this file if the owner is root and the permissions are set to 600.)
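Since the four entries follow one pattern, they can also be generated from a node list. A sketch that writes to a scratch file rather than /etc/hosts.equiv, using the hostnames from this guide:

```shell
# Generate the hosts.equiv entries for every public and private hostname
# used in this guide; each entry trusts only the oracle account.
out=/tmp/hosts.equiv.sketch            # the guide writes /etc/hosts.equiv
: > "$out"
for node in linux1 linux2 int-linux1 int-linux2; do
    echo "+${node} oracle" >> "$out"
done
cat "$out"
```

On the real nodes you would write /etc/hosts.equiv and then apply the root ownership and 600 permissions shown above.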

Before attempting to test your rsh command, ensure that you are using the correct version of rsh. By default, Red Hat Linux puts /usr/kerberos/sbin at the head of the $PATH variable. This will cause the Kerberos version of rsh to be executed.

I will typically rename the Kerberos version of rsh so that the normal rsh command is being used. Use the following:

# su -

# which rsh

/usr/kerberos/bin/rsh

# cd /usr/kerberos/bin

# mv rsh rsh.original

# which rsh

/usr/bin/rsh
You should now test your connections and run the rsh command from the node that will be performing the Oracle CRS and 10g RAC installation. We will use the node linux1 to perform the install, so run the following commands from that node:

# su - oracle
$ rsh linux1 ls -l /etc/hosts.equiv

-rw------- 1 root root 68 Jan 31 00:39 /etc/hosts.equiv
$ rsh int-linux1 ls -l /etc/hosts.equiv

-rw------- 1 root root 68 Jan 31 00:39 /etc/hosts.equiv
$ rsh linux2 ls -l /etc/hosts.equiv

-rw------- 1 root root 68 Jan 31 00:25 /etc/hosts.equiv
$ rsh int-linux2 ls -l /etc/hosts.equiv

-rw------- 1 root root 68 Jan 31 00:25 /etc/hosts.equiv
14. All Startup Commands for Each RAC Node

Verify that the following startup commands are included on all nodes in the cluster!

Up to this point, we have examined in great detail the parameters and resources that need to be configured on all nodes for the Oracle RAC 10g configuration. In this section we will take a "deep breath" and recap those parameters, commands, and entries (in previous sections of this document) that you must include in the startup scripts for each Linux node in the RAC cluster.

For each of the startup files below, the entries shown should be included in each startup file.

/etc/modules.conf

(All parameters and values to be used by kernel modules.)

alias eth0 tulip

alias eth1 b44

alias sound-slot-0 i810_audio

post-install sound-slot-0 /bin/aumix-minimal -f /etc/.aumixrc -L >/dev/null 2>&1 || :

pre-remove sound-slot-0 /bin/aumix-minimal -f /etc/.aumixrc -S >/dev/null 2>&1 || :

alias usb-controller usb-uhci

alias usb-controller1 ehci-hcd

alias ieee1394-controller ohci1394

options sbp2 sbp2_exclusive_login=0

post-install sbp2 insmod sd_mod

post-install sbp2 insmod ohci1394

post-remove sbp2 rmmod sd_mod

options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180


/etc/sysctl.conf

(We wanted to adjust the default and maximum send buffer size as well as the default and maximum receive buffer size for the interconnect.)

# Kernel sysctl configuration file for Red Hat Linux


# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and

# sysctl.conf(5) for more details.
# Controls IP packet forwarding

net.ipv4.ip_forward = 0
# Controls source route verification

net.ipv4.conf.default.rp_filter = 1
# Controls the System Request debugging functionality of the kernel

kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.

# Useful for debugging multi-threaded applications.

kernel.core_uses_pid = 1
# Default setting in bytes of the socket receive buffer

# Default setting in bytes of the socket send buffer

# Maximum socket receive buffer size which may be set by using

# the SO_RCVBUF socket option

# Maximum socket send buffer size which may be set by using

# the SO_SNDBUF socket option



/etc/hosts

(All machine/IP entries for nodes in our RAC cluster.)

# Do not remove the following line, or various programs

# that require network functionality will fail.
127.0.0.1        localhost.localdomain localhost

# Public Network - (eth0)
linux1 linux2

# Private Interconnect - (eth1)
int-linux1 int-linux2

# Public Virtual IP (VIP) addresses for - (eth0)
vip-linux1 vip-linux2

melody alex bartman


/etc/hosts.equiv

(Allow logins to each node as the oracle user account without the need for a password.)

+linux1 oracle

+linux2 oracle

+int-linux1 oracle

+int-linux2 oracle


/etc/grub.conf

(Determine which kernel to use when the node is booted.)

# grub.conf generated by anaconda


# Note that you do not have to rerun grub after making changes to this file

# NOTICE: You have a /boot partition. This means that

# all kernel and initrd paths are relative to /boot/, eg.

# root (hd0,0)

# kernel /vmlinuz-version ro root=/dev/hda2

# initrd /initrd-version.img





title White Box Enterprise Linux (2.4.21-27.0.2.ELorafw1)

root (hd0,0)

kernel /vmlinuz-2.4.21-27.0.2.ELorafw1 ro root=LABEL=/

initrd /initrd-2.4.21-27.0.2.ELorafw1.img

title White Box Enterprise Linux (2.4.21-15.EL)

root (hd0,0)

kernel /vmlinuz-2.4.21-15.EL ro root=LABEL=/

initrd /initrd-2.4.21-15.EL.img


/etc/rc.local

(These commands are responsible for configuring shared memory, semaphores, and file handles for use by the Oracle instance.)



# This script will be executed *after* all the other init scripts.

# You can put your own initialization stuff in here if you don't

# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
# +---------------------------------------------------------+
# | Shared memory                                           |
# +---------------------------------------------------------+
echo "2147483648" > /proc/sys/kernel/shmmax

echo "4096" > /proc/sys/kernel/shmmni

# +---------------------------------------------------------+
# | Semaphores:                                             |
# | SEMMSL_value SEMMNS_value SEMOPM_value SEMMNI_value     |
# +---------------------------------------------------------+
echo "256 32000 100 128" > /proc/sys/kernel/sem

# +---------------------------------------------------------+
# | File handles                                            |
# +---------------------------------------------------------+
echo "65536" > /proc/sys/fs/file-max

# +---------------------------------------------------------+
# | Load the hangcheck-timer kernel module                  |
# | (I do not believe this is required, but doesn't hurt)   |
# +---------------------------------------------------------+
/sbin/modprobe hangcheck-timer
15. Check RPM Packages for Oracle 10g

Perform the following checks on all nodes in the cluster!

When installing the Linux O/S (RHEL 3 or WBEL), you should verify that all required RPMs are installed. If you followed the instructions I used for installing Linux, you would have installed Everything, in which case you will have all of the required RPM packages. However, if you performed another installation type (i.e. Advanced Server), you may have some packages missing and will need to install them. All of the required RPMs are on the Linux CDs/ISOs.

Check Required RPMs

The following packages (or higher versions) must be installed:















To query package information (gcc and glibc-devel for example), use the "rpm -q <PackageName> [, <PackageName>]" command as follows:

# rpm -q gcc glibc-devel



If you need to install any of the above packages, use "rpm -Uvh <PackageName>". For example, to install the GCC 3.2.3-24 package, use:

# rpm -Uvh gcc-3.2.3-24.i386.rpm

Reboot the System

At this point, reboot all nodes in the cluster before attempting to install any of the Oracle components!!!

# init 6
16. Install and Configure OCFS

Most of the configuration procedures in this section should be performed on all nodes in the cluster! Creating the OCFS filesystem, however, should be executed on only one node in the cluster.

It is now time to install the Oracle Cluster File System (OCFS). OCFS was developed by Oracle to remove the burden of managing raw devices from DBAs and Sysadmins. It provides the same functionality and feel of a normal filesystem.

In this guide, we will use OCFS version 1 to store the two files that are required to be shared by CRS. (These will be the only two files stored on the OCFS.) This release of OCFS does NOT support using the filesystem for a shared Oracle Home install (the Oracle Database software). This feature will be available in a future release of OCFS, possibly version 2. Here, we will install the Oracle Database software to a separate $ORACLE_HOME directory locally on each Oracle Linux server in the cluster.

In version 1, OCFS supports only the following types of files:

  • Oracle database files

  • Online Redo Log files

  • Archived Redo Log files

  • Control files

  • Server Parameter file (SPFILE)

  • Oracle Cluster Registry (OCR) file

  • CRS Voting disk.

The Linux binaries used to manipulate files and directories (move, copy, tar, etc.) should not be used on OCFS. These binaries are part of the standard system commands and come with the OS (i.e. mv, cp, tar, etc.); they have a major performance impact when used on the OCFS filesystem. You should instead use Oracle's patched version of these commands. Keep this in mind when using third-party backup tools that also make use of the standard system commands (i.e. mv, tar, etc.).

See this document for more information on OCFS version 1 (including Installation Notes) for RHEL.

Downloading OCFS

First, download the OCFS files (driver, tools, support) from the Oracle Linux Projects Development Group web site (http://oss.oracle.com/projects/ocfs/files/RedHat/RHEL3/i386/). This page will contain several releases of the OCFS files for different versions of the Linux kernel. First, download the key OCFS drivers for either a single processor or a multiple processor Linux server:

ocfs-2.4.21-EL-1.0.14-1.i686.rpm - (for single processor)


ocfs-2.4.21-EL-smp-1.0.14-1.i686.rpm - (for multiple processors)

You will also need to download the following two support files:

ocfs-support-1.0.10-1.i386.rpm - (1.0.10-1 support package)
ocfs-tools-1.0.10-1.i386.rpm - (1.0.10-1 tools package)

If you are unsure which OCFS driver release you need, use the OCFS release that matches your kernel version. To determine your kernel release:

$ uname -a

Linux linux1 2.4.21-27.0.1.ELorafw1 #1 Tue Dec 28 16:58:59 PST 2004 i686 i686 i386 GNU/Linux

If the string "smp" is absent after the string "ELorafw1", you are running a single-processor (uniprocessor) machine. If the string "smp" appears, you are running on a multi-processor machine.
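That check reduces to a simple case statement. A sketch using the kernel string from the uname output above and the RPM names listed earlier; on a live system you would substitute `kernel=$(uname -r)`:

```shell
# Choose the OCFS driver RPM based on whether the kernel version string
# contains "smp". The kernel string here mirrors the guide's example.
kernel="2.4.21-27.0.1.ELorafw1"        # on a live system: $(uname -r)
case "$kernel" in
    *smp*) pkg="ocfs-2.4.21-EL-smp-1.0.14-1.i686.rpm" ;;  # multi-processor
    *)     pkg="ocfs-2.4.21-EL-1.0.14-1.i686.rpm" ;;      # single processor
esac
echo "download: $pkg"
```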

Installing OCFS

We will be installing the OCFS files onto two single-processor machines. The installation process is simply a matter of running the following command on all nodes in the cluster as the root user account:

$ su -

# rpm -Uvh ocfs-2.4.21-EL-1.0.14-1.i686.rpm \

ocfs-support-1.0.10-1.i386.rpm \

ocfs-tools-1.0.10-1.i386.rpm
Preparing... ########################################### [100%]

1:ocfs-support ########################################### [ 33%]

2:ocfs-2.4.21-EL ########################################### [ 67%]

Linking OCFS module into the module path [ OK ]

3:ocfs-tools ########################################### [100%]

Configuring and Loading OCFS

The next step is to generate and configure the /etc/ocfs.conf file. The easiest way to accomplish that is to run the GUI tool ocfstool. We will need to do that on all nodes in the cluster as the root user account:

$ su -

# ocfstool &

This will bring up the GUI as shown below:

Figure 6. ocfstool GUI

Using the ocfstool GUI tool, perform the following steps:

  1. Select [Task] - [Generate Config]

  2. In the "OCFS Generate Config" dialog, enter the interface and DNS Name for the private interconnect. In our example, this would be eth1 and int-linux1 for the node linux1 and eth1 and int-linux2 for the node linux2.

  3. After verifying all values are correct on all nodes, exit the application.

The following dialog shows the settings I used for the node linux1:

Figure 7. ocfstool Settings

After exiting the ocfstool, you will have a /etc/ocfs.conf similar to the following:


# ocfs config

# Ensure this file exists in /etc

node_name = int-linux1

ip_address =

ip_port = 7000

comm_voting = 1

guid = 8CA1B5076EAF47BE6AA0000D56FC39EC

Notice the guid value. This is a unique identifier for the node and must differ on every node in the cluster. Also keep in mind that the /etc/ocfs.conf file could have been created manually, or by simply running the ocfs_uid_gen -c command, which will assign (or update) the GUID value in the file.

The next step is to load the ocfs.o kernel module. Like all steps in this section, run the following command on all nodes as the root user account:

$ su -

# /sbin/load_ocfs

/sbin/insmod ocfs node_name=int-linux1 ip_address= cs=1891

guid=8CA1B5076EAF47BE6AA0000D56FC39EC comm_voting=1 ip_port=7000

Using /lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o

Warning: kernel-module version mismatch

/lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o was compiled for kernel version 2.4.21-27.EL

while this kernel is version 2.4.21-27.0.2.ELorafw1

Warning: loading /lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o will taint the kernel: forced load

See http://www.tux.org/lkml/#export-tainted for information about tainted modules

Module ocfs loaded, with warnings

The two warnings (above) can safely be ignored! To verify that the kernel module was loaded, run the following:

# /sbin/lsmod |grep ocfs

ocfs 299072 0 (unused)

(Note: The ocfs module will stay loaded until the machine is cycled. I will provide instructions for how to load the module automatically later.)

Many types of errors can occur while attempting to load the ocfs module. I have not run into any of these problems, so I include them here only for documentation purposes!

One common error looks like this:

# /sbin/load_ocfs

/sbin/insmod ocfs node_name=int-linux1 \

ip_address= \

cs=1891 \

guid=8CA1B5076EAF47BE6AA0000D56FC39EC \

comm_voting=1 ip_port=7000

Using /lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o

/lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o: kernel-module version mismatch

/lib/modules/2.4.21-EL-ABI/ocfs/ocfs.o was compiled for kernel version 2.4.21-4.EL

while this kernel is version 2.4.21-15.ELorafw1.

This usually means you have the wrong version of the modutils RPM. Get the latest version of modutils and use the following command to update your system:

rpm -Uvh modutils-devel-2.4.25-12.EL.i386.rpm

Other problems can occur when using FireWire. If you are still having trouble loading and verifying the ocfs module, try the following on all nodes that are having the error, as the "root" user account:

$ su -

# mkdir -p /lib/modules/`uname -r`/kernel/drivers/addon/ocfs

# ln -s `rpm -qa | grep ocfs-2 | xargs rpm -ql | grep "/ocfs.o$"` \

/lib/modules/`uname -r`/kernel/drivers/addon/ocfs/ocfs.o

Thanks to Werner Puschitz for coming up with the above solutions!

Creating an OCFS Filesystem

(Note: Unlike the other tasks in this section, creating the OCFS filesystem should be executed only on one node. We will be executing all commands in this section from linux1 only.)

Finally, we can start making use of those partitions we created in Section 10 ("Create Partitions on the Shared FireWire Storage Device"). Well, at least the first partition!

To create the file system, we use the Oracle executable /sbin/mkfs.ocfs. For the purpose of this example, I run the following command only from linux1 as the root user account:

$ su -

# mkfs.ocfs -F -b 128 -L /u02/oradata/orcl -m /u02/oradata/orcl -u '175' -g '115' -p 0775 /dev/sda1

Cleared volume header sectors

Cleared node config sectors

Cleared publish sectors

Cleared vote sectors

Cleared bitmap sectors

Cleared data block

Wrote volume header

The following should be noted with the above command:

  • The -u argument is the User ID for the oracle user. This can be obtained using the command id -u oracle and should be the same on all nodes.

  • The -g argument is the Group ID for the oracle:dba user:group. This can be obtained using the command id -g oracle and should be the same on all nodes.

  • /dev/sda1 is the device name (or partition) to use for this filesystem. We created the /dev/sda1 for storing the Cluster Manager files.
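Rather than hard-coding 175 and 115, the -u and -g arguments can be derived with the id command. A sketch that builds the mkfs.ocfs command line; it queries the invoking user so it is runnable anywhere, whereas on the RAC nodes you would query the oracle account:

```shell
# Build the -u/-g arguments for mkfs.ocfs from the account that will own
# the filesystem. In the guide this is the oracle user; the current user
# is used here only so the sketch runs anywhere.
user=$(id -un)                         # on a RAC node: user=oracle
uid=$(id -u "$user")
gid=$(id -g "$user")
echo "mkfs.ocfs -F -b 128 -L /u02/oradata/orcl -m /u02/oradata/orcl -u '${uid}' -g '${gid}' -p 0775 /dev/sda1"
```

Because the command is only echoed, nothing is formatted; you would run the printed command (as root, from one node only) after checking it.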

The following is a list of the options available with the mkfs.ocfs command:

usage: mkfs.ocfs -b block-size [-C] [-F]

[-g gid] [-h] -L volume-label

-m mount-path [-n] [-p permissions]

[-q] [-u uid] [-V] device
-b Block size in kilobytes

-C Clear all data blocks

-F Force format existing OCFS volume

-g GID for the root directory

-h Help

-L Volume label

-m Path where this device will be mounted

-n Query only

-p Permissions for the root directory

-q Quiet execution

-u UID for the root directory

-V Print version and exit

One final note about creating OCFS filesystems: you can use the GUI tool ocfstool to perform the same task as the command-line mkfs.ocfs. From the ocfstool utility, use the menu [Tasks] - [Format].

Mounting the OCFS Filesystem

Now that the file system is created, we can mount it. Let's first do it using the command line, then I'll show how to include it in the /etc/fstab to have it mount on each boot. We will need to mount the filesystem on all nodes as the root user account.

First, here is how to manually mount the OCFS filesystem from the command line. Remember to do this as root:

$ su -

# mount -t ocfs /dev/sda1 /u02/oradata/orcl

If the mount was successful, you will simply get your prompt back. We should, however, run the following checks to ensure the filesystem is mounted correctly with the right permissions. You should run these manual checks on all nodes.

First, let's use the mount command to ensure that the new filesystem is really mounted. This step should be performed on all nodes:

# mount

/dev/hda2 on / type ext3 (rw)

none on /proc type proc (rw)

none on /dev/pts type devpts (rw,gid=5,mode=620)

usbdevfs on /proc/bus/usb type usbdevfs (rw)

/dev/hda1 on /boot type ext3 (rw)

none on /dev/shm type tmpfs (rw)

/dev/sda1 on /u02/oradata/orcl type ocfs (rw)

Next, use the ls command to check ownership. The permissions should be set to 0775 with owner oracle and group dba. If this is not the case for all nodes in the cluster, then it is likely that the oracle UID (175 in this example) and/or the dba GID (115 in this example) are not consistent across all nodes.

# ls -ld /u02/oradata/orcl

drwxrwxr-x 1 oracle dba 131072 Feb 2 18:02 /u02/oradata/orcl
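The ls check above can also be scripted. A sketch that verifies the 0775 permissions; it runs against a scratch directory it creates, since /u02/oradata/orcl only exists on the RAC nodes:

```shell
# Verify that a mount point carries the 0775 permissions expected for
# the OCFS mount; a scratch directory stands in for /u02/oradata/orcl.
dir=/tmp/ocfs-perm-check               # on a RAC node: dir=/u02/oradata/orcl
mkdir -p "$dir"
chmod 0775 "$dir"
perms=$(stat -c '%a' "$dir")           # GNU stat: octal permission bits
if [ "$perms" = "775" ]; then
    echo "permissions OK"
else
    echo "permissions are $perms, expected 775"
fi
```

On the real mount point you would additionally compare the owner and group against oracle:dba, as described above.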

Configuring OCFS to Mount Automatically at Startup

Let's take a look at what we have done so far. We downloaded and installed OCFS, which will be used to store the Cluster Manager files. After going through the install, we loaded the OCFS module into the kernel and then created the cluster filesystem. Finally, we mounted the newly created filesystem. This section walks through the steps responsible for loading the OCFS module and ensuring the filesystem(s) are mounted each time the machine(s) are booted.

We start by adding the following line to the /etc/fstab file on all nodes:

/dev/sda1 /u02/oradata/orcl ocfs _netdev 0 0

(Notice the _netdev option for mounting this filesystem. This option prevents the OCFS from being mounted until all networking services are enabled.)
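A quick way to confirm the option made it into the file is to grep for it. This sketch checks a scratch copy of the fstab line from this guide rather than /etc/fstab itself:

```shell
# Check that the OCFS entry carries the _netdev option so the mount is
# deferred until networking is up.
fstab=/tmp/fstab.sketch                # on a RAC node: fstab=/etc/fstab
echo "/dev/sda1 /u02/oradata/orcl ocfs _netdev 0 0" > "$fstab"
if grep -E '^[^#].*[[:space:]]ocfs[[:space:]].*_netdev' "$fstab" >/dev/null; then
    echo "OCFS _netdev entry OK"
else
    echo "OCFS entry missing _netdev"
fi
```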

Now, let's make sure that the ocfs.o kernel module is being loaded and that the filesystem will be mounted during the boot process.

If you have been following along with the examples in this guide, the actions to load the kernel module and mount the OCFS filesystem should already be enabled. However, we should still check those options by running the following on all nodes as root:

$ su -

# chkconfig --list ocfs

ocfs 0:off 1:off 2:on 3:on 4:on 5:on 6:off

The flags for runlevels 2, 3, 4, and 5 should be set to on. If for some reason these options are set to off, you can use the following command to enable them:

$ su -

# chkconfig ocfs on

(Note that loading the ocfs.o kernel module will also mount the OCFS filesystem(s) configured in /etc/fstab!)

Before starting the next section, this would be a good place to reboot all the nodes in the cluster. When the machines come up, ensure that the ocfs.o kernel module is being loaded and that the filesystem we created is being mounted.

17. Install and Configure Automatic Storage Management and Disks

Most of the installation and configuration procedures should be performed on all nodes. Creating the ASM disks, however, will only need to be performed on a single node within the cluster.

In this section, we will configure Automatic Storage Management (ASM) to be used as the filesystem/volume manager for all Oracle physical database files (data, online redo logs, control files, archived redo logs).

ASM was introduced in Oracle Database 10g and relieves the DBA from having to manage individual files and drives. ASM is built into the Oracle kernel and provides the DBA with a way to manage thousands of disk drives 24x7 for single as well as clustered instances. All the files and directories to be used for Oracle will be contained in a disk group. ASM automatically performs load balancing in parallel across all available disk drives to prevent hot spots and maximize performance, even with rapidly changing data usage patterns.

First we'll discuss the ASMLib libraries and the associated driver for Linux, plus other methods for configuring ASM with Linux. Next, I will provide instructions for downloading the ASM drivers (ASMLib Release 1.0) specific to our Linux kernel. Finally, we will install and configure the ASM drivers while finishing off the section with a demonstration of how we created the ASM disks.

If you would like to learn more about the ASMLib, visit www.oracle.com/technology/tech/linux/asmlib/install.html.

Methods for Configuring ASM with Linux (For Reference Only)

When I first started this guide, I wanted to focus on using ASM for all database files. I was curious to see how well ASM works with this test RAC configuration with regard to load balancing and fault tolerance.

There are two different methods to configure ASM on Linux:

  • ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices managed by ASM using ASMLib calls. Raw devices are not required with this method as ASMLib works with block devices.

  • ASM with Standard Linux I/O: This method creates all Oracle database files on raw character devices managed by ASM using standard Linux I/O system calls. You will be required to create raw devices for all disk partitions used by ASM.

We will examine the "ASM with ASMLib I/O" method here.

Before discussing the installation and configuration details of ASMLib, however, I thought it would be interesting to talk briefly about the second method, "ASM with Standard Linux I/O". If you were to use this method (which is a perfectly valid solution, just not the method we will be implementing), you should be aware that Linux does not use raw devices by default. Every Linux raw device you want to use must be bound to the corresponding block device using the raw driver. For example, if you wanted to use the partitions we've created (/dev/sda2, /dev/sda3, and /dev/sda4), you would need to perform the following tasks:

  1. Edit the file /etc/sysconfig/rawdevices as follows:

     # raw device bindings
     # format:  <rawdev> <major> <minor>
     #          <rawdev> <blockdev>
     # example: /dev/raw/raw1 /dev/sda1
     #          /dev/raw/raw2 8 5
     /dev/raw/raw2 /dev/sda2
     /dev/raw/raw3 /dev/sda3
     /dev/raw/raw4 /dev/sda4

The raw device bindings will be created on each reboot.

  2. You would then want to change ownership of all raw devices to the "oracle" user account:

     # chown oracle:dba /dev/raw/raw2; chmod 660 /dev/raw/raw2
     # chown oracle:dba /dev/raw/raw3; chmod 660 /dev/raw/raw3
     # chown oracle:dba /dev/raw/raw4; chmod 660 /dev/raw/raw4

  3. The last step is to reboot the server to bind the devices, or simply restart the rawdevices service:

     # service rawdevices restart
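The bindings above can likewise be generated from the partition list. A sketch that writes to a scratch file rather than /etc/sysconfig/rawdevices, numbering from raw2 to match the example:

```shell
# Generate raw-device bindings for the three ASM partitions used in this
# guide. Only for the "ASM with Standard Linux I/O" method.
out=/tmp/rawdevices.sketch        # the guide uses /etc/sysconfig/rawdevices
: > "$out"
i=2
for part in /dev/sda2 /dev/sda3 /dev/sda4; do
    echo "/dev/raw/raw${i} ${part}" >> "$out"
    i=$((i + 1))
done
cat "$out"
```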

As I mentioned earlier, the above example was just to demonstrate that there is more than one method for using ASM with Linux. Now let's move on to the method that will be used for this article, "ASM with ASMLib I/O."

Downloading the ASMLib Packages

As with OCFS, we need to download the version for the Linux kernel and number of processors on the machine. We are using kernel 2.4.21 and the machines I am using are both single-processor machines:

# uname -a

Linux linux1 2.4.21-27.0.2.ELorafw1 #1 Tue Dec 28 16:58:59 PST 2004 i686 i686 i386 GNU/Linux


Installing ASMLib Packages

This installation needs to be performed on all nodes as the root user account:

$ su -

# rpm -Uvh oracleasm-2.4.21-EL-1.0.3-1.i686.rpm \

oracleasmlib-1.0.0-1.i386.rpm \


Preparing... ########################################### [100%]

1:oracleasm-support ########################################### [ 33%]

2:oracleasm-2.4.21-EL ########################################### [ 67%]

Linking module oracleasm.o into the module path [ OK ]

3:oracleasmlib ########################################### [100%]

Configuring and Loading the ASMLib Packages

Now that we have downloaded and installed the ASMLib packages for Linux, we need to configure and load the ASM kernel module. This task needs to be run on all nodes as root:

$ su -

# /etc/init.d/oracleasm configure

Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library driver. The following questions will determine whether the driver is loaded on boot and what permissions it will have. The current values will be shown in brackets ('[]'). Hitting <ENTER> without typing an answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle

Default group to own the driver interface []: dba

Start Oracle ASM library driver on boot (y/n) [n]: y

Fix permissions of Oracle ASM disks on boot (y/n) [y]: y

Writing Oracle ASM library driver configuration [ OK ]

Creating /dev/oracleasm mount point [ OK ]

Loading module "oracleasm" [ OK ]

Mounting ASMlib driver filesystem [ OK ]

Scanning system for ASM disks [ OK ]

Creating ASM Disks for Oracle

In Section 10 ("Create Partitions on the Shared FireWire Storage Device"), we created three Linux partitions to be used for storing Oracle database files such as online redo logs, database files, control files, SPFILEs, and archived redo log files.

Here is a list of the partitions we created:

Oracle ASM Partitions Created

Filesystem Type    Partition     File Types
ASM                /dev/sda2     Oracle Database Files
ASM                /dev/sda3     Oracle Database Files
ASM                /dev/sda4     Oracle Database Files
