
Build Your Own Oracle RAC 10g Cluster on Linux and FireWire
by Jeffrey Hunter

Learn how to set up and configure an Oracle RAC 10g development cluster for less than US$1,800.

Contents

  1. Introduction

  2. Oracle RAC 10g Overview

  3. Shared-Storage Overview

  4. FireWire Technology

  5. Hardware & Costs

  6. Install the Linux Operating System

  7. Network Configuration

  8. Obtain and Install a Proper Linux Kernel

  9. Create "oracle" User and Directories

  10. Creating Partitions on the Shared FireWire Storage Device

  11. Configure the Linux Servers

  12. Configure the hangcheck-timer Kernel Module

  13. Configure RAC Nodes for Remote Access

  14. All Startup Commands for Each RAC Node

  15. Check RPM Packages for Oracle 10g

  16. Install and Configure Oracle Cluster File System

  17. Install and Configure Automatic Storage Management and Disks

  18. Download Oracle RAC 10g Software

  19. Install Oracle Cluster Ready Services Software

  20. Install Oracle Database 10g Software

  21. Create TNS Listener Process

  22. Create the Oracle Cluster Database

  23. Verify TNS Networking Files

  24. Create / Alter Tablespaces

  25. Verify the RAC Cluster/Database Configuration

  26. Starting & Stopping the Cluster

  27. Transparent Application Failover

  28. Conclusion

  29. Acknowledgements




Downloads for this guide:
White Box Enterprise Linux 3 or Red Hat Enterprise Linux 3
Oracle Cluster File System
Oracle Database 10g EE and Cluster Ready Services
Precompiled FireWire Kernel for WBEL/RHEL
ASMLib Drivers





1. Introduction

One of the most efficient ways to become familiar with Oracle Real Application Clusters (RAC) 10g technology is to have access to an actual Oracle RAC 10g cluster. There's no better way to understand its benefits—including fault tolerance, security, load balancing, and scalability—than to experience them directly.

Unfortunately, for many shops, the price of the hardware required for a typical production RAC configuration makes this goal impossible. A small two-node cluster can cost from US$10,000 to well over US$20,000. That cost does not even include the heart of a production RAC environment—typically a storage area network—which starts at US$8,000.

For those who want to become familiar with Oracle RAC 10g without a major cash outlay, this guide provides a low-cost alternative: configuring an Oracle RAC 10g system using commercial off-the-shelf components and downloadable software at an estimated cost of US$1,200 to US$1,800. The system comprises a dual-node cluster (each node with a single processor) running Linux (White Box Enterprise Linux 3.0 Respin 1 or Red Hat Enterprise Linux 3) with shared disk storage based on IEEE1394 (FireWire) drive technology. (Of course, you could also consider building a virtual cluster on a VMware Virtual Machine, but the experience won't quite be the same!)

This guide does not work (yet) with the latest Red Hat Enterprise Linux 4 release (Linux kernel 2.6). Although Oracle's Linux Development Team provides a stable (patched) precompiled 2.6-compatible kernel for use with FireWire, a stable release of OCFS version 2—which is required for the 2.6 kernel—is not yet available. When that release becomes available, I will update this guide to support RHEL4.

Please note that this is not the only way to build a low-cost Oracle RAC 10g system. I have seen other solutions that use SCSI rather than FireWire for shared storage. In most cases, SCSI will cost more than the FireWire solution: a typical SCSI card is priced around US$70, and an 80GB external SCSI drive will cost US$700 to US$1,000. Keep in mind that some motherboards may already include built-in SCSI controllers.

It is important to note that this configuration should never be run in a production environment and that it is not supported by Oracle or any other vendor. In a production environment, Fibre Channel—the high-speed serial-transfer interface that can connect systems and storage devices in either point-to-point or switched topologies—is the technology of choice. FireWire offers a low-cost alternative to Fibre Channel for testing and development, but it is not ready for production.

Although in past experience I have used raw partitions for storing files on shared storage, here we will make use of the Oracle Cluster File System (OCFS) and Oracle Automatic Storage Management (ASM). The two Linux servers will be configured as follows:

Oracle Database Files

  RAC Node Name   Instance Name   Database Name   $ORACLE_BASE      File System
  -------------   -------------   -------------   ---------------   -----------
  linux1          orcl1           orcl            /u01/app/oracle   ASM
  linux2          orcl2           orcl            /u01/app/oracle   ASM


Oracle CRS Shared Files

  File Type                 File Name                   Partition   Mount Point         File System
  -----------------------   -------------------------   ---------   -----------------   -----------
  Oracle Cluster Registry   /u02/oradata/orcl/OCRFile   /dev/sda1   /u02/oradata/orcl   OCFS
  CRS Voting Disk           /u02/oradata/orcl/CSSFile   /dev/sda1   /u02/oradata/orcl   OCFS

The Oracle Cluster Ready Services (CRS) software will be installed to /u01/app/oracle/product/10.1.0/crs_1 on each of the nodes that make up the RAC cluster. However, the CRS software requires that two of its files, the Oracle Cluster Registry (OCR) file and the CRS Voting Disk file, be shared with all nodes in the cluster. These two files will be installed on the shared storage using OCFS. It is also possible (but not recommended by Oracle) to use raw devices for these files.
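
As a quick sanity check of that shared layout (a minimal sketch, assuming the mount point and file names from the table above and that the OCFS setup in section 16 is complete), you could verify from either node that the OCFS volume is mounted and that both shared files are visible:

    # Confirm the OCFS volume is mounted at the shared mount point
    mount | grep -i ocfs

    # Both CRS shared files should be visible from every node in the cluster
    ls -l /u02/oradata/orcl/OCRFile /u02/oradata/orcl/CSSFile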

The Oracle Database 10g software will be installed into a separate Oracle Home; namely /u01/app/oracle/product/10.1.0/db_1. All of the Oracle physical database files (data, online redo logs, control files, archived redo logs) will be installed to different partitions of the shared drive being managed by ASM. (The Oracle database files can just as easily be stored on OCFS. Using ASM, however, makes the article that much more interesting!)
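
Once ASM is configured (section 17), one illustrative way to confirm that the shared disk groups are available is to query V$ASM_DISKGROUP from the ASM instance. This is only a sketch; the actual disk group names depend on how you create them later in this guide:

    # Run as the "oracle" user with the ASM instance's environment (ORACLE_SID, ORACLE_HOME) set
    echo 'SELECT name, state, total_mb, free_mb FROM v$asm_diskgroup;' | sqlplus -s "/ as sysdba"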

Note: A previously published Oracle9i RAC version of this guide is also available.

2. Oracle RAC 10g Overview

Oracle RAC, introduced with Oracle9i, is the successor to Oracle Parallel Server (OPS). RAC allows multiple instances to access the same database (storage) simultaneously. It provides fault tolerance, load balancing, and performance benefits by allowing the system to scale out, and at the same time—because all nodes access the same database—the failure of one instance will not cause the loss of access to the database.
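
To make that concrete: once the orcl cluster database built later in this guide is open, a single query run from either node should report both instances. This is just an illustrative check (the instance names come from the configuration table in the Introduction):

    # Run as the "oracle" user on either node, with the local instance's environment set
    echo 'SELECT inst_id, instance_name, host_name, status FROM gv$instance;' | sqlplus -s "/ as sysdba"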

At the heart of Oracle RAC is a shared disk subsystem. All nodes in the cluster must be able to access all of the data, redo log files, control files and parameter files for all nodes in the cluster. The data disks must be globally available to allow all nodes to access the database. Each node has its own redo log and control files but the other nodes must be able to access them in order to recover that node in the event of a system failure.
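
This per-node redo structure is visible in the data dictionary: each instance owns one redo thread, but because every log group sits on shared storage, a surviving instance can read another instance's thread for recovery. A minimal illustrative query:

    # Lists each redo log group and the thread (instance) it belongs to
    echo 'SELECT thread#, group#, members, status FROM v$log ORDER BY thread#, group#;' | sqlplus -s "/ as sysdba"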

One of the bigger differences between Oracle RAC and OPS is the presence of Cache Fusion technology. In OPS, a request for data between nodes required the data to be written to disk first before the requesting node could read it. In RAC, data blocks are shipped directly between instances over the interconnect, along with the locks that protect them, without the intermediate write to disk.

Not all clustering solutions use shared storage. Some vendors use an approach known as a federated cluster, in which data is spread across several machines rather than shared by all. With Oracle RAC 10g, however, multiple nodes use the same set of disks for storing data. With Oracle RAC, the data files, redo log files, control files, and archived log files reside on shared storage on raw-disk devices, a NAS, a SAN, ASM, or on a clustered file system. Oracle's approach to clustering leverages the collective processing power of all the nodes in the cluster and at the same time provides failover security.

For more background about Oracle RAC, visit the Oracle RAC Product Center on OTN.

3. Shared-Storage Overview

Fibre Channel is one of the most popular solutions for shared storage. As I mentioned previously, Fibre Channel is a high-speed serial-transfer interface used to connect systems and storage devices in either point-to-point or switched topologies. Protocols supported by Fibre Channel include SCSI and IP.

Fibre Channel configurations can support as many as 127 nodes and have a throughput of up to 2.12 gigabits per second. Fibre Channel, however, is very expensive; the switch alone can cost as much as US$1,000 and high-end drives can reach prices of US$300. Overall, a typical Fibre Channel setup (including cards for the servers) costs roughly US$5,000.

A less expensive alternative to Fibre Channel is SCSI. SCSI technology provides acceptable performance for shared storage, but for administrators and developers who are used to GPL-based Linux prices, even SCSI can come in over budget at around US$1,000 to US$2,000 for a two-node cluster.

Another popular solution is the Sun NFS (Network File System) found on a NAS. It can be used for shared storage but only if you are using a network appliance or something similar. Specifically, you need servers that guarantee direct I/O over NFS, TCP as the transport protocol, and read/write block sizes of 32K.
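
For reference only (NFS is not used in this guide), mount options along the following lines are what typically satisfy those requirements on Linux; the server name and export path are placeholders:

    # Hypothetical NAS export; note the TCP transport and 32K read/write sizes
    mount -t nfs -o rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,actimeo=0 \
          nas1:/vol/oradata /u02/oradata
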
4. FireWire Technology

Developed by Apple Computer and Texas Instruments, FireWire is a cross-platform implementation of a high-speed serial data bus. With its high bandwidth, long cable distances (up to 100 meters), and high-powered bus, FireWire is being used in applications such as digital video (DV), professional audio, hard drives, high-end digital still cameras, and home entertainment devices. Today, FireWire operates at transfer rates of up to 800 megabits per second, while next-generation FireWire calls for a theoretical bit rate of 1,600 Mbps and then up to a staggering 3,200 Mbps. That's 3.2 gigabits per second. This speed will make FireWire indispensable for transferring massive data files and for even the most demanding video applications, such as working with uncompressed high-definition (HD) video or multiple standard-definition (SD) video streams.

The following chart shows speed comparisons of the various types of disk interface. For each interface, I provide the maximum transfer rates in kilobits (kb), kilobytes (KB), megabits (Mb), and megabytes (MB) per second. As you can see, the capabilities of IEEE1394 compare very favorably with other available disk interface technologies.

Disk Interface                                 Speed
--------------------------------------------   -------------------------------
Serial                                         115 kb/s (0.115 Mb/s)
Parallel (standard)                            115 KB/s (0.115 MB/s)
USB 1.1                                        12 Mb/s (1.5 MB/s)
Parallel (ECP/EPP)                             3.0 MB/s
IDE                                            3.3 - 16.7 MB/s
ATA                                            3.3 - 66.6 MB/s
SCSI-1                                         5 MB/s
SCSI-2 (Fast SCSI / Fast Narrow SCSI)          10 MB/s
Fast Wide SCSI (Wide SCSI)                     20 MB/s
Ultra SCSI (SCSI-3 / Fast-20 / Ultra Narrow)   20 MB/s
Ultra IDE                                      33 MB/s
Wide Ultra SCSI (Fast Wide 20)                 40 MB/s
Ultra2 SCSI                                    40 MB/s
IEEE1394(b)                                    100 - 400 Mb/s (12.5 - 50 MB/s)
USB 2.x                                        480 Mb/s (60 MB/s)
Wide Ultra2 SCSI                               80 MB/s
Ultra3 SCSI                                    80 MB/s
Wide Ultra3 SCSI                               160 MB/s
FC-AL Fiber Channel                            100 - 400 MB/s



5. Hardware & Costs

The hardware we will use to build our example Oracle RAC 10g environment comprises two Linux servers and components that you can purchase at any local computer store or over the Internet.

Server 1 - (linux1)

  1 - Dell Dimension 2400 Series - US$620
       - Intel Pentium 4 Processor at 2.80GHz
       - 1GB DDR SDRAM (at 333MHz)
       - 40GB 7200 RPM Internal Hard Drive
       - Integrated Intel 3D AGP Graphics
       - Integrated 10/100 Ethernet
       - CDROM (48X Max Variable)
       - 3.5" Floppy
       - No monitor (already had one)
       - USB Mouse and Keyboard

  1 - Ethernet LAN Card - US$20
       - Linksys 10/100 Mbps - (used for interconnect to linux2)

     Each Linux server should contain two NIC adapters. The Dell Dimension includes an integrated 10/100 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private interconnect.

  1 - FireWire Card - US$30
       - SIIG, Inc. 3-Port 1394 I/O Card

     Cards with chipsets made by VIA or TI are known to work. In addition to the SIIG, Inc. 3-Port 1394 I/O Card, I have also successfully used the Belkin FireWire 3-Port 1394 PCI Card and the StarTech 4-Port IEEE-1394 PCI FireWire Card.

Server 2 - (linux2)

  1 - Dell Dimension 2400 Series - US$620
       - Intel Pentium 4 Processor at 2.80GHz
       - 1GB DDR SDRAM (at 333MHz)
       - 40GB 7200 RPM Internal Hard Drive
       - Integrated Intel 3D AGP Graphics
       - Integrated 10/100 Ethernet
       - CDROM (48X Max Variable)
       - 3.5" Floppy
       - No monitor (already had one)
       - USB Mouse and Keyboard

  1 - Ethernet LAN Card - US$20
       - Linksys 10/100 Mbps - (used for interconnect to linux1)

     Each Linux server should contain two NIC adapters. The Dell Dimension includes an integrated 10/100 Ethernet adapter that will be used to connect to the public network. The second NIC adapter will be used for the private interconnect.

  1 - FireWire Card - US$30
       - SIIG, Inc. 3-Port 1394 I/O Card

     Cards with chipsets made by VIA or TI are known to work. In addition to the SIIG, Inc. 3-Port 1394 I/O Card, I have also successfully used the Belkin FireWire 3-Port 1394 PCI Card and the StarTech 4-Port IEEE-1394 PCI FireWire Card.


Miscellaneous Components

  1 - FireWire Hard Drive - US$260
       - Maxtor OneTouch 250GB USB 2.0 / FireWire External Hard Drive

     Ensure that the FireWire drive that you purchase supports multiple logins. If the drive has a chipset that does not allow concurrent access from more than one server, the disk and its partitions can only be seen by one server at a time. Disks with the Oxford 911 chipset are known to work. Here are the details of the disk I purchased for this test:
       Vendor: Maxtor
       Model: OneTouch
       Mfg. Part No. or KIT No.: A01A200 or A01A250
       Capacity: 200 GB or 250 GB
       Cache Buffer: 8 MB
       Spin Rate: 7200 RPM
       "Combo" Interface: IEEE 1394 and SBP-2 compliant (100 to 400 Mbits/sec) plus USB 2.0 and USB 1.1 compatible

  1 - Extra FireWire Cable - US$15
       - Belkin 6-pin to 6-pin 1394 Cable

  1 - Ethernet hub or switch - US$30
       - Linksys EtherFast 10/100 5-port Ethernet Switch
         (Used for interconnect int-linux1 / int-linux2)

  4 - Network Cables - US$5 each (US$20)
       - Category 5e patch cable - (connect linux1 to public network)
       - Category 5e patch cable - (connect linux2 to public network)
       - Category 5e patch cable - (connect linux1 to interconnect Ethernet switch)
       - Category 5e patch cable - (connect linux2 to interconnect Ethernet switch)

  Total: US$1,665


Note that the Maxtor OneTouch external drive does have two IEEE1394 (FireWire) ports, although it may not appear so at first glance. Also note that although you may be tempted to substitute the Ethernet switch (used for interconnect int-linux1/int-linux2) with a crossover CAT5 cable, I would not recommend this approach. I have found that when using a crossover CAT5 cable for the interconnect, whenever I took one of the PCs down the other PC would detect a "cable unplugged" error, and thus the Cache Fusion network would become unavailable.
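
Later, once the patched kernel from section 8 is in place and the drive is cabled to both servers, a quick way to confirm that each node can actually see the FireWire stack and the shared disk is a check along these lines (an optional, illustrative step; the module names are those of the standard Linux IEEE 1394 stack):

    # The FireWire host controller and SBP-2 storage drivers should be loaded
    lsmod | egrep 'ieee1394|ohci1394|sbp2'

    # The shared FireWire drive should then appear as a SCSI disk on both nodes
    cat /proc/scsi/scsi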

Now that we know the hardware that will be used in this example, let's take a conceptual look at what the environment looks like:


Figure 1: Architecture

As we start to go into the details of the installation, keep in mind that most tasks will need to be performed on both servers.

6. Install the Linux Operating System

This section provides a summary of the screens used to install the Linux operating system. This article was designed to work with the Red Hat Enterprise Linux 3 (AS/ES) operating environment. As an alternative (and what I used for this article), you can use White Box Enterprise Linux (WBEL), a free and stable version of the RHEL3 operating environment.

For more detailed installation instructions, you can consult the Red Hat Linux manuals. I would suggest, however, using the instructions provided below for this particular configuration.

Before installing the Linux operating system on both nodes, you should have the FireWire and two NIC interfaces (cards) installed.

Also, before starting the installation, ensure that the FireWire drive (our shared storage drive) is NOT connected to either of the two servers.
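
Once the operating system is installed, you can double-check that both NIC adapters and the FireWire controller were detected with a simple listing of the PCI devices (an optional sanity check):

    # Both Ethernet adapters and the IEEE 1394 (FireWire) controller should be listed
    /sbin/lspci | egrep -i 'ethernet|firewire|1394'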

Download the following ISO images for WBEL:

After downloading and burning the WBEL images (ISO files) to CD, insert WBEL Disk #1 into the first server (linux1 in this example), power it on, and answer the installation screen prompts as noted below. After completing the Linux installation on the first node, perform the same Linux installation on the second node, substituting the node name linux2 for linux1 and using the appropriate IP addresses (a sample /etc/hosts layout is sketched below).
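
The node names and addresses are covered in detail in section 7 (Network Configuration). As a rough sketch of what "the appropriate IP addresses" means, both nodes will end up with /etc/hosts entries along these lines; the addresses shown are examples only and should match whatever you choose for your public and private networks:

    # /etc/hosts - identical on both nodes (example addresses only)
    127.0.0.1       localhost.localdomain   localhost

    # Public network
    192.168.1.100   linux1
    192.168.1.101   linux2

    # Private interconnect
    192.168.2.100   int-linux1
    192.168.2.101   int-linux2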

Boot Screen
The first screen is the WBEL boot screen. At the boot: prompt, hit [Enter] to start the installation process.

Media Test
When asked to test the CD media, tab over to [Skip] and hit [Enter]. If there were any errors, the media burning software would have warned us. After several seconds, the installer should then detect the video card, monitor, and mouse. The installer then goes into GUI mode.

Welcome to White Box Enterprise Linux
At the welcome screen, click [Next] to continue.

Language / Keyboard / Mouse Selection
The next three screens prompt you for the Language, Keyboard, and Mouse settings. Make the appropriate selections for your configuration.

Installation Type
Choose the [Custom] option and click [Next] to continue.

Disk Partitioning Setup
Select [Automatically partition] and click [Next] to continue.

If there was a previous installation of Linux on this machine, the next screen will ask whether you want to "remove" or "keep" the old partitions. Select the option to [Remove all partitions on this system]. Also, ensure that the [hda] drive is selected for this installation. I also keep the checkbox [Review (and modify if needed) the partitions created] selected. Click [Next] to continue.

You will then be prompted with a dialog window asking if you really want to remove all partitions. Click [Yes] to acknowledge this warning.
