Archive for February, 2011

Installation of Oracle RAC 11g R2 (11.2.0.2) with ACFS file system as Database home

February 13, 2011

This article is intended as a brief guide to installing Oracle Database 11g (11.2.0.2) Real Application Clusters (RAC) on RedHat Enterprise Linux X86_64.

Environment:

Each node requires at least two NICs: one for the public IP and one for the private interconnect.

Node1 : racnode1.ukatru.com
Public IP Address : 192.168.2.52(racnode1.ukatru.com)
Virtual IP Address : 192.168.2.54(racnode1-vip.ukatru.com)

Node2:racnode2.ukatru.com

Public IP Address : 192.168.2.53(racnode2.ukatru.com)
Virtual IP Address : 192.168.2.55(racnode2-vip.ukatru.com)

We need at least 15 GB of disk on each node to install the Oracle Cluster software, ASM home, and Database home.

Set Kernel Parameters

Add the following lines to the /etc/sysctl.conf file:

# Controls the maximum number of shared memory segments, in pages
kernel.shmall = 1073741824
# For 11g, recommended value for file-max is 6815744
fs.file-max = 6815744
# For 10g, uncomment 'fs.file-max 327679', comment other entries for this parameter and re-run sysctl -p
# fs.file-max:327679
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
net.core.rmem_default = 262144
# For 11g, recommended value for net.core.rmem_max is 4194304
net.core.rmem_max = 4194304
# For 10g, uncomment 'net.core.rmem_max 2097152', comment other entries for this parameter and re-run sysctl -p
# net.core.rmem_max=2097152
net.core.wmem_default = 262144
# For 11g, recommended value for wmem_max is 1048576
net.core.wmem_max = 1048576
# For 10g, uncomment 'net.core.wmem_max 262144', comment other entries for this parameter and re-run sysctl -p
# net.core.wmem_max:262144
fs.aio-max-nr = 3145728
# For 11g, recommended value for ip_local_port_range is 9000 65500
net.ipv4.ip_local_port_range = 9000 65500
# For 10g, uncomment 'net.ipv4.ip_local_port_range 1024 65000', comment other entries for this parameter and re-run sysctl -p
# net.ipv4.ip_local_port_range:1024 65000
# Added min_free_kbytes 50MB to avoid OOM killer on EL4/EL5
vm.min_free_kbytes = 51200

[root@oral01 ~]# sysctl -p

Add the following lines to the /etc/security/limits.conf file:

oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    50000000
oracle   hard   memlock    50000000

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
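
To confirm that the new limits apply, check them from a fresh login shell; this is a quick verification, not part of the original transcript:

su - oracle -c 'ulimit -n'   # max open files (nofile)
su - oracle -c 'ulimit -u'   # max user processes (nproc)
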
Configure the iSCSI (Initiator) Service:
rpm -Uvh iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing…                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]

[root@racnode1 Server]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode1 Server]# chkconfig iscsid on
[root@racnode1 Server]# chkconfig iscsi on

[root@racnode2 Server]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode2 Server]# chkconfig iscsid on
[root@racnode2 Server]# chkconfig iscsi on
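
With the initiator enabled on both nodes, the shared LUN still has to be discovered and logged into before ASMLib can see it. A minimal sketch, assuming the same Openfiler target host and IQN naming used in the 10g article below (substitute your own target name):

iscsiadm -m discovery -t sendtargets -p sanl001
iscsiadm -m node -T iqn.2006-01.com.openfiler:<your-target> -p 192.168.2.11 --login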

Configure oracleasm :
[root@racnode1 tmp]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Create ASM disks (you need to do this on one node only).

[root@racnode1 ~]# oracleasm createdisk ASM1 /dev/sdc1
Writing disk header: done
Instantiating disk: done

Log on to racnode2 and execute the following commands to scan for ASM disks.

[root@racnode2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racnode2 ~]# /etc/init.d/oracleasm listdisks
ASM1

$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 12428 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3999 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-13_02-13-26PM. Please wait …
You can find the log of this install session at:
 /u01/app/oraInventory/logs/installActions2011-02-13_02-13-26PM.log

$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 12357 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3845 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-13_03-07-26PM. Please wait …$

[root@racnode1 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root script…

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
OLR initialization – successful
  root wallet
  root wallet cert
  root cert export
  peer wallet
  profile reader wallet
  pa wallet
  peer wallet keys
  pa wallet keys
  peer cert request
  pa cert request
  peer cert
  pa cert
  peer root cert TP
  profile reader root cert TP
  pa root cert TP
  peer pa cert TP
  pa peer cert TP
  profile reader pa cert TP
  profile reader peer cert TP
  peer user cert
  pa user cert
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies – this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start ‘ora.mdnsd’ on ‘racnode1’
CRS-2676: Start of ‘ora.mdnsd’ on ‘racnode1’ succeeded
CRS-2672: Attempting to start ‘ora.gpnpd’ on ‘racnode1’
CRS-2676: Start of ‘ora.gpnpd’ on ‘racnode1’ succeeded
CRS-2672: Attempting to start ‘ora.cssdmonitor’ on ‘racnode1’
CRS-2672: Attempting to start ‘ora.gipcd’ on ‘racnode1’
CRS-2676: Start of ‘ora.cssdmonitor’ on ‘racnode1’ succeeded
CRS-2676: Start of ‘ora.gipcd’ on ‘racnode1’ succeeded
CRS-2672: Attempting to start ‘ora.cssd’ on ‘racnode1’
CRS-2672: Attempting to start ‘ora.diskmon’ on ‘racnode1’
CRS-2676: Start of ‘ora.diskmon’ on ‘racnode1’ succeeded
CRS-2676: Start of ‘ora.cssd’ on ‘racnode1’ succeeded

ASM created and started successfully.

Disk Group CRS_VOTING created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk a8528c91f2eb4f99bf077f568d5b94b1.
Successfully replaced voting disk group with +CRS_VOTING.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   a8528c91f2eb4f99bf077f568d5b94b1 (ORCL:ASM1) [CRS_VOTING]
Located 1 voting disk(s).
CRS-2672: Attempting to start ‘ora.asm’ on ‘racnode1’
CRS-2676: Start of ‘ora.asm’ on ‘racnode1’ succeeded
CRS-2672: Attempting to start ‘ora.CRS_VOTING.dg’ on ‘racnode1’
CRS-2676: Start of ‘ora.CRS_VOTING.dg’ on ‘racnode1’ succeeded
ACFS-9200: Supported
ACFS-9200: Supported
CRS-2672: Attempting to start ‘ora.registry.acfs’ on ‘racnode1’
CRS-2676: Start of ‘ora.registry.acfs’ on ‘racnode1’ succeeded
Configure Oracle Grid Infrastructure for a Cluster … succeeded

**********************************************
Running Oracle 11g root script…

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
OLR initialization – successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies – this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node racnode1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster … succeeded

[root@racnode2 bin]# ./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora….TING.dg ora….up.type ONLINE    ONLINE    racnode1
ora….ER.lsnr ora….er.type ONLINE    ONLINE    racnode1
ora….N1.lsnr ora….er.type ONLINE    ONLINE    racnode2
ora….N2.lsnr ora….er.type ONLINE    ONLINE    racnode1
ora….N3.lsnr ora….er.type ONLINE    ONLINE    racnode1
ora.asm        ora.asm.type   ONLINE    ONLINE    racnode1
ora.cvu        ora.cvu.type   ONLINE    ONLINE    racnode1
ora….network ora….rk.type ONLINE    ONLINE    racnode1
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    racnode1
ora.ons        ora.ons.type   ONLINE    ONLINE    racnode1
ora….SM1.asm application    ONLINE    ONLINE    racnode1
ora….E1.lsnr application    ONLINE    ONLINE    racnode1
ora….de1.ons application    ONLINE    ONLINE    racnode1
ora….de1.vip ora….t1.type ONLINE    ONLINE    racnode1
ora….SM2.asm application    ONLINE    ONLINE    racnode2
ora….E2.lsnr application    ONLINE    ONLINE    racnode2
ora….de2.ons application    ONLINE    ONLINE    racnode2
ora….de2.vip ora….t1.type ONLINE    ONLINE    racnode2
ora….ry.acfs ora….fs.type ONLINE    ONLINE    racnode1
ora.scan1.vip  ora….ip.type ONLINE    ONLINE    racnode2
ora.scan2.vip  ora….ip.type ONLINE    ONLINE    racnode1
ora.scan3.vip  ora….ip.type ONLINE    ONLINE    racnode1

Create ACFS file system (Oracle home shared by both nodes):

$ ./asmca &
[1]     10311

[root@racnode1 sysconfig]# /u01/app/grid/cfgtoollogs/asmca/scripts/acfs_script.sh
ACFS file system is running on racnode1,racnode2

[root@racnode1 sysconfig]# df -h
/dev/asm/orahomevg-12
                      9.0G   83M  9.0G   1% /u01/app/oracle
[root@racnode2 disks]# df -h

/dev/asm/orahomevg-12
                      9.0G   83M  9.0G   1% /u01/app/oracle
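
Because asmca registers the new volume in the ACFS mount registry, it is mounted automatically when the stack starts. If you ever need to mount it by hand, a sketch assuming the device and mount point shown above:

[root@racnode1 ~]# mount -t acfs /dev/asm/orahomevg-12 /u01/app/oracle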

Database Software Installation:

$ ./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 12358 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 3836 MB    Passed

Database Creation:

$ export ORACLE_HOME=/u01/app/oracle/product/11.2.0.2/db_1
$ ./dbca &
[1]     25448

racnode1.ukatru.com:/home/oracle>export ORACLE_SID=oradv11
racnode1.ukatru.com:/home/oracle>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 13 19:23:45 2011

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 – 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL>
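
To confirm that both instances are running, srvctl can be queried from either node; a quick check, assuming the database name is oradv11 (adjust to the name chosen in dbca):

$ srvctl status database -d oradv11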

You now have a fully functional two-node 11g R2 RAC database.

Categories: oracle-install

Installation of Oracle RAC 10g R2 (10.2.0.4) with OCFS2 as cluster file system.

February 13, 2011

This article is intended as a brief guide to installing Oracle Database 10g (10.2.0.4) Real Application Clusters (RAC) on RedHat Enterprise Linux X86_64.

Environment:

Each node requires at least two NICs: one for the public IP and one for the private interconnect.

Node1 : racnode1.ukatru.com
Public IP Address : 192.168.2.52(racnode1.ukatru.com)
Private IP Address : 192.168.1.52(racnode1-priv.ukatru.com)
Virtual IP Address : 192.168.2.54(racnode1-vip.ukatru.com)

Node2:racnode2.ukatru.com

Public IP Address : 192.168.2.53(racnode2.ukatru.com)
Private IP Address : 192.168.1.53(racnode2-priv.ukatru.com)
Virtual IP Address : 192.168.2.55(racnode2-vip.ukatru.com)

We need at least 15 GB of disk on each node to install the Oracle Cluster software, ASM home, and Database home.

Cluster file system: OCFS2, used to store the OCR and voting disks for CRS.

Add the following text to .profile on both nodes for the oracle user:

export VISUAL=vi
export EDITOR=/usr/bin/vi
ENV=$HOME/.kshrc
export ENV
umask 022
stty erase ^?
export HOST=`hostname`
export PS1='$HOST:$PWD>'
export PS2="$HOST:`pwd`>>"
export PS3="$HOST:`pwd`=="
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.4/db_1
export ASM_HOME=/u01/app/oracle/product/10.2.0/asm
export CRS_HOME=/u01/app/root/product/crs
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
unalias ls

Execute the following commands on both nodes to create directories on the /u01 file system.

mkdir -p /u01/app/oracle
mkdir -p /u01/app/root

Create RSA keys on both nodes and set up passwordless SSH authentication between the two RAC nodes for the oracle user.

racnode1.ukatru.com:/home/oracle>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory ‘/home/oracle/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
b6:12:04:58:c2:43:95:56:71:cc:60:73:8c:3d:08:91 oracle@racnode1.ukatru.com

racnode2.ukatru.com:/home/oracle>ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/oracle/.ssh/id_rsa):
Created directory ‘/home/oracle/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/oracle/.ssh/id_rsa.
Your public key has been saved in /home/oracle/.ssh/id_rsa.pub.
The key fingerprint is:
87:96:89:8e:82:c2:8f:6c:ae:d0:ab:13:56:55:9d:e2 oracle@racnode2.ukatru.com

racnode1.ukatru.com:/home/oracle/.ssh>cat id_rsa.pub > authorized_keys
racnode1.ukatru.com:/home/oracle/.ssh>chmod 600 authorized_keys

racnode2.ukatru.com:/home/oracle/.ssh>cat id_rsa.pub > authorized_keys
racnode2.ukatru.com:/home/oracle/.ssh>chmod 600 authorized_keys
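
Note that the steps above only add each node's own key to its own authorized_keys. For the cross-node logins tested below to work without a password, each public key must also be appended to the other node's authorized_keys; a minimal sketch, run as oracle on racnode1 while password authentication is still enabled:

ssh racnode2 'cat ~/.ssh/id_rsa.pub' >> ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub | ssh racnode2 'cat >> ~/.ssh/authorized_keys'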

racnode1.ukatru.com:/home/oracle/.ssh>ssh racnode2
The authenticity of host ‘racnode2 (192.168.2.53)’ can’t be established.
RSA key fingerprint is fc:74:38:f0:d8:f1:97:62:e8:6b:05:69:3d:2c:9b:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘racnode2,192.168.2.53’ (RSA) to the list of known hosts.
Last login: Sat Feb 12 18:32:03 2011 from 192.168.2.128

racnode2.ukatru.com:/home/oracle>ssh racnode1
The authenticity of host ‘racnode1 (192.168.2.52)’ can’t be established.
RSA key fingerprint is fc:74:38:f0:d8:f1:97:62:e8:6b:05:69:3d:2c:9b:d8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘racnode1,192.168.2.52’ (RSA) to the list of known hosts.
Last login: Sat Feb 12 18:45:20 2011 from 192.168.2.128

############################

Set Kernel Parameters

Add the following lines to the /etc/sysctl.conf file:

fs.file-max=327679
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4294967295
net.core.rmem_default = 262144
net.core.rmem_max=2097152
net.core.wmem_default = 262144
net.core.wmem_max=262144
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range=1024 65000

[root@oral01 ~]# sysctl -p

Add the following lines to the /etc/security/limits.conf file:
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    3500000
oracle   hard   memlock    3500000

Disable Secure Linux (SELinux) by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows:
SELINUX=disabled
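
The change in /etc/selinux/config only takes effect after a reboot; to stop enforcement immediately in the running session as well (a convenience step, not in the original procedure):

[root@racnode1 ~]# setenforce 0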

We are using Openfiler as our NAS/SAN appliance for shared disks.

Configure the iSCSI (Initiator) Service:
rpm -Uvh iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm
warning: iscsi-initiator-utils-6.2.0.871-0.10.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing…                ########################################### [100%]
   1:iscsi-initiator-utils  ########################################### [100%]

[root@racnode1 Server]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode1 Server]# chkconfig iscsid on
[root@racnode1 Server]# chkconfig iscsi on

[root@racnode2 Server]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@racnode2 Server]# chkconfig iscsid on
[root@racnode2 Server]# chkconfig iscsi on

[root@racnode1 Server]# iscsiadm -m discovery -t sendtargets -p sanl001
192.168.2.11:3260,1 iqn.2006-01.com.openfiler:crs_racnode
[root@racnode2 Server]# iscsiadm -m discovery -t sendtargets -p sanl001
192.168.2.11:3260,1 iqn.2006-01.com.openfiler:crs_racnode

Manually Login to iSCSI Target(s)
[root@racnode2 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs_racnode -p 192.168.2.11 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]: successful

[root@racnode1 Server]# iscsiadm -m node -T iqn.2006-01.com.openfiler:crs_racnode -p 192.168.2.11 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:crs_racnode, portal: 192.168.2.11,3260]: successful
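
So that the sessions are re-established automatically after a reboot, the node startup mode can be set to automatic on both nodes; the same technique is shown in the Openfiler article below:

iscsiadm -m node -T iqn.2006-01.com.openfiler:crs_racnode -p 192.168.2.11 --op update -n node.startup -v automatic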

[root@racnode2 Server]# fdisk -l

Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes

Disk /dev/sdc doesn’t contain a valid partition table

[root@racnode1 Server]# fdisk -l

Disk /dev/sdc: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders
Units = cylinders of 2074 * 512 = 1061888 bytes

Disk /dev/sdc doesn’t contain a valid partition table

Installing and configuring OCFS2. The following packages are required:

    * ocfs2-tools
    * ocfs2
    * ocfs2console (optional)

Node1 and node2 now have a shared iSCSI disk configured, which you can verify by issuing "fdisk -l". Let's configure the OCFS2 part.

Create the file /etc/ocfs2/cluster.conf:
mkdir -p /etc/ocfs2
vi /etc/ocfs2/cluster.conf
Add the following text to the cluster.conf file:
cluster:
        node_count = 2
        name = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.52
        number = 1
        name = racnode1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 192.168.2.53
        number = 2
        name = racnode2
        cluster = ocfs2
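
cluster.conf must be identical on both nodes; one way to propagate it, assuming root SSH access between the nodes:

ssh racnode2 mkdir -p /etc/ocfs2
scp -p /etc/ocfs2/cluster.conf racnode2:/etc/ocfs2/cluster.conf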

If you execute /etc/init.d/o2cb configure you’ll get:
[root@racnode2 ~]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot.  The current values will be shown in brackets ('[]').  Hitting
<ENTER> without typing an answer will keep that current value.  Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter “none” to clear) [ocfs2]:
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem “configfs”: OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem “ocfs2_dlmfs”: OK
Creating directory ‘/dlm’: OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK

Execute the same on the other node to configure OCFS2 there too, then check the status:

[root@racnode2 ~]# /etc/init.d/o2cb status
Driver for “configfs”: Loaded
Filesystem “configfs”: Mounted
Driver for “ocfs2_dlmfs”: Loaded
Filesystem “ocfs2_dlmfs”: Mounted
Checking O2CB cluster ocfs2: Online
Heartbeat dead threshold = 31
  Network idle timeout: 30000
  Network keepalive delay: 2000
  Network reconnect delay: 2000
Checking O2CB heartbeat: Not active

Create an OCFS2 file system on /dev/sdc1.
The -N option sets the number of node slots, i.e. the maximum number of nodes that can mount the volume at the same time.

[root@racnode2 openfiler:crs_racnode]# mkfs.ocfs2 -N 6 /dev/sdc1
mkfs.ocfs2 1.4.2
Cluster stack: classic o2cb
Filesystem label=
Block size=2048 (bits=11)
Cluster size=4096 (bits=12)
Volume size=1073537024 (262094 clusters) (524188 blocks)
17 cluster groups (tail covers 8142 clusters, rest cover 15872 clusters)
Journal size=33554432
Initial number of node slots: 6
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)

Execute mkdir /crs on both nodes.

Add the following entry to /etc/fstab on both nodes so that the file system is mounted after reboots.

/dev/sdc1     /crs   ocfs2   _netdev,datavolume     0 0
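
Since /etc/fstab now has the entry, the file system can be mounted immediately on both nodes without a reboot:

[root@racnode1 ~]# mount /crs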

[root@racnode2 openfiler:crs_racnode]# mount | grep ocfs2
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdc1 on /crs type ocfs2 (rw,_netdev,datavolume,heartbeat=local)

[root@racnode1 ~]#  mount | grep ocfs2
ocfs2_dlmfs on /dlm type ocfs2_dlmfs (rw)
/dev/sdc1 on /crs type ocfs2 (rw,_netdev,datavolume,heartbeat=local)

chown -R oracle:oinstall /crs

The OCFS2 file system is now configured and mounted on both nodes.

Configure oracleasm :
[root@racnode1 tmp]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Create ASM disks (you need to do this on one node only).

[root@racnode1 ~]# oracleasm createdisk ASM1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@racnode1 ~]# oracleasm createdisk ASM2 /dev/sde1
Writing disk header: done
Instantiating disk: done

Log on to racnode2 and execute the following commands to scan for ASM disks.

[root@racnode2 ~]# /etc/init.d/oracleasm scandisks
Scanning the system for Oracle ASMLib disks:               [  OK  ]
[root@racnode2 ~]# /etc/init.d/oracleasm listdisks
ASM1
ASM2

CRS Installation:

racnode1.ukatru.com:/u01/clusterware/cluvfy>./runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -r 10gR2 -verbose

You can ignore the following error from the above command:

ERROR:
Could not find a suitable set of interfaces for VIPs.

Result: Node connectivity check failed.

Log in as the oracle user. If you are using X emulation, set the DISPLAY environment variable:

DISPLAY=:0.0; export DISPLAY

Start the Oracle Universal Installer (OUI) by issuing the following command in the ./clusterware directory:

./runInstaller
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Passed


All installer requirements met.

Preparing to launch Oracle Universal Installer from 
/tmp/OraInstall2011-02-12_08-49-59PM. Please wait ...
racnode1.ukatru.com:/u01>Oracle Universal Installer, 
Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.

Execute root.sh on both nodes, one after the other. Here is the output from both nodes:
[root@racnode1 ~]# /u01/app/root/product/10.2.0/crs/root.sh
WARNING: directory ‘/u01/app/root/product/10.2.0’ is not owned by root
WARNING: directory ‘/u01/app/root/product’ is not owned by root
WARNING: directory ‘/u01/app/root’ is not owned by root
WARNING: directory ‘/u01/app’ is not owned by root
WARNING: directory ‘/u01’ is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory ‘/u01/app/root/product/10.2.0’ is not owned by root
WARNING: directory ‘/u01/app/root/product’ is not owned by root
WARNING: directory ‘/u01/app/root’ is not owned by root
WARNING: directory ‘/u01/app’ is not owned by root
WARNING: directory ‘/u01’ is not owned by root
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
Now formatting voting device: /crs/voting_disk1
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
CSS is inactive on these nodes.
        racnode2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.

***************************************************************
[root@racnode2 ~]# /u01/app/root/product/10.2.0/crs/root.sh
WARNING: directory ‘/u01/app/root/product/10.2.0’ is not owned by root
WARNING: directory ‘/u01/app/root/product’ is not owned by root
WARNING: directory ‘/u01/app/root’ is not owned by root
WARNING: directory ‘/u01/app’ is not owned by root
WARNING: directory ‘/u01’ is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.

Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory ‘/u01/app/root/product/10.2.0’ is not owned by root
WARNING: directory ‘/u01/app/root/product’ is not owned by root
WARNING: directory ‘/u01/app/root’ is not owned by root
WARNING: directory ‘/u01/app’ is not owned by root
WARNING: directory ‘/u01’ is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
node 2: racnode2 racnode2-priv racnode2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        racnode1
        racnode2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps

Apply 10.2.0.4 patch set to the newly installed CRS home:

racnode1.ukatru.com:/u01/Disk1>./runInstaller
Starting Oracle Universal Installer…

Checking installer requirements…

Checking operating system version: must be redhat-3, SuSE-9, SuSE-10, redhat-4, redhat-5, UnitedLinux-1.0, asianux-1, asianux-2 or asianux-3
                                      Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-12_09-16-49PM. Please wait …racnode1.ukatru.com:/u01/Disk1>Oracle Universal Installer, Version 10.2.0.4.0 Production
Copyright (C) 1999, 2008, Oracle. All rights reserved

The installer has detected that your Cluster Ready Services (CRS) installation is distributed across the following nodes:

    racnode1
    racnode2

Because the software consists of local identical copies distributed across each of the nodes in the cluster, it is possible to patch your CRS installation in a rolling manner, one node at a time.

To complete the installation of this patchset, you must perform the following tasks on each node:

    1.    Log in as the root user.
    2.    As the root user, perform the following tasks:

        a.    Shutdown the CRS daemons by issuing the following command:
                /u01/app/root/product/10.2.0/crs/bin/crsctl stop crs
        b.    Run the shell script located at:
                /u01/app/root/product/10.2.0/crs/install/root102.sh
            This script will automatically start the CRS daemons on the
            patched node upon completion.

    3.    After completing this procedure, proceed to the next node and repeat.

[root@racnode1 ~]# /u01/app/root/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/root/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory ‘/u01/app/root/product/10.2.0’ is not owned by root
WARNING: directory ‘/u01/app/root/product’ is not owned by root
WARNING: directory ‘/u01/app/root’ is not owned by root
WARNING: directory ‘/u01/app’ is not owned by root
WARNING: directory ‘/u01’ is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 1: racnode1 racnode1-priv racnode1
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
clscfg -upgrade completed successfully
************************************************
[root@racnode2 ~]# /u01/app/root/product/10.2.0/crs/bin/crsctl stop crs
Stopping resources.
Successfully stopped CRS resources
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
[root@racnode2 ~]# /u01/app/root/product/10.2.0/crs/install/root102.sh
Creating pre-patch directory for saving pre-patch clusterware files
Completed patching clusterware files to /u01/app/root/product/10.2.0/crs
Relinking some shared libraries.
Relinking of patched files is complete.
WARNING: directory ‘/u01/app/root/product/10.2.0’ is not owned by root
WARNING: directory ‘/u01/app/root/product’ is not owned by root
WARNING: directory ‘/u01/app/root’ is not owned by root
WARNING: directory ‘/u01/app’ is not owned by root
WARNING: directory ‘/u01’ is not owned by root
Preparing to recopy patched init and RC scripts.
Recopying init and RC scripts.
Startup will be queued to init within 30 seconds.
Starting up the CRS daemons.
Waiting for the patched CRS daemons to start.
  This may take a while on some systems.
.
10204 patch successfully applied.
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node :
node 2: racnode2 racnode2-priv racnode2
Creating OCR keys for user ‘root’, privgrp ‘root’..
Operation successful.
clscfg -upgrade completed successfully
*************************************************
Log in to racnode1 as root and run the vipca command.

[root@racnode1 bin]# pwd
/u01/app/root/product/10.2.0/crs/bin
[root@racnode1 bin]# ./vipca &
[1] 17871

racnode1.ukatru.com:/u01>cd /u01/app/root/product/10.2.0/crs/bin
racnode1.ukatru.com:/u01/app/root/product/10.2.0/crs/bin>./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora….de1.gsd application    ONLINE    ONLINE    racnode1
ora….de1.ons application    ONLINE    ONLINE    racnode1
ora….de1.vip application    ONLINE    ONLINE    racnode1
ora….de2.gsd application    ONLINE    ONLINE    racnode2
ora….de2.ons application    ONLINE    ONLINE    racnode2
ora….de2.vip application    ONLINE    ONLINE    racnode2

ASM Home Installation:

Start the Oracle Universal Installer (OUI) by issuing the following command in the database software directory:
racnode1.ukatru.com:/u01/database>./runInstaller
Starting Oracle Universal Installer…

Checking installer requirements…

Checking operating system version: must be redhat-3, SuSE-9, redhat-4,
UnitedLinux-1.0, asianux-1 or asianux-2
                                      Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-12_09-40-57PM.
Please wait …racnode1.ukatru.com:/u01/database>Oracle Universal Installer,
Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved

[root@racnode1 bin]# /u01/app/oracle/product/10.2.0/asm/root.sh
Running Oracle10 root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/asm

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

[root@racnode2 ~]# /u01/app/oracle/product/10.2.0/asm/root.sh
Running Oracle10 root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/asm

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

********************************************

Apply 10.2.0.4 patch to the ASM home:

Create ASM Instance:

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0/asm>export ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>./dbca &
[1]     8643

[root@racnode1 bin]# ./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora….SM1.asm application    ONLINE    ONLINE    racnode1
ora….E1.lsnr application    ONLINE    ONLINE    racnode1
ora….de1.gsd application    ONLINE    ONLINE    racnode1
ora….de1.ons application    ONLINE    ONLINE    racnode1
ora….de1.vip application    ONLINE    ONLINE    racnode1
ora….SM2.asm application    ONLINE    ONLINE    racnode2
ora….E2.lsnr application    ONLINE    ONLINE    racnode2
ora….de2.gsd application    ONLINE    ONLINE    racnode2
ora….de2.ons application    ONLINE    ONLINE    racnode2
ora….de2.vip application    ONLINE    ONLINE    racnode2

DB Home Installation:

racnode1.ukatru.com:/u01/database>./runInstaller

Apply 10.2.0.4 Patch to the newly installed Database home:

racnode1.ukatru.com:/u01/Disk1>./runInstaller
Starting Oracle Universal Installer…

Checking installer requirements…

Database Creation:

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0.4/db_1>export ORACLE_HOME=/u01/app/oracle/product/10.2.0.4/db_1
racnode1.ukatru.com:/u01/app/oracle/product/10.2.0.4/db_1>export ORACLE_SID=oradv1

racnode1.ukatru.com:/u01/app/oracle/product/10.2.0.4/db_1/bin>./dbca &
[1]     32718

racnode1.ukatru.com:/u01/app/root/product/10.2.0/crs/bin>./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.oradv1.db  application    ONLINE    ONLINE    racnode2
ora….11.inst application    ONLINE    ONLINE    racnode1
ora….12.inst application    ONLINE    ONLINE    racnode2
ora….SM1.asm application    ONLINE    ONLINE    racnode1
ora….E1.lsnr application    ONLINE    ONLINE    racnode1
ora….de1.gsd application    ONLINE    ONLINE    racnode1
ora….de1.ons application    ONLINE    ONLINE    racnode1
ora….de1.vip application    ONLINE    ONLINE    racnode1
ora….SM2.asm application    ONLINE    ONLINE    racnode2
ora….E2.lsnr application    ONLINE    ONLINE    racnode2
ora….de2.gsd application    ONLINE    ONLINE    racnode2
ora….de2.ons application    ONLINE    ONLINE    racnode2
ora….de2.vip application    ONLINE    ONLINE    racnode2

racnode1.ukatru.com:/u01/app/root/product/10.2.0/crs/bin>./cemutlo -n
crsracnode

Categories: oracle-install

Openfiler Installation (open-source storage appliance software)

February 8, 2011

In this article I am going to describe the step-by-step installation of the Openfiler software, which we use to configure shared storage for future Oracle RAC installations.

Openfiler Installation and configuration:

Download openfiler-2.3-x86_64-disc1.iso from the http://www.openfiler.com/ website.

You can skip the media test and start the installation by selecting Skip and pressing Enter.

After the installation completes, the final screen shows the URL through which you can access the Openfiler web interface (by default https://<server-ip>:446).

Default user: openfiler
Default password: password

Change the password according to your environment after initial log in.

First you need to set up a network access point; this machine or network will be used as the client.

If you are setting up an iSCSI device for a single host, you need to give 255.255.255.255 as the netmask.
If you are setting up an iSCSI device for a whole network, you can give 255.255.255.0 as the netmask.

Open the web interface and click the System tab:

Client system: oral01
IP address: 192.168.2.51
Creating Volumes:

Openfiler uses the LVM concept, like Linux: we need to create a physical volume and then a logical volume.

Step 1) Click on the Volumes tab and select Block Devices on the right-hand side of the Volumes section.

In Linux, before assigning a disk to a volume group, you need to create at least one partition; in Openfiler we follow the same approach.

Click on the block device you want to use and then select the Create button.

Now assign this physical volume to a volume group.

Create a file system: select Add Volume on the right side and fill in the fields shown below.

Volume Name (*no spaces*. Valid characters [a-z,A-Z,0-9]): = oradisk1
Volume Description:Test volume to setup shared disk for oracle rac installation
Required Space (MB): 992
Filesystem / Volume type:iSCSI

Now make sure the iSCSI target server is running by clicking on the Services tab. In this case it is disabled, so click Enable.

Now click on the Volumes tab again and click on iSCSI Targets on the right side of the Volumes section.

Click on the LUN Mapping tab and map the previously created volume to this target.

Click on Network ACL and allow access to this LUN from the previously configured network.

Client (iSCSI initiator) configuration:
Here we are using Red Hat Enterprise Linux 5 as the client. In order to use the Openfiler target as a disk, the client has to be set up as an iSCSI initiator. For that we need to check that the iSCSI initiator package is installed.

Log in to the client system as root.

[root@oral01 ~]# rpm -qa | grep -i iscsi
iscsi-initiator-utils-6.2.0.871-0.10.el5
[root@oral01 ~]# service iscsid start
Turning off network shutdown. Starting iSCSI daemon:       [  OK  ]
                                                           [  OK  ]
[root@oral01 ~]# chkconfig iscsid on
[root@oral01 ~]# chkconfig iscsi on
[root@oral01 ~]# chkconfig --list | grep iscsi
iscsi           0:off   1:off   2:on    3:on    4:on    5:on    6:off
iscsid          0:off   1:off   2:on    3:on    4:on    5:on    6:off

Searching for the iSCSI target:

[root@oral01 ~]# iscsiadm -m discovery -t sendtargets -p 192.168.2.55
192.168.2.55:3260,1 iqn.2006-01.com.openfiler:test

You will get the scanned result as the line above.
Manually log in to the iSCSI target(s):
iscsiadm -m node -T iqn.2006-01.com.openfiler:test -p 192.168.2.55 --login

[root@oral01 ~]# iscsiadm -m node -T iqn.2006-01.com.openfiler:test -p 192.168.2.55 --login
Logging in to [iface: default, target: iqn.2006-01.com.openfiler:test, portal: 192.168.2.55,3260]
Login to [iface: default, target: iqn.2006-01.com.openfiler:test, portal: 192.168.2.55,3260]: successful

Configure Automatic Login

iscsiadm -m node -T iqn.2006-01.com.openfiler:test -p 192.168.2.55 --op update -n node.startup -v automatic

[root@oral01 ~]# fdisk -l
Disk /dev/sdd: 1040 MB, 1040187392 bytes
32 heads, 62 sectors/track, 1024 cylinders
Units = cylinders of 1984 * 512 = 1015808 bytes

Disk /dev/sdd doesn’t contain a valid partition table

Now we’ll partition the iscsi device:

[root@oral01 ~]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1

First cylinder (1-1024, default 1): Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1024, default 1024):
Using default value 1024

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks


Create an ext3 file system on this device and mount it at /crs.
[root@oral01 ~]# mkfs.ext3 -m 0 /dev/sdd1
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
126976 inodes, 253944 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=260046848
8 block groups
32768 blocks per group, 32768 fragments per group
15872 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@oral01 ~]# mkdir /crs
[root@oral01 ~]# mount /dev/sdd1 /crs
[root@oral01 ~]# df -h | grep crs
/dev/sdd1             977M   18M  960M   2% /crs
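
To make this mount persistent across reboots, add it to /etc/fstab; the _netdev option is needed so mounting waits until the network (and the iSCSI session) is up:

/dev/sdd1   /crs   ext3   _netdev   0 0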

You can check the status of client connections in the Openfiler web interface.


Congratulations:

Your own SAN server is now up and running.

Categories: oracle-install

Installation of Oracle 11g R2 (11.2.0.1) on RedHat EL 5

February 7, 2011

This article describes step-by-step installation of Oracle 11g R2 database software on RedHat Enterprise Linux 5.

Step 1: Install Oracle Grid Infrastructure 11g R2 for a standalone server in a Linux environment.

Verify System Requirements:

grep MemTotal /proc/meminfo    (minimum required RAM is 1.5 GB for Oracle Grid Infrastructure for a Cluster)
grep SwapTotal /proc/meminfo   (minimum required swap space is 1.5 GB)

Pre-Installation Tasks:
Log in as root and create the grid user account, which belongs to the dba group.

[root@oral01 ~]# useradd -m -s /bin/ksh -g dba -u 1001 grid
mkdir -p /u01/app/11.2.0/grid

[root@oral01 results]# mkdir -p /u01/app/grid
[root@oral01 results]# chown -R grid:dba /u01/app/grid
[root@oral01 results]# mkdir -p /u01/app/11.2.0/grid
[root@oral01 results]# chown -R grid:dba /u01/app/11.2.0/grid

Add the following lines to the /etc/security/limits.conf file:
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    3500000
oracle   hard   memlock    3500000

ASMLib 2.0 is delivered as a set of three Linux packages:
■ oracleasmlib-2.0 – the Oracle ASM libraries
■ oracleasm-support-2.0 – utilities needed to administer ASMLib
■ oracleasm – a kernel module for the Oracle ASM library

Configure oracleasm :
[root@oral01 tmp]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@oral01 tmp]# oracleasm createdisk ASM1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@oral01 tmp]# oracleasm listdisks
ASM1

Linux x86-64 Oracle Grid Infrastructure and Oracle RAC Package Requirements :

########from oracle documentation###################
The following packages (or later versions) must be installed:
Note: Starting with Oracle Grid Infrastructure 11g Release 2
(11.2), all the 32-bit packages listed in the following table, except
for gcc-32bit-4.3, are no longer required for installation. Only the
64-bit packages are required. However, for Oracle 11g Release 2
(11.2.0.1), both the 32-bit and 64-bit packages listed in the
following table are required.

binutils-2.17.50.0.6
compat-libstdc++-33-3.2.3
compat-libstdc++-33-3.2.3 (32 bit)
elfutils-libelf-0.125
elfutils-libelf-devel-0.125
gcc-4.1.2
gcc-c++-4.1.2
glibc-2.5-24
glibc-2.5-24 (32 bit)
glibc-common-2.5
glibc-devel-2.5
glibc-devel-2.5 (32 bit)
glibc-headers-2.5
ksh-20060214
libaio-0.3.106
libaio-0.3.106 (32 bit)
libaio-devel-0.3.106
libaio-devel-0.3.106 (32 bit)
libgcc-4.1.2
libgcc-4.1.2 (32 bit)
libstdc++-4.1.2
libstdc++-4.1.2 (32 bit)
libstdc++-devel 4.1.2
make-3.81
numactl-devel-0.9.8.x86_64
sysstat-7.0.2
unixODBC-2.2.11
unixODBC-2.2.11 (32 bit)
unixODBC-devel-2.2.11
unixODBC-devel-2.2.11 (32 bit)

rpm -q binutils compat-libstdc++ elfutils gcc glibc libaio ksh libgcc libstdc++ \
make sysstat unixodbc

Add the following kernel parameters to the /etc/sysctl.conf file:

kernel.shmall = 1073741824
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
fs.aio-max-nr = 3145728
vm.swappiness = 5
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 1048576
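
Load the new values into the running kernel without a reboot, as in the other articles:

[root@oral01 ~]# sysctl -p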

Log in as the grid user and add the following lines to .profile:

#########################################
export VISUAL=vi
export EDITOR=/usr/bin/vi
ENV=$HOME/.kshrc
export ENV
umask 022
stty erase ^?
export HOST=`hostname`
export PS1='$HOST:$PWD>'
export PS2="$HOST:`pwd`>>"
export PS3="$HOST:`pwd`=="
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
export GRID_HOME=/u01/app/11.2.0/grid
export PATH=$PATH:$HOME/bin:$GRID_HOME/bin
unalias ls

#########################################


Installation:

oral01.ukatru.com:/home/grid>export DISPLAY=192.168.2.152:0.0
oral01.ukatru.com:/u01/app/grid/grid>./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 13182 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2495 MB    Passed


[root@oral01 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete.

[root@oral01 ~]# /u01/app/11.2.0/grid/root.sh
Running Oracle 11g root.sh script…

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin …
   Copying oraenv to /usr/local/bin …
   Copying coraenv to /usr/local/bin …

Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2011-02-06 17:03:09: Checking for super user privileges
2011-02-06 17:03:09: User has super user privileges
2011-02-06 17:03:09: Parsing the host name
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user ‘grid’, privgrp ‘dba’..
Operation successful.
CRS-4664: Node oral01 successfully pinned.
Adding daemon to inittab
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting

oral01     2011/02/06 17:03:42     /u01/app/11.2.0/grid/cdata/oral01/backup_20110206_170342.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server
Updating inventory properties for clusterware
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB.   Actual 2495 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
‘UpdateNodeList’ was successful.

oral01.ukatru.com:/u01/app/11.2.0/grid/bin>./sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Feb 6 19:14:27 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
With the Automatic Storage Management option

SQL>

Database Software Installation :
oral01.ukatru.com:/u01/app/database>./runInstaller
Starting Oracle Universal Installer…

Checking Temp space: must be greater than 120 MB.   Actual 13117 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2346 MB    Passed

Database Creation:

oral01.ukatru.com:/home/oracle>export ORACLE_HOME=/u01/app/oracle/product/11.2.0.1/db_1
oral01.ukatru.com:/home/oracle>cd /u01/app/oracle/product/11.2.0.1/db_1
oral01.ukatru.com:/u01/app/oracle/product/11.2.0.1/db_1>cd bin
oral01.ukatru.com:/u01/app/oracle/product/11.2.0.1/db_1/bin>export ORACLE_SID=oradv1
oral01.ukatru.com:/u01/app/oracle/product/11.2.0.1/db_1/bin>sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sun Feb 6 21:21:43 2011

Copyright (c) 1982, 2009, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 – 64bit Production
With the Partitioning, Automatic Storage Management, Oracle Label Security, OLAP,
Data Mining, Oracle Database Vault and Real Application Testing options

SQL>

#####
Oracle ASM Configuration Assistant:
Oracle ASM Configuration Assistant (ASMCA) supports installing and configuring ASM instances, disk groups, volumes, and Oracle Automatic Storage Management Cluster File System (Oracle ACFS). In addition, you can use the ASMCA command-line interface as a non-GUI utility.

Categories: oracle-install

Installation of Oracle 10g R2 (10.2.0.4) on RedHat EL 5

February 6, 2011

This article describes step-by-step installation of Oracle 10g R2 database software on RedHat Enterprise Linux 5.

Pre-Installation Tasks:
Step 1) Create the oracle user account and dba group.

Log in as root and create the oracle user account, which belongs to the dba group.
su -
[root@oral01 ~]# groupadd -g 1000 dba
[root@oral01 ~]# echo $?
0
[root@oral01 ~]# useradd -m -s /bin/ksh -g dba -u 1000 oracle
[root@oral01 ~]# echo $?
0
[root@oral01 ~]# passwd oracle
Changing password for user oracle.
New UNIX password:
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@oral01 ~]# id oracle
uid=1000(oracle) gid=1000(dba) groups=1000(dba)

Log in as the oracle user and add the following lines to .profile:

#########################################
export VISUAL=vi
export EDITOR=/usr/bin/vi
ENV=$HOME/.kshrc
export ENV
umask 022
stty erase ^?
export HOST=`hostname`
export PS1='$HOST:$PWD>'
export PS2="$HOST:`pwd`>>"
export PS3="$HOST:`pwd`=="
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0.4/db_1
export PATH=$PATH:$HOME/bin:$ORACLE_HOME/bin
unalias ls

############################

Set Kernel Parameters

Add the following lines to the /etc/sysctl.conf file:

fs.file-max=327679
kernel.msgmni = 2878
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.sem = 250 32000 100 142
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4294967295
net.core.rmem_default = 262144
net.core.rmem_max=2097152
net.core.wmem_default = 262144
net.core.wmem_max=262144
fs.aio-max-nr = 3145728
net.ipv4.ip_local_port_range=1024 65000

[root@oral01 ~]# sysctl -p

Add the following lines to the /etc/security/limits.conf file:
oracle   soft   nofile    131072
oracle   hard   nofile    131072
oracle   soft   nproc    131072
oracle   hard   nproc    131072
oracle   soft   core    unlimited
oracle   hard   core    unlimited
oracle   soft   memlock    3500000
oracle   hard   memlock    3500000

Disable Secure Linux (SELinux) by editing the /etc/selinux/config file, making sure the SELINUX flag is set as follows:

SELINUX=disabled

mkdir -p /u01/app/oracle

Configure oracleasm :
[root@oral01 tmp]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

[root@oral01 tmp]# oracleasm createdisk ASM1 /dev/sdc1
Writing disk header: done
Instantiating disk: done
[root@oral01 tmp]# oracleasm listdisks
ASM1
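
Note that /dev/sdc1 must already exist as a partition (created with fdisk, for example) before createdisk will succeed. To confirm the label was written, query the disk (exact output varies by ASMLib version):

[root@oral01 tmp]# oracleasm querydisk ASM1
Disk "ASM1" is a valid ASM disk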

Installation:

Log in as the oracle user. If you are using X emulation, set the DISPLAY environment variable:

DISPLAY=:0.0; export DISPLAY
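
If the installer window fails to appear, verify that X connections work from this shell first, for example:

oral01.ukatru.com:/u01/database>xdpyinfo | head -1
name of display:    :0.0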

We are installing Oracle 10.2.0.1 on RHEL 5, so the installer will warn that the operating system is not certified; this warning can be safely ignored.

Start the Oracle Universal Installer (OUI) by issuing the following command in the database directory to create the ASM home:

./runInstaller -ignoreSysPrereqs
Starting Oracle Universal Installer...

Checking installer requirements...

Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Passed

All installer requirements met.

Preparing to launch Oracle Universal Installer from /tmp/OraInstall2011-02-05_11-19-20PM. Please wait ...
oral01.ukatru.com:/u01/database>Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved

 [root@oral01 oraInventory]# ./orainstRoot.sh
Changing permissions of /u01/app/oraInventory to 770.
Changing groupname of /u01/app/oraInventory to dba.
The execution of the script is complete

[root@oral01 asm]# /u01/app/oracle/product/10.2.0/asm/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0/asm

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed

Step 2) Install the Database home.

  /u01/app/oracle/product/10.2.0.4/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/10.2.0.4/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]:

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.

Step 3) Apply Oracle 10g R2 patch set 3 (10.2.0.4) to both the ASM and Database homes.

Step 4) Create and start the ASM instance.
oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm>export ORACLE_HOME=/u01/app/oracle/product/10.2.0/asm
oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm>export ORACLE_SID=+ASM
oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>./dbca &
[1]     25327

DBCA will prompt you to run localconfig as root from the ASM home bin directory, to configure Cluster Synchronization Services (CSS) before it can create the ASM instance:

[root@oral01 bin]# ./localconfig add
/etc/oracle does not exist. Creating it now.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized

Adding to inittab
Startup will be queued to init within 30 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        oral01
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)
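
You can re-check CSS at any time with crsctl from the ASM home (a quick sanity check; syntax as of 10.2):

oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>./crsctl check css
CSS appears healthy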

oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>ps -ef | grep asm
oracle   25626     1  0 00:06 ?        00:00:00 /u01/app/oracle/product/10.2.0/asm/bin/ocssd.bin
oracle   25887     1  0 00:08 ?        00:00:00 asm_pmon_+ASM
oracle   25889     1  0 00:08 ?        00:00:00 asm_psp0_+ASM
oracle   25891     1  0 00:08 ?        00:00:00 asm_mman_+ASM
oracle   25893     1  0 00:08 ?        00:00:00 asm_dbw0_+ASM
oracle   25895     1  0 00:08 ?        00:00:00 asm_lgwr_+ASM
oracle   25897     1  0 00:08 ?        00:00:00 asm_ckpt_+ASM
oracle   25899     1  0 00:08 ?        00:00:00 asm_smon_+ASM
oracle   25901     1  0 00:08 ?        00:00:00 asm_rbal_+ASM
oracle   25903     1  0 00:08 ?        00:00:00 asm_gmon_+ASM
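
With the instance up, you can connect and confirm the disk group state, for example:

oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>export ORACLE_SID=+ASM
oral01.ukatru.com:/u01/app/oracle/product/10.2.0/asm/bin>./sqlplus / as sysdba
SQL> select name, state, total_mb from v$asm_diskgroup;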

Step 5) Create the database.
export ORACLE_HOME=/u01/app/oracle/product/10.2.0.4/db_1
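
Then launch DBCA from the database home to create the database; the SID oradv1 set here matches the instance shown in the query below:

export ORACLE_SID=oradv1
$ORACLE_HOME/bin/dbca &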

 

SQL> select INSTANCE_NAME,HOST_NAME,VERSION from v$instance;

INSTANCE_NAME
----------------
HOST_NAME
----------------------------------------------------------------
VERSION
-----------------
oradv1
oral01.ukatru.com
10.2.0.4.0

Categories: oracle-install

Installation of RedHat Enterprise Linux-server-5.4-x86_64 on ESXi Host

February 6, 2011 Leave a comment

Creating a New Virtual Machine

Before You Begin

To install a RedHat Enterprise Linux 5.4 server on the virtual machine, make sure that you have the ISO image of the operating system accessible from the PC (DVD/CD). The ISO image is mapped as a virtual boot drive, so when you start the VM the installation initiates from the mapped ISO image.
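
It is also worth verifying the ISO's integrity before mapping it; a typical check (the filename here is illustrative):

sha1sum rhel-server-5.4-x86_64-dvd.iso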

Steps:

To create the new virtual machine, follow these steps:  
Step 1: After launching the VMware vSphere Client and connecting to your ESXi server, click on the File menu and select New Virtual Machine.

Step 2: The Create New Virtual Machine window appears.

Step 3: Select the Custom option to create the virtual machine with additional options and devices. Enter the host name for the virtual machine in the Name field.
VM names are user defined and can contain up to 80 characters, and they must be unique within each VMware vCenter Server VM folder.

Step 4: Select the datastore in which to store the VM files.

Step 5: Click Next. The Guest Operating System window appears.

Step 6: Select Linux in the guest operating system field and select Red Hat Enterprise Linux 5 (64-bit) from the Version drop-down menu, as shown in the figure below.

Step 7: Click Next. Select the number of virtual processors for the virtual machine.

Step 8: Click Next. Configure the virtual machine memory size. In my case I am allocating 1GB of memory.

Step 9: Click Next. Configure the virtual machine network connections.

Step 10: Click Next. Select which SCSI controller to use for this virtual machine.

Step 11: Click Next. The Create a Disk window appears, as shown in the figure below.

Step 12: Click Next. The Ready to Complete window appears. Review your new VM configuration.

Step 13: Select the Edit the virtual machine settings before completion radio button and then click Finish. The Virtual Machine Properties page appears for your newly created VM with the Hardware tab selected by default, as shown below.

Step 14: Click OK to complete the new VM configuration.

Step 15: Right-click on the virtual machine and click Power On.

Step 16: Go to the Console tab in the vSphere Client to start the guest installation.

Step 17: Press Enter to start the installation. When the media test screen appears, select Skip to avoid testing the CD media.

 

Conclusion
Congratulations, you now have an RHEL5 server up and running!


Categories: oracle-install

VMware ESXi4 Server Installation

February 6, 2011 Leave a comment

Install VMware ESXi

Server : Dell PowerEdge T110
Architecture : x86_64
Memory : 2 GB

Virtualization Software : VMware ESXi 4.1

Download VMware ESXi from http://www.vmware.com.

Start the Installation:

Now you are ready to boot the server off your ESXi CD, which should load this screen: 

Press Enter to start the ESXi installation; you will then see the following screens.

Now press Enter again to start the installation, and then press F11 to accept the license and continue.

The next screen will prompt you to choose which drive to install ESXi on; in this example I have only one drive, with 1GB of available space. Press Enter to continue.

Press F11 to start the installation.

When it is complete it will prompt you to remove the CD and reboot the system. Provided your boot order is correct (and if it isn't, you will need to check in the BIOS), the server should now boot into ESXi for the first time:

If you don't have a DHCP server running, customize the system by assigning a static IP address to the ESXi server.
Press F2 and it will ask for the root password. On first-time setup, press Enter without typing a password and you will see the screen below.
username : root
password : pass (you can change it according to your environment).
The first thing you should do is set the root password, so leave "Configure Password" selected and press Enter to change it, then enter a suitably complex password. Next you need to configure the management network, so select that option and enter a static IP address and your other network details.

You can change the IP configuration from the following screen.

Now open a web browser and enter the IP address you assigned to access the ESXi host.

Now you can download the vSphere Client from the link on that page (Download vSphere Client).
Please see the screenshot of the vSphere Client from my environment below.

Conclusion

Congratulations, you now have an ESXi host server up and running! As you can see, deployment is a quick and simple process, which should give you an idea of how easily you can deploy extra hosts when you need more capacity, or to replace a failed server. Since the ESXi hypervisor provides a standardised virtual hardware environment, it doesn't particularly matter what make of server you are deploying (provided it is compatible hardware, of course); you know it will be ready to host your VMs as soon as ESXi is running.

Categories: oracle-install