Archive for April, 2013

Recently, I was installing 11g Grid Infrastructure on Oracle Enterprise Linux 5 and, for some reason, had to remove that installation from the server and start a fresh one. This installation was for migrating an existing database to ASM. The following processes were active on the server after completing the grid infrastructure installation the first time; they all belong to ASM and Oracle Restart.

[root@NODE1 ~]# ps -ef|grep grid
grid      3890     1  0 06:27 ?        00:00:02 /u01/app/grid/product/11.2.0/grid/bin/ohasd.bin reboot
grid      4444     1  0 06:28 ?        00:00:05 /u01/app/grid/product/11.2.0/grid/bin/oraagent.bin
grid      4457     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      4469     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/evmd.bin
grid      4476     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/cssdagent
grid      4501     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/ocssd.bin
grid      4523  4469  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/evmlogger.bin -o /u01/app/grid/product/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/grid/product/11.2.0/grid/evm/log/evmlogger.log
grid      4586     1  0 06:29 ?        00:00:00 asm_pmon_+ASM
grid      4588     1  0 06:29 ?        00:00:00 asm_psp0_+ASM
grid      4590     1  0 06:29 ?        00:00:09 asm_vktm_+ASM
grid      4594     1  0 06:29 ?        00:00:00 asm_gen0_+ASM
grid      4596     1  0 06:29 ?        00:00:00 asm_diag_+ASM
grid      4598     1  0 06:29 ?        00:00:01 asm_dia0_+ASM
grid      4600     1  0 06:29 ?        00:00:00 asm_mman_+ASM
grid      4602     1  0 06:29 ?        00:00:00 asm_dbw0_+ASM
grid      4604     1  0 06:29 ?        00:00:00 asm_lgwr_+ASM
grid      4606     1  0 06:29 ?        00:00:00 asm_ckpt_+ASM
grid      4608     1  0 06:29 ?        00:00:00 asm_smon_+ASM
grid      4610     1  0 06:29 ?        00:00:00 asm_rbal_+ASM
grid      4612     1  0 06:29 ?        00:00:00 asm_gmon_+ASM
grid      4614     1  0 06:29 ?        00:00:00 asm_mmon_+ASM
grid      4616     1  0 06:29 ?        00:00:00 asm_mmnl_+ASM
grid      4618     1  0 06:29 ?        00:00:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
root      4851  4810  0 07:05 pts/2    00:00:00 grep grid

The database is already down on this server. Now I want to remove the above grid infrastructure installation. First of all, shut down the ASM instance that was created during the earlier installation. Remember that the DB has not been migrated to ASM yet.

SQL> conn sys as sysasm
Enter password:
Connected.
SQL> shutdown immediate;
ORA-15100: invalid or missing diskgroup name


ASM instance shutdown
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Automatic Storage Management option

After shutting down the ASM instance, only the below processes remain on the server.

[grid@NODE1 grid]$ ps -ef|grep grid
grid      3890     1  0 06:27 ?        00:00:03 /u01/app/grid/product/11.2.0/grid/bin/ohasd.bin reboot
grid      4444     1  0 06:28 ?        00:00:05 /u01/app/grid/product/11.2.0/grid/bin/oraagent.bin
grid      4457     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      4469     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/evmd.bin
grid      4476     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/cssdagent
grid      4501     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/ocssd.bin
grid      4523  4469  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/evmlogger.bin -o /u01/app/grid/product/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/grid/product/11.2.0/grid/evm/log/evmlogger.log
root      4854  4810  0 07:05 pts/2    00:00:00 su - grid
grid      4855  4854  0 07:05 pts/2    00:00:00 -bash
grid      4894  4855  0 07:06 pts/2    00:00:00 ps -ef
grid      4895  4855  0 07:06 pts/2    00:00:00 grep grid

To remove these processes gracefully, run the deinstall tool from the grid home as below.

[grid@NODE1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/

############ ORACLE DEINSTALL & DECONFIG TOOL START ############


######################### CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/grid/product/11.2.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/grid/product/11.2.0/grid
Checking for sufficient temp space availability on node(s) : 'NODE1'

## [END] Install check configuration ##

Traces log file: /u01/app/oraInventory/logs//crsdc.log

Network Configuration check config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_check2013-04-23_07-11-27-AM.log

Specify all Oracle Restart enabled listeners that are to be de-configured [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_check2013-04-23_07-11-54-AM.log

Specify the ASM Diagnostic Destination [ ]: 
Specify the diskstring [/dev/oracleasm/disks/*]: 
Specify the diskgroups that are managed by this ASM instance []: 


######################### CHECK OPERATION END #########################


####################### CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/grid/product/11.2.0/grid
The cluster node(s) on which the Oracle home deinstallation will be performed are:null
Oracle Home selected for deinstall is: /u01/app/grid/product/11.2.0/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-04-23_07-11-25-AM.out'
Any error messages from this session will be written to: '/u01/app/oraInventory/logs/deinstall_deconfig2013-04-23_07-11-25-AM.err'

######################## CLEAN OPERATION START ########################
ASM de-configuration trace file location: /u01/app/oraInventory/logs/asmcadc_clean2013-04-23_07-12-21-AM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /u01/app/oraInventory/logs/netdc_clean2013-04-23_07-12-25-AM.log

De-configuring Oracle Restart enabled listener(s): LISTENER

De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Listener stopped successfully.
    Unregistering listener: LISTENER
    Listener unregistered successfully.
    Deleting listener: LISTENER
    Listener deleted successfully.
Listener de-configured successfully.

De-configuring Listener configuration file...
Listener configuration file de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

Run the following command as the root user or the administrator on node "node1".

/tmp/deinstall2013-04-23_07-10-54AM/perl/bin/perl -I/tmp/deinstall2013-04-23_07-10-54AM/perl/lib -I/tmp/deinstall2013-04-23_07-10-54AM/crs/install /tmp/deinstall2013-04-23_07-10-54AM/crs/install/roothas.pl -force  -deconfig -paramfile "/tmp/deinstall2013-04-23_07-10-54AM/response/deinstall_Ora11g_gridinfrahome1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Run the above command as the "root" user in another terminal, then press Enter. Following is the output from running that command.

[root@NODE1 ~]# /tmp/deinstall2013-04-23_07-10-54AM/perl/bin/perl -I/tmp/deinstall2013-04-23_07-10-54AM/perl/lib -I/tmp/deinstall2013-04-23_07-10-54AM/crs/install /tmp/deinstall2013-04-23_07-10-54AM/crs/install/roothas.pl -force  -deconfig -paramfile "/tmp/deinstall2013-04-23_07-10-54AM/response/deinstall_Ora11g_gridinfrahome1.rsp"
Using configuration parameter file: /tmp/deinstall2013-04-23_07-10-54AM/response/deinstall_Ora11g_gridinfrahome1.rsp
PRKO-2573 : ONS daemon is already stopped.
CRS-2673: Attempting to stop 'ora.cssd' on 'node1'
CRS-2677: Stop of 'ora.cssd' on 'node1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'node1'
CRS-2673: Attempting to stop 'ora.evmd' on 'node1'
CRS-2677: Stop of 'ora.evmd' on 'node1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'node1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle Restart stack
[root@NODE1 ~]# 

The above command stops the remaining services. Now press Enter in the first terminal; the following output is a continuation of the deinstall session.

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/u01/app/grid/product/11.2.0/grid' from the central inventory on the local node : Done

Delete directory '/u01/app/grid/product/11.2.0/grid' on the local node : Done

The Oracle Base directory '/u01/app/grid' will not be removed on local node. The directory is not empty.

Oracle Universal Installer cleanup was successful.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2013-04-23_07-10-54AM' on node 'NODE1'

## [END] Oracle install clean ##


######################### CLEAN OPERATION END #########################


####################### CLEAN OPERATION SUMMARY #######################
ASM instance was de-configured successfully from the Oracle home
Following Oracle Restart enabled listener(s) were de-configured successfully: LISTENER
Oracle Restart was already stopped and de-configured on node "node1"
Oracle Restart is stopped and de-configured successfully.
Successfully detached Oracle home '/u01/app/grid/product/11.2.0/grid' from the central inventory on the local node.
Successfully deleted directory '/u01/app/grid/product/11.2.0/grid' on the local node.
Oracle Universal Installer cleanup was successful.

Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL & DECONFIG TOOL END #############

The above process deinstalls the grid infrastructure gracefully. If it does not work for you and you want to remove everything forcefully by just deleting the directories from the OS, it is a bit of a pain to find all the files that hold information about the old installation. I found a good link describing the manual cleanup; the link can be found here.
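For reference, on Linux a manual cleanup of an Oracle Restart home usually means removing files along these lines. This is only a rough sketch based on my setup; verify every path before deleting anything.

# run as root -- typical leftovers of an 11.2 Oracle Restart installation (verify paths first!)
rm -rf /etc/oracle                          # OLR/OCR location files
rm -f  /etc/oraInst.loc /etc/oratab         # inventory pointer and oratab entries
rm -f  /etc/init.d/init.ohasd /etc/init.d/ohasd /etc/rc.d/rc*.d/*ohasd*
rm -rf /u01/app/oraInventory                # central inventory
rm -rf /u01/app/grid/product/11.2.0/grid    # the grid home itself
# also remove the ohasd respawn entry from /etc/inittab if present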

Happy deinstallation 😉


I found it a bit difficult to get all the steps for migrating a non-ASM 11g database to ASM 11g. Almost all of the web links I found start directly with the RMAN commands that do the actual job. Since in 11g we need to install the grid infrastructure separately for ASM, I thought of posting the complete migration steps in a single post. In 10g, ASM creation is included in the same installer through which we install the database, so that was easy and straightforward; 11g has a different installer for ASM. Let's start this post with the following assumptions:

1) I have a database YYAPRD running on normal file system
2) I have created disks DG1, DG2, DG3 and DG4 which will be used during ASM instance creation

I can't include the steps for creating the disks as I don't have much expertise on that; I just followed a few links to create them, and those links can be found here. Follow this link to download the following files:

1) p10404530_112030_LINUX_1of7.zip
2) p10404530_112030_LINUX_2of7.zip
3) p10404530_112030_LINUX_3of7.zip

The first two files are the 11g database installer, and the third file is the 11g grid infrastructure installer, which needs to be run before using ASM or before setting up cluster services for a RAC database.

11g grid infrastructure will be installed in a separate ORACLE_HOME from that of the database and with a different OS user, as recommended by Oracle. So we have to create the OS user "grid" and the following OS groups:

1) asmdba
2) asmoper
3) asmadmin

Make sure that OS user "grid" has "oinstall" as its primary group and "dba", "asmdba", "asmoper" and "asmadmin" as secondary groups. We also need to modify OS user "oracle" a bit by adding "asmdba" as a secondary group. After all this, our OS users "grid" and "oracle" should look like below:

[oracle@NODE1 ~]$ id
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(asmdba)

[grid@NODE1 ~]$ id
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(asmdba),54324(asmoper),54325(asmadmin)
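For reference, the groups and users above can be created with commands roughly like these (a sketch; the numeric IDs simply match the id output above):

# run as root
groupadd -g 54321 oinstall
groupadd -g 54322 dba
groupadd -g 54323 asmdba
groupadd -g 54324 asmoper
groupadd -g 54325 asmadmin

useradd -u 54322 -g oinstall -G dba,asmdba,asmoper,asmadmin grid
usermod -a -G asmdba oracle        # add asmdba as a secondary group of the existing "oracle" user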

Now we unzip the files to some location on the server and then run runInstaller from the grid folder that is created after unzipping the above files. In my case the installer location is /u01/11g_soft/11203_soft/grid. Following are the screenshots for the grid infrastructure installation:

A1

Start the installer with the ./runInstaller command as in the above screenshot.
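In text form that step is roughly:

[grid@NODE1 ~]$ cd /u01/11g_soft/11203_soft/grid
[grid@NODE1 grid]$ ./runInstaller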

A2

Select “skip software update” and click next>

A3

Select "Configure Grid Infrastructure for a Standalone Server", as we are not using ASM for a RAC database.

A4

click next>

A5

On this screen you can see the disks I created earlier. I have kept the redundancy "External" so that only one disk is required for a disk group. For "Normal" redundancy we need at least two disks in a diskgroup, and for "High" we need a minimum of three disks. An allocation unit size of 1MB is fine. Select the disk that has to be added to the diskgroup named "+DATA".
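Roughly speaking, this screen is the GUI equivalent of a CREATE DISKGROUP statement like the sketch below; the disk path is an assumption based on the diskstring /dev/oracleasm/disks/*.

CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DG1'
  ATTRIBUTE 'au_size' = '1M';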

A6

Provide password for SYS and ASMSNMP accounts for the ASM instance.

A7

If you created all the groups mentioned earlier in this post, the default selection should be fine.

A8

Choose the ORACLE_BASE and ORACLE_HOME locations. Make sure that the grid ORACLE_HOME location is different from the database home location. In my case the locations are:

ASM ORACLE_HOME=/u01/app/grid/product/11.2.0/grid

DB   ORACLE_HOME=/u01/app/oracle/product/11.2.0

A9

click next>

A10

Wait for the installation to prompt for root.sh.

New

Run root.sh from the location mentioned above.

A11

Once root.sh runs successfully as in the above screenshot, the grid infrastructure installation is complete, with an ASM instance +ASM running on the server.

A12

alert_+ASM.log can be found at /u01/app/grid/diag/asm/+asm/+ASM/trace, and the below processes are running under OS user "grid" on the server.


[grid@NODE1 ~]# ps -ef|grep grid
grid      3890     1  0 06:27 ?        00:00:02 /u01/app/grid/product/11.2.0/grid/bin/ohasd.bin reboot
grid      4444     1  0 06:28 ?        00:00:05 /u01/app/grid/product/11.2.0/grid/bin/oraagent.bin
grid      4457     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      4469     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/evmd.bin
grid      4476     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/cssdagent
grid      4501     1  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/ocssd.bin
grid      4523  4469  0 06:28 ?        00:00:00 /u01/app/grid/product/11.2.0/grid/bin/evmlogger.bin -o /u01/app/grid/product/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/grid/product/11.2.0/grid/evm/log/evmlogger.log
grid      4586     1  0 06:29 ?        00:00:00 asm_pmon_+ASM
grid      4588     1  0 06:29 ?        00:00:00 asm_psp0_+ASM
grid      4590     1  0 06:29 ?        00:00:09 asm_vktm_+ASM
grid      4594     1  0 06:29 ?        00:00:00 asm_gen0_+ASM
grid      4596     1  0 06:29 ?        00:00:00 asm_diag_+ASM
grid      4598     1  0 06:29 ?        00:00:01 asm_dia0_+ASM
grid      4600     1  0 06:29 ?        00:00:00 asm_mman_+ASM
grid      4602     1  0 06:29 ?        00:00:00 asm_dbw0_+ASM
grid      4604     1  0 06:29 ?        00:00:00 asm_lgwr_+ASM
grid      4606     1  0 06:29 ?        00:00:00 asm_ckpt_+ASM
grid      4608     1  0 06:29 ?        00:00:00 asm_smon_+ASM
grid      4610     1  0 06:29 ?        00:00:00 asm_rbal_+ASM
grid      4612     1  0 06:29 ?        00:00:00 asm_gmon_+ASM
grid      4614     1  0 06:29 ?        00:00:00 asm_mmon_+ASM
grid      4616     1  0 06:29 ?        00:00:00 asm_mmnl_+ASM
grid      4618     1  0 06:29 ?        00:00:00 oracle+ASM (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
root      4851  4810  0 07:05 pts/2    00:00:00 grep grid
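You can also verify the Oracle Restart resources and the diskgroup from the "grid" user with something like the commands below (a quick sketch; the output will differ per setup):

[grid@NODE1 ~]$ srvctl status asm
[grid@NODE1 ~]$ srvctl status listener
[grid@NODE1 ~]$ asmcmd lsdg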

Now that our ASM instance is ready, we are all set for moving our database to the ASM disks. Below are the steps to move a non-ASM database to ASM. First of all, get the details of all the files that need to be moved to ASM.

SQL> select name from v$database;

NAME
---------
YYAPRD

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------
/u04/oradata/YYAPRD/system01.dbf
/u04/oradata/YYAPRD/sysaux01.dbf
/u04/oradata/YYAPRD/undotbs01.dbf
/u04/oradata/YYAPRD/users01.dbf

SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------
/u04/oradata/YYAPRD/temp01.dbf

SQL> col member form a50
SQL> select member from v$logfile;

MEMBER
--------------------------------------------------
/u02/oradata/YYAPRD/redo01a.log
/u03/oradata/YYAPRD/redo01b.log
/u02/oradata/YYAPRD/redo02a.log
/u03/oradata/YYAPRD/redo02b.log
/u02/oradata/YYAPRD/redo03a.log
/u03/oradata/YYAPRD/redo03b.log

6 rows selected.

SQL> show parameter spfile

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
spfile                               string                           /u01/app/oracle/product/11.2.0
                                                                      .3/dbs/spfileYYAPRD.ora
SQL> show parameter control_file

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
control_file_record_keep_time        integer                          7
control_files                        string                           /u01/oradata/YYAPRD/control01.
                                                                      ctl, /u02/oradata/YYAPRD/contr
                                                                      ol02.ctl, /u03/oradata/YYAPRD/
                                                                      control03.ctl

Now, change the CONTROL_FILES parameter to point to the newly created ASM diskgroups and set the DB_CREATE_FILE_DEST parameter to the ASM diskgroup that will be used for datafiles. This parameter serves as the default location for all datafiles created in the database. I am multiplexing the controlfiles, so I am using two disk groups, "+DATA" and "+DATA1". Once the parameters are set, restart the database in NOMOUNT state for RMAN to start the movement.

SQL> alter system set control_files='+DATA','+DATA1' scope=spfile;

System altered.

SQL> alter system set db_create_file_dest='+DATA' scope=spfile;

System altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL>
SQL> startup nomount
ORACLE instance started.

Total System Global Area 2042241024 bytes
Fixed Size                  1345968 bytes
Variable Size             838862416 bytes
Database Buffers         1191182336 bytes
Redo Buffers               10850304 bytes
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Now connect with RMAN and restore the controlfile first from the existing one. This creates new controlfiles in ASM diskgroups "+DATA" and "+DATA1". Then mount the database, back up the database as a copy into ASM diskgroup "+DATA", and as the last step switch the database to the copy. Following are the logs from this activity.

[oracle@NODE1 11.2.0.3]$ rman target /

Recovery Manager: Release 11.2.0.3.0 - Production on Mon Apr 8 23:24:36 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: YYAPRD (not mounted)

RMAN> restore controlfile from '/u01/oradata/YYAPRD/control01.ctl';

Starting restore at 08-APR-13
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
output file name=+DATA/yyaprd/controlfile/current.256.812244559
output file name=+DATA1/yyaprd/controlfile/current.256.812244561
Finished restore at 08-APR-13

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> backup as copy database format '+DATA';

Starting backup at 08-APR-13
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=5 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u04/oradata/YYAPRD/system01.dbf
output file name=+DATA/yyaprd/datafile/system.257.812244625 tag=TAG20130408T233025 RECID=1 STAMP=812244720
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:35
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u04/oradata/YYAPRD/users01.dbf
output file name=+DATA/yyaprd/datafile/users.258.812244721 tag=TAG20130408T233025 RECID=2 STAMP=812244816
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:01:45
channel ORA_DISK_1: starting datafile copy
input datafile file number=00002 name=/u04/oradata/YYAPRD/sysaux01.dbf
output file name=+DATA/yyaprd/datafile/sysaux.259.812244827 tag=TAG20130408T233025 RECID=3 STAMP=812244873
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u04/oradata/YYAPRD/undotbs01.dbf
output file name=+DATA/yyaprd/datafile/undotbs.260.812244881 tag=TAG20130408T233025 RECID=4 STAMP=812244927
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:55
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=+DATA/yyaprd/controlfile/backup.261.812244937 tag=TAG20130408T233025 RECID=5 STAMP=812244940
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 08-APR-13
channel ORA_DISK_1: finished piece 1 at 08-APR-13
piece handle=+DATA/yyaprd/backupset/2013_04_08/nnsnf0_tag20130408t233025_0.262.812244945 tag=TAG20130408T233025 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 08-APR-13

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/yyaprd/datafile/system.257.812244625"
datafile 2 switched to datafile copy "+DATA/yyaprd/datafile/sysaux.259.812244827"
datafile 3 switched to datafile copy "+DATA/yyaprd/datafile/undotbs.260.812244881"
datafile 4 switched to datafile copy "+DATA/yyaprd/datafile/users.258.812244721"

RMAN> alter database open;

database opened

RMAN> exit

Recovery Manager complete.

Now we need to move the temporary tablespace files and online redo logfiles to ASM. The best way is to create new ones in ASM and drop the old ones. Following are the steps to do that.

SQL> select file_name from dba_data_files;

FILE_NAME
--------------------------------------------------
+DATA/yyaprd/datafile/system.257.812244625
+DATA/yyaprd/datafile/sysaux.259.812244827
+DATA/yyaprd/datafile/undotbs.260.812244881
+DATA/yyaprd/datafile/users.258.812244721

SQL> alter tablespace temp add tempfile size 500M;

Tablespace altered.

SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------
/u04/oradata/YYAPRD/temp01.dbf
+DATA/yyaprd/tempfile/temp.263.812245559

SQL> alter database tempfile '/u04/oradata/YYAPRD/temp01.dbf' drop including datafiles;

Database altered.

SQL> select file_name from dba_temp_files;

FILE_NAME
--------------------------------------------------
+DATA/yyaprd/tempfile/temp.263.812245559

SQL> alter database add logfile group 4 ('+DATA','+DATA1') size 10M;

Database altered.

SQL> alter database add logfile group 5 ('+DATA','+DATA1') size 10M;

Database altered.

SQL> alter database add logfile group 6 ('+DATA','+DATA1') size 10M;

Database altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME  NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ----------- ------------ -----------
         1          1        115   10485760        512          2 NO  INACTIVE                244292 08-APR-2013       268271 08-APR-2013
         2          1        116   10485760        512          2 NO  INACTIVE                268271 08-APR-2013       271180 08-APR-2013
         3          1        117   10485760        512          2 NO  CURRENT                 271180 08-APR-2013   2.8147E+14
         4          1          0   10485760        512          2 YES UNUSED                       0                        0
         5          1          0   10485760        512          2 YES UNUSED                       0                        0
         6          1          0   10485760        512          2 YES UNUSED                       0                        0

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter system switch logfile;

System altered.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME  NEXT_CHANGE# NEXT_TIME
---------- ---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ----------- ------------ -----------
         3          1        117   10485760        512          2 NO  INACTIVE                271180 08-APR-2013       274462 08-APR-2013
         4          1        118   10485760        512          2 NO  INACTIVE                274462 08-APR-2013       274465 08-APR-2013
         5          1        119   10485760        512          2 NO  CURRENT                 274465 08-APR-2013   2.8147E+14
         6          1          0   10485760        512          2 YES UNUSED                       0                        0

SQL> alter database drop logfile group 3;

Database altered.

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------
+DATA/yyaprd/onlinelog/group_4.264.812246035
+DATA1/yyaprd/onlinelog/group_4.257.812246037
+DATA/yyaprd/onlinelog/group_5.265.812246095
+DATA1/yyaprd/onlinelog/group_5.258.812246097
+DATA/yyaprd/onlinelog/group_6.266.812246115
+DATA1/yyaprd/onlinelog/group_6.259.812246117

6 rows selected.

Now the DB is migrated to ASM. Other things like the FRA and archivelogs can also be moved to ASM by setting the respective parameters for these features.
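For example, parameters along these lines point the FRA and the archivelogs at ASM (a sketch; the FRA size and diskgroup are assumptions):

SQL> alter system set db_recovery_file_dest_size=10G scope=both;
SQL> alter system set db_recovery_file_dest='+DATA1' scope=both;
SQL> alter system set log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' scope=both;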

Recently, after restarting an 11.2.0.1 database running on AIX 6.1, all of a sudden the following lines started getting written to the database's alert.log. The alert.log grew from a few MB to some GB in the span of a few minutes.

Mon Apr 15 17:08:37 2013
Archived Log entry 3144 added for thread 1 sequence 3152 ID 0x29e688a7 dest 1:
Mon Apr 15 17:08:38 2013
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:
Errors in file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc:

The generated trace file contains the following:

oracle@titan $ head -300 findb_ora_21495880.trc
Trace file /u01/app/oracle/diag/rdbms/findb/FINDB/trace/findb_ora_21495880.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/11.2
System name:    AIX
Node name:      titan
Release:        1
Version:        7
Machine:        002A8FXEXC00
Instance name: findb
Redo thread mounted by this instance: 1
Oracle process number: 4
Unix process pid: 21495880, image: oracle@titan


*** 2013-04-15 16:52:44.151
*** SESSION ID:(1.1) 2013-04-15 16:52:44.151
*** CLIENT ID:() 2013-04-15 16:52:44.151
*** SERVICE NAME:(SYS$USERS) 2013-04-15 16:52:44.151
*** MODULE NAME:(OMS) 2013-04-15 16:52:44.151
*** ACTION NAME:() 2013-04-15 16:52:44.151

psdgbt: bind csid (1) does not match session csid (873)
psdgbt: session charset is AL32UTF8

*** 2013-04-15 16:52:44.151
dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x0, level=1, mask=0x0)
----- Error Stack Dump -----
----- Current SQL Statement for this session (sql_id=43c5ykm1mcp2a) -----
begin dbms_application_info.set_module(:1, :2); dbms_application_info.set_client_info(:3); dbms_session.set_identifier(:4); end;

I searched for "psdgbt: bind csid (1) does not match session csid" and got the solution. The reason for this problem was the grid agent. This error may come when we start the database or listener after starting the grid agent. In my case the grid agent starts automatically at server reboot, and I had never faced this problem before. The solution is to shut down the grid agent first, then restart the database and listener, and only after that start the grid agent again. My issue was resolved with this.
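The restart sequence looks roughly like this (a sketch; run emctl from the agent home):

oracle@titan $ emctl stop agent
oracle@titan $ sqlplus / as sysdba
SQL> shutdown immediate
SQL> startup
SQL> exit
oracle@titan $ lsnrctl stop
oracle@titan $ lsnrctl start
oracle@titan $ emctl start agent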

While starting an 11.2.0.3 database on Oracle Linux 5, I got the below error.

[oracle@NODE1 11.2.0.3]$ sqlplus /nolog

SQL*Plus: Release 11.2.0.3.0 Production on Mon Apr 8 17:24:02 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

SQL> conn sys as sysdba
Enter password: 
Connected to an idle instance.
SQL> startup
ORA-00845: MEMORY_TARGET not supported on this system
SQL> 

Below are a few lines from the alert.log file for this error:

Thread 1 advanced to log sequence 115 (LGWR switch)
  Current log# 1 seq# 115 mem# 0: /u02/oradata/YYAPRD/redo01a.log
  Current log# 1 seq# 115 mem# 1: /u03/oradata/YYAPRD/redo01b.log
Mon Apr 08 17:24:12 2013
Adjusting the default value of parameter parallel_max_servers
from 160 to 135 due to the value of parameter processes (150)
Starting ORACLE instance (normal)
WARNING: You are trying to use the MEMORY_TARGET feature. This feature requires the /dev/shm file system to be 
mounted for at least 2046820352 bytes. /dev/shm is either not mounted or is mounted with available space less 
than this size. Please fix this so that MEMORY_TARGET can work as expected. Current available is 1952489472 and 
used is 169725952 bytes. Ensure that the mount point is /dev/shm for this directory.
memory_target needs larger /dev/shm

For this error, below is the explanation from Oracle:

The new Automatic Memory Management functionality uses /dev/shm on Linux for SGA and PGA management. The errors occur if either MEMORY_TARGET or MEMORY_MAX_TARGET is configured larger than the configured /dev/shm size, or if /dev/shm is mounted incorrectly.

I followed the MetaLink doc 465048.1 to resolve the issue. I also found a very nice link explaining what /dev/shm is and what it is used for; it contains the solution as suggested by Oracle as well. I thought of mentioning that link here, so that one can get more details about /dev/shm.
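In short, /dev/shm must be at least as big as MEMORY_TARGET. A rough sketch of the fix (run as root; the 3g size is an assumption, pick a value larger than your MEMORY_TARGET):

# remount tmpfs with a larger size for the current boot
mount -o remount,size=3g /dev/shm

# and make it permanent by adjusting the tmpfs line in /etc/fstab
tmpfs   /dev/shm   tmpfs   defaults,size=3g   0 0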

I got the below error while creating a database manually with version 11.2.0.3 on Oracle Linux 5.

create database  YYAPRD
*
ERROR at line 1:
ORA-01092: ORACLE instance terminated. Disconnection forced
ORA-01501: CREATE DATABASE failed
ORA-02084: database name is missing a component
Process ID: 5980
Session ID: 63 Serial number: 3

Below is my database creation script.

create database  YYAPRD
        maxdatafiles  500
        maxinstances  1
        maxlogfiles   32
        maxlogmembers 4
        character set AL32UTF8
        national character set AL16UTF16
DATAFILE '/u04/oradata/YYAPRD/system01.dbf' SIZE 1024M reuse
SYSAUX DATAFILE '/u04/oradata/YYAPRD/sysaux01.dbf' SIZE 500M reuse
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE '/u04/oradata/YYAPRD/temp01.dbf' SIZE 500M reuse
UNDO TABLESPACE UNDOTBS DATAFILE '/u04/oradata/YYAPRD/undotbs01.dbf' SIZE 500M reuse
LOGFILE GROUP 1 ('/u02/oradata/YYAPRD/redo01a.log','/u03/oradata/YYAPRD/redo01b.log') SIZE 10M reuse,
        GROUP 2 ('/u02/oradata/YYAPRD/redo02a.log','/u03/oradata/YYAPRD/redo02b.log') SIZE 10M reuse,
        GROUP 3 ('/u02/oradata/YYAPRD/redo03a.log','/u03/oradata/YYAPRD/redo03b.log') SIZE 10M reuse;

I cross-checked my script a few times and could not find anything wrong with it. Then I looked at the init file for this DB and found the culprit parameter: by mistake the DB_DOMAIN parameter was set to the value ".world". This parameter does not need to be set explicitly, and if it is set it should never start with a dot. The leading dot creates the problem because GLOBAL_DB_NAME defaults to DB_NAME.DB_DOMAIN, so the GLOBAL_DB_NAME value was being constructed as something like YYAPRD..world (double dots).
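In init.ora terms the difference is just the leading dot (a sketch of the broken versus fixed entry):

# broken entry -- the leading dot makes GLOBAL_DB_NAME resolve to YYAPRD..world
db_domain = '.world'

# fixed entry -- drop the parameter entirely, or set it without the dot
db_domain = 'world'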

Once I removed the dot from the DB_DOMAIN parameter, the DB creation went fine.