Monday, January 11, 2016

How to Change the VIP Address in Oracle RAC 11g

Today I received a request from a network administrator to change the VIP address of an Oracle RAC database, because he needed the IP that was set on node1. You can follow the steps below to change the VIP address.
On node1:

[root@test-1 bin]# ./srvctl stop vip -n test-1 -f

[root@test-1 bin]# ./srvctl config vip -n test-1
VIP exists: /test-1-vip/10.8.72.27/10.8.72.0/255.255.255.0/eth0, hosting node test-1

• Be careful: you must add the new IP address to the hosts file on each node.
• We want to change 10.8.72.27 to 10.8.72.32.

[root@test-1 bin]# cd /grid/product/11.2.0/grid_1/bin/

[root@test-1 bin]# ./srvctl  modify nodeapps -n test-1 -A 10.8.72.32/255.255.255.0/eth0

• Check your new IP:

[root@test-1 bin]# /sbin/ifconfig -a | egrep '(eth0|Mask)'
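Putting the commands above together, here is a hedged sketch of the whole sequence as a script; the grid home path, node name, and addresses simply reuse the values from this post and are assumptions for any other cluster. Note that the VIP must be started again after the modify, a step that is easy to forget:

```shell
# Sketch of the complete VIP change sequence; run as root on the cluster node.
# Remember to update /etc/hosts (or DNS) on every node before running this.
GRID_HOME=${GRID_HOME:-/grid/product/11.2.0/grid_1}
SRVCTL="$GRID_HOME/bin/srvctl"

if [ -x "$SRVCTL" ]; then
    "$SRVCTL" stop vip -n test-1 -f                                  # stop the old VIP
    "$SRVCTL" modify nodeapps -n test-1 -A 10.8.72.32/255.255.255.0/eth0
    "$SRVCTL" start vip -n test-1                                    # bring the new VIP online
    "$SRVCTL" config vip -n test-1                                   # confirm the change
    result="vip modified"
else
    result="srvctl not found; run this on a cluster node"
fi
echo "$result"
```

On a real cluster you would also restart any listeners that register the VIP.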

Sunday, January 3, 2016

Setting up GG Replication

In the following post we'll set up replication using GoldenGate; for instructions on installing the software, take a look at my short post on the same topic.

We now assume the following: GoldenGate is installed on both servers, each running an Oracle database; the GG Manager process is up and running on both servers; TCP/IP network connectivity is open from the source server to the target server's manager port; and you have a basic understanding of the GG replication flow, which can be quickly revised from my earlier post.


Basic Replication Steps

Preparing the Source Database

STEP 1:- The source database should be in archivelog mode. If it isn't, follow the steps below:-

SQL> shutdown immediate

SQL> startup mount

SQL> alter database archivelog;

SQL> alter database open;

SQL> select log_mode from v$database;

LOG_MODE

————

ARCHIVELOG

STEP 2:- Enable minimal supplemental logging:

SQL> alter database add supplemental log data;

Database altered.
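To confirm that supplemental logging is now active, you can query V$DATABASE; YES is the expected value after the ALTER above:

```
SQL> select supplemental_log_data_min from v$database;

SUPPLEME
--------
YES
```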

STEP 3:- Turn off the database recyclebin feature and purge it.

SQL> PURGE DBA_RECYCLEBIN;

DBA Recyclebin purged.

SQL> alter system set recyclebin=off scope=spfile;

STEP 4:- Create the GoldenGate schema (tablespace/user) and grant the necessary privileges:-

SQL>  create tablespace gg_tbs datafile '/u01/app/oracle/oradata/orcl/gg_tbs_data01.dbf' size 100m;

Tablespace created.

SQL>  create user ggate identified by oracle default tablespace gg_tbs quota unlimited on gg_tbs;

User created.

SQL>  grant CREATE SESSION, CONNECT, RESOURCE, ALTER ANY TABLE, ALTER SYSTEM, CREATE TABLE, DBA, LOCK ANY TABLE, SELECT ANY TRANSACTION, FLASHBACK ANY TABLE to ggate;

Grant succeeded.

SQL>  grant execute on utl_file to ggate;

Grant succeeded.

STEP 5:- Now go to the GoldenGate home directory and run scripts for creating all necessary objects for replication.

$ cd /u01/app/oracle/product/gg/

$ sqlplus / as sysdba

SQL> @marker_setup.sql
SQL> @ddl_setup.sql
SQL> @role_setup.sql
SQL> grant GGS_GGSUSER_ROLE to ggate;

SQL> @ddl_enable.sql
STEP 6:- Now edit the parameter file:-

$./ggsci

GGSCI  1> EDIT PARAMS ./GLOBALS

GGSCHEMA ggate

STEP 7:- Now let's create the schemas for replication on both the source and target databases.

SQL> create user source identified by source default tablespace users temporary tablespace temp;

User created.

SQL> grant connect,resource,unlimited tablespace to source;

Grant succeeded.

SQL>

SQL> create user target identified by target default tablespace users temporary tablespace temp;

User created.

SQL> grant connect,resource,unlimited tablespace to target;

Grant succeeded.

STEP 8:- Now create the directories for trail files on both servers, and create the directory for the discard file on the target server only.

Source server

mkdir /u01/app/oracle/product/gg/dirdat/tr

Target Server

mkdir /u01/app/oracle/product/gg/dirdat/tr

mkdir /u01/app/oracle/product/gg/discard

STEP 9:- Now configure the Extract process on the source database:-

$ ./ggsci

GGSCI  1> edit params mgr

PORT 7809

GGSCI  2> start manager

Manager started.

GGSCI (db.us.oracle.com) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

STEP 10:- Log in to the database and add supplemental primary-key information to the log files.

GGSCI 4> dblogin userid ggate
Password:
Successfully logged into database.

GGSCI 5> ADD SCHEMATRANDATA source

2015-08-03 10:30:20 INFO OGG-01788 SCHEMATRANDATA has been added on schema source.

GGSCI 6> add extract ext1, tranlog, begin now
EXTRACT added.

GGSCI 7> add exttrail /u01/app/oracle/product/gg/dirdat/tr, extract ext1
EXTTRAIL added.

GGSCI 8> edit params ext1
extract ext1
userid ggate, password oracle
exttrail /u01/app/oracle/product/gg/dirdat/tr
ddl include mapped objname source.*;
table source.*;

GGSCI 9> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EXT1 00:00:00 00:01:29

GGSCI 10> add extract pump1, exttrailsource /u01/app/oracle/product/gg/dirdat/tr , begin now
EXTRACT added.

GGSCI 11> add rmttrail /u01/app/oracle/product/gg/dirdat/tr, extract pump1
RMTTRAIL added.

GGSCI 12> edit params pump1
EXTRACT pump1
USERID ggate, PASSWORD oracle
RMTHOST db2, MGRPORT 7809
RMTTRAIL /u01/app/oracle/product/gg/dirdat/tr
PASSTHRU
table source.*;

GGSCI 13> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT STOPPED EXT1 00:00:00 00:02:33
EXTRACT STOPPED PUMP1 00:00:00 00:02:56

Preparing the Target Database

STEP 1:- Verify that the Manager is running on the target server, and start it if it isn't.

$ ./ggsci

GGSCI 1> edit params mgr
PORT 7809

GGSCI 2> start manager

Manager started.

GGSCI 3> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING

STEP 2:- Create a checkpoint table and update the GLOBALS file.

GGSCI  4> EDIT PARAMS ./GLOBALS
CHECKPOINTTABLE     target.checkpoint

GGSCI  5> dblogin userid target
Password:
Successfully logged into database.

GGSCI  6> add checkpointtable target.checkpoint

Successfully created checkpoint table target.checkpoint.

STEP 3:- Add the Replicat process:

GGSCI  8> add replicat rep1, exttrail /u01/app/oracle/product/gg/dirdat/tr, begin now
REPLICAT added.

GGSCI (db2) 9> edit params rep1
REPLICAT rep1
ASSUMETARGETDEFS
USERID target, PASSWORD oracle
discardfile /u01/app/oracle/product/gg/discard/rep1_discard.txt, append, megabytes 10
DDL
map source.*, target target.*;

GGSCI  10> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT STOPPED REP1 00:00:00 00:01:32

GGSCI  11>

Starting Replication

At Source Server

STEP 1:- Start the Extract processes

GGSCI  6> start extract ext1

Sending START request to MANAGER …
EXTRACT EXT1 starting

GGSCI  7> start extract pump1

Sending START request to MANAGER …
EXTRACT PUMP1 starting

GGSCI  8> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:00 00:00:01
EXTRACT RUNNING PUMP1 00:00:00 00:01:01

At Destination Server

STEP 2:- Start Replicat Process

GGSCI  6> start replicat rep1

Sending START request to MANAGER …
REPLICAT REP1 starting

GGSCI (db2) 7> info all

Program Status Group Lag at Chkpt Time Since Chkpt

MANAGER RUNNING
REPLICAT RUNNING REP1 00:00:00 00:00:06

Finally, our replication has been configured between the source and destination databases, so you can now create objects under the source schema and they will be replicated to the target schema. GoldenGate has several features that make it far better than Oracle Streams and CDC.

Reference

Installing Golden Gate

In this post we'll cover a step-by-step approach for installing the GoldenGate software on both Windows-based and Unix-based operating systems. Below are some requirements that should be met before installing the GoldenGate software:-

Memory Requirements

At least 25–55 MB of RAM is required for each GoldenGate Extract and Replicat process. Oracle GoldenGate supports up to 300 concurrent Extract and Replicat processes per GoldenGate instance. As a rule of thumb, take into consideration that at least 1–2 Extract processes and multiple Replicat processes will be required, in addition to the Manager process, for a basic Oracle GoldenGate installation. The best way to assess the total memory requirement is to view the current report file from GGSCI and examine the PROCESS AVAIL VM FROM OS (min) value to determine whether you have sufficient swap memory for your platform.
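As a rough sketch of that check, the line can be grepped out of the process report files kept under the dirrpt directory. The report directory and the sample report content below are fabricated purely for illustration; on a real system, point the grep at the dirrpt directory of your GoldenGate home:

```shell
# Grep the VM headroom line out of GoldenGate report files.
# RPT_DIR and the sample report content are illustrative only.
RPT_DIR=${RPT_DIR:-/tmp/gg_dirrpt_demo}
mkdir -p "$RPT_DIR"

# Fabricated sample of the line a real EXT1.rpt would contain:
printf 'PROCESS AVAIL VM FROM OS (min): 1024M\n' > "$RPT_DIR/EXT1.rpt"

found=$(grep -h 'PROCESS AVAIL VM FROM OS' "$RPT_DIR"/*.rpt)
echo "$found"
```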

Disk Space Requirements

The following guidelines help ensure you have enough disk space to support your GoldenGate replication needs:
• Allocate at least 50–150 MB of disk space for the Oracle GoldenGate software binaries.
• Allocate 40 MB of disk space per GoldenGate instance for working directories and files on each server. For a basic configuration, you will need 40 MB on the source and 40 MB on the target system, for a total of 80 MB of disk space.
• Allocate sufficient disk space for temporary files to accommodate GoldenGate operations. By default, GoldenGate stores temporary files in the dirtmp directory under the installation directory. A good rule of thumb for temp file space is around 10 GB of disk space.
• Plan on at least 10 MB per trail file. As a rule of thumb, start with at least 1 GB of disk space allocated per system for trail files. Alternatively, use the following formula that Oracle provides to determine the amount of disk space to set aside:
[log volume in one hour] x [number of hours downtime] x 0.4 = trail disk space.
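Plugging assumed numbers into that formula (say, 500 MB of redo per hour and 4 hours of tolerated downtime; both values are illustrative, so measure your own redo volume), a quick sketch:

```shell
# trail disk space = [log volume in one hour] x [hours of downtime] x 0.4
LOG_MB_PER_HOUR=500   # assumed redo volume; measure yours via V$ARCHIVED_LOG
DOWNTIME_HOURS=4      # assumed worst-case downtime window

trail_mb=$(awk -v v="$LOG_MB_PER_HOUR" -v h="$DOWNTIME_HOURS" \
    'BEGIN { print v * h * 0.4 }')
echo "allocate at least ${trail_mb} MB of trail file space"
```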
One way to calculate the total amount required for trail file space is by querying the V$ARCHIVED_LOG view from within the source Oracle database. The following query shows you how to do so:

select trunc(COMPLETION_TIME),count(*)*100 size_in_MB
from v$archived_log
group by trunc(COMPLETION_TIME);
TRUNC(COM SIZE_IN_MB
——— ———-
15-MAY-11 500

Run tests after installing GoldenGate to measure your specific transaction mix and load, and to gauge the total disk space required for trail files.

Network Requirements

Because Oracle GoldenGate operates between source and target systems over networks, you must configure TCP/IP networking so that all host names used in the GoldenGate infrastructure resolve in DNS. If firewalls are present, the hosts must be allowed to send and receive data on the ports that the Manager, Extract, and Replicat processes use, and this range of ports must be allocated for the GoldenGate environment. Also allocate ports for the GoldenGate Manager, Extract, and Replicat processes; by default, the Manager uses port 7809 (as in the configurations in this post), and the recommendation is to keep this port available. In addition, keep a record of the ports allocated to GoldenGate processes to avoid port conflicts.
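A quick reachability test of the manager port from the source host might look like the following sketch; the target host name is illustrative, and 7809 is the manager port used in this post's configurations:

```shell
# Check whether the target manager port accepts TCP connections.
TARGET_HOST=${TARGET_HOST:-db2}   # illustrative target host name
MGR_PORT=${MGR_PORT:-7809}        # manager port used in this post

if command -v nc >/dev/null 2>&1; then
    # -z: scan only, -w 3: three-second timeout
    nc -z -w 3 "$TARGET_HOST" "$MGR_PORT" >/dev/null 2>&1 \
        && status=open || status=closed
else
    status="unknown (nc not installed)"
fi
echo "manager port ${MGR_PORT} on ${TARGET_HOST}: ${status}"
```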

Operating System Requirements

Linux or UNIX:
• Grant read and write privileges to the operating system (OS) account used to install the Oracle GoldenGate software.
• Place the Oracle GoldenGate software on a shared disk in a clustered environment, or on a shared clustered filesystem that all cluster nodes have access to.
• Install from an operating system and database account that has read/write privileges.

Installation

I assume that you have downloaded the software by now; I have downloaded Oracle GoldenGate V11.2.1.0.1 for Oracle 11g on Linux x86. Now let's proceed with the installation:-

Step 1:- Create GoldenGate directory and copy the downloaded file.

mkdir /u01/app/oracle/product/gg

cp V32409-01.zip /u01/app/oracle/product/gg/

cd /u01/app/oracle/product/gg/

Step 2:- Unzip the file.

unzip V32409-01.zip

ls -ltr

fbo_ggs_Linux_x86_ora11g_32bit.tar

Oracle GoldenGate 11.2.1.0.1 README.doc
V32409-01.zip

OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf

Oracle GoldenGate 11.2.1.0.1 README.txt
Step 3:- Untar the tarball:

tar -xf fbo_ggs_Linux_x86_ora11g_32bit.tar

ls

bcpfmt.tpl defgen libxml2.txt

bcrypt.txt demo_more_ora_create.sql logdump

cfg demo_more_ora_insert.sql marker_remove.sql
chkpt_ora_create.sql demo_ora_create.sql marker_setup.sql
cobgen demo_ora_insert.sql marker_status.sql
convchk demo_ora_lob_create.sql mgr

db2cntl.tpl demo_ora_misc.sql notices.txt
ddl_cleartrace.sql demo_ora_pk_befores_create.sql oggerr

ddlcob demo_ora_pk_befores_insert.sql OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf

ddl_ddl2file.sql demo_ora_pk_befores_updates.sql Oracle GoldenGate 11.2.1.0.1 README.doc
ddl_disable.sql dirjar Oracle GoldenGate 11.2.1.0.1 README.txt
ddl_enable.sql dirprm params.sql
ddl_filter.sql emsclnt prvtclkm.plb
ddl_nopurgeRecyclebin.sql extract pw_agent_util.sh
ddl_ora10.sql fbo_ggs_Linux_x86_ora11g_32bit.tar remove_seq.sql
ddl_ora10upCommon.sql freeBSD.txt replicat

ddl_ora11.sql ggcmd retrace

ddl_ora9.sql ggMessage.dat reverse

ddl_pin.sql ggsci role_setup.sql
ddl_purgeRecyclebin.sql help.txt sequence.sql
ddl_remove.sql jagent.sh server

ddl_session1.sql keygen sqlldr.tpl
ddl_session.sql libantlr3c.so tcperrs

ddl_setup.sql libdb-5.2.so ucharset.h

ddl_status.sql libgglog.so ulg.sql
ddl_staymetadata_off.sql libggrepo.so UserExitExamples

ddl_staymetadata_on.sql libicudata.so.38 usrdecs.h

ddl_tracelevel.sql libicui18n.so.38 V32409-01.zip

ddl_trace_off.sql libicuuc.so.38 zlib.txt
ddl_trace_on.sql libxerces-c.so.28
Step 4:- Set the LD_LIBRARY_PATH variable:

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/u01/app/oracle/product/gg

Step 5:- Run GGSCI and create the subdirectories:

$ ./ggsci

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI 1> create subdirs

Creating subdirectories under current directory /u01/app/oracle/product/gg

Parameter files /u01/app/oracle/product/gg/dirprm: already exists

Report files /u01/app/oracle/product/gg/dirrpt: created

Checkpoint files /u01/app/oracle/product/gg/dirchk: created

Process status files /u01/app/oracle/product/gg/dirpcs: created

SQL script files /u01/app/oracle/product/gg/dirsql: created

Database definitions files /u01/app/oracle/product/gg/dirdef: created

Extract data files /u01/app/oracle/product/gg/dirdat: created

Temporary files /u01/app/oracle/product/gg/dirtmp: created

Stdout files /u01/app/oracle/product/gg/dirout: created


GGSCI 2> exit

Reference

Saturday, January 2, 2016

Golden Gate Architecture

In this post we'll understand the GoldenGate replication flow and examine each of its architectural components.

GoldenGate Replication Flow:

(Figure: typical GoldenGate replication flow)

The typical GoldenGate flow shows new and changed database data being captured from the source database. The captured data is written to a file called the source trail. The trail is then read by a data pump, sent across the network, and written to a remote trail file by the Collector process. The delivery function reads the remote trail and updates the target database. Each of the components is managed by the Manager process.

GoldenGate Components

From the above flow you can see there are following GoldenGate components:-
1. Source Database
2. Capture (Local Extract) Process
3. Source Trail
4. Data Pump
5. Network
6. Collector
7. Remote Trail
8. Delivery (Replicat)
9. Target Database
10. Manager

Now we’ll understand each and every component individually:-

SOURCE DATABASE

This is basically your Oracle Database from where you want to replicate your data.

Capture (Local Extract) Process

Capture is the process of extracting data that is inserted into, updated on, or deleted from the source database. In GoldenGate, the capture process is called the Extract. In this case, the Extract is called a Local Extract (sometimes called the Primary Extract) because it captures data changes from the local source database.

Extract is an operating-system process that runs on the source server and captures changes from the database redo logs (and in some exceptional cases, the archived redo logs) and writes the data to a file called the Trail File. Extract only captures committed changes and filters out other activity such as rolled-back changes. Extract can also be configured to write the Trail File directly to a remote target server, but this usually isn't the optimum configuration. In addition to database data manipulation language (DML) data, you can also capture data definition language (DDL) changes and sequences using Extract if properly configured. You can use Extract to capture data to initially load the target tables, but this is typically done using DBMS utilities such as export/import or Data Pump for Oracle. You can configure Extract as a single process or multiple parallel processes depending on your requirements. Each Extract process can act independently on different tables. For example, a single Extract can capture all the changes for all of the tables in a schema, or you can create multiple Extracts and divide the tables among them. In some cases, you may need to create multiple parallel Extract processes to improve performance, although this usually isn't necessary. You can stop and start each Extract process independently.
You can set up Extract to capture an entire schema using wildcarding, a single table, or a subset of rows or columns for a table. In addition, you can transform and filter captured data using the Extract to only extract data meeting certain criteria. You can instruct Extract to write any records that it’s unable to process to a discard file for later problem resolution. And you can generate reports automatically to show the Extract configuration. You can set these up to be updated periodically at user-defined intervals with the latest Extract processing statistics.

Source Trail

The Extract process sequentially writes committed transactions as they occur to a staging file that GoldenGate calls a source trail. Data is written in large blocks for high performance. Data that is written to the trail is queued for distribution to the target server or another destination to be processed by another GoldenGate process, such as a data pump. Data in the trail files can also be encrypted by the Extract and then unencrypted by the data pump or delivery process. You can size the trail files based on the expected data volume. When the specified size is reached, a new trail file is created. To free up disk space, you can configure GoldenGate to automatically purge trail files based on age or the total number of trail files. By default, data in the trail files is stored in a platform-independent, GoldenGate proprietary format. In addition to the database data, each trail file contains a file header, and each record also contains its own header. Each of the GoldenGate processes keeps track of its position in the trail files using checkpoints, which are stored in separate files.

Data Pump

The data pump is another type of GoldenGate Extract process. The data pump reads the records in the source trail written by the Local Extract, pumps or passes them over the TCP/IP network to the target, and creates a target or remote trail. Although the data pump can be configured for data filtering and transformation (just like the Local Extract), in many cases the data pump reads the records in the source trail and simply passes all of them on as is. In GoldenGate terminology, this is called passthru mode. If data filtering or transformation is required, it’s a good idea to do this with the data pump to reduce the amount of data sent across the network.

Network

GoldenGate sends data from the source trail using the Local or data pump Extract over a TCP/IP network to a remote host and writes it to the remote trail file. The Local or data pump Extract communicates with another operating-system background Extract process called the Collector on the target. The Collector is started dynamically for each Extract process on the source that requires a network connection to the target. The Collector listens for connections on a port configured for GoldenGate. Although it can be configured, often the Collector process is ignored because it’s started dynamically and does its job without requiring changes or intervention on the target. During transmission from the source to the target, you can compress the data to reduce bandwidth.
In addition, you can tune the TCP/IP socket buffer sizes and connection timeout parameters for the best performance across the network. If needed, you can also encrypt the GoldenGate data sent across the network from the source and automatically decrypt it on the target.
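As an illustration of those network options, a passthru data pump can compress what it sends using the COMPRESS option of the RMTHOST parameter. This sketch reuses the host, port, and trail path from the pump configured earlier in this post:

```
EXTRACT pump1
PASSTHRU
RMTHOST db2, MGRPORT 7809, COMPRESS
RMTTRAIL /u01/app/oracle/product/gg/dirdat/tr
TABLE source.*;
```

Compression trades some CPU on both ends for reduced bandwidth, so it pays off mostly on slow or metered links.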

Collector

The Collector process is started automatically by the Manager as needed by the Extract. The Collector process runs in the background on the target system and writes records to the remote trail. The records are sent across the TCP/IP network connection from the Extract process on the source system (either by a data pump or a Local Extract process).

Remote Trail

The remote trail is similar to the source trail, except it is created on the remote server, which could be the target database server or some other middle-tier server. The source trails and the remote trails are stored in a filesystem directory named dirdat by default. They are named with a two-character prefix followed by a six-digit sequence number. The same sizing approach used for the source trail applies to the remote trail: size the trail files based on the expected data volume, and when the specified size is reached, a new trail file is created. You can also configure GoldenGate to automatically purge the remote trail files based on age or the total number of trail files to free up disk space. Just like the source trail, data in the remote trail files is stored in a platform-independent, GoldenGate-proprietary format. Each remote trail file contains a file header, and each record also contains its own header. The GoldenGate processes keep track of their positions in the remote trail files using checkpoints, which are stored in separate GoldenGate files or optionally in a database table.

Delivery (Replicat)

Delivery is the process of applying data changes to the target database. In GoldenGate, delivery is done by a process called the Replicat using native database SQL. The Replicat applies data changes written to the trail file by the Extract process in the same order in which they were committed on the source database. This is done to maintain data integrity. In addition to replicating database DML data, you can also replicate DDL changes and sequences using the Replicat, if it's properly configured. You can configure a special Replicat to apply data to initially load the target tables, but this is typically done using DBMS utilities such as Data Pump for Oracle. Just like the Extract, you can configure Replicat as a single process or multiple parallel processes depending on the requirements. Each Replicat process can act independently on different tables. For example, a single Replicat can apply all the changes for all the tables in a schema, or you can create multiple Replicats and divide the tables among them. In some cases, you may need to create multiple Replicat processes to improve performance. You can stop and start each Replicat process independently. Replicat can replicate data for an entire schema using wildcarding, a single table, or a subset of rows or columns for a table. You can configure the Replicat to map the data from the source to the target database, transform it, and filter it to only replicate data meeting certain criteria. You can write any records that Replicat is unable to process to a discard file for problem resolution. Reports can be automatically generated to show the Replicat configuration; these reports can be updated periodically at user-defined intervals with the latest processing statistics.

Target Database

This is basically the Oracle Database where you want the changes to be replicated.

Manager

The GoldenGate Manager process is used to manage all of the GoldenGate processes and resources. A single Manager process runs on each server where GoldenGate is executing and processes commands from the GoldenGate Software Command Interface (GGSCI). The Manager process is the first GoldenGate process started. The Manager then starts and stops each of the other GoldenGate processes, manages the trail files, and produces log files and reports.
This is pretty much all the theory you need to know for Oracle GoldenGate. Simple enough, right? That is the beauty of GoldenGate, and that's why the other replication methodologies, Oracle CDC and Oracle Streams, have been deprecated or desupported as of 12c.

Friday, January 1, 2016

Calculate kernel parameters for running Oracle 11g Release 2 on RHEL Linux

Values required by Oracle according to the documentation


The following tables list the required MINIMUM values



Calculate the required values

The following table shows calculation formulas for setting the required kernel parameters for running Oracle 11g Release 2 on Linux.


The shell limits shown above should be enough initially. However, if you have many data files (more than 1,000), I would double the value for the number of open files. All other parameters are sized large enough for almost every environment.
To use the new Automatic Memory Management feature, which automatically sizes the SGA and PGA, Oracle uses the pseudo filesystem /dev/shm. If you activate this feature by setting MEMORY_MAX_TARGET or MEMORY_TARGET in your init.ora, you have to size /dev/shm appropriately. The following table outlines how to do it:
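For example, with MEMORY_MAX_TARGET set to 4 GB (an assumed value; use your own setting), /dev/shm can be sized to match. The sketch below prints the persistent fstab entry, and the remount command is shown as a comment because it must run as root:

```shell
# /dev/shm must be at least as large as MEMORY_MAX_TARGET (4g is an assumption).
MEM_MAX=4g

# Persistent sizing: add this line to /etc/fstab.
fstab_line="tmpfs  /dev/shm  tmpfs  size=${MEM_MAX}  0 0"
echo "$fstab_line"

# Apply without a reboot (run as root):
#   mount -o remount,size=${MEM_MAX} /dev/shm
```

If /dev/shm is too small, instance startup fails with ORA-00845 (MEMORY_TARGET not supported on this system).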