David Rogers: Topics from my corner of data engineering<br />
<br />
Creating a Windows Guest on the Oracle Database Appliance (ODA) Virtualized Platform (2014-03-20)<br />
<br />
When it comes to deploying a Windows guest as a virtual machine on Oracle's ODA, the documentation is lacking. After some research and trial and error, I was able to put together a process that will successfully install Windows.<br />
<br />
The ODA is managed with the oakcli command line, so Oracle VM Manager is not an option. In this post, I will walk through the steps needed to prepare and deploy a Windows 7 virtual machine.<br />
<br />
Here is a high level summary of the steps.<br />
<br />
1. Create an unformatted virtual disk image<br />
2. Create a configuration file for the VM template<br />
3. Create a shared repository<br />
4. Import the VM template<br />
5. Clone a VM from the template<br />
6. Boot from iso file and install Windows<br />
7. Configure Windows network<br />
8. Install Paravirtualized (PV) Drivers on the Windows guest<br />
9. Modify VM's network<br />
<br />
<br />
NOTE: For the purposes of this example, all logins for Dom0 and ODA_BASE are on node 0.<br />
<br />
First, we need to log into Domain 0 (Dom0) and create an empty unformatted virtual disk image. The image file name must be System.img. In the example below, the file size is fixed at 50G.<br />
<br />
<pre class="brush: plain">$ mkdir /OVS/staging/vm_temp/win7x64
$ dd if=/dev/zero of=/OVS/staging/vm_temp/win7x64/System.img oflag=direct bs=1M count=51200</pre>
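Since the image must be the size you intend before packaging, it is worth confirming the byte count after the dd completes. Here is a minimal sanity check, scaled down to a 10 MB scratch file in /tmp so it can run anywhere; the path and size are illustrative, and on the ODA you would point it at System.img with the full 50 GB expected size.<br />

```shell
# Sanity-check the size of a dd-created image before packaging it.
# Scaled down to 10 MB so the check runs anywhere; on the ODA, point
# IMG at /OVS/staging/vm_temp/win7x64/System.img and adjust "expected".
IMG=/tmp/System_demo.img
dd if=/dev/zero of="$IMG" bs=1M count=10 2>/dev/null

expected=$((10 * 1024 * 1024))
actual=$(stat -c %s "$IMG" 2>/dev/null || stat -f %z "$IMG")

if [ "$actual" -eq "$expected" ]; then
    echo "image size OK: $actual bytes"
else
    echo "image size MISMATCH: got $actual, expected $expected"
fi
rm -f "$IMG"
```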
<br />
<br />
In the same staging directory, create a vm.cfg file with the following content. The "boot" parameter is set to "dc", which puts the CDROM first in the boot order. The "disk" parameter presents the System.img file created above as the device "hda" in read/write mode. Before the Paravirtualized (PV) Drivers get installed, the "vif" parameter needs to be set with a type of "ioemu". Later, when the VM is started, a VNC session for the console will be started. The VNC port is set in the "vfb" parameter with a vncdisplay of 10. The 10 sets the port to 5910; a value of 1 would set it to 5901.<br />
<br />
<pre class="brush: plain">kernel = 'hvmloader'
builder = 'hvm'
vcpus = '4'
memory = '4096'
boot = 'dc'
disk = ['file:/OVS/staging/vm_temp/win7x64/System.img,hda,w']
name = 'win7x64'
vif = [ 'type=ioemu,bridge=net1' ]
on_poweroff = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'
acpi = '1'
apic = '1'
usbdevice='tablet'
vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncdisplay=10' ]</pre>
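The vncdisplay-to-port mapping is simple addition: the console's TCP port is 5900 plus the display number. As a one-line sketch of the arithmetic:<br />

```shell
# The VNC console port is 5900 plus the vncdisplay value from vm.cfg.
vncdisplay=10
port=$((5900 + vncdisplay))
echo "connect your VNC client to port $port"
```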
<br />
<br />
The next step is to create a tar file containing the two files created above so that it can be imported.<br />
<br />
<pre class="brush: plain">$ cd /OVS/staging/vm_temp/win7x64
$ tar -Sczvf win7x64.tgz System.img vm.cfg</pre>
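Before importing the template, it is worth listing the archive to confirm both files made it in. Here is a small, self-contained sketch using stand-in files; the file names match the real ones, but the contents are dummies so the commands run anywhere.<br />

```shell
# Build a stand-in archive with dummy files and confirm it contains
# exactly what the import expects: System.img and vm.cfg.
workdir=$(mktemp -d)
cd "$workdir"
dd if=/dev/zero of=System.img bs=1024 count=10 2>/dev/null
echo "name = 'demo'" > vm.cfg

tar -Sczf win7x64.tgz System.img vm.cfg
contents=$(tar -tzf win7x64.tgz)
echo "$contents"

cd - >/dev/null
rm -rf "$workdir"
```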
<br />
<br />
There are default repositories created as part of ODA_BASE. The repos "odarepo1" and "odarepo2" are local repositories, so VMs running from them cannot fail over to the other node. To avoid this limitation, create a shared repository.<br />
<br />
NOTE: Once you create the repo, the size cannot be changed, so plan accordingly.<br />
<br />
Log on to ODA_BASE and run the following command to create a 150GB shared repository in the DATA disk group.<br />
<br />
<pre class="brush: plain">$ oakcli create repo odashr -size 150G -dg DATA</pre>
<br />
<br />
Now the environment is set up to import the template. Login to ODA_BASE as root and run the following command.<br />
<br />
<pre class="brush: plain">$ oakcli import vmtemplate tmpl_win7x64 -files "/OVS/staging/vm_temp/win7x64/win7x64.tgz" -repo odashr -node 0</pre>
<br />
<br />
Once the template has been imported, the VM can finally be cloned. Using the vmtemplate, run the following command on ODA_BASE.<br />
<br />
<pre class="brush: plain">$ oakcli clone vm vm_win7x64 -vmtemplate tmpl_win7x64 -repo odashr -node 0</pre>
<br />
<br />
Copy the Windows installation iso file to a staging location on Dom0. Then modify the file /OVS/Repositories/odashr/VirtualMachines/vm_win7x64/vm.cfg on Dom0 by adding the iso as a CDROM. Below, the iso file /OVS/staging/X17-24281.iso has been defined as device "hdc" and made read only.<br />
<br />
<pre class="brush: plain">disk = [u'file:/OVS/Repositories/odashr/VirtualMachines/vm_win7x64/System.img,hda,w', 'file:/OVS/staging/X17-24281.iso,hdc:cdrom,r']</pre>
<br />
<br />
Now start the VM by logging into ODA_BASE and running the following command.<br />
<br />
<pre class="brush: plain">$ oakcli start vm vm_win7x64</pre>
<br />
<br />
Once the VM starts, log onto Dom0 and attach to the console with a VNC client using the VNC port defined above (5910 in our example). From the VNC console, install Windows.<br />
<br />
After the installation is complete, log into Dom0 and modify the /OVS/Repositories/odashr/VirtualMachines/vm_win7x64/vm.cfg file. Change the boot parameter from "dc" to "cd". This puts the virtual hard drive, rather than the CDROM, first in the boot order.<br />
<br />
Log into ODA_BASE and restart the VM.<br />
<br />
<pre class="brush: plain">$ oakcli stop vm vm_win7x64
$ oakcli start vm vm_win7x64</pre>
<br />
<br />
Now attach to your VNC session again and configure the network settings for the VM guest per your requirements. Once the network is configured and functioning, you can download the Windows PV Drivers to the guest and install them by running the executable. As of the date of this post, the PV drivers can be downloaded from edelivery.oracle.com: go to "Cloud Portal Oracle Linux/VM", and enter "Oracle VM" for the product pack and "x86 64 bit" for the platform.<br />
<br />
After the PV drivers have been installed, modify the /OVS/Repositories/odashr/VirtualMachines/vm_win7x64/vm.cfg file by changing the "vif" parameter as follows.<br />
<br />
<pre class="brush: plain">From
vif = [ 'type=ioemu,bridge=net1']
To
vif = [ 'type=netfront,bridge=net1']</pre>
<br />
<br />
Now restart your VM, and the guest should be ready for use. If you want to bypass the VNC session, configure Remote Desktop in the guest and connect to it directly.<br />
<br />
<pre class="brush: plain">$ oakcli stop vm vm_win7x64
$ oakcli start vm vm_win7x64</pre>
<br />
<br />
<br />
<br />
SQL Tuning: Using Oracle's Global Hints With Multiple Query Blocks (2014-02-18)<br />
<br />
There's nothing more frustrating than knowing how to tune a query but being unable to because of one or more third-party views that cannot be modified. A great way to work around this is by using Oracle's global hints in the main query block. This is accomplished by referring to the view name and the table name in this format: /*+ hint(view.table) */.<br />
<br />
According to Oracle's documentation, however, the optimizer ignores global hints in this format that refer to multiple query blocks. So if you want to use the "leading" hint with more than one view, like this: /*+ leading(view1.table1 view2.table2) */, Oracle will ignore your input, and you will be left banging your head against the wall.<br />
<br />
Luckily, there is another format for referencing these query blocks, but it takes a little digging to get it right. In the following example, I will create a table with a view that references it, along with another view on the ever-famous "dual" table. These two views will be joined in a query, and we will see what options there are for manipulating the execution plan without changing the views.<br />
<br />
Here are the table and view statements to set up the example.<br />
<br />
<pre class="brush: plain">CREATE TABLE orders AS
SELECT
LEVEL order_id,
SYSDATE + DBMS_RANDOM.VALUE(-1000, 1000) order_date,
DBMS_RANDOM.STRING('A', 20) comments
FROM dual
CONNECT BY LEVEL <= 100000;
CREATE INDEX order_id_ix ON orders ( order_id );
CREATE INDEX order_dt_id_ix ON orders ( order_date, order_id );
CREATE VIEW vw_orders AS
SELECT * FROM orders;
CREATE VIEW vw_dual AS
SELECT * FROM dual;
</pre>
<br />
Here is a basic query which joins the two views.<br />
<br />
<pre class="brush: plain">SELECT /*+ gather_plan_statistics */
order_id,
order_date,
comments
FROM vw_orders,
vw_dual
WHERE TRUNC(order_date) = TO_DATE('12-JAN-13','DD-MON-YY')
AND order_id = 1;</pre>
<br />
<br />
After running the query, we can take a look at the execution plan chosen by the optimizer with the following script.<br />
<br />
<pre class="brush: plain">set pages 999
set lines 200
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR
(
null,
null,
'allstats'
));</pre>
<br />
<br />
The plan shows that DUAL is the driving table and a nested loop is utilized to access the ORDERS table via an index.<br />
<br />
<pre class="brush: plain">PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dh3p87qwt5njy, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ order_id, order_date,
comments FROM vw_orders, vw_dual WHERE TRUNC(order_date) =
TO_DATE('12-JAN-13','DD-MON-YY') AND order_id = 1
Plan hash value: 1760841149
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 4 |
| 1 | NESTED LOOPS | | 1 | 9 | 1 |00:00:00.01 | 4 |
| 2 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 |
|* 3 | TABLE ACCESS BY INDEX ROWID| ORDERS | 1 | 9 | 1 |00:00:00.01 | 4 |
|* 4 | INDEX RANGE SCAN | ORDER_ID_IX | 1 | 9 | 1 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(TRUNC(INTERNAL_FUNCTION("ORDER_DATE"))=TO_DATE('12-JAN-13','DD-MON-YY'))
4 - access("ORDER_ID"=1)
Note
-----
- dynamic sampling used for this statement (level=2)
34 rows selected.</pre>
<br />
<br />
Now, if we want to force the optimizer to use a full table scan on the ORDERS table, we can throw in a global hint that references the view and the table. As you can see from the execution plan, the optimizer follows the directive and a full table scan is performed.<br />
<br />
<pre class="brush: plain">SELECT /*+ gather_plan_statistics full(vw_orders.orders) */
order_id,
order_date,
comments
FROM vw_orders,
vw_dual
WHERE TRUNC(order_date) = TO_DATE('12-JAN-13','DD-MON-YY')
AND order_id = 1;
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 9xby7nxuz82tc, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics full(vw_orders.orders) */
order_id, order_date, comments FROM vw_orders, vw_dual WHERE
TRUNC(order_date) = TO_DATE('12-JAN-13','DD-MON-YY') AND order_id = 1
Plan hash value: 2691672058
---------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 545 |
| 1 | NESTED LOOPS | | 1 | 9 | 1 |00:00:00.01 | 545 |
| 2 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 |
|* 3 | TABLE ACCESS FULL| ORDERS | 1 | 9 | 1 |00:00:00.01 | 545 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(("ORDER_ID"=1 AND TRUNC(INTERNAL_FUNCTION("ORDER_DATE"))=TO_DATE(
'12-JAN-13','DD-MON-YY')))
Note
-----
- dynamic sampling used for this statement (level=2)
32 rows selected.</pre>
<br />
<br />
<br />
If we try the same approach with the "leading" hint which references more than one query block, Oracle is not so cooperative. After running this example, we see that the "leading" hint is ignored and DUAL is still the driving table.<br />
<br />
<pre class="brush: plain">SELECT /*+ gather_plan_statistics full(vw_orders) leading(vw_orders.orders vw_dual.dual) */
order_id,
order_date,
comments
FROM vw_orders,
vw_dual
WHERE TRUNC(order_date) = TO_DATE('12-JAN-13','DD-MON-YY')
AND order_id = 1;
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID cbp5hxyg5mjkx, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics full(vw_orders)
leading(vw_orders.orders vw_dual.dual) */ order_id, order_date,
comments FROM vw_orders, vw_dual WHERE TRUNC(order_date) =
TO_DATE('12-JAN-13','DD-MON-YY') AND order_id = 1
Plan hash value: 2691672058
---------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 545 |
| 1 | NESTED LOOPS | | 1 | 9 | 1 |00:00:00.01 | 545 |
| 2 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 |
|* 3 | TABLE ACCESS FULL| ORDERS | 1 | 9 | 1 |00:00:00.01 | 545 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - filter(("ORDER_ID"=1 AND TRUNC(INTERNAL_FUNCTION("ORDER_DATE"))=TO_DATE(
'12-JAN-13','DD-MON-YY')))
Note
-----
- dynamic sampling used for this statement (level=2)
33 rows selected.</pre>
<br />
<br />
<br />
In order to give the optimizer the information it needs in this situation, we need to change our "DBMS_XPLAN" script so it will include the "alias" and the "outline" information. In the "DBMS_XPLAN" script below, I have also eliminated the "predicate" and "note" data.<br />
<br />
<br />
<pre class="brush: plain">SELECT /*+ gather_plan_statistics */
order_id,
order_date,
comments
FROM vw_orders,
vw_dual
WHERE TRUNC(order_date) = TO_DATE('12-JAN-13','DD-MON-YY')
AND order_id = 1;
set pages 999
set lines 200
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR
(
null,
null,
'allstats +alias +outline -note -predicate'
));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 12b497umk9ytc, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ order_id, order_date,
comments FROM vw_orders, vw_dual WHERE TRUNC(order_date) =
TO_DATE('12-JAN-13','DD-MON-YY') AND order_id = 1
Plan hash value: 1760841149
------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1 |00:00:00.01 | 4 |
| 1 | NESTED LOOPS | | 1 | 9 | 1 |00:00:00.01 | 4 |
| 2 | FAST DUAL | | 1 | 1 | 1 |00:00:00.01 | 0 |
| 3 | TABLE ACCESS BY INDEX ROWID| ORDERS | 1 | 9 | 1 |00:00:00.01 | 4 |
| 4 | INDEX RANGE SCAN | ORDER_ID_IX | 1 | 9 | 1 |00:00:00.01 | 3 |
------------------------------------------------------------------------------------------------------
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$5428C7F1
2 - SEL$5428C7F1 / DUAL@SEL$3
3 - SEL$5428C7F1 / ORDERS@SEL$2
4 - SEL$5428C7F1 / ORDERS@SEL$2
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.2')
DB_VERSION('11.2.0.2')
ALL_ROWS
OUTLINE_LEAF(@"SEL$5428C7F1")
MERGE(@"SEL$2")
MERGE(@"SEL$3")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$3")
INDEX_RS_ASC(@"SEL$5428C7F1" "ORDERS"@"SEL$2" ("ORDERS"."ORDER_ID"))
LEADING(@"SEL$5428C7F1" "DUAL"@"SEL$3" "ORDERS"@"SEL$2")
USE_NL(@"SEL$5428C7F1" "ORDERS"@"SEL$2")
END_OUTLINE_DATA
*/
53 rows selected.</pre>
<br />
<br />
In the output above, the operations are numbered in the execution plan, and those numbers correspond to the entries in the "Query Block Name / Object Alias" section. The same query blocks and aliases appear in the leading hint of the outline section. That leading hint can be modified to list the ORDERS table first, as shown below, and the ORDERS alias can be used in the full table scan hint as well. The optimizer now accepts the hints and executes the query as directed.<br />
<br />
<pre class="brush: plain">SELECT /*+ gather_plan_statistics full(@"SEL$5428C7F1" "ORDERS"@"SEL$2") leading(@"SEL$5428C7F1" "ORDERS"@"SEL$2" "DUAL"@"SEL$3") */
order_id,
order_date,
comments
FROM vw_orders,
vw_dual
WHERE TRUNC(order_date) = TO_DATE('12-JAN-13','DD-MON-YY')
AND order_id = 1;
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID bpy75y38s6phy, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics full(@"SEL$5428C7F1"
"ORDERS"@"SEL$2") leading(@"SEL$5428C7F1" "ORDERS"@"SEL$2"
"DUAL"@"SEL$3") */ order_id, order_date,
comments FROM vw_orders, vw_dual WHERE TRUNC(order_date) =
TO_DATE('12-JAN-13','DD-MON-YY') AND order_id = 1
Plan hash value: 795833099
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 0 |00:00:00.01 | 544 |
| 1 | MERGE JOIN CARTESIAN| | 1 | 10 | 0 |00:00:00.01 | 544 |
| 2 | TABLE ACCESS FULL | ORDERS | 1 | 10 | 0 |00:00:00.01 | 544 |
| 3 | BUFFER SORT | | 0 | 1 | 0 |00:00:00.01 | 0 |
| 4 | FAST DUAL | | 0 | 1 | 0 |00:00:00.01 | 0 |
-----------------------------------------------------------------------------------------
Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
1 - SEL$5428C7F1
2 - SEL$5428C7F1 / ORDERS@SEL$2
4 - SEL$5428C7F1 / DUAL@SEL$3
Outline Data
-------------
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('12.1.0.1')
DB_VERSION('12.1.0.1')
ALL_ROWS
OUTLINE_LEAF(@"SEL$5428C7F1")
MERGE(@"SEL$2")
MERGE(@"SEL$3")
OUTLINE(@"SEL$1")
OUTLINE(@"SEL$2")
OUTLINE(@"SEL$3")
FULL(@"SEL$5428C7F1" "ORDERS"@"SEL$2")
LEADING(@"SEL$5428C7F1" "ORDERS"@"SEL$2" "DUAL"@"SEL$3")
USE_MERGE_CARTESIAN(@"SEL$5428C7F1" "DUAL"@"SEL$3")
END_OUTLINE_DATA
*/
48 rows selected.</pre>
<br />
<br />
<br />
Using Linux's Logrotate to Manage Alert and Listener Log Files (2014-02-07)<br />
<br />
If you are running Oracle on Linux, there is a handy little utility called logrotate that can be used to manage those unwieldy alert logs and listener logs.<br />
<br />
I recently worked on a client's database, and their alert log had information going back four years. Opening the log with vi took several minutes due to its size. To help them out, I used this simple little tool to keep the files in check.<br />
<br />
As root, I created a new file in /etc/logrotate.d on each RAC node called ora_cleanup. The example below is from node 2. It copies the alert log to a file with a numerical extension and truncates the existing file. It does this monthly and keeps 13 months' worth of log information before deleting old files. The listener log grows much faster, so it is configured to rotate weekly and keep 53 weeks of files. The copytruncate parameter is important for the listener log because the listener process holds an open handle on the existing file.<br />
<br />
<pre class="brush: plain"># alert log
/u01/app/oracle/diag/rdbms/orcl/orcl2/trace/alert_orcl2.log {
monthly
rotate 13
notifempty
missingok
copytruncate
nocreate
}
# listener log
/u01/app/asm/diag/tnslsnr/db102/listener_db102/trace/listener_db102.log {
weekly
rotate 53
notifempty
missingok
copytruncate
nocreate
}</pre>
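The copytruncate behavior can be sketched in plain shell: the live log is copied to its rotated name, then truncated in place, so a process holding an open handle keeps writing to the same file. A minimal simulation using temporary files:<br />

```shell
# Simulate logrotate's copytruncate: copy the live log aside, then
# truncate the original in place so open file handles remain valid.
log=$(mktemp)
echo "old alert log entries" > "$log"

cp "$log" "$log.1"   # rotated copy keeps the old contents
: > "$log"           # truncate; the file (and any open handle) stays put

rotated=$(cat "$log.1")
livesize=$(stat -c %s "$log" 2>/dev/null || stat -f %z "$log")
echo "rotated copy contains: $rotated"
echo "live log is now $livesize bytes"
rm -f "$log" "$log.1"
```

To verify a real configuration without rotating anything, logrotate's debug flag performs a dry run: logrotate -d /etc/logrotate.d/ora_cleanup.<br />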
<br />
After a few months, this is what the managed files look like in their directories.<br />
<br />
<pre class="brush: plain">$ cd /u01/app/oracle/diag/rdbms/orcl/orcl2/trace
$ ls -arltp alert_orcl2*
-rw-rw---- 1 oracle oinstall 50051579 Oct 10 15:17 alert_orcl2.log.5
-rw-rw---- 1 oracle oinstall 993219 Nov 1 04:02 alert_orcl2.log.4
-rw-rw---- 1 oracle oinstall 1124244 Dec 1 04:02 alert_orcl2.log.3
-rw-rw---- 1 oracle oinstall 1088332 Jan 1 04:02 alert_orcl2.log.2
-rw-rw---- 1 oracle oinstall 2163268 Feb 1 04:02 alert_orcl2.log.1
-rw-rw---- 1 oracle oinstall 279484 Feb 7 09:34 alert_orcl2.log
$ cd /u01/app/asm/diag/tnslsnr/db102/listener_db102/trace
$ ls -arltp listener_db102*
-rw-rw---- 1 oracle oinstall 3857048716 Oct 10 15:28 listener_db102.log.18
-rw-rw---- 1 oracle oinstall 196362891 Oct 18 13:13 listener_db102.log.17
-rw-rw---- 1 oracle oinstall 40298571 Oct 20 04:02 listener_db102.log.16
-rw-rw---- 1 oracle oinstall 178408740 Oct 27 04:02 listener_db102.log.15
-rw-rw---- 1 oracle oinstall 188301497 Nov 3 04:02 listener_db102.log.14
-rw-rw---- 1 oracle oinstall 185073120 Nov 10 04:02 listener_db102.log.13
-rw-rw---- 1 oracle oinstall 178345071 Nov 17 04:02 listener_db102.log.12
-rw-rw---- 1 oracle oinstall 178945914 Nov 24 04:02 listener_db102.log.11
-rw-rw---- 1 oracle oinstall 165829858 Dec 1 04:02 listener_db102.log.10
-rw-rw---- 1 oracle oinstall 179395363 Dec 8 04:02 listener_db102.log.9
-rw-rw---- 1 oracle oinstall 175671704 Dec 15 04:02 listener_db102.log.8
-rw-rw---- 1 oracle oinstall 195136727 Dec 22 04:02 listener_db102.log.7
-rw-rw---- 1 oracle oinstall 195512012 Dec 29 04:02 listener_db102.log.6
-rw-rw---- 1 oracle oinstall 201759600 Jan 5 04:02 listener_db102.log.5
-rw-rw---- 1 oracle oinstall 204589312 Jan 12 04:02 listener_db102.log.4
-rw-rw---- 1 oracle oinstall 213276652 Jan 19 04:02 listener_db102.log.3
-rw-rw---- 1 oracle oinstall 212229296 Jan 26 04:02 listener_db102.log.2
-rw-rw---- 1 oracle oinstall 211244326 Feb 2 04:02 listener_db102.log.1
-rw-rw---- 1 oracle oinstall 159275570 Feb 7 09:35 listener_db102.log</pre>
<br />
<br />
<br />
GoldenGate: 12c Conflict Detection and Resolution (CDR) (2014-01-28)<br />
<br />
One of my favorite 12c enhancements for GoldenGate is the new conflict detection and resolution (CDR) feature for two-way replication. In previous versions, the replicat parameter files had to contain SQLEXEC commands that would query the target table before applying any DML. This added an extra call to the database and slowed performance.<br />
<br />
CDR provides new parameters that simplify the detection and resolution of conflicting data. Here are some of the optional keywords that can be combined with the new RESOLVECONFLICT parameter.<br />
<br />
UPDATEROWEXISTS<br />
UPDATEROWMISSING<br />
INSERTROWEXISTS<br />
INSERTROWMISSING<br />
DELETEROWEXISTS<br />
DELETEROWMISSING<br />
<br />
These can be configured to regard one data source as the master and always overwrite the other. They can utilize a timestamp column that determines the "winner" based on the most recent data. For data such as inventory, the "delta" can be used to apply the net change to both sites and adjust the value rather than simply replicating it.<br />
<br />
GoldenGate requires the "before" images to be captured for all relevant columns in the source database. Here is how trandata can be configured for all columns in the SCOTT schema in the PDBORCL pluggable database.<br />
<br />
<pre class="brush: plain">GGSCI (oel1.localdomain) 103> ADD SCHEMATRANDATA pdborcl.scott ALLCOLS
2014-01-20 15:21:04 INFO OGG-01788 SCHEMATRANDATA has been added on schema scott.
2014-01-20 15:21:05 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema scott.
2014-01-20 15:21:05 INFO OGG-01977 SCHEMATRANDATA for all columns has been added on schema scott.
</pre>
<br />
In addition, the extract parameter files need the GETBEFORECOLS option as a part of the TABLE parameter. In the example below, the extract will capture all of the columns in the SCOTT.ITEMS table for each update and delete. The before image of each of these records will be loaded into the trail file.<br />
<br />
<pre class="brush: plain">EXTRACT e1aa
USERIDALIAS ggsadm domain d1
LOGALLSUPCOLS
EXTTRAIL ./dirdat/aa
TABLE pdborcl.scott.emp;
TABLE pdborcl.scott.dept;
SOURCECATALOG pdborcl
TABLE scott.bonus;
TABLE scott.salgrade;
TABLE scott.items,
GETBEFORECOLS
(
ON UPDATE ALL,
ON DELETE ALL
);
</pre>
<br />
<br />
The final step is to configure the replicat parameter files with the RESOLVECONFLICT parameter. In the example below, CDR is configured for the SCOTT.ITEMS table. If the record already exists on the target for an update, the DML_TIMESTAMP column is compared, and the record with the most recent time "wins" for all of the columns except QTY_ON_HAND. That column is instead updated with the difference between the old and new values on the source, using the UPDATEROWEXISTS and USEDELTA keywords.<br />
<br />
For example, if QTY_ON_HAND is updated on both databases in a bi-directional replication setup, the change needs to be reflected appropriately on each system. Say the value on both systems starts at 22. The value on DB1 is decremented to 18, while the value on DB2 is incremented to 30. When the two updates pass each other and get applied on their respective targets, the records cannot simply be replicated as-is; arithmetic must be applied on both sides to account for the change in inventory on each. The "delta" for DB1 is -4 and the "delta" for DB2 is +8, so the value on DB1 (18) is incremented by 8 for a new total of 26, and the value on DB2 (30) is decremented by 4 for a new total of 26. Rather than having a "winner", both sources of data end up equal based on the "delta".<br />
<br />
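The arithmetic above can be sketched in a few lines of shell. This only illustrates the USEDELTA math with the example's values; it is not what GoldenGate executes internally.<br />

```shell
# USEDELTA arithmetic from the example: both sides start at 22,
# DB1 drops to 18 (delta -4) while DB2 rises to 30 (delta +8).
start=22
db1=18
db2=30

delta1=$((db1 - start))     # -4
delta2=$((db2 - start))     # +8

db1_new=$((db1 + delta2))   # DB2's delta applied to DB1
db2_new=$((db2 + delta1))   # DB1's delta applied to DB2

echo "DB1=$db1_new DB2=$db2_new"
```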
The replicat parameter file below is set up for such a situation.<br />
<br />
<pre class="brush: plain">REPLICAT r2aa
USERIDALIAS ggsadm domain d2
ASSUMETARGETDEFS
MAP pdborcl.scott.emp, TARGET scott.emp;
MAP pdborcl.scott.dept, TARGET scott.dept;
SOURCECATALOG pdborcl
MAP scott.bonus, TARGET scott.bonus;
MAP scott.salgrade, TARGET scott.salgrade;
MAP scott.items, TARGET scott.items,
COMPARECOLS
(
ON UPDATE KEYINCLUDING (item_name, qty_on_hand, sales_price, dml_timestamp),
ON DELETE KEYINCLUDING (item_name, qty_on_hand, sales_price, dml_timestamp)
),
RESOLVECONFLICT (UPDATEROWEXISTS,
(delta_combine, USEDELTA, COLS (qty_on_hand)),
(DEFAULT, USEMAX (dml_timestamp))),
RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMAX (dml_timestamp))),
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)),
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD))
;
</pre>
<br />
GoldenGate: 12c Credential Store Secure Login (2014-01-08)<br />
<br />
<div class="MsoNormal">
The Credential Store is a new 12c security feature in GoldenGate, implemented as an autologin wallet in Oracle's Credential Store Framework. User IDs and passwords are encrypted in the store and, as a result, an encryption key in the connection string is no longer needed.<br />
<br />
The default location of the store is the ./dircrd directory of the GoldenGate software home. If you want to change the location, edit the ./GLOBALS file with the following CREDENTIALSTORELOCATION parameter.<br />
<br />
<pre class="brush: plain">GGSCI> edit params ./GLOBALS
CREDENTIALSTORELOCATION /home/oracle/ggs/dircrd</pre>
<br />
<br />
You must exit and restart ggsci before proceeding, or the file will be created in the default location.<br />
<br />
<pre class="brush: plain">GGSCI> exit
$ ./ggsci
GGSCI> add credentialstore
Credential store created in /home/oracle/ggs/dircrd/.
GGSCI> exit
$ ls /home/oracle/ggs/dircrd
cwallet.sso</pre>
<br />
<br />
Once the credential store has been created, users and passwords can be added to it. One of the key features of the store is the use of domains, which can be used to logically group login aliases. The same alias can be defined in different domains with different credentials, which is handy when developing and testing against different database environments from the same GoldenGate installation. The default domain is "Oracle GoldenGate".<br />
<br />
In this example, the user c##ggsadmin is added to the store in the "test" domain. If the "password" keyword is omitted, GoldenGate will prompt for the password and hide it from the output.<br />
<br />
<pre class="brush: plain">GGSCI> alter credentialstore add user c##ggsadmin, alias ggsadm, domain test
Password:
Credential store in /home/oracle/ggs/dircrd/ altered.</pre>
</div>
<br />
<br />
If you want to see the information maintained in the store, you can use the INFO CREDENTIALSTORE command. If you don't specify the domain, it will default to "Oracle GoldenGate". As you can see below, the default domain is still empty.
<br />
<br />
<pre class="brush: plain">GGSCI> info credentialstore
Reading from /home/oracle/ggs/dircrd/:
No information found in credential store.
GGSCI> info credentialstore domain test
Reading from /home/oracle/ggs/dircrd/:
Domain: test
Alias: ggsadm
Userid: c##ggsadmin</pre>
<br />
<br />
In older versions of GoldenGate, you had to supply the username and password, in plain text or encrypted, for your login credentials.<br />
<br />
<br />
Using DBLOGIN at the command line.<br />
<br />
<pre class="brush: plain">GGSCI> DBLOGIN USERID c##ggsadmin@orcl, PASSWORD AACAAAAAAAAAAAJAUEUGODSCVGJEEIUGKJDJTFNDKEJFFFTC AES128, ENCRYPTKEY securekey1
Successfully logged into database CDB$ROOT.
</pre>
<br />
Using credentials in the parameter files.<br />
<br />
<pre class="brush: plain">GGSCI> edit params e1aa
EXTRACT e1aa
USERID c##ggsadmin@orcl, PASSWORD AACAAAAAAAAAAAJAUEUGODSCVGJEEIUGKJDJTFNDKEJFFFTC AES128, ENCRYPTKEY securekey1
LOGALLSUPCOLS
EXTTRAIL ./dirdat/aa
TABLE pdborcl.scott.emp;
TABLE pdborcl.scott.dept;
SOURCECATALOG pdborcl
TABLE scott.bonus;
TABLE scott.salgrade;</pre>
<br />
<br />
In 12c, the credential store secures the information and makes the connection much easier through the use of the alias that was created.<br />
<br />
<br />
Using DBLOGIN at the command line.<br />
<br />
<pre class="brush: plain">GGSCI> dblogin useridalias ggsadm domain test
Successfully logged into database CDB$ROOT.</pre>
<br />
<br />
Using credentials in the parameter files.<br />
<br />
<pre class="brush: plain">GGSCI> edit params e1aa
EXTRACT e1aa
USERIDALIAS ggsadm domain test
LOGALLSUPCOLS
EXTTRAIL ./dirdat/aa
TABLE pdborcl.scott.emp;
TABLE pdborcl.scott.dept;
SOURCECATALOG pdborcl
TABLE scott.bonus;
TABLE scott.salgrade;</pre>
<br />David Rogershttp://www.blogger.com/profile/06772879159067318673noreply@blogger.com0tag:blogger.com,1999:blog-9056741381433689803.post-1461336760720718632013-12-24T13:57:00.001-06:002014-02-21T14:21:27.811-06:00GoldenGate: Integrated Capture and Apply on 12c Multitenant DatabasesWith the new features of GoldenGate 12c and the architectural changes brought on by the multitenant databases, there also come some new requirements for configuring replication.<br />
<div>
<br /></div>
<div>
In order to create an extract process for a multitenant database, it must be created at the root container level with a "common" database user and must be defined to run in the "integrated" capture mode. Replicats, on the other hand, must be created at the pluggable database level and can be defined to run in either the "classic" or "integrated" modes.</div>
<div>
<br /></div>
<div>
Below, I will step through the configuration of the databases, extract, and replicat for a multitenant environment. In this example, there are two databases sharing a host, so there is no pump and there is only one GoldenGate environment.
<div>
<br /></div>
<h4>
GoldenGate</h4>
<div>
Version - 12.1.2.0</div>
<div>
<br /></div>
<h4>
Source Database</h4>
<div>
Version - 12.1.0.1</div>
<div>
Root SID - orcl</div>
<div>
Pluggable Databases - pdborcl, pdb2orcl</div>
<div>
<br /></div>
<div>
<h4>
Target Database</h4>
<div>
Version - 12.1.0.1</div>
<div>
Root SID - orcl2</div>
<div>
Pluggable Databases - pdborcl2, pdb2orcl2</div>
</div>
<div>
<br /></div>
<div>
Preparing the source database includes the creation of a "common" user, adding supplemental logging at the database level, enabling flashback query, and properly setting the streams_pool_size init parameter.<br />
<br />
<br />
<br />
On the source, enable supplemental logging in the root container.</div>
<div>
<br /></div>
<pre class="brush: plain">$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base remains unchanged with value /u01/app/oracle
$ sqlplus / as sysdba
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> alter database add supplemental log data;
Database altered.
SQL> alter database force logging;
Database altered.
SQL> select supplemental_log_data_min, force_logging from v$database;
SUPPLEME FORCE_LOGGING
-------- ---------------------------------------
YES YES
SQL> alter system switch logfile;
System altered.</pre>
<br />
<br />
<br />
On the source, enable flashback query by setting UNDO_MANAGEMENT to AUTO and UNDO_RETENTION to a value that makes sense for your environment.
<br />
<br />
<pre class="brush: plain">SQL> show parameter undo
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
</pre>
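<br />
Both parameters can be changed with ALTER SYSTEM if needed. Note that UNDO_MANAGEMENT is not dynamic, so it only takes effect after a restart, and the retention value below is only an illustration; size it to cover your longest-running transactions.<br />
<br />
<pre class="brush: plain">SQL> alter system set undo_management=AUTO scope=spfile;
SQL> alter system set undo_retention=3600 scope=both;</pre>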
<br />
<br />
On the source, create a common GoldenGate admin user in the root and pluggable databases.
<br />
<br />
<pre class="brush: plain">SQL> create user c##ggsadmin identified by ggsadmin
default tablespace ggsdata
temporary tablespace temp
container=all;
User created.
SQL> grant dba to c##ggsadmin container=all;
Grant succeeded.
SQL> grant flashback any table to c##ggsadmin container=all;
Grant succeeded.
</pre>
<br />
<br />
<br />
By default, each integrated extract uses 1G of the streams pool, so make sure streams_pool_size accounts for the total number of extracts in your GoldenGate environment.
<br />
<br />
<pre class="brush: plain">SQL> show parameter streams
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size big integer 1280M
</pre>
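<br />
As a rough sizing sketch (the numbers here are illustrative, not from this environment): two extracts at 1G each plus about 25% headroom works out to 2560M, which can be set dynamically.<br />
<br />
<pre class="brush: plain">SQL> alter system set streams_pool_size=2560M scope=both;</pre>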
<br />
<br />
<br />
On the source, create the extract parameter file ./dirprm/e1aa.prm. Notice that the TABLE parameter must include the container name along with the schema. Alternatively, the SOURCECATALOG parameter may be used; examples of both are in this file. If you are configuring an integrated replicat, the extract must include the LOGALLSUPCOLS parameter, which captures the before images of the scheduling columns (primary key, unique index, and foreign key columns).<br />
<br />
<pre class="brush: plain">EXTRACT e1aa
USERID c##ggsadmin@orcl, PASSWORD ggsadmin
LOGALLSUPCOLS
EXTTRAIL ./dirdat/aa
TABLE pdborcl.scott.emp;
TABLE pdborcl.scott.dept;
SOURCECATALOG pdborcl
TABLE scott.bonus;
TABLE scott.salgrade;
</pre>
<br />
<br />
<br />
On the source, start GGSCI and add supplemental logging for the objects to be replicated, create and register the extract, and create the trail file. The name of the container must precede the schema name.<br />
<br />
<pre class="brush: plain">$ ./ggsci
GGSCI (oel1.localdomain) 2> dblogin userid c##ggsadmin, password ggsadmin
Successfully logged into database CDB$ROOT.
GGSCI (oel1.localdomain) 3> add schematrandata pdborcl.scott
2013-12-24 08:52:01 INFO OGG-01788 SCHEMATRANDATA has been added on schema scott.
2013-12-24 08:52:02 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema scott.
GGSCI (oel1.localdomain) 2> add extract e1aa, integrated tranlog, begin now
EXTRACT added.
GGSCI (oel1.localdomain) 3> add exttrail ./dirdat/aa, extract e1aa, megabytes 100
EXTTRAIL added.
GGSCI (oel1.localdomain) 6> register extract e1aa database container (pdborcl, pdb2orcl)
Extract E1AA successfully registered with database at SCN 2139002.
GGSCI (oel1.localdomain) 9> start e1aa
Sending START request to MANAGER ...
EXTRACT E1AA starting
GGSCI (oel1.localdomain) 31> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING E1AA 00:00:06 00:00:09
GGSCI (oel1.localdomain) 32> exit
</pre>
<br />
<br />
<br />
On the target, create a local GoldenGate admin user in the pluggable database.
<br />
<br />
<pre class="brush: plain">[oracle@oel1 ggs]$ . oraenv
ORACLE_SID = [orcl] ? orcl2
The Oracle base remains unchanged with value /u01/app/oracle
[oracle@oel1 ggs]$ sqlplus / as sysdba
SQL> alter session set container=pdborcl2;
Session altered.
SQL> show con_name
CON_NAME
------------------------------
PDBORCL2
SQL> create user ggsadmin identified by ggsadmin
default tablespace ggsdata
temporary tablespace temp
container=current;
User created.
SQL> grant dba to ggsadmin container=current;
Grant succeeded.
SQL> exit
</pre>
<br />
<br />
<br />
On the target, create the replicat parameter file ./dirprm/r2aa.prm. There is no need to create a checkpoint table for a replicat in "integrated" mode. Again, the MAP parameter must include the source container name. This may also be accomplished with the SOURCECATALOG parameter. Examples of both are in this file.<br />
<br />
<pre class="brush: plain">REPLICAT r2aa
USERID ggsadmin@pdborcl2, PASSWORD ggsadmin
ASSUMETARGETDEFS
MAP pdborcl.scott.emp, TARGET scott.emp;
MAP pdborcl.scott.dept, TARGET scott.dept;
SOURCECATALOG pdborcl
MAP scott.bonus, TARGET scott.bonus;
MAP scott.salgrade, TARGET scott.salgrade;
</pre>
<br />
<br />
<br />
On the target, create the replicat.
<br />
<br />
<pre class="brush: plain">$ ./ggsci
GGSCI (oel1.localdomain) 2> dblogin userid ggsadmin@pdborcl2, password ggsadmin
Successfully logged into database PDBORCL2.
GGSCI (oel1.localdomain) 3> add replicat r2aa, integrated, exttrail ./dirdat/aa
REPLICAT (Integrated) added.
GGSCI (oel1.localdomain) 8> start r2aa
Sending START request to MANAGER ...
REPLICAT R2AA starting
GGSCI (oel1.localdomain) 12> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING E1AA 00:00:05 00:00:05
REPLICAT RUNNING R2AA 00:00:00 00:00:09</pre>
<br />
<br />
<br />
<br />David Rogershttp://www.blogger.com/profile/06772879159067318673noreply@blogger.com1tag:blogger.com,1999:blog-9056741381433689803.post-14052735132534926992013-12-19T14:34:00.001-06:002014-01-08T15:51:47.265-06:00GoldenGate: Installation of 12c Using Oracle Universal Installer (OUI)As you know, Oracle bought GoldenGate four years ago. In its new 12c release, Oracle has now integrated the product further into its standards by offering an installation option using the OUI.<br />
<br />
For those of you who have installed earlier versions, you're probably thinking that there wasn't much to the old install: unzip a file and create the subdirs. I agree, but it makes sense why Oracle is moving in this direction. As you'll see, the installation process updates the Oracle inventory, and there is a new OPatch directory that allows for a standard method of patching.<br />
<br />
The documentation specifies that this version of the OUI does not support upgrades, so you'll have to revert to the old method if that is your current situation.<br />
<br />
Here are the steps to installing GoldenGate with the new OUI.<br />
<br />
Once you have downloaded the software, unzip it in a temporary location. Don't unzip it in the GoldenGate home as you would with the previous versions.<br />
<br />
For my environment, the file name is 121200_fbo_ggs_Linux_x64_shiphome.zip, and it creates the subdirectory fbo_ggs_Linux_x64_shiphome.<br />
<br />
<pre class="brush: plain">$ cd fbo_ggs_Linux_x64_shiphome/Disk1
$ ./runInstaller
</pre>
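<br />
If you need an unattended install (for example, on a host with no X display), the shiphome also includes a response file template under Disk1/response (named oggcore.rsp in my download). This is a sketch; review the template's properties for your release before using it.<br />
<br />
<pre class="brush: plain">$ ./runInstaller -silent -responseFile /tmp/fbo_ggs_Linux_x64_shiphome/Disk1/response/oggcore.rsp</pre>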
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRW8yr2ez1GUT52jhXBTD3Cp11W2mdAUxCmZSVZliAE4DnydJ203v7734-AY0giuKGbPDNsjaDTuJAfIP0ISCC-LkuWDWBLvFhsgD-yByZAZckGbsYHDbkSA2ZDhHuwVRxeD-qgzT2M8A/s1600/GGInstall01.png" imageanchor="1"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRW8yr2ez1GUT52jhXBTD3Cp11W2mdAUxCmZSVZliAE4DnydJ203v7734-AY0giuKGbPDNsjaDTuJAfIP0ISCC-LkuWDWBLvFhsgD-yByZAZckGbsYHDbkSA2ZDhHuwVRxeD-qgzT2M8A/s400/GGInstall01.png" width="400" /></a><br />
<br />
Select the version of Oracle that the capture and/or apply process will be running against.<br />
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYaintOAO59NsXemANfYF-3hEoLNJU_GJ_LdhB1RjFWJp34UITYiausgghBrwUEEqL8Z70p6IIRXsa0ptQssf_HATxQuChl7t8VbuGcgpkVMFvMrA90pILZttJi1AN19cioSWzu323k3U/s1600/GGInstall02.png" imageanchor="1"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhYaintOAO59NsXemANfYF-3hEoLNJU_GJ_LdhB1RjFWJp34UITYiausgghBrwUEEqL8Z70p6IIRXsa0ptQssf_HATxQuChl7t8VbuGcgpkVMFvMrA90pILZttJi1AN19cioSWzu323k3U/s400/GGInstall02.png" width="400" /></a><br />
<br />
Enter the GoldenGate software home and the location of the database home that GoldenGate will be operating against. You can also check whether or not you want the manager process to be started and customize the desired port.<br />
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxw02EgxLenDx7nRKtKZpandBXaXz2ckpJgoe1khmy_X_2jK9wpZH8VaHTep_1OmqY8bN58x_DKAJfC7wfaZ-J07R5LjSk3IWigakNO69NSNejM0k9GUwR11dH0ucUGJliSGOwlw0GlGY/s1600/GGInstall03.png" imageanchor="1"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxw02EgxLenDx7nRKtKZpandBXaXz2ckpJgoe1khmy_X_2jK9wpZH8VaHTep_1OmqY8bN58x_DKAJfC7wfaZ-J07R5LjSk3IWigakNO69NSNejM0k9GUwR11dH0ucUGJliSGOwlw0GlGY/s400/GGInstall03.png" width="400" /></a><br />
<br />
Review the options and click "Install".<br />
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoAtlgNPIbyUWYowJAgGEkhtoAaiS8BMmsMFAu3y8d2pKbFYyz9lIAwt9ID1gISof2QJ1LjnhAOPQLIa1UrdZjkdFK8LSjoMrHNDpNR35swCoqTubMq2DeYJWwuUGg9pZe9cDiVi97uxg/s1600/GGInstall04.png" imageanchor="1"><img border="0" height="301" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgoAtlgNPIbyUWYowJAgGEkhtoAaiS8BMmsMFAu3y8d2pKbFYyz9lIAwt9ID1gISof2QJ1LjnhAOPQLIa1UrdZjkdFK8LSjoMrHNDpNR35swCoqTubMq2DeYJWwuUGg9pZe9cDiVi97uxg/s400/GGInstall04.png" width="400" /></a><br />
<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8m1Y5BEDq5QpMjKEE0cWVdZaLQX5VLxSQz4OPSsjmOKsiM78H2zgyohmjAJQSkVm0tNU8IWCoNAJ4GIXzJLAJf33ae46V_1XAe67VqIkgPce3EfhNcOc00EK1zCtCLZt5aDJckIzSQ1w/s1600/GGInstall05.png" imageanchor="1"><img border="0" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8m1Y5BEDq5QpMjKEE0cWVdZaLQX5VLxSQz4OPSsjmOKsiM78H2zgyohmjAJQSkVm0tNU8IWCoNAJ4GIXzJLAJf33ae46V_1XAe67VqIkgPce3EfhNcOc00EK1zCtCLZt5aDJckIzSQ1w/s400/GGInstall05.png" width="400" /></a><br />
<br />
<br />
And that's it. The installation is complete. The subdirs have been created and the manager process has been started.<br />
<br />
If you navigate to your new GoldenGate home that you specified above and list the contents, you'll see the OPatch directory along with all of the subdirs.<br />
<br />
Additionally, you can navigate to the OPatch directory and list the Oracle inventory as follows.<br />
<br />
<br />
<pre class="brush: plain">[oracle@oel1 ggs]$ export ORACLE_HOME=/u01/app/oracle/product/ggs
[oracle@oel1 ggs]$ cd $ORACLE_HOME/OPatch
[oracle@oel1 OPatch]$ ./opatch lsinventory
Invoking OPatch 11.2.0.1.7
Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/oracle/product/ggs
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.7
OUI version : 11.2.0.3.0
Log file location : /u01/app/oracle/product/ggs/cfgtoollogs/opatch/opatch2013-12-19_14-21-54PM.log
Lsinventory Output file location : /u01/app/oracle/product/ggs/cfgtoollogs/opatch/lsinv/lsinventory2013-12-19_14-21-54PM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle GoldenGate Core 12.1.2.0.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
--------------------------------------------------------------------------------
OPatch succeeded.</pre>