Friday 29 February 2008

Database Security

Database Security

Password Management
Components of password mgmt:
Account Locking
Password Aging and Expiration
Password History
Password Complexity Verification
Specify the max number of failed login attempts and time to keep acct locked by using the CREATE PROFILE command.
Password Settings that can be used in a profile:
FAILED_LOGIN_ATTEMPTS - number of unsuccessful login attempts before locking out the user.
PASSWORD_LOCK_TIME - Number of days the account stays locked after the specified number of failed login attempts.
PASSWORD_LIFE_TIME - Number of days a password stays effective before expiring.
PASSWORD_GRACE_TIME - Number of days "grace period" for creating a new password after first login after password has expired.
PASSWORD_REUSE_TIME - Number of days before a password can be reused.
PASSWORD_REUSE_MAX - Max number of times that a password can be reused.
PASSWORD_VERIFY_FUNCTION - PL/SQL function used to verify the complexity of a password before allowing it to be used as a valid password. Function must be created in the SYS schema and follow specific guidelines. Oracle provides a password complexity function that is created by the utlpwdmg.sql script that must be run as the SYS user.
Oracle default password verify function characteristics:
Min length is 4 characters
Password not equal to username
Must have at least 1 alphabetic character, 1 numeric character, and 1 punctuation character.
Does not match simple words like welcome and user.
Must differ from previous password by at least 3 characters.
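A minimal sketch of a profile using these settings (the profile name, limit values, and user are illustrative; verify_function is the name created by utlpwdmg.sql):
CREATE PROFILE secure_profile LIMIT
  FAILED_LOGIN_ATTEMPTS 3        -- lock the account after 3 failed logins
  PASSWORD_LOCK_TIME 1           -- keep the account locked for 1 day
  PASSWORD_LIFE_TIME 60          -- password expires after 60 days
  PASSWORD_GRACE_TIME 10         -- 10-day grace period to set a new password
  PASSWORD_REUSE_TIME 365        -- a password can not be reused for 365 days
  PASSWORD_REUSE_MAX UNLIMITED
  PASSWORD_VERIFY_FUNCTION verify_function;
ALTER USER scott PROFILE secure_profile;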
Invoker's Rights Security Mgmt
CREATE PROCEDURE procedure_name (...)
AUTHID invoker_rights_clause IS
.... ;
Valid values for the invoker_rights_clause are:
DEFINER - procedure executes with the privileges of the schema owner where the procedure was created. External names will try to resolve in the procedure owner's schema. DEFINER is the default for AUTHID.
CURRENT_USER - procedure executes with the privileges of the current user, creating an "invoker-rights" procedure. All external names in queries, DML, and dynamic SQL must resolve in the schema of the current user. All other external name references (for example, calls to other procedures and packages) resolve in the schema of the procedure owner.
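A minimal sketch of an invoker-rights procedure (the procedure and table names are illustrative):
CREATE OR REPLACE PROCEDURE list_employees
AUTHID CURRENT_USER IS
BEGIN
  -- emp resolves in the schema of the user who runs the procedure,
  -- not the schema of the user who created it
  FOR rec IN (SELECT ename FROM emp) LOOP
    DBMS_OUTPUT.PUT_LINE(rec.ename);
  END LOOP;
END;
/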
Fine-Grained Access Control
Provides a method to associate security policies with tables and views. Define once on the database server and then it is applicable to all applications that access the table or view. Database server automatically enforces the policies.
Implement the security policy with package functions and then associate with the table or view.
A table or view may have multiple policies defined on it. The policy enforcement is cumulative. AND logic is used to evaluate the multiple access control conditions that have been defined.
Application Context Areas
Use Application Context to help facilitate the implementation of fine-grained access control. You can associate security policies with applications, similar to associating them with tables or views.
Each application may have its own application specific context that the user of the app can not arbitrarily change.
Steps to define an application context:
1. Create the validation functions as part of a PL/SQL package.
2. Issue the CREATE CONTEXT command to associate a unique context name with the package of functions created in step 1.
3. Either reference the context in a policy function OR use an event trigger to set the context for a user upon initial login to an application.
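A minimal sketch of the three steps (package, context, and attribute names are illustrative):
-- Step 1: package whose procedure sets the context attribute
CREATE OR REPLACE PACKAGE app_ctx_pkg IS
  PROCEDURE set_dept;
END;
/
CREATE OR REPLACE PACKAGE BODY app_ctx_pkg IS
  PROCEDURE set_dept IS
  BEGIN
    DBMS_SESSION.SET_CONTEXT('hr_ctx', 'deptno', '10');
  END;
END;
/
-- Step 2: associate a unique context name with the package
CREATE CONTEXT hr_ctx USING app_ctx_pkg;
-- Step 3: a logon trigger (or the policy function) calls app_ctx_pkg.set_dept;
-- the policy function can then read the value with SYS_CONTEXT('hr_ctx', 'deptno')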
N-Tier Authentication
In N-Tier authentication an application server that sits between the client machine and the database server assumes the responsibility for controlling security.
The application server can validate the credentials of a client and the database server can audit the actions of the application server performed on behalf of the client.

Thursday 28 February 2008

SQLPlus, PL/SQL, and National language Support

SQLPlus, PL/SQL, and National language Support

SQL*Plus for Database Management
SERVER MANAGER is being phased out. Future releases will not support SERVER MANAGER. Start using SQL*Plus for all database administration tasks. CONNECT INTERNAL is also going away; can use CONNECT / AS SYSDBA as a possible alternative.
Can use SQLPLUS /nolog to enter SQL*Plus for DBA mgmt tasks and not be connected to a database.
SQL*Plus will not allow the # comment symbol or blank lines, which were both allowed in SERVER MANAGER. Use REM or -- (double dash) as comment char in SQL*Plus.
PL/SQL - Event Triggers
Triggers in 8i can now be written for DML, DDL, user, or database events.
DDL events include:
CREATE - trigger fires when CREATE adds a new object to data dictionary
ALTER - trigger fires when ALTER modifies object in data dictionary
DROP - trigger fires when DROP removes object from data dictionary.
Database events include:
SERVERERROR - trigger fires when server error msg is logged
LOGON - trigger fires when a client app logs into the database
LOGOFF - trigger fires when a client app logs off the database
STARTUP - trigger fires when the database is opened
SHUTDOWN - trigger fires when an instance is shut down
Event triggers are classified into 2 levels, database and schema. Database level event triggers will fire for all users while schema level triggers will fire for the specified schema or user only.
The DBMS_STANDARD package can be used by database event triggers to obtain various bits of information. An example would be to call the DATABASE_NAME function before a database shutdown to obtain the name of the database being shut down.
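A minimal sketch of a database-level LOGON event trigger (the logon_audit table is hypothetical and must be created beforehand):
CREATE OR REPLACE TRIGGER trg_logon_audit
AFTER LOGON ON DATABASE
BEGIN
  -- record who connected and when
  INSERT INTO logon_audit (username, logon_time)
  VALUES (USER, SYSDATE);
END;
/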
PL/SQL - Additional Enhancements
Invoker Rights - A new invoker rights clause has been added to allow PL/SQL programs to execute under various authorities.
New API’s that allow for monitoring and analysis of PL/SQL applications:
DBMS_TRACE, DBMS_DEBUG and DBMS_PROFILER.
NLS Enhancements
NVARCHAR2 is a variable-length national language support character datatype. Max size is 4000 bytes.
NCHAR is a fixed-width national character set datatype; NCLOB is a national character set large object datatype.
The new NLS_UNION_CURRENCY parameter supports the EURO currency when set to a value of “Y”.
Use the NLS_COMP parameter to indicate that comparison operations should use linguistic ordering. DEFAULT value is binary. Makes it so you don’t have to use the NLSSORT function within the where clause.
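A short sketch of a session-level change (the linguistic sort name and table are illustrative; NLS_SORT must be set to a linguistic sort for NLS_COMP=ANSI to have an effect):
ALTER SESSION SET NLS_SORT = FRENCH;
ALTER SESSION SET NLS_COMP = ANSI;
-- comparisons in the WHERE clause now use linguistic ordering
-- without wrapping the columns in NLSSORT()
SELECT ename FROM emp WHERE ename > 'Muller';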

Wednesday 27 February 2008

Features of Net8

Features of Net8

Net8 is easier to configure and admin by using the new Net8 Easy Config and Net8 Assistant tools.
Clients access a database via a service name instead of a SID. Allows access to multiple services made available by a single database or a single service that spans multiple instances.
Database instance registration is a new feature of 8i that allows database instances to register themselves with the listener.
Two init.ora parameters to set that allow an instance to register itself with the listener on the local machine when the database is started: INSTANCE_NAME and SERVICE_NAMES.
Setting these values for database instance registration also provides for automatic connect-time failover to another listener. Connection failover occurs when the listener selected by a client is unavailable and the connection request is routed to the next available listener.
The connect descriptor SID parameter in the tnsnames.ora file has been replaced with SERVICE_NAME and INSTANCE_NAME, in order to allow a database to have more than one service associated with it.
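A sketch of a tnsnames.ora entry using SERVICE_NAME (host, port, and service values are illustrative):
PROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = prod.example.com)
    )
  )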
Connection load balancing is facilitated via the listener because each instance is registered with all identified listeners. Listeners make routing decisions based upon volume of dispatcher connections and node load. MTS must be enabled for connection load balancing.
New virtual private database (VPD) feature in Net8 to help secure data. Provides for server-enforced granular access control. Attaches a security policy to a table or view.
Two components to VPD, fine-grained access control and application context.
With fine grained access, you create a security policy function and then use the DBMS_RLS package to associate the function with the desired table or view.
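A minimal sketch of associating a policy function with a table via DBMS_RLS (schema, table, policy, and function names are illustrative):
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMP',
    policy_name     => 'emp_fgac',
    function_schema => 'SECADM',
    policy_function => 'sec_pkg.emp_predicate',   -- returns a WHERE-clause predicate
    statement_types => 'SELECT,UPDATE,DELETE');
END;
/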

Tuesday 26 February 2008

Recovery Manager

Recovery Manager

Miscellaneous Availability and Recoverability Features
Duplexed & Multiple Archive Logs
Multiplexing allows you to place archive redo-logs in more than one location.
Can specify up to 5 locations for local destination and 1 location for remote destination. Use the following init parm to specify locations:
LOG_ARCHIVE_DEST_n = ‘LOCATION= /dir_name/’
where dir-name is a valid directory location. Note the use of the LOCATION key word.
LOG_ARCHIVE_DEST_n = ‘SERVICE= tnsnames-service’
where tnsnames-service is a valid service (alias) entry in the tnsnames.ora file that represents a standby database.
Both forms of the commands above can include the MANDATORY or OPTIONAL parameter values. If marked as MANDATORY, archiving must be successful before the redo-log can be over-written. If marked as OPTIONAL, successful archiving to the specified location is not required for the redo-log to be reused.
Use the optional LOG_ARCHIVE_MIN_SUCCEED_DEST=n parameter to help control the minimum number of successful archive locations before LGWR can reuse a redo-log file. The default value of n is 1. Valid values are 1 to 5.
Use the LOG_ARCHIVE_DEST_STATE_n parameter to specify the availability state of each destination. Valid values are ENABLE and DEFER. If enabled, the location can be used for automatic and manual archiving operations. If disabled (DEFER), the location is excluded from archiving operations.
Can change the value of LOG_ARCHIVE_DEST_STATE_n using:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_n= value
or
ALTER SESSION SET LOG_ARCHIVE_DEST_STATE_n = value
The V$ARCHIVE_DEST view shows the values in use for the current session.
Note the following:
If you do not specify MANDATORY for a destination, it is treated as OPTIONAL.
You need to set at least one local destination, which can be specified as either OPTIONAL or MANDATORY.
When using LOG_ARCHIVE_MIN_SUCCEED_DEST, a minimum of one destination will be operationally treated as MANDATORY because the default is 1.
If any MANDATORY locations fail to be archived, the LOG_ARCHIVE_MIN_SUCCEED_DEST value has no impact on processing.
LOG_ARCHIVE_MIN_SUCCEED_DEST value can not be larger than the total number of destinations.
If a mandatory destination is deferred and the online redo-log is overwritten without sending the archived log to a standby database, the logs will have to be transferred manually.
Use these steps to set archive redo log destinations using LOG_ARCHIVE_DEST_n
1. SHUTDOWN NORMAL or IMMEDIATE
2. Edit the init.ora file and set up the LOG_ARCHIVE_DEST_n parameters
3. Edit the init.ora file and set up the LOG_ARCHIVE_FORMAT parameter.
%s - seq number (%S to left pad zeros)
%t – thread number (%T to left pad zeros)
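A sketch of the resulting init.ora entries (paths, the standby service name, and values are illustrative):
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/arch MANDATORY'
LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db OPTIONAL REOPEN=60'
LOG_ARCHIVE_DEST_STATE_1 = ENABLE
LOG_ARCHIVE_DEST_STATE_2 = ENABLE
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1
LOG_ARCHIVE_FORMAT = 'arch_%t_%s.arc'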
The REOPEN option of the LOG_ARCHIVE_DEST_n parameter specifies in seconds how long the ARCn process should wait before trying again to archive the log. Default value is 300 seconds. If set to 0 (zero), the REOPEN option is disabled.
If REOPEN is specified for an OPTIONAL destination, the online redo-log can still be over-written (reused) if there is an error. However, if REOPEN is specified for a MANDATORY destination and the redo log write fails, the database will come to a standstill.
You can start multiple archiver processes with the LOG_ARCHIVE_MAX_PROCESSES parameter. Set to the number of archive processes you want Oracle to start when LOG_ARCHIVE_START=TRUE. Having multiple archive processes can help to prevent bottlenecks.
As an alternative to using LOG_ARCHIVE_DEST_n method to specify multiple locations for archive logs, you can use the combination of the following 2 parameters.
LOG_ARCHIVE_DEST - primary archive location.
LOG_ARCHIVE_DUPLEX_DEST - optional secondary location
Using this method you can not have the archive file automatically copied to a different machine. In other words, you can’t archive to a remote location.
You can not use the first method described in this section and this method at the same time.
Standby dB in Sustained Recovery Mode
Can create a standby database and keep it in sustained (managed) recovery mode. In this mode, archived redo logs are automatically sent and applied to the standby database SERVICE (alias).
Standby transmission mode is used for maintaining a local or remote standby database.
ARCn processes create corresponding RFS connections for each standby database. If there are 4 ARCn processes archiving to 2 standby databases there will be a total of 8 RFS connections (4 for each database).
The ARCn process “pushes” information to the RFS process.
The RFS process is active when the standby instance is started and is responsible for:
- Receiving I/O from the ARCn process.
- Create and populate the log files from the primary database on the standby database. The location is specified by the STANDBY_ARCHIVE_DEST parameter.
- Updates the log history in the standby control file.
The TIMEOUT option of the RECOVER command can be used to specify an optional timeout interval (in minutes). If Oracle "times out" before an update to the standby database control file, managed recovery is exited. By default (no TIMEOUT value specified), managed recovery waits indefinitely and stops only through manual intervention, an instance shutdown, or a system crash.
RECOVER MANAGED STANDBY DATABASE CANCEL command will terminate managed recovery, but wait until the current managed recovery operation is complete.
RECOVER MANAGED STANDBY DATABASE CANCEL IMMEDIATE command will terminate managed recovery before reading another block from the redo log or before opening the next redo log, whichever comes first; it does not wait for the current operation to complete. If terminated while processing a log, the standby database will be in an inconsistent state.
Use the following commands to place a standby database in managed recovery mode:
1. STARTUP NOMOUNT
2. ALTER DATABASE MOUNT STANDBY DATABASE
3. RECOVER MANAGED STANDBY DATABASE TIMEOUT 45
(TIMEOUT parameter is optional)
Start a Database for Read-Only Operations
You can startup a standby database in read-only mode and make it available for query (DSS) users.
Note that while a standby database is opened as read-only, it becomes unavailable for managed recovery.
If you need to use your standby database for fail-over while it is open for read-only, you will need to run a recovery operation against it to resynchronize it with your production dB (applying logs). This can be time consuming. It is recommended to keep one standby database for query read-only users and a second for actual fail-over requirements. The one for fail-over should always be in managed recovery mode.
When placed in read-only mode, the only writes that occur on the standby database are for creation of temporary sort segments.
There are a few scenarios for getting a standby database into read-only mode:
-Put in read-only mode from SHUTDOWN STATE:
STARTUP NOMOUNT
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE OPEN READ ONLY ;
-Put in read-only mode from MANUAL recovery mode:
RECOVER CANCEL
ALTER DATABASE OPEN READ ONLY;
-Put in read-only mode from MANAGED recovery mode:
RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN READ ONLY;
Note: Application of redo logs stops once the current redo log has finished being applied.
There are a couple of scenarios for getting a read-only standby database into recovery mode:
-Put into MANAGED recovery mode from read-only mode:
Terminate or kill all of the sessions that are active against the standby dB.
RECOVER MANAGED STANDBY DATABASE; (TIMEOUT parameter optional)
-Put into MANUAL recovery mode from read-only mode:
Terminate or kill all of the sessions that are active against the standby dB.
RECOVER STANDBY DATABASE; (TIMEOUT parameter optional)
LogMiner
LogMiner can read info that is stored in online and archived redo logs, based upon selection criteria, without having to restore archived redo log files. Good for tracking changes to the database. Don’t use for audits.
Can be very useful in determining and reversing logical corruption by constructing SQL that can redo and/or undo operations. V$LOGMNR_CONTENTS view contains these SQL statements in the SQL_REDO and SQL_UNDO columns.
LogMiner creates 1 row in the V$LOGMNR_CONTENTS view for each redo record analyzed.
You can also use the data in V$LOGMNR_CONTENTS to:

  • Identify the SCN before and/or after a logical corruption occurred.
  • Follow the flow of transactions applied to a specific table.
  • Follow the flow of transactions made by a specific user.
  • Identify patterns of data access.
  • Perform volume/capacity testing and performance tuning.

LogMiner has the following functionality and/or restrictions:

  • Only works in 8.1 or higher
  • Analyze redo for any 8.0 and higher dB that is on the same platform and uses the same character set as the instance running LogMiner.
  • Performs all of its analysis using a dictionary created by a PL/SQL package.
  • Collects DML operation info on conventional tables and NOT on:
    IOT's, clustered tables and indexes, non-scalar data types, or chained rows

The session that analyzed the redo logs and created the data in the V$LOGMNR_CONTENTS view is the only session that can access that data. If it needs to be made available to other sessions, move the data from the view into a table.
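A minimal sketch of a LogMiner session (the dictionary and log file paths are illustrative, and UTL_FILE_DIR is assumed to include the dictionary directory):
-- build the dictionary file once, on the source database
EXECUTE DBMS_LOGMNR_D.BUILD(dictionary_filename => 'dict.ora', dictionary_location => '/oracle/dict');
-- register the redo logs to analyze
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/oracle/arch/arch_1_101.arc', Options => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/oracle/arch/arch_1_102.arc', Options => DBMS_LOGMNR.ADDFILE);
-- start the analysis and query the results
EXECUTE DBMS_LOGMNR.START_LOGMNR(DictFileName => '/oracle/dict/dict.ora');
SELECT sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_name = 'EMP';
EXECUTE DBMS_LOGMNR.END_LOGMNR;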
Fast-Start Parallel Rollback (FSPR)
FAST_START_IO_TARGET parameter – specified as an integer. Helps to control the number of I/O’s required during instance or crash recovery by making the DBWn’s work harder and write more dirty buffers to disk. Smaller values should result in faster recovery times at the expense of more I/O writes during normal processing.
Oracle ensures that the checkpoint position does not lag behind the redo log by more than 90% of the size of the smallest redo log.
V$INSTANCE_RECOVERY – gets info that will help you to determine the impact of your fast-start checkpointing parameters, such as info on redo blocks and recovery I/O’s
V$FILESTAT – Use to get average I/O statistics that will help to determine the FAST_START_IO_TARGET value. AVGIOTIM column has the I/O average time statistics.
SMON process coordinates the rollback of transaction sets in parallel via multiple server processes. SMON evaluates a threshold to begin the parallel rollback.
Can be very useful when there are long running parallel DML operations that need to get rolled back during instance recovery upon startup.
FSPR sometimes uses a technique called intra-transaction recovery. In this situation, there may be multiple transactions that need to be rolled back, all of equal size except one that is significantly larger than the rest. Oracle divides the rollback of the large transaction and shares it among the parallel processes allocated to the smaller transactions. In this way, the overall time to perform the instance recovery is reduced.
Use the FAST_START_PARALLEL_ROLLBACK parameter to control the number of processes that get involved in transaction recovery. Valid values are:
FALSE – disables fast-start parallel rollback.
LOW - total number of recovery servers can not be more than 2 times the CPU_COUNT parameter
HIGH - total number of recovery servers can not be more than 4 times the CPU_COUNT parameter
V$FAST_START_SERVERS – data for all FSPR processes.
V$FAST_START_TRANSACTIONS – progress of the transactions.
Manage Corrupt Block Detection and Repair
The DBMS_REPAIR package can be used to detect and repair corrupt table and index blocks, while objects remain available to normal processing.
Procedures available in the DBMS_REPAIR package
check_object – detect and report table or index corruption
fix_corrupt_blocks – mark blocks identified by the check_object procedure as corrupt.
dump_orphan_keys – identify index entries that reference rows in corrupt data blocks.
rebuild_freelists – rebuild the object freelist.
skip_corrupt_blocks – will allow the DBMS_REPAIR package to skip blocks that were marked as corrupt during full table and index scans.
admin_tables – admin functions for creating, dropping, and purging DBMS_REPAIR repair and orphan key tables. These tables live in the SYS schema.
The DBMS_REPAIR package has the following restriction:
- LOB's, VARRAYs and nested tables are supported, but out-of-line columns are ignored.
- skip_corrupt_blocks and rebuild_freelist procedures support clusters, but the check object procedure does not.
- IOT’s and LOB indexes are not supported.
- dump_orphan_keys procedure does not support bitmap indexes or function-based indexes.
- The largest key that the dump_orphan_keys procedure can process is 3,950 bytes.
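A minimal sketch of using the package, assuming the SCOTT.EMP table and the default repair table name (parameter names follow the documented package; verify against your release):
-- create the repair table used to record corrupt blocks
BEGIN
  DBMS_REPAIR.ADMIN_TABLES(
    table_name => 'REPAIR_TABLE',
    table_type => DBMS_REPAIR.REPAIR_TABLE,
    action     => DBMS_REPAIR.CREATE_ACTION,
    tablespace => 'USERS');
END;
/
-- check a table and report how many corrupt blocks were found
DECLARE
  corrupt_count BINARY_INTEGER := 0;
BEGIN
  DBMS_REPAIR.CHECK_OBJECT(
    schema_name       => 'SCOTT',
    object_name       => 'EMP',
    repair_table_name => 'REPAIR_TABLE',
    corrupt_count     => corrupt_count);
  DBMS_OUTPUT.PUT_LINE('Corrupt blocks: ' || corrupt_count);
END;
/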
Additional methods for determining corruption:
DB_VERIFY – performs block checking on datafile blocks, from a command prompt, for online and offline datafiles. Minimal performance impact.
ANALYZE – the VALIDATE STRUCTURE option will verify the structural integrity of an index, table or cluster. Can check or verify that tables and indexes are in sync. Use this method for checking and reporting corruption on partitioned tables. Can identify invalid rows in a partitioned table.
DB_BLOCK_CHECKING – Will identify a corrupt block before it is actually marked as corrupt, by performing checks at the time changes are made to a block. It is an init.ora parameter. When block corruption is detected and DB_BLOCK_CHECKING=TRUE, an ORA-1578 error is written to the alert file.

Monday 25 February 2008

Miscellaneous Manageability Features

Miscellaneous Manageability Features

New Database Limits
Maximum database size: 512 petabytes (petabyte is 2 to the 50th power of bytes)
Maximum number of tablespaces: ~2 billion
Maximum number of datafiles per tablespace: 1022
Maximum number of partitions per table or index: 64,000
Maximum number of columns per table: 1000
Maximum columns per index: 32
New maximum size for CHAR: 2000 bytes
New maximum size for VARCHAR2: 4000 bytes
NCHAR and NVARCHAR2 are new multibyte national language support character equivalents of CHAR and VARCHAR2 and store the same amount of data respectively.
Size of Oracle 8 extended ROWID: 10 bytes
Size of Oracle 7 restricted ROWID: 6 bytes
Size of BLOB (binary large object type): 4GB
Size of CLOB (character large object type): 4GB
Size of NCLOB (NLS character large object type): 4GB

Relocate and Reorg tables
You can use the ALTER TABLE statement to reorganize extents or move a table to a different tablespace.
Can move (relocate) a table from one tablespace to another using the following command:
ALTER TABLE table_name MOVE TABLESPACE target_tablespace_name;
Documentation says that any associated indexes, constraints and grants in the original tablespace should also be moved. May require up to twice the amount of space currently being consumed by the table and its related objects.
You can not perform any DML (insert, update, or delete) operations on a table that is in the process of being moved.
Can not relocate a partitioned table.
Use the following command to “rebuild” an Index-Organized Table:
ALTER TABLE iot_table_name MOVE INITRANS 10;
Can be performed online for non-partitioned IOT’s if there are no overflow data segments with following command:
ALTER TABLE docindex MOVE ONLINE INITRANS 10;
With an overflow segment use:
ALTER TABLE iot_table_name MOVE TABLESPACE iot_index_tbs OVERFLOW TABLESPACE iot_overflow_tbs;

Remove Unused Columns From a Table
Use the ALTER TABLE ...DROP COLUMN command to remove a column from a table. ALTER TABLE table_name DROP COLUMN column_name;
dropping multiple columns at once:
ALTER TABLE table_name DROP (col1, col2, ... coln);
If the columns to be dropped are part of unique constraints or primary keys, use the CASCADE CONSTRAINTS option on the ALTER TABLE command. This will drop any referential integrity constraints referring to a primary or unique key defined on the dropped columns.
Use the DROP COLUMN INVALIDATE command to remove the column and any views, triggers, or stored programs that are dependent on the column.
Can’t drop a pseudo-column, nested table type of column, or a partition key column.
Can add a column to a table with the same name as the dropped column name.
The DROP COLUMN option actually removes the column from the table description (DESCRIPTORS), removes the column data and its length usage from each row of the table and frees up space in the data block. Can be slow on large tables.
Can use the ALTER TABLE...SET UNUSED option to mark a column as unused. Data becomes unavailable, but still exists in the rows of the table. Much faster!
Example:
ALTER TABLE table_name SET UNUSED COLUMN column_name;
To drop an “unused” column at a later time use the following command:
ALTER TABLE table_name DROP UNUSED COLUMNS;
During an IMPORT, any columns marked as UNUSED are removed and not included in the imported table. During an EXPORT, fields marked as unused are ignored.
Related data dictionary tables:
ALL_, DBA_, and USER_UNUSED_COL_TABS - Shows tables with columns that have been marked as unused.

Define Temporary Tables
Temp tables hold data only for the length of a transaction or session.
No redo is generated for temp tables, so they are not protected by the redo log.
Views, Indexes and Triggers can be created based upon temp tables.
You can export a temp table, but it will contain only the table structure...NO ROWS.
Temp tables store their data in the session’s sort space. If more space is required, the temporary tablespace of the USER is used.
You can query the DBA_TABLES view to determine if a table is a temporary table. It contains a DURATION column that reflects the length of temp table existence.
Examples:
Create temp table to last duration of session:
CREATE GLOBAL TEMPORARY TABLE table_name....ON COMMIT PRESERVE ROWS;
Create temp table to last duration of transaction:
CREATE GLOBAL TEMPORARY TABLE table_name....ON COMMIT DELETE ROWS;
Create temp table as a clone of another table:
CREATE GLOBAL TEMPORARY TABLE table_name AS SELECT ...

SQL*Loader Enhancements
Can load objects, collections and LOB’s.
Removal of the 64K physical record limit.
New FILLER keyword for filler fields.
New VARCHARC, VARRAW, LONG VARRAW and VARRAWC datatypes that function similar to VARCHAR.
Can specify a custom record separator.
Can AND together DEFAULTIF and NULLIF predicates.
Can specify a field delimiter that is longer than one character.

Export and Import Utility Enhancements
Can use a query in EXPORT to unload tables.
Can specify multiple dump files and work around the previous 2GB export file limit.
Can export tables with objects and LOB’s, even in direct mode.
Can export and import precalculated optimizer stats without the need to recompute stats during import, for selected exports and tables.
More control over the validation of type object identifiers during import.

Monitor Long-running Operations
Referencing V$FILESTAT and correlating I/O activity to the EXPLAIN plan is one way to follow the progress of a long-running operation.
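A sketch of such a query, joining to V$DATAFILE for the file names:
SELECT df.name, fs.phyrds, fs.phywrts, fs.readtim, fs.writetim
FROM v$filestat fs, v$datafile df
WHERE fs.file# = df.file#
ORDER BY df.name;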

Sunday 24 February 2008

Database Resource Manager(DRM)

Database Resource Manager(DRM)

Using the DRM, you can allocate CPU time percentages to different applications and users. You could assign a DSS a lower priority than an OLTP system, and vice versa. In essence, you have the ability to configure your production environment based upon various application priorities.
DRM can partition the CPU to a maximum of eight levels. Uses percentages to specify how to partition CPU at each level.
DRM can also be used to limit the degree of parallelism for a set of users. By setting the max degree of parallelism for a consumer group in a plan, you can manage parallelism at the plan level.
Elements of the Database Resource manager
Resource Consumer Group - user sessions that are grouped based upon similar resource usage requirements.
Resource Plan – specifies which resources such as CPU and parallelism to allocate to each Resource Consumer Group. Many can be defined, but only one is used by an instance at a particular point in time.
Resource Allocation Method – What the DRM uses when allocating a particular resource. These methods are “used” by both Resource Consumer Groups and Resource Plans.
Resource Plan Directive – The administrator uses this to allocate resources among Resource Consumer groups and to associate Resource Consumer Groups with Resource Plans.
The following order represents the approach for Database Resource Management:

  1. Create resource plans
  2. Create resource consumer groups
  3. Create resource plan directives
  4. Assign users to consumer groups
  5. Specify plan to be used by an instance


DBMS_RESOURCE_MANAGER Package – Use to maintain Resource Consumer Groups, Resource Plans and Plan Directives, as well as group changes to the plan schema. Requires the ADMINISTER_RESOURCE_MANAGER system privilege to administer the DRM.
It uses the following procedures to perform its functions:

  • create_plan
  • update_plan
  • delete_plan
  • delete_plan_cascade
  • create_consumer_group
  • update_consumer_group
  • delete_consumer_group
  • create_plan_directive
  • update_plan_directive
  • delete_plan_directive

Use a scratch area called the pending area to make changes to the plan schema. The process is 1) create the pending area, 2) make changes, and 3) submit the changes. Validating the changes is an optional step. The submit_pending_area procedure must complete successfully in order for changes to take effect.
The 4 related procedures (clear_pending_area discards pending changes instead of submitting them):


1. dbms_resource_manager.create_pending_area
2. dbms_resource_manager.validate_pending_area (optional)
3. dbms_resource_manager.clear_pending_area
4. dbms_resource_manager.submit_pending_area


The DBMS_RESOURCE_MANAGER package is also used to assign Resource Consumer Groups to users using the following procedure:
set_initial_consumer_group(user in varchar2, consumer_group in varchar2)
Use the DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE procedure along with the OTHER_GROUPS consumer group to create a plan directive to share for all user sessions that are not explicitly assigned a directive in a resource plan. This procedure includes the Resource Plan, Consumer Group and other rules.
The DBMS_RESOURCE_MANAGER_PRIVS package is used to maintain privileges that are associated to Resource Consumer Groups. In essence, grant and revoke classes to users. Procedures are executed with the privileges of the individual procedure caller.

  • grant_switch_consumer_group – use to grant switch privileges
  • revoke_switch_consumer_group – use to revoke switch privileges

Use the DBMS_SESSION package to switch your current Resource Consumer Group, using the following procedure:
switch_current_consumer_group
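A minimal sketch of the whole workflow (plan, group, directive values, and the user are illustrative):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'day_plan', comment => 'daytime plan');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(consumer_group => 'oltp_grp', comment => 'OLTP users');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'day_plan', group_or_subplan => 'oltp_grp',
    comment => 'OLTP gets most of the CPU', cpu_p1 => 80);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(plan => 'day_plan', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everyone else', cpu_p1 => 20);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();   -- optional
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
  -- allow SCOTT to use the group, then make it his initial group
  DBMS_RESOURCE_MANAGER_PRIVS.GRANT_SWITCH_CONSUMER_GROUP(
    grantee_name => 'SCOTT', consumer_group => 'oltp_grp', grant_option => FALSE);
  DBMS_RESOURCE_MANAGER.SET_INITIAL_CONSUMER_GROUP('SCOTT', 'oltp_grp');
END;
/
-- activate the plan for the instance
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'day_plan';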

Saturday 23 February 2008

Tablespace Management

Tablespace Management

Extended ROWID’s and Tablespace Sizing
The new Oracle 8 ROWID format is 10 bytes and has a different format from the Oracle 7 ROWID. The Oracle 8 ROWID type is known as extended and the Oracle 7 ROWID type is known as restricted.
The restricted ROWID format is BRF: 1) dB block number, 2) row in dB block, and 3) datafile number.
The extended ROWID format is OFBR: 1) data object number, 2) relative datafile number, 3) dB block number, and 4) row in dB block.
The data object number identifies the segment (database object) that contains the row.
The relative datafile number refers to the datafile within the tablespace holding the object. Relative file numbers are unique within a tablespace, but the same relative file number can appear in different tablespaces.
The database block number refers to the dB block, within the given datafile, that contains the row in question.
The row number refers to the specific row within the given dB block.
The extended ROWID format consists of 18 Base 64 characters (Base 64 format: A-Z, a-z, 0-9, +, /).

Locally-Managed Tablespaces
Locally managed tablespaces perform their own extent management, as opposed to dictionary managed tablespaces where the extent mgmt is performed by the data-dictionary.
You can not change the extent management method of a tablespace after it has been created.
Locally managed tablespaces use bitmaps in each datafile to keep track of the status of their data blocks. There is a bitmap value for each block or group of blocks in the tablespace.
No rollback is generated as a result of extents being allocated or freed for reuse.
You do not specify NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE or DEFAULT STORAGE for locally managed tablespaces.
Advantages and Rules:
-Reduces space mgmt operations associated with maintaining rollback segments and updating the data dictionary.
-Takes away the need to coalesce free extents.
-Extent sizes can be managed by the system or all have the same size. Use the UNIFORM or AUTOALLOCATE (system-managed) options to specify. If using AUTOALLOCATE, you can specify the initial extent size and then Oracle will determine the size of additional extents (minimum and default additional extent size is 64K). When using UNIFORM, the default is 1M unless you specify a value. All locally managed temporary tablespaces must have UNIFORM specified.
-Can create the system tablespace EXTENT MANAGEMENT LOCAL as part of the CREATE DATABASE command. When system tablespace is managed locally you can still create dictionary managed tablespaces. However, you must create all rollback segments in locally managed tablespaces.
-Can use EXTENT MANAGEMENT LOCAL as part of the CREATE TABLESPACE command for permanent tablespaces other than SYSTEM.
-Can specify EXTENT MANAGEMENT LOCAL as part of the CREATE TEMPORARY TABLESPACE command.
Example of permanent locally managed tablespace with non-default uniform extent size:
CREATE TABLESPACE tablespace_name ...
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
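A sketch of a locally managed temporary tablespace (file name and sizes are illustrative):
CREATE TEMPORARY TABLESPACE temp_ts
TEMPFILE '/u02/oradata/temp01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;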

Transportable Tablespaces
Use to copy or move tablespaces between Oracle databases that have the same block size, character set, and are on the same hardware platform.
Faster than export and import. Transporting a tablespace requires only a copy of the datafiles and the integration of the tablespace metadata into the data dictionary.
Can also transport indexes so they don’t need to be rebuilt.
The tablespace set must be fully "self-contained". This means that none of the following conditions may be true:
- There is an index inside of the tablespace set that is based upon a table that is not contained in the tablespace set.
- A partitioned table does not have all of its partitions located within the tablespace set. In other words, some of the table’s partitions live in a different tablespace set.
- There is an LOB column in one of the tables within the tablespace set that points to LOB’s outside of the tablespace set.
Perform the following to create a self-contained transportable set:
- ALTER TABLESPACE tablespace_name READ ONLY; for all tablespaces in the tablespace set.
- Use the EXPORT command with TRANSPORT_TABLESPACE=Y and TABLESPACES= ts1, ts2...tsn. Where ts1, ts2 are the tablespace names in the self-contained set. Use the TRIGGERS=Y/N, CONSTRAINTS=Y/N, GRANTS=Y/N, and FILE=filename parameters as well. Note that the FILE value specifies the name of the output file that will contain the structural information. Note that Y is the default setting for the Y/N value parameters specified.
When transporting from an OLTP system to a data-warehouse environment, specify GRANTS=Y, TRIGGERS=Y, CONSTRAINTS=Y and TRANSPORT_TABLESPACE=Y in the export command to be sure the objects and all associated table information are brought over. However, if the Oracle exam asks which 2 parameters you are most likely NOT to need (that is, set to N), answer TRIGGERS and CONSTRAINTS.
The TRIGGERS export parameter is new to Oracle 8i and is there to control the transport of TRIGGERS related to the transportable tablespace set.
Specify the DATAFILES import parameter to name the datafiles that belong to the transferred tablespace being imported.
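A hedged sketch of the full sequence for a single tablespace ts1 (file names and paths are illustrative; exp and imp will prompt for a suitably privileged userid):
ALTER TABLESPACE ts1 READ ONLY;
-- from the OS prompt, export only the metadata for the tablespace set
exp transport_tablespace=y tablespaces=ts1 triggers=n constraints=n grants=y file=ts1.dmp
-- copy ts1.dmp and the ts1 datafiles to the target system, then plug them in
imp transport_tablespace=y datafiles='/u02/oradata/ts1_01.dbf' file=ts1.dmp
ALTER TABLESPACE ts1 READ WRITE;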

Read-Only Tablespace Enhancements
Use ALTER TABLESPACE tablespace_name READ ONLY command to place tablespaces in read-only mode. Oracle will wait for any active transactions to complete and then put the tablespace in read-only mode.
Query the V$TRANSACTION view to identify long running transactions that may be keeping a tablespace from becoming READ-ONLY. You can then kill those transactions and place the tablespace in READ-ONLY mode.

Friday 22 February 2008

Parallel DDL, Parallel DML, and Parallel Queries

Parallel DDL, Parallel DML, and Parallel Queries

Oracle 8i supports the following parallel operations:
table scan, nested loop join, sort merge join, hash join, “not in”, group by,
select distinct, union and union all, aggregation, PL/SQL functions called from SQL,
order by, create table as select, create index, rebuild index, rebuild index partition,
move partition, split partition, update, delete, insert ... select,
enable constraint (table scan is parallelized), star transformation, cube, and rollup.
Parallel query can be established at both the instance and schema object level. The default is set at the schema object level.
Parallel DML automatically performs multiple DML activities and removes the disadvantages of manual parallel DML such as requiring multiple open sessions, manual coordination of transaction commits and the complexity associated with determining where to “split” the work.
Use Parallel DML to speed up DML operations against large database objects. Very useful in a DSS requiring performance and scalability.
Parallel DML setup overhead takes away any advantage for using it with OLTP transactions, but can improve performance of batch OLTP type jobs.
Use the ALTER SESSION ENABLE PARALLEL DML statement. Parallel DML is disabled by default.
Set parameter PARALLEL_AUTOMATIC_TUNING= TRUE for Oracle to automatically tune and set intelligent defaults for parallel execution.
When parallel query is used and PARALLEL_AUTOMATIC_TUNING=TRUE, parallel execution buffers are allocated from the large pool. Otherwise, parallel execution buffers are allocated from the shared pool.
Can set degree of parallelism in the PARALLEL clause which is part of the CREATE TABLE command. Can specify PARALLEL with or without an integer representing the degree. If degree is not specified, Oracle will determine optimal degree value based upon the PARALLEL_THREADS_PER_CPU parameter.
If OPS is being used, the quantity of CPU’s and instances are used to calculate the default number of query servers or threads used.
Example of PDML:
CREATE TABLE table1_name
PARALLEL
AS SELECT * FROM table2_name ;
Can also specify a parallel hint in the form:
/*+ PARALLEL(table, integer) */
PARALLEL_DEGREE_LIMIT is a resource limit that can be specified for a profile and then assigned to a user. Controls the maximum degree of parallelism that a user can deploy.
Some restrictions when using Parallel DML:
- If a table is not partitioned then no parallel update or delete operations are allowed.
- Once a parallel DML statement (or a serial direct-load insert) has modified a table, no subsequent serial or parallel DML or query statements can access the same table within the same transaction.
- Parallel DML is not supported for triggers.
- Parallel DML is not supported in replication.
- Parallel DML is not allowed with self-referential integrity, delete cascade and deferred integrity.
- Parallel DML is not supported on tables with object columns or LOB columns.
- Parallel DML is not supported on index-organized or clustered tables.
- Parallel DML txns can not be distributed txns.
If you use hints on both parts of an INSERT ... SELECT and the degrees of parallelism are different, the degree of parallelism used will be the one specified in the INSERT parallel hint.
If you do a JOIN on two tables that have different degrees of parallelism specified, the degree used for the join operation will be the higher of the two.
There are 2 types of partition-wise joins, full and partial. Benefits of creating a partition-wise join include skipping or reducing the “redistribution” when one or both of the tables in the join are partitioned on the join key. This increases overall performance and reduces the required memory and temp storage.
A full partition-wise join takes a large join and turns it into individual joins between the partitions of each join table. Both tables must be equipartitioned on the join key.
Partial partition-wise joins are used when only one of the tables being joined is partitioned on the join key.
Oracle performs parallel transaction recovery (rollback and rollforward) after transaction failure, process failure, instance failure, and system failure.
Set the FAST_START_PARALLEL_ROLLBACK parameter so that parallel transaction recovery will occur. If it is not set, there is no parallel transaction recovery. Valid values are HIGH, LOW, or FALSE. The default value is LOW.
SMON determines if a large recovery transaction becomes a candidate for parallel recovery. When performing parallel recovery, SMON is the coordinator and the slaves are the individual recovery processes.
When the coordinator process fails, PMON recovers the coordinator and then all slave processes rollback their own work in parallel.
When a single slave process fails, PMON will take care of rolling back the work for the failed process. All the remaining slaves’ processes will rollback their own work.
When multiple slave processes fail, PMON will rollback all work serially.
Related Data Dictionary Views:
V$PX_SESSION - contains real-time data for sessions running parallel execution. The columns in this view are:
SADDR - session address
SID - session identifier
SERIAL# - session serial number
QCSID - session identifier of the parallel coordinator
QCSERIAL# - session serial number of the parallel coordinator
QCINST_ID - instance number on which the parallel coordinator is running
SERVER_GROUP - logical group of servers to which this parallel server process belongs
SERVER_SET - logical set of servers that this parallel server process belongs to. A single server group will have at most two server sets
SERVER# - logical number of a parallel server process within a server set
DEGREE - degree of parallelism being used by the server set
REQ_DEGREE - degree of parallelism that was requested by the user when the statement was issued and prior to any resource, multi-user, or load balancing reductions
V$PX_SESSTAT - join view of the v$px_session and v$sesstat views. Has info about the sessions running parallel execution. Includes the following columns:
SADDR - session address
SID - session identifier
SERIAL# - session serial number
QCSID - session identifier of the parallel coordinator
QCSERIAL# - session serial number of the parallel coordinator
QCINST_ID - instance number on which the parallel coordinator is running
SERVER_GROUP - logical group of servers to which this parallel server process belongs
SERVER_SET - logical set of servers that this parallel server process belongs to. A single server group will have at most two server sets
SERVER# - logical number of a parallel server process within a server set
DEGREE - degree of parallelism being used by the server set
REQ_DEGREE - degree of parallelism that was requested by the user when the statement was issued and prior to any resource, multi-user, or load balancing reductions
STATISTIC# - Statistic number (identifier)
VALUE - Statistic value
V$PX_PROCESS - Has info about sessions running parallel execution processes. Includes the following columns:
SERVER_NAME - name of the parallel server (P000, P001, etc)
STATUS - state of the parallel server. Either In Use or Available
PID - process identifier
SPID - OS process ID
SID - session ID of slave, if in use
SERIAL# - session serial number of slave, if in use
V$PX_PROCESS_SYSSTAT - Status of query server and buffer allocation stats. Has parallel statistic info about processes. Has the following columns:
STATISTIC - name of the statistic
VALUE - value of the statistic
V$PQ_SESSTAT - Status of all current server groups. Info about how load balancing and multi-user algorithms are affecting hinted and default values. Will become obsolete in a future release.
V$FILESTAT - Use to evaluate I/O and workload.
V$PQ_TQSTAT - Shows message traffic at the table queue level. Data is valid when queried from a session executing parallel SQL statements.
V$SESSTAT and V$SYSSTAT - Shows parallel execution stats for each session, including total number of queries, DML and DDL statements executed in a given session.
V$SESSION - Includes a new column PDML_ENABLED that contains a value of YES or NO to indicate if PDML is enabled or not.
Examples of Serial and Parallel Direct-Load INSERT
You can specify serial direct-load INSERT with the APPEND hint, for example:
INSERT /*+ APPEND */ INTO emp
SELECT * FROM t_emp;
COMMIT;
You can specify parallel direct-load INSERT by setting the PARALLEL attribute of the table into which rows are inserted, for example:
ALTER TABLE emp PARALLEL (10);
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO emp
SELECT * FROM t_emp;
COMMIT;
You can also specify parallelism for the SELECT operation by setting the PARALLEL attribute of the table from which rows are selected:
ALTER TABLE emp PARALLEL (10);
ALTER TABLE t_emp PARALLEL (10);
ALTER SESSION ENABLE PARALLEL DML;
INSERT INTO emp
SELECT * FROM t_emp;
COMMIT;
The PARALLEL hint for an INSERT or SELECT operation takes precedence over a table’s PARALLEL attribute. For example, the degree of parallelism in the following INSERT ... SELECT statement is 12 regardless of whether the PARALLEL attributes are set for the EMP and T_EMP tables:
ALTER SESSION ENABLE PARALLEL DML;
INSERT /*+ PARALLEL(emp,12) */ INTO emp
SELECT /*+ PARALLEL(t_emp,12) */ * FROM t_emp;
COMMIT;
For more information on parallel INSERT statements, see "Rules for Parallelizing INSERT ... SELECT" in the Oracle documentation.

Thursday 21 February 2008

Composite Partitioning

Composite Partitioning

Composite partitions are hash-method subpartitions of a range-method-partitioned table. Good for historical data and striping. Provides benefits of regular range-partitioned tables such as better manageability as well as including the parallelism and data placement benefits of hash partitioning.
Composite-partitioned tables and indexes are logical structures. Their data is actually stored in the segments of their subpartitions.
You can define range-partitioned global indexes.
You can name the subpartitions and specify specific tablespaces to store them in.
Indexes on composite partitions are local and are stored BY DEFAULT in the same tablespace as the table subpartition. Can specify different storage parameters using the following command:
CREATE INDEX index_name ON table_name (col) LOCAL STORE IN (tsp_1, tsp_2, tsp_3);
Example of Composite Partition:
CREATE TABLE employee
(emp_no NUMBER,
emp_name VARCHAR(32),
salary NUMBER)
PARTITION BY RANGE (emp_no) SUBPARTITION BY HASH (emp_name)
SUBPARTITIONS 8 STORE IN (ts1, ts3, ts5, ts7)
(PARTITION p1 VALUES LESS THAN (100) PCTFREE 40,
PARTITION p2 VALUES LESS THAN (200) STORE IN (ts2, ts4, ts6, ts8),
PARTITION p3 VALUES LESS THAN (MAXVALUE)
(SUBPARTITION p3_s1 TABLESPACE ts4,
SUBPARTITION p3_s2 TABLESPACE ts5));
Must specify:
partition method (range)
partition key
partition descriptions and bounds
subpartition method (hash)
subpartition columns
number of subpartitions per partition
Mark a subpartition of a local index unusable as follows:
ALTER INDEX ... MODIFY SUBPARTITION ... UNUSABLE;
Rebuild the subpartition of a local index using the following command:
ALTER INDEX ... REBUILD SUBPARTITION ...
TABLESPACE tabspace10 PARALLEL (DEGREE 2);
Rename an index subpartition using the following command:
ALTER INDEX index_name RENAME SUBPARTITION current_name TO new_name;
Rename a table subpartition using the following command:
ALTER TABLE table_name RENAME SUBPARTITION current_name TO new_name;
Exchange partitions to convert a subpartition of a table into a non-partitioned table using the following command:
ALTER TABLE table_name EXCHANGE SUBPARTITION partition_name
WITH TABLE new_table_name INCLUDING INDEXES;
Add a subpartition using the following command:
ALTER TABLE table_name MODIFY PARTITION range_partition_name
ADD SUBPARTITION new_subpartition_name TABLESPACE tabspace10 ;
The new subpartition is populated with rows from the existing subpartitions for the partition specified, using the hash function.
Coalesce a subpartition using the following command:
ALTER TABLE table_name MODIFY PARTITION range_partition COALESCE SUBPARTITION;
Move a subpartition using the following command:
ALTER TABLE table_name MOVE SUBPARTITION sub_partition_name TABLESPACE new_tablespace;
When moving a subpartition, local index subpartitions corresponding to the subpartition are marked UNUSABLE unless they are empty.
Truncate a subpartition using the following command:
ALTER TABLE table_name TRUNCATE SUBPARTITION sub_partition_name DROP STORAGE;
Can verify the creation of composite partitions by querying the DBA_SEGMENTS data dictionary view.
Example:
SELECT segment_name, partition_name, segment_type
FROM dba_segments
WHERE segment_name = 'parent_table_name';
The ANALYZE TABLE command will analyze all subpartitions of a table even though the command only references the table.
Example: ANALYZE TABLE table_name COMPUTE STATISTICS ;
If you want to EXPORT only a subpartition, you must use TABLE mode of the export command and specify the table_name and subpartition.
Table-level export is when a table and all its dependent indexes, partitions, and subpartitions are exported by specifying just the table for export. Table-level export is supported in all export modes.

Partitioned Index-Organized Tables, Large Objects, and Object Types
General
The following attribute types can NOT be part of a partition key:
1. an object
2. a REF
3. part of a nested table
4. a VARRAY

LOB’s and Partitioned Tables
Partitioned tables can contain LOB columns.
A partition key CAN NOT contain a LOB column.
LOB data is always equipartitioned with the table it is part of. In other words the LOB data is partitioned on the same field as the parent table's partition keys.
There is an LOB data segment that corresponds to the data partition and a LOB index segment that corresponds to the index partition for every partition of a partitioned table that contains an LOB. These segments contain the LOB that belongs to each row in the associated partition. Note that LOB data and index data are not separate entities. The index is an integral part of the LOB column storage and is created and maintained by the system.
The information in the previous paragraph also holds true for composite partition tables as well.
LOB index partitions will always be stored in the same tablespace as the LOB data partitions to which they correspond. They are co-located with the data partitions.
You cannot directly specify any attributes for a LOB index or any of its partitions. Instead, they are taken from those of the LOB data partition and/or the tablespace the LOB lives in.
When using hash partitioning, tablespace is the only storage attribute that can be specified at the LOB partition level. The remaining storage attributes are set at the table level LOB column definition.
You can specify non-hash LOB storage characteristics at the partition or table level.
Rules for determining the tablespace of an LOB data partition:
The tablespace specified for the LOB data partition
Otherwise, the default tablespace specified at the table level for all LOB partitions of a given LOB column.
Otherwise, stored in the same tablespace as the table partition to which it corresponds.

To view the table level info and default attributes of LOB data partitions reference the DBA_PART_LOBS view.
To modify storage attributes for a partitioned LOB column use the ALTER TABLE table_name MODIFY PARTITION statement.
If you need to add a new partition for a LOB column on a table that is already partitioned, use the ALTER TABLE table_name ADD PARTITION command.
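A minimal sketch of a range-partitioned table with a LOB column and per-partition LOB tablespaces (table and tablespace names are illustrative):
CREATE TABLE documents
 (doc_id   NUMBER,
  doc_body CLOB)
PARTITION BY RANGE (doc_id)
 (PARTITION p1 VALUES LESS THAN (1000)
    LOB (doc_body) STORE AS (TABLESPACE lob_ts1),
  PARTITION p2 VALUES LESS THAN (MAXVALUE)
    LOB (doc_body) STORE AS (TABLESPACE lob_ts2));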

Partitioned Index-Organized Tables
Rules for partitioning an index-organized table:
- Can only use range partitioning
- The partition columns have to be a subset of the primary key columns
- You can partition secondary indexes, locally and globally
- The overflow data segments are always equipartitioned w/the table partitions
- You can specify storage attributes at the table or partition level on both the table data and the overflow data
Characteristics of an index-organized table with Overflow:
- Each partition has an index segment and an overflow data segment
- Overflow data segments are equipartitioned with the primary key index segments.
- Can specify default values for physical attributes at the table or partition level
- Attributes specified before the overflow keyword apply to the primary key index segment, and those specified after overflow keyword apply to the overflow data segment.
- You can only specify values for PCTTHRESHOLD and INCLUDING column at the table level.
- If an overflow tablespace location is not specified at the partition level, the tablespace specified at the table level is used. If not specified at the table level, overflow is stored in the tablespace of the corresponding partition index segment.
- SYS_IOT_TOP_Pn and SYS_IOT_OVER_Pn are the system-generated names for the index and overflow segment names, respectively.
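A minimal sketch of a range-partitioned index-organized table with an overflow segment (names and values are illustrative):
CREATE TABLE doc_index
 (doc_id  NUMBER,
  token   VARCHAR2(30),
  text    VARCHAR2(2000),
  CONSTRAINT pk_doc_index PRIMARY KEY (doc_id, token))
ORGANIZATION INDEX
PCTTHRESHOLD 20 INCLUDING token
OVERFLOW TABLESPACE iot_ovf_ts
PARTITION BY RANGE (doc_id)
 (PARTITION p1 VALUES LESS THAN (1000) TABLESPACE iot_ts1,
  PARTITION p2 VALUES LESS THAN (MAXVALUE) TABLESPACE iot_ts2);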

Partitioned Object Tables
Composite indexes are supported on partitioned object tables.
You can use range, hash and composite partitioning on partitioned object tables.
You can use global indexes on range partitions of both composite and non-composite partitioned tables.

Partitioned Tables With Object Types
Tables containing object types can be partitioned.
Object attributes CANNOT be part of the partition key.
The object identifier(OID) can be user defined, but the partition key of an object table CANNOT explicitly use an OID.
Some or all of the columns of a user-defined OID can be used in the partition key.

Wednesday 20 February 2008

Partition Maintenance Operations

Partition Maintenance Operations

Partition Management Commands
Moving Partitions
Use the ALTER TABLE command with the MOVE PARTITION clause to move a partition to a different tablespace. If the partition contains data, all global and local index partitions are marked as unusable. Either rebuild them or drop and re-create the indexes after moving a table partition.
Adding Partitions
Use the ALTER TABLE command with the ADD PARTITION clause to add a new partition at the high end. If the partition bound on the last partition is MAXVALUE, or you want to add a partition at the start or middle of a table, use the SPLIT PARTITION clause of the ALTER TABLE command.
If there is a local index defined on the table you are adding a partition to, Oracle will automatically add a matching index partition. You can only add a new highest partition to a global index, and must use the SPLIT PARTITION clause of the ALTER INDEX command.
To add a partition to a hash-partitioned table use either:
ALTER TABLE employee ADD PARTITION;
or
ALTER TABLE employee ADD PARTITION p3 TABLESPACE t3;
Local indexes need to be rebuilt when adding hash partitions.
Dropping Partitions
Use the ALTER TABLE command with the DROP PARTITION clause to drop a table partition. This command drops the partition and the data in the partition.
Any local indexes defined on the table will have their corresponding partitions dropped. You can not explicitly drop a local index partition.
You can not drop the only partition in a table.
If a global index partition is empty it can be dropped using the ALTER INDEX...DROP PARTITION command.
Dropping a global index partition with data causes the next highest partition to be marked as unusable. Must perform a rebuild on the higher partition after dropping the lower one.
Can not drop the highest partition of a global index.
Two methods for dropping partitions that contain data and have a global index defined on them:
1. Don't drop the global indexes. Issue the ALTER TABLE...DROP PARTITION statement without actually deleting the data from the partition.
· Global and non-partitioned indexes are marked as unusable and you need to rebuild them afterwards.
· Can only REBUILD a partitioned index “one partition at a time”. Issue ALTER INDEX...REBUILD COMMAND for each index partition to rebuild.
· Use this method for large tables where the partition that is being dropped comprises a large portion of the total data in the table.
2. Delete the rows from the partition before dropping the partition. Use on small tables or partitions with little data.
· The delete command will generate redo and fire any related triggers. Use ALTER TABLE...MODIFY PARTITION...NOLOGGING to turn off logging for the individual partition before executing the delete command and it will reduce the amount of time it takes to perform the delete.

Two methods for dropping partitions that contain data and have referential integrity constraints defined on them:
1. Disable the integrity constraints, drop the partition and then re-enable the integrity constraint(s).
Use this method for large tables where the partition that is being dropped comprises a large portion of the total data in the table.
2. Delete the rows and then drop the partition. The delete command will generate redo and fire any related triggers. Use on small tables or partitions with little data.

Coalescing Partitions
Use the ALTER TABLE...COALESCE PARTITION command to reduce the number of partitions created by the HASH method and redistribute the remaining data among the remaining partitions. Removes a single partition and Oracle selects which one. Any affected local indexes must be rebuilt.
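For example, using the hash-partitioned employee table from the ADD PARTITION examples above:
ALTER TABLE employee COALESCE PARTITION;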
Truncating Partitions
The ALTER TABLE...TRUNCATE PARTITION command will remove all rows from a table partition. This command will also truncate any associated local index partitions.
Can not TRUNCATE an index partition directly.
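An illustrative example, assuming a sales table with a partition p1999 (names are made up):
ALTER TABLE sales TRUNCATE PARTITION p1999;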
If a partition contains data and global indexes, it can be truncated in one of 2 ways:
Use the ALTER TABLE...TRUNCATE PARTITION command leaving the global indexes in place. This marks any global and non-partitioned indexes as unusable. Must rebuild each partition of the global partitioned index, one at a time. Use when the table is large or the partition holds the majority of the data in the table.
Delete the rows in the partition before truncating. Use when the table is small or the partition in question has a small % of total rows in table.
Two methods for truncating partitions that contain data and have referential integrity constraints defined on them:
1. Disable the integrity constraints, truncate the partition and then re-enable the integrity constraint(s).
Use this method for large tables where the partition that is being truncated comprises a large portion of the total data in the table.
2. Delete the rows and then truncate the partition. The delete command will generate redo and fire any related triggers. Use on small tables or partitions with little data.

Splitting Partitions
Use the ALTER TABLE...SPLIT PARTITION command to split a partition. When you split a table partition, new local index partitions are created if a local index exists on the table.
Oracle gives system generated names and default storage parameters to partitions created by splitting. These names and storage values can be changed.
If there is data in the partition being split, local and global indexes are marked as unusable after the split. You need to rebuild each index partition individually after the split.
Local index partitions can not be explicitly split, only indirectly as the result of a table partition split.
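A minimal sketch, assuming a range-partitioned sales table whose partition p_max currently extends to MAXVALUE (all names and bounds are illustrative):
ALTER TABLE sales SPLIT PARTITION p_max AT (2000)
INTO (PARTITION p1999, PARTITION p_max);
Rebuild any affected index partitions afterwards.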
Merging Partitions
Two adjacent range or composite partitions can be merged into a single partition using the ALTER TABLE...MERGE PARTITIONS ...INTO PARTITION command. The new partition gets the higher upper bound from the two merged partitions. Data is physically moved.
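An illustrative example, assuming adjacent range partitions p1 and p2 on a sales table (names are made up):
ALTER TABLE sales MERGE PARTITIONS p1, p2 INTO PARTITION p2;
The resulting partition inherits the upper bound of p2.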
Exchanging Partitions
Use this feature to convert a partitioned table into a non-partitioned table and vice-versa. Data and index segments are exchanged between the two tables.
In an exchange, no data is actually moved. Just the data dictionary is updated. It is a 2-way exchange and the user must have ALTER TABLE privileges on both tables to perform the operation.
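A minimal sketch, assuming a partitioned sales table and a non-partitioned staging table sales_1999 with the same column layout (names are illustrative):
ALTER TABLE sales EXCHANGE PARTITION p1999 WITH TABLE sales_1999
INCLUDING INDEXES WITHOUT VALIDATION;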

Partition Management Restrictions
ALTER TABLE....
Can not add LONG or LONG RAW column types.
Can not add a column with a user-specified default.
Can not alter the datatype of the partition column.
Can use this command to change logical table attributes, which include enabling constraints.
Move Partition Restrictions
For a HASH partition can only specify the TABLESPACE attribute in the MOVE clause.
Can not move a composite partition in a single operation; you must move each individual subpartition using the MOVE SUBPARTITION clause.
Add Range-Partition Restrictions
Max partitions = (64K–1)
Can not add a partition if the highest partition is bound by MAXVALUE. Use the SPLIT PARTITION command to add a partition at the beginning or middle of the table.
Split Partition Restrictions
Can not use this command for a HASH partitioned table.
Merge Partition Restrictions
Can not use on an index-organized table.
Can not use on a HASH partitioned table.
Hash Partition Restrictions
The following commands can not be used on a HASH partitioned table:
ALTER TABLE...SPLIT PARTITION
ALTER TABLE...DROP PARTITION
ALTER TABLE...MERGE PARTITIONS

Partition Management Privileges
ALTER TABLE... – the table must be in your own schema, or you must have the ALTER privilege on the table, or the ALTER ANY TABLE system privilege.
...DROP or TRUNCATE PARTITION – also requires the DROP ANY TABLE privilege if you are not the table owner.
...ADD, MODIFY, MOVE, SPLIT PARTITION – also requires a space quota in the applicable tablespaces.

Related Data Dictionary Views
ALL_TAB_PARTITIONS
ALL_TAB_SUBPARTITIONS
ALL_TABLES
ALL_IND_PARTITIONS
ALL_IND_SUBPARTITIONS
ALL_INDEXES
ALL_LOB_PARTITIONS
ALL_LOB_SUBPARTITIONS
ALL_PART_COL_STATISTICS
ALL_PART_HISTOGRAMS
ALL_PART_INDEXES
ALL_PART_KEY_COLUMNS
ALL_PART_LOBS
ALL_PART_TABLES

Tuesday 19 February 2008

Basic Partitioning Concepts

Basic Partitioning Concepts

Benefits of Table and Index Partitioning and General Partition Rules
Improved query performance due to distributed data and less I/O contention.
Better manageability for large tables. Can be deleted or loaded in smaller pieces.
Better backup and recovery performance with more options available than having to backup or recover an entire table.
Better availability of data because partitions are available independently of each other.
Use local partitioned indexes in a DSS environment to limit the impact of regularly scheduled drop partition operations.
Partitioned Indexes
Partitioned indexes can either be local or global and either prefixed or non-prefixed.
65,535 (64K – 1) partitions maximum for a partitioned index.
A local index is an index whose partition boundaries are the same as the corresponding table and is created with the LOCAL keyword. Local indexes are equipartitioned with a table, meaning they have the same partition key and range values.
Local index equipartitioning is beneficial because Oracle will automatically maintain the index partitions and keep them in line with their corresponding table partitions. As the DBA you don't have to worry about them. Also, this scheme provides partition independence since you can take down a partitioned table and its corresponding local index partition and not affect any other partition. It is also good for limiting queries to a single partition.
A global index is an index whose values refer to data in more than one table partition. They are not equipartitioned and must be maintained by the DBA as a separate entity. If a single partition is somehow corrupt all the partitions of a global index will require REBUILDing. You might want to use a global index if the table is not partitioned or if you want a unique index on a column that is not part of the partition key.
While global indexes can be created with the same partition ranges as their corresponding tables (i.e., equipartitioned), there is no benefit to this since Oracle will not automatically maintain them. If you find this is the case, use a local index instead.
A prefixed index is one whose leftmost columns are the same as the columns of the partition key of the index. You can have both local and global prefixed indexes.
A non-prefixed index is an index whose leftmost columns are not the same as the partition key.
Example: You partition an index on the hire_date column but you base the actual index on the salary column.
While you can have local non-prefixed indexes in Oracle 8, global non-prefixed indexes are not allowed. This means that while you can partition on a column that is not a part of the index for a local index, for a global index you must partition on the same columns you are indexing.
Unique non-prefixed local indexes are NOT supported. The only exception to this is if the index is a subset of the partition key. Otherwise in order to have a unique index on columns not in the partition key, the index must be global.
Local non-prefixed indexes are ideal for DSS systems where you need to partition by a date range but want to query often on another column.
Global indexes and local prefixed indexes are better for OLTP systems because they minimize index probes.
You can drop partitioned indexes (global only) using the ALTER INDEX index_name DROP PARTITION partition_name statement.
You cannot drop the highest partition of a partitioned index.
All partitions for a given table or index must have the same logical definition. However each partition can reside on different tablespaces and have different storage parameters.
You can have any combination of a partitioned table only, a partitioned index only, or both index and table partitioned.
MAXVALUE is a keyword that specifies a value, which nothing else can be equal to or higher than. It is always a good idea to specify MAXVALUE on the last partition to catch NULL and other data types whose values are higher than the range specified.
If you do not specify MAXVALUE and you try to INSERT a row that contains a value in a partition key column higher than the range specified in the PARTITION BY RANGE clause, the INSERT statement will fail!

Range and Hash Partitioning
Example of RANGE partitioning (column lengths are illustrative):
CREATE TABLE usergroup
( member_id NUMBER,
lastname VARCHAR2(30),
firstname VARCHAR2(30),
member_since DATE )
PARTITION BY RANGE ( lastname )
( PARTITION P1 VALUES LESS THAN ( 'F' )
TABLESPACE t1,
PARTITION P2 VALUES LESS THAN ( 'L' )
TABLESPACE t2,
PARTITION P3 VALUES LESS THAN ( 'N' )
TABLESPACE t3,
PARTITION P4 VALUES LESS THAN ( 'R' )
TABLESPACE t4,
PARTITION P5 VALUES LESS THAN ( MAXVALUE )
TABLESPACE t5 );
Examples of HASH partitioning (column lengths are illustrative):
CREATE TABLE usergroup
( member_id NUMBER,
lastname VARCHAR2(30),
firstname VARCHAR2(30),
member_since DATE )
PARTITION BY HASH ( lastname )
PARTITIONS 5
STORE IN (t1, t2, t3, t4, t5);
In the above example, partitions will be named by the system in the format SYS_Pnnn. At a minimum you must specify the partition key, the partitioning method, and the number of partitions when creating a hash-partitioned table.
CREATE TABLE usergroup
( member_id NUMBER,
lastname VARCHAR2(30),
firstname VARCHAR2(30),
member_since DATE )
PARTITION BY HASH ( lastname )
( PARTITION p1 TABLESPACE t1,
PARTITION p2 TABLESPACE t2,
PARTITION p3 TABLESPACE t3,
PARTITION p4 TABLESPACE t4,
PARTITION p5 TABLESPACE t5 );
DBA_TAB_PARTITIONS data dictionary view contains table partition storage parameters, partition levels, and partition statistics.
DBA_IND_PARTITIONS data dictionary view contains index partition storage parameters, partition levels, and partition statistics.

Monday 18 February 2008

Manage Large Objects

Manage Large Objects

LONG’s vs. LOB’s
There are four types of large object types, or LOBs: CLOB, NCLOB, BLOB and BFILE.
While CLOBs, NCLOBs and BLOBs are stored in tablespaces, BFILEs are pointers to binary files stored outside the actual database. Because of this BFILEs are read-only.
Tables can contain only one LONG column and many LOB columns.
A table containing a LONG column can not be partitioned; a table with LOB columns can.
Max size of a LONG is 2GB, max size of a LOB is 4GB.
LONG’s are read sequentially while LOB’s can be randomly accessed.
LONG’s are stored in-line, that is within the column of the table. LOB’s can be stored in-line or out-of-line. LOB’s are made up of 2 parts, the locator and the actual LOB data. The Locator is stored in the column, but the LOB data can be stored in a separate tablespace. BFILES are pointers to external files.
LONG’s can not be defined as attributes in a user-defined object, while LOB’s can.
LOB bind variables can be defined.

Implement Oracle DIRECTORY Objects
Since BFILEs are stored externally they do not require the LOB storage clause but instead need a directory object which tells Oracle where they are located in the file system. Directory objects are created with the CREATE OR REPLACE DIRECTORY statement and specify the operating system file path of the directory which contains the BFILE in question.
You need to make sure that Oracle has OS permissions to access a directory object. Oracle will let you create invalid directory objects (i.e., non-existent directories or directories for which it does not have permissions); it will not check their validity and will NOT create the directories for you, so check your typing carefully when creating them. You will only get an error down the line when Oracle tries to access the BFILE and cannot.
Creating directory objects which reference directories that contain the actual Oracle binaries, datafiles, control files, etcetera is a big security risk and is strongly discouraged by Oracle.
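A minimal sketch, assuming the files live in /u01/app/documents and a manuals table holds the pointers (all names and paths are illustrative):
CREATE OR REPLACE DIRECTORY doc_dir AS '/u01/app/documents';
CREATE TABLE manuals
(manual_id NUMBER,
manual_file BFILE);
INSERT INTO manuals VALUES (1, BFILENAME('DOC_DIR', 'install.pdf'));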

Create LOB storage
There is an additional storage clause that defines out-of-line storage characteristics for LOB’s.
Example:
CREATE TABLE docs
(doc_id varchar2(50),
doc CLOB,
CONSTRAINT pk_docs PRIMARY KEY (doc_id))
STORAGE (INITIAL 32K NEXT 32K PCTINCREASE 0)
TABLESPACE doc1
LOB (doc) STORE AS
(TABLESPACE doc2
STORAGE (INITIAL 1M NEXT 1M PCTINCREASE 0)
CHUNK 16K PCTVERSION 10 NOCACHE LOGGING) ;
Note that the LOB has its own tablespace defined for out-of-line storage.
CHUNK: number of dB blocks in a LOB “manipulation” chunk. Max value is 32K. Must be less than or equal to the NEXT value. It is a multiple of the dB block size and is the minimum unit when accessing LOB’s. CHUNKs must be composed of contiguous blocks. On the other hand, CHUNKs need not be stored next to each other.
PCTVERSION: % of how much LOB storage needs to be changed before empty chunk space is reused. Older versions of LOB’s will not be overwritten until at least this much of the storage has been consumed. 10 is the default.
(NO)CACHE (NO)LOGGING: specifies whether or not to load object into the dB buffer cache and whether to turn on or off redo logging.
Oracle does not maintain data integrity for external files, BFILES.

Sunday 17 February 2008

Defining Object Relational Features

Defining Object Relational Features
An object relational database contains the major aspects of a relational database, and provides for those of an object oriented (OO) database. Relational aspects are structures, operations and integrity rules. OO databases provide for user-defined objects that contain both structures (attributes) and methods.
Oracle Object concepts include abstract data types, object tables, collections such as nested tables and varying arrays (VARRAYS), large objects (LOB’s), references (REFs), and object views.

Define a Basic Object
CREATE TYPE address_obj AS OBJECT
(street VARCHAR2(60),
city VARCHAR2(35),
state CHAR(2),
zip NUMBER);
CREATE TYPE customer_obj AS OBJECT
(name VARCHAR2(50),
Address ADDRESS_OBJ);
CREATE TABLE customer
(cust_id NUMBER,
customer CUSTOMER_OBJ);

Create a Collection Object- VARRAY
CREATE OR REPLACE TYPE flavors_varray AS VARRAY(31) OF VARCHAR2(20);
CREATE TABLE ice_cream
(manufacturer VARCHAR2(40),
flavors FLAVORS_VARRAY,
CONSTRAINT pk_ice_cream PRIMARY KEY (manufacturer));
Each manufacturer in the ice_cream table is uniquely identified by the manufacturer column and can have up to 31 different flavors of ice cream.

Create a Collection Object-NESTED TABLE
CREATE OR REPLACE TYPE address_obj AS OBJECT
(type VARCHAR2(10),
street VARCHAR2(60),
city VARCHAR2(35),
state CHAR(2),
zip NUMBER);
note: address_obj contains one record for each address. Record indicates the type of address it is such as billing, shipping, etc.
CREATE TYPE addresses_nt AS TABLE OF address_obj;
note: addresses_nt is an object of type nested table.
CREATE TABLE customer
(name VARCHAR2(50),
addresses ADDRESSES_NT)
NESTED TABLE addresses STORE AS addresses_nt_tab;
Note: create customer table containing the addresses_nt nested table. Out-of-line nested table data stored in addresses_nt_tab table.

Create and use an Object View
Use object views to implement OO structures on top of an existing relational dB.
Can update the underlying relational table via the object view or via inserts to the relational table itself.
Can use the INSTEAD OF trigger on an object view.
The general steps to create an OBJECT VIEW are:
identify or create the relational table(s)
create the objects ==> CREATE TYPE.....AS OBJECT
create the object view
Based upon this relational table and these objects:
CREATE TABLE customer
(id NUMBER PRIMARY KEY,
name VARCHAR2(40),
street VARCHAR2(60),
city VARCHAR2(35),
state CHAR(2),
zip NUMBER);
CREATE TYPE address_obj AS OBJECT
(street VARCHAR2(60),
city VARCHAR2(35),
state CHAR(2),
zip NUMBER);
CREATE TYPE customer_obj AS OBJECT
(name VARCHAR2(50),
Address ADDRESS_OBJ);
The syntax to create an object view would be:
CREATE VIEW customer_object_view
(id, customer)
AS
SELECT id,
customer_obj(name,
address_obj(street, city, state, zip))
FROM customer ;

Saturday 16 February 2008

Indexes and Indexed-Organized tables

Indexes and Indexed-Organized tables

Bitmap Indexes (BI)
ALTER TABLE statement no longer invalidates a BI.
There is a new ALTER TABLE clause, RECORDS_PER_BLOCK, that restricts the number of rows that can be stored in each dB block (a sketch follows this list).
MINIMIZE RECORDS_PER_BLOCK – Calculates the max number of records in any of the current dB blocks and sets that as the max that can be inserted into any new blocks. Improves query performance.
Can not specify if bitmap index already exists on the specified table.
Can not specify for an Index-organized table or nested table
Can not specify on an empty table.
NOMINIMIZE RECORDS_PER_BLOCK – disables minimize. DEFAULT
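A minimal sketch against an assumed emp table (the table name is illustrative):
ALTER TABLE emp MINIMIZE RECORDS_PER_BLOCK;
ALTER TABLE emp NOMINIMIZE RECORDS_PER_BLOCK;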

Reverse-Key Indexes
Reverses the bytes of each index column.
Helps to improve performance in Oracle Parallel Server (OPS) when index changes are centered on a small set of leaf blocks.
Insertions will be better distributed across all the leaf blocks of an index instead of being clustered together when index values are sequential.
Use when inserts are in ascending values and deletes are occurring to the lower values.
May help an OLTP in OPS in some cases.
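An illustrative example, assuming an emp table with a sequence-generated empno column (names are made up):
CREATE INDEX emp_empno_rev_idx ON emp (empno) REVERSE;
ALTER INDEX emp_empno_rev_idx REBUILD NOREVERSE;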

Function Based Indexes
An index created on functions or expressions based upon one or more columns of the tables being accessed. These expressions may be arithmetic, built-ins or user-defined.
Use when different NLS collating sequences are present.
Pre-calculates the values of the function or expression and stores it in an index.
Can be a B-tree or bitmapped index.
The expression can not contain aggregate functions, nested table, REF or LOB columns, or an object-type column that contains a nested table, REF or LOB.
Must use the cost-based optimizer and collect statistics for a function-based index to be used.
Requires QUERY REWRITE or GLOBAL QUERY REWRITE privileges to create.
To enable function-based indexes you must issue:
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
and
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;
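A minimal sketch, assuming an emp table with an ename column (all names are illustrative):
CREATE INDEX emp_upper_name_idx ON emp (UPPER(ename));
ANALYZE TABLE emp COMPUTE STATISTICS;
SELECT * FROM emp WHERE UPPER(ename) = 'SMITH';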

Descending Indexes
Means that index entries are stored in descending order instead of ascending order. Ascending order is the default.
Once ASC or DESC is specified, the database character set determines the collation sequence.
Use to reduce query-sorting requirements when multiple queries require various column-ordering sequences.
Descending indexes are treated by Oracle as function-based indexes, don’t need QUERY REWRITE or GLOBAL QUERY REWRITE privileges to create.
Won’t be used until index and table are analyzed and stats are produced.
Can’t specify ASC or DESC for a domain index.
Can’t specify DESC for a bitmapped or reverse index.
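An illustrative example, assuming an emp table with a hiredate column (names are made up):
CREATE INDEX emp_hiredate_desc_idx ON emp (hiredate DESC);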

Online Index Rebuild
Can perform online index builds on partitioned, nonpartitioned, and index-organized tables with B-tree indexes, using either of the following statements:
ALTER INDEX index_name REBUILD ONLINE;
CREATE INDEX index_name ON table_name (col1...coln) ONLINE;
During an ONLINE index rebuild, DML SS locks are held on the table that disallow any DDL tasks.
Can perform DML. However, this is not recommended, especially if a large percent of rows are involved.
Can not perform any parallel DML during an online index build.
Usually, twice the amount of current index storage is required for an online rebuild.
Indexes that can not be rebuilt on-line include cluster type, bitmap type and secondary indexes created on IOT’s.
Index rebuild has 3 phases:
prepare phase – short locks placed on the table
build phase – journal table populated
merge phase – unlocked journal rows deleted

Compute Index Statistics
Can compute index statistics in various ways.
ALTER INDEX index_name REBUILD COMPUTE STATISTICS ;
On a partitioned index this statement rebuilds the index and gathers only index stats.
On a non-partitioned index it gathers table, column and index stats.
CREATE INDEX index_name ON table_name (col1...coln) COMPUTE STATISTICS;
On a non-partitioned index, the stats generated by the above command are stored in the data dictionary.
On a partitioned index this statement creates the index and gathers only index stats.
On a non-partitioned index it creates the index and gathers table, column and index stats.
ANALYZE INDEX index_name COMPUTE STATISTICS ;
DBMS_DDL.ANALYZE_OBJECT(‘INDEX’ , ‘SCHEMA’, ‘INDEX_NAME’, ‘METHOD’);
The statistics generated on an index that is a concatenated key refer only to the first column.

Index-Organized Table (IOT)
An IOT is a table that stores its data in only an index. Instead of having data and index storage structures, it contains only an index structure that contains both the index value and the data values.
Fast key-based access for queries using exact match and range searches based upon the primary key in applications such as Information Retrieval, Spatial data and OLAP applications
IOT’s can be reorganized with the ALTER TABLE statement MOVE clause.
Reduced storage requirements, key columns are not duplicated in an index and a table.
CREATE TABLE command is used to create an IOT and ORGANIZATION INDEX clause defines it as an IOT.
Example: CREATE TABLE iot_table
( f1 char(20),
f2 NUMBER,
f3 NUMBER,
CONSTRAINT pk_iot_table PRIMARY KEY (f1, f2))
ORGANIZATION INDEX
TABLESPACE iot_tablespace
PCTTHRESHOLD 20
OVERFLOW TABLESPACE iot_overflow_tablespace ;
must specify a primary key on an IOT using a column constraint (single column) or table constraint clause (multicolumn).
The ORGANIZATION INDEX qualifier defines the table as an IOT.
optional row overflow specification
PCTTHRESHOLD – amount of space, expressed as a percent, to save in each index block for an IOT row.
Must be big enough to hold the primary key.
Can not specify on individual partitions of a partitioned IOT.
OVERFLOW TABLESPACE – tablespace location to store portions of rows that exceed the PCTTHRESHOLD value.
Can not use ROWID column datatype on an IOT.
No unique constraints or LONG datatype columns.
If partitioned, can not contain LOB or varray type columns.
If non-partitioned can contain nested table column types.
reorganize with the ALTER TABLE command MOVE clause
Query the DBA_TABLES view and evaluate the IOT_NAME and IOT_TYPE columns to view information about IOT’s. These fields are new to this table to support IOT’s.

Logical Rowid’s
A logical ROWID is a logical row identifier within an IOT that is based upon the IOT’s primary key.
Is used to create secondary indexes on IOT’s.
Can be accessed with the ROWID pseudocolumn. Returns the primary key value and a “physical guess”. The physical guess is created or updated when the secondary index is created or rebuilt. Contains the block location of the row in the IOT.
Logical ROWID value will remain static as long as the primary key value it is based upon does not change.
Fastest way to get to a row in an IOT, even if it takes more than one block access.
Logical ROWID’s can not be used to see how a table has been organized.
Logical ROWID’s can be stored in a UROWID datatype column.
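A minimal sketch of a secondary index and a UROWID column, using the iot_table created above (the index and second table names are illustrative):
CREATE INDEX iot_table_f3_idx ON iot_table (f3);
CREATE TABLE saved_rowids
(row_ref UROWID);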

Friday 15 February 2008

Summary Management

Summary Management


Dimension Tables – Describe business entities. Lookup or cross-reference tables. Examples include categorical information such as item-lists or department lists.
Fact Tables – Describe business transactions. Detail tables. Include measures that are character or numeric columns of a fact table. A measure can be simple, computed or multi-table. Simple is a single column in a fact table. Computed is an expression including only simple measures. Multi-table is a computed measure that is defined across multiple tables.

Materialized Views(MV)
Used to implement data-warehouses (summaries), distributed computing and mobile computing.
System privileges required to create a MV in your own schema

  • CREATE SNAPSHOT or CREATE MATERIALIZED VIEW
  • CREATE TABLE
  • CREATE INDEX
  • CREATE VIEW

System privileges required to create a MV in a different schema

  • CREATE ANY SNAPSHOT or CREATE ANY MATERIALIZED VIEW


The QUERY REWRITE privilege is required to enable or disable query rewrite on a MV that references base tables in your own schema. If in another schema, user must have GLOBAL QUERY REWRITE privilege.
Use the CREATE MATERIALIZED VIEW command to create and load a MV. Can specify BUILD IMMEDIATE to load the MV data immediately or BUILD DEFERRED to load the data at a time in the future.
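A minimal sketch, assuming a sales detail (fact) table with prod_id and amount columns (all names are illustrative):
CREATE MATERIALIZED VIEW sales_summary_mv
BUILD IMMEDIATE
REFRESH COMPLETE ON DEMAND
ENABLE QUERY REWRITE
AS SELECT prod_id, SUM(amount) total_amount
FROM sales
GROUP BY prod_id;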
The query used to load a MV contains what Oracle calls the master or detail tables. These tables live in what Oracle calls the master database.
Use the DBMS_MVIEW.REFRESH procedure to populate a MV that has been created with the BUILD DEFERRED option.
Can create an MV on an existing summary table by using the PREBUILT option of The CREATE MATERIALIZED VIEW command. This is called “Registration of an Existing MV”.
If a MV is based upon a prebuilt user-defined table, the name of the MV must be the same as the prebuilt table.
Use the DBMS_OLAP.ESTIMATE_SIZE procedure to get an estimate of space required for a MV. MV’s based upon existing tables do not require storage parameters.
The Query rewrite feature is disabled by default. Specify ENABLE QUERY REWRITE in the CREATE MATERIALIZED VIEW or ALTER MATERIALIZED VIEW commands to enable.
MV’s can be updateable only by using the Advanced Replication Feature of Oracle, otherwise they are read-only.
A difference between MV’s and regular views is that MV’s contain both the view definition and the execution results. The results of the view query and the data are stored in a table when an MV is populated. Regular views are just stored queries and no results.
MV’s ON DEMAND refresh options
Complete C – re-runs the defining view's SQL query.
Fast F – applies only incremental changes from detail tables
Can not use fast refresh when:

  • Multiple table aggregated materialized view and DML has happened on the fact tables since the last full refresh.
  • MV has detail relations that are snapshots or views.
  • If MV contains the AVG() function and does not contain the COUNT() function.
  • If MV contains the VARIANCE() function and does not contain the COUNT() and SUM() functions.
  • If MV contains the STDDEV() function and does not contain the COUNT() and SUM() functions.

Force ? – uses the default refresh option. If force is the default option, Oracle will attempt a fast refresh; if a fast refresh is not allowed it will do a complete refresh.
Always A – always does a complete refresh.
DBMS_MVIEW package ON DEMAND MANUAL refresh procedures
DBMS_MVIEW.REFRESH – refresh one or more MV’s.
DBMS_MVIEW.REFRESH_ALL_VIEWS – refresh all MV’s.
DBMS_MVIEW.REFRESH_DEPENDENT – refresh all MV’s that are table-based and depend upon on a single or list of detail tables.
To use the DBMS_MVIEW package certain queues need to be made available by setting the following initialization parameters
JOB_QUEUE_PROCESSES – set to a value between 1 and 36
JOB_QUEUE_INTERVAL – set to a value between 1 and 3600
UTL_FILE_DIR – directory location of refresh log.
Use the DBMS_OLAP.SET_LOGFILE_NAME to rename the refresh.log file produced by ON DEMAND refreshes.
Monitor the progress of a MV refresh in the V$SESSION_LONGOPS view.
Monitor jobs on each queue in the DBA_JOBS_RUNNING table.
The ALL_MVIEW_ANALYSIS view contains moving averages for the time most recently refreshed and the average time to refresh, for both full and incremental types of refreshes.
The ALL_REFRESH_DEPENDENCIES table contains the names of container tables for a MV in your own schema.
MVDATA Export/Import parameter – If set to NO the MV without contents is imported. If set to YES, MV contents are imported.

Summary Advisor
The Summary Advisor uses procedures within the DBMS_OLAP package to gather summary and dimension information. Results, except for estimate MV size, are stored in a table in the dB that can be queried.
The summary advisor collects 3 types of information:
· Summary usage
· Summary recommendations
· Space requirements
Summary management is the process of creating, retaining and dropping summaries (MV’s) using recommendations of the summary advisor.
Must create dimensions to use the summary advisor. Good reason to create them!
Some DBMS_OLAP procedures used for summary management:
Advisory procedures
RECOMMEND_MV – only uses structural stats to make recommendations.
RECOMMEND_MV_W – uses structural and workload stats to make recommendations.
Evaluation procedures
EVALUATE_UTILIZATION -
EVALUATE_UTILIZATION_W – analyzes data collected by the Trace facility and places results in the MVIEWS$_EVALUATIONS table.
Query the ALL_REFRESH_DEPENDENCIES view to find out the names of dependent detail or container tables for summaries in the current schema.

Dimensions
Dimension Tables – Describe business entities. Lookup or cross-reference tables. Examples include categorical information such as item-lists or department lists. A data dictionary structure that is used to define categorized hierarchies that are based on columns that currently exist in tables.
Benefits of creating Dimensions:

  • enable query rewrites without the full use of constraints – desirable in a warehouse type of system.
  • can explicitly document hierarchies and dimensions.
  • Can be used by OLAP tools
  • help query rewrite perform complex types of rewrites
  • summary advisor uses dimensions to recommend the creation of MV’s based upon the dimension hierarchy

You can roll up (go up a level) or drill down (go down a level) through the levels of a dimension hierarchy.
Dimensions can be based upon multiple tables and dimensions can contain multiple hierarchies. A dimension level can contain up to 32 fields.
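A minimal sketch, assuming a time_table with day_key, month_id and year_id columns (all names are illustrative):
CREATE DIMENSION time_dim
LEVEL day_lvl IS time_table.day_key
LEVEL month_lvl IS time_table.month_id
LEVEL year_lvl IS time_table.year_id
HIERARCHY calendar_rollup (
day_lvl CHILD OF
month_lvl CHILD OF
year_lvl);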
Must have CREATE DIMENSION system privilege to create a dimension in your current schema. Need CREATE ANY DIMENSION to create in a different schema.
Related data dictionary views:
USER_DIMENSIONS – contains name, owner, state and revision
USER_DIM_HIERARCHIES – contains owner, dim-name and hierarchy name
USER_DIM_CHILD_OF – level pair hierarchy details
USER_DIM_JOIN_KEY – level join keys
USER_DIM_LEVELS – single level detail
USER_DIM_LEVEL_KEY – column within a level
USER_DIM_ATTRIBUTES – relationship between level field and actual associated table field.
Execute SMDIM.SQL script to create the DEMO_DIM package. The DEMO_DIM package includes procedures to view existing dimensions:
DEMO_DIM.PRINT_DIM - print one dimension
DEMO_DIM.PRINT_ALLDIMS - print all dimensions
Use the DBMS_OLAP.VALIDATE_DIMENSION procedure to validate a dimension.

Thursday 14 February 2008

New SQL Keywords for Computing Totals

New SQL Keywords for Computing Totals

ROLLUP – addition to the GROUP BY clause. Will return a single superaggregate row summary line for each group specified. It is considered an aggregate operator. Can produce subtotals and a grand total.
Usage is:
GROUP BY ROLLUP (f1...fn), where f is a field or fields referenced in the SELECT portion of a SQL statement.
Example:
SELECT empno,
COUNT(*) "emp count"
FROM emp
GROUP BY ROLLUP (empno);
This example will list the count of rows in this table for each employee and an extra single line that will contain a “rolled up” summary value of the count(*) field.
CUBE – addition to the GROUP BY clause. Will return a row summary line for each combination of groups specified. Use to create values for a cross-tabulation type of report. Is considered an aggregate operator. These aggregate operators are more efficient than regularly written SQL statements that create the same result because these aggregate functions use hints.
Usage is:
GROUP BY CUBE (f1...fn), where f is a field or fields referenced in the SELECT portion of a SQL statement.
Example:
SELECT deptno,
job,
COUNT(*) "emp count",
AVG(sal) "average salary"
FROM emp
GROUP BY CUBE (deptno, job);
This example returns the employee count and avg salary for each job within each department. The CUBE option also produces a summary line with the employee count and avg salary for each dept across all jobs, a summary line for each job across all departments, and a single line with values summed over all depts and jobs.
GROUPING function – returns a 0 (zero) if a NULL value represents an actual NULL. Will return a 1 (one) if a NULL value is the result of the ROLLUP or CUBE aggregate functions.
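An illustrative query combining GROUPING with ROLLUP against the emp table used above:
SELECT deptno,
GROUPING(deptno) "grouping flag",
COUNT(*) "emp count"
FROM emp
GROUP BY ROLLUP (deptno);
On the superaggregate row deptno is NULL and the grouping flag is 1; on the detail rows it is 0.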

Wednesday 13 February 2008

Top-N SQL queries

Top-N SQL queries

These types of queries force the ROWNUM condition in a SQL statement to be applied after ordering of the result set. This is achieved by creating a SQL query that contains the ROWNUM condition and a subquery that contains an ORDER BY clause. Top-N SQL queries are more efficient and faster because Oracle avoids sorting all of the rows in the table at once; this enhanced sorting mechanism is a key feature of Top-N SQL queries.
The following query will return the 20 smallest check numbers:
SELECT * FROM
(SELECT check_number FROM check_reg ORDER BY check_number)
WHERE rownum < 21 ;
An in-line view is a feature of a Top-N SQL query. It is a subquery that differs from a regular subquery because it contains an ORDER BY. An ORDER BY clause is not allowed in a regular subquery.

Tuesday 12 February 2008

DBMS_STATS Package

DBMS_STATS Package

The procedures in this package gather and manage statistics (a usage sketch follows the list). Procedures are:
GATHER_INDEX_STATS - get index stats.
GATHER_TABLE_STATS - get table, column, and index stats.
GATHER_SCHEMA_STATS - get stats for all objects in the schema.
GATHER_DATABASE_STATS - get stats for all objects in the database.
EXPORT_SCHEMA_STATS - extracts and saves the current stats.
IMPORT_SCHEMA_STATS - restores previously exported stats.
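A minimal usage sketch from SQL*Plus, assuming a SCOTT schema with an EMP table (names are illustrative):
EXECUTE DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP');
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT');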

Monday 11 February 2008

Optimizer and Query Improvements

Optimizer and Query Improvements

Optimizer Plan Equivalence and Stored Plans

Stored plans are also known as stored outlines. Execution plans are generated from the outlines and used to provide stability for SQL execution performance despite changes to system configuration or statistics values. A stored plan is mostly a set of hints that cause a SQL statement to use the same execution plan each time it is run.
Stored plans can also be used to help with migration from the rule-based optimizer to the cost-based optimizer.
Plan Stability needs to exactly match the text of the SQL statement to use the stored outline. Oracle compares cursors in the shared pool to the stored outline.
Use bind variables in applications to avoid problems with literal substitutions causing mismatches between a SQL statement and its corresponding stored outline.
CREATE_STORED_OUTLINES Parameter – Can be set to TRUE or a category name. Will create outlines in the DEFAULT category if set to TRUE, or in the category specified. Outlines rely on hints and primarily use the cost-based optimizer. Set parm using the ALTER SESSION or ALTER SYSTEM command (SESSION and INSTANCE level).
USE_STORED_OUTLINES Parameter – Can be set to TRUE or FALSE. Instructs Oracle to use or not use stored outlines. Set parm using the ALTER SESSION or ALTER SYSTEM commands (SESSION and INSTANCE level).
CREATE OUTLINE statement – SQL statement to create a stored outline.
ALTER OUTLINE statement – SQL statement to change an outline category, rename an outline or rebuild an outline.
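A minimal sketch, assuming the standard emp and dept tables and a category named daytime (names are illustrative):
CREATE OUTLINE emp_dept_outline FOR CATEGORY daytime ON
SELECT e.ename, d.dname
FROM emp e, dept d
WHERE e.deptno = d.deptno;
ALTER OUTLINE emp_dept_outline REBUILD;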
The new OUTLN userid is created when a database is created. This user owns the stored outlines. This user owns the OL$ table which contains the outline name, statement text, category, and the creation date. This user also owns the OL$HINTS table which contains the hints for the outlines in the OL$ table.
Can also view outlines in the DBA_OUTLINES, DBA_OUTLINE_HINTS, USER_OUTLINES and USER_OUTLINE_HINTS views.
OUTLN_PKG package can be used to manage outlines. Procedures include:
DROP_UNUSED - Drops outlines not used since created.
DROP_BY_CAT - Drops outlines assigned to a specific category name.
UPDATE_BY_CAT - Moves outlines from one category to another.
When configuring a hybrid system with different requirements during daytime processing vs. nighttime processing, you can create two different stored outlines in different categories.
The first step in ensuring a SQL statement uses the same execution plan each time is to create a stored outline for the SQL statement using the CREATE OUTLINE command.
Stored outlines can be easily transferred from one database to another by exporting and importing the schema that is owned by the OUTLN userid. All stored outlines are stored in the OUTLN schema.