Auto Task in Oracle 11g  

Posted by Mawahid

1. Check which window group the auto stats gather task is assigned to, and whether that window group exists.

SELECT client_name, window_group FROM   dba_autotask_client;

2. Created the window group (ORA$AT_WGRP_OS), as it did not exist.

BEGIN
  DBMS_SCHEDULER.CREATE_WINDOW_GROUP('ORA$AT_WGRP_OS');
END;
/

3. Now we need to add a window to that window group, so we created a new window and added it.

BEGIN
  dbms_scheduler.create_window(
    window_name     => 'MONDAY_WINDOW',
    duration        => numtodsinterval(2, 'hour'),
    resource_plan   => 'DEFAULT_MAINTENANCE_PLAN',
    repeat_interval => 'FREQ=DAILY;BYDAY=MON;BYHOUR=22;BYMINUTE=0;BYSECOND=0');
END;
/

BEGIN
  dbms_scheduler.add_window_group_member(
    group_name  => 'ORA$AT_WGRP_OS',
    window_list => 'MONDAY_WINDOW');
END;
/


Similarly, we created the remaining windows and added them to the group.
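The remaining windows can be created and added the same way; here is a sketch that loops over the other weekdays (the window names and the 22:00 start time are assumptions carried over from MONDAY_WINDOW):

DECLARE
  -- day abbreviations for BYDAY and the matching window names (names are illustrative)
  TYPE t_list IS TABLE OF VARCHAR2(30);
  v_byday t_list := t_list('TUE','WED','THU','FRI','SAT','SUN');
  v_names t_list := t_list('TUESDAY_WINDOW','WEDNESDAY_WINDOW','THURSDAY_WINDOW',
                           'FRIDAY_WINDOW','SATURDAY_WINDOW','SUNDAY_WINDOW');
BEGIN
  FOR i IN 1 .. v_byday.COUNT LOOP
    dbms_scheduler.create_window(
      window_name     => v_names(i),
      duration        => numtodsinterval(2, 'hour'),
      resource_plan   => 'DEFAULT_MAINTENANCE_PLAN',
      repeat_interval => 'FREQ=DAILY;BYDAY=' || v_byday(i) || ';BYHOUR=22;BYMINUTE=0;BYSECOND=0');
    dbms_scheduler.add_window_group_member(
      group_name  => 'ORA$AT_WGRP_OS',
      window_list => v_names(i));
  END LOOP;
END;
/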

select * from dba_scheduler_wingroup_members;

4. Now enable the auto task for gathering statistics.

BEGIN
  DBMS_AUTO_TASK_ADMIN.ENABLE(
    client_name => 'auto optimizer stats collection',
    operation => NULL,
    window_name => NULL);
END;
/

5. Query to check window timings and duration.

SELECT wgm.window_name, w.repeat_interval, w.duration
FROM   dba_scheduler_wingroup_members wgm
       JOIN dba_scheduler_windows w ON wgm.window_name = w.window_name
WHERE  wgm.window_group_name = 'ORA$AT_WGRP_OS';

6. Query to check the client history

SELECT client_name, window_name, jobs_created, jobs_started, jobs_completed
FROM dba_autotask_client_history
WHERE client_name like '%stats%';

7. Query to check auto task history

SELECT client_name, job_status, job_start_time, job_duration
FROM dba_autotask_job_history
WHERE client_name like '%stats%'
ORDER BY job_start_time;

8. Query to check the task-name of the auto-task clients

SELECT client_name, task_name, task_target_type
FROM dba_autotask_task;

9. Program Name

SELECT program_name
FROM dba_scheduler_programs
WHERE program_name='GATHER_STATS_PROG';

Reference sites:

http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/scheduse007.htm#i1011367
http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/tasks004.htm#CIHCDBEC

http://www.oracle-base.com/articles/11g/AutomatedDatabaseMaintenanceTaskManagement_11gR1.php
http://www.toadworld.com/KNOWLEDGE/KnowledgeXpertforOracle/tabid/648/TopicID/OAUTO1/Default.aspx
http://www.oracle-base.com/forums/viewtopic.php?f=1&t=11890

Reference views:

dba_scheduler_wingroup_members
dba_scheduler_windows
dba_scheduler_window_groups

dba_autotask_client
dba_autotask_client_history
dba_autotask_job_history
DBA_AUTOTASK_CLIENT_JOB
dba_autotask_operation
dba_autotask_task
DBA_AUTOTASK_WINDOW_CLIENTS
DBA_AUTOTASK_WINDOW_HISTORY
DBA_AUTOTASK_SCHEDULE



Character set Conversion  

Posted by Mawahid

We are now changing NLS_CHARACTERSET to AL32UTF8 in our case.
For reference please check Doc ID - 260192.1
Before starting this operation, take a complete database cold backup.
1.       Cross-check the existing NLS character set of the database.

SELECT value FROM NLS_DATABASE_PARAMETERS WHERE parameter =  'NLS_CHARACTERSET';

The NLS_CHARACTERSET defines the character set of the CHAR, VARCHAR2, LONG and CLOB datatypes.

2.       Do NOT use Expdp/Impdp when going to (AL32)UTF8 or another multibyte character set on ALL 10g versions lower than 10.2.0.4 (including 10.1.0.5). 11.1.0.6 is also affected.

3.       Check for any Invalid Objects.

SELECT owner,
object_name,
object_type,
status
FROM dba_objects
WHERE status ='INVALID';

If there are any invalid objects, run utlrp.sql:

                SQL> @?/rdbms/admin/utlrp.sql

If any objects remain invalid after running utlrp.sql, resolve them manually.

4.       Check for Orphaned Datapump master tables (10g and up)

SELECT o.status,
o.object_id,
o.object_type,
o.owner
||'.'
||object_name "OWNER.OBJECT"
FROM dba_objects o,
dba_datapump_jobs j
WHERE o.owner =j.owner_name
AND o.object_name=j.job_name
AND j.job_name NOT LIKE 'BIN$%'
ORDER BY 4,2;
                If there are any, then refer to Doc ID 336014.1 for cleaning up orphaned Datapump jobs.
5.       Check for leftover temporary tables using CHAR semantics.

SELECT C.owner
||'.'
|| C.table_name
||'.'
|| C.column_name
||' ('
|| C.data_type
||' '
|| C.char_length
||' CHAR)'
FROM all_tab_columns C
WHERE C.char_used = 'C'
AND C.table_name IN
(SELECT table_name FROM dba_tables WHERE TEMPORARY='Y'
)
AND C.data_type IN ('VARCHAR2', 'CHAR')
ORDER BY 1;
These tables may give errors during ALTER DATABASE CHARACTER SET or Csalter:
ORA-00604: error occurred at recursive SQL level 1
ORA-14450: attempt to access a transactional temp table already in use

6.       Check the Source database for "Lossy" (invalid code points in the current source character set).

csscan \"sys/<syspassword>@<TNSalias> as sysdba\" FULL=Y FROMCHAR=<current NLS_CHARACTERSET> TOCHAR=<current NLS_CHARACTERSET> LOG=dbcheck CAPTURE=N ARRAY=1000000 PROCESS=2

eg: 

csscan userid=\'mshaik as sysdba\' FULL=Y FROMCHAR=WE8ISO8859P1 TOCHAR=WE8ISO8859P1 LOG=dbcheck CAPTURE=N ARRAY=1000000 PROCESS=5


If csscan fails with ORA-00904: "CNVTYPE": invalid identifier, then try to re-install csscan (@?/rdbms/admin/csminst.sql)
                Always run Csscan connecting with a 'sysdba' connection/user; do not use the "system" or "csmig" user.

              If there is any "Lossy" data, those rows contain code points that are not currently defined correctly, and they should be cleaned up before you can continue.
7.       Once Csscan is actually scanning the tables, v$session_longops can be used to see the progress of scans of big tables.

SELECT target,
TO_CHAR(start_time,'HH24:MI:SS - DD-MM-YY'),
time_remaining,
sofar,
totalwork,
sid,
serial#,
opname
FROM v$session_longops
WHERE sid IN
(SELECT sid FROM v$session WHERE upper(program) LIKE 'CSSCAN%'
)
AND sofar < totalwork
ORDER BY start_time;

8.       Csscan will create 3 files:

dbcheck.out - a log of the output of csscan
dbcheck.txt - the Database Scan Summary Report
dbcheck.err - contains the rowids of the "Lossy" rows reported in dbcheck.txt (if any)

This is to check that all data is stored correctly in the current character set. Because the TOCHAR and FROMCHAR character sets are the same, there cannot be any "Convertible" or "Truncation" data reported in dbcheck.txt.

9.       Now check for "Convertible" and "Truncation" data when going to the desired character set, i.e. AL32UTF8 in our case.

csscan \"sys/<syspassword>@<TNSalias> as sysdba\" FULL=Y FROMCHAR=<current NLS_CHARACTERSET> TOCHAR=AL32UTF8 LOG=TOUTF8 CAPTURE=Y ARRAY=1000000 PROCESS=2

There should be NO entries under "Lossy" in the ".txt" file.
Please check Doc ID - 260192.1 for dealing with truncation or convertible data.
10.   In our case, a few tables were reported under "Lossy". Therefore we took an export of those tables and truncated them. See also step 14.

11.   Run csscan again as described in step 9 to ensure no lossy data is reported.

12.   Before using Csalter / ALTER DATABASE CHARACTER SET, check the database for:

a)      Partitions using CHAR semantics:

SELECT C.owner,
C.table_name,
C.column_name,
C.data_type,
C.char_length
FROM all_tab_columns C,
all_tables T
WHERE C.owner = T.owner
AND C.table_name = T.table_name
AND C.char_used = 'C'
AND T.PARTITIONED ='YES'
AND C.table_name NOT IN
(SELECT table_name FROM all_external_tables
)
AND C.data_type IN ('VARCHAR2', 'CHAR')
ORDER BY 1,2 ;
If this select returns rows, the change to AL32UTF8 will fail with "ORA-14265: data type or length of a table subpartitioning column may not be changed" or "ORA-14060: data type or length of a table partitioning column may not be changed" when columns using CHAR semantics are used as a partitioning or subpartitioning key, since the change to AL32UTF8 adapts the actual byte length of CHAR-semantics columns (data_length in all_tab_columns).
If there are columns using CHAR semantics as a partitioning or subpartitioning key, the partitioned tables need to be exported, dropped and, after the change, imported. Another solution is to temporarily go back to BYTE semantics for these columns before the change to AL32UTF8 and return to CHAR semantics afterwards.
Note 330964.1 provides more background on this issue.
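The temporary switch to BYTE semantics mentioned above can be sketched like this (the table and column names are illustrative, and the byte length must be large enough to hold the existing data once converted to AL32UTF8):

-- before the character set change: temporarily go back to BYTE semantics
ALTER TABLE scott.sales_part MODIFY (region_code VARCHAR2(30 BYTE));
-- ... run the character set change ...
-- afterwards: restore CHAR semantics
ALTER TABLE scott.sales_part MODIFY (region_code VARCHAR2(30 CHAR));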
b)      Functional indexes on CHAR semantics columns

SELECT OWNER,
INDEX_NAME,
TABLE_OWNER,
TABLE_NAME,
STATUS,
INDEX_TYPE,
FUNCIDX_STATUS
FROM DBA_INDEXES
WHERE INDEX_TYPE LIKE 'FUNCTION-BASED%'
AND TABLE_NAME IN
(SELECT UNIQUE (table_name) FROM dba_tab_columns WHERE char_used ='C'
)
ORDER BY 1,2 ;
If this returns rows, the change to AL32UTF8 will fail with "ORA-30556: functional index is defined on the column to be modified" or with "ORA-02262: ORA-904 occurs while type-checking column default value expression". If there are functional indexes on columns using CHAR semantics (including NCHAR and NVARCHAR2 columns), the indexes need to be dropped and recreated after the change.
Note that a disable will not be enough. The DDL of all those indexes can be found using:
                        conn / AS sysdba
SET LONG 2000000
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'STORAGE',false);
SELECT DBMS_METADATA.GET_DDL('INDEX',u.index_name,u.owner)
FROM DBA_INDEXES u WHERE u.INDEX_TYPE LIKE  'FUNCTION-BASED%' AND u.TABLE_NAME IN (SELECT UNIQUE (x.TABLE_NAME) FROM DBA_TAB_COLUMNS x WHERE x.char_used ='C' );
EXECUTE DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM,'DEFAULT');
c)       SYSTIMESTAMP in the DEFAULT value clause for tables using CHAR semantics

conn / AS sysdba
SET serveroutput ON
BEGIN
FOR rec IN
(SELECT OWNER,
TABLE_NAME,
COLUMN_NAME,
DATA_DEFAULT
FROM dba_tab_columns
WHERE CHAR_USED='C'
)
LOOP
IF UPPER(rec.DATA_DEFAULT) LIKE '%TIMESTAMP%' THEN
DBMS_OUTPUT.PUT_LINE(rec.OWNER ||'.'|| rec.TABLE_NAME ||'.'|| rec.COLUMN_NAME);
END IF;
END LOOP;
END;
/
This will give "ORA-00604: error occurred at recursive SQL level %s" together with "ORA-01866: the datetime class is invalid" during the change to AL32UTF8. The workaround is to temporarily change affected tables to use a DEFAULT NULL clause, eg: ALTER TABLE tab MODIFY ( col ... DEFAULT NULL NOT NULL ); After the character set change the default clause can be restored.
d)      Clusters using CHAR semantics

SELECT OWNER,
OBJECT_NAME
FROM ALL_OBJECTS
WHERE OBJECT_TYPE = 'CLUSTER'
AND OBJECT_NAME IN
(SELECT UNIQUE (TABLE_NAME) FROM DBA_TAB_COLUMNS WHERE char_used ='C'
)
ORDER BY 1,2
/
If this returns rows, the change will fail with "ORA-01447: ALTER TABLE does not operate on clustered columns". Those clusters need to be dropped and recreated after the change.
e)      Unused columns using CHAR semantics

SELECT OWNER,
TABLE_NAME
FROM DBA_UNUSED_COL_TABS
WHERE TABLE_NAME IN
(SELECT UNIQUE (TABLE_NAME) FROM DBA_TAB_COLUMNS WHERE char_used ='C'
)
ORDER BY 1,2
/
Unused columns using CHAR semantics will give an "ORA-00604: error occurred at recursive SQL level 1" with an "ORA-00904: "SYS_C00002_09031813:50:03$": invalid identifier". Note that the "SYS_C00002_09031813:50:03$" identifier will differ for each column. These unused columns need to be dropped.
                SQL> ALTER TABLE table_name DROP UNUSED COLUMNS;
f)       Ensure there is enough tablespace space available.
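A query along these lines gives a rough picture of free space per tablespace (how much is "enough" depends on how much "Convertible" data will be re-imported):

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024) free_mb
FROM   dba_free_space
GROUP  BY tablespace_name
ORDER  BY tablespace_name;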

g)      Check if there are any objects in recyclebin.

SQL> PURGE DBA_RECYCLEBIN;
                This removes unneeded objects; otherwise CSALTER may fail with an ORA-38301 error.
h)      Check if the compatible parameter is set to your base version

SQL>  show parameter compatible;
                Do not try to migrate, for example, a 10g database with compatible=9.2.
13.   If everything is ready, run Csscan again as a final check.

For 10g and up the Csscan output needs to be

* "Changeless" for all CHAR, VARCHAR2, and LONG data (Data Dictionary and User/Application data)
* "Changeless" for all User/Application data CLOB
* "Changeless" and/or "Convertible" for all Data Dictionary CLOB

And in order to run Csalter you need to see these 2 messages in the ".txt" file under [Scan Summary]:

******************************************************************************
                All character type application data remain the same in the new character set
and
                The data dictionary can be safely migrated using the CSALTER script
******************************************************************************

14.   Summary of steps needed to use Alter Database Character Set / Csalter:

For 10g and up:

a)      Export all the "Convertible" User/Application Data (make sure that the character set part of the NLS_LANG is set to the current database character set during the export session)

b)      Fix any non-CLOB Data Dictionary "Convertible" data using Note 258904.1. The "9i only" fixes in Note 258904.1 (Convertible data in Data Dictionary: Workarounds when changing character set) should NOT be done in 10g and up; those are SYS CLOB objects handled by Csalter.

c)       Truncate the exported "Convertible" User/Application Data tables.

d)      Run csscan to verify you only have "convertible" CLOB in the Data Dictionary and all other data is "Changeless".

15.   Running Csalter/Alter Database Character Set

For 10g and up

Csalter.plb needs to be run within 7 days after the Csscan run; otherwise you will get a 'The CSSCAN result has expired' message.

Shut down the listener and any application that connects locally to the database.
There should be only ONE connection to the database during the WHOLE time, and that is the sqlplus session where the change is done. RAC systems need to be started as a single instance.

If more than one session is connected, Csalter will fail and report "Sorry only one session is allowed to run this script".
Run in sqlplus connected as "/ AS SYSDBA":

Set job_queue_processes to zero:

SQL> ALTER SYSTEM SET job_queue_processes=0;

SQL> Shutdown immediate;
SQL> startup restrict
SQL> SPOOL Nswitch.log
-- do this alter system or you might run into "ORA-22839: Direct updates on SYS_NC columns are disallowed"
-- This is only needed in 11.1.0.6, fixed in 11.1.0.7, not applicable to 10.2 or lower

SQL> ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT LEVEL 1,FOREVER';

Now, run csalter script,

SQL> @?/rdbms/admin/csalter.plb
SQL> Shutdown immediate;
SQL> startup

and the database will be AL32UTF8.

16.   Now import the lossy tables that were truncated in the step above.

Issues Faced:
In our case the import failed for one of the tables, so we changed the semantics/datatype of the failing column to BYTE, imported the data, and later changed it back to the original.
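That temporary change can be sketched like this (the table, column and length are illustrative):

-- temporarily switch the failing column to BYTE semantics so the imported rows fit
ALTER TABLE scott.tag MODIFY (tag_name VARCHAR2(100 BYTE));
-- import the table, then restore the original definition
ALTER TABLE scott.tag MODIFY (tag_name VARCHAR2(100 CHAR));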

Compile Invalid Objects  

Posted by Mawahid

There are five ways to recompile invalid objects in a schema.

1. DBMS_DDL
2. DBMS_UTILITY
3. UTL_RECOMP
4. UTLRP.SQL
5. Manual recompilation

DBMS_DDL.ALTER_COMPILE

Definition

This procedure is equivalent to the following SQL statement:
ALTER PROCEDURE|FUNCTION|PACKAGE [schema.]name COMPILE [BODY]

Syntax

Exec dbms_ddl.alter_compile ( type, schema, name );

Type : must be PROCEDURE, FUNCTION, PACKAGE, PACKAGE BODY or TRIGGER
Schema : database username
Name : object name

Example

SQL> exec dbms_ddl.alter_compile ('PROCEDURE','SCOTT','TEST');

PL/SQL procedure successfully completed.

DBMS_UTILITY.COMPILE_SCHEMA

Definition

This procedure compiles all procedures, functions, packages, and triggers in the specified schema.

Syntax

Exec dbms_utility.compile_schema ( schema, compile_all );

Schema : database username
Compile All : boolean; TRUE (the default) recompiles all objects, FALSE only invalid ones

Example

SQL> exec dbms_utility.compile_schema('SCOTT');
PL/SQL procedure successfully completed.

UTL_RECOMP

Definition

This package is particularly useful after a major-version upgrade, which typically invalidates all PL/SQL and Java objects.

Syntax

Exec UTL_RECOMP.RECOMP_SERIAL ();

Example

SQL> Exec UTL_RECOMP.RECOMP_SERIAL ();

PL/SQL procedure successfully completed.

Note: Requires the SYS user to run this package.

UTLRP.SQL scripts

Definition

Recompiles all existing PL/SQL modules that were previously in an INVALID state, such as packages, procedures, and types.

Syntax
Located: $ORACLE_HOME/rdbms/admin

Example
SQL> @c:\oracle\product\10.1.0\db_1\rdbms\admin\UTLRP.SQL

TIMESTAMP
-----------------------------------------------------------------------

COMP_TIMESTAMP UTLRP_BGN 2007-08-04 12:47:21

PL/SQL procedure successfully completed.


TIMESTAMP
-----------------------------------------------------------------------

COMP_TIMESTAMP UTLRP_END 2007-08-04 12:47:26

PL/SQL procedure successfully completed.

Note: Requires the SYS user to run this script.
Recommended: after upgrading or migrating a database.

The best approach is to manually recompile all invalid objects.

Set heading off feedback off pagesize 0
Spool recompile.sql

Select 'alter ' || object_type || ' ' || object_name || ' compile;'
From user_objects
Where status <> 'VALID'
And object_type IN ('VIEW','SYNONYM',
'PROCEDURE','FUNCTION',
'PACKAGE','TRIGGER');

Spool off
@recompile.sql

Note: VIEW,SYNONYM,PROCEDURE,PACKAGE,FUNCTION,TRIGGER

Spool pkg_body.sql

Select 'alter package ' || object_name || ' compile body;'
From user_objects
Where status <> 'VALID'
And object_type = 'PACKAGE BODY';

Spool off
@pkg_body.sql

Spool undefined.sql

Select 'alter materialized view ' || object_name || ' compile;'
From user_objects
Where status <> 'VALID'
And object_type = 'UNDEFINED';

Spool off
@undefined.sql

Spool javaclass.sql

Select 'alter java class "' || object_name || '" resolve;'
From user_objects
Where status <> 'VALID'
And object_type = 'JAVA CLASS';

Spool off
@javaclass.sql


Spool typebody.sql

Select 'alter type ' || object_name || ' compile body;'
From user_objects
Where status <> 'VALID'
And object_type = 'TYPE BODY';

Spool off
@typebody.sql

Spool public_synonym.sql

Select 'alter public synonym ' || object_name || ' compile;'
From dba_objects
Where status <> 'VALID'
And owner = 'PUBLIC'
And object_type = 'SYNONYM';

Spool off
@public_synonym.sql

Objects that need recompiling are:
VIEW, SYNONYM, PUBLIC SYNONYM, PROCEDURE, FUNCTION, PACKAGE, PACKAGE BODY, TRIGGER, UNDEFINED (MATERIALIZED VIEW), JAVA CLASS, TYPE, TYPE BODY

Invalid Public Synonyms  

Posted by Mawahid

This article shows how to recompile public synonyms in an Oracle database that are currently invalid.
Public synonyms may show as invalid in DBA_OBJECTS even though they are actually valid and working correctly. This often occurs after the target object is modified, or dropped and recreated. When this occurs, Oracle flags the public synonym as invalid as a precaution so it knows to re-validate/compile it the next time it is used.

To view a list of invalid public synonyms:

SELECT *
  from dba_synonyms
where owner = 'PUBLIC'
   and synonym_name in (
          SELECT object_name
            from dba_objects
           where object_type = 'SYNONYM'
             and owner  =  'PUBLIC'
             and status <> 'VALID'
             );

Don't worry too much about invalid public synonyms: the database will automatically re-validate them and change the status to VALID when they are next accessed. This occurs transparently, so your users won't notice anything apart from the slight delay while Oracle does its thing.
However, if you don't like invalid objects in your nice clean database, you can run this script to force any invalid public synonyms to be compiled/revalidated.

    -- force invalid public synonyms to recompile by "touching" them
    -- using this query for each public synonym ...
    -- 
SELECT count(*) temp_count from <OBJECT_NAME> where 1=2
    -- note that  "1=2" is used to make the queries run very fast while still forcing the public synonyms to be validated


BEGIN
        -- loop through all invalid public synonyms and

        -- generate a SELECT statement that will "touch" the object
   FOR c IN (SELECT    'select count(*) temp_count from '
                    || '"' || synonym_name || '"'
                    || ' where 1=2' synonym_sql
               from DBA_SYNONYMS
              where owner = 'PUBLIC'
                and synonym_name in (
                       SELECT object_name        -- invalid public synonyms
                         from DBA_OBJECTS
                        where object_type = 'SYNONYM'
                          AND owner = 'PUBLIC'
                          AND status <> 'VALID')
             )
  LOOP
        -- run the "touch" query to force the public synonym to re-validate if possible
      EXECUTE IMMEDIATE (c.synonym_sql);
  END LOOP;
END;
/


   -- after the script has completed, the Public Synonyms should now be in a valid state
    -- show invalid public synonyms
set linesize 300
set pagesize 5000
column db_link format a30

SELECT *
  from dba_synonyms
where owner = 'PUBLIC'
   and synonym_name in (
          SELECT object_name
            from dba_objects
           where object_type = 'SYNONYM'
             and owner  =  'PUBLIC'
             and status <> 'VALID'
             );

Scheduled Jobs Hanging  

Posted by Mawahid

Problem:

Jobs are not executing automatically at their scheduled intervals, and next_date is not updated even though the jobs run fine manually.

Resolution:

1. If a job is not executing automatically, it is not entering the queue in the first place.

2. Any job scheduled through dbms_job runs in the background, meaning helper processes (ora_j000, etc.) run it, so it is the job queue coordinator's (the CJQ0 background process) responsibility to create helper processes to execute scheduled jobs. In this case, check at OS level whether the helper processes exist.

ps -ef | grep <sid> | grep ora_j

If this returns nothing, CJQ0 is not doing its job.

3. Find the cjq0 process with the query below and kill it at OS level (e.g. kill -9 <spid> on Unix).

SELECT A1.SID, A2.SPID FROM V$SESSION A1,V$PROCESS A2
WHERE A1.PADDR = A2.ADDR AND TYPE='BACKGROUND' AND A1.PROGRAM LIKE '%CJQ%';

4. Set job_queue_processes=0; this kills all the helper processes (ora_j001, ora_j002, etc.). After a couple of minutes, set it back to the old value.

5. Once it is set to a value > 0, Oracle automatically recreates these processes, and scheduled jobs start executing again according to their intervals.
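Steps 4 and 5 together look like this (the restored value of 10 is an assumption; use whatever your original setting was):

SQL> SHOW PARAMETER job_queue_processes
SQL> ALTER SYSTEM SET job_queue_processes=0;   -- helper processes (ora_j00n) die off
-- wait a couple of minutes, then restore the old value
SQL> ALTER SYSTEM SET job_queue_processes=10;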

6. If you still see problems, check dba_jobs_running for any locks; if there are, try to kill or release them.

Resetting Sequence  

Posted by Mawahid

Problem:

After a refresh or migration, users see that their sequence's last number has been reset to 1 (or some other value smaller than MAX(id) of the table the sequence feeds), and hence they are not able to insert data.

Resolution:

Suppose there is a column tag_id in a table (TAG) which should be unique. Some applications are designed so that data is inserted using a sequence number. In such a case, the sequence's last_number should be greater than MAX(tag_id) for inserts to succeed.

Here the last_number of the sequence was reset to 1, because of which users were unable to insert data: inserts fail because a tag_id with value 1 already exists.

            select sequence_name,last_number from dba_sequences where sequence_name='<sequence_name>';
           
            select max(tag_id) from TAG;

Here TAG is the table fed by the sequence.

Check the metadata of the sequence; the increment is set to 1.

            alter sequence <sequence_name> increment by 26589;  -- a value greater than max(tag_id), or whatever matches production

Now check its nextval

            select <sequence_name>.nextval from dual;

Now change the increment back to 1, to match the original metadata.

            alter sequence <sequence_name> increment by 1;
or
            alter sequence <sequence_name> increment by -2;   -- negative values can also be used, to adjust downwards

Now the last_number is set as required.

           
select sequence_name,last_number from dba_sequences where sequence_name='<sequence_name>';
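Putting the steps above together, an end-to-end sketch (the sequence name TAG_SEQ and the bump value are illustrative):

select max(tag_id) from TAG;                    -- suppose this returns 26588
alter sequence TAG_SEQ increment by 26589;      -- jump past the current maximum
select TAG_SEQ.nextval from dual;               -- consume one value so the jump takes effect
alter sequence TAG_SEQ increment by 1;          -- restore the original increment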