[Distribution of Convertible Data per Column]
USER.TABLE|COLUMN Convertible Truncation Lossy
-------------------------------------------------- ---------------- ---------------- ----------------
ACEREP.CXCOPYLOGMSG|MESSAGE 0 0 4
ACEREP.CXEXP|EXPSTRING 0 0 10
ACEREP.CXEXPORTLOGMSG|MESSAGE 0 0 1
ACEREP.CXKBSBLOB|KBSBLOB 0 0 23
ACEREP.CXPURGEKBSELEMENT043004|PURGE_LOG 17,804 0 0
...
SYS.HISTGRM$|EPVALUE 0 0 6
SYS.METASTYLESHEET|STYLESHEET 58 0 0
SYS.RULE$|CONDITION 23 0 0
SYS.SOURCE$|SOURCE 0 0 147
-------------------------------------------------- ---------------- ---------------- ----------------
[Indexes to be Rebuilt]
USER.INDEX on USER.TABLE(COLUMN)
-----------------------------------------------------------------------------------------
APPLSYS.FND_CONCURRENT_PROGRAMS_TL_U2 on APPLSYS.FND_CONCURRENT_PROGRAMS_TL(APPLICATION_ID)
...
GCTA.COUNTRY_CODE_IDX on GCTA.CFI_HTS_MASS_APPROVE(COUNTRY_CODE)
-----------------------------------------------------------------------------------------
The .txt file shows:
Time Started / Time Completed: duration of the Csscan run. Csscan fetches all character data, so the running time is
in most cases at least that of a full export.
[Database Size]: the size of the data within the database. The Expansion column (if applicable) gives an estimate of how
much more space you need in the current tablespaces when going to the new character set. The Tablespace Expansion for
tablespace X is calculated as the grand total of the differences between the byte length of a string converted to the target
character set and the original byte length of this string, over all strings scanned in tables in X. The distribution of values in blocks,
PCTFREE, free extents, etc., are not taken into account.
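The Tablespace Expansion figure described above can be approximated with a short sketch that sums the per-string byte-length differences. Python codec names stand in for Oracle character sets here (latin-1 for a single-byte set such as WE8ISO8859P1, utf-8 for AL32UTF8); this illustrates the arithmetic only, not Csscan's actual implementation.

```python
# Estimate the expansion in bytes for a set of scanned strings: the sum, over
# all strings, of (byte length in the target character set) minus (byte length
# in the current character set). Codec names are illustrative stand-ins.

def expansion_bytes(strings, from_cs="latin-1", to_cs="utf-8"):
    total = 0
    for s in strings:
        original = s.encode(from_cs)    # byte length in the current character set
        converted = s.encode(to_cs)     # byte length in the target character set
        total += len(converted) - len(original)
    return total

# 'é' is 1 byte in latin-1 but 2 bytes in utf-8, so each occurrence adds 1 byte:
# "café" grows by 1, "résumé" by 2, for a total expansion of 3 bytes.
print(expansion_bytes(["café", "résumé"]))
```

Pure ASCII strings contribute nothing to the total, which is why a mostly-ASCII database typically shows only a small Expansion figure.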
[Database Scan Parameters]: the parameters used to run Csscan
[Scan Summary]: gives you a direct indication of whether you can do a full exp/imp, use Csalter or "Alter Database Character Set"
as described in point D)
[Data Dictionary Conversion Summary]: gives an overview of the amount of Changeless, Convertible, Truncation and Lossy
data in the Data Dictionary.
[Application Data Conversion Summary]: gives an overview of the amount of Changeless, Convertible, Truncation and Lossy
data in User data.
[Distribution of Convertible Data per Table]: gives a breakdown on table basis.
[Distribution of Convertible Data per Column]: gives a breakdown on column basis.
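The Changeless, Convertible, Truncation and Lossy categories used throughout these summaries can be sketched for a single data cell as follows. This is an illustration using Python codecs as stand-ins for Oracle character sets (latin-1 for a single-byte set such as WE8ISO8859P1, utf-8 for AL32UTF8) and an assumed column byte limit; it mirrors the definitions, not Csscan internals.

```python
# Classify one data cell for a character set migration:
#   Lossy       - cannot be represented in the target character set at all
#   Truncation  - converts, but no longer fits the column's byte limit
#   Changeless  - identical byte representation before and after conversion
#   Convertible - converts correctly, but the bytes change

def classify(value, from_cs, to_cs, byte_limit):
    original = value.encode(from_cs)
    try:
        converted = value.encode(to_cs)
    except UnicodeEncodeError:
        return "Lossy"
    if len(converted) > byte_limit:
        return "Truncation"
    if converted == original:
        return "Changeless"
    return "Convertible"

# Going from latin-1 to utf-8 with a VARCHAR2(5 BYTE)-style limit:
print(classify("abc",   "latin-1", "utf-8", 5))     # ASCII: Changeless
print(classify("café",  "latin-1", "utf-8", 5))     # 5 utf-8 bytes: Convertible
print(classify("crème", "latin-1", "utf-8", 5))     # 6 utf-8 bytes: Truncation
print(classify("abc€",  "utf-8",   "latin-1", 5))   # no '€' in latin-1: Lossy
```

Only the Convertible, Truncation and Lossy cells require action during a migration; Changeless cells have the same bytes in both character sets.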
[Indexes to be Rebuilt]: lists which indexes are going to be affected by convertible data. The name of the section is a bit
misleading. When using full export/import there is nothing to do for those indexes. When using Csalter/"Alter Database Character Set"
together with a partial export/import it depends on the amount of 'convertible' data in the underlying columns. If only a few rows
in the underlying columns are 'convertible' then there is nothing to do (the indexes do not need to be rebuilt as such). But if you
have a lot of 'convertible' data in the underlying columns it might be a good idea to drop and recreate them after the import, simply
for performance reasons. The only exception is an index on a CHAR/NCHAR column that you need to adapt for "truncation". In
that case all key values of a CHAR/NCHAR index key have to be padded with blanks to the new length and it may be more
efficient to drop and recreate the index.
[Truncation Due To Character Semantics]: (not often seen) This section can appear if you use character semantics in the current
database. It identifies the number of data cells that would be truncated if they were converted to the target character set (for
example, by the SQL CONVERT function or another inline conversion process) before the database character set is updated with
the Csalter script. If the data conversion occurs after the database character set is changed (= you use export/import for the
convertible data), then this section can be ignored.
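The effect can be illustrated with a small sketch, assuming the byte limit of a VARCHAR2(n CHAR) column is n times the maximum character width of the database character set in effect at the moment of conversion. The widths below (1 byte for a single-byte set, 4 for AL32UTF8) and the codec names are illustrative stand-ins, not Csscan's algorithm.

```python
# A cell that fits its CHAR-semantics column AFTER the character set change can
# still be truncated if it is converted BEFORE the change, because the byte
# limit of a VARCHAR2(n CHAR) column is n * max_character_width of the
# character set that is current at conversion time.

def truncated_if_converted_early(value, n_char, old_max_width, new_max_width,
                                 to_cs="utf-8"):
    converted = value.encode(to_cs)
    limit_before_csalter = n_char * old_max_width  # e.g. 5 * 1 in a 1-byte set
    limit_after_csalter = n_char * new_max_width   # e.g. 5 * 4 in AL32UTF8
    return (len(converted) > limit_before_csalter
            and len(converted) <= limit_after_csalter)

# VARCHAR2(5 CHAR): "crème" is 6 bytes in utf-8 -- too big while the database
# is still single-byte (limit 5 bytes), fine once it is AL32UTF8 (limit 20).
print(truncated_if_converted_early("crème", 5, 1, 4))
```

This is why converting such data after Csalter has changed the character set (via export/import) avoids the truncation that an inline conversion beforehand would cause.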
A.3) The .err file.
The output in the .err file depends on the CAPTURE=Y or CAPTURE=N Csscan parameter.
If you use CAPTURE=N then the .err file will only log rows that are Lossy or Truncation. Convertible data is not logged in the .err
file.
If you use CAPTURE=Y then the .err file will log rows that are Convertible, Lossy or Truncation. Using CAPTURE=Y may also
increase the running time of Csscan (and the space needed for the Csmig tables) if there are a lot of 'Convertible' entries, since for
every 'Convertible' row an insert is done into a table in the csmig schema.
The SUPPRESS parameter limits the size of the .err file by limiting the amount of information logged per table. Using SUPPRESS=1000
will log at most 1000 rows per table in the .err file and also reduce the space needed for the Csmig tables.
It will not affect the information in the .txt file.
This parameter may be useful for the first scan done on big databases to limit the size of the .err file and the Csmig tables.