Digital IC Design Verification: Writing Testbenches, 2nd Edition
"Writing Testbenches 2nd" 是一本关于数字集成电路设计验证的经典书籍,作者Janick Bergeron在Qualis Design Corporation工作。这本书是第二版,专注于Verilog语言,同时也有VHDL和SystemVerilog版本供不同需求的读者选择。
在数字集成电路设计中,测试平台(Testbench)的编写是验证硬件描述语言(HDL,如Verilog)模型功能是否正确的重要环节。这本书详细介绍了如何构建有效的测试平台来进行功能验证,确保设计的IC在实际应用中能够按照预期工作。测试平台通常用于模拟真实环境,通过提供输入信号并捕获输出结果,来评估设计的功能性和性能。
第一版到第二版的更新可能包括了新的验证方法、技术进步以及对Verilog语言的深入解析。Janick Bergeron作为作者,可能分享了他在验证领域的专业经验和最佳实践,帮助读者提高验证效率,避免常见的设计错误。
The book covers several key topics, for example:
1. **Verification basics**: the fundamental concepts and flow of verification, including why verification is essential and how to plan and organize the verification effort.
2. **Verilog testbench design**: how to build modular, reusable testbenches in Verilog that can adapt to changing design requirements.
3. **Stimulus generation**: how to create random or deterministic stimulus sequences that thoroughly cover the design's behaviors.
4. **Assertions and coverage**: how to use assertions to define expected behavior and how to measure the completeness of the verification.
5. **Environment construction**: how to build complex verification environments, including components such as stimulus generators, agents, monitors and DUT interfaces.
6. **Advanced verification techniques**: possibly including methodology in the spirit of UVM (Universal Verification Methodology) and the use of OOP (object-oriented programming) principles in verification design.
7. **Debugging and problem solving**: how to debug testbench and design problems effectively, and how to record and report verification results.
8. **Case studies**: concrete examples and projects that help readers understand and apply what they have learned.
For professionals working in IC design and verification, and especially for beginners, this book is a valuable resource. It provides not only theory but also an emphasis on practical experience, helping readers strengthen their skills in digital IC verification.
What is Verification?
…model of the universe as far as the design is concerned. The verification challenge is to determine what input patterns to supply to the design and what is the expected output of a properly-working design when submitted to those input patterns.
Figure 1-1. Generic structure of a testbench and design under verification (the testbench encloses the design under verification).
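To make the structure in Figure 1-1 concrete, here is a minimal sketch of a Verilog testbench, not taken from the book: it instantiates a hypothetical design under verification (a trivial 8-bit adder invented for illustration), supplies input patterns and compares the output against the expected result of a properly-working design.

```verilog
// Hypothetical design under verification: a trivial 8-bit adder.
module duv (
  input  wire [7:0] a,
  input  wire [7:0] b,
  output wire [8:0] sum
);
  assign sum = a + b;
endmodule

// Minimal testbench: generate input patterns, predict the expected
// output, and compare it against the response of the design.
module tb;
  reg  [7:0] a, b;
  wire [8:0] sum;

  duv dut (.a(a), .b(b), .sum(sum));

  task check(input [7:0] ia, input [7:0] ib);
    begin
      a = ia;
      b = ib;
      #10; // let the combinational output settle
      if (sum !== ia + ib)
        $display("ERROR: %0d + %0d produced %0d, expected %0d",
                 ia, ib, sum, ia + ib);
    end
  endtask

  initial begin
    check(8'd0, 8'd0);
    check(8'd255, 8'd1); // exercises the carry output
    check(8'd100, 8'd55);
    $display("Simulation complete");
    $finish;
  end
endmodule
```

Even in this trivial form, the testbench plays the role described above: it is the only universe the design ever sees.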
THE IMPORTANCE OF VERIFICATION
Most books focus on syntax, semantics and the RTL subset.
If you look at a typical book on Verilog or VHDL, you will find that most of the chapters are devoted to describing the syntax and semantics of the language. You will also invariably find two or three chapters on synthesizeable coding style or the Register Transfer Level (RTL) subset.
Most often, only a single chapter is dedicated to testbenches. Very little can be adequately explained in one chapter and these explanations are usually very simplistic. In nearly all cases, these books limit the techniques described to applying simple sequences of vectors in a synchronous fashion. The output is then verified using a waveform viewing tool. Most books also take advantage of the topic to introduce the file input mechanisms offered by the language, devoting yet more content to detailed syntax and semantics.
Given the significant proportion of literature devoted to writing synthesizeable VHDL or Verilog code compared to writing testbenches to verify their functional correctness, you could be tempted to conclude that the former is a more daunting task than the latter. The evidence found in all hardware design teams points to the contrary.
70% of design effort goes to verification.
Today, in the era of multi-million gate ASICs, reusable intellectual property (IP), and system-on-a-chip (SoC) designs, verification consumes about 70% of the design effort. Design teams, properly staffed to address the verification challenge, include engineers dedicated to verification. The number of verification engineers can be up to twice the number of RTL designers.
Verification is on the critical path.
Given the amount of effort demanded by verification, the shortage of qualified hardware design and verification engineers, and the quantity of code that must be produced, it is no surprise that, in all projects, verification rests squarely in the critical path. The fact that verification is often considered after the design has been completed, when the schedule has already been ruined, compounds the problem. It is also the reason verification is currently the target of new tools and methodologies. These tools and methodologies attempt to reduce the overall verification time by enabling parallelism of effort, higher abstraction levels and automation.
Verification time can be reduced through parallelism.
If efforts can be parallelized, additional resources can be applied effectively to reduce the total verification time. For example, digging a hole in the ground can be parallelized by providing more workers armed with shovels. To parallelize the verification effort, it is necessary to be able to write—and debug—testbenches in parallel with each other as well as in parallel with the implementation of the design.
Verification time can be reduced through abstraction.
Providing higher abstraction levels enables you to work more efficiently without worrying about low-level details. Using a backhoe to dig the same hole mentioned above is an example of using a higher abstraction level.
Using abstraction reduces control over low-level details.
Higher abstraction levels are usually accompanied by a reduction in control and therefore must be chosen wisely. These higher abstraction levels often require additional training to understand the abstraction mechanism and how the desired effect can be produced. Using a backhoe to dig a hole suffers from the same loss-of-control problem: The worker is no longer directly interacting with the dirt; instead the worker is manipulating levers and pedals. Digging happens much faster, but with lower precision and only by a trained operator. The verification process can use higher abstraction levels by working at the transaction- or bus-cycle levels (or even higher ones), instead of always dealing with low-level zeroes and ones.
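As a hedged illustration of what working at the bus-cycle or transaction level can look like (the bus protocol, signal names and timing below are invented for this sketch and do not come from the book), a Verilog bus-functional task lets a test state what to write where, instead of toggling individual signals on every clock edge:

```verilog
// Sketch of raising the abstraction level in a testbench: the test
// calls a transaction-level task instead of wiggling pins directly.
// The bus protocol, signal names and timing are hypothetical.
module tb_abstraction;
  reg        clk = 0;
  reg        req;
  reg        rw;       // 1 = write
  reg [7:0]  addr;
  reg [15:0] wdata;

  always #5 clk = ~clk;

  // Bus-functional task: one "write" transaction hides the
  // cycle-by-cycle signal activity of the (hypothetical) bus.
  task bus_write(input [7:0] a, input [15:0] d);
    begin
      @(posedge clk);
      req   <= 1'b1;
      rw    <= 1'b1;
      addr  <= a;
      wdata <= d;
      @(posedge clk);
      req   <= 1'b0;
      $display("WRITE addr=%h data=%h at %0t", a, d, $time);
    end
  endtask

  initial begin
    req = 0;
    // The test reads at the transaction level: what is written where,
    // not which signal toggles on which clock edge.
    bus_write(8'h10, 16'hCAFE);
    bus_write(8'h11, 16'hBEEF);
    #20 $finish;
  end
endmodule
```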
Verification time can be reduced through automation.
Automation lets you do something else while a machine completes a task autonomously, faster and with predictable results. Automation requires standard processes with well-defined inputs and outputs. Not all processes can be automated. For example, holes must be dug in a variety of shapes, sizes, depths, locations and in varying soil conditions, which render general-purpose automation impossible.
Verification faces similar challenges. Because of the variety of functions, interfaces, protocols and transformations that must be verified, it is not possible to provide a general-purpose automation solution for verification, given today's technology. It is possible to automate some portion of the verification process, especially when applied to a narrow application domain. For example, trenchers have automated digging holes used to lay down conduits or cables at shallow depths. Tools automating various portions of the verification process are being introduced. For example, there are tools that will automatically generate bus-functional models from a higher-level abstract specification.
Randomization can be used as an automation tool.
For specific domains, automation can be emulated using randomization. By constraining a random generator to produce valid inputs within the bounds of a particular domain, it is possible to automatically produce almost all of the interesting conditions. For example, the tedious process of vacuuming the bottom of a pool can be automated using a broom head that, constrained by the vertical walls, randomly moves along the bottom. After a few hours, only the corners and a few small spots remain to be cleaned manually. This type of automation process takes more computation time to achieve the same result, but it is completely autonomous, freeing valuable resources to work on other critical tasks. Furthermore, this process can be parallelized easily by concurrently running several random generators (optimizing these concurrent processes to reduce the amount of overlap is another question!). They can also operate overnight, increasing the total number of productive hours.
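The following is a minimal sketch of constrained randomization in plain Verilog (the packet fields, legal ranges and the commented-out driver call are assumptions made for illustration, not the book's example): the random generator is bounded so that it produces only valid stimulus for the domain being exercised.

```verilog
// Constrained randomization sketch: {$random(seed)} yields an
// unsigned value, which the modulo operation bounds to the legal
// range of the (hypothetical) interface.
module tb_random;
  integer    seed = 42;
  integer    i;
  reg [7:0]  length;
  reg [3:0]  port;

  initial begin
    for (i = 0; i < 20; i = i + 1) begin
      // Constrain the values: packet length 1..64, port number 0..9.
      length = ({$random(seed)} % 64) + 1;
      port   = {$random(seed)} % 10;
      $display("packet %0d: length=%0d port=%0d", i, length, port);
      // apply_packet(length, port);  // hypothetical driver call
    end
  end
endmodule
```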
RECONVERGENCE MODEL
Do you know what you are actually verifying?
The reconvergence model is a conceptual representation of the verification process. It is used to illustrate what exactly is being verified.
One of the most important questions you must be able to answer is:
What are you verifying? The purpose of verification is to ensure
that the result of some transformation is as intended or as expected.
For example, the purpose of balancing a checkbook is to ensure that
all transactions have been recorded accurately and confirm that the
balance in the register reflects the amount of available funds.
Figure 1-2. Reconvergent paths in verification (a Transformation path and a Verification path from a common source).
Verification is the reconciliation, through different means, of a specification and an output.
Figure 1-2 shows that verification of a transformation can be accomplished only through a second reconvergent path with a common source. The transformation can be any process that takes an input and produces an output. RTL coding from a specification, insertion of a scan chain, synthesizing RTL code into a gate-level netlist and layout of a gate-level netlist are some of the transformations performed in a hardware design project. The verification process reconciles the result with the starting point. If there is no starting point common to the transformation and the verification, no verification takes place.
The reconvergent model can be described using the checkbook example as illustrated in Figure 1-3. The common origin is the previous month's balance in the checking account. The transformation is the writing, recording and debiting of several checks during a one-month period. The verification reconciles the final balance in the checkbook register using this month's bank statement.
Figure 1-3. Balancing a checkbook is a verification process (balance from last month's statement → recording checks / reconciliation → balance from latest statement).
THE HUMAN FACTOR
If the transformation process is not completely automated from end to end, it is necessary for an individual (or group of individuals) to interpret a specification of the desired outcome and then perform the transformation. RTL coding is an example of this situation. A design team interprets a written specification document and produces what they believe to be functionally correct synthesizeable HDL code. Usually, each engineer is left to verify that the code written is indeed functionally correct.
Figure 1-4. Reconvergent paths in an ambiguous situation (Specification → Interpretation → RTL coding).
Verifying your own design verifies against your interpretation, not against the specification.
Figure 1-4 shows the reconvergent path model of the situation described above. If the same individual performs the verification of the RTL coding that initially required interpretation of a specification, then the common origin is that interpretation, not the specification.
In this situation, the verification effort verifies whether the design accurately represents the implementer's interpretation of that specification. If that interpretation is wrong in any way, then this verification activity will never highlight it.
Any human intervention in a process is a source of uncertainty and unrepeatability. The probability of human-introduced errors in a process can be reduced through several complementary mechanisms: automation, poka-yoke or redundancy.
Automation
Eliminate human intervention.
Automation is the obvious way to eliminate human-introduced errors in a process. Automation takes human intervention completely out of the process. However, automation is not always possible, especially in processes that are not well-defined and continue to require human ingenuity and creativity, such as hardware design.
Poka-Yoke
Make human intervention foolproof.
Another possibility is to mistake-proof the human intervention by reducing it to simple and foolproof steps. Human intervention is needed only to decide on the particular sequence or steps required to obtain the desired results. This mechanism is also known as poka-yoke in Total Quality Management circles. It is usually the last step toward complete automation of a process. However, just like automation, it requires a well-defined process with standard transformation steps. The verification process remains an art that, to this day, does not lend itself to well-defined steps.
Redundancy
Have two individuals check each other's work.
The final alternative for removing human errors is redundancy. It is the simplest, but also the most costly mechanism. Redundancy requires every transformation resource to be duplicated. Every transformation accomplished by a human is either independently verified by another individual, or two complete and separate transformations are performed with each outcome compared to verify that both produced the same or equivalent output. This mechanism is used in high-reliability environments, such as airborne and space systems. It is also used in industries where later redesign and replacement of a defective product would be more costly than the redundancy itself, such as ASIC design.
Figure 1-5. Redundancy in an ambiguous situation enables accurate verification (Specification → two independent Interpretations → RTL coding).
A different person should be in charge of verification.
Figure 1-5 shows the reconvergent paths model where redundancy is used to guard against misinterpretation of an ambiguous specification document. When used in the context of hardware design, where the transformation process is writing RTL code from a written specification document, this mechanism implies that a different individual must be in charge of the verification.
WHAT IS BEING VERIFIED?
Choosing the common origin and reconvergence points determines what is being verified. These origin and reconvergence points are often determined by the tool used to perform the verification. It is important to understand where these points lie to know which transformation is being verified. Formal verification, model checking, functional verification, and rule checkers verify different things because they have different origin and reconvergence points.
Formal Verification
Formal verification does not eliminate the need to write testbenches.
Formal verification is often misunderstood initially. Engineers unfamiliar with the formal verification process often imagine that it is a tool that mathematically determines whether their design is correct, without having to write testbenches. Once you understand what the end points of the formal verification reconvergent paths are, you know what exactly is being verified.
The application of formal verification falls under two broad categories: equivalence checking and model checking.
Equivalence Checking
Equivalence checking compares two models.
Figure 1-6 shows the reconvergent path model for equivalence checking. This formal verification process mathematically proves that the origin and output are logically equivalent and that the transformation preserved its functionality.
Figure 1-6. Equivalence checking paths (RTL or Netlist → Synthesis → RTL or Netlist, reconciled by Equivalence Checking).
It can compare two netlists.
In its most common use, equivalence checking compares two netlists to ensure that some netlist post-processing, such as scan-chain insertion, clock-tree synthesis or manual modification (text editors remain the greatest design tools!), did not change the functionality of the circuit.
It can detect bugs in the synthesis software.
Another popular use of equivalence checking is to verify that the netlist correctly implements the original RTL code. If one trusted the synthesis tool completely, this verification would not be necessary. However, synthesis tools are large software systems that depend on the correctness of algorithms and library information. History has shown that such systems are prone to error. Equivalence checking is used to keep the synthesis tool honest. In some rare instances, this form of equivalence checking is used to verify that manually written RTL code faithfully represents a legacy gate-level design.
Less frequently, equivalence checking is used to verify that two RTL descriptions are logically identical, sometimes to avoid running lengthy regression simulations when only minor non-functional changes are made to the source code to obtain better synthesis results, or when a design is translated from one HDL to another.
Equivalence checking found a bug in an arithmetic operator.
Equivalence checking is a true alternative path to the logic synthesis transformation being verified. It is only interested in comparing Boolean and sequential logic functions, not mapping these functions to a specific technology while meeting stringent design constraints. Engineers using equivalence checking found a design at Digital Equipment Corporation (now part of HP) to be synthesized incorrectly. The design used a synthetic operator that was functionally incorrect when handling more than 48 bits. In the synthesis tool's defense, the documentation of the operator clearly stated that correctness was not guaranteed above 48 bits. Since the synthesis tool had no knowledge of the documentation, it could not know it was generating invalid logic. Equivalence checking quickly identified a problem that could have been very difficult to detect using gate-level simulation.
Model Checking
Model checking proves assertions about the behavior of the design.
Model checking is a more recent application of formal verification technology. In it, assertions or characteristics of a design are formally proven or disproved. For example, all state machines in a design could be checked for unreachable or isolated states. A more powerful model checker may be able to determine if deadlock conditions can occur.
Another type of assertion that can be formally verified relates to interfaces. Using a formal description language, assertions about the interfaces of a design are stated and the tool attempts to prove or disprove them. For example, an assertion might state that, whenever signal ALE is asserted, either the DTACK or the ABORT signal will eventually be asserted.
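As an illustration only (the book's Verilog edition does not use this notation), such an interface assertion could be captured in SystemVerilog Assertions. The ALE, DTACK and ABORT names come from the example above, while the clock and the checker module are assumptions of this sketch.

```systemverilog
// Hedged sketch of the ALE/DTACK/ABORT property in SystemVerilog
// Assertions; the clock name and module wrapper are assumptions.
module ale_checker (
  input wire clk,
  input wire ALE,
  input wire DTACK,
  input wire ABORT
);
  // Whenever ALE rises, DTACK or ABORT must be asserted on some
  // later clock cycle.
  property ale_eventually_acknowledged;
    @(posedge clk) $rose(ALE) |-> ##[1:$] (DTACK || ABORT);
  endproperty

  assert_ale_ack: assert property (ale_eventually_acknowledged)
    else $error("ALE was never followed by DTACK or ABORT");
endmodule
```

A model checker (or a simulator) can then attempt to prove or disprove this property against the RTL.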
Figure 1-7. Model checking paths (Specification → Interpretation → RTL coding and Assertions, reconciled by Model Checking).
Knowing which assertions to prove and expressing them correctly is the most difficult part.
The reconvergent path model for model checking is shown in Figure 1-7. The greatest obstacle for model checking technology is identifying, through interpretation of the design specification, which assertions to prove. Of those assertions, only a subset can be proven feasibly. Current technology cannot prove high-level assertions about a design to ensure that complex functionality is correctly implemented. It would be nice to be able to assert that, given specific register settings, a set of asynchronous transfer mode (ATM) cells will end up at a set of outputs in some relative order. Unfortunately, model checking technology is not at that level yet.
Functional Verification
Figure 1-8. Functional verification paths (Specification → RTL Coding → RTL; Verification reconciles the RTL with the Specification).
Functional verification verifies design intent.
The main purpose of functional verification is to ensure that a design implements intended functionality. As shown by the reconvergent path model in Figure 1-8, functional verification reconciles a design with its specification. Without functional verification, one must trust that the transformation of a specification document into RTL code was performed correctly, without misinterpretation of the specification's intent.
You can prove the presence of bugs, but you cannot prove their absence.
It is important to note that, unless a specification is written in a formal language with precise semantics (and even if such a language existed, one would eventually have to show that this description is indeed an accurate description of the design intent, based on some higher-level ambiguous specification), it is impossible to prove that a design meets the intent of its specification. Specification documents are written using natural languages by individuals with varying degrees of ability in communicating their intentions. Any document is open to interpretation. Functional verification, as a process, can show that a design meets the intent of its specification. But it cannot prove it. One can easily prove that the design does not implement the intended function by identifying a single discrepancy.