Trusted Computing Platforms: Design and Applications
In this chapter, we try to set the stage for our exploration of trusted com-
puting platforms. In Section 2.1, we consider the adversary, what abilities and
access he or she has, and what defensive properties a trusted computing platform
might provide. In Section 2.2, we examine some basic usage scenarios in which
these properties of a TCP can help secure distributed computations. Section 2.3
presents some example real-world applications that instantiate these scenarios.
Section 2.4 describes some basic ways a TCP can be positioned within a dis-
tributed application, and whose interests it can protect; Section 2.5 provides
some real-world examples. Finally, although this book is not about ideology,
the idealogical debate about the potential of industrial trusted computing efforts
is part of the picture; Section 2.6 surveys these issues.
In its classic conception, a trusted computing platform such as a secure
coprocessor is an armored box that does two things:
- It protects some designated data storage area against an adversary with
  certain types of direct physical access.
- It endows code executing on the platform with the ability to prove that it is
  running within an appropriate untampered environment.
What types of attacks the platform defends against, and exactly how code does
this attestation, are issues for the platform architect.
In an informal mental model of a distributed computing application, we map
computation and data to platforms distributed throughout physical space. Users
(including potential adversaries) are also distributed throughout this space. Co-
location of a user and a platform gives that user certain types of access to
that platform: through “ordinary” usage methods as well as malicious attack
methods (although the distinction between the two can sometimes reduce to how
well the designer anticipated things). A user can also reach a platform over a
network connection. However, in our mental model, direct co-location differs
qualitatively. To illicitly read a stored secret over the network, a user must find
some overlooked design or implementation flaw in the API. In contrast, when
the user is in front of the machine, he or she could just remove the hard disk.
Not every user can reach every location. The physical organization of space
can prevent certain types of access. For example, an enterprise might keep
critical servers behind a locked door. Sysadmins would be the only users with
“ordinary” access to this location, although cleaning staff might also have “or-
dinary” access unanticipated by the designers. Other users who wanted access
to this location would have to take some type of action—such as picking locks
or bribing the sysadmins—to circumvent the physical barriers.
The potential co-location of a user and a platform thus increases the potential
actions a user can take with that platform, and thus increases the potential
malicious actions a malicious user can take. The use of a trusted platform
reduces the potential of these actions. It is tempting to compare a trusted
platform to a virtual locked room: we move part of the computation away from
the user and into a virtual safe place. However, we must be careful to make
some distinctions. Some trusted computing platforms might be more secure
than a machine in a locked room, since many locks are easily picked. (As
Bennet Yee has observed, learning lockpicking was standard practice in the
CMU Computer Science Ph.D. program.) On the other hand, some trusted
computing platforms may be less secure than high-security areas at national
labs. A more fundamental problem with the locked room metaphor is that, in
the physical world, locked rooms exist before the computation starts, and are
maintained by parties that exist before computation starts. For example, a bank
will set up an e-commerce server in a locked room before users connect to it,
and it is the bank that sets it up and takes care of it. The trusted computing
platform’s “locked room” can be more subtle (as we shall discuss).
This discussion leaves us with the working definition: a TCP moves part
of the computation space co-located with the user into a virtual locked room,
not necessarily under any party’s control. In more concrete terms, this tool has
many potential uses, depending on what we put in this separate environment.
At an initial glance, we can look on these as a simple 2x2 taxonomy: secrecy
and/or authenticity, for data and/or code.
Since we initially introduced this locked room as a data storage area, the first
thing we might think of doing is putting data there. This gives secrecy of data.
If there is data we do not want the adversary to see, we can shelter it in the
TCP. Of course, for this protection to be meaningful, we also need to look at
how the data got there, and who uses it: the implicit assumption here is that the
code the TCP runs when it interacts with this secure storage is also trustworthy;
adversarial attempts to alter it will also result in destruction of the data.
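This coupling of secrecy and tamper response can be sketched in a few lines. The class below is a hypothetical model, not any real TCP's API: secrets live only in the protected store, and a detected tamper event zeroizes them before an adversary can read them.

```python
import secrets

class ShelteredStore:
    """Toy model of a TCP's protected storage (hypothetical API):
    secrets live inside, and a detected tamper event destroys them."""

    def __init__(self):
        self._secrets = {}

    def put(self, name, value):
        self._secrets[name] = value

    def get(self, name):
        return self._secrets.get(name)

    def on_tamper(self):
        # The defining behavior: tampering destroys the protected data,
        # so an adversary who opens the box learns nothing.
        self._secrets.clear()

store = ShelteredStore()
store.put("k", secrets.token_bytes(16))
assert store.get("k") is not None
store.on_tamper()
assert store.get("k") is None
```

The point of the model is the last two lines: after `on_tamper`, the secret is simply gone, which is what makes the implicit assumption above (tampering destroys the data) usable as a basis for trust.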
In Chapter 1, we discussed the difference between the terms “trustworthy”
and “trustable”. Just because the code in the TCP might be trustworthy, why
should a relying party trust it? Given the above implicit assumption—tampering
code destroys the protected data—we can address this problem by letting the
code prove itself via use of a key sheltered in the protected area, thus giving us
authenticity of code.
In perhaps the most straightforward approach, the TCP would itself generate
an RSA key pair, save the private key in the protected memory, and release
the public key to a party who could sign a believable certificate attesting to the
fact that the sole entity who knows the corresponding private key is that TCP,
in an untampered state. This approach is straightforward, in that it reduces
the assumptions that the relying party needs to accept. If the TCP fails to be
trustworthy or the cryptosystem breaks, then hope is lost. Otherwise, the relying
party only needs to accept that the CA made a correct assertion.
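The flow is easy to sketch. The code below uses textbook RSA with tiny, well-known demonstration primes, so it is emphatically not secure; it only illustrates the division of labor: the private exponent never leaves the (simulated) TCP, while the relying party verifies with the certified public key.

```python
# Toy illustration of TCP-generated keypair attestation.
# Textbook RSA with tiny primes -- NOT secure, purely to show the flow.
p, q = 61, 53
n = p * q            # 3233
e = 17
d = 2753             # e*d = 1 mod 3120, so signing and verifying invert each other

def tcp_sign(m):
    # Inside the TCP: m is a small integer "digest"; d never leaves the box
    return pow(m, d, n)

def relying_party_verify(m, sig, public=(e, n)):
    # Outside: the relying party holds only the certified public key
    ev, nv = public
    return pow(sig, ev, nv) == m

digest = 65
sig = tcp_sign(digest)
assert relying_party_verify(digest, sig)
```

If the TCP's tamper response has destroyed `d`, no such signature can be produced, which is exactly the property the relying party is counting on.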
Another public key approach involves having an external party generate the
key pair and inject the private key, and perhaps escrow it as well. Symmetric
key approaches can also work, although the logic can be more complex. For
example, if the TCP uses a symmetric key as the basis for an HMAC to prove
itself, the relying party must also know the symmetric key, which then requires
reasoning about the set of parties who know the key, since this set is no longer
just the TCP itself.
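A minimal sketch of such a symmetric challenge-response, using the standard HMAC construction, makes the complication concrete: the verification function needs the very same key the TCP uses, so every holder of that key could impersonate the TCP.

```python
import hmac, hashlib, secrets

# The relying party must hold the same key as the TCP, so trust now
# depends on reasoning about everyone who shares it.
shared_key = secrets.token_bytes(32)

def tcp_respond(challenge: bytes) -> bytes:
    # Inside the TCP: prove knowledge of the key by MACing a fresh challenge
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def relying_party_check(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = secrets.token_bytes(16)
assert relying_party_check(nonce, tcp_respond(nonce))
```

Contrast this with the public key version: there, the verifier holds only the public key, and the set of parties who can produce a valid response remains just the untampered TCP.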
Once we have set up the basis for untampered computation within the TCP to
authenticate itself to an outside party—because, under our model, attack would
have destroyed the keys—we can use this ability to let the computation attest
to other things, such as data stored within the
TCP. This gives us authenticity
of data. We can transform a TCP’s ability to hide data from the adversary into
an ability to retain and transmit data whose values may be public—but whose
authenticity is critical.
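This transformation can be sketched with the same textbook RSA parameters used for attestation; again, the tiny modulus and the naive digest reduction make this an illustration only. The data itself travels in the clear; only the signing exponent stays sheltered.

```python
import hashlib

# Toy sketch: the TCP publishes data in the clear but attests to its
# authenticity with a signature from its sheltered key.
# Textbook RSA with tiny primes -- illustration of the flow only.
n, e, d = 3233, 17, 2753

def tcp_attest(data: bytes) -> int:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n  # toy reduction
    return pow(digest, d, n)    # d stays inside the protected area

def verify(data: bytes, sig: int) -> bool:
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(sig, e, n) == digest

record = b"balance=100"
sig = tcp_attest(record)
assert verify(record, sig)               # public value, provable authenticity
assert not verify(record, (sig + 1) % n) # a forged signature fails
```

Note what changed: nothing about `record` is secret; the sheltered key is being spent on integrity rather than confidentiality.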
Above, we discussed secrecy of data. However, in some sense, code is data.
If the hardware architecture permits, the TCP can execute code stored in the
protected storage area, thus giving us secrecy of code. Carrying this out in
practice can be fairly tricky; often, designers end up storing encrypted code in
a non-protected area, and using keys in the protected area to decrypt and check
integrity. (Chapter 6 will discuss this further.) An even simpler approach in this
vein is to consider the main program public, but (in the spirit of Kerckhoffs'
law) isolate a few key parameters and shelter them in the protected storage.
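The encrypted-code pattern can be sketched as follows. The XOR "cipher" built from a hash keystream is a teaching device, not a real design, and the function names are hypothetical; the structure is what matters: ciphertext and MAC tag sit in unprotected storage, the keys sit in the protected area, and the TCP checks integrity before it decrypts and executes anything.

```python
import hashlib, hmac, secrets

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream from a hash counter -- for illustration only
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(code: bytes, enc_key: bytes, mac_key: bytes):
    # Ciphertext and tag can live in unprotected storage
    ct = bytes(a ^ b for a, b in zip(code, keystream(enc_key, len(code))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct, tag

def load_and_run(ct: bytes, tag: bytes, enc_key: bytes, mac_key: bytes):
    # Inside the TCP: check integrity first, then decrypt and execute
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("tampered code image")
    code = bytes(a ^ b for a, b in zip(ct, keystream(enc_key, len(ct))))
    env = {}
    exec(code.decode(), env)   # the secret program runs only inside the TCP
    return env

enc_key, mac_key = secrets.token_bytes(32), secrets.token_bytes(32)
ct, tag = seal(b"answer = 6 * 7", enc_key, mac_key)
assert load_and_run(ct, tag, enc_key, mac_key)["answer"] == 42
```

The integrity check before decryption reflects the point above: secrecy of code is only meaningful if the adversary also cannot substitute code of his or her own choosing.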
However, looking at the potential taxonomy simply in terms of a 2x2 ma-
trix overlooks the fact that a TCP does not have to be just a passive receptacle
for data; it can actively participate in the computation.