The partitioning framework [31]
// Divide the ORAM into √N partitions of size O(√N).
Read(u):
• Look up the position map and determine that u is assigned to partition p.
• If u is not found in the eviction caches:
  – ReadPartition(p, u)
  Else if u is found in the local eviction caches:
  – ReadPartition(p, ⊥) // read dummy
• Pick a random partition p′, add the block identified by u to the eviction caches, and logically assign u to partition p′.
• Call Evict ν times where ν > 1 is the eviction rate.
Write(u, B):
  Same as Read(u), except that the block written to the eviction cache is replaced with the new block.
Evict:
• Pick a random partition p.
• If a block B exists in the eviction cache assigned to partition p, call WritePartition(p, B).
• Else, call WritePartition(p, ⊥), where ⊥ represents a dummy block.
Figure 1: The SSS partitioning framework [31]. Our construction uses this framework to express ORAM Read and Write operations in terms of the ReadPartition and WritePartition operations of our multi-cloud construction.
A background eviction process evicts blocks from the eviction cache back to the server in an oblivious manner. With every data access, the client randomly selects 2 partitions for eviction. If a block in the eviction cache is assigned to a chosen partition, a real block is evicted; otherwise, a dummy block is evicted to prevent leakage.
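To make this control flow concrete, the following Python sketch mirrors the client-side Read, Write, and Evict operations of Figure 1. It is a minimal sketch under our own naming assumptions: the backend object with read_partition and write_partition methods stands in for the ReadPartition and WritePartition operations and is not the paper's API.

import random

class PartitionClient:
    """Client-side sketch of the partitioning framework of Figure 1."""

    def __init__(self, num_partitions, backend, eviction_rate=2):
        self.position_map = {}       # block id -> currently assigned partition
        self.eviction_cache = {}     # block id -> block data held locally
        self.num_partitions = num_partitions
        self.backend = backend       # illustrative: read_partition / write_partition
        self.eviction_rate = eviction_rate

    def read(self, u):
        # Assumes every block id already has an assigned partition.
        p = self.position_map[u]
        if u in self.eviction_cache:
            self.backend.read_partition(p, None)        # dummy read
            block = self.eviction_cache.pop(u)
        else:
            block = self.backend.read_partition(p, u)   # real read
        # Reassign u to a fresh random partition and cache the block locally.
        p_new = random.randrange(self.num_partitions)
        self.position_map[u] = p_new
        self.eviction_cache[u] = block
        for _ in range(self.eviction_rate):
            self._evict()
        return block

    def write(self, u, block):
        # Same as read, except the cached copy is replaced by the new block.
        self.read(u)
        self.eviction_cache[u] = block

    def _evict(self):
        p = random.randrange(self.num_partitions)
        # Evict a real block assigned to partition p if one is cached,
        # otherwise evict a dummy so the cloud cannot tell the difference.
        victim = next((v for v in self.eviction_cache
                       if self.position_map[v] == p), None)
        if victim is not None:
            self.backend.write_partition(p, self.eviction_cache.pop(victim))
        else:
            self.backend.write_partition(p, None)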
2.2 Partition ORAM
Each partition is a fully functional ORAM in itself. Our
partition ORAM construction is based on the partition ORAM
of the SSS construction [31], which is a variant of the origi-
nal hierarchical construction [11], but with various optimiza-
tions geared towards maximal practical performance.
Each partition consists of L := (1/2) log N + 1 levels, where level i ∈ {0, 1, . . . , L − 1} can store at most 2^i real blocks, and 2^i or more dummy blocks. We refer to the largest level, i.e., level L − 1, as the top level.
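As a quick numeric check of these parameters (the value of N below is only an example):

import math

N = 2 ** 20                              # example ORAM capacity in blocks
L = int(0.5 * math.log2(N)) + 1          # L = 11 levels per partition
capacities = [2 ** i for i in range(L)]  # 1, 2, 4, ..., 1024 real blocks per level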
We extend the client's position map to store the position tuple (p, ℓ, offset) for each block, where p is the partition, ℓ is the level, and offset denotes the offset within the level.
Read. To read a block, the client first reads its position map and finds the position (p∗, ℓ∗, offset∗) of the block. Then, the client reads one block from each level of partition p∗. For level ℓ = ℓ∗, the client reads the block at offset∗. For the other levels, the client reads a random unread dummy block. If the block is found in the client's eviction cache, one dummy block is read from each level.
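A minimal sketch of this per-level read is given below. The level objects (with an index and a set of unread dummy offsets) and the fetch_block helper are our own illustrative assumptions:

import random

def read_from_partition(position_map, eviction_cache, u, levels, fetch_block):
    # `levels` are the filled levels of partition p*; one block is fetched
    # from every level so all reads look identical to the cloud.
    p_star, lvl_star, off_star = position_map[u]
    found_locally = u in eviction_cache
    result = eviction_cache.get(u)
    for lvl in levels:
        if not found_locally and lvl.index == lvl_star:
            result = fetch_block(lvl, off_star)          # the real block
        else:
            off = random.choice(sorted(lvl.unread_dummies))
            lvl.unread_dummies.remove(off)               # never reuse a dummy
            fetch_block(lvl, off)                        # downloaded and discarded
    return result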
Write. Writing back a block to a partition causes a reshuffling operation. Specifically, let 0, 1, . . . , ℓ denote consecutively filled levels such that level ℓ + 1 is empty. Writing back a block B causes levels 0, 1, . . . , ℓ to be reshuffled into level ℓ + 1. In the single-cloud SSS construction, the reshuffling is done by having the client download all blocks, permute them, re-encrypt them, and then write them back to the server.
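The sketch below captures this single-cloud reshuffle. The download, upload, reencrypt, and make_dummy helpers, as well as the exact level padding, are illustrative placeholders rather than the paper's interface:

import random

def write_back_and_reshuffle(partition, new_block, download, upload,
                             reencrypt, make_dummy):
    levels = partition.levels
    # Levels 0..l are consecutively filled; level l+1 is the first empty one.
    dest = next(i for i, lvl in enumerate(levels) if not lvl.filled)
    blocks = [new_block]
    for lvl in levels[:dest]:
        blocks.extend(b for b in download(lvl) if not b.is_dummy)
        lvl.filled = False                       # these levels become empty
    # Pad so the destination level holds 2^dest real and 2^dest dummy slots.
    while len(blocks) < 2 ** (dest + 1):
        blocks.append(make_dummy())
    random.shuffle(blocks)                       # secret permutation
    blocks = [reencrypt(b) for b in blocks]      # fresh ciphertexts hide the mapping
    upload(levels[dest], blocks)
    levels[dest].filled = True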
3. BASIC TWO-CLOUD CONSTRUCTION
IN THE SEMI-HONEST MODEL
We first explain a scheme that is secure when both clouds
are semi-honest. Then, in Section 4, we explain how to
use our commutative checksum-encryption construction to
achieve security against one malicious cloud (without know-
ing which one might be malicious).
3.1 Intuition
Reducing ORAM shuffling overhead. In existing single-
cloud ORAM schemes [30–32, 35], a major source of the
client bandwidth overhead stems from shuffling. Periodi-
cally, the client has to download a subset of data blocks
from the cloud, permute them, re-encrypt them locally, and
write them back.
In our multi-cloud ORAM construction, we delegate the shuffling work to the two clouds to minimize client-cloud bandwidth usage. For example, cloud S1 shuffles a subset of blocks, adds a layer of encryption (i.e., an "onion layer"), and sends them to S2. Later, when the client needs to read from that subset of blocks, it fetches them from S2. In this way, while cloud S1 knows the permutation used in the shuffling, it does not see which blocks the client requests from S2 later. In contrast, while S2 sees which shuffled blocks the client requests later, it has no knowledge of the permutation applied by S1. As a result, neither cloud learns any information about the client's logical (i.e., "unshuffled") access pattern. The details of inter-cloud shuffling are described in Section 3.5.2.
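The sketch below illustrates S1's side of one such shuffle. It assumes S1 holds a layer key it shares with the client, and it uses the cryptography library's Fernet cipher merely as a stand-in for the encryption scheme; the transport helper send_to_s2 is likewise illustrative.

import random
from cryptography.fernet import Fernet

def shuffle_and_forward(blocks, layer_key, send_to_s2):
    # Permute a subset of blocks, wrap each one in an additional
    # encryption layer ("onion layer"), and ship the result to S2.
    perm = list(range(len(blocks)))
    random.shuffle(perm)                 # permutation known to S1 (and the client)
    cipher = Fernet(layer_key)
    shuffled = [None] * len(blocks)
    for old_pos, new_pos in enumerate(perm):
        shuffled[new_pos] = cipher.encrypt(blocks[old_pos])   # add one layer
    send_to_s2(shuffled)
    return perm                          # S2 never learns this mapping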
Note that it is necessary for security to add onion encryption layers after every shuffle: if a block B gets shuffled from S1 to S2 and then back to S1, we do not want S1 to be able to link the two occurrences of the same block B based on the block's data. In the full online version [29], we introduce a background onion removal process to avoid adding an unbounded number of onion layers to a block.
Reducing ORAM read overhead. Existing single-cloud
ORAM schemes can also incur significant bandwidth over-
head for reading a data block in-between shufflings [30–32,
35]. For example, the client may need to read one or more
blocks from O(log N ) levels in a hierarchy.
In our multi-cloud construction, we are able to significantly reduce the bandwidth overhead for reading a block by doing the following. The client requests a set of (already shuffled) blocks from S1. S1 then randomly permutes them with a permutation known only by S1 and the client, and then S1 sends them to S2. Finally, the client requests and downloads only a single block from S2 by providing S2 with the permuted index of the requested block. The details of reading a block from multiple clouds are described in Section 3.5.1.
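A rough end-to-end sketch of this permuted read follows; the s1/s2 interfaces and the permutation seed shared between the client and S1 are our own illustrative assumptions:

import random

def permuted_read(wanted_idx, block_ids, s1, s2, shared_seed):
    # Client and S1 derive the same permutation from a shared seed,
    # so S2 never learns how positions map to requested blocks.
    perm = list(range(len(block_ids)))
    random.Random(shared_seed).shuffle(perm)

    # S1: fetch the (already shuffled) blocks, permute them, forward to S2.
    blocks = s1.fetch(block_ids)
    permuted = [None] * len(blocks)
    for old_pos, new_pos in enumerate(perm):
        permuted[new_pos] = blocks[old_pos]
    s2.receive(permuted)

    # Client: download only the single block it wants, by permuted index.
    return s2.get(perm[wanted_idx])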
3.2 Data Layout on Clouds
Our scheme leverages the partitioning framework as de-
scribed in Section 2.1. We divide our ORAM into O(
√
N)
partitions each of capacity O(
√
N). Each partition is di-
vided into L = O(log N ) levels, and each level resides on
one of the two clouds. Which level resides on which cloud