Example: Core Patterns
• A colossal pattern has far more core patterns than a small-sized pattern.
• A colossal pattern has far more core descendants of a smaller size c.
• A random draw from the complete set of patterns of size c is more likely to pick a core descendant of a colossal pattern.
• A colossal pattern can be generated by merging a set of core patterns.
Translate and explain this.
Posted: 2024-03-31 22:38:49
This example illustrates that a colossal pattern has far more core patterns than a small pattern, and that a colossal pattern can be generated by merging a set of its core patterns. The author gives a transaction dataset together with its core pattern sets. In this example, the colossal pattern "abcef" has many core patterns, such as (ab), (ac), (af), (ae), (bc), (bf), (be), (ce), (fe), (e), (abc), (abf), (abe), (ace), (acf), (afe), (bcf), (bce), (bfe), (cfe), (abcf), (abce), (bcfe), (acfe), (abfe), (abcef). In contrast, the small pattern "abe" has only a few core patterns: (abe), (ab), (be), (ae), (e). This shows that colossal patterns tend to have many more core patterns, which makes them more general and more stable. The author also points out that a random draw from the complete set of patterns of size c is more likely to pick a core descendant of a colossal pattern, which again reflects the robustness of colossal patterns.
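As a rough illustration of the core-pattern idea behind this example, here is a small Python sketch that enumerates the τ-core patterns of a pattern over a toy transaction database. The database and the helper names (`support_set`, `core_patterns`) are hypothetical, not from the slides; a subset β ⊆ α counts as a τ-core pattern of α when |D_α| / |D_β| ≥ τ.

```python
from itertools import combinations

def support_set(db, pattern):
    """Indices of transactions that contain every item of the pattern."""
    return {i for i, t in enumerate(db) if pattern <= t}

def core_patterns(db, alpha, tau):
    """Non-empty subsets beta of alpha with |D_alpha| / |D_beta| >= tau."""
    d_alpha = len(support_set(db, alpha))
    items = sorted(alpha)
    cores = []
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            beta = set(combo)
            if d_alpha / len(support_set(db, beta)) >= tau:
                cores.append(beta)
    return cores

# Hypothetical toy transaction database (not the dataset from the slides).
db = [{'a', 'b', 'e'}, {'a', 'b', 'e'}, {'a', 'e'}, {'b', 'e'}]
print(core_patterns(db, {'a', 'b', 'e'}, tau=0.5))
```

With τ = 0.5, every non-empty subset of "abe" qualifies here, because each subset's support is at most twice the support of "abe" itself.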
Related question
Robustness of Colossal Patterns
• (d,τ)-robustness: A pattern α is (d,τ)-robust if d is the maximum number of items that can be removed from α such that the resulting pattern is still a τ-core pattern of α.
• A (d,τ)-robust pattern α has Ω(2^d) core patterns.
• A colossal pattern tends to have a large number of core patterns.
• Pattern distance: For patterns α and β with support sets D_α and D_β, the pattern distance is defined as Dist(α, β) = 1 − |D_α ∩ D_β| / |D_α ∪ D_β|.
• If two patterns α and β are both core patterns of the same pattern, they are bounded by a "ball" whose radius is specified by their core ratio τ: Dist(α, β) ≤ 1 − 1/(2/τ − 1) = r(τ).
• Once we identify one core pattern, we can find all the other core patterns within a bounding ball of radius r(τ).
Translate and explain these concepts.
This passage discusses the robustness of colossal patterns in frequent pattern mining and the notion of core patterns. The author introduces (d,τ)-robustness: a pattern α is (d,τ)-robust if up to d items can be removed from α and the resulting pattern is still a τ-core pattern of α. A (d,τ)-robust pattern has Ω(2^d) core patterns, and colossal patterns tend to have many core patterns, which makes them more robust and general. The author also defines a pattern distance, which measures how similar two patterns are in terms of their support sets. If two patterns are both core patterns of the same pattern, the distance between them is bounded by a ball of radius r(τ). Therefore, once one core pattern is identified, all the other core patterns can be found inside a bounding ball of radius r(τ).
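The two formulas above can be checked directly in code. This is a minimal sketch with my own function names: `dist` is the pattern distance (a Jaccard distance over the support sets), and `r` is the bounding-ball radius as a function of the core ratio τ.

```python
def dist(d_alpha, d_beta):
    """Pattern distance: 1 - |Da ∩ Db| / |Da ∪ Db| over the support sets."""
    return 1 - len(d_alpha & d_beta) / len(d_alpha | d_beta)

def r(tau):
    """Bounding-ball radius r(tau) = 1 - 1/(2/tau - 1) for two tau-core
    patterns of the same pattern."""
    return 1 - 1 / (2 / tau - 1)

# Two overlapping support sets: intersection 3, union 5, so Dist = 0.4.
print(dist({1, 2, 3, 4}, {2, 3, 4, 5}))
# With tau = 0.5 the ball radius is 1 - 1/3, about 0.667.
print(round(r(0.5), 3))
```

Note that r(τ) shrinks as τ grows toward 1: the tighter the core ratio, the closer together all core patterns of a pattern must lie.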
Design a DHNN to store a pattern, then use the designed DHNN to recall the corrupted pattern.
Sure, here's a brief overview of how you can design a discrete Hopfield neural network (DHNN) to store and recall patterns:
1. Choose the number of neurons N to equal the length of the bipolar (+1/−1) patterns you want to store. A DHNN is a single, fully connected recurrent layer; it has no separate input, hidden, or output layers.
2. Initialize the N×N weight matrix W to all zeros.
3. For each pattern ξ you want to store, apply the Hebbian learning rule: add ξᵢξⱼ to wᵢⱼ for every pair i ≠ j. This strengthens the connections between neurons that take the same value together.
4. Keep W symmetric (wᵢⱼ = wⱼᵢ) and zero the diagonal (wᵢᵢ = 0); the sum is commonly scaled by 1/N.
5. To recall a pattern, set the network state to the corrupted version of the pattern.
6. Update neurons asynchronously: pick a neuron i, compute its local field hᵢ = Σⱼ wᵢⱼ sⱼ, and set sᵢ = +1 if hᵢ ≥ 0, otherwise −1.
7. Repeat full sweeps over the neurons until no state changes. Each asynchronous update can only lower (or keep) the network energy, so the state settles into a fixed point.
8. The converged state is the recalled pattern. If the corruption is modest and the number of stored patterns stays well below the capacity limit (about 0.14·N for random patterns), the fixed point is the original stored pattern.
9. Repeat steps 5–8 for each corrupted pattern you want to recall.
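The Hebbian storage and asynchronous recall described above can be sketched as a small NumPy program. This is a minimal sketch under my own naming (`train_hopfield`, `recall`); it assumes bipolar ±1 patterns, a zeroed weight-matrix diagonal, and threshold updates in random order.

```python
import numpy as np

def train_hopfield(patterns):
    """Store bipolar (+1/-1) patterns via the Hebbian rule, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, max_iters=100):
    """Asynchronously update neurons until the state stops changing."""
    state = state.copy()
    for _ in range(max_iters):
        prev = state.copy()
        for i in np.random.permutation(len(state)):
            h = W[i] @ state          # local field of neuron i
            state[i] = 1 if h >= 0 else -1
        if np.array_equal(state, prev):
            break                     # fixed point reached
    return state

# Store one pattern, then recall it from a copy with one flipped bit.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
corrupted = pattern.copy()
corrupted[0] *= -1  # corrupt one bit
print(np.array_equal(recall(W, corrupted), pattern))  # True
```

With a single stored pattern and one flipped bit, every neuron's local field points back toward the stored pattern, so recall converges to the original regardless of update order.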
I hope this helps! Let me know if you have any other questions.